Dataset schema (one record per submission; the six review_* fields are parallel lists over the posts in each review thread):

  paper_id            string  (length 19–21)
  paper_title         string  (length 8–170)
  paper_abstract      string  (length 8–5.01k)
  paper_acceptance    string  (18 distinct values)
  meta_review         string  (length 29–10k)
  label               string  (3 distinct values)
  review_ids          list
  review_writers      list
  review_contents     list
  review_ratings      list
  review_confidences  list
  review_reply_tos    list
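A minimal loading sketch for records in this schema, assuming the dump is serialized as JSON Lines with the field names above; the file path "peer_reviews.jsonl" is hypothetical:

```python
import json

# Load the dump: one JSON object per line, with the fields listed in
# the schema above. The path is a placeholder, not an official name.
with open("peer_reviews.jsonl") as f:
    records = [json.loads(line) for line in f]

rec = records[0]
print(rec["paper_id"])          # e.g. "nips_2022_F0DowhX7_x"
print(rec["paper_acceptance"])  # e.g. "Accept"

# The review_* fields are parallel lists: index i of each list
# describes the same forum post.
for rid, writer, rating in zip(rec["review_ids"],
                               rec["review_writers"],
                               rec["review_ratings"]):
    print(rid, writer, rating)
```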
nips_2022_F0DowhX7_x
Structured Energy Network As a Loss
Belanger & McCallum (2016) and Gygli et al. (2017) have shown that an energy network can capture arbitrary dependencies amongst the output variables in structured prediction; however, their reliance on gradient-based inference (GBI) makes the inference slow and unstable. In this work, we propose Structured Energy As Loss (SEAL) to take advantage of the expressivity of energy networks without incurring the high inference cost. This is a novel learning framework that uses an energy network as a trainable loss function (loss-net) to train a separate neural network (task-net), which is then used to perform the inference through a forward pass. We establish SEAL as a general framework wherein various learning strategies, such as margin-based, regression, and noise-contrastive, can be employed to learn the parameters of loss-net. Through extensive evaluation on multi-label classification, semantic role labeling, and image segmentation, we demonstrate that SEAL provides various useful design choices, is faster at inference than GBI, and leads to significant performance gains over the baselines.
Accept
This work proposes using structured energy networks as loss functions for training feedforward networks to solve structured prediction tasks. The reviewers find the paper to be well written and easy to follow. The contribution is well positioned with respect to the literature, and the empirical results are strong. During the discussion period, the authors addressed the concerns of the most negative reviewer sufficiently for them to increase their score. I can therefore recommend accepting this paper.
train
[ "65o8ngimmILS", "uNwTS5_V31dc", "P6qLQSTAqU5", "rqMFXStOP3W", "ukOLYF-8FN", "2_GqG5IKBMU", "f1UIzvShO3P", "4hURIvEizhP", "F-Y6-Odh_R3" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your prompt response! We will reflect your suggestions into our camera ready.", " Thank you for the detailed responses. My questions were primarily based on the added complexity of GBI for model training and the provided clarifications are really useful. I see the significance of the proposed tech...
[ -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "uNwTS5_V31dc", "2_GqG5IKBMU", "f1UIzvShO3P", "f1UIzvShO3P", "F-Y6-Odh_R3", "4hURIvEizhP", "nips_2022_F0DowhX7_x", "nips_2022_F0DowhX7_x", "nips_2022_F0DowhX7_x" ]
nips_2022_Tq2XqINV1Jz
Moment Distributionally Robust Tree Structured Prediction
Structured prediction of tree-shaped objects is heavily studied under the name of syntactic dependency parsing. Current practice based on maximum likelihood or margin is either agnostic to or inconsistent with the evaluation loss. Risk minimization alleviates the discrepancy between training and test objectives but typically induces a non-convex problem. These approaches adopt explicit regularization to combat overfitting without probabilistic interpretation. We propose a moment-based distributionally robust optimization approach for tree structured prediction, where the worst-case expected loss over a set of distributions within bounded moment divergence from the empirical distribution is minimized. We develop efficient algorithms for arborescences and other variants of trees. We derive Fisher consistency, convergence rates and generalization bounds for our proposed method. We evaluate its empirical effectiveness on dependency parsing benchmarks.
Accept
This paper presents theoretical results for structured prediction over trees and empirical results on three syntactic dependency parsing datasets. All reviewers agree that the paper is well written and novel. The empirical gains are not huge, but the comparisons are done in a thorough and fair way. All reviewers suggest acceptance (even though some reviewers have low confidence) and the meta reviewer agrees as well.
train
[ "NdV_jDgvt5a", "h6HO1Zbu1JG", "2DU4o4jStt", "CYrc3wMt62O", "dHd8lkEYvmN", "EIvt203jlkg", "zOgux94-tRO", "jRPANm19fSA" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have improved my score according. I believe this paper will make a solid contribution in structured prediction. ", " **Representation Learning**. Incorporating automatic representation learning into our method is indeed highly desired because of its practical value in applications. We omitted the discussion o...
[ -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 1, 3, 4 ]
[ "2DU4o4jStt", "jRPANm19fSA", "jRPANm19fSA", "zOgux94-tRO", "EIvt203jlkg", "nips_2022_Tq2XqINV1Jz", "nips_2022_Tq2XqINV1Jz", "nips_2022_Tq2XqINV1Jz" ]
nips_2022_6LBfSduVg0N
Iso-Dream: Isolating Noncontrollable Visual Dynamics in World Models
World models learn the consequences of actions in vision-based interactive systems. However, in practical scenarios such as autonomous driving, there commonly exists noncontrollable dynamics independent of the action signals, making it difficult to learn effective world models. Naturally, therefore, we need to enable the world models to decouple the controllable and noncontrollable dynamics from the entangled spatiotemporal data. To this end, we present a reinforcement learning approach named Iso-Dream, which expands the Dream-to-Control framework in two aspects. First, the world model contains a three-branch neural architecture. By solving the inverse dynamics problem, it learns to factorize latent representations according to the responses to action signals. Second, in the process of behavior learning, we estimate the state values by rolling-out a sequence of noncontrollable states (less related to the actions) into the future and associate the current controllable state with them. In this way, the isolation of mixed dynamics can greatly facilitate long-horizon decision-making tasks in realistic scenes, such as avoiding potential future risks by predicting the movement of other vehicles in autonomous driving. Experiments show that Iso-Dream is effective in decoupling the mixed dynamics and remarkably outperforms existing approaches in a wide range of visual control and prediction domains.
Accept
This paper studies the problem of building world models that can decouple controllable and uncontrollable factors in the environment. The paper received reviews that generally tended towards acceptance. However, the reviewers had difficulty understanding some details and had concerns that the setup might not be the same across environments. The authors provided a rebuttal that addressed most of the reviewers' concerns. The paper was discussed and all the reviewers updated their reviews in the post-rebuttal phase. Reviewers generally agree that the paper should be accepted. AC agrees with the reviewers and suggests acceptance. However, the authors are urged to look at reviewers' feedback and incorporate their comments into the camera-ready.
train
[ "b_s55X3639", "nqJJEY7NaN", "Ma5puGEv7Cb", "V0-jQ6YduNCu", "jgjcBfo4hr5", "BnxKYWaPK-7", "o_H5hQ1xv41", "3KOkv2W8L4", "VZGu59zqWfM", "FOLSWEipuDH", "k10uJ-bHzdf", "W7IWEjSNXNH", "ESqvxu5aezq", "isbFq0NmQmQ" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their efforts in the rebuttal. They have clarified all of my concerns. I am raising my score to accept. I do think it'll be interesting to try unrolling the policy as well as the environment in two separate streams that see each other. Hopefully this results into an interesti...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "nqJJEY7NaN", "3KOkv2W8L4", "V0-jQ6YduNCu", "VZGu59zqWfM", "o_H5hQ1xv41", "nips_2022_6LBfSduVg0N", "isbFq0NmQmQ", "k10uJ-bHzdf", "FOLSWEipuDH", "ESqvxu5aezq", "W7IWEjSNXNH", "nips_2022_6LBfSduVg0N", "nips_2022_6LBfSduVg0N", "nips_2022_6LBfSduVg0N" ]
nips_2022_Mf3CwoSuvwv
Improving RENet by Introducing Modified Cross Attention for Few-Shot Classification
Few-shot classification is challenging since the goal is to classify unlabeled samples with very few labeled samples provided. It has been shown that cross attention helps generate more discriminative features for few-shot learning. This paper extends the idea and proposes two cross attention modules, namely the cross scaled attention (CSA) and the cross aligned attention (CAA). Specifically, CSA scales different feature maps to make them better matched, and CAA adopts the principal component analysis to further align features from different images. Experiments showed that both CSA and CAA achieve consistent improvements over state-of-the-art methods on four widely used few-shot classification benchmark datasets, miniImageNet, tieredImageNet, CIFAR-FS, and CUB-200-2011, while CSA is slightly faster and CAA achieves higher accuracies.
Reject
This paper studies few-shot classification, with a focus on extending RENet with cross scaled attention and cross aligned attention. The technical novelty is marginal, and the authors did not provide a response during the rebuttal. I recommend rejection.
train
[ "pNbOPscs_mC", "A4sG2fx29F", "GuPT4d0fzL", "kPylwBQLdI", "eDPXo5q7Ypq" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I can't quite tell if there has been a new updated draft of the paper. If the authors have not submitted a rebuttal, without new information, I would continue to maintain my current recommendation for rejection.", " This paper extends relational embedding network [18] by introducing two novel cross attention mo...
[ -1, 4, 4, 3, 3 ]
[ -1, 3, 5, 4, 5 ]
[ "GuPT4d0fzL", "nips_2022_Mf3CwoSuvwv", "nips_2022_Mf3CwoSuvwv", "nips_2022_Mf3CwoSuvwv", "nips_2022_Mf3CwoSuvwv" ]
nips_2022_gthKzdymDu2
On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels
We study the properties of various over-parameterized convolutional neural architectures through their respective Gaussian Process and Neural Tangent kernels. We prove that, with normalized multi-channel input and ReLU activation, the eigenfunctions of these kernels with the uniform measure are formed by products of spherical harmonics, defined over the channels of the different pixels. We next use hierarchical factorizable kernels to bound their respective eigenvalues. We show that the eigenvalues decay polynomially, quantify the rate of decay, and derive measures that reflect the composition of hierarchical features in these networks. Our theory provides a concrete quantitative characterization of the role of locality and hierarchy in the inductive bias of over-parameterized convolutional network architectures.
Accept
The paper received positive and borderline reviews. A few technical concerns were raised, but the rebuttal addressed them convincingly. There is a clear consensus that the paper makes a valuable theoretical contribution and that it should be accepted. The AC agrees with the reviewers' assessment and follows their recommendation.
train
[ "SKarcCcQY1B", "P-cgEDHkg0", "icyA7PCU-R1", "PZuRBWqch4", "X0IFk6DLZgE", "EPywTgO8Y7k", "e5KR6ekG_2H", "7HdBKLHDq9JR", "xf--TCFTHRZ", "FB4oYww5Ie", "5xuQdrp9CrF", "VspdkEb7jVy", "oUAIExBJqkp", "gZranMuUzME", "lmHPh0OA1Hp", "RnT1Hvvu4KI", "eNkiteZbse", "-K0EnMxnz0T", "_EL8jQzEZXI"...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " The theorems with the appropriate global scale factors ($c_L,\\tilde c_L,c_1,c_2,c_3,c_4$) are valid for all $k$. However, when restricted to larger $|k|$, due to these scale factors, the bounds become tighter. \n", " If I understand you reply, the results in Theorem 3.4 and 3.5 only hold for large |k| and not ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "P-cgEDHkg0", "7HdBKLHDq9JR", "xf--TCFTHRZ", "X0IFk6DLZgE", "EPywTgO8Y7k", "e5KR6ekG_2H", "gZranMuUzME", "FB4oYww5Ie", "5xuQdrp9CrF", "VspdkEb7jVy", "oUAIExBJqkp", "_EL8jQzEZXI", "-K0EnMxnz0T", "eNkiteZbse", "RnT1Hvvu4KI", "nips_2022_gthKzdymDu2", "nips_2022_gthKzdymDu2", "nips_202...
nips_2022_KCN0ZRqxcDm
On Robust Multiclass Learnability
This work analyzes the robust learning problem in the multiclass setting. Under the framework of Probably Approximately Correct (PAC) learning, we first show that the graph dimension and the Natarajan dimension, which characterize the standard multiclass learnability, are no longer applicable in robust learning problem. We then generalize these notions to the robust learning setting, denoted as the adversarial graph dimension (AG-dimension) and the adversarial Natarajan dimension (AN-dimension). Upper and lower bounds of the sample complexity of robust multiclass learning are rigorously derived based on the AG-dimension and AN-dimension, respectively. Moreover, we calculate the AG-dimension and AN-dimension of the class of linear multiclass predictors, and show that the graph (Natarajan) dimension is of the same order as the AG(AN)-dimension. Finally, we prove that the AG-dimension and AN-dimension are not equivalent.
Accept
Exceptional contribution to multi-class theory
train
[ "l95vWn3op1", "lNfDqtRL-sQ", "mtO5fV6LPhX", "8zkoA-hcN_S", "EH0GXkzFIgfh", "Rgi6YqZtb1I", "QIwWlvpA30l", "VV9lMfmgPYwE", "6M0lYqkNmEn", "ZHKxJzu-AsU", "AD9zp1O6N0q", "1h8MgxH-kf-" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer mVKM, thanks for your comments. Following your suggestions, we have tried our best to improve the presentation in revision. Would you please take a look about our new Subsection 1.1 in revision and provide some suggestions, if possible?Many thanks for your time.", " We appreciate the feedback of r...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 7, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 5 ]
[ "mtO5fV6LPhX", "mtO5fV6LPhX", "VV9lMfmgPYwE", "6M0lYqkNmEn", "1h8MgxH-kf-", "AD9zp1O6N0q", "ZHKxJzu-AsU", "6M0lYqkNmEn", "nips_2022_KCN0ZRqxcDm", "nips_2022_KCN0ZRqxcDm", "nips_2022_KCN0ZRqxcDm", "nips_2022_KCN0ZRqxcDm" ]
nips_2022_SGQeKZ126y-
Formulating Robustness Against Unforeseen Attacks
Existing defenses against adversarial examples such as adversarial training typically assume that the adversary will conform to a specific or known threat model, such as $\ell_p$ perturbations within a fixed budget. In this paper, we focus on the scenario where there is a mismatch in the threat model assumed by the defense during training, and the actual capabilities of the adversary at test time. We ask the question: if the learner trains against a specific ``source" threat model, when can we expect robustness to generalize to a stronger unknown ``target" threat model during test-time? Our key contribution is to formally define the problem of learning and generalization with an unforeseen adversary, which helps us reason about the increase in adversarial risk from the conventional perspective of a known adversary. Applying our framework, we derive a generalization bound which relates the generalization gap between source and target threat models to variation of the feature extractor, which measures the expected maximum difference between extracted features across a given threat model. Based on our generalization bound, we propose variation regularization (VR) which reduces variation of the feature extractor across the source threat model during training. We empirically demonstrate that using VR can lead to improved generalization to unforeseen attacks during test-time, and combining VR with perceptual adversarial training (Laidlaw et al., 2021) achieves state-of-the-art robustness on unforeseen attacks. Our code is publicly available at https://github.com/inspire-group/variation-regularization.
Accept
This paper seeks to theoretically formulate the problem of learning a classifier that is robust against unseen attacks. Reviewers generally liked the way the authors formulated this understudied problem in adversarial robustness and found the paper well written. There were some questions and concerns regarding the usefulness of these results in reasonable learning problems, as well as its numerical analysis. All things considered, I think the paper is above the accept threshold.
train
[ "ztl4jPFv1K", "8oWy8GBlqB", "imKIc1URpk", "WOpz9PVq1yH", "0R7wd9hmHH", "80SwkalnKENd", "ZGQbyo8skKS", "SKhnRkBJMgN", "3-hc0pa8BX7", "PSiz06aZVmO", "nDdwpkBD4cG", "NE1DeLfDD_K", "OFOzhu_d1K", "sdXadK2VwN0", "X9wdrvI0Qm8", "J1Md1tejhFy" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for these answers. I will increase my score from 4 to 5.", " Thank you for the follow up questions. Please see our responses below:\n\n1. Clean accuracy trend: In Appendix Table 3, there is no entry for (CIFAR-10, ResNet-18, $\\ell_{\\infty}$, $\\lambda=1.0$). The highest value provided is for $\\lambd...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "8oWy8GBlqB", "imKIc1URpk", "3-hc0pa8BX7", "X9wdrvI0Qm8", "SKhnRkBJMgN", "sdXadK2VwN0", "SKhnRkBJMgN", "J1Md1tejhFy", "PSiz06aZVmO", "X9wdrvI0Qm8", "NE1DeLfDD_K", "OFOzhu_d1K", "sdXadK2VwN0", "nips_2022_SGQeKZ126y-", "nips_2022_SGQeKZ126y-", "nips_2022_SGQeKZ126y-" ]
nips_2022_Nlsr4DepNt
Improving Barely Supervised Learning by Discriminating Unlabeled Samples with Super-Class
In semi-supervised learning (SSL), a common practice is to learn consistent information from unlabeled data and discriminative information from labeled data to ensure both the immutability and the separability of the classification model. Existing SSL methods suffer from failures in barely-supervised learning (BSL), where only one or two labels per class are available, as the insufficient labels make the discriminative information difficult or even infeasible to learn. To bridge this gap, we investigate a simple yet effective way to leverage unlabeled samples for discriminative learning, and propose a novel discriminative information learning module to benefit model training. Specifically, we formulate the learning objective of discriminative information at the super-class level and dynamically assign different classes into different super-classes based on model performance improvement. On top of this on-the-fly process, we further propose a distribution-based loss to learn discriminative information by utilizing the similarity relationship between samples and super-classes. It encourages the unlabeled samples to stay closer to the distribution of their corresponding super-class than to those of others. Such a constraint is softer than the direct assignment of pseudo labels, while the latter could be very noisy in BSL. We compare our method with state-of-the-art SSL and BSL methods through extensive experiments on standard SSL benchmarks. Our method can achieve superior results, e.g., an average accuracy of 76.76\% on CIFAR-10 with merely 1 label per class.
Accept
This paper proposes a simple but effective method for barely-supervised learning, where labels are scarce, for instance, 1 example per class. The method is based on k-means clustering of the unlabeled data to form super-classes and relies on a contrastive-type loss to enforce discriminativeness. The proposed method improves class separation in the presence of scarce labels and significantly improves the overall performance. Though the proposed method is rather heuristic, the idea to address barely supervised learning is novel and interesting. The superior performance is demonstrated via strong empirical studies. I would recommend acceptance of this paper given the novelty and simplicity of the idea and the strong empirical evidence.
train
[ "t9B0V8XtpR", "xq4XZrwZq8", "IgTTodV1xEl", "Td6-K3s8tKl", "aoPwDqGiaz2", "oezDQO3_U8H", "NOtFDkUaIZy", "L-YXdhOavn2", "e6KoituLoUH", "2u7O85vm2cY", "owt5AA7QS-", "h9dHwfOQ2vF", "HOqtMDPY__f", "igVKNu4fTpO" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nThanks again for your valuable comments! We have carefully responded to all your concerns.\n\n In our work, we argue the two main factors affecting the performance of BSL/SSL, immutability and separability. Especially, we highlight the importance of discriminative information for BSL scenarios ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "nips_2022_Nlsr4DepNt", "IgTTodV1xEl", "2u7O85vm2cY", "HOqtMDPY__f", "HOqtMDPY__f", "h9dHwfOQ2vF", "h9dHwfOQ2vF", "igVKNu4fTpO", "igVKNu4fTpO", "owt5AA7QS-", "nips_2022_Nlsr4DepNt", "nips_2022_Nlsr4DepNt", "nips_2022_Nlsr4DepNt", "nips_2022_Nlsr4DepNt" ]
nips_2022_458a8dN8L6
Alleviating Adversarial Attacks on Variational Autoencoders with MCMC
Variational autoencoders (VAEs) are latent variable models that can generate complex objects and provide meaningful latent representations. Moreover, they can be further used in downstream tasks such as classification. As previous work has shown, one can easily fool VAEs to produce unexpected latent representations and reconstructions for a visually slightly modified input. Here, we examine several previously proposed objective functions for adversarial attack construction and present a solution to alleviate the effect of these attacks. Our method utilizes the Markov Chain Monte Carlo (MCMC) technique in the inference step, which we motivate with a theoretical analysis. Thus, we do not incorporate any extra costs during training and the performance on non-attacked inputs is not decreased. We validate our approach on a variety of datasets (MNIST, Fashion MNIST, Color MNIST, CelebA) and VAE configurations ($\beta$-VAE, NVAE, $\beta$-TCVAE), and show that our approach consistently improves the model robustness to adversarial attacks.
Accept
This paper received generally positive reviews, with all reviewers backing acceptance (albeit typically quite weakly) after discussion with the authors. The paper was praised on a number of fronts such as novelty, potential significance, clarity, and experimental evaluation. Though there were a few high-level concerns raised about the work by the reviewers (at least none that were not successfully explained away by the authors), there were some notable lower-level technical concerns, such as the drop in performance for non-adversarial inputs and increased inference time. My own view of the paper is that the underlying ideas and core contributions of the work are very strong and clearly worthy of acceptance, but there are also quite a large number of technical and lower-level issues that need to be sorted out for publication (as laid out in my comment to the authors). I do not feel that any of these is individually especially serious, but together they do represent quite a large body of things that would need addressing for the camera ready, and one could argue that they would collectively represent too large of an update to accept the work. On balance, my recommendation is to give the authors the benefit of the doubt and accept the work, as the underlying contribution is clearly solid and there are no fundamental flaws. Most importantly, I think all of the issues raised, though numerous, are addressable for the camera ready, and the authors seem to be already making good progress towards this. I also feel that the large number of low-level issues raised is partly due to the relatively high level of scrutiny the paper has undergone, rather than being purely a reflection of the number of issues actually present. I do hope, though, that the authors take the concerns raised seriously and make appropriate updates for the camera ready. Additional comments beyond those given in the previous comment to the authors: - The introduction should make it much clearer that you are utilising the decoder to help protect the encoder. For me, the key to what is going on here is that the encoder is what is used for downstream tasks and is also the thing that is vulnerable to attack, because it is applied before the randomness of the embedding. Using the more robust decoder to help protect the encoder is a really neat idea, but I do not think this is properly emphasised at present. Without making this clear, I think the approach is quite confusing: the first impression I got when you say that you use MCMC to “fix the latent code” is that you are utilising forbidden information about the original input, rather than utilising the fact that we essentially have access to two models. - The first page of the paper has a lot of grammatical errors and some poorly worded sentences; it needs a few low-level edit passes. This is not such an issue later in the paper, but there are still some later errors to correct.
test
[ "_PGdrZjlrQR", "TOG6IyT-4Cu", "-jXVVdeu6N", "bUNd18LZUt7", "F7HNN2jhJN", "3A30SP0JotU", "wFCWNv2FKpV", "QLDGaaeEo3A", "J3hYZUp3QxE", "eXxrc3MD4L", "4KTXuJNe9kj", "V7ODVy6e8mC", "wNYU9z3NrCi", "iN2FLljxw3v", "DMtqk7loeZx", "txZgcR0LBd3", "KKrUUs1MVmX", "lHvGyzuIgAz", "gl3B-jIFOXD"...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " Thanks for the follow up and sorry again for giving you this on such late notice.\n\nYour comments all sound very reasonable and this seems like good progress to get everything sorted, I appreciate more time will be needed to do everything fully and I'm pretty impressed what you managed to do this quickly!\n\nI a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "TOG6IyT-4Cu", "-jXVVdeu6N", "bUNd18LZUt7", "F7HNN2jhJN", "nips_2022_458a8dN8L6", "txZgcR0LBd3", "QLDGaaeEo3A", "4KTXuJNe9kj", "V7ODVy6e8mC", "wNYU9z3NrCi", "xXEnZ3bTXqb", "KKrUUs1MVmX", "iN2FLljxw3v", "DMtqk7loeZx", "RduP2JlXEz", "y1i19AhjjR0", "lHvGyzuIgAz", "gC-3P45k6n2", "xXE...
nips_2022_d_m7OKOmPiM
MORA: Improving Ensemble Robustness Evaluation with Model Reweighing Attack
Adversarial attacks can deceive neural networks by adding tiny perturbations to their input data. Ensemble defenses, which are trained to minimize attack transferability among sub-models, offer a promising research direction to improve robustness against such attacks while maintaining a high accuracy on natural inputs. We discover, however, that recent state-of-the-art (SOTA) adversarial attack strategies cannot reliably evaluate ensemble defenses, sizeably overestimating their robustness. This paper identifies the two factors that contribute to this behavior. First, these defenses form ensembles that are notably difficult for existing gradient-based methods to attack, due to gradient obfuscation. Second, ensemble defenses diversify sub-model gradients, presenting a challenge to defeat all sub-models simultaneously: simply summing their contributions may counteract the overall attack objective; yet, we observe that the ensemble may still be fooled despite most sub-models being correct. We therefore introduce MORA, a model-reweighing attack to steer adversarial example synthesis by reweighing the importance of sub-model gradients. MORA finds that recent ensemble defenses all exhibit varying degrees of overestimated robustness. Compared against recent SOTA white-box attacks, MORA converges orders of magnitude faster while achieving higher attack success rates across all ensemble models examined with three different ensemble modes (i.e., ensembling by either softmax, voting, or logits). In particular, most ensemble defenses exhibit near or exactly $0\%$ robustness against MORA with $\ell^\infty$ perturbation within $0.02$ on CIFAR-10, and $0.01$ on CIFAR-100. We make MORA open source with reproducible results and pre-trained models, and provide a leaderboard of ensemble defenses under various attack strategies.
Accept
This paper introduces a new way to generate adversarial examples for defenses that work by ensembling over many different independent predictors. The reviewers all liked this paper and believe that the results are a useful improvement on prior work. While the margin of improvement is not massive (and the gap shrinks as perturbations get larger), the improvement appears correct and is well explained.
train
[ "WRvYN-zLDS", "zWZHLemTkm-", "JvlB6_Bu5b", "CcHewFpcqcL", "mJS3rHWReJN", "KqqPu1WZsBmA", "tgG7VacNe1y", "cLRo0_lTZBu", "vtVKHeXsS8j", "PP_FITI2-sK", "Bvj3xiGatd3", "cmLp--p1TZ0", "nWbv6DLaw8", "Nb5dGlD9trU", "Isgd4zy776z" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Please let us know if you have any unresolved concerns, and we would be happy to answer them in the remaining next few hours before discussion period ends to further strengthen our submission.", " Thanks for your rebuttals. It addresses my concerns. Therefore, I raise my score to 5 accordingly.", " Thank you ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "CcHewFpcqcL", "cLRo0_lTZBu", "mJS3rHWReJN", "nips_2022_d_m7OKOmPiM", "vtVKHeXsS8j", "Bvj3xiGatd3", "Isgd4zy776z", "Isgd4zy776z", "Nb5dGlD9trU", "nWbv6DLaw8", "cmLp--p1TZ0", "nips_2022_d_m7OKOmPiM", "nips_2022_d_m7OKOmPiM", "nips_2022_d_m7OKOmPiM", "nips_2022_d_m7OKOmPiM" ]
nips_2022_prQT0gN81oG
Shadow Knowledge Distillation: Bridging Offline and Online Knowledge Transfer
Knowledge distillation can be generally divided into offline and online categories according to whether the teacher model is pre-trained and persistent during the distillation process. Offline distillation can employ existing models yet always demonstrates inferior performance compared to online ones. In this paper, we first empirically show that the essential factor for their performance gap lies in the reversed distillation from student to teacher, rather than the training fashion. Offline distillation can achieve competitive performance gains by fine-tuning the pre-trained teacher to adapt to the student with such reversed distillation. However, this fine-tuning process still costs a large training budget. To alleviate this dilemma, we propose SHAKE, a simple yet effective SHAdow KnowlEdge transfer framework to bridge offline and online distillation, which trades off accuracy against efficiency. Specifically, we build an extra shadow head on the backbone to mimic the predictions of the pre-trained teacher as its shadow. Then, this shadow head is leveraged as a proxy teacher to perform bidirectional distillation with the student on the fly. In this way, SHAKE not only updates this student-aware proxy teacher with the knowledge of the pre-trained model, but also greatly reduces the cost of the augmented reversed distillation. Extensive experiments on classification and object detection tasks demonstrate that our technique achieves state-of-the-art results with different CNNs and Vision Transformer models. Additionally, our method shows strong compatibility with multi-teacher and augmentation strategies by gaining additional performance improvement. Code is made publicly available at https://lilujunai.github.io/SHAKE/.
Accept
This paper addresses an interesting point: reversed distillation from student to teacher matters for KD performance. Based on this finding, the authors propose SHAKE to get the best of both the offline and online KD worlds. The technical contribution is significant to the KD community, and the authors also provide comprehensive SOTA results on various datasets and model architectures. During the rebuttal, the authors addressed most of the reviewers' concerns. The AC appreciates the discussions and recommends acceptance.
train
[ "bFHZsxDcSIR", "zAKmO6OXWM", "WrP1ltHWApN", "suHOn32exd", "yrFEagEoLd5", "deHeCDM8qPS", "9_keagxHdi", "o9RSyH-luBV", "eRQzRCxFynG", "LjTzSn4zTR", "mpW07aPyn30", "7hSAZqZpDN", "rpmS2WdIcxL", "0dKax0oazeD", "DzRMT_eE4CK", "Zxp1HuzvRU", "6gkFITzBada", "TqXuobTIlta" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate the reviewer for the thorough and constructive comments. We are really happy that the novelty of the proposed method was recognized by the reviewer. To address the concerns and requests from the reviewer, we are carefully improving the explanations, claims, experiments, and discussions.\n\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "6gkFITzBada", "WrP1ltHWApN", "rpmS2WdIcxL", "TqXuobTIlta", "TqXuobTIlta", "6gkFITzBada", "o9RSyH-luBV", "eRQzRCxFynG", "Zxp1HuzvRU", "6gkFITzBada", "TqXuobTIlta", "Zxp1HuzvRU", "TqXuobTIlta", "6gkFITzBada", "Zxp1HuzvRU", "nips_2022_prQT0gN81oG", "nips_2022_prQT0gN81oG", "nips_2022...
nips_2022_p9zeOtKQXKs
A Theoretical Understanding of Gradient Bias in Meta-Reinforcement Learning
Gradient-based Meta-RL (GMRL) refers to methods that maintain two-level optimisation procedures wherein the outer-loop meta-learner guides the inner-loop gradient-based reinforcement learner to achieve fast adaptations. In this paper, we develop a unified framework that describes variations of GMRL algorithms and points out that existing stochastic meta-gradient estimators adopted by GMRL are actually \textbf{biased}. Such meta-gradient bias comes from two sources: 1) the compositional bias incurred by the two-level problem structure, which has an upper bound of $\mathcal{O}\big(K\alpha^{K}\hat{\sigma}_{\text{In}}|\tau|^{-0.5}\big)$ \emph{w.r.t.} inner-loop update step $K$, learning rate $\alpha$, estimate variance $\hat{\sigma}^{2}_{\text{In}}$ and sample size $|\tau|$, and 2) the multi-step Hessian estimation bias $\hat{\Delta}_{H}$ due to the use of autodiff, which has a polynomial impact $\mathcal{O}\big((K-1)(\hat{\Delta}_{H})^{K-1}\big)$ on the meta-gradient bias. We study tabular MDPs empirically and offer quantitative evidence that supports our theoretical findings on existing stochastic meta-gradient estimators. Furthermore, we conduct experiments on the Iterated Prisoner's Dilemma and Atari games to show how other methods, such as off-policy learning and low-bias estimators, can help fix the gradient bias for GMRL algorithms in general.
Accept
All reviewers are positive about this paper and recommend either weak (3x) or borderline (1x) accept. We had some good discussions, through which some of the reviewers (RjbJ and H2bz) decided to increase their initial scores. The reviewers consider the topic relevant and find the paper well written, with intuitive and novel theoretical results and the potential to guide future algorithmic design for meta-RL. I believe this is a good paper, and I recommend its acceptance.
train
[ "vDm2_pm0Wom", "WXfI11Bjb8U", "ElzoWIH5crh", "73yHcitc7d_", "cyYHw72PFbg", "akkHaa09Yg", "HTstJ5BnRbF", "KOBv89uI4vR", "oCDTNw-wu0", "jnNGMhgoy1F", "RK1VB-LYJXL", "rcmK_dlq2Jl", "QzcZ_Uufixj", "8pG3WGH48p3", "mGZSciw8jrE", "ag1Rmd5uTa3", "WmlKCbuRv1v", "Zobs-aLt9cd", "aRdu14kdp5q...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " I appreciate the authors' response, particularly the added clarifications regarding the specificity of the results to RL settings, rather than general gradient-based supervised meta-learning, as well as the discussion of the potentially limited applicability of the assumptions used in Appendix D. However, I still...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 4 ]
[ "mGZSciw8jrE", "73yHcitc7d_", "akkHaa09Yg", "cyYHw72PFbg", "KOBv89uI4vR", "QzcZ_Uufixj", "K5Sml_K8vqs", "WmlKCbuRv1v", "nips_2022_p9zeOtKQXKs", "RK1VB-LYJXL", "ag1Rmd5uTa3", "QzcZ_Uufixj", "K5Sml_K8vqs", "aRdu14kdp5q", "Zobs-aLt9cd", "WmlKCbuRv1v", "nips_2022_p9zeOtKQXKs", "nips_20...
nips_2022_SPoiDLr3WE7
EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning. Despite their extraordinary predictive accuracy, existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs, rendering these models vulnerable to graph structural attacks and limiting their capacity to generalize to graphs of varied homophily levels. Although many methods have been proposed to improve the robustness of GNN models, most of these techniques are restricted to the spatial domain and employ complicated defense mechanisms, such as learning new graph structures or calculating edge attentions. In this paper, we study the problem of designing simple and robust GNN models in the spectral domain. We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter. Based on our theoretical analysis in both spatial and spectral domains, we demonstrate that EvenNet outperforms full-order models in generalizing across homophilic and heterophilic graphs, implying that ignoring odd-hop neighbors improves the robustness of GNNs. We conduct experiments on both synthetic and real-world datasets to demonstrate the effectiveness of EvenNet. Notably, EvenNet outperforms existing defense models against structural attacks without introducing additional computational costs and maintains competitiveness in traditional node classification tasks on homophilic and heterophilic graphs.
Accept
This paper proposes a simple and effective idea, using only even-order neighbors, to improve the robustness and generalization ability of spectral GNNs. It is based on the intuition that a friend's friend is a friend, and an enemy's enemy is also a friend; thus, using only even-order neighbors improves generalization across different homophily/heterophily levels. Considering that spectral GNNs' expressive power has recently been shown to saturate easily, analyzing their generalization power is of great importance and a natural next step. All the reviewers agree on the value of the paper. However, they also point out several issues, such as concerns about the theoretical analysis. I encourage the authors to further polish the paper and improve the theoretical analysis in the camera-ready version.
test
[ "VMPUJ0Weo28", "b3RA5I07Sxl", "BOrnc8B1RDB", "7ukTZWpredg", "pOPqy8fA8aE", "o8YzL8Ptkfw", "lmw5pn1iiSl", "tZ6_HVK5kd", "mYB2AjeDi-U", "plFZa5tD7ZaG", "isVuhxQouDY", "LaeWCixnMoa", "jgEgU3hTk6D", "bgAqr7SAui0", "YTbSA8yGTFz", "Mo4qeozX3k", "Aztkjm-L9E", "qvKNnwwy8es" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comment about the genetic algorithms!\nWe will limit the scope of our method to non-targeted attacks in the final version.\nWe are also trying to run RL-S2V, which could take some time in training.\nIf we are able to report the results on time, we will add them to the dropbox link. ", " The RL-S...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "b3RA5I07Sxl", "7ukTZWpredg", "nips_2022_SPoiDLr3WE7", "pOPqy8fA8aE", "o8YzL8Ptkfw", "tZ6_HVK5kd", "tZ6_HVK5kd", "jgEgU3hTk6D", "plFZa5tD7ZaG", "isVuhxQouDY", "qvKNnwwy8es", "qvKNnwwy8es", "Mo4qeozX3k", "Mo4qeozX3k", "Aztkjm-L9E", "nips_2022_SPoiDLr3WE7", "nips_2022_SPoiDLr3WE7", "...
nips_2022_LPB2BFZvncQ
An Information-theoretic Perspective of Hierarchical Clustering
A combinatorial cost function for hierarchical clustering was introduced by Dasgupta \cite{dasgupta2016cost}. It has received great attention, and several new cost functions from a similar combinatorial perspective have been proposed. In this paper, we investigate hierarchical clustering from the \emph{information-theoretic} perspective and formulate a new objective function. We also establish the relationship between these two perspectives. On the algorithmic side, we present two algorithms for expander-like and well-clustered cardinality weighted graphs, respectively, and show that both of them achieve an $O(1)$-approximation for our new objective function. For practical use, we consider the non-binary hierarchical clustering problem. We get rid of the traditional top-down and bottom-up frameworks, and present a new one. Our new framework stratifies the sparsest level of a cluster tree recursively, guided by our objective function. Our algorithm, called HCSE, outputs a $k$-level cluster tree via an interpretable mechanism that chooses $k$ automatically without any hyper-parameter. Our experimental results on synthetic datasets show that HCSE is superior at finding the intrinsic number of hierarchies, and the results on real datasets show that HCSE also achieves competitive costs compared to the popular non-binary hierarchical clustering algorithms LOUVAIN and HLP.
Reject
The paper introduces a new cost function inspired by structural information theory for hierarchical clustering of graphs. The paper shows the relationship of the new cost function with previous work and proposes two algorithms to obtain a constant-factor approximation for expander-like and for well-clustered graphs. The paper presents some new ideas and some interesting results, although additional work is needed and the paper is not ready for publication in its current state. The main weaknesses highlighted by the committee are: - the new cost function is not well motivated and additional discussion is needed - the experiments are not too convincing - the current presentation should be substantially improved before acceptance
train
[ "nviuzm3J_Tl", "mhUCQrwl60B", "NnU45M61FM1", "BDRkPAR42T6", "_m77Wsxc_-c", "DeMBS31asVV", "HO_5N4IIJ2C", "zIcUhkrmj6", "Q8zcGzH5QEP", "rx-7SAgpHKS", "u2i4fXw0P8P", "SeHXtlj5vD0" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the explanations. I agree that the algorithmic aspects are somewhat interpretable. But the statement that the cost can be simply exchanged by some other cost bothers me a bit: If this is done, then how are the first and second part of the paper connected? (I think a similar concern was already raised b...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 7, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "NnU45M61FM1", "_m77Wsxc_-c", "BDRkPAR42T6", "HO_5N4IIJ2C", "SeHXtlj5vD0", "rx-7SAgpHKS", "u2i4fXw0P8P", "Q8zcGzH5QEP", "nips_2022_LPB2BFZvncQ", "nips_2022_LPB2BFZvncQ", "nips_2022_LPB2BFZvncQ", "nips_2022_LPB2BFZvncQ" ]
nips_2022_2bE4He5a9eQ
Generalization Bounds with Minimal Dependency on Hypothesis Class via Distributionally Robust Optimization
Established approaches to obtain generalization bounds in data-driven optimization and machine learning mostly build on solutions from empirical risk minimization (ERM), which depend crucially on the functional complexity of the hypothesis class. In this paper, we present an alternate route to obtain these bounds on the solution from distributionally robust optimization (DRO), a recent data-driven optimization framework based on worst-case analysis and the notion of ambiguity set to capture statistical uncertainty. In contrast to the hypothesis class complexity in ERM, our DRO bounds depend on the ambiguity set geometry and its compatibility with the true loss function. Notably, when using statistical distances such as maximum mean discrepancy, Wasserstein distance, or $\phi$-divergence in the DRO, our analysis implies generalization bounds whose dependence on the hypothesis class appears the minimal possible: The bound depends solely on the true loss function, independent of any other candidates in the hypothesis class. To our best knowledge, it is the first generalization bound of this type in the literature, and we hope our findings can open the door for a better understanding of DRO, especially its benefits on loss minimization and other machine learning applications.
Accept
The paper derives generalization bounds from distributionally robust optimization (DRO). Unlike standard bounds, e.g., for ERM, that depend on the complexity of the function space, the presented DRO bounds depend on the ambiguity set geometry and its compatibility with the true loss function. The authors instantiate their DRO bounds through the maximum mean discrepancy DRO distance, the interesting aspect being the sole dependence on the loss of the best-in-class hypothesis. This is an interesting viewpoint that may spur further investigation in the theory of generalization bounds.
train
[ "UzclRCM2tZx", "UyoE2-Ilf-4", "tt6HxXBBkqf", "YMhopoBVhluO", "3iCx6wEya-X", "gKHfvpNOPCR", "s6865uKQJwQ", "se6zW_VW9QA", "_MHNx3488JM", "VXyTwK9mwQG", "0d_MAAtvIU", "6mohadG_F-", "dYYXmiqzXYA", "sGBTQUelIwu", "1MtB0IxorwS", "-hP17zh9lWm" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank all the reviewers again for their valuable feedback. It is encouraging to hear the positive opinions, especially from Reviewers J945 and xNyC who believe our revisions and explanations resolve their main concerns. We also thank Reviewers fMCk and xNyC for raising our scores. We will continue to...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 5 ]
[ "nips_2022_2bE4He5a9eQ", "6mohadG_F-", "_MHNx3488JM", "se6zW_VW9QA", "s6865uKQJwQ", "s6865uKQJwQ", "-hP17zh9lWm", "1MtB0IxorwS", "dYYXmiqzXYA", "dYYXmiqzXYA", "dYYXmiqzXYA", "dYYXmiqzXYA", "sGBTQUelIwu", "nips_2022_2bE4He5a9eQ", "nips_2022_2bE4He5a9eQ", "nips_2022_2bE4He5a9eQ" ]
nips_2022__WQ6XkVP23f
PALBERT: Teaching ALBERT to Ponder
Currently, pre-trained models can be considered the default choice for a wide range of NLP tasks. Despite their SoTA results, there is practical evidence that these models may require a different number of computing layers for different input sequences, since evaluating all layers leads to overconfidence in wrong predictions (namely overthinking). This problem can potentially be solved by implementing adaptive computation time approaches, which were first designed to improve inference speed. The recently proposed PonderNet may be a promising solution for performing an early exit by treating the exit layer's index as a latent variable. However, the originally proposed exit criterion, relying on sampling from the trained posterior distribution over the probability of exiting at the $i$-th layer, introduces major variance in exit layer indices, significantly reducing the resulting model's performance. In this paper, we propose improving PonderNet with a novel deterministic Q-exit criterion and a revisited model architecture. We adapted the proposed mechanism to ALBERT and RoBERTa and compared it with recent methods for performing an early exit. We observed that the proposed changes can be considered significant improvements on the original PonderNet architecture and outperform PABEE on a wide range of GLUE tasks. In addition, we also performed an in-depth ablation study of the proposed architecture to further understand Lambda layers and their performance.
Accept
This is an interesting paper that improved during the discussion, and in my opinion the authors addressed many of the suggestions and concerns from all of the reviewers (the two reviewers who engaged after significant discussion were supportive of acceptance). I personally think that the paper will be an interesting addition to NeurIPS, but it could be stronger if experiments were performed beyond just the GLUE/SuperGLUE setup and considered other complex inference tasks such as question answering (e.g. natural questions).
train
[ "W_QzFpqI5m", "-QtftR9-v5b", "9dpSV7AgNtY", "UFGwr23gb2Tk", "967MRH75eSg", "RbLA44bxQtr", "3z7JN35wK1o", "S3SB_T8wGo1", "7Jwe45xXGo", "hY2El8hJMOD", "uqmMABKOg2e" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have addressed the weaknesses discussed in the review and performed experiments with the most important ones. We uploaded new results to the rebuttal revision of the paper and kindly ask reviewers to consider them.\n\n1) Reviewers id4o and yzdP were concerned about the importance of the deterministic exit crit...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "nips_2022__WQ6XkVP23f", "9dpSV7AgNtY", "967MRH75eSg", "uqmMABKOg2e", "hY2El8hJMOD", "7Jwe45xXGo", "S3SB_T8wGo1", "nips_2022__WQ6XkVP23f", "nips_2022__WQ6XkVP23f", "nips_2022__WQ6XkVP23f", "nips_2022__WQ6XkVP23f" ]
nips_2022_-uxUxmlr3qT
Provable General Function Class Representation Learning in Multitask Bandits and MDP
While multitask representation learning has become a popular approach in reinforcement learning (RL) to boost sample efficiency, the theoretical understanding of why and how it works is still limited. Most previous analytical works could only assume that the representation function is already known to the agent or comes from a linear function class, since analyzing general function class representations encounters non-trivial technical obstacles, such as generalization guarantees and the formulation of confidence bounds in abstract function spaces. However, linear-case analysis heavily relies on the particularity of the linear function class, while real-world practice usually adopts general non-linear representation functions like neural networks. This significantly reduces its applicability. In this work, we extend the analysis to general function class representations. Specifically, we consider an agent playing $M$ contextual bandits (or MDPs) concurrently and extracting a shared representation function $\phi$ from a specific function class $\Phi$ using our proposed Generalized Functional Upper Confidence Bound algorithm (GFUCB). We theoretically validate the benefit of multitask representation learning within a general function class for bandits and linear MDPs for the first time. Lastly, we conduct experiments to demonstrate the effectiveness of our algorithm with neural net representations.
Accept
This paper investigates the problem of multitask representation learning with a general function class in contextual bandits and MDPs. The assumption is that functions of interest (e.g. the average reward in the bandit setting) share, among various tasks, a common representation. For example, in contextual bandits, this representation corresponds to that used in contextual linear bandits, but where the feature maps are the same for the various tasks. Here we need to learn this feature map along with the remaining parameters. Compared to previous works, see Hu et al., the possible feature maps are not linear but belong to some given functional space. The authors propose GFUCB, an algorithm that extends the idea of UCB to this setting (the confidence intervals of UCB are replaced by a subset of likely functions). The authors derive regret upper bounds for their algorithms. These bounds illustrate the gains achieved by leveraging the information gathered across tasks. The paper received 4 informative and insightful reviews. On the positive side, the paper is the first to give regret upper bounds for this kind of RL problem with representation learning. All reviewers further acknowledge the importance of these learning problems in practice. On the less positive side, reviewers have raised some concerns and have suggested ways to improve the manuscript. Among the concerns is the intractability of implementing the algorithm in practice. For example, the authors do not comment on how to compute $\hat{f}_t$ and (*) in practice. These quantities are at the core of the algorithm. We could say that an oracle will solve these problems, but the authors could try to explain what kind of optimization problem we are facing and whether there is a chance of solving it. This intractability limits the interest of the paper. Some reviewers were also concerned about the novelty of the approach (combining techniques from [14] and [25]), and the paper is judged incremental. I believe that the authors have done more than just combining techniques, but the paper is very unclear about the technical novelty. The authors do not explain any intuition behind the proofs. The latter are actually, in my view, extremely hard to follow and to check. The presentation has to be significantly improved before we are able to really assess the contributions. In their rebuttal, the authors try to explain the novelty of their approach; all these arguments should be clear and apparent in the paper.
train
[ "LhRuXaRPLL", "iylbwSeXLWw", "LoIx5uQJOk", "TXrhv7XjSzu", "HTyOvHCtn6s", "iBgMXqXbiD", "Ng1TSTnf3BT", "FUbAccnUVI4", "HqP5pJarKX", "ZRjcAqeYmk", "enHslUpzCH", "foNe09FFFsc", "-7V2sRzoOxf", "Yz463wt-J5lE", "M7hIrp-yjX", "E8NndrECEbVH", "edv32qiLFlb", "tYYtXyn5WIT", "uF6anSQA4jM", ...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_re...
[ " ## Revised version submitted\nThank you for your time and constructive suggestions. We agree that the discussion should back on track. We have carefully read your suggestion and agree with your opinion that a more tractable algorithm would be much more practical and beneficial for reality. Your thoughtful suggest...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 2, 4 ]
[ "HTyOvHCtn6s", "enHslUpzCH", "TXrhv7XjSzu", "-7V2sRzoOxf", "Ng1TSTnf3BT", "E8NndrECEbVH", "FUbAccnUVI4", "HqP5pJarKX", "ZRjcAqeYmk", "M7hIrp-yjX", "foNe09FFFsc", "jxfCGH8JVNA", "uF6anSQA4jM", "tYYtXyn5WIT", "tYYtXyn5WIT", "edv32qiLFlb", "nips_2022_-uxUxmlr3qT", "nips_2022_-uxUxmlr3...
nips_2022_f39vsgpEaY5
Exact Shape Correspondence via 2D graph convolution
For exact 3D shape correspondence (matching or alignment), i.e., the task of matching each point on a shape to its exact corresponding point on the other shape (or to be more specific, matching at geodesic error 0), most existing methods do not perform well due to two main problems. First, on nearly-isometric shapes (i.e., low noise levels), most existing methods use the eigen-vectors (eigen-functions) of the Laplace Beltrami Operator (LBO) or other shape descriptors to update an initialized correspondence which is not exact, leading to an accumulation of update errors. Thus, though the final correspondence may generally be smooth, it is generally inexact. Second, on non-isometric shapes (noisy shapes), existing methods are generally not robust to noise as they usually assume near-isometry. In addition, existing methods that attempt to address the non-isometric shape problem (e.g., GRAMPA) are generally computationally expensive and do not generalise to nearly-isometric shapes. To address these two problems, we propose a 2D graph convolution-based framework called 2D-GEM. 2D-GEM is robust to noise on non-isometric shapes and with a few additional constraints, it also addresses the errors in the update on nearly-isometric shapes. We demonstrate the effectiveness of 2D-GEM by achieving a high accuracy of 90.5$\%$ at geodesic error 0 on the non-isometric benchmark SHREC16, i.e., TOPKIDS (while being much faster than GRAMPA), and on nearly-isometric benchmarks by achieving a high accuracy of 92.5$\%$ on TOSCA and 84.9$\%$ on SCAPE at geodesic error 0.
Accept
All the reviewers agreed that the paper is a nice addition to the shape matching literature. While its contributions are somewhat incremental, they highlighted the quality of the evaluation, and the introduction of ML methods to the graphics community should stimulate follow-up work.
train
[ "vy4JqAgZfHl", "QVeGDfbZJ3", "imq5f9DPhj", "weieGmn_Tjk", "X7p4MndYEd", "PpGeCrI1IX", "0DCzVjAfpLF", "uPgKO_EJs3l", "X6jondP1avY", "6hg5PLW--2E", "mQsEBTrrf8c", "Ati2bVBm9nM", "6QKd90y4w0y0", "1XlJL6B0v1n", "Kf6yQySwTD0", "RQxYYlI1YPn", "Ct-nFddSG_l", "3_AgxR74yk", "uVzdesfezH", ...
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Thank you again for your comments and suggestions.\n\n>Q.1. I appreciate the discussion on related papers: [1] in answer to my review and [1] in reply to NtNA's review. I suggest briefly adding them to the related work discussion in Section 1. In the new version submitted, I cannot see them.\n\nR.1. We will discu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "X6jondP1avY", "0DCzVjAfpLF", "uPgKO_EJs3l", "PpGeCrI1IX", "Ct-nFddSG_l", "RQxYYlI1YPn", "Kf6yQySwTD0", "6QKd90y4w0y0", "mQsEBTrrf8c", "mQsEBTrrf8c", "YlbQLP-WNit", "Kf6yQySwTD0", "1XlJL6B0v1n", "UkeZHBuL3EA", "uVzdesfezH", "3_AgxR74yk", "nips_2022_f39vsgpEaY5", "nips_2022_f39vsgpE...
nips_2022_jwGa6cEUFRn
NeMF: Neural Motion Fields for Kinematic Animation
We present an implicit neural representation to learn the spatio-temporal space of kinematic motions. Unlike previous work that represents motion as discrete sequential samples, we propose to express the vast motion space as a continuous function over time, hence the name Neural Motion Fields (NeMF). Specifically, we use a neural network to learn this function for miscellaneous sets of motions, which is designed to be a generative model conditioned on a temporal coordinate $t$ and a random vector $z$ for controlling the style. The model is then trained as a Variational Autoencoder (VAE) with motion encoders to sample the latent space. We train our model with a diverse human motion dataset and quadruped dataset to prove its versatility, and finally deploy it as a generic motion prior to solve task-agnostic problems and show its superiority in different motion generation and editing applications, such as motion interpolation, in-betweening, and re-navigating. More details can be found on our project page: https://cs.yale.edu/homes/che/projects/nemf/.
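To make the field formulation concrete, here is a minimal PyTorch-style sketch of a motion field pose = f(t, z); the layer sizes, the SiLU activation, and the 72-dimensional pose output are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class NeuralMotionField(nn.Module):
    """Sketch of a motion field: pose = f(t, z).

    t is a scalar temporal coordinate and z a latent style vector;
    the output (e.g. joint rotations) and all sizes are illustrative.
    """
    def __init__(self, z_dim=64, pose_dim=72, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + z_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, t, z):
        # t: (B, 1) in [0, 1]; z: (B, z_dim) sampled from the VAE prior
        return self.net(torch.cat([t, z], dim=-1))

# continuous evaluation: query the same style z at arbitrary time stamps
f = NeuralMotionField()
z = torch.randn(1, 64)
poses = torch.stack([f(torch.tensor([[s]]), z) for s in (0.0, 0.25, 0.5)])
```

Because the time axis is a continuous input rather than a discrete index, the same trained field can be sampled at any frame rate, which is what enables interpolation and in-betweening applications.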
Accept
All reviewers are in favor of accepting the submission post rebuttal. The AC agrees. Please further revise the paper based on the reviews, in particular the very constructive comments from reviewer wNce.
train
[ "yRev0fLiU9Z", "3N6HRjDHANp", "ovi_66ac3wfF", "NLMncDnOE_", "-WN-7gd9J9w", "hv_fXKTZPjd", "UxK1KMBSDYk", "AK-DnE6it5t", "F1Bb9Og3v33", "NUDSkdQZf6h", "CncyvAL8dw3", "_o2dXUqGnVk", "xnXqEyx5IX", "gmSrNx4dCVz", "o_cP7tmM_xC", "3manAblHBxU", "CST3UgWnGTf" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Very thanks for the repose. I think all the things in this paper are great. I love the idea of this paper and I believe it is an important progress and baseline for human motion synthesis.", " Dear Reviewer wNce:\n\nAs we wrote on [L130] of our supplementary, we train an autoencoder as the feature extractor to ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 3 ]
[ "_o2dXUqGnVk", "NLMncDnOE_", "F1Bb9Og3v33", "NUDSkdQZf6h", "UxK1KMBSDYk", "CncyvAL8dw3", "CncyvAL8dw3", "gmSrNx4dCVz", "o_cP7tmM_xC", "o_cP7tmM_xC", "o_cP7tmM_xC", "3manAblHBxU", "CST3UgWnGTf", "nips_2022_jwGa6cEUFRn", "nips_2022_jwGa6cEUFRn", "nips_2022_jwGa6cEUFRn", "nips_2022_jwGa...
nips_2022_wiGXs_kS_X
Logical Credal Networks
We introduce Logical Credal Networks (or LCNs for short) -- an expressive probabilistic logic that generalizes prior formalisms that combine logic and probability. Given imprecise information represented by probability bounds and conditional probability bounds on logic formulas, an LCN specifies a set of probability distributions over all its interpretations. Our approach allows propositional and first-order logic formulas with few restrictions, e.g., without requiring acyclicity. We also define a generalized Markov condition that allows us to identify implicit independence relations between atomic formulas. We evaluate our method on benchmark problems such as random networks, Mastermind games with uncertainty and credit card fraud detection. Our results show that the LCN outperforms existing approaches; its advantage lies in aggregating multiple sources of imprecise information.
Accept
The theoretical connection of logic and probability via credal networks has been seen as positive. The computational complexity is a negative point (as it is even harder than credal networks, which are already very hard to solve). The expressive representation framework has outweighed the high complexity in the discussions. There are many suggestions that could be integrated into the manuscript, including connections with other works on credal networks, decision criteria, and maximum entropy. Approximate methods may well be the only way forward to scale it, and the theoretical properties can be further expanded and clarified in terms of algorithmic complexity and representation power, not to mention how to learn from data plus constraints (hinting here at future work, obviously). It is a huge area with many open problems, and the paper has been considered as opening some doors (including the connection to MLNs). The recommendation is not unanimous though, and I reckon that a credal set would better represent the opinion of the experts about this submission.
train
[ "cvNyRTPnFcQs", "RLunTSA7Qkm7", "hWI609hPlTj", "40-mcDY0Ho", "5ZzJaBYsneP", "smnJcRS2IJY", "TWUU4wYU5NK", "jVCIlkNnJJc", "ESYavnaNBYi" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the comments. Yes, please include a discussion of approximate inference in the paper.\nI will raise my evaluation to Accept", " Thank you for your comments. We will fix all the presentation issues found. \n\nWe agree that the main limitation of our approach is scalability. Clearly, using and/or adapt...
[ -1, -1, -1, -1, -1, 4, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 4, 4, 3, 5 ]
[ "RLunTSA7Qkm7", "ESYavnaNBYi", "jVCIlkNnJJc", "TWUU4wYU5NK", "smnJcRS2IJY", "nips_2022_wiGXs_kS_X", "nips_2022_wiGXs_kS_X", "nips_2022_wiGXs_kS_X", "nips_2022_wiGXs_kS_X" ]
nips_2022_K3efgD7QzVp
Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again
Knowledge Distillation (KD) aims at transferring the knowledge of a well-performed neural network (the {\it teacher}) to a weaker one (the {\it student}). A peculiar phenomenon is that a more accurate model doesn't necessarily teach better, and temperature adjustment cannot alleviate the capacity mismatch either. To explain this, we decompose the efficacy of KD into three parts: {\it correct guidance}, {\it smooth regularization}, and {\it class discriminability}. The last term describes the distinctness of the {\it wrong class probabilities} that the teacher provides in KD. Complex teachers tend to be over-confident, and traditional temperature scaling limits the efficacy of {\it class discriminability}, resulting in less discriminative wrong class probabilities. Therefore, we propose {\it Asymmetric Temperature Scaling (ATS)}, which separately applies a higher/lower temperature to the correct/wrong class. ATS enlarges the variance of wrong class probabilities in the teacher's label and makes the students grasp the absolute affinities of wrong classes to the target class as discriminatively as possible. Both theoretical analysis and extensive experimental results demonstrate the effectiveness of ATS. The demo developed in Mindspore is available at \url{https://gitee.com/lxcnju/ats-mindspore} and will be available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/ats}.
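A minimal NumPy sketch of the asymmetric temperature scaling described above; the function name and default temperature values are illustrative, and the exact temperatures used in the paper may differ.

```python
import numpy as np

def ats_soft_label(logits, target, tau_correct=4.0, tau_wrong=1.0):
    """Asymmetric Temperature Scaling (sketch).

    Divides the target-class logit by a higher temperature and the
    wrong-class logits by a lower one before the softmax, which
    enlarges the variance of the wrong-class probabilities.
    """
    scaled = np.array(logits, dtype=float)
    mask = np.zeros_like(scaled, dtype=bool)
    mask[target] = True
    scaled[mask] /= tau_correct   # correct class: higher temperature
    scaled[~mask] /= tau_wrong    # wrong classes: lower temperature
    exp = np.exp(scaled - scaled.max())  # numerically stable softmax
    return exp / exp.sum()

# toy usage: an over-confident teacher whose target class is 0
p = ats_soft_label([9.0, 2.0, 1.5, 0.5], target=0)
```

With a single shared temperature, raising it flattens the wrong-class probabilities along with the correct one; splitting the temperatures keeps the wrong-class probabilities discriminative.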
Accept
This paper studies the distillation consequences of carefully tuning the softmax temperature. Reviewers found a variety of weaknesses but were positive overall. I recommend acceptance, but ask that the authors carefully address the concerns raised during review, which will greatly strengthen the paper's message.
train
[ "0qS21a1St0", "8_vwlR2opBv", "zNVGFXwXZ4C", "TUQVHuNFql5", "Gifxw8iWHfK", "5Z1Y8SFgt0B", "DHIY5OOi8wbk", "9E8trr-YNNQ", "IfsfxRE5Nx", "fq1VIL4_Fk_", "TOznOIDqAi" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Appreciate the authors for the revision. I am increasing my score after-rebuttal.", " Thanks for the reviewer's work and valuable suggestions again! We provide the details for the newly-modified version. The modified contents are shown in blue.\n\n(1) We take the reviewer RAjp's suggestions and re-organize the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "fq1VIL4_Fk_", "nips_2022_K3efgD7QzVp", "TUQVHuNFql5", "9E8trr-YNNQ", "nips_2022_K3efgD7QzVp", "TOznOIDqAi", "fq1VIL4_Fk_", "IfsfxRE5Nx", "nips_2022_K3efgD7QzVp", "nips_2022_K3efgD7QzVp", "nips_2022_K3efgD7QzVp" ]
nips_2022_m_JSC3r9td7
DeepMed: Semiparametric Causal Mediation Analysis with Debiased Deep Learning
Causal mediation analysis can unpack the black box of causality and is therefore a powerful tool for disentangling causal pathways in biomedical and social sciences, and also for evaluating machine learning fairness. To reduce bias for estimating Natural Direct and Indirect Effects in mediation analysis, we propose a new method called DeepMed that uses deep neural networks (DNNs) to cross-fit the infinite-dimensional nuisance functions in the efficient influence functions. We obtain novel theoretical results that our DeepMed method (1) can achieve semiparametric efficiency bound without imposing sparsity constraints on the DNN architecture and (2) can adapt to certain low dimensional structures of the nuisance functions, significantly advancing the existing literature on DNN-based semiparametric causal inference. Extensive synthetic experiments are conducted to support our findings and also expose the gap between theory and practice. As a proof of concept, we apply DeepMed to analyze two real datasets on machine learning fairness and reach conclusions consistent with previous findings.
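As a rough illustration of the cross-fitting scheme named in the abstract, here is a generic K-fold sketch; `fit_nuisance` and `eif` are hypothetical placeholders for the paper's DNN nuisance estimators and efficient influence functions, and `data` is assumed to be a NumPy array of observations.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_fit(data, fit_nuisance, eif, n_splits=5, seed=0):
    """Generic K-fold cross-fitting (sketch).

    For each fold, nuisance functions are fit on the complement and
    the efficient influence function is evaluated on the held-out
    fold, so no sample both trains and evaluates the nuisances.
    """
    n = len(data)
    scores = np.zeros(n)
    kf = KFold(n_splits, shuffle=True, random_state=seed)
    for train_idx, eval_idx in kf.split(data):
        nuisance = fit_nuisance(data[train_idx])       # e.g. DNN regressions
        scores[eval_idx] = eif(nuisance, data[eval_idx])
    est = scores.mean()
    se = scores.std(ddof=1) / np.sqrt(n)               # plug-in standard error
    return est, se
```

Cross-fitting of this kind is what allows flexible, slowly converging DNN nuisance estimates to still yield root-n inference for the target effects.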
Accept
The decision is to accept the paper. The paper provides a theoretical treatment of using DNNs for nuisance function estimation in multiply-robust mediation analysis. The theory developed here is DNN-specific, and novel synthetic experiments that better test the ability of nuisance functions to adapt to complex ground-truth functions are proposed. A proof-of-concept demonstration on fairness-oriented mediation analysis is also provided. While the subject matter is dense, the paper builds on a well-established line of work and makes solid theoretical and evaluation-design contributions. I agree with the authors that in mediation analysis, synthetic evaluation is in many ways the best we can do, especially for evaluating the theoretical claims in this paper. One suggestion I have from looking over the paper: In the experiments section, I hope the authors can make clearer what the empirical basis is for judging a method to be semiparametrically efficient in their synthetic experiments (e.g., the authors note that DeepMed is not semiparametrically efficient in some of the synthetic settings---which is fine---but don't explain in the main text how such a judgment was made). I suspect this comes from comparing performance to the estimate that used oracle nuisance functions under an appropriately large sample size, but this should be stated explicitly, especially for the benefit of others in the community who might want to extend this work using the evaluation framework developed in this paper.
test
[ "tfW_rlgLhb", "2eQLa3eIZqU", "VpMqZTTx4NS", "iZvv5l445z", "YMVF8cn5jp-", "tS16DaHqGb", "HHHHlkWaggH", "9lKaHNyL5w", "JcZFe2r8OVk", "koUugODFDM_", "E0Gr_JPka2b", "bOvWsMN6_9", "h8J8ElJFXro" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' explanation, but essentially nothing has changed in terms of paper content, and hence I will keep my original score.", " Dear reviewers,\n\nDid we address all your questions? Do you have any further questions? The time window to respond for us is closing tomorrow. We are more than will...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 1, 1, 3 ]
[ "iZvv5l445z", "nips_2022_m_JSC3r9td7", "HHHHlkWaggH", "tS16DaHqGb", "JcZFe2r8OVk", "E0Gr_JPka2b", "bOvWsMN6_9", "h8J8ElJFXro", "koUugODFDM_", "nips_2022_m_JSC3r9td7", "nips_2022_m_JSC3r9td7", "nips_2022_m_JSC3r9td7", "nips_2022_m_JSC3r9td7" ]
nips_2022_QudXypzItbt
SnAKe: Bayesian Optimization with Pathwise Exploration
"Bayesian Optimization is a very effective tool for optimizing expensive black-box functions. Inspired by applications developing and characterizing reaction chemistry using droplet microfluidic reactors, we consider a novel setting where the expense of evaluating the function can increase significantly when making large input changes between iterations. We further assume we are working asynchronously, meaning we have to decide on new queries before we finish evaluating previous experiments. This paper investigates the problem and introduces 'Sequential Bayesian Optimization via Adaptive Connecting Samples' (SnAKe), which provides a solution by considering large batches of queries and preemptively building optimization paths that minimize input costs. We investigate some convergence properties and empirically show that the algorithm is able to achieve regret similar to classical Bayesian Optimization algorithms in both the synchronous and asynchronous settings, while reducing the input costs significantly. We show the method is robust to the choice of its single hyper-parameter and provide a parameter-free alternative."
Accept
There was disagreement among the reviewers about whether this paper should be accepted. But taking the long view of how this paper might be perceived in 10 years, and having read the paper in detail myself, I lean towards acceptance being the right decision for this paper. My reasoning: - The problem being solved is completely novel as far as I am aware, and is well motivated by a realistic real-world scenario in a large field of research. - The proposed approach is a highly original variation on BO that, despite not being justified by theory at all, seems like a plausible direction to explore to get faster algorithms. In light of the above, the potential impact of the paper might be large: we may discover other applications that have the same cost setup, we may encourage ML researchers to take on chemistry-motivated problems, we may encourage theory researchers to analyze the method (or, more likely, to come up with methods that are justified), or we may see empiricists use similar ideas in other settings. That being said, the reviewers point out some completely valid criticisms. So I expect to see quite a few updates in the final version of the paper. Please comb through the reviews carefully (particularly the two critical reviews) and update the paper based on the reviewers' comments, including and beyond what is already written in the author response. I could imagine many readers having similar issues (e.g., tone down any indication that the algorithm is theoretically justified from a regret perspective), and some of the comments represent ways the paper could be more complete in its exploration of the topic.
val
[ "z28PG7WnDay", "c3OWPSIN-Nj", "LiSHOg0f6vl", "a5-YqHNW1PV", "uV-4I7Z132N0", "zMko_CnaU4T", "BMob7npMFuL", "ev9cAnO4QgQ", "QA-6epLPlt", "ihSX0OzzzXM", "SVZp6w3CljF", "nU2jwIqNrNo", "9natIRxUD0g", "WxFX5D1oATX" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank you for reading the rebuttal and providing a detailed response.\n\n1. The problem as defined in the paper can be reduced to: “For a given budget, $T$, optimize the black-box function $f$ for the cheapest cost (e.g. distance between variables) possible”. As mentioned in the rebuttal, this se...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "c3OWPSIN-Nj", "QA-6epLPlt", "uV-4I7Z132N0", "zMko_CnaU4T", "zMko_CnaU4T", "ihSX0OzzzXM", "WxFX5D1oATX", "9natIRxUD0g", "SVZp6w3CljF", "nU2jwIqNrNo", "nips_2022_QudXypzItbt", "nips_2022_QudXypzItbt", "nips_2022_QudXypzItbt", "nips_2022_QudXypzItbt" ]
nips_2022_0ZKyTHwF5V1
Distributionally Robust Optimization via Ball Oracle Acceleration
We develop and analyze algorithms for distributionally robust optimization (DRO) of convex losses. In particular, we consider group-structured and bounded $f$-divergence uncertainty sets. Our approach relies on an accelerated method that queries a ball optimization oracle, i.e., a subroutine that minimizes the objective within a small ball around the query point. Our main contribution is efficient implementations of this oracle for DRO objectives. For DRO with $N$ non-smooth loss functions, the resulting algorithms find an $\epsilon$-accurate solution with $\widetilde{O}\left(N\epsilon^{-2/3} + \epsilon^{-2}\right)$ first-order oracle queries to individual loss functions. Compared to existing algorithms for this problem, we improve complexity by a factor of up to $\epsilon^{-4/3}$.
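As a worked equation, the f-divergence DRO objective the abstract refers to can be written in the standard form below; the paper's uncertainty sets may carry additional constraints, and this is a generic template rather than the paper's exact formulation.

```latex
\min_{x}\;\max_{q \in \mathcal{U}}\;\sum_{i=1}^{N} q_i\,\ell_i(x),
\qquad
\mathcal{U} \;=\; \Big\{\, q \in \Delta^{N} \;:\; D_f\big(q \,\|\, \tfrac{1}{N}\mathbf{1}\big) \le \rho \,\Big\}.
```

The group-structured variant replaces the per-sample weights $q_i$ with weights over group-average losses, i.e., the inner maximization ranges over the groups rather than over individual samples.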
Accept
The paper provides new complexity guarantees for distributionally robust optimization (DRO) in two main settings: (i) when the ambiguity set is discrete and consists of a finite number of possible distributions (group DRO) and (ii) when the ambiguity set is based on f-divergence. The algorithmic techniques build on the recent results on ball oracle acceleration by Carmon et al. (2021). The key technical contribution is building a novel unbiased gradient estimator and combining it with SGD and ball oracle acceleration to obtain tight gradient oracle complexity guarantees. The same results cannot be recovered by simply applying existing algorithms. The improvement over existing algorithms in terms of the gradient oracle complexity is reducing it from $N/\epsilon^2$ to $N/\epsilon^{2/3} + 1/\epsilon^2$. The obtained improvements seem to be primarily of theoretical interest at this point, as they require an exact minimizer over a ball of a given radius (i.e., an exact ball oracle); however, this contribution is within the scope of the conference. While numerical experiments are not a requirement for NeurIPS papers, if the authors do decide to add them, they are advised to ensure that any comparison to other algorithms they make is a fair apples-to-apples comparison. The paper would also benefit from being made more accessible to novice readers who may not be familiar with prior closely related results on ball oracle acceleration.
train
[ "kfjwi5sTWKE", "u4n64bTOYXY", "aGXq86M6d91", "v-5Cp_iulnU", "wf8ne7m9dVo", "8QiVDmE3RlM", "C78mdt9IyHw", "gycG18oshXj", "o350Dx83_-", "R5QHStC4mdm", "yMLxAt5iNXA" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your clarifications. I understand better the challenges of achieving your results.\n\nI support your paper, and will keep my score.", " A number of reviewers have asked about the lack of empirical evaluation and practical potential of our method. We acknowledge that - despite their significant the...
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "C78mdt9IyHw", "nips_2022_0ZKyTHwF5V1", "yMLxAt5iNXA", "yMLxAt5iNXA", "R5QHStC4mdm", "o350Dx83_-", "gycG18oshXj", "nips_2022_0ZKyTHwF5V1", "nips_2022_0ZKyTHwF5V1", "nips_2022_0ZKyTHwF5V1", "nips_2022_0ZKyTHwF5V1" ]
nips_2022_bydKs84JEyw
VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts
We present a unified Vision-Language pretrained Model (VLMo) that jointly learns a dual encoder and a fusion encoder with a modular Transformer network. Specifically, we introduce Multiway Transformer, where each block contains a pool of modality-specific experts and a shared self-attention layer. Because of the modeling flexibility of Multiway Transformer, pretrained VLMo can be fine-tuned as a fusion encoder for vision-language classification tasks, or used as a dual encoder for efficient image-text retrieval. Moreover, we propose a stagewise pre-training strategy, which effectively leverages large-scale image-only and text-only data besides image-text pairs. Experimental results show that VLMo achieves state-of-the-art results on various vision-language tasks, including VQA, NLVR2 and image-text retrieval.
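A minimal sketch of a Multiway Transformer block as described in the abstract: a single shared self-attention layer with modality-specific feed-forward experts. The dimensions, expert names, and routing interface are illustrative assumptions, not VLMo's released implementation.

```python
import torch
import torch.nn as nn

class MultiwayBlock(nn.Module):
    """One Multiway Transformer block (sketch, illustrative sizes).

    Self-attention is shared across modalities; each token sequence is
    routed to a modality-specific feed-forward expert.
    """
    def __init__(self, d=768, heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.experts = nn.ModuleDict({
            k: nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
            for k in ("vision", "language", "fusion")
        })

    def forward(self, x, modality="fusion"):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.experts[modality](self.norm2(x))
```

This modular routing is what lets the same pretrained weights serve as a dual encoder (vision or language experts alone) for retrieval, or as a fusion encoder for classification tasks.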
Accept
After the rebuttal and discussion, this paper unanimously received positive ratings. As the reviewers expressed satisfaction with the authors' feedback, the final draft should incorporate it accordingly. Additionally, the reviewers raised several post-rebuttal points that the final draft should clarify, including the comparison with Flamingo, the setup of the stages, and the contributions of the experts.
train
[ "7JVUVclqOnh", "57_pwt1LqYtx", "pO7ZFrrO4Y-", "iOjiKWKXB4", "BZFov31lqJp", "DX4McEtPZBu", "XNLPhmN6iq" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " In general, I think you have answered my questions quite well. Q1 and Q2 answer my concerns about increased costs, and make me better understand the contribution of the setups. As to other issues like better introduction and more experiments, I am convinced that you will improve the quality of the paper. What sti...
[ -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, 2, 4, 5 ]
[ "XNLPhmN6iq", "XNLPhmN6iq", "DX4McEtPZBu", "BZFov31lqJp", "nips_2022_bydKs84JEyw", "nips_2022_bydKs84JEyw", "nips_2022_bydKs84JEyw" ]
nips_2022_IwC_x50fvU
Unknown-Aware Domain Adversarial Learning for Open-Set Domain Adaptation
Open-Set Domain Adaptation (OSDA) assumes that a target domain contains unknown classes, which are not discovered in a source domain. Existing domain adversarial learning methods are not suitable for OSDA because distribution matching with $\textit{unknown}$ classes leads to negative transfer. Previous OSDA methods have focused on matching the source and the target distribution by only utilizing $\textit{known}$ classes. However, this $\textit{known}$-only matching may fail to learn the target-$\textit{unknown}$ feature space. Therefore, we propose Unknown-Aware Domain Adversarial Learning (UADAL), which $\textit{aligns}$ the source and the target-$\textit{known}$ distribution while simultaneously $\textit{segregating}$ the target-$\textit{unknown}$ distribution in the feature alignment procedure. We provide theoretical analyses of the optimized state of the proposed $\textit{unknown-aware}$ feature alignment, so we can guarantee both $\textit{alignment}$ and $\textit{segregation}$ theoretically. Empirically, we evaluate UADAL on the benchmark datasets, showing that UADAL outperforms other methods with better feature alignment, reporting state-of-the-art performance.
Accept
This paper proposes a novel method called UADAL (Unknown-Aware Domain Adversarial Learning) for Open-Set Domain Adaptation (OSDA). Specifically, the proposed method performs source and target-known distribution alignment while simultaneously separating the source and target-unknown distributions in the feature alignment procedure. In OSDA, the idea of aligning the source and target-known distributions while simultaneously separating the source and target-unknown distributions through explicit adversarial learning is novel. Furthermore, a theoretical analysis of the optimization process of the proposed UADAL is conducted. All three reviewers had similarly positive comments on this paper and were satisfied with the authors' feedback. Thus the AC recommends it for acceptance.
train
[ "bd5MDa_7uyP", "N4-zVu3ecoW", "nNks0coeePd", "eRqr5GXqUsS", "JbudGuVn9S2", "GP6ubf3w9G", "PbkMb-_F3r", "l1tJ6X21HxL", "V9Hj5-jAJ4G", "lIOTAVGoehV", "h9GRqxxKm2I", "OtjRj63LL3G", "ea_AkW5yOtN", "2YfIWlZ69LQ", "LEchD_smT7C", "618cXtfKxUs" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Again, thanks for your work on the reviews and the feedback. Please leave a comment if you have further questions. We will be happy to provide additional clarification. Thank you so much for your time.", " Again, thanks for your work on the reviews and the feedback. Please leave a comment if you have further qu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "LEchD_smT7C", "nNks0coeePd", "l1tJ6X21HxL", "OtjRj63LL3G", "PbkMb-_F3r", "PbkMb-_F3r", "618cXtfKxUs", "618cXtfKxUs", "LEchD_smT7C", "LEchD_smT7C", "2YfIWlZ69LQ", "2YfIWlZ69LQ", "nips_2022_IwC_x50fvU", "nips_2022_IwC_x50fvU", "nips_2022_IwC_x50fvU", "nips_2022_IwC_x50fvU" ]
nips_2022_uKYvlNgahrz
Constrained Monotonic Neural Networks
Deep neural networks are becoming increasingly popular for approximating arbitrary functions from noisy data. But wider adoption is being hindered by the need to explain such models and to impose additional constraints on them. The monotonicity constraint is one of the most requested properties in real-world scenarios and is the focus of this paper. One of the oldest ways to construct a monotonic fully connected neural network is to constrain its weights to be non-negative while employing a monotonic activation function. Unfortunately, this construction does not work with popular non-saturated activation functions such as ReLU, ELU, and SELU, as it can only approximate convex functions. We show this shortcoming can be fixed by employing the original activation function for a part of the neurons in the layer, and employing its point reflection for the other part. Our experiments show this approach to building monotonic deep neural networks has matching or better accuracy when compared to other state-of-the-art methods such as deep lattice networks or monotonic networks obtained by heuristic regularization. This method is the simplest one in the sense of having the least number of parameters, not requiring any modifications to the learning procedure or post-learning steps. Finally, we give a proof that it can approximate any continuous monotone function on a compact subset of $\mathbb{R}^n$.
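A minimal NumPy sketch of the construction the abstract describes: non-negative weights, with the original activation on half of the units and its point reflection on the other half. Taking `|W|` is just one way to enforce the non-negativity constraint, and the half/half split and ReLU default are illustrative.

```python
import numpy as np

def monotone_dense(x, W, b, rho=lambda t: np.maximum(t, 0.0)):
    """One monotone fully connected layer (sketch).

    With non-negative weights, every pre-activation is non-decreasing
    in x. Half of the units apply the activation rho (convex for
    ReLU-like choices); the other half apply its point reflection
    -rho(-t) (concave), restoring the ability to approximate
    non-convex monotone functions.
    """
    z = x @ np.abs(W) + b            # |W| enforces non-negative weights
    h = z.shape[-1] // 2
    out = np.empty_like(z)
    out[..., :h] = rho(z[..., :h])   # original activation
    out[..., h:] = -rho(-z[..., h:]) # point reflection about the origin
    return out
```

Since a composition of non-decreasing functions is non-decreasing, stacking such layers keeps the whole network monotone in its inputs by construction.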
Reject
There are some ethical issues with using the COMPAS dataset in the paper. The ethics reviewers recommend that 1) Given the limitations of the COMPAS dataset, the authors could just use another dataset with the monotonicity constraint property; there are a lot of datasets out there with this property. But the authors should discuss the issues with the COMPAS data in greater detail if they want to keep those results. 2) The changes below could be implemented in the current version of the paper: - The quoted sentence from the introduction should be removed and replaced with specific, evidence-backed statements about why monotonicity is a goal, without making value judgements and stating them as fact. - The checklist answers could be updated as described above. - How monotonic vs. non-monotonic features are determined could be added. The reviewer also suggests the authors do more benchmarking and revise the mathematical writing, which is hard for readers to follow. Although the paper is borderline, most of the reviewers do not really support publication. Therefore, I encourage the authors to take the comments and suggestions from the reviewers and improve the paper in order to resubmit to the next conference.
train
[ "ZLoUyvGQOH", "agF7wSsnANZ", "55xUppxygfI", "Pha9hBrbyUC", "xvhgAuMjprK", "atMbnnfzsvs", "cGROT8ZOrED", "aeDGl5gPk4", "9kGV2rP6mpd", "FHN2INE60EC", "gzWm1707MC", "Myxu1F9o4C", "UQloNXmzxse", "6WQBgSD-9cI", "E_GS-fV7_au", "W4TlLcjsN8", "_jO1Yem6xnb" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comments. Please find our responses below.\n\n“Indeed I am sorry, I missed the fact it is the reproduction of a proof in this 2010 paper. It seems I have a lot to say about it (even though I believe the theorem must be true) ;)”\n\nResponse:\nThank you for acknowledging this. \n\n\n“The ethical...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "55xUppxygfI", "nips_2022_uKYvlNgahrz", "atMbnnfzsvs", "xvhgAuMjprK", "UQloNXmzxse", "aeDGl5gPk4", "nips_2022_uKYvlNgahrz", "FHN2INE60EC", "nips_2022_uKYvlNgahrz", "W4TlLcjsN8", "_jO1Yem6xnb", "E_GS-fV7_au", "6WQBgSD-9cI", "nips_2022_uKYvlNgahrz", "nips_2022_uKYvlNgahrz", "nips_2022_uK...
nips_2022_KpuObEWvvOX
Semi-Supervised Generative Models for Multiagent Trajectories
Analyzing the spatiotemporal behavior of multiple agents is of great interest to many communities. Existing probabilistic models in this realm are formalized either in an unsupervised framework, where the latent space is described by discrete or continuous variables, or in a supervised framework, where weakly preserved labels add explicit information to continuous latent representations. To overcome inherent limitations, we propose a novel objective function for processing multi-agent trajectories based on semi-supervised variational autoencoders, where equivariance and interaction of agents are captured via customized graph networks. The resulting architecture disentangles discrete and continuous latent effects and provides a natural solution for injecting expensive domain knowledge into interactive sequential systems. Empirically, our model not only outperforms various state-of-the-art baselines in trajectory forecasting, but also learns to effectively leverage unsupervised multi-agent sequences for classification tasks on interactive real-world sports datasets.
Accept
This paper studies an interesting problem, and overall the reviewers agreed the exposition and validation are sufficient. We encourage the authors to consider the issues raised by the reviewers and further improve the work in the final version.
train
[ "hq4k1ZYsSz", "lUgTRn7_i2B", "yPE4inPhKQi", "3AwUT68ZeaWt", "OlNV-sWmMKw", "1iA93H8qY9k", "UqrdrlbnG2VQ", "IiPYEkzm6Je", "Yj-gp91sFk4", "cCprT0015pQ", "bAnZkvCGGJM", "OU9GoVAf8Lr", "KWk3NswBH2G", "lATDgO2rwik", "XNtfgNMIZH", "gxAQwBTbRhV", "P8PJqCASWv2", "HLHKxgb00Y9", "Ibv2hnt4j...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for responding to my comments, concerns, and suggestions! I do not have further questions at this time, and I appreciate your clarifications of my earlier misunderstandings about the work. \n\nAfter reviewing the rebuttal and other review comments/discussions, I will update my score positively. I also w...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "3AwUT68ZeaWt", "Yj-gp91sFk4", "UqrdrlbnG2VQ", "OlNV-sWmMKw", "1iA93H8qY9k", "Ibv2hnt4jkC", "IiPYEkzm6Je", "HLHKxgb00Y9", "cCprT0015pQ", "bAnZkvCGGJM", "OU9GoVAf8Lr", "P8PJqCASWv2", "lATDgO2rwik", "gxAQwBTbRhV", "nips_2022_KpuObEWvvOX", "nips_2022_KpuObEWvvOX", "nips_2022_KpuObEWvvOX...
nips_2022_iy2G-yLGuku
Learning to Generate Inversion-Resistant Model Explanations
The wide adoption of deep neural networks (DNNs) in mission-critical applications has spurred the need for interpretable models that provide explanations of the model's decisions. Unfortunately, previous studies have demonstrated that model explanations facilitate information leakage, rendering DNN models vulnerable to model inversion attacks. These attacks enable the adversary to reconstruct original images based on model explanations, thus leaking privacy-sensitive features. To this end, we present Generative Noise Injector for Model Explanations (GNIME), a novel defense framework that perturbs model explanations to minimize the risk of model inversion attacks while preserving the interpretabilities of the generated explanations. Specifically, we formulate the defense training as a two-player minimax game between the inversion attack network on the one hand, which aims to invert model explanations, and the noise generator network on the other, which aims to inject perturbations to tamper with model inversion attacks. We demonstrate that GNIME significantly decreases the information leakage in model explanations, decreasing transferable classification accuracy in facial recognition models by up to 84.8% while preserving the original functionality of model explanations.
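A rough PyTorch sketch of the two-player minimax training loop described above; the loss weighting `lam` and the exact faithfulness term are assumptions, not the paper's reference objective.

```python
import torch.nn.functional as F

def gnime_step(noise_gen, inverter, expl, img, opt_g, opt_i, lam=1.0):
    """One alternating minimax step (sketch; loss weights illustrative).

    The inversion network is trained to reconstruct the input image
    from the perturbed explanation; the noise generator is trained to
    make that reconstruction fail while keeping the perturbed
    explanation close to the original one.
    """
    # 1) inversion network: minimize reconstruction error
    pert = (expl + noise_gen(expl)).detach()   # freeze the generator
    loss_i = F.mse_loss(inverter(pert), img)
    opt_i.zero_grad(); loss_i.backward(); opt_i.step()

    # 2) noise generator: maximize reconstruction error, stay faithful
    pert = expl + noise_gen(expl)
    loss_g = -F.mse_loss(inverter(pert), img) + lam * F.mse_loss(pert, expl)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_i.item(), loss_g.item()
```

The second term in `loss_g` is what keeps the perturbed saliency map interpretable; without it, the generator could trivially destroy all information in the explanation.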
Accept
This paper proposes the GNIME method that defends against explanation-aware model inversion attacks by adversarially manipulating the saliency map. The reviews are mostly positive. The (minor) concerns and suggestions are: - The paper lacks a theoretical analysis of the adversarial framework. A theoretical analysis, even at optimality, would improve the paper. However, this is not a major concern as the experimental results are quite good. - The idea of adversarially manipulating a representation (here a saliency map) so as to remove sensitive factors of variation while preserving as much information as possible is not new. For example, such adversarial frameworks have been studied in the context of learning fair representations, and in many other areas. I suggest including a discussion about this in the final manuscript.
train
[ "vdko3ePVB2", "UC3CuCketS", "C710nP7aIbP", "IIFGEVk6M88", "EQI1Wtwq9DU", "5sDbIdioXIa", "YcRlBk81KRe", "joQOMZzvGv9", "xyjIiA3vnc9", "O4pi17GBa34", "M1uEOQIsdfk", "a2jUTNf0WjZ", "RZQI-Lya2V-" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I really appreciate the additional work and explanations by the authors. I keep my rating as the paper is outstanding and my questions are well explained. ", " We realized that our links to the anonymous submission site are unstable, which may make the reviewers unable to access the figures and tables. So, we u...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 3 ]
[ "5sDbIdioXIa", "YcRlBk81KRe", "5sDbIdioXIa", "xyjIiA3vnc9", "joQOMZzvGv9", "a2jUTNf0WjZ", "RZQI-Lya2V-", "M1uEOQIsdfk", "O4pi17GBa34", "nips_2022_iy2G-yLGuku", "nips_2022_iy2G-yLGuku", "nips_2022_iy2G-yLGuku", "nips_2022_iy2G-yLGuku" ]
nips_2022_78aj7sPX4s-
Stability Analysis and Generalization Bounds of Adversarial Training
In adversarial machine learning, deep neural networks can fit the adversarial examples on the training dataset but have poor generalization ability on the test set. This phenomenon is called robust overfitting, and it can be observed when adversarially training neural nets on common datasets, including SVHN, CIFAR-10, CIFAR-100, and ImageNet. In this paper, we study the robust overfitting issue of adversarial training by using tools from uniform stability. One major challenge is that the outer function (as a maximization of the inner function) is nonsmooth, so the standard technique (e.g., Hardt et al., 2016) cannot be applied. Our approach is to consider $\eta$-approximate smoothness: we show that the outer function satisfies this modified smoothness assumption with $\eta$ being a constant related to the adversarial perturbation $\epsilon$. Based on this, we derive stability-based generalization bounds for stochastic gradient descent (SGD) on the general class of $\eta$-approximate smooth functions, which covers the adversarial loss. Our results suggest that robust test accuracy decreases in $\epsilon$ when $T$ is large, with a speed between $\Omega(\epsilon\sqrt{T})$ and $\mathcal{O}(\epsilon T)$. This phenomenon is also observed in practice. Additionally, we show that a few popular techniques for adversarial training (\emph{e.g.,} early stopping, cyclic learning rate, and stochastic weight averaging) are stability-promoting in theory.
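For readers, a natural formalization of the η-approximate smoothness condition named in the abstract is the following (the paper's exact definition and constants may differ):

```latex
\|\nabla f(w_1) - \nabla f(w_2)\| \;\le\; L\,\|w_1 - w_2\| \;+\; \eta
\qquad \text{for all } w_1,\, w_2,
```

i.e., standard $L$-smoothness up to an additive slack $\eta$. For the adversarial loss $g(w) = \max_{\|\delta\|\le\epsilon}\,\ell(w;\, x+\delta,\, y)$, the slack scales with the perturbation radius $\epsilon$, which is how the generalization bounds come to depend on $\epsilon$.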
Accept
This paper uses uniform stability to study generalization in networks under adversarial training. The authors use their framework to understand robust overfitting: the phenomenon in which a network is adversarially robust on the training set, but generalize poorly. The theoretical results in this paper are backed up by experiments and applications to provide theoretical clarity to a number of common methods for reducing overfitting. All reviewers supported acceptance of this paper. Reviewers enjoyed the exposition and writing, and found the theoretical developments and explanation of overfitting to be convincing. Finally, the reviewers liked the experiments backing up the theory and the practical applications.
train
[ "50yOVe4AtUv", "fkwNlinbft", "bdAYUIegVs", "rCAAidszkk", "JzY_VA2sRwQ", "RXeCZ_-VE6p5", "lZgIy9f6mO", "ZsomP8RWnYu", "OA-1Zi56GRv", "coptwXDnX2Z", "x87ilfd_Pt4", "SWPKOsKBoX", "dx9Rh9t3phF", "1oJHVx3psrf" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. We have uploaded the updated paper.", " Thanks for the question.\n\nFirstly, for neural networks, $L_z$-gradient Lipschitz with respect to $z$ is a strong assumption due to the non-smooth activation function (e.g., ReLU). This is already discussed in the submitted version:\n\n''While Re...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "JzY_VA2sRwQ", "rCAAidszkk", "nips_2022_78aj7sPX4s-", "OA-1Zi56GRv", "RXeCZ_-VE6p5", "1oJHVx3psrf", "SWPKOsKBoX", "dx9Rh9t3phF", "x87ilfd_Pt4", "x87ilfd_Pt4", "nips_2022_78aj7sPX4s-", "nips_2022_78aj7sPX4s-", "nips_2022_78aj7sPX4s-", "nips_2022_78aj7sPX4s-" ]
nips_2022_rQ1cNbi07Vq
Rethinking and Improving Robustness of Convolutional Neural Networks: a Shapley Value-based Approach in Frequency Domain
The existence of adversarial examples poses concerns for the robustness of convolutional neural networks (CNNs), for which a popular hypothesis is the frequency bias phenomenon: CNNs rely more on high-frequency components (HFC) for classification than humans, which causes the brittleness of CNNs. However, most previous works manually select and roughly divide the image frequency spectrum and conduct qualitative analyses. In this work, we introduce the Shapley value, a metric from cooperative game theory, into the frequency domain and propose to quantify the positive (negative) impact of every frequency component of the data on CNNs. Based on the Shapley value, we quantify the impact in a fine-grained way and show intriguing instance disparity. Statistically, we investigate adversarial training (AT) and adversarial attacks in the frequency domain. These observations motivate us to perform an in-depth analysis and lead to multiple novel hypotheses about i) the cause of the adversarial robustness of AT models; ii) the fairness problem of AT between different classes in the same dataset; iii) the attack bias on different frequency components. Finally, we propose a Shapley-value guided data augmentation technique for improving the robustness. Experimental results on image classification benchmarks show its effectiveness.
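A minimal Monte-Carlo sketch of estimating Shapley values of frequency bands by permutation sampling; the band masks, the `model_score` interface, and the sample count are assumptions, not the paper's exact estimator.

```python
import numpy as np

def shapley_freq(model_score, image, bands, n_perm=100, seed=0):
    """Permutation-sampling Shapley values of frequency bands (sketch).

    `bands` is a list of disjoint boolean masks over the 2D FFT grid;
    `model_score(img)` returns the model output of interest on the
    image reconstructed from the selected frequency components.
    """
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(image)
    m = len(bands)
    phi = np.zeros(m)

    def value(mask):
        # keep only the selected frequency components, invert, score
        return model_score(np.real(np.fft.ifft2(F * mask)))

    base = value(np.zeros(F.shape))
    for _ in range(n_perm):
        order = rng.permutation(m)
        mask, prev = np.zeros(F.shape), base
        for i in order:
            mask = mask + bands[i]
            cur = value(mask)
            phi[i] += (cur - prev) / n_perm  # marginal contribution of band i
            prev = cur
    return phi
```

Averaging marginal contributions over random orderings is the standard unbiased estimator of the Shapley value, here with frequency bands playing the role of the players.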
Accept
All reviewers find the paper's approach of using Shapley values to measure the impact of the frequency spectrum on the adversarial robustness of CNNs novel, and the results interesting. Initially there were concerns about the limited evaluation of the proposed approach and the presentation. The authors' response provided additional results and a better presentation of the paper. This has largely alleviated the reviewers' main concerns, and most of them are positive about the work. One reviewer still rates the paper negatively and is not fully convinced about the effectiveness of the proposed approach. Overall the response seems positive to me, and I suggest acceptance. I encourage the authors to update the paper incorporating the reviewers' suggestions and adding the additional experimental results that came up during the discussion.
train
[ "rY-amP866H", "CMQlRB14jH", "3BA77gORc7U", "MZYdKpVHHHe", "veRxXuyPF4", "7D2-rC4c7Q-", "ieD1s7P1R2t", "KTJe_UluK-K", "wtc0B_p4mA", "w1UpqroGG_h", "GauzGsv5s0J", "9q7S9JRfvt2", "zeRnc1ZIHtS", "C9jroly0dJc", "pl446hfOgky", "j-f7VZTrfZ9", "NN-gSVh8mrF" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer mEHT,\n\nWe sincerely thank you for your positive feedback and for updating the score. We further address your concerns regarding the motivation and the additional examples(Fig. 8, 9).\n\n- **The Motivation of our framework**\n\n To inspect CNNs in frequency domain in a more fine-grained way, we ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "CMQlRB14jH", "wtc0B_p4mA", "C9jroly0dJc", "j-f7VZTrfZ9", "7D2-rC4c7Q-", "KTJe_UluK-K", "nips_2022_rQ1cNbi07Vq", "NN-gSVh8mrF", "j-f7VZTrfZ9", "pl446hfOgky", "pl446hfOgky", "C9jroly0dJc", "nips_2022_rQ1cNbi07Vq", "nips_2022_rQ1cNbi07Vq", "nips_2022_rQ1cNbi07Vq", "nips_2022_rQ1cNbi07Vq"...
nips_2022_ScwfQ7hdwyP
On the Convergence of Stochastic Multi-Objective Gradient Manipulation and Beyond
The conflicting gradients problem is one of the major bottlenecks for the effective training of machine learning models that deal with multiple objectives. To resolve this problem, various gradient manipulation techniques, such as PCGrad, MGDA, and CAGrad, have been developed, which directly alter the conflicting gradients to refined ones with alleviated or even no conflicts. However, the existing design and analysis of these techniques are mainly conducted under the full-batch gradient setting, ignoring the fact that they are primarily applied with stochastic mini-batch gradients. In this paper, we illustrate that stochastic gradient manipulation algorithms may fail to converge to Pareto optimal solutions. Firstly, we show that these different algorithms can be summarized into a unified algorithmic framework, where the descent direction is given by a composition of the gradients of the multiple objectives. Then we provide an explicit two-objective convex optimization instance to explicate the non-convergence issue under the unified framework, which suggests that the non-convergence results from determining the composite weights solely by the instantaneous stochastic gradients. To fix the non-convergence issue, we propose a novel composite weights determination scheme that exponentially averages the past calculated weights. Finally, we show the resulting new variant of stochastic gradient manipulation converges to Pareto optimal or critical solutions and yields comparable or improved empirical performance.
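A minimal sketch of the fix the abstract proposes: keep the unified composite-direction update, but exponentially average the instantaneous composite weights across iterations. `solve_weights` stands in for whichever scheme (e.g. MGDA's min-norm problem, or PCGrad/CAGrad rewritten as simplex weights) produces the instantaneous weights; the averaging coefficient is illustrative.

```python
import numpy as np

def averaged_manipulation_step(grads, w_prev, solve_weights, beta=0.9):
    """One update with exponentially averaged composite weights (sketch).

    grads: (m, d) array of stochastic gradients of the m objectives.
    solve_weights: returns instantaneous weights on the probability
    simplex from the current mini-batch gradients alone.
    """
    w_hat = solve_weights(grads)              # weights from this mini-batch only
    w = beta * w_prev + (1.0 - beta) * w_hat  # exponential average of past weights
    direction = grads.T @ w                   # composite descent direction
    return direction, w

# usage sketch: x <- x - lr * direction, carrying w across iterations
```

Because the weights are no longer determined by a single noisy mini-batch, the averaged scheme avoids the non-convergence exhibited by the two-objective counterexample.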
Accept
This paper analyzes a multi-objective minimization problem. The authors show that several methods, such as PCGrad, MGDA, and CAGrad, can fail to even converge to Pareto optimal solutions. On top of that, these methods are mostly analyzed in a batch setting. The authors therefore design a carefully crafted problem where these phenomena can be studied. By averaging past weights with a carefully designed scheme, they propose a new algorithm that provably converges to a Pareto optimal solution. I believe that the NeurIPS community will benefit from this paper, and therefore I recommend acceptance. For the camera-ready version, please incorporate the reviewers' suggestions and better explain the parts that were unclear. Thanks
train
[ "nyCOVEyNoM5", "-5I9c7Wu6FW", "8pQ4vFW1C9", "DfPT_m7OqbN", "zGcry4eNRbP", "9-WNlMQbkSR7", "LK9hY_dSNTA", "h5vzfAKDe0_", "vSrOGNrSn4x", "-e_zf9Ty0Qq", "fO4LQjkQs7H", "kRXmiOcG8C", "vBR1XXKcLND", "4it2P_aDiq4", "BCQFRRe2fdI", "9Hps-6zzbS", "xGtjR1ZW8CZ", "DCRKP5rk6jf", "HCMWt22IGI0...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers:\n\nThank you very much for reading our rebuttal and giving feedbacks!\n\nAs for Reviewer sBF7 and Reviewer eaLg, both of you have pressed the \"author rebuttal acknowledgment\" button, but did not provide more specific comments. So we're very confused about whether our response sufficiently addres...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4, 4 ]
[ "nips_2022_ScwfQ7hdwyP", "vBR1XXKcLND", "kRXmiOcG8C", "-e_zf9Ty0Qq", "LK9hY_dSNTA", "nips_2022_ScwfQ7hdwyP", "h5vzfAKDe0_", "DCRKP5rk6jf", "HCMWt22IGI0", "fO4LQjkQs7H", "xGtjR1ZW8CZ", "9Hps-6zzbS", "4it2P_aDiq4", "BCQFRRe2fdI", "nips_2022_ScwfQ7hdwyP", "nips_2022_ScwfQ7hdwyP", "nips_...
nips_2022_e5HTq2VA7mu
Revisiting Injective Attacks on Recommender Systems
Recent studies have demonstrated that recommender systems (RecSys) are vulnerable to injective attacks. Given a limited fake user budget, attackers can inject fake users with carefully designed behaviors into open platforms, making the RecSys recommend a target item to more real users for profit. In this paper, we first revisit existing attackers and reveal that they suffer from difficulty-agnostic and diversity-deficit issues. Existing attackers concentrate their efforts on difficult users who have low tendencies toward the target item, thus reducing their effectiveness. Moreover, they are incapable of making the target RecSys recommend the target item to real users in a diverse manner, because their generated fake user behaviors are dominated by large communities. To alleviate these two issues, we propose a difficulty- and diversity-aware attacker, namely DADA. We design difficulty-aware and diversity-aware objectives to enable easy users from various communities to contribute more weight when optimizing attackers. By incorporating these two objectives, the proposed attacker DADA can concentrate on easy users while also affecting a broader range of real users simultaneously, thereby boosting its effectiveness. Extensive experiments on three real-world datasets demonstrate the effectiveness of our proposed attacker.
Accept
This paper explores an interesting new attack vector against recommender systems -- one that targets recommending an item to users whose top-K lists are easier to manipulate for that item, and to a diverse set of users from different interest communities. Overall, the paper does a reasonable job of showcasing the effectiveness of the attacks. However, there are concerns both about the actual effectiveness of the attacks in practice, where defenses can use external information to detect the fake user profiles (based on their behaviors) used to conduct the attack, as well as about the ethical concerns raised by the lack of an appropriate defense strategy for the attack, if it were really effective in practice. The authors are strongly encouraged to discuss the limitations and ethical concerns raised by the attack. In summary, this is a paper that deserves to be accepted out of procedural fairness -- it has been rated well by the three reviewers. It does describe a cute, intellectually interesting attack, even if it is unlikely to be practical or useful.
train
[ "ChxgPeVJCA", "0eo-CXWyhi4", "R_sJa2pFJjN", "LN8H-gITtQf", "UiwVnK9nsus", "NTkyc1OYc2", "0SzWZVBwKG", "nnTViB4Z1K", "XScOjYonuf", "l7SEGwGGMgys", "Lw-fYgaYN53a", "orFXSyKxt3", "rkrc-6fMg4kE", "wex9BsQ9dnZ", "yifqC9_KMdK", "XBk45udVwZs", "rSQDEt8VkVF" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for their careful responses and revisions. Most of my questions have been answered. Overall I think this is a good paper worth publishing. After re-evaluation, I'd like to raise my rating accordingly.", " \nWe sincerely thank your efforts for helping us make the paper clearer and stronger!...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "rSQDEt8VkVF", "R_sJa2pFJjN", "nnTViB4Z1K", "nips_2022_e5HTq2VA7mu", "NTkyc1OYc2", "0SzWZVBwKG", "rSQDEt8VkVF", "XScOjYonuf", "l7SEGwGGMgys", "Lw-fYgaYN53a", "yifqC9_KMdK", "XBk45udVwZs", "XBk45udVwZs", "nips_2022_e5HTq2VA7mu", "nips_2022_e5HTq2VA7mu", "nips_2022_e5HTq2VA7mu", "nips_...
nips_2022_4WgqjmYacAf
Seeing Differently, Acting Similarly: Heterogeneously Observable Imitation Learning
In many real-world imitation learning tasks, the demonstrator and the learner have to act under totally different observation spaces. This situation poses significant obstacles for existing imitation learning approaches, since most of them learn policies under homogeneous observation spaces. On the other hand, previous studies under different observation spaces make the strong assumption that the two observation spaces coexist during the entire learning process. In reality, however, the coexistence of observations is limited due to the high cost of acquiring expert observations. In this work, we study this challenging problem of limited observation coexistence under heterogeneous observations: Heterogeneously Observable Imitation Learning (HOIL). We identify two underlying issues in HOIL, i.e., the dynamics mismatch and the support mismatch, and further propose the Importance Weighting with REjection (IWRE) algorithm based on importance weighting and learning with rejection to solve HOIL problems. Experimental results show that IWRE can successfully solve various HOIL tasks, including the challenging task of transforming vision-based demonstrations into random access memory (RAM)-based policies in the Atari domain, even with limited visual observations.
Reject
In this paper, the authors tackle the problem of demonstrators and learners having different observation spaces, by proposing an importance-weighted learning algorithm to bring the support of the imitator state marginal closer to that of the expert demonstrator's state marginal. All reviewers have voted to weak accept, but it would seem that this year's NeurIPS mechanism of reviewer assignment has resulted in a much higher acceptance rate than typical, with 80%+ of the AC batch having accept votes. As such, I am tasked with the tough job of rejecting some papers that reviewers were only lukewarmly excited about. It pains me to do this to a paper that reviewers have found methodologically correct. I will be recommending this paper be rejected based on calibration against other papers I'm AC'ing, mostly for the following reason: The HOIL paper assumes a problem setting where: 1. the expert's observations are more privileged than the learner's; 2. access to expert observations is high-cost and invasive; 3. importance-weighting the data to close the support gap between the learner and expert state distributions mitigates (1) and (2). However, it's not demonstrated that (1) and (2) are an actual problem in practical applications. Is sensor mismatch between human and autonomous vehicles the actual problem for learning? Do self-driving cars even utilize a policy formulation explored in the HOIL paper? How big are these state/support mismatches in practice, and couldn't they be mitigated by simpler methods? This paper is the first I've heard suggest that sensor mismatch between humans and autonomous vehicle sensors is the bottleneck for performance. If we remove this motivation from the paper and only consider the importance weighting algorithm used for removing out-of-distribution examples, then it doesn't feel quite as novel, as there are many works that propose some kind of distribution-projection step as a way to mitigate state marginal differences between teachers and learners (e.g. CQL). In fact, setting (1) is not only avoided, but actively exploited by some learning papers (Asymmetric Actor Critic, Guided Policy Search, and other ideas) to *boost* the sample efficiency of learning, with the idea that the asymmetry in state support allows experts to provide useful learning signal in a way that the learner cannot adversarially overfit easily. From reviewer pD56: ``` I didn't find the authors' response to be particularly convincing (the self-driving example doesn't seem like a good fit for what they're doing, and I think they're conflating differences in perceptual accuracy with differences in sensing hardware). ```
train
[ "jzzZK2ioOrF", "NX6q4SDSkI", "FTxbco3bxhk", "c_lMzLyyY2e", "JgO5pkCd-dF", "bSjf8P7ha7d", "8qCZQtKFNJZ", "4E9HjD-WhUy", "S2sO4CRqfoA", "bZ3EnifInd", "dYcoK-V4Fuz", "z2ZHW58JBx" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate Reviewer pD56 for the positive feedback and detailed discussions on the example of HOIL! These comments would help us improve the clarifications of our motivation. We apologize for not having clarified the differences and misunderstandings clearly enough in the original version and we see how that c...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "NX6q4SDSkI", "4E9HjD-WhUy", "c_lMzLyyY2e", "8qCZQtKFNJZ", "bSjf8P7ha7d", "z2ZHW58JBx", "dYcoK-V4Fuz", "bZ3EnifInd", "nips_2022_4WgqjmYacAf", "nips_2022_4WgqjmYacAf", "nips_2022_4WgqjmYacAf", "nips_2022_4WgqjmYacAf" ]
nips_2022_btpIaJiRx6z
Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions
Pruning is one of the predominant approaches for compressing deep neural networks (DNNs). Lately, coresets (provable data summarizations) were leveraged for pruning DNNs, adding the advantage of theoretical guarantees on the trade-off between the compression rate and the approximation error. However, coresets in this domain were either data-dependent or generated under restrictive assumptions on both the model's weights and inputs. In real-world scenarios, such assumptions are rarely satisfied, limiting the applicability of coresets. To this end, we suggest a novel and robust framework for computing such coresets under mild assumptions on the model's weights and without any assumption on the training data. The idea is to compute the importance of each neuron in each layer with respect to the output of the following layer. This is achieved by an elegant combination of the L\"{o}wner ellipsoid and the Caratheodory theorem. Our method is simultaneously data-independent, applicable to various networks and datasets (due to the simplified assumptions), and theoretically supported. Experimental results show that our method outperforms existing coreset-based neural pruning approaches across a wide range of networks and datasets. For example, our method achieved a $62\%$ compression rate on ResNet50 on ImageNet with a $1.09\%$ drop in accuracy.
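For context, the standard weighted coreset guarantee behind the phrase "provable data summarizations" reads as follows; the paper instantiates a guarantee of this flavor at the level of neurons rather than data points.

```latex
(1-\varepsilon)\sum_{p \in P} f(p, x)
\;\le\; \sum_{c \in C} u(c)\, f(c, x)
\;\le\; (1+\varepsilon)\sum_{p \in P} f(p, x)
\qquad \text{for every query } x,
```

where $C \subseteq P$ is the coreset and $u$ its weight function, so the weighted subset approximates the full sum uniformly over all queries.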
Accept
The paper proposes a method for pruning neural network weights at each layer. The authors have addressed the concerns from the reviewers, and the reviewers raised their scores. Although the reviewer somewhat agrees with the authors that the standard practice in the network pruning literature is to report a very limited set of results, the reviewer expects the authors to try a bit harder to expand the results for the final revision. Please incorporate the suggestions from the reviewers' detailed reviews and revise the final version of the paper accordingly.
train
[ "_Zb8Pdpx4TP", "DCLD0Bg0UFi", "wDxSmzLQCr-", "_DXsNUibL6Z", "vzlYMrVzMtn", "ahF4MWjo3w", "xLHmx9Zui8", "JRFY9B0wJF", "qjNuaZ_SmFQ", "HTGiq-m4D2F", "8-vioct-PFB", "vKhJupuGbFxy", "1zaFgPkC9v5", "3gWe9cUfWyM", "T12L_1zNAW", "PqDQsmRhsBs", "t9P9fYWOiCF", "te50ornWtpC", "r-oRe8zdtBK"...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " Dear Reviewers, ACs, and SACs, \n\nWe would like to emphasize our appreciation for the openly communicated review process, and to the reviewers who made this process very valuable and beneficial by communicating with us, asking and answering questions, suggesting many helpful suggestions, and raising their scores...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "nips_2022_btpIaJiRx6z", "_DXsNUibL6Z", "te50ornWtpC", "a8lNVTaD5CP", "T12L_1zNAW", "JRFY9B0wJF", "qjNuaZ_SmFQ", "3gWe9cUfWyM", "3gWe9cUfWyM", "T12L_1zNAW", "nips_2022_btpIaJiRx6z", "PqDQsmRhsBs", "-zYb6ig2bL8", "-zYb6ig2bL8", "P0SOXo9UNB", "P0SOXo9UNB", "Yzf1iJsQ1vk", "Yzf1iJsQ1vk...
nips_2022_uNYqDfPEDD8
The Policy-gradient Placement and Generative Routing Neural Networks for Chip Design
Placement and routing are two critical yet time-consuming steps of chip design in modern VLSI systems. Distinct from traditional heuristic solvers, this paper on one hand proposes an RL-based model for mixed-size macro placement, which differs from existing learning-based placers that often handle macros via a coarse grid-based mask, while the standard cells are placed via gradient-based GPU acceleration. On the other hand, a one-shot conditional generative routing model, composed of a specially designed input-size-adapting generator and a bi-discriminator, is devised to perform one-shot routing to the pins within each net, and the order of nets to route is adaptively learned. Combining these techniques, we develop a flexible and efficient neural pipeline, which, to the best of our knowledge, is the first joint placement and routing network that does not involve any traditional heuristic solver. Experimental results on chip design benchmarks showcase the effectiveness of our approach; the code will be made publicly available.
Accept
This paper proposes a neural network pipeline for chip design that utilizes reinforcement learning for mixed-size macro placement and a conditional generative routing network that can perform routing in one shot. While the reviewers had some reservations regarding baselines, sensitivity to hyperparameters, and the resulting wirelength compared to DeepPlace, these have largely been addressed in the rebuttals and discussion. It was generally felt that this approach is quite novel and effective. It would be good to put the extra results in the final draft.
train
[ "zkPSLy9ldk", "nWddwLBANRb", "pyvTbxNdSP", "429Ymjotz40", "o-XR7JBQ-X", "TJZXlniI94x", "BkThJ8NXAS", "8byBtZwSVvj", "5wf-WciuLMz", "ME5hNUqgsFK", "LIWpamvbozF", "fgn7hLycA6C", "YC90IbETJJZ", "6xpQ6uadwOQ", "4UBjEZkxvJ0", "SWLKzYe_Ww2" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your valuable and constructive comments. We also agree with your current two points and we have tried to polish the logic of the paper in the newly submitted version. We assure that in our new version routing is always discussed after placement and finally joint placement and routing pipeline is discus...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 3 ]
[ "nWddwLBANRb", "LIWpamvbozF", "nips_2022_uNYqDfPEDD8", "YC90IbETJJZ", "TJZXlniI94x", "SWLKzYe_Ww2", "4UBjEZkxvJ0", "5wf-WciuLMz", "ME5hNUqgsFK", "6xpQ6uadwOQ", "fgn7hLycA6C", "YC90IbETJJZ", "nips_2022_uNYqDfPEDD8", "nips_2022_uNYqDfPEDD8", "nips_2022_uNYqDfPEDD8", "nips_2022_uNYqDfPEDD...
nips_2022_H1FQgq2QbV1
Distributed Learning of Conditional Quantiles in the Reproducing Kernel Hilbert Space
We study distributed learning of nonparametric conditional quantiles with Tikhonov regularization in a reproducing kernel Hilbert space (RKHS). Although distributed parametric quantile regression has been investigated in several existing works, the current nonparametric quantile setting poses different challenges and is still unexplored. The difficulty lies in the lack of an explicit bias-variance decomposition in the quantile RKHS setting, such as the one available in regularized least squares regression. For the simple divide-and-conquer approach that partitions the data set into multiple parts and then takes an arithmetic average of the individual outputs, we establish the risk bounds using a novel second-order empirical process for the quantile risk.
Accept
The paper investigates distributed quantile regression for RKHS-based estimators. Unlike in the usually considered case of distributed least squares learning, the quantile setup is more challenging from a technical perspective. Clearly, the strength of this paper is to tackle this challenge. The weakness of the paper is that it is technically dense and might not be suited for a broader audience, as one of the reviews also suggests. For researchers more familiar with the overall material, however, the paper is nicely written, and for this reason I believe the strength outweighs the weakness. But in any case, a more gentle description of the assumptions plus examples would improve the paper. Finally, from my personal reading I was surprised that the Bernoulli 2011 paper by Steinwart and Christmann on RKHS-based quantile regression was not mentioned. In particular, a comparison of the results seems to be necessary. In summary, this is a technically sound paper that should be accepted provided the competition is not too fierce.
train
[ "0MEp3oZV3oL", "FRaD_oWhSid", "H1t2SrgyRPG", "rZI4Hy_6dsI", "oCMVAjCGJKdK", "nv_1xVX6B1", "UC7vNFFTvu92", "owAgXQQ2KNC", "e5lqvgQTrrB", "lSQs_XWDknn", "DLj9EmH5-6B" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I understand that the weighted least square loss may not be directly used to compute GCV based on data, but it can still be used as a theoretical device to derive the specific form and prove the optimality of a distributed version of quantile-GCV. This could be interesting.", " Thanks again for your comments. A...
[ -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 1 ]
[ "FRaD_oWhSid", "H1t2SrgyRPG", "nv_1xVX6B1", "DLj9EmH5-6B", "lSQs_XWDknn", "e5lqvgQTrrB", "owAgXQQ2KNC", "nips_2022_H1FQgq2QbV1", "nips_2022_H1FQgq2QbV1", "nips_2022_H1FQgq2QbV1", "nips_2022_H1FQgq2QbV1" ]
nips_2022_8OH6t0YQGPJ
Modeling the Machine Learning Multiverse
Amid mounting concern about the reliability and credibility of machine learning research, we present a principled framework for making robust and generalizable claims: the multiverse analysis. Our framework builds upon the multiverse analysis introduced in response to psychology's own reproducibility crisis. To efficiently explore high-dimensional and often continuous ML search spaces, we model the multiverse with a Gaussian Process surrogate and apply Bayesian experimental design. Our framework is designed to facilitate drawing robust scientific conclusions about model performance, and thus our approach focuses on exploration rather than conventional optimization. In the first of two case studies, we investigate disputed claims about the relative merit of adaptive optimizers. Second, we synthesize conflicting research on the effect of learning rate on the large batch training generalization gap. For the machine learning community, the multiverse analysis is a simple and effective technique for identifying robust claims, for increasing transparency, and a step toward improved reproducibility.
Accept
This paper suggests a novel way to conduct machine learning empirical research and report the results. While ablation studies became a common practice in analyzing the contributions of different components in an ML system, this paper takes it much further by suggesting a way to explore the entire hyper-parameter space. While the components used in the work (Gaussian Processes, Active Learning, …) are not new, the novelty of this work is in combining them to address a timely question. This paper has the potential to contribute to the way ML research is conducted and reported.
train
[ "V5hwPWFwvHj", "vH-hEo9wSg2", "grYEV-dm4jU", "ev30gw6GN3", "sOI9PS-aCQ8", "GOoPgD05sB", "m-aruzgch22", "q2vjnz1w9m", "uEtmDj430-m", "Gg0_hPI8yF0", "9BqNInpAF7S", "XLjcdH_i-gy", "jmncEMMr7LN" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Makes sense, though if your interpretation is correct, should we expect the reported gap between SGD and Adam to fit within the range of the surrogate model's posterior uncertainty? Not critical, but would be another sanity check on that claim. ", " Thanks for your response. If figure 2.a. is still causing conf...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "vH-hEo9wSg2", "ev30gw6GN3", "m-aruzgch22", "GOoPgD05sB", "nips_2022_8OH6t0YQGPJ", "Gg0_hPI8yF0", "9BqNInpAF7S", "XLjcdH_i-gy", "jmncEMMr7LN", "nips_2022_8OH6t0YQGPJ", "nips_2022_8OH6t0YQGPJ", "nips_2022_8OH6t0YQGPJ", "nips_2022_8OH6t0YQGPJ" ]
nips_2022_2uAaGwlP_V
DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps
Diffusion probabilistic models (DPMs) are emerging powerful generative models. Despite their high-quality generation performance, DPMs still suffer from slow sampling, as they generally need hundreds or thousands of sequential function evaluations (steps) of large neural networks to draw a sample. Sampling from DPMs can be viewed alternatively as solving the corresponding diffusion ordinary differential equations (ODEs). In this work, we propose an exact formulation of the solution of diffusion ODEs. The formulation analytically computes the linear part of the solution, rather than leaving all terms to black-box ODE solvers as adopted in previous works. By applying a change of variables, the solution can be equivalently simplified to an exponentially weighted integral of the neural network. Based on our formulation, we propose DPM-Solver, a fast dedicated high-order solver for diffusion ODEs with a convergence order guarantee. DPM-Solver is suitable for both discrete-time and continuous-time DPMs without any further training. Experimental results show that DPM-Solver can generate high-quality samples in only 10 to 20 function evaluations on various datasets. We achieve 4.70 FID in 10 function evaluations and 2.87 FID in 20 function evaluations on the CIFAR10 dataset, and a 4~16x speedup compared with previous state-of-the-art training-free samplers on various datasets.
Accept
The paper is very well written, provides a very useful recipe for sampling from diffusion models faster, and combines that with extensive experiments. Good work!
val
[ "Scs_o5cA9tB", "QM6zf2eKGse", "iHnUt_Mc15w", "IZqqE0_Gsy", "y6g60niSpO", "Lq4dNEWbj1z", "DRkCmQnVJWS", "dFqVsEBnrM", "dh4g1kP8D4I", "zAAXISJfLDd", "4k6Sc_zXRms", "s4QZTpc2meb", "6GFI5mf1VxC" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We'd like to thank reviewer DD9K for the valuable response and the strong acceptance of our work. If our response has addressed the concerns and brings new insights to the reviewer, we will highly appreciate it if the reviewer considers raising the score.\n\nIn addition, please let us know if you have any other q...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "dFqVsEBnrM", "iHnUt_Mc15w", "y6g60niSpO", "s4QZTpc2meb", "6GFI5mf1VxC", "6GFI5mf1VxC", "s4QZTpc2meb", "4k6Sc_zXRms", "zAAXISJfLDd", "nips_2022_2uAaGwlP_V", "nips_2022_2uAaGwlP_V", "nips_2022_2uAaGwlP_V", "nips_2022_2uAaGwlP_V" ]
nips_2022_UBqGF-tW6A2
Bezier Gaussian Processes for Tall and Wide Data
Modern approximations to Gaussian processes are suitable for ``tall data'', with a cost that scales well in the number of observations, but they under-perform on ``wide data'', scaling poorly in the number of input features. That is, as the number of input features grows, good predictive performance requires the number of summarising variables, and their associated cost, to grow rapidly. We introduce a kernel that allows the number of summarising variables to grow exponentially with the number of input features, but requires only linear cost in both the number of observations and the number of input features. This scaling is achieved through our introduction of the ``Bezier buttress'', which allows approximate inference without computing matrix inverses or determinants. We show that our kernel has close similarities to some of the most used kernels in Gaussian process regression, and empirically demonstrate the kernel's ability to scale to both tall and wide datasets.
Accept
The submission considers Bernstein polynomial-based kernels for large-scale Gaussian process regression. To deal with high dimensional inputs, two approximations were proposed: a parameterisation to amortise/share parameters between the random control points of the polynomials and ensembling over randomly permuted orders. In addition, a prior adjustment to the polynomial coefficients was introduced to retain the desirable properties of the variance in the prior. The reviewers raise some concerns about these approximations in terms of approximation quality and impact on predictive performance, and the overfitting issue. The authors acknowledged some of these concerns in the text and during the rebuttal. The AC also notes that the "behaviour foundational to practitioners" [Line 250] is not illustrated well, and perhaps a comparison to non-Bernstein polynomial kernels could be provided to demonstrate the differences/benefits. Overall, all reviewers think the proposed method is interesting and offers a fairly novel approach to scalable GP regression with high dimensional inputs (with some limitations that require further experiments/analyses), and thus the paper should be accepted. We hope the comments/discussions here are useful for the next iteration.
train
[ "COQGMsNkGzM", "2IlXH1yYbhT", "INTJfggeimT", "_T2SqTj-fnc", "AK5y2maF3A", "YjEK35rJkQe", "bQCNTI2b9oR", "BUSfYTrUfp", "dSkx2heLThq", "pOjDvRqbqz" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I want to thank the authors of the paper for their effort to clarify my doubts and questions, I am happy with the different responses. I am just curious to see how the paper looks after correcting all the typos and the additional comments regarding future works and clarifications.\n\nIn general I appreciate the e...
[ -1, -1, -1, -1, -1, -1, 5, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "_T2SqTj-fnc", "INTJfggeimT", "pOjDvRqbqz", "dSkx2heLThq", "BUSfYTrUfp", "bQCNTI2b9oR", "nips_2022_UBqGF-tW6A2", "nips_2022_UBqGF-tW6A2", "nips_2022_UBqGF-tW6A2", "nips_2022_UBqGF-tW6A2" ]
nips_2022_1W8UwXAQubL
Multi-Agent Reinforcement Learning is a Sequence Modeling Problem
Large sequence models (SM) such as the GPT series and BERT have displayed outstanding performance and generalization capabilities in natural language processing, vision, and recently reinforcement learning. A natural follow-up question is how to abstract multi-agent decision making as a sequence modeling problem as well and benefit from the prosperous development of SMs. In this paper, we introduce a novel architecture named Multi-Agent Transformer (MAT) that effectively casts cooperative multi-agent reinforcement learning (MARL) into an SM problem wherein the objective is to map agents' observation sequences to agents' optimal action sequences. Our goal is to build the bridge between MARL and SMs so that the modeling power of modern sequence models can be unleashed for MARL. Central to our MAT is an encoder-decoder architecture which leverages the multi-agent advantage decomposition theorem to transform the joint policy search problem into a sequential decision making process; this renders only linear time complexity for multi-agent problems and, most importantly, endows MAT with a monotonic performance improvement guarantee. Unlike prior arts such as Decision Transformer, which fit only pre-collected offline data, MAT is trained by online trial and error from the environment in an on-policy fashion. To validate MAT, we conduct extensive experiments on StarCraftII, Multi-Agent MuJoCo, Dexterous Hands Manipulation, and Google Research Football benchmarks. Results demonstrate that MAT achieves superior performance and data efficiency compared to strong baselines including MAPPO and HAPPO. Furthermore, we demonstrate that MAT is an excellent few-shot learner on unseen tasks regardless of changes in the number of agents. See our project page at https://sites.google.com/view/multi-agent-transformer.
Accept
The reviewers are largely in consensus that this is a well-executed application of a highly relevant technology (transformers) to an interesting and challenging problem, with pleasing results. I believe the community will be quite interested to see this development presented at NeurIPS. Note: I agree with reviewer MEiB that important prior work also using transformers needs careful comparison, but the authors' rebuttals gave confidence that this will be done appropriately. Please do be very diligent on this point in the final version.
train
[ "ONsHCYKQXcF", "5KufvynZ16", "tFwtI9pT3-W", "JCScy1Z-f2p", "ya1Hk1ACuEo", "Xx0k05sOUtp", "YZBwc4GORiY", "GyoYTGyTgTH", "4UaHMltxwiM", "MH0Q7Wfz7KX" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to say thank you for your deep engagement with our work, especially the direction you have suggested helps us make the paper more understandable. Thanks a lot.\n\nIndeed, both UPDeT and MAT are trying to model the interrelationships between agents using Transformers. However, while UPDeT implements ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "5KufvynZ16", "ya1Hk1ACuEo", "Xx0k05sOUtp", "MH0Q7Wfz7KX", "4UaHMltxwiM", "YZBwc4GORiY", "GyoYTGyTgTH", "nips_2022_1W8UwXAQubL", "nips_2022_1W8UwXAQubL", "nips_2022_1W8UwXAQubL" ]
nips_2022_ofwkaIWFqqv
GRASP: Navigating Retrosynthetic Planning with Goal-driven Policy
Retrosynthetic planning occupies a crucial position in synthetic chemistry and, accordingly, drug discovery, which aims to find synthetic pathways of a target molecule through a sequential decision-making process on a set of feasible reactions. While the majority of recent works focus on the prediction of feasible reactions at each step, there have been limited attempts toward improving the sequential decision-making policy. Existing strategies rely on either the expensive and high-variance value estimation by online rollout, or a fixed value-estimation neural network pre-trained with simulated pathways of limited diversity and no negative feedback. Besides, how to return multiple candidate pathways that are not only diverse but also desirable for chemists (e.g., affordable building block materials) remains an open challenge. To this end, we propose a Goal-dRiven Actor-critic retroSynthetic Planning (GRASP) framework, where we identify the policy that performs goal-driven retrosynthesis navigation toward a user-demanded objective. Our experiments on the benchmark Pistachio dataset and a chemists-designed dataset demonstrate that the framework outperforms state-of-the-art approaches by up to 32.2% on search efficiency and 5.6% on quality. Remarkably, our user studies show that GRASP successfully plans pathways that accomplish a prescribed goal (e.g., designated building block materials).
Accept
This work proposes a goal-driven actor-critic method for finding pathways with a specific prescribed goal, such as building block materials. Various aspects, like self-generation of data and hindsight goal relabeling, while not novel by themselves, have found meaningful application to the chemical reaction tasks, and the improved results have been appreciated by all reviewers. The authors are encouraged to incorporate reviewer feedback and also to discuss the concurrent submissions mentioned in the reviews for the sake of completeness in the camera-ready version.
train
[ "y5xtESCoD-1", "kkutK5pdcZj", "gXNnRUenp8", "D5XtTd-trAy", "gHS81oe1MeC", "hvsRV-9WZ-t", "Mip8te5oaJx", "M957sth6Uwl" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for letting us know your remaining concern on broader impacts. In the revised manuscript, we have further elaborated on the broader impacts of GRASP in Line 345-351.", " Thanks for the authors' well-written rebuttal. The authors almost address all my questions and concerns. However, I still haven't se...
[ -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 1 ]
[ "kkutK5pdcZj", "gXNnRUenp8", "Mip8te5oaJx", "hvsRV-9WZ-t", "M957sth6Uwl", "nips_2022_ofwkaIWFqqv", "nips_2022_ofwkaIWFqqv", "nips_2022_ofwkaIWFqqv" ]
nips_2022_agJEk7FhvKL
Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional MoEs
To build an artificial neural network like the biological intelligence system, recent works have unified numerous tasks into a generalist model, which can process various tasks with shared parameters and does not have any task-specific modules. While generalist models achieve promising results on various benchmarks, they suffer performance degradation on some tasks compared with task-specialized models. In this work, we find that interference among different tasks and modalities is the main factor behind this phenomenon. To mitigate such interference, we introduce Conditional Mixture-of-Experts (Conditional MoEs) to generalist models. Routing strategies under different levels of conditions are proposed to take both the training/inference cost and generalization ability into account. By incorporating the proposed Conditional MoEs, the recently proposed generalist model Uni-Perceiver can effectively mitigate the interference across tasks and modalities, and achieves state-of-the-art results on a series of downstream tasks via prompt tuning on 1% of downstream data. Moreover, the introduction of Conditional MoEs still preserves the generalization ability of generalist models to conduct zero-shot inference on new tasks, e.g., video-text retrieval and video captioning. Code and pre-trained generalist models are publicly released at https://github.com/fundamentalvision/Uni-Perceiver.
Accept
This paper proposes a generalist model, Uni-Perceiver, using conditional mixtures of experts for processing multiple tasks. Most reviewers acknowledged the technical novelty of this work in CMoE and attribute routing, as well as the extensive experiments. Reviewer KJqa questions the unclear effect of task interference on MTL. The authors reply well from a gradient perspective, and experiments show task interference can be reduced. The other concerns about clarity in paper presentation and results are also addressed. The meta-reviewers thus recommend accepting it.
train
[ "oJT873IJ6Ej", "ToUqSRowvy4", "lRsUxovde", "boHxmtI5zc-", "EpQ9oZlmgq", "yGPXE87DkMt", "OwkX5N01Emb", "XBAEMbgJjju", "vJ8c4KAjsyo", "4O69Xmlb9U" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ---\n\n> __Q4: The proposed model does not outperform other methods. What is the use-case for this model?__\n\nIn fact, compared with task-specific models, we reduce the model development costs for different tasks. Compared to vanilla generalist models, we improve the training efficiency.\n\nBefore generalist mod...
[ -1, -1, -1, -1, -1, -1, 6, 6, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 4, 3 ]
[ "ToUqSRowvy4", "4O69Xmlb9U", "vJ8c4KAjsyo", "yGPXE87DkMt", "OwkX5N01Emb", "XBAEMbgJjju", "nips_2022_agJEk7FhvKL", "nips_2022_agJEk7FhvKL", "nips_2022_agJEk7FhvKL", "nips_2022_agJEk7FhvKL" ]
nips_2022_TEmAR013vK
Efficient Architecture Search for Diverse Tasks
While neural architecture search (NAS) has enabled automated machine learning (AutoML) for well-researched areas, its application to tasks beyond computer vision is still under-explored. As less-studied domains are precisely those where we expect AutoML to have the greatest impact, in this work we study NAS for efficiently solving diverse problems. Seeking an approach that is fast, simple, and broadly applicable, we fix a standard convolutional network (CNN) topology and propose to search for the right kernel sizes and dilations its operations should take on. This dramatically expands the model's capacity to extract features at multiple resolutions for different types of data while only requiring search over the operation space. To overcome the efficiency challenges of naive weight-sharing in this search space, we introduce DASH, a differentiable NAS algorithm that computes the mixture-of-operations using the Fourier diagonalization of convolution, achieving both a better asymptotic complexity and an up-to-10x search time speedup in practice. We evaluate DASH on ten tasks spanning a variety of application domains such as PDE solving, protein folding, and heart disease detection. DASH outperforms state-of-the-art AutoML methods in aggregate, attaining the best-known automated performance on seven tasks. Meanwhile, on six of the ten tasks, the combined search and retraining time is less than 2x slower than simply training a CNN backbone that is far less accurate.
Accept
This paper proposes a significantly improved neural architecture discovery method specialized for convolutional networks. The idea is to train a large super-network, cache common computation, and select the best-performing subnetwork. It also presents several useful ideas to accelerate the training and exploration of convolutional operators at a large scale. While the work is a combination of somewhat well-known approaches, the overall approach is novel, interesting, and well-motivated; however, its application domain is somewhat limited. Still, given the convincing experimental results, good execution, and significant gains in training time compared to competing approaches, I propose that this paper be accepted for NeurIPS 2022.
val
[ "AKTtmvkF83a", "NkItQtNbpgZ", "3pDhrZ6rx_R", "ytwlFnMuFjp", "B-BL127pDYN", "UlgXpUYcJh0", "wyQm1VouiZgo", "KEnLWNK2cIT", "G_hcfKJ5zBr", "nGad9LTup11", "oqrNddptBMG", "IrGAjkj7W72", "_7QpW6QNnC7", "BzM2RMGFrpJ", "tdH3TgIZgdD" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have updated the DASH-ConvNeXt performance results on ImageNet in our general response. Since the discussion period is drawing to a close, we hope that our responses have provided you with enough information to address any concerns you may have. Please let us know if there are any further clarifications that w...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "KEnLWNK2cIT", "BzM2RMGFrpJ", "_7QpW6QNnC7", "IrGAjkj7W72", "_7QpW6QNnC7", "BzM2RMGFrpJ", "KEnLWNK2cIT", "G_hcfKJ5zBr", "tdH3TgIZgdD", "BzM2RMGFrpJ", "_7QpW6QNnC7", "nips_2022_TEmAR013vK", "nips_2022_TEmAR013vK", "nips_2022_TEmAR013vK", "nips_2022_TEmAR013vK" ]
nips_2022_sMezXGG5So
NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification
Graph neural networks have been extensively studied for learning with inter-connected data. Despite this, recent evidence has revealed GNNs' deficiencies related to over-squashing, heterophily, handling long-range dependencies, edge incompleteness and, particularly, the absence of graphs altogether. While a plausible solution is to learn new adaptive topology for message passing, issues concerning quadratic complexity hinder simultaneous guarantees for scalability and precision in large networks. In this paper, we introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes, as an important building block for a new class of Transformer networks for node classification on large graphs, dubbed NodeFormer. Specifically, the efficient computation is enabled by a kernelized Gumbel-Softmax operator that reduces the algorithmic complexity to linearity w.r.t. node numbers for learning latent graph structures from large, potentially fully-connected graphs in a differentiable manner. We also provide accompanying theory as justification for our design. Extensive experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs (with up to 2M nodes) and graph-enhanced applications (e.g., image classification) where input graphs are missing. The codes are available at https://github.com/qitianwu/NodeFormer.
Accept
The paper presents an approximate way to perform all-pair message passing within the context of GNNs. The paper's main contribution is a series of extensive empirical results together with theoretical justification for the approach. All the reviewers liked the paper and noted the impressive scalability of the approach. Some technical questions/concerns were also addressed post-rebuttal. This paper is recommended for acceptance.
train
[ "aBNztOV1IW5", "S-upD7jTbzXJ", "xKKvi9bEDf6", "Wycx4L2lscI", "v5zwy3CvMu6", "0IjiNaJTGf6", "_QaVtB3upvA", "aZcnIftimLK", "S5c8e5lE9qM", "KkdVSeXchF", "3VpvrjkZnbk", "hZ5pM96kpOk", "cJtLS5ok9Q8", "c8Ht5EOnttr" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors' further clarification. This makes sense to me now. I will raise my score to 7.", " Thank you for the valuable feedback and pointing out the part we can further improve.\n\nWe have modified the description under Eqn. 12 in a more precise way in the newly uploaded paper (see the part color...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "S-upD7jTbzXJ", "xKKvi9bEDf6", "_QaVtB3upvA", "0IjiNaJTGf6", "nips_2022_sMezXGG5So", "c8Ht5EOnttr", "aZcnIftimLK", "cJtLS5ok9Q8", "hZ5pM96kpOk", "3VpvrjkZnbk", "nips_2022_sMezXGG5So", "nips_2022_sMezXGG5So", "nips_2022_sMezXGG5So", "nips_2022_sMezXGG5So" ]
nips_2022_uIXyp4Ip9fG
Active Surrogate Estimators: An Active Learning Approach to Label-Efficient Model Evaluation
We propose Active Surrogate Estimators (ASEs), a new method for label-efficient model evaluation. Evaluating model performance is a challenging and important problem when labels are expensive. ASEs address this active testing problem using a surrogate-based estimation approach that interpolates the errors of points with unknown labels, rather than forming a Monte Carlo estimator. ASEs actively learn the underlying surrogate, and we propose a novel acquisition strategy, XWED, that tailors this learning to the final estimation task. We find that ASEs offer greater label-efficiency than the current state-of-the-art when applied to challenging model evaluation problems for deep neural networks.
Accept
This paper tackles the active testing problem and proposes a novel estimator (ASE) of the expected loss based on a surrogate function, together with a novel acquisition function (XWED) to minimize the error. SOTA performance is reported, and ablation studies are conducted. Reviewers raised concerns about novelty, high dependence on the quality of the surrogate, and missing baselines, which the authors addressed well, convincing the reviewers. Although the proposed methods are small modifications of existing methods, by using the surrogate they significantly improve on the existing methods in the experiments.
train
[ "lLUPzLUhx7P", "fYC04K9yS", "mPGrFMUYln0", "b6_KKXxXs7S", "fRTCsZzkZFf", "5uxeS_rsXp4", "dCnlxHQRXhI", "gCby1lcs7G", "zkJejRfGUD1", "SrYw2EGGo52", "GzZCe0MXi9I", "leGyYhcvvDm", "qOpvOFYeB63", "Z3UpeDcg03", "2x5sb_qBon" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for taking the time to read our rebuttal.", " Thanks for the clarifications.", " Thanks for your reply. \nMany issues are well-addressed.\nMaybe in theoretical aspects, there can be many issues. However, this paper concerns the algorithm.\nAlso, I believe that the architecture well-cooperated with $\\p...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "fYC04K9yS", "gCby1lcs7G", "dCnlxHQRXhI", "fRTCsZzkZFf", "zkJejRfGUD1", "2x5sb_qBon", "Z3UpeDcg03", "qOpvOFYeB63", "SrYw2EGGo52", "leGyYhcvvDm", "nips_2022_uIXyp4Ip9fG", "nips_2022_uIXyp4Ip9fG", "nips_2022_uIXyp4Ip9fG", "nips_2022_uIXyp4Ip9fG", "nips_2022_uIXyp4Ip9fG" ]
nips_2022_XQu7UFSbzd2
Towards Out-of-Distribution Sequential Event Prediction: A Causal Treatment
The goal of sequential event prediction is to estimate the next event based on a sequence of historical events, with applications to sequential recommendation, user behavior analysis and clinical treatment. In practice, next-event prediction models are trained with sequential data collected at one time and need to generalize to newly arrived sequences in the remote future, which requires models to handle temporal distribution shift from training to testing. In this paper, we first take a data-generating perspective to reveal a negative result that existing approaches with maximum likelihood estimation would fail under distribution shift due to the latent context confounder, i.e., the common cause of the historical events and the next event. Then we devise a new learning objective based on backdoor adjustment and further harness variational inference to make it tractable for sequence learning problems. On top of that, we propose a framework with hierarchical branching structures for learning context-specific representations. Comprehensive experiments on diverse tasks (e.g., sequential recommendation) demonstrate the effectiveness, applicability and scalability of our method with various off-the-shelf models as backbones.
Accept
This paper proposes a method for predicting the next event given sequential data. For prediction under distribution shift, the proposed method uses a backdoor adjustment, variational inference of the latent context, and a hierarchical branching structure. The proposed method, which combines techniques from different fields, is interesting. In particular, the use of causal methods for the distribution shift problem in temporal event prediction is novel. The experimental results demonstrate the effectiveness of the proposed method well, both quantitatively and qualitatively. The paper should be improved according to the reviewers' comments, e.g., by clarifying the motivation in real-world applications.
test
[ "ODY472ud7bE", "dL9B1py0XVo", "aJ2_jTMTH0n", "vWo-irvTnw8", "cLWmYz95_P", "oNh4wONHQf", "MclOV6jfnq2", "P_ONh-cxGd", "nbtKOn7cXb_", "1w9yet0P6_I", "7OocOMGru9l", "no7wm4zIwf1", "7uOnweEMH5O", "zICLK59ku1", "4iBSt71IcBr", "l3sB-fG7dcY", "vVPIloiWVgM", "F7dl77V9uEf", "jVBvA33OOv", ...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the detailed replies to reviewer comments. The clarification on the role of context was helpful for me to better understand the contribution.", " I have read the reply to my questions as well as other reviews. I will keep the same score (6).", " Dear Reviewers,\n\nThanks again for your...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 3 ]
[ "1w9yet0P6_I", "vWo-irvTnw8", "nips_2022_XQu7UFSbzd2", "ocJpNSIVloP", "jVBvA33OOv", "F7dl77V9uEf", "vVPIloiWVgM", "nbtKOn7cXb_", "ocJpNSIVloP", "jVBvA33OOv", "no7wm4zIwf1", "F7dl77V9uEf", "zICLK59ku1", "4iBSt71IcBr", "vVPIloiWVgM", "nips_2022_XQu7UFSbzd2", "nips_2022_XQu7UFSbzd2", ...
nips_2022_pF5aR69c9c
Learning to Constrain Policy Optimization with Virtual Trust Region
We introduce a constrained optimization method for policy-gradient reinforcement learning, which uses two trust regions to regulate each policy update. In addition to using proximity to a single old policy as the first trust region, as done by prior works, we propose forming a second trust region by constructing another, virtual policy that represents a wide range of past policies. We then constrain the new policy to stay close to the virtual policy, which is beneficial if the old policy performs poorly. We propose a mechanism to automatically build the virtual policy from a memory buffer of past policies, providing a new capability for dynamically selecting appropriate trust regions during the optimization process. Our proposed method, dubbed Memory-Constrained Policy Optimization (MCPO), is examined in diverse environments, including robotic locomotion control, navigation with sparse rewards, and Atari games, consistently demonstrating competitive performance against recent on-policy constrained policy gradient methods.
Accept
The paper addresses constrained policy optimization through the addition of a virtual trust region trained with an attention mechanism. Evaluations show that the proposed method outperforms other on-policy methods most of the time. The reviewers agree that the method is novel and effective, and the evaluations are extensive. The revised paper added ablation studies that show the effects of the different components of the method, although some reviewers would like to see a deeper analysis.
train
[ "cnrj2H6K1Eo", "9dOSzR36me", "lDo7DZF29tM", "hhJKwe-OJSR", "wFBOgK0VThg", "iXd6xLBcAKB", "u3Di-ycLGQ4", "RblQ74ORr9c", "Uf7Iov9L6VM", "dju5bURgXIJ", "1nVP8hv0DfS", "fvjnS9pi3gD" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed response to my comments. After reading them as well as the discussions with the other reviewers, I still feel that a more technical understanding of why the method works or should work is needed, which I believe is also noted by the other reviewers. The problem is that although the int...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "RblQ74ORr9c", "lDo7DZF29tM", "wFBOgK0VThg", "nips_2022_pF5aR69c9c", "fvjnS9pi3gD", "1nVP8hv0DfS", "1nVP8hv0DfS", "dju5bURgXIJ", "nips_2022_pF5aR69c9c", "nips_2022_pF5aR69c9c", "nips_2022_pF5aR69c9c", "nips_2022_pF5aR69c9c" ]
nips_2022_7WGNT3MHyBm
Geometric Knowledge Distillation: Topology Compression for Graph Neural Networks
We study a new paradigm of knowledge transfer that aims at encoding graph topological information into graph neural networks (GNNs) by distilling knowledge from a teacher GNN model trained on a complete graph to a student GNN model operating on a smaller or sparser graph. To this end, we revisit the connection between thermodynamics and the behavior of GNNs, based on which we propose the Neural Heat Kernel (NHK) to encapsulate the geometric property of the underlying manifold concerning the architecture of GNNs. A fundamental and principled solution is derived by aligning NHKs on the teacher and student models, dubbed Geometric Knowledge Distillation. We develop non-parametric and parametric instantiations and demonstrate their efficacy in various experimental settings for knowledge distillation regarding different types of privileged topological information and teacher-student schemes.
Accept
In this paper the authors propose the first distillation method for GNNs that promotes the transfer of knowledge from a large graph to a smaller graph. The authors propose an extension of the Neural Heat Kernel to encode information in the GNN and use it to propose Geometric Knowledge Distillation. Numerical experiments are done with parametric and non-parametric versions of the method and show its value on several tasks such as node classification, as well as more classical distillation tasks such as model compression and self-distillation. The contribution was appreciated, and all reviewers agree about the novelty of the connection between the heat kernel and GNNs, and its possible use in other applications in the future. The experiments are interesting and often show a large gain in performance w.r.t. classical distillation methods. Some concerns were raised by reviewers: numerical complexity, limited practical interest, and a lack of discussion about limitations. Most of the concerns were answered quite well by the authors during the discussion, which was much appreciated. The detailed tables with numerical complexity and the limitations discussion, now in the appendix, are very interesting. So the consensus among reviewers was that the paper deserves publication, but the numerical complexity and limitations discussion MUST be integrated into the main paper instead of the supplementary material.
train
[ "A-yK98_0UQe", "qO65b4OnH_h", "oXVOKYB4OteH", "-X9ip1T20b", "QAW85fe2b8a", "08ZbBTYhKfMI", "klz_0pUTP5y", "bjg2ACAxWAXR", "DcL0q17DCyl", "sBFh6IaqISk", "CCXbABuIf3j", "DEefHWxz6pg", "CYJMHcr1iEQ", "mmbTItRf2SC", "J3sU6n8IyX", "LDAhrn6xass", "bzmAVODXdix", "AU0FzlVB-Mb", "-FySdZec...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. Most of my concerns are addressed. I raised my score as leaning more for acceptance :)", " Dear Reviewers,\n\nWe again appreciate your time and valuable feedback. Based on the initial review comments, we have significantly improved the draft with a new version uploaded. For your refere...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "DcL0q17DCyl", "nips_2022_7WGNT3MHyBm", "-X9ip1T20b", "QAW85fe2b8a", "DEefHWxz6pg", "iaLQQyWgsh_", "-FySdZecYcE", "AU0FzlVB-Mb", "bzmAVODXdix", "iaLQQyWgsh_", "-FySdZecYcE", "AU0FzlVB-Mb", "mmbTItRf2SC", "bzmAVODXdix", "nips_2022_7WGNT3MHyBm", "nips_2022_7WGNT3MHyBm", "nips_2022_7WGN...
nips_2022_U8k0QaBgXS
Exploring evolution-aware & -free protein language models as protein function predictors
Large-scale Protein Language Models (PLMs) have improved performance in protein prediction tasks, ranging from 3D structure prediction to various function predictions. In particular, AlphaFold, a ground-breaking AI system, could potentially reshape structural biology. However, the utility of the PLM module in AlphaFold, Evoformer, has not been explored beyond structure prediction. In this paper, we investigate the representation ability of three popular PLMs: ESM-1b (single sequence), MSA-Transformer (multiple sequence alignment), and Evoformer (structural), with a special focus on Evoformer. Specifically, we aim to answer the following key questions: (1) Does the Evoformer trained as part of AlphaFold produce representations amenable to predicting protein function? (2) If yes, can Evoformer replace ESM-1b and MSA-Transformer? (3) How much do these PLMs rely on evolution-related protein data? In this regard, are they complementary to each other? We compare these models by empirical study along with new insights and conclusions. All code and datasets for reproducibility are available at https://github.com/elttaes/Revisiting-PLMs .
Accept
The paper provides an empirical study of the representation learning component of Alphafold for protein sequences, called Evoformer. The topic of protein language models (PLMs), of which Evoformer is an example, is relevant and interesting to the machine learning community. This is due to the success of Alphafold in protein structure prediction, as well as the popularity of large language models in natural language. Four reviewers have carefully considered the results, and there was a very constructive interaction between the authors and three reviewers during the discussion phase. Thank you to both the authors and reviewers for engaging in good faith. All reviewers unanimously agreed that the paper is well written and the method reproducible (modulo compute). The revised paper provides useful empirical results to guide future researchers interested in representing protein sequences for different possible tasks. It gives me great pleasure to recommend the paper for acceptance to NeurIPS 2022.
train
[ "jAA060dZuM", "4wm1fM6n-DM", "dBZZ0SG4uD1", "LzILfhAHMtS", "CZdMwMjzRN", "SzPow9P___M", "Dfm6KWu464s", "0ErbCxwy6t", "ojDg8qhm89x", "CelaWmJ3xtz", "Cpmx9boGGj7", "Nd1-3i4EP9m", "xQESxC1PJL", "JsEBjRv2xTa", "w5MG9vHEcB5", "tbf7EwefhYI" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, we promise we would add results of MSA transformer as you suggested. Also thanks for provding us the the fast MSA searching guideline. Searching MSA is not a big problem, just takes a bit time and money for buying CPU services. Best Regards.", " Thank you for adding results of ESM with smaller t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 1 ]
[ "4wm1fM6n-DM", "LzILfhAHMtS", "Dfm6KWu464s", "0ErbCxwy6t", "Dfm6KWu464s", "Dfm6KWu464s", "Cpmx9boGGj7", "CelaWmJ3xtz", "tbf7EwefhYI", "w5MG9vHEcB5", "JsEBjRv2xTa", "xQESxC1PJL", "nips_2022_U8k0QaBgXS", "nips_2022_U8k0QaBgXS", "nips_2022_U8k0QaBgXS", "nips_2022_U8k0QaBgXS" ]
nips_2022_VVCI8-PYYv
Robust Graph Structure Learning over Images via Multiple Statistical Tests
Graph structure learning aims to learn connectivity in a graph from data. It is particularly important for many computer vision related tasks since no explicit graph structure is available for images in most cases. A natural way to construct a graph among images is to treat each image as a node and assign pairwise image similarities as weights to the corresponding edges. It is well known that pairwise similarities between images are sensitive to the noise in feature representations, leading to unreliable graph structures. We address this problem from the viewpoint of statistical tests. By viewing the feature vector of each node as an independent sample, the decision of whether to create an edge between two nodes based on their similarity in feature representation can be thought of as a ${\it single}$ statistical test. To improve the robustness of the decision to create an edge, multiple samples are drawn and integrated by ${\it multiple}$ statistical tests to generate a more reliable similarity measure and, consequently, a more reliable graph structure. The corresponding elegant matrix form, named $\mathcal{B}$$\textbf{-Attention}$, is designed for efficiency. The effectiveness of multiple tests for graph structure learning is verified both theoretically and empirically on multiple clustering and ReID benchmark datasets. Source codes are available at https://github.com/Thomas-wyh/B-Attention.
Accept
All reviews are quite positive and the raised concerns have been addressed. Accept.
train
[ "ep8zeHyB6b", "SyWkKp0-xav", "yjmNOQQllrB", "Iw0xETp2Eon", "yzclXW7NE-a", "o_tPGdjHz6D", "VUZCQNfGKqC", "d3KFjhCzuze", "3NFAM8fVcu1" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the suggestion! The manuscript has been updated by adding a short paragraph (in red color) in Section 2 to explicitly describe the sub-graph sampling strategy for training and inference.\n\nThe sub-graph sampling strategy we adopted is similar to that in DGL (Wang et al., 2019) and Ada-NETS (Wang et al...
[ -1, -1, -1, -1, -1, -1, 7, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "SyWkKp0-xav", "o_tPGdjHz6D", "nips_2022_VVCI8-PYYv", "d3KFjhCzuze", "VUZCQNfGKqC", "3NFAM8fVcu1", "nips_2022_VVCI8-PYYv", "nips_2022_VVCI8-PYYv", "nips_2022_VVCI8-PYYv" ]
nips_2022_LqGA2JMLwBw
On the Tradeoff Between Robustness and Fairness
Interestingly, recent experimental results [2, 26, 22] have identified a robust fairness phenomenon in adversarial training (AT), namely that a robust model well-trained by AT exhibits a remarkable disparity in standard accuracy and robust accuracy among different classes compared with natural training. However, the effect of different perturbation radii in AT on robust fairness has not been studied, and one natural question is raised: does a tradeoff exist between average robustness and robust fairness? Our extensive experimental results provide an affirmative answer to this question: with an increasing perturbation radius, stronger AT will lead to a larger class-wise disparity of robust accuracy. Theoretically, we analyze the class-wise performance of adversarially trained linear models with a mixture Gaussian distribution. Our theoretical results support our observations. Moreover, our theory shows that adversarial training easily leads to a more serious robust fairness issue than natural training. Motivated by the theoretical results, we propose a fairly adversarial training (FAT) method to mitigate the tradeoff between average robustness and robust fairness. Experimental results validate the effectiveness of our proposed method.
Accept
While the reviews are a bit divergent, it seems many of the raised concerns have been properly addressed, and some of the resolutions were confirmed with the reviewers. I believe the paper is well written and the contribution is clear, with supporting experiments and theory. Hence, I recommend the acceptance of this paper.
train
[ "ZsCju0nxh8", "eKkIPSJ7DeF", "JiHaSQP-Pmr", "5fQtW5lsH8c", "2QSlHfn00Yt", "FjAZ0_vt1GN", "ZzDoSK3Pw02", "2uT-W_DAtRV", "qNH9jfU3Zv7", "ITJJADWLpP" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer F5Kr,\n\nThank you for reviewing our paper. Since your rating is low, we hope to have more discussion with you. If you have any other questions, please let us know.\n\n", " Dear reviewer zHuG,\n\nwe will really appreciate it if the reviewer can go over our detailed response. Please feel free to as...
[ -1, -1, -1, -1, -1, -1, -1, 3, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "2uT-W_DAtRV", "qNH9jfU3Zv7", "ITJJADWLpP", "2uT-W_DAtRV", "qNH9jfU3Zv7", "2uT-W_DAtRV", "ITJJADWLpP", "nips_2022_LqGA2JMLwBw", "nips_2022_LqGA2JMLwBw", "nips_2022_LqGA2JMLwBw" ]
nips_2022_u6MpfQPx9ck
Active Learning Through a Covering Lens
Deep active learning aims to reduce the annotation cost for the training of deep models, which is notoriously data-hungry. Until recently, deep active learning methods were ineffectual in the low-budget regime, where only a small number of examples are annotated. The situation has been alleviated by recent advances in representation and self-supervised learning, which endow the geometry of the data representation with rich information about the points. Taking advantage of this progress, we study the problem of subset selection for annotation through a “covering” lens, proposing ProbCover – a new active learning algorithm for the low-budget regime, which seeks to maximize Probability Coverage. We then describe a dual way to view the proposed formulation, from which one can derive strategies suitable for the high-budget regime of active learning, related to existing methods like Coreset. We conclude with extensive experiments, evaluating ProbCover in the low-budget regime. We show that our principled active learning strategy improves the state of the art in the low-budget regime on several image recognition benchmarks. This method is especially beneficial in the semi-supervised setting, allowing state-of-the-art semi-supervised methods to match the performance of fully supervised methods while using far fewer labels. Code is available at https://github.com/avihu111/TypiClust.
Accept
This paper introduces an active learning method, ProbCover, that seeks to maximize probability coverage for the low-budget regime. It also provides theoretical analysis and a dual way to view the proposed method with respect to methods better suited to the high-budget regime, like Coreset. The paper received two weak accept ratings and a weak reject rating. After reviewing the paper, the reviews, the author responses, and additional discussion with reviewers, I believe that on balance the strengths of the paper outweigh the weaknesses. Reviewers overall appreciated the importance of the task, the theoretical analysis, and the thorough experiments. Most of the questions and clarifications about the approach were sufficiently addressed in the rebuttal, as well as additional requested experiments. Reviewers considered the use of a representation space from an auxiliary model to be a weakness, but I agree with the assessment (with which a reviewer concurred) that this is not a limitation that invalidates the contributions of the paper. In practice it is a reasonable approach given the low-budget regime. Reviewers also questioned the comparison with TypiClust, which was originally performed using only the SimCLR representation space, and not the one based on SCAN that TypiClust also used. The authors added a comparison based on the SCAN representation for CIFAR-10 in their rebuttal, which is appreciated, and showed improvement in this setting as well. However, since an experiment on only one dataset was provided, and the improvement is smaller than for the main reported results using SimCLR (and TypiClust outperforms ProbCover at the smallest budget), the authors are highly encouraged to include more comprehensive comparisons using the SCAN representation in the final paper for completeness. Overall, acceptance is recommended for this paper.
val
[ "56AQFq42om9", "8cshpqHKkSP", "Bp2Yzqy584B", "ct0UMM_HZD", "K2FeK5mPYyh", "Iq8sKJ1DckJ", "BL4T-1t7BWe", "KnEtghqm3sV", "qKOWXSY7UdP", "WHxzyp808Xe", "juxaqHgRRrX", "PU0S7GplC4l", "ImM5Hh8Rlr5", "-QaTSRLgEmw", "EYqg16EFMg" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I was pointing out the discrepancy between the main and auxiliary models not transferring learning or representation learning in general. \n\n> In our view, the fact that representations learned by one model are beneficial to the training of another model is a very strong aspect of deep learning, rather than a li...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "Bp2Yzqy584B", "juxaqHgRRrX", "ct0UMM_HZD", "Iq8sKJ1DckJ", "Iq8sKJ1DckJ", "BL4T-1t7BWe", "KnEtghqm3sV", "qKOWXSY7UdP", "WHxzyp808Xe", "EYqg16EFMg", "-QaTSRLgEmw", "ImM5Hh8Rlr5", "nips_2022_u6MpfQPx9ck", "nips_2022_u6MpfQPx9ck", "nips_2022_u6MpfQPx9ck" ]
nips_2022_Qr8n979lusV
NeIF: Representing General Reflectance as Neural Intrinsics Fields for Uncalibrated Photometric Stereo
Uncalibrated photometric stereo is challenging due to the general bas-relief ambiguity. Existing solutions alleviate this ambiguity by either building an explicit relationship between reflectance and lighting or resolving lighting information in a supervised manner before recovering surface normal, which suffers from poor generalization to unseen reflectance or data. In contrast, this paper builds an implicit relationship between general reflectance (specular, cast shadow) and lighting by representing the reflectance as several neural intrinsics fields, based on which we optimize the surface normal and lighting in an unsupervised manner. Specifically, the neural intrinsics fields include reflectance features (i.e., diffuse, specular, diffuse coefficient, specular coefficient, cast shadow) and shading features (i.e., surface normal, lighting information). The implicit relationship is achieved by feeding the lighting information to the neural specular & shadow fields and optimizing all intrinsics through a rendering equation in an unsupervised manner, which facilitates better generalization to unseen reflectance and data. Our method achieves a clear performance advantage over state-of-the-art uncalibrated photometric stereo methods on public datasets in terms of surface normal & lighting estimation.
Reject
This paper had reviews ranging from a Reject to a Weak accept. The key shared concern among reviewers was how much is really new relative to the CVPR paper [Li et al., 2022]. The most negative reviewer (who is quite expert in this field) engaged strongly in the discussion with the authors, highlighting sustained concerns about novelty, substantially slower speed at rendering time, and the loss of the ability to render materials with interesting reflectance properties. I found this review the most accurate and detailed. The remaining reviews, while borderline or weakly positive, retained concerns about the limited lighting and reflectance properties. Therefore, I am deciding to reject this paper.
val
[ "tnBoS2e-qaW", "meo2mrlzPPQ", "a_DFytWgQWx", "zBvx3cdAgy9", "h66hQ1wFK0ig", "NGgoC75VdhZ", "tqVfb0r20a5", "U0H_3lvbkT8u", "3hcHGiRFw1B", "W06DDL4-P9F", "6enJTnQ232d", "RHhqydbAHb", "FNGeqilN0--", "qR4yJzLyZVh", "SgX7bi31p6v", "R1_opWsCKqL" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After reading the authors response, and the other reviews, I increase my rating to a borderline accept. ", " Dear Reviewer yGxh,\n\n We sincerely thank your suggestions and respect your rating. According to the post-rebuttal comments, your primary concern is the **uncertainty about the revision**. We will defin...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3, 3 ]
[ "SgX7bi31p6v", "a_DFytWgQWx", "qR4yJzLyZVh", "h66hQ1wFK0ig", "SgX7bi31p6v", "nips_2022_Qr8n979lusV", "R1_opWsCKqL", "SgX7bi31p6v", "qR4yJzLyZVh", "FNGeqilN0--", "RHhqydbAHb", "nips_2022_Qr8n979lusV", "nips_2022_Qr8n979lusV", "nips_2022_Qr8n979lusV", "nips_2022_Qr8n979lusV", "nips_2022_...
nips_2022_mT18WLu9J_
Amplifying Membership Exposure via Data Poisoning
As in-the-wild data are increasingly involved in the training stage, machine learning applications become more susceptible to data poisoning attacks. Such attacks typically lead to test-time accuracy degradation or controlled misprediction. In this paper, we investigate the third type of exploitation of data poisoning: increasing the risks of privacy leakage of benign training samples. To this end, we demonstrate a set of data poisoning attacks to amplify the membership exposure of the targeted class. We first propose a generic dirty-label attack for supervised classification algorithms. We then propose an optimization-based clean-label attack in the transfer learning scenario, whereby the poisoning samples are correctly labeled and look "natural" to evade human moderation. We extensively evaluate our attacks on computer vision benchmarks. Our results show that the proposed attacks can substantially increase the membership inference precision with minimal overall test-time model performance degradation. To mitigate the potential negative impacts of our attacks, we also investigate feasible countermeasures.
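Success here is measured via membership-inference precision. For reference, a minimal confidence-thresholding membership-inference baseline in the style of Yeom et al. looks like the sketch below; this is one standard instantiation, not necessarily the attack used in the paper, and all numbers are illustrative:

```python
import numpy as np

def confidence_mi_attack(confidences, threshold):
    """Flag a sample as a training member when the model's top softmax
    confidence exceeds a threshold; poisoning that amplifies membership
    exposure widens the member/non-member confidence gap."""
    return confidences > threshold

member_conf = np.array([0.99, 0.97, 0.95])     # confidences on training samples
nonmember_conf = np.array([0.70, 0.85, 0.60])  # confidences on held-out samples
preds = confidence_mi_attack(np.concatenate([member_conf, nonmember_conf]), 0.9)
labels = np.array([1, 1, 1, 0, 0, 0])
precision = (preds & (labels == 1)).sum() / max(preds.sum(), 1)
print(f"MI precision: {precision:.2f}")  # 1.00 on this toy example
```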
Accept
Reviewers 47fC and gUo3, who are experts in this area, were very pleased by the author discussion and clarification. They appreciated that the authors will tone down their claims and reduce confusion in the final version. As gUo3 said, there are concurrent works, but that can't hurt this work -- it only shows that there is significant interest in this topic. The same reviewer did feel that the scope of the results seems somewhat weak (just transfer learning), but there is some evidence that things extend beyond that setting. If the authors wish to further improve the work, it may be worth performing further experimental evaluation. Note that the two other reviewers, who were negative, engaged with neither the authors nor me when prompted to do so in private deliberation. Since they appear unwilling to defend their negative evaluation of the paper, I defer to the positive sentiments of the other two reviewers who did discuss privately.
train
[ "U1O-yuYZxX", "j3kHVMe-E-w", "Iy1DxERNwZF", "71B691G7Oa4", "Hw_9Kz0kT8h", "n4YYFMYh9cY", "m8pYpa2HsRu1", "9PqKIsA9LR9", "czY9Nq7LcWY", "uu0bEwDzjhd", "NZnom6_05V_", "22nPFIOAHyA", "Gx7fpeDlkTU", "bYMEd-Hp6cy", "3lkhvno8Z0G", "KA8jbqhQF81", "LuHBqXBCMcc", "Oyh4GFYKOdp", "79FtzV0eQ...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author"...
[ " Dear Reviewer Ewj4,\n\nThank you again for the valuable comments. As the discussion period is close to the end, we want to reach out to see whether you find our responses satisfactory.\n\nWe would like to hear from you about any further feedback, which is very important for us to improve the paper, and we really ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 5 ]
[ "22nPFIOAHyA", "Iy1DxERNwZF", "l9k8b2LO56H", "Hw_9Kz0kT8h", "czY9Nq7LcWY", "nips_2022_mT18WLu9J_", "nips_2022_mT18WLu9J_", "nips_2022_mT18WLu9J_", "uu0bEwDzjhd", "NZnom6_05V_", "ZU8_3K5doH", "Gx7fpeDlkTU", "bYMEd-Hp6cy", "3lkhvno8Z0G", "KA8jbqhQF81", "dbPJo2iDa3P", "Oyh4GFYKOdp", "...
nips_2022_ebCk2FNI1za
Universality of Group Convolutional Neural Networks Based on Ridgelet Analysis on Groups
We show the universality of depth-2 group convolutional neural networks (GCNNs) in a unified and constructive manner based on the ridgelet theory. Despite widespread use in applications, the approximation property of (G)CNNs has not been well investigated. The universality of (G)CNNs has been shown since the late 2010s. Yet, our understanding of how (G)CNNs represent functions is incomplete because the past universality theorems have been shown in a case-by-case manner by manually/carefully assigning the network parameters depending on the variety of convolution layers, and in an indirect manner by converting/modifying the (G)CNNs into other universal approximators such as invariant polynomials and fully-connected networks. In this study, we formulate a versatile depth-2 continuous GCNN $S[\gamma]$ as a nonlinear mapping between group representations, and directly obtain an analysis operator, called the ridgelet transform, that maps a given function $f$ to the network parameter $\gamma$ so that $S[\gamma]=f$. The proposed GCNN covers typical GCNNs such as the cyclic convolution on multi-channel images, networks on permutation-invariant inputs (Deep Sets), and $\mathrm{E}(n)$-equivariant networks. The closed-form expression of the ridgelet transform can describe how the network parameters are organized to represent a function. While it has been known only for fully-connected networks, this study is the first to obtain the ridgelet transform for GCNNs. By discretizing the closed-form expression, we can systematically generate a constructive proof of the $cc$-universality of finite GCNNs. In other words, our universality proofs are more unified and constructive than previous proofs.
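For orientation, the classical ridgelet transform for depth-2 fully-connected networks, which the abstract notes was the only previously known case, can be summarized as follows (constants and admissibility conditions are omitted in this sketch):

```latex
% Depth-2 fully-connected network as an integral transform:
%   S[\gamma](x) = \int \gamma(a, b)\, \sigma(\langle a, x \rangle - b)\, da\, db.
% Classical ridgelet transform (the analysis operator):
R_\psi f(a, b) = \int_{\mathbb{R}^m} f(x)\, \overline{\psi(\langle a, x \rangle - b)}\, dx,
% which, for an admissible pair (\sigma, \psi), satisfies the reconstruction
% identity S[R_\psi f] = c f.  The paper extends this operator from
% \mathbb{R}^m to group representations, yielding GCNN universality.
```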
Accept
The paper studies the approximation properties of group convolutional neural networks. It establishes the “cc-universality” of group CNNs, i.e. that such networks can approximate any continuous function over any compact set, using a new constructive approach which is based on a generalization of the ridgelet transform. The proof is constructive, in the sense that approximating networks are given in closed form by discretizing the transform. This approach may have applications beyond the scope of the paper — most immediately, to identifying classes of functions for which neural network approximations are accurate in a quantitative sense; this can be tied to the decay properties of the ridgelet transform. Reviewers found the paper to be clearly written and of high technical quality, albeit somewhat mathematically dense in its presentation. Universal approximation theorems provide an important piece of theoretical background, as well as a “sanity check” for new network architectures; having a unified, constructive approach to derive them could stimulate further work.
test
[ "JngB_q6EDRf", "6eqfSTUEvyV", "NFmXq0lViv3", "Y8jSUgBLYvR", "qZKviPXTIhe", "dlHEreNWy4t", "LyanRyZkT6h", "R7vAcoiPImw", "IAyyFSUehuU", "rVH7zyBX9U1" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors have addressed most of my comments. Including the authors' comment in the manuscript improves the motivation and related works. I have updated my score accordingly.", " We would like to thank all the reviewers for their constructive feedback and the time spent reading the paper.\n\nIn reference to t...
[ -1, -1, -1, -1, -1, -1, 5, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 2, 2 ]
[ "qZKviPXTIhe", "nips_2022_ebCk2FNI1za", "rVH7zyBX9U1", "IAyyFSUehuU", "R7vAcoiPImw", "LyanRyZkT6h", "nips_2022_ebCk2FNI1za", "nips_2022_ebCk2FNI1za", "nips_2022_ebCk2FNI1za", "nips_2022_ebCk2FNI1za" ]
nips_2022_0PfIQs-ttQQ
Self-supervised surround-view depth estimation with volumetric feature fusion
We present a self-supervised depth estimation approach using a unified volumetric feature fusion for surround-view images. Given a set of surround-view images, our method constructs a volumetric feature map by extracting image feature maps from surround-view images and fusing the feature maps into a shared, unified 3D voxel space. The volumetric feature map can then be used for estimating a depth map at each surround view by projecting it into image coordinates. A volumetric feature contains 3D information at its local voxel coordinate; thus our method can also synthesize a depth map at arbitrary rotated viewpoints by projecting the volumetric feature map into the target viewpoints. Furthermore, assuming static camera extrinsics in the multi-camera system, we propose to estimate a canonical camera motion from the volumetric feature map. Our method leverages 3D spatio-temporal context to learn metric-scale depth and the canonical camera motion in a self-supervised manner. Our method outperforms the prior arts on the DDAD and nuScenes datasets, especially estimating more accurate metric-scale depth and consistent depth between neighboring views.
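A rough sketch of the unprojection/fusion step described above: back-project voxel centers into every surround camera and pool the sampled image features. Nearest-pixel sampling and mean pooling here stand in for the paper's learned fusion (the reviews mention separate MLPs for overlap/non-overlap regions), and all names are illustrative:

```python
import numpy as np

def fuse_into_voxels(feats, Ks, T_cams, voxel_centers):
    """feats: list of (C, H, W) feature maps; Ks: 3x3 intrinsics;
    T_cams: 4x4 world-to-camera extrinsics; voxel_centers: (V, 3)."""
    V, C = voxel_centers.shape[0], feats[0].shape[0]
    acc, cnt = np.zeros((V, C)), np.zeros(V)
    homog = np.hstack([voxel_centers, np.ones((V, 1))])   # (V, 4)
    for feat, K, T in zip(feats, Ks, T_cams):
        cam = (T @ homog.T).T[:, :3]                      # voxels in camera frame
        z = cam[:, 2]
        pix = (K @ cam.T).T
        u = np.round(pix[:, 0] / np.clip(z, 1e-6, None)).astype(int)
        v = np.round(pix[:, 1] / np.clip(z, 1e-6, None)).astype(int)
        H, W = feat.shape[1:]
        valid = (z > 0.1) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        acc[valid] += feat[:, v[valid], u[valid]].T       # gather features
        cnt[valid] += 1
    return acc / np.maximum(cnt, 1)[:, None]              # mean over cameras
```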
Accept
Initially, the paper had mixed reviews (455). The major concerns from the reviews were: 1. missing refs about unprojection. (K314) 2. quality advantage is not convincing, slightly better than FSM, while qualitative results show no obvious improvements. (K314) 3. visualize the depth maps as point clouds (K314) 4. what is the trade-off between resolution/memory, computation, and depth estimation? (K314, ZBww) 5. insufficient experiments (9Ef1) 6. comparison with monocular methods (9Ef1) 7. cubic vs spherical space? (9Ef1) 8. can it be trained on real data w/o GT supervision? (9Ef1) 9. how to handle collision of multiple pixel rays? (ZBww) 10. why use different MLPs to fuse "overlap" and "non-overlap" features? No ablation study on this. (ZBww) The authors wrote a response to address these concerns, providing more qualitative results and ablation studies, as well as further explanations. The reviewers were satisfied with the response, and K314 upgraded their rating to 6, while other reviewers maintained 5s. The reviewers appreciated the novel problem and the solution that can produce more consistent depth maps across views, and also synthesize depth maps in novel views. After reading the paper, the AC agrees with the reviewers, noting that the paper addresses the limitations of the problem setup of previous work [13], thus developing a new line of research. Thus, the AC recommends accept. The authors should prepare a revised version of the paper according to the reviews, rebuttal, and discussion.
train
[ "-SVKjnd4R2G", "5BpD6_9TLCS", "7EHHPumix6E", "aQJZz7GDd1m", "9zUdxDj40y", "eeaQyO9nwVz", "J8IkkUK18Vd" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your positive feedback and suggestions!\n\n***\n\n**● Details for constructing a unified volumetric feature. How to handle multi-feature collisions?**\n\nWe provide clearer details here and revised the main paper accordingly (`Sec. 3.2` and `Fig. 3`).\n\nFor each voxel, we find a corresponding pixel...
[ -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "J8IkkUK18Vd", "eeaQyO9nwVz", "9zUdxDj40y", "nips_2022_0PfIQs-ttQQ", "nips_2022_0PfIQs-ttQQ", "nips_2022_0PfIQs-ttQQ", "nips_2022_0PfIQs-ttQQ" ]
nips_2022_7YXXt9lRls
Learning Representations via a Robust Behavioral Metric for Deep Reinforcement Learning
Learning an informative representation with behavioral metrics can accelerate the deep reinforcement learning process. There are two key research issues in behavioral metric-based representation learning: 1) how to relax the computation of a specific behavioral metric, which is difficult or even intractable to compute, and 2) how to approximate the relaxed metric by learning an embedding space for states. In this paper, we analyze the potential relaxation and/or approximation gaps for existing behavioral metric-based representation learning methods. Based on the analysis, we propose a new behavioral distance, the RAP distance, and develop a practical representation learning algorithm on top of it with a theoretical analysis. We conduct extensive experiments on DeepMind Control Suite with distraction, Robosuite, and the autonomous driving simulator CARLA to demonstrate new state-of-the-art results.
Accept
Unanimous accept from 4 reviewers. This paper was initially divisive among reviewers (scoring 7744), now 7756 post-rebuttal. While the consensus was that this representation learning work had initial strengths of clear analysis of previous metrics, clear evaluations, writing, and good experimentation, the prior weaknesses were mainly confusion about derivations and equation interpretations, no analysis of the learned model, no visualizations of the latent space, and a lack of explanation of why the method only does well in some environments and not others. These confusions have since been cleared up; in particular, the authors have added additional analysis in the appendix (e.g., Table 4 and Fig. 15) that satisfies reviewer nvyx's concerns over clarifications about equations.
test
[ "al_gHuBA2I0", "_8XyDUQD5kv", "4nP9_BEFW7t", "D2R8GxlUtQ1", "_13bsMLJnC_", "9y8ZrFrTq7", "nXUNrJo-Qgj", "29a88CyzRpQ", "Loj8muFprwf", "0oArMb_jusH", "fN0MrjHKX5r", "AqENIcVIEJo" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > In addition to the lack of connexion between the theorems and the actual algorithm due to the metric d, an important characteristic/potential limitation of the RAP distance is that it depends on the policy. The explicit dependance on the policy is removed in Equation 13, why? How do you train the algorithm in p...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "AqENIcVIEJo", "AqENIcVIEJo", "fN0MrjHKX5r", "fN0MrjHKX5r", "0oArMb_jusH", "Loj8muFprwf", "Loj8muFprwf", "nips_2022_7YXXt9lRls", "nips_2022_7YXXt9lRls", "nips_2022_7YXXt9lRls", "nips_2022_7YXXt9lRls", "nips_2022_7YXXt9lRls" ]
nips_2022_hgNxCMKARgt
Descent Steps of a Relation-Aware Energy Produce Heterogeneous Graph Neural Networks
Heterogeneous graph neural networks (GNNs) achieve strong performance on node classification tasks in a semi-supervised learning setting. However, as in the simpler homogeneous GNN case, message-passing-based heterogeneous GNNs may struggle to balance between resisting the oversmoothing that may occur in deep models, and capturing long-range dependencies of graph structured data. Moreover, the complexity of this trade-off is compounded in the heterogeneous graph case due to the disparate heterophily relationships between nodes of different types. To address these issues, we propose a novel heterogeneous GNN architecture in which layers are derived from optimization steps that descend a novel relation-aware energy function. The corresponding minimizer is fully differentiable with respect to the energy function parameters, such that bilevel optimization can be applied to effectively learn a functional form whose minimum provides optimal node representations for subsequent classification tasks. In particular, this methodology allows us to model diverse heterophily relationships between different node types while avoiding oversmoothing effects. Experimental results on 8 heterogeneous graph benchmarks demonstrate that our proposed method can achieve competitive node classification accuracy.
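To make "layers derived from optimization steps" concrete, here is a generic sketch of the recipe under an assumed energy form: each layer is one gradient step on a relation-aware smoothness energy, where a per-relation compatibility matrix H_r lets connected nodes of different types agree up to a learned linear map (the role the meta-review ascribes to these matrices). The actual HALO energy differs in its details.

```python
import numpy as np

def energy_descent_layers(Y0, edges_by_rel, H_by_rel, lam, alpha, n_layers):
    """Unrolled descent on an assumed energy
    E(Y) = ||Y - Y0||^2 + lam * sum over edges (u, v) of relation r
                               of ||Y_u - Y_v @ H_r||^2."""
    Y = Y0.copy()
    for _ in range(n_layers):
        grad = 2 * (Y - Y0)
        for r, edges in edges_by_rel.items():
            H = H_by_rel[r]
            for (u, v) in edges:
                diff = Y[u] - Y[v] @ H
                grad[u] += 2 * lam * diff
                grad[v] -= 2 * lam * diff @ H.T
        Y = Y - alpha * grad   # one descent step = one GNN layer
    return Y
```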
Accept
This paper studies heterogeneous GNNs and proposes a novel HGNN architecture, HALO. It tries to make connected nodes share similar labels, with relation-dependent compatibility matrices to resolve the mismatch between different node types. The key is to derive novel GNN layers from the minimization steps of a relation-aware energy function. The authors addressed concerns about the design choices and added experiments with larger and more datasets during the rebuttal. While the idea extends previous works [32, 35], reviewers generally appreciate its contributions to heterogeneous GNNs.
train
[ "tDNQHHJLaj", "YH1Hu-06uZv", "UqrlBZf9bj5", "gVCbbJsCDfR", "VWwslq2gyWw", "7otSqLv_yj0", "ZizIr9_Bf-9", "tx9nfQ4e6XW", "3r84pxzutZZ", "1ALcGvbAjpjf", "TtYDYMsvdFH", "vIFXFPG2TlE", "ep6JDk5av4O", "xh_DXWxES1r", "wJLtbH0MYQQ", "cEE4zITg_X9", "3OkFfSuh6uu", "S-BMij_s-4", "RA19qfSihF...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " Regarding tests with homogeneous baselines, we now understand the reviewer's intended meaning. And for reference, the exact test the reviewer suggests has actually already been conducted in prior work [15] cited in our submission. Specifically, from Table 4 in [15] there are head-to-head results comparing GCN a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "UqrlBZf9bj5", "wJLtbH0MYQQ", "xh_DXWxES1r", "VWwslq2gyWw", "ZizIr9_Bf-9", "tx9nfQ4e6XW", "3r84pxzutZZ", "RA19qfSihFx", "vIFXFPG2TlE", "TtYDYMsvdFH", "RA19qfSihFx", "ep6JDk5av4O", "S-BMij_s-4", "3OkFfSuh6uu", "cEE4zITg_X9", "nips_2022_hgNxCMKARgt", "nips_2022_hgNxCMKARgt", "nips_20...
nips_2022_hT0RbC2jCYZ
Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures
This paper considers the Pointer Value Retrieval (PVR) benchmark introduced in [ZRKB21], where a `reasoning' function acts on a string of digits to produce the label. More generally, the paper considers the learning of logical functions with gradient descent (GD) on neural networks. It is first shown that in order to learn logical functions with gradient descent on symmetric neural networks, the generalization error can be lower-bounded in terms of the noise-stability of the target function, supporting a conjecture made in [ZRKB21]. It is then shown that in the distribution shift setting, when the data withholding corresponds to freezing a single feature (referred to as canonical holdout), the generalization error of gradient descent admits a tight characterization in terms of the Boolean influence for several relevant architectures. This is shown on linear models and supported experimentally on other models such as MLPs and Transformers. In particular, this puts forward the hypothesis that for such architectures and for learning logical functions such as PVR functions, GD tends to have an implicit bias towards low-degree representations, which in turn gives the Boolean influence for the generalization error under quadratic loss.
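Since the characterization above is phrased in terms of Boolean influence, a small self-contained computation may help: the influence of coordinate i is the probability, over a uniform input, that flipping bit i flips the output.

```python
import itertools

def boolean_influence(f, n, i):
    """Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)] over uniform x in {0,1}^n."""
    flips = 0
    for x in itertools.product([0, 1], repeat=n):
        y = list(x)
        y[i] ^= 1
        flips += f(list(x)) != f(y)
    return flips / 2 ** n

# Example: 3-bit majority; flipping a bit matters iff the other two disagree.
maj3 = lambda x: int(sum(x) >= 2)
print([boolean_influence(maj3, 3, i) for i in range(3)])  # [0.5, 0.5, 0.5]
```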
Accept
The paper addresses in a formal and solid way a specific learning task, with the intent to assess the limitations of different neural network architectures. The addressed task requires some degree of reasoning for an architecture to learn it in a satisfactory way. Although the authors eventually restrict the scope of their work, personally I think the paper gives insights about some aspects of learning bias in specific architectures that are not so evident at first sight, in fact contributing to a different perspective in the pursuit of a better understanding of deep learning models. In their rebuttal the authors have clarified some issues raised by the reviewers, and no specific negative concerns have been raised on the presented work, except for the relatively limited scope of the theoretical analysis and range of applicability to other architectures not covered in the paper. Overall I think the paper contains interesting material and concepts that can be of interest to the NeurIPS audience and are harbingers of future developments.
train
[ "m_KLAPETfVp", "TLD0iexO-9", "nK01Y7tQ2_R", "mjsVytCW_Iq", "bt8rFkTv4Jaq", "LrplwVwTyuc", "nrs9VHvmPfL", "7OGCryw5JXC", "-MOO3hJstx-", "hetyKIHztwG", "IvBTUnI2Irq", "s8ZYM-DroNt", "UVGG_Vm5ut9" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for appreciating our comments and raising the score.", " Given the response of the authors, the reviewer is raising the score. ", " Thanks for the response. I have read it and do not have further questions from the authors.", " We thank the reviewer for the constructive comments. We ad...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4, 3 ]
[ "TLD0iexO-9", "mjsVytCW_Iq", "bt8rFkTv4Jaq", "UVGG_Vm5ut9", "LrplwVwTyuc", "s8ZYM-DroNt", "7OGCryw5JXC", "IvBTUnI2Irq", "hetyKIHztwG", "nips_2022_hT0RbC2jCYZ", "nips_2022_hT0RbC2jCYZ", "nips_2022_hT0RbC2jCYZ", "nips_2022_hT0RbC2jCYZ" ]
nips_2022_e2TBb5y0yFf
Large Language Models are Zero-Shot Reasoners
Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved state-of-the-art performance in arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding ``Let's think step by step'' before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with the 175B-parameter InstructGPT model, as well as similar magnitudes of improvements with another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
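The method is a two-stage prompting pipeline that is simple enough to sketch directly. Here `lm` stands in for any text-completion API, and the second-stage answer trigger is task-dependent (the numeric one is shown):

```python
def zero_shot_cot(question, lm):
    """Zero-shot-CoT: (1) elicit a reasoning chain with a single fixed
    trigger, (2) extract the final answer conditioned on that chain."""
    prompt1 = f"Q: {question}\nA: Let's think step by step."
    reasoning = lm(prompt1)                        # stage 1: reasoning extraction
    prompt2 = f"{prompt1}{reasoning}\nTherefore, the answer (arabic numerals) is"
    return lm(prompt2).strip()                     # stage 2: answer extraction
```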
Accept
This paper shows that large language models, if prompted in a particular way, can drastically improve their performance on zero-shot reasoning tasks. Specifically, simply adding "Let's think step by step" to the prompt causes accuracy to improve by a huge margin. The paper provides a wide variety of experimental results corroborating the central claim, and situates itself well in the existing literature. The reviewers are on the fence about this work, and as an area chair I find myself similarly torn: the contribution itself is unequivocally mundane -- and I mean this without any offense to the authors, but as a matter of principle I do not think that "prompt engineering", much like "tuning on the test set", is a scientific discipline that deserves much (if any) attention in a machine learning conference. That said, this paper has two main things speaking for it: the fact that **the improvement is so large** AND the fact that **it works increasingly well with scale**. Oftentimes simple ideas are the most impactful ones, and I think that this work can have an impact on the field as a sort of existence proof: the mere observation that a single prompt can lead to such large gains, and that these gains only come about in very large-scale models, either points to serious flaws in our evaluation capabilities (from data overlap to faulty benchmarks) or some as-yet-poorly-understood capability in these models. This work opens up the way to important follow-up work that examines these questions more closely and conclusively, and consequently I think the paper in its current form merits acceptance. The main shortcoming of the initial submission was that it only used one model, but that has been addressed in the rebuttal with the addition of PaLM. I would strongly encourage the authors to heed Reviewer 1ywu's advice to try to add more models and look more closely at the training data. While it is true that open source models like Bloom (where the training data is publicly available) were not available yet by the time of submission, they are now, and the paper would be considerably strengthened by an analysis into the causes of this effect. In addition, I was a bit disappointed that the "instructive" prompts were not very diverse -- it is well known that large language models are particularly susceptible to minor variations in prompts, so it would have been interesting to try a more complete spectrum of prompts including things like "Let’s solve the problem bit by bit", "If we solve the problem stepwise/gradually/piecemeal", "Let’s be smart about this", "Let’s be careful/make sure we get the right answer" to try to get more clarity on what exactly is happening (and to show the audience they need to be careful with overly trusting their prompts will do the right thing). Lastly, the work almost invites anthropomorphization, along the lines of "humans also do better when prompted to solve a problem step by step", and I would have liked to see a more serious discussion of this aspect of the work (and hence why I think further analysis of training data/prompt variations should be included in the final version).
train
[ "I4OjCYmsOjR", "-KRorVKbjv", "ZI0NktpU7um", "HFNoYuScQAW", "jc4cfOR8FIF", "EVu8GHu-l3", "_s3nzdSsili", "YpGJ5nYxw34", "7l2Cp_q2Wiy", "F-t573bG4Eh", "ISREwTzFmDy", "3wJidJ6381R", "THmbD43F7lZ", "qO2a_5bsHUG" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **To : Area Chairs and all the reviewers**\n\nAs of now, we have got only one response from one reviewer out of 5.\nIf any of the other reviewers have any question / comment about the the rebuttal or the results of the additional experiments, we would be happy to answer them.\n\nWe are very sorry for the last min...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4, 4 ]
[ "nips_2022_e2TBb5y0yFf", "ZI0NktpU7um", "EVu8GHu-l3", "qO2a_5bsHUG", "THmbD43F7lZ", "3wJidJ6381R", "ISREwTzFmDy", "F-t573bG4Eh", "nips_2022_e2TBb5y0yFf", "nips_2022_e2TBb5y0yFf", "nips_2022_e2TBb5y0yFf", "nips_2022_e2TBb5y0yFf", "nips_2022_e2TBb5y0yFf", "nips_2022_e2TBb5y0yFf" ]
nips_2022_HwP4XJ04Je1
Effective Adaptation in Multi-Task Co-Training for Unified Autonomous Driving
Aiming towards a holistic understanding of multiple downstream tasks simultaneously requires extracting features with better transferability. Though many recent self-supervised pre-training methods have achieved impressive performance on various vision tasks under the prevailing pretrain-finetune paradigm, their generalization capacity to multi-task learning scenarios is yet to be explored. In this paper, we extensively investigate the transfer performance of various types of self-supervised methods, e.g., MoCo and SimCLR, on three downstream tasks, including semantic segmentation, drivable area segmentation, and traffic object detection, on the large-scale driving dataset BDD100K. We surprisingly find that their performances are sub-optimal or even lag far behind the single-task baseline, which may be due to the differences in training objectives and architectural design inherent in the pretrain-finetune paradigm. To overcome this dilemma as well as avoid redesigning the resource-intensive pre-training stage, we propose a simple yet effective pretrain-adapt-finetune paradigm for general multi-task training, where the off-the-shelf pretrained models can be effectively adapted without increasing the training overhead. During the adapt stage, we utilize learnable multi-scale adapters to dynamically adjust the pretrained model weights supervised by multi-task objectives while leaving the pretrained knowledge untouched. Furthermore, we regard the vision-language pre-training model CLIP as a strong complement to the pretrain-adapt-finetune paradigm and propose a novel adapter named LV-Adapter, which incorporates language priors in the multi-task model via task-specific prompting and alignment between visual and textual features. Our experiments demonstrate that the adapt stage significantly improves the overall performance of those off-the-shelf pretrained models and the contextual features generated by LV-Adapter are of general benefit for downstream tasks.
Accept
This paper provides an empirical analysis of the effectiveness of self-supervised learning-based pre-training on multi-task learning, specifically for tasks within autonomous driving. After showing that the standard fine-tuning procedure does not work well in this context, the authors propose a pretrain-adapt-finetune procedure, involving multi-scale adapters and a language-to-vision adapter via task-specific prompt learning. The reviewers appreciated the focus on multi-task learning, especially in a domain where it is highly relevant and standard self-supervised learning methods are under-explored. The experiments were noted to be thorough, with results across many different tasks. However, several concerns were raised, including requests for a better set of ablations, more exploration of the adapter design, and performance on another dataset. The authors provided a thorough rebuttal, including significant new ablations/experiments and results on NuImages. All reviewers subsequently recommended acceptance, including the reviewer who had given a borderline rating but found the new results convincing. While many of the elements of the proposed approach are not new, after the new experiments and ablations especially, this paper provides a nice contribution exploring an under-appreciated setting of multi-task learning and the use of pre-trained models for them. The experiments are thoroughly done and would be of value to the community. As a result, I recommend acceptance.
train
[ "tPz-odI04a", "Ml8bhBaN5OG", "1AWXgKGSHOr", "SHu0ZtHtFwI", "9hMRIsPJQU2", "najkUeJ0ffk", "K9LCHNJ4aLl", "0YY8mOlXlxT", "hvBQwACr6-pr", "oy92vu6mdaV", "O5i0RMbvHPY", "4haaf0OHQYw" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response and suggestions. We have added the experiments and analysis in Appendix, and clarified that architecture is a core reason for the degradation in L309 in the paper.", " Thank you for your subsequent experiments and analysis. I have no more questions, just some suggestions.\nResults in...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "Ml8bhBaN5OG", "najkUeJ0ffk", "nips_2022_HwP4XJ04Je1", "9hMRIsPJQU2", "4haaf0OHQYw", "K9LCHNJ4aLl", "O5i0RMbvHPY", "hvBQwACr6-pr", "oy92vu6mdaV", "nips_2022_HwP4XJ04Je1", "nips_2022_HwP4XJ04Je1", "nips_2022_HwP4XJ04Je1" ]
nips_2022_KOHC_CYEIuP
Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning
In recent centralized nonconvex distributed learning and federated learning, local methods are one of the promising approaches to reduce communication time. However, existing work has mainly focused on studying first-order optimality guarantees. On the other hand, algorithms with second-order optimality guarantees, i.e., algorithms escaping saddle points, have been extensively studied in the non-distributed optimization literature. In this paper, we study a new local algorithm called Bias-Variance Reduced Local Perturbed SGD (BVR-L-PSGD), which combines the existing bias-variance reduced gradient estimator with parameter perturbation to find second-order optimal points in centralized nonconvex distributed optimization. BVR-L-PSGD enjoys second-order optimality with nearly the same communication complexity as the best known complexity of BVR-L-SGD for finding first-order optimality. In particular, the communication complexity is better than that of non-local methods when the heterogeneity of the local datasets is smaller than the smoothness of the local loss. In an extreme case, the communication complexity approaches $\widetilde \Theta(1)$ when the heterogeneity of the local datasets goes to zero. Numerical results validate our theoretical findings.
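The "perturbed" ingredient is the classical saddle-escape mechanism from the non-distributed literature (Jin et al.): when the gradient is small, add noise drawn uniformly from a small ball. A generic single-step sketch of that idea, not the full BVR-L-PSGD procedure:

```python
import numpy as np

def perturbed_step(x, grad, lr, grad_thresh, radius, rng):
    """If the gradient is small (a candidate saddle region), perturb x with a
    uniform sample from a ball of the given radius, then take a descent step."""
    if np.linalg.norm(grad) <= grad_thresh:
        noise = rng.normal(size=x.shape)
        # Uniform sample from a ball: random direction, radius scaled by U^(1/d).
        noise *= radius * rng.uniform() ** (1 / x.size) / np.linalg.norm(noise)
        x = x + noise
    return x - lr * grad
```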
Accept
The reviewers have reached a consensus to accept the paper.
val
[ "lip7lPmBJJ", "HTmKBxHe5n", "xZYopi7mKn", "tZSmlSFKz4d", "8r7WXvLqp-f", "MJU9AN1vjU", "Bk5A-gH3lA", "Q6P1-E8N6S" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and the new content added.\nI have no further concerns. ", " Reviewer RYcp is satisfied with the authors' response and keeps rating 6. \nFor Q13, it is interesting that for small number of communication rounds, the gradient norm is smaller than that for 1k communication rounds. ", ...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "8r7WXvLqp-f", "xZYopi7mKn", "Q6P1-E8N6S", "Bk5A-gH3lA", "MJU9AN1vjU", "nips_2022_KOHC_CYEIuP", "nips_2022_KOHC_CYEIuP", "nips_2022_KOHC_CYEIuP" ]
nips_2022_pz2UcXyX0Cj
Causality-driven Hierarchical Structure Discovery for Reinforcement Learning
Hierarchical reinforcement learning (HRL) has been proven to be effective for tasks with sparse rewards, for it can improve the agent's exploration efficiency by discovering high-quality hierarchical structures (e.g., subgoals or options). However, automatically discovering high-quality hierarchical structures is still a great challenge. Previous HRL methods can only find the hierarchical structures in simple environments, as these structures are mainly discovered through the randomness of the agent's policies during exploration. In complicated environments, such a randomness-driven exploration paradigm can hardly discover high-quality hierarchical structures because of the low exploration efficiency. In this paper, we propose CDHRL, a causality-driven hierarchical reinforcement learning framework, to build high-quality hierarchical structures efficiently in complicated environments. The key insight is that the causalities among environment variables are a natural fit for modeling reachable subgoals and their dependencies; thus, causality is a suitable guide for building high-quality hierarchical structures. Roughly, we build the hierarchy of subgoals based on causality autonomously, and utilize the subgoal-based policies to unfold further causality efficiently. Therefore, CDHRL leverages a causality-driven discovery instead of a randomness-driven exploration for high-quality hierarchical structure construction. The results in two complex environments, 2D-Minecraft and Eden, show that CDHRL can discover high-quality hierarchical structures and significantly enhance exploration efficiency.
Accept
After a strong rebuttal from the authors and an extensive discussion among the reviewers, I believe the paper's pros outweigh its cons and this paper will be a valuable contribution to NeurIPS. I recommend it for acceptance and encourage the authors to address the reviewers' comments for the camera-ready version of the paper - specifically, please add the posted clarifications and experiments from the rebuttal and justify the limitation of available environment variables.
train
[ "jSGL1-OoWzw", "eyv0y_CbpUd", "kT1v2a6Vsa3", "z2NG3F8O1rh", "lKBWFLwOqPA", "8YA39bMIGdo", "FpSKu6sqcxA", "vcnp2Zz-vX", "V6lDWDN4DbF", "3IrbGFResnt", "aIUMq0YxTfF", "9W7gwnIY3QJ", "6pQ38aKvwdW", "JNRGq5hU_YG", "aN4csjSAc7", "sVrWPTs9vGI" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear ACs and all reviewers,\n\nWe sincerely thank the reviewers for their time and diligence in reviewing our submission. Since the deadline of discussion is approaching, we look forward to follow-up discussions needed from reviewers. \n\nSo far, we received three positive scores (6s) and one negative score (4). ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "nips_2022_pz2UcXyX0Cj", "FpSKu6sqcxA", "nips_2022_pz2UcXyX0Cj", "lKBWFLwOqPA", "8YA39bMIGdo", "6pQ38aKvwdW", "vcnp2Zz-vX", "sVrWPTs9vGI", "3IrbGFResnt", "aN4csjSAc7", "9W7gwnIY3QJ", "JNRGq5hU_YG", "nips_2022_pz2UcXyX0Cj", "nips_2022_pz2UcXyX0Cj", "nips_2022_pz2UcXyX0Cj", "nips_2022_pz...
nips_2022_yZ_JlZaOCzv
Are AlphaZero-like Agents Robust to Adversarial Perturbations?
The success of AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin. Given that the state space of Go is extremely large and a human player can play the game from any legal state, we ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions. In this paper, we first extend the concept of adversarial examples to the game of Go: we generate perturbed states that are ``semantically'' equivalent to the original state by adding meaningless moves to the game, and an adversarial state is a perturbed state leading to an undoubtedly inferior action that is obvious even for Go beginners. However, searching for adversarial states is challenging due to the large, discrete, and non-differentiable search space. To tackle this challenge, we develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space. This method can also be extended to other board games such as NoGo. Experimentally, we show that the actions taken by both the Policy-Value neural network (PV-NN) and Monte Carlo tree search (MCTS) can be misled by adding one or two meaningless stones; for example, on 58\% of the AlphaGo Zero self-play games, our method can make the widely used KataGo agent with 50 simulations of MCTS play a losing action by adding two meaningless stones. We additionally evaluated the adversarial examples found by our algorithm with amateur human Go players, and 90\% of the examples indeed led the Go agent to play an obviously inferior action.
Accept
This paper proposes a novel challenge setting: adversarial attacks on a discrete observation space sequential decision making problem which has been well studied in recent times — the game of Go. While there has been work in recent years on discrete domains such as language, the use of this setting and the notion of "semantic invariance" exploited in lieu of epsilon L-infinity constraints is intuitive and novel. The reviewers by and large quite liked the paper, with the average score primarily being drawn down by one slightly hostile reviewer. Looking over the discussion surrounding this review, I found the authors made a bona fide attempt to address the reviewer's concerns, and that the reviewer, while they did respond, did not engage with the counter-argument in a way that increased my trust in their review. I am happy to go with the majority vote here and recommend acceptance.
train
[ "DkfU7PoYXS7", "soYqJJpiL3Z", "SiQsI7oybMb", "9iX2Fn7DSC1", "7UQKhbaNCy", "5-HFj1o2I5s", "RbfWX5Nx6lu", "Fi42qslThIQ", "3Vf8nLKvqy", "x4a29ojFLrF", "XONZEbJyqlw", "kyW3KlnvM9O", "rwkWRUTUydm", "zZ98wuk1koK", "QojBEovV-fJ", "7fGJTstolV4", "AKBpImHKVGZ", "G60-DKPejRU", "d6gP7aNVLvb...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " Thank you for your suggestion. We agree that without the knowledge of Go, the adversarial example is way less impressive. We will add more background knowledge to the supplementary materials. Note that some information can be found in Appendix E Territory. Thank you again for the help in improving our paper. We h...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4, 5 ]
[ "soYqJJpiL3Z", "AKBpImHKVGZ", "9iX2Fn7DSC1", "7UQKhbaNCy", "5-HFj1o2I5s", "RbfWX5Nx6lu", "x4a29ojFLrF", "XONZEbJyqlw", "XONZEbJyqlw", "QojBEovV-fJ", "zZ98wuk1koK", "nips_2022_yZ_JlZaOCzv", "pfvJaUfAk2b", "pfvJaUfAk2b", "hyjdYU8zhEY", "d6gP7aNVLvb", "G60-DKPejRU", "nips_2022_yZ_JlZa...
nips_2022_wmdbwZz65FM
Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs
In principle, applying variational autoencoders (VAEs) to sequential data offers a method for controlled sequence generation, manipulation, and structured representation learning. However, training sequence VAEs is challenging: autoregressive decoders can often explain the data without utilizing the latent space, a failure mode known as posterior collapse. To mitigate this, state-of-the-art models `weaken' the `powerful decoder' by applying uniformly random dropout to the decoder input. We show theoretically that this removes pointwise mutual information provided by the decoder input, which is compensated for by utilizing the latent space. We then propose an adversarial training strategy to achieve information-based stochastic dropout. Compared to uniform dropout on standard text benchmark datasets, our targeted approach increases both sequence modeling performance and the information captured in the latent space.
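The uniform word dropout that the paper analyzes and then replaces is easy to state in code (a minimal sketch; the `<unk>` replacement token follows the usual convention):

```python
import random

def word_dropout(tokens, p, rng=random):
    """Uniform word dropout on decoder inputs (Bowman et al.): each token is
    replaced by <unk> with probability p, weakening the autoregressive decoder
    so the model must route information through the latent code. The paper
    replaces the uniform p with adversarially targeted, per-token dropout."""
    return ["<unk>" if rng.random() < p else t for t in tokens]

print(word_dropout("the cat sat on the mat".split(), p=0.4))
```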
Accept
The work analyzes the use of decoder input dropout in training sequence VAEs and proposes an adversarial dropout scheme in place of the typical uniform dropout. The paper is clearly written (reviewers Lidz, WbUQ), well-motivated (all reviewers), and provides insights into posterior collapse and word dropout (Lidz, WbUQ). All reviewers were concerned that the empirical performance improvement seems small, but in light of the writing, analysis, and careful experiments I nonetheless recommend acceptance.
train
[ "c9Ot-YwmP-X", "0THk--9xF7x", "2ldbRwRmG3K", "13a-HgQr5vw", "TbDk-MLYWf", "gDCAaSy1zvF", "3-_6gEkGExr", "njN3C_V_3_b", "KOmlrmYqkpV", "kGLPzlM9GwU", "QSxmQnx53OP", "T3r3LB7ZEv", "nQ07Pi1ETh", "gHCkncb4tgs", "0rihsqqxzSy", "IpmzWm2BFIO", "5as-pA-jsc_", "LFvE756Ss5N", "IYb8gan3epl"...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have verified the revised Figure 2. It clearly shows that the proposed AWD significantly improves KL/MI without sacrificing PPL/ELBO compared to word dropout. I appreciate the authors for the clarification with the additional results. Lastly, I recommend authors carefully consider how to present the experimenta...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "0THk--9xF7x", "2ldbRwRmG3K", "IYb8gan3epl", "LFvE756Ss5N", "nips_2022_wmdbwZz65FM", "IYb8gan3epl", "IYb8gan3epl", "IYb8gan3epl", "IYb8gan3epl", "IYb8gan3epl", "eKTkwjaYeuH", "eKTkwjaYeuH", "eKTkwjaYeuH", "eKTkwjaYeuH", "-IAry_ocigp", "-IAry_ocigp", "-IAry_ocigp", "-IAry_ocigp", ...
nips_2022_GiEnzxTnaMN
Wasserstein Iterative Networks for Barycenter Estimation
Wasserstein barycenters have become popular due to their ability to represent the average of probability measures in a geometrically meaningful way. In this paper, we present an algorithm to approximate the Wasserstein-2 barycenters of continuous measures via a generative model. Previous approaches rely on regularization (entropic/quadratic), which introduces bias, or on input-convex neural networks, which are not expressive enough for large-scale tasks. In contrast, our algorithm does not introduce bias and allows using arbitrary neural networks. In addition, based on the celebrity faces dataset, we construct the Ave, celeba! dataset, which can be used for quantitative evaluation of barycenter algorithms by using standard metrics of generative models such as FID.
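For reference, the objective being approximated is the standard Wasserstein-2 barycenter problem; per the meta-review below, the barycenter is parameterized as the pushforward of a generator, trained by alternating with the OT maps to each input measure:

```latex
% Wasserstein-2 barycenter of measures \mu_1, \dots, \mu_K with weights w_k:
\nu^\star = \operatorname*{arg\,min}_{\nu} \sum_{k=1}^{K} w_k\, \mathbb{W}_2^2(\nu, \mu_k),
\qquad w_k \ge 0, \quad \sum_{k=1}^{K} w_k = 1.
% Here \nu is parameterized as the pushforward G_\sharp \rho of a fixed latent
% distribution \rho through a generator network G.
```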
Accept
This paper proposes a new iterative method for Wasserstein barycenters based on a generator parametrization of the barycenter and a fixed-point method that alternates learning the generator and learning the OT maps from the barycenter to the measures. The work is empirical and lacks theory but proposes a new image benchmark on CelebA for barycenter evaluation. Reviewers were positive about the paper. Accept.
train
[ "1ylPvEfqXB-", "0Rl6OyzvJL_", "C4uSxuou6zV", "j_n9qD-1SV", "K5xoEJyfkk-", "jWBlv92og0", "CFTSsn_lMtP", "XndOUrp2yhO", "6fDfTG9G-ZJ", "Ns5bL89G5C", "RCaApafRVs8", "2Qa2QhdPiwY", "BvgBDu2GJzx", "VFLqrLmqDg-", "bWmEjz3lvnv", "q0W2A-7-7PS", "N9gqJsk4z7" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi! Thank you for your efforts. I'd like to increase my score. Please distribute your rebuttal discussions into the revision. Especially, please add a discussion of the inverse map. Now I still see in line 251, you have \"Our algorithm does not compute them (inverse map)\", this can be misleading to readers. You ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "0Rl6OyzvJL_", "jWBlv92og0", "j_n9qD-1SV", "VFLqrLmqDg-", "RCaApafRVs8", "XndOUrp2yhO", "6fDfTG9G-ZJ", "N9gqJsk4z7", "q0W2A-7-7PS", "bWmEjz3lvnv", "VFLqrLmqDg-", "BvgBDu2GJzx", "nips_2022_GiEnzxTnaMN", "nips_2022_GiEnzxTnaMN", "nips_2022_GiEnzxTnaMN", "nips_2022_GiEnzxTnaMN", "nips_2...
nips_2022_KETwimTQexH
FedPop: A Bayesian Approach for Personalised Federated Learning
Personalised federated learning (FL) approaches aim at collaboratively learning a machine learning model tailored for each client. Although promising advances have been made in this direction, most existing personalised FL works do not allow for uncertainty quantification, which is crucial in many applications. In addition, personalisation in the cross-device setting still involves important issues, especially for new clients or those having small data sets. This paper aims at filling this gap. To this end, we propose a novel methodology coined FedPop by recasting personalised FL into the population modeling paradigm where clients' models involve fixed common population parameters and random individual ones, aiming at explaining data heterogeneity. To derive convergence guarantees for our scheme, we introduce a new class of federated stochastic optimisation algorithms which rely on Markov chain Monte Carlo methods. Compared to existing personalised FL methods, the proposed methodology has important benefits: it is robust to client drift, practical for inference on new clients, and above all, enables uncertainty quantification under mild computational and memory overheads. We provide non-asymptotic convergence guarantees for the proposed algorithms and illustrate their performance on various personalised federated learning tasks.
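The population-modeling recast can be summarized schematically (a sketch of the hierarchy only; the exact prior and likelihood factorization used by FedPop may differ):

```latex
% Population model over N clients: shared parameters \phi, individual z_i.
z_i \sim p(z \mid \phi), \qquad D_i \sim p(D \mid \phi, z_i), \quad i = 1, \dots, N.
% Personalisation and uncertainty quantification for client i come from the
% posterior p(z_i \mid D_i, \phi) \propto p(D_i \mid \phi, z_i)\, p(z_i \mid \phi),
% which the proposed federated algorithms approximate with MCMC.
```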
Accept
The reviewers agree that the paper makes significant progress on uncertainty quantification for personalized federated learning. This is clearly a fundamental problem. The new approach is shown to have theoretical guarantees. The paper is easy to read. Based on the above, I recommend acceptance. Meanwhile, please carefully revise the paper and incorporate new experimental results in the final version according to the reviews.
train
[ "CqSsGjS2S2x", "lJBsNoNk4H6", "A1Dtx3dD2j6", "bJWSMkGJ3GT", "EdtcuLs_q-2", "UytVyV3UNhg", "VqY6eR1Xs-Y", "DYmAhpTErms", "V3soUkJNWGi", "B4mpRCUl0X6", "kIJ8wWs9ffh", "asFsE9L3SY6" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the additional information on the cross-devices experiments. It is a little surprising to see that on StackOverflow, the personalized parameters $z$ is even larger than $\\phi$ (and since $M$ is 50, the proposed algorithm requires 50 times memory than other algorithms that only need a single model on e...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "lJBsNoNk4H6", "bJWSMkGJ3GT", "DYmAhpTErms", "EdtcuLs_q-2", "asFsE9L3SY6", "VqY6eR1Xs-Y", "kIJ8wWs9ffh", "V3soUkJNWGi", "B4mpRCUl0X6", "nips_2022_KETwimTQexH", "nips_2022_KETwimTQexH", "nips_2022_KETwimTQexH" ]
nips_2022_ATiz_CDA66
AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
Pretraining Vision Transformers (ViTs) has achieved great success in visual recognition. A natural next step is to adapt a ViT to various image and video recognition tasks. The adaptation is challenging because of heavy computation and memory storage. Each model needs an independent and complete finetuning process to adapt to different tasks, which limits its transferability to different visual domains. To address this challenge, we propose an effective adaptation approach for Transformer, namely AdaptFormer, which can adapt the pre-trained ViTs into many different image and video tasks efficiently. It possesses several benefits that make it more appealing than prior art. Firstly, AdaptFormer introduces lightweight modules that only add less than 2% extra parameters to a ViT, while it is able to increase the ViT's transferability without updating its original pre-trained parameters, significantly outperforming the existing 100\% fully fine-tuned models on action recognition benchmarks. Secondly, it is plug-and-play in different Transformers and scales to many visual tasks. Thirdly, extensive experiments on five image and video datasets show that AdaptFormer largely improves ViTs in the target domains. For example, when updating just 1.5% extra parameters, it achieves about 10% and 19% relative improvement compared to the fully fine-tuned models on Something-Something~v2 and HMDB51, respectively. Code is available at https://github.com/ShoufaChen/AdaptFormer.
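A sketch of the kind of module the abstract describes: a small bottleneck branch run in parallel to a frozen MLP block, with a scaling factor s. The 64-dimensional bottleneck and zero-initialized up-projection follow design choices discussed in the reviews below; exact placement (layer norms, residual connections) is omitted, so treat this as illustrative rather than the exact architecture.

```python
import torch.nn as nn

class ParallelAdapter(nn.Module):
    """Trainable bottleneck branch added in parallel to a frozen ViT MLP."""
    def __init__(self, dim=768, bottleneck=64, s=0.1):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)
        self.s = s
        nn.init.zeros_(self.up.weight)  # branch starts as a no-op
        nn.init.zeros_(self.up.bias)

    def forward(self, x, frozen_mlp):
        # Frozen branch + scaled trainable branch; only the adapter is trained.
        return frozen_mlp(x) + self.s * self.up(self.act(self.down(x)))
```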
Accept
Authors introduce lightweight parallel FC layers to the MLP layers for fine-tuning, freezing all other parameters of the model. Experiments across 5 datasets (3 Image: CIFAR-100, SVHN, Food-101 -- 2 Video: SSv2, HMDB51) show that performance is similar to full-finetuning while changing less than 2% of the parameters. Some performance improvements are noted in the video domain. Pros: - [R/AC] Paper is well written - [R/AC] Ablation study is comprehensive - [R/AC] Performance is impressive - [R/AC] Efficiency gains for finetuning are significant. - [R/AC] Modifications are simple and can be applied to a variety of ViT architectures. Cons: - [R/AC] The novelty is low. In regards to its motivation, the work shares a great deal of similarity to LoRA. In regards to its technical approach, only minor adjustments are made in comparison to LoRA, with the studied application domain being vision. - [R/AC] Authors do not provide a sufficiently clear description of how learning parameters were chosen across AdaptFormer and full fine-tune approaches. Were parameters independently optimized, or fixed? - [AC] Method seems very sensitive to weight initialization and the scaling mixing parameter s. - [R] Efficiency was mainly measured as parameters. More information regarding latency and FLOPS should be added. Authors have addressed this concern by including additional information, and should ensure that this new information is in the final manuscript. - [R] Motivation behind different weight initialization schemes is not clear. Authors have sufficiently answered with more information. - [R] Need more analysis on why 64 dimensions is the optimal point. What is the explanation? Authors didn't really answer this question other than providing an ablation. The insight behind the result is not clear. - [R] Evaluation is not thorough enough. Some of the datasets used are saturated. Authors added one dataset (NUS-WIDE) to the appendix. - [R] Rationale of design is unclear. [AC] This approach is from a class of similar approaches previously studied. Reviewers also answered this part well. Reviewer consensus is that this work in its current form is borderline, but the majority of reviewers lean toward accept. AC has concerns about novelty of the method and missing details about how finetuning parameters were chosen across the various experiments. AC will maintain the reviewer tendency toward accept, though will comment that the work can be improved by giving full details about how fine-tuning parameters were chosen across experiments, and by confirming that the authors optimized those parameters independently in each experiment. AC Rating: Borderline Accept
train
[ "01o30LMSfCj", "eVKbJYYaxwe", "7mMSQxVJz1", "R28Sg5ut4Eh", "7Ay09ORcb3l", "djPvD9-o9nE", "gtt-zb8nlw", "KVQqpZTTthC", "L5kqoIjTSez", "jcnmUm17hBK", "1f92I7C7PMH", "pWsInQK8z6", "p846uVmpc0m", "za__NUSxTq", "nppq4MauWfu", "sjKx6xuHpQ", "qUCEbeXf0iah", "dMvqkA1zBaI", "nQs_wJRusNj",...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " \n**Q2. Clarification of the performance difference with ImageNet and Kinetics pre-trained weights**\n\nA2. The type of spatiotemporal attention (**divided** *vs.* **joint**) determines whether the performance of the model pre-trained on ImageNet can outperform the model pre-trained on Kinetics.\n\n- A similar ph...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "eVKbJYYaxwe", "djPvD9-o9nE", "qUCEbeXf0iah", "KVQqpZTTthC", "KVQqpZTTthC", "p846uVmpc0m", "H5aeyVhn2OP", "nQs_wJRusNj", "jcnmUm17hBK", "1f92I7C7PMH", "nppq4MauWfu", "nips_2022_ATiz_CDA66", "za__NUSxTq", "H5aeyVhn2OP", "sjKx6xuHpQ", "wVMsH7dQLXN", "dMvqkA1zBaI", "x8eyCd_Zxrt", "u...
nips_2022_dNXg-h6YX9h
Active Learning of Classifiers with Label and Seed Queries
We study exact active learning of binary and multiclass classifiers with margin. Given an $n$-point set $X \subset \mathbb{R}^m$, we want to learn an unknown classifier on $X$ whose classes have finite strong convex hull margin, a new notion extending the SVM margin. In the standard active learning setting, where only label queries are allowed, learning a classifier with strong convex hull margin $\gamma$ requires in the worst case $\Omega\big(1+\frac{1}{\gamma}\big)^{\frac{m-1}{2}}$ queries. On the other hand, using the more powerful \emph{seed} queries (a variant of equivalence queries), the target classifier could be learned in $O(m \log n)$ queries via Littlestone's Halving algorithm; however, Halving is computationally inefficient. In this work we show that, by carefully combining the two types of queries, a binary classifier can be learned in time $\operatorname{poly}(n+m)$ using only $O(m^2 \log n)$ label queries and $O\big(m \log \frac{m}{\gamma}\big)$ seed queries; the result extends to $k$-class classifiers at the price of a $k!k^2$ multiplicative overhead. Similar results hold when the input points have bounded bit complexity, or when only one class has strong convex hull margin against the rest. We complement the upper bounds by showing that in the worst case any algorithm needs $\Omega\big(k m \log \frac{1}{\gamma}\big)$ seed and label queries to learn a $k$-class classifier with strong convex hull margin $\gamma$.
Accept
The problem is well introduced and the main results are clearly presented. While there are no experimental results, the theoretical contribution seems strong. Please add the main open problems to the final version.
val
[ "-hfN-T4ZXbl", "fdUhrLgxrX5", "M8kS6xcgth", "0AZXeE5KrVq", "Sgzra-umxeN", "YB8u6J-Hsb", "T-lSaR1PRD8" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for appreciating our contribution even though it does not fall into their area of expertise.\n", " R: “No experimental results limit the impact of the work.”\n\nA: Our work has a theoretical goal: understanding whether, by carefully combining two types of query, one can bypass the well-kno...
[ -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, 3, 3, 1 ]
[ "T-lSaR1PRD8", "YB8u6J-Hsb", "Sgzra-umxeN", "nips_2022_dNXg-h6YX9h", "nips_2022_dNXg-h6YX9h", "nips_2022_dNXg-h6YX9h", "nips_2022_dNXg-h6YX9h" ]
nips_2022_pm8Y8unXkkJ
Optimal Binary Classification Beyond Accuracy
The vast majority of statistical theory on binary classification characterizes performance in terms of accuracy. However, accuracy is known in many cases to poorly reflect the practical consequences of classification error, most famously in imbalanced binary classification, where data are dominated by samples from one of two classes. The first part of this paper derives a novel generalization of the Bayes-optimal classifier from accuracy to any performance metric computed from the confusion matrix. Specifically, this result (a) demonstrates that stochastic classifiers sometimes outperform the best possible deterministic classifier and (b) removes an empirically unverifiable absolute continuity assumption that is poorly understood but pervades existing results. We then demonstrate how to use this generalized Bayes classifier to obtain regret bounds in terms of the error of estimating regression functions under uniform loss. Finally, we use these results to develop some of the first finite-sample statistical guarantees specific to imbalanced binary classification. Specifically, we demonstrate that optimal classification performance depends on properties of class imbalance, such as a novel notion called Uniform Class Imbalance, that have not previously been formalized. We further illustrate these contributions numerically in the case of $k$-nearest neighbor classification.
Accept
After discussion with the authors, reviewers are positive about this paper. Some loose ends remain, but it is impressive that the authors have discovered something new about binary classification, the single most intensively studied topic in all of ML.
train
[ "VWfNNh-1E6Y", "HPlx0yqm8tS", "0tzZ5R9EhGK", "O5Lg75MtVc2", "pxTs5LEFmV3", "7xIHC8rR4w", "ehrNqXDNV4Z", "G9Aw3mbRaHs", "QSOxJrxx6Im", "WLgwmlY57A6V", "0DMvv3B4DgS", "Y0VXSZNF8ub", "DkFreeDmuFH", "2uJMFU4vVHs", "CG4fjOAhEW-", "9JCrORzpDGS" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer rWrH,\n\nWe wanted to express our sincere thanks for your careful review of the paper, especially the proof of Theorem 3. It is rare to have a reviewer check the mathematical details so carefully, and the points you raised definitely clarified the writing.\n\nSince you carefully checked the mathemat...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 1, 3 ]
[ "0tzZ5R9EhGK", "O5Lg75MtVc2", "G9Aw3mbRaHs", "G9Aw3mbRaHs", "Y0VXSZNF8ub", "9JCrORzpDGS", "DkFreeDmuFH", "CG4fjOAhEW-", "9JCrORzpDGS", "CG4fjOAhEW-", "2uJMFU4vVHs", "DkFreeDmuFH", "nips_2022_pm8Y8unXkkJ", "nips_2022_pm8Y8unXkkJ", "nips_2022_pm8Y8unXkkJ", "nips_2022_pm8Y8unXkkJ" ]
nips_2022_PW1VAoxeOU
On Margin Maximization in Linear and ReLU Networks
The implicit bias of neural networks has been extensively studied in recent years. Lyu and Li (2019) showed that in homogeneous networks trained with the exponential or the logistic loss, gradient flow converges to a KKT point of the max margin problem in parameter space. However, that leaves open the question of whether this point will generally be an actual optimum of the max margin problem. In this paper, we study this question in detail, for several neural network architectures involving linear and ReLU activations. Perhaps surprisingly, we show that in many cases, the KKT point is not even a local optimum of the max margin problem. On the flip side, we identify multiple settings where a local or global optimum can be guaranteed.
Accept
The reviewers reached a consensus that the paper can be accepted to NeurIPS. The AC notices a few weaknesses pointed out by the reviewers and personally thinks the paper's results are somewhat expected. Nevertheless, the AC would like to recommend acceptance.
test
[ "3dUnbnqTPGN", "pi3Z_HIsooW", "Gdl9lDjKJW9", "bNV6LcFDVmR", "fuYdM7K8zkI", "E0lTxENKWxT", "F3rR-DghYGc", "CHT5vRspGOV", "Dw1J5x0ky5-" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The paper is still good and I will keep my score unchanged.\n\nA minor comment is that Corollary 4.5 in Lyu & Li (2019) can imply Theorem 5.2 (and also the one in the ReLU case) in the strict sense, although their corollary considers an optimization problem w.r.t. all the parameters. This is because one can write...
[ -1, -1, -1, -1, -1, 6, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "fuYdM7K8zkI", "Dw1J5x0ky5-", "CHT5vRspGOV", "F3rR-DghYGc", "E0lTxENKWxT", "nips_2022_PW1VAoxeOU", "nips_2022_PW1VAoxeOU", "nips_2022_PW1VAoxeOU", "nips_2022_PW1VAoxeOU" ]
nips_2022_XdMusblCkB
Causality Preserving Chaotic Transformation and Classification using Neurochaos Learning
Discovering cause and effect variables from observational data is an important but challenging problem in science and engineering. In this work, a recently proposed brain-inspired learning algorithm, namely \emph{Neurochaos Learning} (NL), is used for the classification of cause and effect time series generated using coupled autoregressive processes, coupled 1D chaotic skew tent maps, coupled 1D chaotic logistic maps and a real-world prey-predator system. In the case of coupled skew tent maps, the proposed method consistently outperforms a five-layer Deep Neural Network (DNN) and Long Short Term Memory (LSTM) architecture for unidirectional coupling coefficient values ranging from $0.1$ to $0.7$. Further, we investigate the preservation of causality in the feature-extracted space of NL using Granger Causality for coupled autoregressive processes and Compression-Complexity Causality for coupled chaotic systems and the real-world prey-predator dataset. Unlike DNN, LSTM and 1D Convolutional Neural Network, it is found that NL preserves the inherent causal structures present in the input time series data. These findings are promising for the theory and applications of causal machine learning and open up the possibility to explore the potential of NL for more sophisticated causal learning tasks.
Accept
This paper has been thoroughly evaluated by four competent reviewers. One of them voted strongly for accepting it, while the other three gave borderline rejections. The work tackles an important problem, and it is well written up. The main point of controversy among the reviewers is whether the proposed method can or cannot preserve causality. Based on my understanding, it appears designed for just that. The authors have provided extensive rebuttals and, in my opinion, have addressed most of the issues brought up in the initial review. In summary, even though this paper should be rejected based on a straight vote of the reviewers, I would like to encourage the program committee to consider accepting it, provided that there is enough room for it in the program.
train
[ "qtM0qFUP5d6", "XdbkNwGSUJ3", "w4xtQmBcKHF", "azsLoD4Oe2i", "64Y0ch2EOs", "WbIfpVnU_f", "8WnUP2Oby1d", "52zlMwPBFBt", "LdCJXXZzzoI", "rjPxoTcy5yW", "j1_TcYDEc7", "oDkTHJeChnL", "XJcwpZIfnXY", "eaJDasM7t_R", "HcnejdkUR9S", "HQ_a4mzga6L", "yGvvr6A5u6d" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I was quite clear and confident in y assessment of the paper. I don't think additional experiments are necessary on other architectures to prove causality preservation but strengthened the paper nonetheless. I will stand by initial rating. This is an outstanding paper.", " **Addressing Weakness 3**: We understa...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "j1_TcYDEc7", "XJcwpZIfnXY", "j1_TcYDEc7", "8WnUP2Oby1d", "WbIfpVnU_f", "eaJDasM7t_R", "52zlMwPBFBt", "HcnejdkUR9S", "rjPxoTcy5yW", "j1_TcYDEc7", "oDkTHJeChnL", "HQ_a4mzga6L", "yGvvr6A5u6d", "nips_2022_XdMusblCkB", "nips_2022_XdMusblCkB", "nips_2022_XdMusblCkB", "nips_2022_XdMusblCkB...
nips_2022_mn1MWh0iDCA
A Closer Look at Offline RL Agents
Despite recent advances in the field of Offline Reinforcement Learning (RL), less attention has been paid to understanding the behaviors of learned RL agents. As a result, there remain some gaps in our understanding, i.e., why is one offline RL agent more performant than another? In this work, we first introduce a set of experiments to evaluate offline RL agents, focusing on three fundamental aspects: representations, value functions and policies. Counterintuitively, we show that a more performant offline RL agent can learn relatively low-quality representations and inaccurate value functions. Furthermore, we showcase that the proposed experiment setups can be effectively used to diagnose the bottleneck of offline RL agents. Inspired by the evaluation results, a novel offline RL algorithm is proposed by a simple modification of IQL and achieves SOTA performance. Finally, we investigate when a learned dynamics model is helpful to model-free offline RL agents, and introduce an uncertainty-based sample selection method to mitigate the problem of model noise. Code is available at: https://anonymous.4open.science/r/RIQL-BE73.
Accept
The main strengths of this paper are that (1) it provides some interesting analysis that leads to some somewhat surprising findings, and (2) it presents and evaluates some new technical algorithmic ideas based on this analysis that lead to improved performance. After the author discussion, the main weakness is that the new ant-maze results are somewhat disappointing, showing that the algorithmic ideas don't improve over IQL on a more complex problem setting. The ant maze tasks are a lot more interesting and complex than the standard locomotion tasks, and so this is a fairly major weakness. Of lesser importance, the title is not particularly descriptive and could be used to describe a lot of papers. So I would like to suggest that the authors make the title more specific to the contributions of this paper. Overall, the reviewers and AC think the strengths outweigh the weaknesses, especially since the analysis is interesting on its own and since there is some new analysis on more complex image-based settings, irrespective of the technical ideas only providing benefits on simplistic locomotion tasks. Nonetheless, we encourage the authors to use our feedback to further improve the paper.
train
[ "AyJqrQPfHXC", "YLGbJZLRkfi", "CfNuVW1kag", "B1Pxz2uGMAl", "p8mYOhmXFl", "Qb2pa5K2fFJ", "CtelRka--Lk", "J049lXeYwwu", "WLVUmOINP0M", "fIb-HvuKPKP", "YbkzwXX7Yab" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > Experiment on the `medium-replay` dataset\n\nWe did include the medium-replay dataset in the experiments, which is named as `-med-rep-v2`. We are sorry for the confusion, and we will clarify the datasets we used in the paper.\n\n> Problem of using PBT-based RL\n\nWe agree that PBT-based RL doesn't necessarily p...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "YLGbJZLRkfi", "CtelRka--Lk", "J049lXeYwwu", "YbkzwXX7Yab", "fIb-HvuKPKP", "WLVUmOINP0M", "WLVUmOINP0M", "nips_2022_mn1MWh0iDCA", "nips_2022_mn1MWh0iDCA", "nips_2022_mn1MWh0iDCA", "nips_2022_mn1MWh0iDCA" ]
nips_2022_-H6kKm4DVo
Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization?
As the scope of machine learning broadens, we observe a recurring theme of *algorithmic monoculture*: the same systems, or systems that share components (e.g. datasets, models), are deployed by multiple decision-makers. While sharing offers advantages like amortizing effort, it also has risks. We introduce and formalize one such risk, *outcome homogenization*: the extent to which particular individuals or groups experience the same outcomes across different deployments. If the same individuals or groups exclusively experience undesirable outcomes, this may institutionalize systemic exclusion and reinscribe social hierarchy. We relate algorithmic monoculture and outcome homogenization by proposing the *component sharing hypothesis*: if algorithmic systems are increasingly built on the same data or models, then they will increasingly homogenize outcomes. We test this hypothesis on algorithmic fairness benchmarks, demonstrating that increased data-sharing reliably exacerbates homogenization and individual-level effects generally exceed group-level effects. Further, given the current regime in AI of foundation models, i.e. pretrained models that can be adapted to myriad downstream tasks, we test whether model-sharing homogenizes outcomes across tasks. We observe mixed results: we find that for both vision and language settings, the specific methods for adapting a foundation model significantly influence the degree of outcome homogenization. We also identify societal challenges that inhibit the measurement, diagnosis, and rectification of outcome homogenization in deployed machine learning systems.
Accept
This paper has the potential to catalyse a new and important line of research within algorithmic fairness. It does that (in a simple, yet interesting way) by identifying that the key to the concerns about algorithmic monoculture lies in homogeneous outcomes and by defining intuitively justified metrics of homogeneous outcomes at the individual and group levels. It then characterises the potential outcome homogenisation in models trained using the same/similar datasets and in models fine-tuned using the same underlying foundational models (a trend that is increasing, of late). To be fair, the reviewers rightly criticised the paper for a somewhat limited set of experiments, and the conclusions drawn from them do appear rather preliminary. However, this can be partly justified as the paper is exploring previously unexplored territory, and the potential benefits of accepting the paper and the follow-on work that it will very likely trigger far outweigh the potential risks of not accepting the paper (after all, the experiments are rigorous, even if limited / preliminary, and the authors draw appropriate conclusions).
train
[ "_K84v6dP1RB", "985iPxXakSS", "xoGbDT3yzJs", "VcxOY4OXbtH", "7gFItbDzU3p", "WF03pm2xrUn", "VVcO2_iOey", "-q4YV9TpmDo", "aGnQHoHjsSo", "ifJt4N3Z20a", "Spc_lPiEqcA", "FPMksSsdu6L", "_e06JnB3uz", "Sdgeo-D-bhO", "f7p5VcdApOw", "HoKURIt1DX8" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I think this gets to the core of the issue. This discussion here is the nuance I was hoping to see. I would encourage you to include this framing in your paper. Thanks for the update on the NLP experiments. I’ll move my score to recommend acceptance.", " Thanks for the followup, we appreciate your engagement an...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "985iPxXakSS", "xoGbDT3yzJs", "-q4YV9TpmDo", "HoKURIt1DX8", "f7p5VcdApOw", "Sdgeo-D-bhO", "HoKURIt1DX8", "aGnQHoHjsSo", "ifJt4N3Z20a", "f7p5VcdApOw", "FPMksSsdu6L", "_e06JnB3uz", "Sdgeo-D-bhO", "nips_2022_-H6kKm4DVo", "nips_2022_-H6kKm4DVo", "nips_2022_-H6kKm4DVo" ]
nips_2022_XIDSEPE68yO
Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization
We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e.~where each instantaneous loss is a scalar convex function of a linear function. We show that in this setting, early stopped Gradient Descent (GD), without any explicit regularization or projection, ensures excess error at most $\varepsilon$ (compared to the best possible with unit Euclidean norm) with an optimal, up to logarithmic factors, sample complexity of $\tilde{O}(1/\varepsilon^2)$ and only $\tilde{O}(1/\varepsilon^2)$ iterations. This contrasts with general stochastic convex optimization, where $\Omega(1/\varepsilon^4)$ iterations are needed (Amir et al., 2021). The lower iteration complexity is ensured by leveraging uniform convergence rather than stability. But instead of uniform convergence in a norm ball, which we show can guarantee suboptimal learning using $\Theta(1/\varepsilon^4)$ samples, we rely on uniform convergence in a distribution-dependent ball.
Accept
This work is concerned with an analysis of early-stopped gradient descent in a stochastic setting and shows improved complexity by relying on a distribution-dependent ball for the parameter. The reviewers all think that this is an interesting submission that should be accepted, and I agree with them.
train
[ "Y85gnkD8V1u", "4AAWe8sCKIb", "-t8rWJ1qBVW", "tAGquvGI8G", "7vKoDudVQa", "d-w231Xaoud", "a0MAMPr3ML6", "R6-gF8-6-27", "wtYZYnHvm5d", "MEh2XNeAD1z", "wppw6oGG85", "OjJuowjO_1X", "9OAIZwV5wOF", "fEuJ9onKR1I" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " thank you for your answers. \n\nI maintain my score ", " > Since it is standard, can you provide some standard references?\n\nIn [33], Lemma 14.1 you can find such an example where the algorithm output is not necessarily bounded. Specifically, they consider gradient steps without projection, similarly to our wo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "4AAWe8sCKIb", "tAGquvGI8G", "wtYZYnHvm5d", "wppw6oGG85", "fEuJ9onKR1I", "OjJuowjO_1X", "MEh2XNeAD1z", "nips_2022_XIDSEPE68yO", "fEuJ9onKR1I", "9OAIZwV5wOF", "OjJuowjO_1X", "nips_2022_XIDSEPE68yO", "nips_2022_XIDSEPE68yO", "nips_2022_XIDSEPE68yO" ]
nips_2022_sj9l1JCrAk6
Federated Submodel Optimization for Hot and Cold Data Features
We focus on federated learning in practical recommender systems and natural language processing scenarios. The global model for federated optimization typically contains a large and sparse embedding layer, while each client’s local data tend to interact with only part of the features, updating only a small submodel with the feature-related embedding vectors. We identify a new and important issue that distinct data features normally involve different numbers of clients, generating the differentiation of hot and cold features. We further reveal that the classical federated averaging algorithm (FedAvg) or its variants, which randomly select clients to participate and uniformly average their submodel updates, will be severely slowed down, because different parameters of the global model are optimized at different speeds. More specifically, the model parameters related to hot (resp., cold) features will be updated quickly (resp., slowly). We thus propose federated submodel averaging (FedSubAvg), which introduces the number of feature-related clients as the metric of feature heat to correct the aggregation of submodel updates. We prove that due to the dispersion of feature heat, the global objective is ill-conditioned, and FedSubAvg works as a suitable diagonal preconditioner. We also rigorously analyze FedSubAvg’s convergence rate to stationary points. We finally evaluate FedSubAvg over several public and industrial datasets. The evaluation results demonstrate that FedSubAvg significantly outperforms FedAvg and its variants.
Accept
This paper considers a particular FL scenario, where the model includes a large embedding layer as is typical in NLP and recommendation models. To make training feasible or more efficient, the FedSubAvg method is proposed. In particular, it deals with the setting where not all features are equally encountered in training data. This is leveraged to reduce communication and computation overhead, and also to improve optimization dynamics. The proposed approach comes with theoretical guarantees, and the paper also provides a thorough numerical evaluation demonstrating benefits over other approaches. The reviewers raised concerns about the relevance and potential narrowness of the setup and assumptions, and about whether the proposed FedSubAvg approach would be compatible with privacy-enhancing technologies like secure aggregation and differential privacy. It is clear that the setup considered is indeed relevant given the prevalence of models with large embedding layers in NLP and recommendation models, and the usefulness of such models in several applications. Given this, it isn't necessary for the authors to demonstrate any relevance to training standard MLPs since that isn't the focus and no claims are made in the paper about such architectures. The authors' responses were also convincing that the approach can be made compatible with DP and secure aggregation in a reasonable way. I'm happy to recommend that this paper be accepted. When preparing the camera ready, to make the paper accessible to a broader audience, it would be helpful to include (in the intro, or early in the paper) additional material and references to motivate the relevance of models with large embedding layers, in addition to the key revisions already made in response to the initial reviews.
train
[ "vt7b2J2qMC", "Yrn7DQIby5Q", "I20zQC5DNhA", "8WqbI_DG3w9", "dU7qutabIm7", "szmb_7QFYlm", "3TvAF5nGCi5v", "a-bUfoeYU9Y", "Xaj8pBNeZDd", "uPoFHtGXLh7", "NPfquCVal7Y", "X5U6NsQgrsX" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for reading our response near the end of the rebuttal phase. We believe that we have addressed all your comments quite well. We stress some key points as follows.\n\nFirst, we stress that NLP and recommendation have been recognized as **quite important fields** of deep learning **in both academia and in...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "Yrn7DQIby5Q", "a-bUfoeYU9Y", "nips_2022_sj9l1JCrAk6", "uPoFHtGXLh7", "X5U6NsQgrsX", "NPfquCVal7Y", "uPoFHtGXLh7", "Xaj8pBNeZDd", "nips_2022_sj9l1JCrAk6", "nips_2022_sj9l1JCrAk6", "nips_2022_sj9l1JCrAk6", "nips_2022_sj9l1JCrAk6" ]
nips_2022_215KQFiU65l
Parameter-free Dynamic Graph Embedding for Link Prediction
Dynamic interaction graphs have been widely adopted to model the evolution of user-item interactions over time. There are two crucial factors when modelling user preferences for link prediction in dynamic interaction graphs: 1) collaborative relationship among users and 2) user personalized interaction patterns. Existing methods often implicitly consider these two factors together, which may lead to noisy user modelling when the two factors diverge. In addition, they usually require time-consuming parameter learning with back-propagation, which is prohibitive for real-time user preference modelling. To this end, this paper proposes FreeGEM, a parameter-free dynamic graph embedding method for link prediction. Firstly, to take advantage of the collaborative relationships, we propose an incremental graph embedding engine to obtain user/item embeddings, which is an Online-Monitor-Offline architecture consisting of an Online module to approximately embed users/items over time, a Monitor module to estimate the approximation error in real time and an Offline module to calibrate the user/item embeddings when the online approximation errors exceed a threshold. Meanwhile, we integrate attribute information into the model, which enables FreeGEM to better model users belonging to some under represented groups. Secondly, we design a personalized dynamic interaction pattern modeller, which combines dynamic time decay with attention mechanism to model user short-term interests. Experimental results on two link prediction tasks show that FreeGEM can outperform the state-of-the-art methods in accuracy while achieving over 36X improvement in efficiency. All code and datasets can be found in https://github.com/FudanCISL/FreeGEM.
Accept
Reviewers were overall on the side of acceptance although not strongly so, with one reviewer being only borderline. Several positive aspects of the paper were appreciated:
- The idea was considered novel and the motivation clear.
- The combination of matrix factorization with dynamic graph-structured data was appreciated, and the methodological use of the two modules and the online-monitor-offline architecture were considered novel.
- The efficiency improvement was appreciated.
- The experiments were considered comprehensive and sufficient.
- The paper was considered well organized.

On the negative side:
- Only addressing link prediction was considered too limited.
- The method was criticized as still requiring search across hyperparameters.
- More clarification of the methodology was desired, and the role of the different submodules compared to functionalities in other methods was considered hard to understand.
- Lack of provided code and data sets was criticized; a partial code release was provided by authors in their response.
- [uRAL] Potential negative social impacts for underrepresented groups could have been mentioned.

Overall, I think this paper can be acceptable for NeurIPS if the authors take the reviewer comments and the follow-up discussion into account in their final manuscript.
train
[ "LMjgsS9g25", "Q9zsY_6K2C5", "55gpUYIclwj", "WuRlytZsOBv", "tT4vFKC2eyZ", "h7sU0c9ab5-", "NXmkfLNzHQ6", "OurmwGqSKZ_", "IsWdg-wuMn", "eabi3YsO5Bj", "yMNabx-77NN", "i-NbJg3cFKo", "CRNH67PqTnA", "RQcSU2ZWcJ7", "GR5c25b9xBy" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarification and efforts to address my comments.", " Patience :)", " Dear Reviewer uRAL,\n\nOnce again, we would like to thank you for your constructive comments in your initial reviews. We hope that we have addressed all the concerns raised in your reviews in the rebuttal. Please do let us...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "NXmkfLNzHQ6", "55gpUYIclwj", "CRNH67PqTnA", "tT4vFKC2eyZ", "h7sU0c9ab5-", "GR5c25b9xBy", "OurmwGqSKZ_", "IsWdg-wuMn", "RQcSU2ZWcJ7", "yMNabx-77NN", "CRNH67PqTnA", "nips_2022_215KQFiU65l", "nips_2022_215KQFiU65l", "nips_2022_215KQFiU65l", "nips_2022_215KQFiU65l" ]
nips_2022_ldl2V3vLZ5
S3GC: Scalable Self-Supervised Graph Clustering
We study the problem of clustering graphs with additional side-information of node features. The problem is extensively studied, and several existing methods exploit Graph Neural Networks to learn node representations. However, most of the existing methods focus on generic representations instead of their cluster-ability or do not scale to large scale graph datasets. In this work, we propose S3GC which uses contrastive learning along with Graph Neural Networks and node features to learn clusterable features. We empirically demonstrate that S3GC is able to learn the correct cluster structure even when graph information or node features are individually not informative enough to learn correct clusters. Finally, using extensive evaluation on a variety of benchmarks, we demonstrate that S3GC is able to significantly outperform state-of-the-art methods in terms of clustering accuracy -- with as much as 5% gain in NMI -- while being scalable to graphs of size 100M.
Accept
TL;DR: Accept if there is room.

The paper proposes a Scalable Self-Supervised method that uses graph neural networks (a 1-layer graph convolutional network to encode feature and structural information), diffusion augmentations, and contrastive learning for graph clustering. The work is more applied, the baselines are somewhat weak (but these are the ones that scale), and overall there is nothing ground-breaking about this work. Generally NeurIPS does not accept this type of work, but it could be interesting to the part of the community focused on scalability.

During the rebuttal almost all reviewer concerns were addressed. The authors did a good job with their experiments. It worries me that the authors answered the questions but did not make significant changes to their draft (despite saying they would do that). I was inclined to reject the paper because of the absence of updates, but I will trust the authors' word this time. But if the discussed changes are not implemented there should be consequences.

"Our method is also doing positional encoding similar to node2vec, as we are forming positives/negatives in our contrastive learning formulation based on some kind of random walk." => I read it again and I highly doubt this statement. Contrastive learning does not necessarily create positional representations. Please clearly state the type of embedding and add a proof to your paper (it can be added in the appendix).
train
[ "BomXXp4y9F", "UOLBhg5QIMw", "O1ZTHYyuLP", "Xrehr73jRMv", "P8c6GdBT_fK", "xz6yAMCuiP", "_cq20AQr7SI", "i9mefNrAwSO", "qixrCE9QaNN", "MNxCKVYeFl", "IH-bakLDgnxJ", "FItDAa11XJy", "klTE4b-ynx", "b3wjkca9yhO", "JqJ9Z5Y9Kc8V", "nZ4hd8NK4nKs", "vODeXLNavtv", "oeU1I2kEk9", "gTQS0gUorSk"...
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", ...
[ " I thank the authors for reasonably responding to my queries and providing a detailed response. I have updated my score accordingly.", " Thanks for the feedback. There exists theory studying the effectiveness of each of the S3GC components individually and in different settings [1,2,3,4], but none on studying th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "_cq20AQr7SI", "qixrCE9QaNN", "klTE4b-ynx", "IH-bakLDgnxJ", "b3wjkca9yhO", "JqJ9Z5Y9Kc8V", "JqJ9Z5Y9Kc8V", "FItDAa11XJy", "vODeXLNavtv", "b3wjkca9yhO", "gTQS0gUorSk", "nips_2022_ldl2V3vLZ5", "gTQS0gUorSk", "oeU1I2kEk9", "wv69iKkd7nJ", "nPUCcVZ5Oo", "nPUCcVZ5Oo", "5RGX04GsoC", "5R...
nips_2022_kHNKDNLVp1E
Consistent Sufficient Explanations and Minimal Local Rules for explaining the decision of any classifier or regressor
To explain the decision of any regression and classification model, we extend the notion of probabilistic sufficient explanations (P-SE). For each instance, this approach selects the minimal subset of features that is sufficient to yield the same prediction with high probability, while removing other features. The crux of P-SE is to compute the conditional probability of maintaining the same prediction. Therefore, we introduce an accurate and fast estimator of this probability via Random Forests for any data $(\boldsymbol{X}, Y)$ and show its efficiency through a theoretical analysis of its consistency. As a consequence, we extend the P-SE to regression problems. In addition, we deal with non-discrete features, without learning the distribution of $\boldsymbol{X}$ or having the model for making predictions. Finally, we introduce local rule-based explanations for regression/classification based on the P-SE and compare our approaches with other explainable AI methods. These methods are available as a Python package.
Accept
I have read all comments and responses carefully. Reviewers praised the novelty of the solution and recognized the under-explored nature of the problem. The proposed estimator seems reasonable, particularly for tabular data. Reviewers complained about the lack of explanation of some heuristics used and the limited scope of applicability of the method. Overall, reviewers agree that this is an important and yet under-explored problem and the authors have provided useful contributions. I have therefore decided to recommend the acceptance of the paper.
train
[ "lDlQtlSPeN", "RleBlkMxA1r", "Lk0gpoxOHc", "5kBRChp2-v", "n1oYj-rc6avv", "LHdk-m2-kxrw", "pQ9hB7ZfLaw", "Rpwkb6x3fUM", "YD5oZ-7PU8h", "UfAUTlmHhjP", "YxfmdfkncgW", "zEa2A8_tyzk", "4g7iexM6Zss", "ItTKrHNjHSr" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We agree with reviewer rsCo that the concept of Pertinent Positive developed is related to the concept of sufficient minimal explanation. \nThe Pertinent Positive is defined as “a factor whose presence is minimally sufficient in justifying the final classification” , it is defined mathematically by an optimisatio...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 1, 4 ]
[ "Lk0gpoxOHc", "YD5oZ-7PU8h", "UfAUTlmHhjP", "pQ9hB7ZfLaw", "Rpwkb6x3fUM", "nips_2022_kHNKDNLVp1E", "ItTKrHNjHSr", "4g7iexM6Zss", "zEa2A8_tyzk", "YxfmdfkncgW", "nips_2022_kHNKDNLVp1E", "nips_2022_kHNKDNLVp1E", "nips_2022_kHNKDNLVp1E", "nips_2022_kHNKDNLVp1E" ]
nips_2022_1r1GDXPtuWz
Detecting danger in gridworlds using Gromov's Link Condition
Gridworlds have been long-utilised in AI research, particularly in reinforcement learning, as they provide simple yet scalable models for many real-world applications such as robot navigation, emergent behaviour, and operations research. We initiate a study of gridworlds using the mathematical framework of reconfigurable systems and state complexes due to Abrams, Ghrist & Peterson. State complexes represent all possible configurations of a system as a single geometric space, thus making them conducive to study using geometric, topological, or combinatorial methods. The main contribution of this work is a modification to the original Abrams, Ghrist & Peterson setup which we introduce to capture agent braiding and thereby more naturally represent the topology of gridworlds. With this modification, the state complexes may exhibit geometric defects (failure of Gromov's Link Condition). Serendipitously, we discover these failures occur exactly where undesirable or dangerous states appear in the gridworld. Our results therefore provide a novel method for seeking guaranteed safety limitations in discrete task environments with single or multiple agents, and offer useful safety information (in geometric and topological forms) for incorporation in or analysis of machine learning systems. More broadly, our work introduces tools from geometric group theory and combinatorics to the AI community and demonstrates a proof-of-concept for this geometric viewpoint of the task domain through the example of simple gridworld environments.
Reject
This paper analyses grid worlds, more precisely multiple objects moving in grid worlds, using the mathematical idea of state complexes. The state complex represents all possible configurations as a single space, from which domain properties can be ascertained by group-theoretic, combinatorial, or geometric analysis. In particular, the paper develops a theory around "Gromov's Link Condition" to analyze conditions under which collisions can be prevented in such domains. The reviewers had a mixed initial response to this paper. On the positive side, the reviewers appreciated the theoretical development (txoD) and novelty (2SSW). On the negative side, the reviewers struggled to see the significance or relevance of the work to learning or AI (2SSW, 5vs3). The reviewers understood the work as a mechanism for collision checking (2SSW), a means to support learning (5vs3), and a computational mechanism for analyzing gridworld dynamics (txoD). The author response clarified several aspects of the reviews that were misunderstood. The author response did not sway the reviewers. Primarily, the concern is that the paper failed to communicate the relevance of the mathematical analysis of gridworlds to an AI audience. The sole positive reviewer ultimately concurred with the arguments made by the negative reviewers. Two reviewers recommend rejection, and one indicates a weak accept. Based on the failure of the paper to clearly communicate the relevance of its ideas to any reviewer, the paper is rejected. One suggestion for a future revision would be to present these ideas in the general setting of an MDP (instead of the specific domain of a gridworld). The local combinatorial analysis on a generic MDP could potentially be more useful to the MDP community when considering planning for AI safety or problems of mechanism design. The evidence needed to validate the ideas for those communities might again be different from the evidence provided in this paper. As a separate comment, the analysis of the transition dynamics of actions may have related work stemming from predictive state representations. In the paper's current form, the reviewers were unable to see a clear contribution.
val
[ "55Yq3Bn3Vg-", "iK5KUnD6Zv", "KZc2BgfLS5R", "2bO_B96GYWJ", "fZ4qESv6DMR", "epA_WqkN-06", "DeM-dq9t46Q", "-5K67TrJGMH", "cvWg65rCebk", "cOa9JzAVSw0" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are surprised 2SSW finds our detailed responses and other reviews unconvincing. We also find it notable that 2SSW has avoided in the above reply engaging with any of the mathematical content, arguments, results, or references.\n\n2SSW claims that we\n\n> clarified a small technical point, [and our] response se...
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "iK5KUnD6Zv", "KZc2BgfLS5R", "DeM-dq9t46Q", "cOa9JzAVSw0", "cvWg65rCebk", "-5K67TrJGMH", "nips_2022_1r1GDXPtuWz", "nips_2022_1r1GDXPtuWz", "nips_2022_1r1GDXPtuWz", "nips_2022_1r1GDXPtuWz" ]
nips_2022_o8H6h13Avjy
MExMI: Pool-based Active Model Extraction Crossover Membership Inference
With the increasing popularity of Machine Learning as a Service (MLaaS), ML models trained from public and proprietary data are deployed in the cloud and deliver prediction services to users. However, as the prediction API becomes a new attack surface, growing concerns have arisen over the confidentiality of ML models. Existing literature shows their vulnerability to model extraction (ME) attacks, while their private training data are vulnerable to another type of attack, namely, membership inference (MI). In this paper, we show that ME and MI can reinforce each other through a chained and iterative reaction, which can significantly boost ME attack accuracy and improve MI by saving the query cost. As such, we build a framework MExMI for pool-based active model extraction (PAME) to exploit MI through three modules: “MI Pre-Filter”, “MI Post-Filter”, and “semi-supervised boosting”. Experimental results show that MExMI can improve up to 11.14% from the best known PAME attack and reach 94.07% fidelity with only 16k queries. Furthermore, the precision and recall of the MI attack in MExMI are on par with the state-of-the-art MI attack which needs 150k queries.
Accept
This paper studies the problem of securing a model once it is published as a service. In most prior studies the focus was on either protecting the model from extraction attacks (ME) or from attacks identifying the data used for training the model (MI). The authors propose that a simultaneous attack on both surfaces is even more powerful, since the MI attack provides more information to the ME attack. The reviewers found that this work is interesting, and the results are relevant to the community as they highlight the need for protecting the multiple attack surfaces of ML models. However, there is a concern about the results presented in response to the comments of reviewer LJzA: in the response, the authors reported on the result of an experiment in which a defense against MI was simulated. From the gist of the paper, one would expect that this would also result in some level of protection against ME attacks. However, the results provide only weak support for this assertion. Therefore, we think the paper should address this inconsistency.
test
[ "6bn7Q7cxS2m", "AmllhIUts7t", "F4rfvAiYoS", "gFEodkyzpLu", "xUUW6SADGIOX", "WYYPq5P3UPh", "DaPxSK_JN0B", "Li1uFWNU4gL", "d7IJ865BJZq", "CPXjtKGLUu9", "Ww5XdoiFJ8P", "XGOJd40uXh5", "jRUKkhqXQWl", "gt9XSYP_JOr", "_HyGOrDXDdj" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your support of this work. In case you've got some extra time, there are a few discussions between us and the other reviewers. If you have any questions, we would be happy to answer them within the reviewer-author discussion period.", " Thanks for updating your comments and questions. To emulate a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "d7IJ865BJZq", "gFEodkyzpLu", "xUUW6SADGIOX", "CPXjtKGLUu9", "Li1uFWNU4gL", "DaPxSK_JN0B", "Ww5XdoiFJ8P", "XGOJd40uXh5", "jRUKkhqXQWl", "gt9XSYP_JOr", "_HyGOrDXDdj", "nips_2022_o8H6h13Avjy", "nips_2022_o8H6h13Avjy", "nips_2022_o8H6h13Avjy", "nips_2022_o8H6h13Avjy" ]
nips_2022_vbPsD-BhOZ
Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs
Cellular sheaves equip graphs with a ``geometrical'' structure by assigning vector spaces and linear maps to nodes and edges. Graph Neural Networks (GNNs) implicitly assume a graph with a trivial underlying sheaf. This choice is reflected in the structure of the graph Laplacian operator, the properties of the associated diffusion equation, and the characteristics of the convolutional models that discretise this equation. In this paper, we use cellular sheaf theory to show that the underlying geometry of the graph is deeply linked with the performance of GNNs in heterophilic settings and their oversmoothing behaviour. By considering a hierarchy of increasingly general sheaves, we study how the ability of the sheaf diffusion process to achieve linear separation of the classes in the infinite time limit expands. At the same time, we prove that when the sheaf is non-trivial, discretised parametric diffusion processes have greater control than GNNs over their asymptotic behaviour. On the practical side, we study how sheaves can be learned from data. The resulting sheaf diffusion models have many desirable properties that address the limitations of classical graph diffusion equations (and corresponding GNN models) and obtain competitive results in heterophilic settings. Overall, our work provides new connections between GNNs and algebraic topology and would be of interest to both fields.
Accept
All reviewers agree that this paper deserves to be published at NeurIPS 2022, with some minor concerns that are (mostly) addressed in the rebuttal. Please incorporate the remaining reviewers' feedback into the camera-ready version.
test
[ "rlBI2vIgn2C", "tLhbSbS02C0", "QjWjoj2ysTm", "Yhky8YLWPb2", "YzCOolGTbRm", "xUtvfKb60UZ", "X5K6rus6ojd", "161G0VN2XG", "-2Esmppt-Vw", "wellfqWrTdQ", "iqCET0Mb6GH", "oobYeSSuvg5", "-MlDIhQVnDu", "w3YZP6jB5Lr", "IjxzwXDucq3" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for these clarifications! We misunderstood the point the reviewer was making originally. The analysis of the reviewer is correct. The discrepancy comes from slightly different assumptions that we clarify below. To avoid any confusion for the readers, we will include the number of feature channels explic...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 9, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "tLhbSbS02C0", "iqCET0Mb6GH", "161G0VN2XG", "wellfqWrTdQ", "-2Esmppt-Vw", "nips_2022_vbPsD-BhOZ", "nips_2022_vbPsD-BhOZ", "IjxzwXDucq3", "w3YZP6jB5Lr", "-MlDIhQVnDu", "oobYeSSuvg5", "nips_2022_vbPsD-BhOZ", "nips_2022_vbPsD-BhOZ", "nips_2022_vbPsD-BhOZ", "nips_2022_vbPsD-BhOZ" ]
nips_2022_-OfK_B9Q5hI
Improving Neural Ordinary Differential Equations with Nesterov's Accelerated Gradient Method
We propose the Nesterov neural ordinary differential equations (NesterovNODEs), whose layers solve the second-order ordinary differential equation (ODE) limit of Nesterov's accelerated gradient (NAG) method, and a generalization called GNesterovNODEs. Taking advantage of the convergence rate $\mathcal{O}(1/k^{2})$ of the NAG scheme, GNesterovNODEs speed up training and inference by reducing the number of function evaluations (NFEs) needed to solve the ODEs. We also prove that the adjoint state of a GNesterovNODE also satisfies a GNesterovNODE, thus accelerating both forward and backward ODE solvers and allowing the model to be scaled up for large-scale tasks. We empirically corroborate the advantage of GNesterovNODEs on a wide range of practical applications, including point cloud separation, image classification, and sequence modeling. Compared to NODEs, GNesterovNODEs require a significantly smaller number of NFEs while achieving better accuracy across our experiments.
Accept
The paper proposes an elegant way to use a continuous analogue of Nesterov's accelerated gradient method in place of standard Neural Ordinary Differential Equation (NODE) models for mapping the initial conditions to the output. The authors derive the adjoint ODEs and show that such a new "layer" results in a lower number of function evaluations while integrating the NesterovNODE. Thus, the authors clearly confirmed that the method is practical and efficient in all scenarios where NODE models are used. This is a good paper.
train
[ "P-eLfOUn-Mg", "cckADc7q87c", "RNkKKBrAyC", "r893QCIj8bw", "Vn8YGfCmFZr", "Nm2kCvIRULJ", "tPKtfy8sc8", "ZFJPNyqUKug", "7z_W7U8pzW", "hTuVI2dgcO1", "_1X4muyCHpN", "czuVUiuRq_U", "qi0Juw6iGFN", "pJtnRelTn0q", "HnhNhGq_60wq", "FEOAI8ZhrH", "TBw3n-_X-8", "Xo07v0ktSknt", "oOVrKL-x3p",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", ...
[ " Thanks for your response and we appreciate your endorsement.", " Dear authors, \n\nthank you for additional experiments and empirical evaluations of the memory consumption! They significantly improve the quality of the submission and make the proposed approach more convincing and competitive with state-of-the-a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "cckADc7q87c", "Nm2kCvIRULJ", "r893QCIj8bw", "7z_W7U8pzW", "frB9yUOSQmO", "sOEUTmj2gV6", "qi0Juw6iGFN", "nips_2022_-OfK_B9Q5hI", "hTuVI2dgcO1", "_1X4muyCHpN", "czuVUiuRq_U", "Xo07v0ktSknt", "frB9yUOSQmO", "FEOAI8ZhrH", "TBw3n-_X-8", "sOEUTmj2gV6", "frB9yUOSQmO", "oOVrKL-x3p", "1_...
nips_2022_NpeHeIkbfYU
Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing
Residual networks have shown great success and become indispensable in today’s deep models. In this work, we aim to re-investigate the training process of residual networks from a novel social psychology perspective of loafing, and further propose a new training strategy to strengthen the performance of residual networks. As residual networks can be viewed as ensembles of relatively shallow networks (i.e., the unraveled view) in prior works, we also start from such a view and consider that the final performance of a residual network is co-determined by a group of sub-networks. Inspired by the social loafing problem of social psychology, we find that residual networks invariably suffer from a similar problem, where sub-networks in a residual network are prone to exert less effort when working as part of the group compared to working alone. We define this previously overlooked problem as network loafing. As social loafing will ultimately cause low individual productivity and reduced overall performance, network loafing will also hinder the performance of a given residual network and its sub-networks. Referring to the solutions of social psychology, we propose stimulative training, which randomly samples a residual sub-network and calculates the KL-divergence loss between the sampled sub-network and the given residual network, to act as extra supervision for sub-networks and make the overall goal consistent. Comprehensive empirical results and theoretical analyses verify that stimulative training can well handle the loafing problem, and improve the performance of a residual network by improving the performance of its sub-networks. The code is available at https://github.com/Sunshine-Ye/NIPS22-ST.
Accept
This paper proposes to study the "loafing" problem in deep ResNets, which suggests that the sub-networks of a deep ResNet perform significantly worse than the same architecture trained alone. It proposes a simple technique which jointly trains the main network and minimizes the KL divergence between the main network's output and that of a random subnetwork. It is shown empirically that this technique improves the final accuracy for both the main network and random subnetworks. The reviewers agreed that the "loafing" problem is an interesting phenomenon, but also raised concerns about both the motivation/presentation and the comparison with similar techniques like deep supervision and self-distillation. The authors provided extensive responses with new additional experimental results. After the discussion phase, the reviewers reached a consensus of acceptance, conditioned on the authors carefully addressing the framing of "loafing" and making clear that the "loafing" term is just a loose analogy without any real implications for biology. The AC agrees that the problem identified in this paper is interesting and can have implications for both regularization and model compression. However, the authors should try to remove the excessive references to the social psychology aspects, which do not provide scientific justification for the method but rather could invite unnecessary confusion and controversy.
train
[ "pW65YFSkILv", "YJqNiHmFPY", "zdMWXldMrfR", "LDJIpgfaaXr", "0k3P1yplmey", "w91MgW0-YTg", "SV5-x1kHmWz", "aDoDZKpC4cp", "Szb33btjEPs", "YxguTymQRRR", "lqj0QhbQymK", "5264aQ2LxMH", "LWZb2AqhXOf", "SDIpuKHalhD", "6yf1Enz5fLS", "cMq4ExYU8mH" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your valuable and insightful comments. We feel glad about your generally favorable assessment of our methodology. We will clarify in the final version that \"social loafing is just a metaphor to describe a behavior in neural networks that has no strong connection with biology\".", " Thank you for ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 3, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 5 ]
[ "YJqNiHmFPY", "YxguTymQRRR", "nips_2022_NpeHeIkbfYU", "nips_2022_NpeHeIkbfYU", "SDIpuKHalhD", "SDIpuKHalhD", "cMq4ExYU8mH", "6yf1Enz5fLS", "LWZb2AqhXOf", "cMq4ExYU8mH", "6yf1Enz5fLS", "LWZb2AqhXOf", "nips_2022_NpeHeIkbfYU", "nips_2022_NpeHeIkbfYU", "nips_2022_NpeHeIkbfYU", "nips_2022_N...
nips_2022_VT0Y4PlV2m0
Transformers from an Optimization Perspective
Deep learning models such as the Transformer are often constructed by heuristics and experience. To provide a complementary foundation, in this work we study the following problem: Is it possible to find an energy function underlying the Transformer model, such that descent steps along this energy correspond with the Transformer forward pass? By finding such a function, we can reinterpret Transformers as the unfolding of an interpretable optimization process. This unfolding perspective has been frequently adopted in the past to elucidate more straightforward deep models such as MLPs and CNNs; however, it has thus far remained elusive to obtain a similar equivalence for more complex models with self-attention mechanisms like the Transformer. To this end, we first outline several major obstacles before providing companion techniques to at least partially address them, demonstrating for the first time a close association between energy function minimization and deep layers with self-attention. This interpretation contributes to our intuition and understanding of Transformers, while potentially laying the groundwork for new model designs.
Accept
The submission analyses the (simplified) Transformer architecture from the unfolding optimisation perspective, which was recently used to analyse simpler MLP and CNN models. Four reviewers are positive about the submission's results and agree that they can potentially bring new insights and lead to more powerful architectures. The AC recommends acceptance.
test
[ "viXXkHb0Q5n", "nNbPd_dwer", "FfUknsJsN2F", "FCMXyIllNWE", "tQSnmoZArfab", "516H6SsQbDK", "Mf0Mqg9tUIb", "4-IoMkL-Vl", "QTpEmFyVmH6", "X_TevjOPgHj", "FIvF6W64rK1", "D3d2XkAZU6S", "yEqVIuUU7-h", "j9CYAlnrTB" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for checking through our added experiments and providing updated feedback. We also highlight that our primary contribution is theoretical, and in such cases there is inevitably some gap between theory and practical/deployable models. (As a quick representative example, there are numerous NeurIPS/ICML/ICLR...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 3 ]
[ "nNbPd_dwer", "4-IoMkL-Vl", "QTpEmFyVmH6", "4-IoMkL-Vl", "516H6SsQbDK", "j9CYAlnrTB", "yEqVIuUU7-h", "D3d2XkAZU6S", "X_TevjOPgHj", "FIvF6W64rK1", "nips_2022_VT0Y4PlV2m0", "nips_2022_VT0Y4PlV2m0", "nips_2022_VT0Y4PlV2m0", "nips_2022_VT0Y4PlV2m0" ]
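To make the "unfolding" viewpoint in the record above concrete, here is a toy example in which gradient-descent steps on a simple quadratic energy read exactly as residual layers. The paper's actual energy for self-attention is far richer; this sketch only illustrates the layers-as-descent-steps correspondence.

```python
import torch

def unfolded_forward(x, W, n_layers, alpha=0.1):
    """For symmetric W and energy E(x) = 0.5 * x (I - W) x^T, one descent
    step x <- x - alpha * x (I - W) rearranges into the residual update
    x <- (1 - alpha) * x + alpha * x W, i.e., one linear layer per step."""
    for _ in range(n_layers):
        x = (1 - alpha) * x + alpha * (x @ W)   # one descent step = one layer
    return x

W = torch.randn(8, 8)
W = 0.5 * (W + W.t())                # symmetrize so the gradient is exact
x = torch.randn(4, 8)
out = unfolded_forward(x, W, n_layers=6)
```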
nips_2022_J5e13zmpj-Z
LDSA: Learning Dynamic Subtask Assignment in Cooperative Multi-Agent Reinforcement Learning
Cooperative multi-agent reinforcement learning (MARL) has made prominent progress in recent years. For training efficiency and scalability, most MARL algorithms make all agents share the same policy or value network. However, in many complex multi-agent tasks, different agents are expected to possess specific abilities to handle different subtasks. In those scenarios, sharing parameters indiscriminately may lead to similar behavior across all agents, which will limit the exploration efficiency and degrade the final performance. To balance the training complexity and the diversity of agent behavior, we propose a novel framework to learn dynamic subtask assignment (LDSA) in cooperative MARL. Specifically, we first introduce a subtask encoder to construct a vector representation for each subtask according to its identity. To reasonably assign agents to different subtasks, we propose an ability-based subtask selection strategy, which can dynamically group agents with similar abilities into the same subtask. In this way, agents dealing with the same subtask share their learning of specific abilities, and different subtasks correspond to different specific abilities. We further introduce two regularizers to increase the representation difference between subtasks and to stabilize the training by discouraging agents from frequently changing subtasks, respectively. Empirical results show that LDSA learns reasonable and effective subtask assignment for better collaboration and significantly improves the learning performance on the challenging StarCraft II micromanagement benchmark and Google Research Football.
Accept
This paper proposes a method to dynamically group agents with similar representations and assign subtasks to each group so that they can effectively share parameters among agents within the same group while specializing across groups. The results on the StarCraft micromanagement benchmark and the Google Research Football domain show that the proposed method outperforms relevant baselines, including QMIX, ROMA, and RODE. The reviewers found that the idea is interesting and technically sound, and the paper is very well-written. Although there were several concerns about the lack of baselines (CDC) and the lack of challenging benchmarks, the authors addressed most of them during the rebuttal period by updating the results with additional baselines, an additional benchmark (Google Research Football), and additional ablation studies. As a result, all of the reviewers agreed that the result is significant enough to be presented at NeurIPS. Thus, I recommend accepting this paper.
train
[ "iSQhBpPsoZd", "wnN3M-jfHQ", "DzMLlxXiSjS", "Eb9RWhIbqHG", "PBFi2gOs1dd", "i_kDQ07OuIq", "jIQCz6akAbt", "mBGVWpstouF", "lKCzgh42hOB", "mwudnt2GDgL", "Kcbr1VTB7qw", "yr45NvahWUy", "VLUCvCTqGNN", "F7IFCj0lchTC", "49Hdx1nBKxt", "sa2toGTmKYd", "t8FsR1y35xQG", "G3527-BX08F", "OJ9ycUAf...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", ...
[ " Thank you for replying to us again, and we feel very grateful that you can improve the score. ", " We are happy that we could address you concerns, and we feel very grateful that you can raise the score.", " Thank the authors for the rebuttal and the new revision. I upgrade the score to 5 now.", " I appreci...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "DzMLlxXiSjS", "Eb9RWhIbqHG", "PBFi2gOs1dd", "Kcbr1VTB7qw", "lKCzgh42hOB", "jIQCz6akAbt", "yr45NvahWUy", "lKCzgh42hOB", "1nX1TC2s0sj", "nips_2022_J5e13zmpj-Z", "VLUCvCTqGNN", "49Hdx1nBKxt", "sa2toGTmKYd", "zxSK8nIp_TY", "zxSK8nIp_TY", "t8FsR1y35xQG", "G3527-BX08F", "B6koww7rZ3u", ...
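The LDSA abstract above centers on ability-based subtask selection: subtask IDs are encoded into representation vectors, and each agent is matched to subtasks by the similarity between its ability vector and those representations. A minimal sketch (dimensions and names are illustrative, not taken from the released code):

```python
import torch
import torch.nn as nn

class SubtaskSelector(nn.Module):
    """Ability-based subtask selection sketch: subtask IDs are embedded into
    representation vectors, and each agent picks a subtask with probability
    proportional to the similarity between its ability vector and the
    subtask representation."""
    def __init__(self, n_subtasks, dim):
        super().__init__()
        self.subtask_encoder = nn.Embedding(n_subtasks, dim)  # id -> representation

    def forward(self, agent_ability):                 # (n_agents, dim)
        subtask_repr = self.subtask_encoder.weight    # (n_subtasks, dim)
        logits = agent_ability @ subtask_repr.t()     # similarity scores
        probs = torch.softmax(logits, dim=-1)
        # Agents with similar abilities tend to land in the same subtask
        # and would then share that subtask's policy parameters.
        return torch.distributions.Categorical(probs).sample()
```

Sharing one policy head per subtask, rather than one per agent, is how this kind of scheme trades off parameter sharing against behavioral diversity.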
nips_2022_7yHte3tH8Xh
Knowledge Distillation Improves Graph Structure Augmentation for Graph Neural Networks
Graph (structure) augmentation aims to perturb the graph structure through heuristic or probabilistic rules, enabling the nodes to capture richer contextual information and thus improving generalization performance. While a few graph structure augmentation methods have been proposed recently, none of them are aware of a potential negative augmentation problem, which may be caused by overly severe distribution shifts between the original and augmented graphs. In this paper, we take an important graph property, namely graph homophily, to analyze the distribution shifts between the two graphs and thus measure how severely an augmentation algorithm suffers from negative augmentation. To tackle this problem, we propose a novel Knowledge Distillation for Graph Augmentation (KDGA) framework, which helps to reduce the potential negative effects of distribution shifts, i.e., the negative augmentation problem. Specifically, KDGA extracts the knowledge of any GNN teacher model trained on the augmented graphs and injects it into a partially parameter-shared student model that is tested on the original graph. As a simple but efficient framework, KDGA is applicable to a variety of existing graph augmentation methods and can significantly improve the performance of various GNN architectures. For three popular graph augmentation methods, namely GAUG, MH-Aug, and GraphAug, the experimental results show that the learned student models outperform their vanilla implementations by an average accuracy of 4.6% (GAUG), 4.2% (MH-Aug), and 4.6% (GraphAug) on eight graph datasets.
Accept
The paper identifies the problem of negative augmentation in graph augmentation methods, which may cause a distribution shift issue. The paper thus proposes a knowledge distillation method that trains a teacher model on the augmented graphs and a student model on the original one. Reviewers had concerns about the novelty of the approach and the experiments. The discussion between the reviewers and the authors was effective, and two of the reviewers raised their scores to accept/borderline accept. I'd recommend acceptance.
val
[ "f9_r_lQZ8BX", "b70kn_OAqxR", "iQ4mMNotxmt", "rjMY3WN7Y5i", "3XcK5bTuv_5", "6wTo02tqBtY", "EFwaFYxoasp", "Vn8fAoH2YrN", "7yz0U0Z82ya", "OU3T-3kNtT2a", "uThGVcK3ZXh", "WQopc1Dy4t", "_nqgzPojl30", "LJICIFPMmOo", "1BoudAqD7pJ", "VLJfS5MkRyR", "dv9u6sHh6l", "2wmt3925fI1", "ecibSc2BV9...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " We' re glad to hear that we have addressed most of your concerns and that you are still willing to keep a positive rating score! Thanks for spending a large amount of time on our submission, which makes our paper even stronger.", " We' re glad to hear that we have addressed most of your concerns! Thanks for spe...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "rjMY3WN7Y5i", "iQ4mMNotxmt", "LJICIFPMmOo", "VLJfS5MkRyR", "6wTo02tqBtY", "EFwaFYxoasp", "b5Z6xL06QV8", "YOrWOE8Fjwd", "ecibSc2BV9i", "nips_2022_7yHte3tH8Xh", "0z10qn5gJkl", "_nqgzPojl30", "b5Z6xL06QV8", "1BoudAqD7pJ", "YOrWOE8Fjwd", "dv9u6sHh6l", "ecibSc2BV9i", "nips_2022_7yHte3t...
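Two pieces of the KDGA record above lend themselves to short sketches: the edge-homophily ratio used to gauge the distribution shift between the original and augmented graphs, and the vanilla distillation loss from the teacher (trained on augmented graphs) to the student (evaluated on the original graph). Both are generic textbook forms, not the paper's exact implementation:

```python
import torch

def edge_homophily(edge_index, labels):
    """Fraction of edges joining same-label nodes -- the standard edge
    homophily ratio. Comparing this value on the original vs. the augmented
    graph gives one rough measure of the distribution shift the paper calls
    negative augmentation."""
    src, dst = edge_index                      # (2, n_edges) COO format
    return (labels[src] == labels[dst]).float().mean().item()

def kd_loss(student_logits, teacher_logits, tau=1.0):
    """Vanilla KD: the student matches the softened predictions of a
    teacher trained on the augmented graphs."""
    t = torch.softmax(teacher_logits / tau, dim=-1)
    s = torch.log_softmax(student_logits / tau, dim=-1)
    return -(t * s).sum(dim=-1).mean() * tau * tau
```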
nips_2022_6niwHlzh10U
Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics
We present a novel method for guaranteeing linear momentum in learned physics simulations. Unlike existing methods, we enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers. We combine these strict constraints with a hierarchical network architecture, a carefully constructed resampling scheme, and a training approach for temporal coherence. In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially. In addition, the induced physical bias leads to significantly better generalization performance and makes our method more reliable in unseen test cases. We evaluate our method on a range of different, challenging fluid scenarios. Among other results, we demonstrate that our approach generalizes to new scenarios with up to one million particles. Our results show that the proposed algorithm can learn complex dynamics while outperforming existing approaches in generalization and training performance. An implementation of our approach is available at https://github.com/tum-pbs/DMCF.
Accept
The paper models fluid particle dynamics with continuous convolutions where the convolutional kernels in the final layer are constrained to be antisymmetric. This physical constraint enforces conservation of momentum. Reviewers think this is a well-written paper. Related work suggested during the review period should be added.
train
[ "rDOQiJSPwSv", "JeTeuanxGKl", "F5tba8PtjUo", "wKhj7EGH5q", "AvpjZ10_c6g", "HpgN01nSrU4", "D8DvCJUHZfN", "mEyuNTd61Uw", "i0JD6SV1NVv" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I want to thank the authors for the detailed answers to my and other reviewers' questions. The updated version looks much stronger now. Based on these, I'm happy to raise my score and recommend the paper for acceptance.\n", " It seems the author’s have addressed the concerns of all reviewers and that the revise...
[ -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "F5tba8PtjUo", "wKhj7EGH5q", "i0JD6SV1NVv", "mEyuNTd61Uw", "D8DvCJUHZfN", "nips_2022_6niwHlzh10U", "nips_2022_6niwHlzh10U", "nips_2022_6niwHlzh10U", "nips_2022_6niwHlzh10U" ]
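The key constraint in the record above — conservation of linear momentum enforced as a hard architectural constraint — can be illustrated by antisymmetrizing a learned kernel, so contributions over any pair of opposite relative positions cancel exactly. This is a simplified stand-in for the paper's antisymmetrical continuous convolutional layers:

```python
import torch
import torch.nn as nn

class AntisymmetricKernel(nn.Module):
    """Kernel made antisymmetric by construction: K(r) = (f(r) - f(-r)) / 2,
    so K(-r) = -K(r). With such a final layer, the contribution particle j
    makes to i cancels the one i makes to j, so the update's total linear
    momentum sums to zero. Dimensions and the MLP are illustrative."""
    def __init__(self, dim=3, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(),
                               nn.Linear(hidden, dim))

    def forward(self, r):                       # r: (..., dim) relative positions
        return 0.5 * (self.f(r) - self.f(-r))

# quick check: contributions over opposite relative positions cancel exactly
k = AntisymmetricKernel()
r = torch.randn(5, 3)
assert torch.allclose(k(r) + k(-r), torch.zeros_like(r), atol=1e-6)
```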
nips_2022_L6aVjBmtVE
Characterization of Excess Risk for Locally Strongly Convex Population Risk
We establish upper bounds for the expected excess risk of models trained by proper iterative algorithms which approximate the local minima. Unlike results built upon global strong convexity or global growth conditions, e.g., the PL inequality, we only require the population risk to be \emph{locally} strongly convex around its local minima. Concretely, our bound for convex problems is of order $\tilde{\mathcal{O}}(1/n)$. For non-convex problems with $d$ model parameters such that $d/n$ is smaller than a threshold independent of $n$, the order of $\tilde{\mathcal{O}}(1/n)$ can be maintained if the empirical risk has no spurious local minima with high probability. Moreover, the bound for non-convex problems becomes $\tilde{\mathcal{O}}(1/\sqrt{n})$ without such an assumption. Our results are derived via algorithmic stability and a characterization of the empirical risk's landscape. Compared with existing algorithmic-stability-based results, our bounds are dimension-insensitive and impose no restrictions on the algorithm's implementation, learning rate, or number of iterations. Our bounds underscore that with a locally strongly convex population risk, the models trained by any proper iterative algorithm can generalize well, even for non-convex problems and large $d$.
Accept
There is a consensus among reviewers that the paper is strong, with novel results on the rate of convergence of the excess risk of certain iterative algorithms for convex and non-convex problems under a local strong convexity assumption. For the camera-ready version, the authors are encouraged to provide a more detailed discussion of the limitations of the work, as emphasized in the discussion with reviewer 7m5U.
train
[ "wxGB0kFzlBE", "3RBonayXxq", "ObJTkGNj0Ty", "veXhaM-EQ4Z", "dGZSf2B-w7G", "yeZ7o9lmvb", "Re31GkhV8wx", "HInU36M3WSk", "Ma71yNBgmF3", "wxYqq_vmC8Q", "YRd5XHkR4QG", "pIwi0oUcOKl", "KzR8Su5pp8O", "Weu27XQIGT", "4iZiyc2OND", "oY0aOkMBEvv", "D0dTOGuw4VJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Your response has fixed my concerns. I have no further questions.", " I would like to thank the authors for the detailed explanation. I have no further questions. ", " Thank you for the response. I revised my score to accept. I am still not satisfied with the way that the authors are not acknowledging the lim...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "Ma71yNBgmF3", "wxYqq_vmC8Q", "yeZ7o9lmvb", "pIwi0oUcOKl", "oY0aOkMBEvv", "Re31GkhV8wx", "HInU36M3WSk", "oY0aOkMBEvv", "4iZiyc2OND", "YRd5XHkR4QG", "D0dTOGuw4VJ", "Weu27XQIGT", "nips_2022_L6aVjBmtVE", "nips_2022_L6aVjBmtVE", "nips_2022_L6aVjBmtVE", "nips_2022_L6aVjBmtVE", "nips_2022_...
nips_2022_tNXumks8yHv
A Probabilistic Graph Coupling View of Dimension Reduction
Most popular dimension reduction (DR) methods like t-SNE and UMAP are based on minimizing a cost between input and latent pairwise similarities. Though widely used, these approaches lack clear probabilistic foundations to enable a full understanding of their properties and limitations. To that end, we introduce a unifying statistical framework based on the coupling of hidden graphs using cross entropy. These graphs induce a Markov random field dependency structure among the observations in both input and latent spaces. We show that existing pairwise similarity DR methods can be retrieved from our framework with particular choices of priors for the graphs. Moreover, this reveals that these methods, relying on shift-invariant kernels, suffer from a statistical degeneracy that explains their poor performance in conserving coarse-grain dependencies. New links are drawn with PCA, which appears as a non-degenerate graph coupling model.
Accept
This paper develops a statistical framework for a more rigorous understanding of well-known methods such as t-SNE and UMAP. There was consensus among the reviewers that the proposed framework makes considerable progress in understanding the mentioned methods.
train
[ "b4-49JvtIRx", "CeTZdObox4F", "Lk8D8AN9sbs", "gUkqIwkJjFYp", "6ZwI6A1ihr", "Y8vVOQpndGN", "BQmcU3Ys7zJ", "bpS54CCo1cO", "_P-axJFStC" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Apologies for entering this discussion so late. Thank you so much for your time and effort responding to my concerns. This is quite helpful to my understanding.", " We thank the reviewer for the careful reading of the manuscript, her/his assessment and relevant remarks. \n\n### Notes\n\nWe thank the reviewer fo...
[ -1, -1, -1, -1, -1, 6, 5, 4, 8 ]
[ -1, -1, -1, -1, -1, 2, 3, 4, 2 ]
[ "gUkqIwkJjFYp", "_P-axJFStC", "bpS54CCo1cO", "BQmcU3Ys7zJ", "Y8vVOQpndGN", "nips_2022_tNXumks8yHv", "nips_2022_tNXumks8yHv", "nips_2022_tNXumks8yHv", "nips_2022_tNXumks8yHv" ]
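The framework in the record above couples an input-space similarity graph with a latent-space one via cross entropy. A stripped-down sketch follows; using a Gaussian kernel in both spaces is purely for illustration (t-SNE, for instance, uses a Student-t kernel in the latent space), and the prior-over-graphs machinery is omitted entirely:

```python
import torch

def pairwise_affinities(X, sigma=1.0):
    """Row-normalized Gaussian similarities -- the building block that
    t-SNE-style methods compute in both input and latent space."""
    d2 = torch.cdist(X, X).pow(2)
    logits = -d2 / (2 * sigma ** 2)
    mask = torch.eye(len(X), dtype=torch.bool)
    logits = logits.masked_fill(mask, float("-inf"))   # no self-similarity
    return torch.softmax(logits, dim=1)

def coupling_loss(X, Z):
    """Cross-entropy between the input-space graph P and the latent-space
    graph Q; under the paper's framework, particular priors on these hidden
    graphs recover t-SNE / UMAP-type objectives."""
    P = pairwise_affinities(X).detach()
    Q = pairwise_affinities(Z)
    return -(P * torch.log(Q + 1e-12)).sum()
```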
nips_2022_L0U7TUWRt_X
Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum
Graph Contrastive Learning (GCL), learning node representations by augmenting graphs, has attracted considerable attention. Despite the proliferation of various graph augmentation strategies, some fundamental questions remain unclear: what information is essentially learned by GCL? Are there some general augmentation rules behind different augmentations? If so, what are they and what insights can they bring? In this paper, we answer these questions by establishing the connection between GCL and the graph spectrum. Through an experimental investigation in the spectral domain, we first find the General grAph augMEntation (GAME) rule for GCL, i.e., the difference of the high-frequency parts between two augmented graphs should be larger than that of the low-frequency parts. This rule reveals a fundamental principle for revisiting current graph augmentations and designing new effective graph augmentations. Then we theoretically prove that GCL is able to learn invariance information via our contrastive invariance theorem; together with our GAME rule, for the first time, we uncover that the representations learned by GCL essentially encode the low-frequency information, which explains why GCL works. Guided by this rule, we propose a spectral graph contrastive learning module (SpCo), which is a general and GCL-friendly plug-in. We combine it with different existing GCL models, and extensive experiments demonstrate that it can further improve the performance of a wide variety of GCL methods.
Accept
This paper had borderline reviews. The reviewers felt that the contributions were significant and that the connection between graph contrastive learning and spectral properties of the graph was valuable. Weaknesses included a focus on just node classification and some significant issues with presentation. The authors addressed many of the concerns raised and agreed to address the presentation issues. We hope that they will indeed do so in the final version of the paper, as the presentation issues would significantly limit its impact. In our discussion one reviewer strongly advocated rejection, while three pushed for acceptance, putting the paper above the bar.
val
[ "NIuUeUAyiEu", "FEYuumWhHQr", "vgg-XGj4qKI", "syPgjf90Wjj", "H0wmlvrc45k", "gSRlqWK3MVG", "8CvfWms2SC", "yjl1zIPzWUu", "1daY1ldx953", "Tznh9zgd2Dm", "HdgFVBlKTEn", "qeDE_2SNjLW", "85-3811ymin", "xMHpPMX9zGf", "UzJrCO1hLm", "yVVYMYV5VQu", "ugpcKr1pXLe", "COnYvpMenjZ", "f864OgaOwvH...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ " Dear Reviewer ndDo:\n\nWe thank you for taking the time to provide critical comments. We have provided detailed responses that we believe have covered your concerns. As this is the last day for discussion, we kindly remind you that could you check out our reply. We hope to further discuss with you whether or not ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 2, 3, 5 ]
[ "ID7AumMbzn", "syPgjf90Wjj", "8CvfWms2SC", "xMHpPMX9zGf", "COnYvpMenjZ", "yVVYMYV5VQu", "f864OgaOwvH", "ID7AumMbzn", "ID7AumMbzn", "ID7AumMbzn", "kTXy6OstLRj", "kTXy6OstLRj", "kTXy6OstLRj", "kTXy6OstLRj", "Qv_GU92z8uB", "Qv_GU92z8uB", "G5cuQV4SIzL", "G5cuQV4SIzL", "nips_2022_L0U7...
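A rough way to probe the GAME rule stated in the abstract above is to compare the spectra of two augmented graphs band by band. Comparing sorted Laplacian eigenvalues halfwise, as below, is only a crude proxy for the paper's amplitude-based spectral comparison, and it assumes both augmentations keep the same node set:

```python
import torch

def laplacian_spectrum(adj):
    """Eigenvalues of the symmetric normalized Laplacian (in [0, 2]);
    small eigenvalues ~ low graph frequencies, large ones ~ high frequencies."""
    deg = adj.sum(1).clamp(min=1e-8)
    d = torch.diag(deg.pow(-0.5))
    lap = torch.eye(adj.shape[0]) - d @ adj @ d
    return torch.linalg.eigvalsh(lap)              # sorted ascending

def satisfies_game_rule(adj1, adj2):
    """Crude check of the GAME rule: the two augmented graphs should differ
    more in their high-frequency parts than in their low-frequency parts."""
    e1, e2 = laplacian_spectrum(adj1), laplacian_spectrum(adj2)
    h = len(e1) // 2
    low_diff = (e1[:h] - e2[:h]).abs().sum()       # low-frequency difference
    high_diff = (e1[h:] - e2[h:]).abs().sum()      # high-frequency difference
    return bool(high_diff > low_diff)
```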
nips_2022_wuunqp9KVw
Pluralistic Image Completion with Gaussian Mixture Models
Pluralistic image completion focuses on generating both visually realistic and diverse results for image completion. Prior methods have achieved empirical success on this task. However, we argue that the constraints they use for pluralistic image completion are not well interpretable and are unsatisfactory in two respects. First, the constraints for visual realism can be weakly correlated with the objective of image completion or even redundant. Second, the constraints for diversity are designed to be task-agnostic, which causes them to not work well. In this paper, to address these issues, we propose an end-to-end probabilistic method. Specifically, we introduce a unified probabilistic graph model that represents the complex interactions in image completion. The entire procedure of image completion is then mathematically divided into several sub-procedures, which enables efficient enforcement of constraints. The sub-procedure directly related to pluralistic results is identified, where the interaction is established by a Gaussian mixture model (GMM). The inherent parameters of the GMM are task-related and optimized adaptively during training, while the number of its primitives can conveniently control the diversity of results. We formally establish the effectiveness of our method and demonstrate it with comprehensive experiments.
Accept
The paper addresses the pluralistic image completion problem. Initial reviews were borderline accept (2x), weak accept (2x). The authors provided a rebuttal. Both borderline reviewers upgraded to weak accept. The AC agrees, considers the paper a solid contribution to NeurIPS, and recommends acceptance.
val
[ "jf-xlJCsLVV", "oCl7qzGn8x", "GPt3l27F09i", "doKzQMMeOFq", "b_mi9qCeugm", "y5_55oe8f_9", "6u3JGPAcuP_", "GP66EBDJF8N", "zxAuVOV91YaC", "nLSrs2_xO3s", "9hdw0ogFlC5", "KT3KcWYgk8y", "1fs4OKIkbU", "3W1gdWJ03DE", "guFU4xeu3FE", "LG-CSKSUfaC", "CNvx_A8y1Cg", "6S3WvE8DVz", "umvhVlP9zY3...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks! We will prepare better quality results in the final version!", " Thanks for the clarification, my concerns have been addressed. I'll change my rating. I understand that the limited rebuttal period may not enable enough resources for training high-res images and I encourage authors to prepare better qual...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 4 ]
[ "oCl7qzGn8x", "GPt3l27F09i", "doKzQMMeOFq", "guFU4xeu3FE", "y5_55oe8f_9", "6u3JGPAcuP_", "nips_2022_wuunqp9KVw", "RJXp37PtmD3", "umvhVlP9zY3", "6S3WvE8DVz", "CNvx_A8y1Cg", "nips_2022_wuunqp9KVw", "RJXp37PtmD3", "umvhVlP9zY3", "6S3WvE8DVz", "CNvx_A8y1Cg", "nips_2022_wuunqp9KVw", "ni...
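The diversity mechanism in the record above — a GMM over the pluralism-related sub-procedure, whose number of primitives controls diversity — suggests the following sampling sketch; shapes and names are illustrative, not the paper's parameterization:

```python
import torch

def sample_diverse_latents(means, log_vars, weights, n_samples):
    """Draw latent codes from a K-primitive GMM. `means`/`log_vars` are
    (K, latent_dim) and `weights` is a (K,) probability vector; picking
    different primitives (or raising K) is the knob the abstract describes
    for controlling the diversity of completions."""
    comp = torch.distributions.Categorical(weights).sample((n_samples,))
    eps = torch.randn(n_samples, means.shape[1])
    z = means[comp] + eps * (0.5 * log_vars[comp]).exp()
    return z   # feed each z through the decoder for one plural completion
```

Each sampled z would be decoded into one completion; raising K widens the set of modes the completions can cover.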
nips_2022_GGBe1uQ_g_8
Effective Decision Boundary Learning for Class Incremental Learning
Rehearsal approaches in class incremental learning (CIL) suffer from decision boundary overfitting to new classes, which is caused by two factors: insufficiency of old-class data for knowledge distillation (KD) and imbalanced data between the old and new classes because of the limited storage memory. In this work, we present a simple but effective approach to deal with these two factors and optimize the decision boundary. First, we employ mixup knowledge distillation (MKD) and a re-sampling strategy to improve the performance of KD, which greatly alleviates the overfitting problem. Specifically, it utilizes mixup and re-sampling to synthesize adequate data that are more consistent with the latent distribution of the learned and new classes. Second, inspired by the influence balanced (IB) loss used in handling long-tailed data, we propose a novel incremental influence balanced (IIB) method for CIL to address classification on imbalanced data, which re-weights samples by their influence to create a proper decision boundary. With these two improvements, we present the effective decision boundary learning (EDBL) algorithm, which improves the performance of KD and deals with imbalanced data classification simultaneously. Experiments show that the proposed EDBL achieves state-of-the-art performance on several CIL benchmarks.
Reject
The paper aims to use the mixup and re-sampling strategy to improve knowledge distillation. Reviewers find that the classification weighting factor is not new, and that the technique and its novelty are very limited, because both the mixup and re-sampling strategies have been used in many settings; there is no technical contribution. The authors are encouraged to improve the paper by considering the comments from the reviewers.
train
[ "2oLKQIrjXBl", "Qn99cNbne2n", "_ojduYUYIUY", "LDAKlLpPgR7", "dif-lTX8rY", "WhReKne3PA", "1ARoDtHyTXz", "EShDFwUF6vn", "rCGhBqV7ull", "uTI-GSFno2C", "z-fGPo3Cbou", "vnP4PCitaMt5", "LibaNryUYIk", "RCMnBy0MoXZ", "IdXUCRh_zIp", "dv97rqK1EKx", "OojXROUev6y", "-qORbb8mwpL", "T7VOTSqAtA...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_r...
[ " Thanks for the feedback from the authors. \nIt seems that the proposed IIB training can replace CBF to some extent, while it does not show better performance than CBF for the CNN settings.\nEven though both proposed methods (Re-MKD and IIB loss) may not be super-novel, one can consider IIB training procedure as a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 3 ]
[ "rCGhBqV7ull", "1ARoDtHyTXz", "WhReKne3PA", "WhReKne3PA", "WhReKne3PA", "OojXROUev6y", "EShDFwUF6vn", "LibaNryUYIk", "uTI-GSFno2C", "z-fGPo3Cbou", "vnP4PCitaMt5", "-qORbb8mwpL", "RCMnBy0MoXZ", "T7VOTSqAtAc", "nips_2022_GGBe1uQ_g_8", "uX3krpK_vsJ", "a05Kr9nI0MF", "NEBx4JaoBN", "O9...
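The mixup-KD component described above can be sketched generically: mix replayed old-class samples with new-class samples and distill the teacher's soft predictions on the mixed inputs. This captures the spirit of the method, not its exact pipeline (the paper's re-sampling strategy is omitted here):

```python
import torch
import torch.nn.functional as F

def mixup_kd_batch(x_old, x_new, teacher, alpha=0.2):
    """Synthesize samples between replayed old-class data and new-class data,
    then record the teacher's soft predictions on the mixed inputs so a
    student can be trained against them with a KL/KD loss."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    n = min(len(x_old), len(x_new))
    x_mix = lam * x_old[:n] + (1 - lam) * x_new[:n]
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x_mix), dim=-1)
    return x_mix, soft_targets   # distill the student on (x_mix, soft_targets)
```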
nips_2022_xLnfzQYSIue
Top Two Algorithms Revisited
Top two algorithms arose as an adaptation of Thompson sampling to best arm identification in multi-armed bandit models for parametric families of arms. They select the next arm to sample from by randomizing among two candidate arms, a leader and a challenger. Despite their good empirical performance, theoretical guarantees for fixed-confidence best arm identification have only been obtained when the arms are Gaussian with known variances. In this paper, we provide a general analysis of top-two methods, which identifies desirable properties of the leader, the challenger, and the (possibly non-parametric) distributions of the arms. As a result, we obtain theoretically supported top-two algorithms for best arm identification with bounded distributions. Our proof method demonstrates in particular that the sampling step used to select the leader inherited from Thompson sampling can be replaced by other choices, like selecting the empirical best arm.
Accept
This paper analyzes several "top-two" style algorithms for the pure-exploration multi-armed bandit problem under bounded distributions. The reviewers agreed that the theoretical contribution of this paper is solid. Some concern was raised about the algorithmic and empirical contributions of this paper. In particular, one reviewer mentioned that deterministic choices of the leader and challenger might also work, and the analysis might be easier. I hope this item can be addressed before this paper is published.
train
[ "6HlOYHQzrM", "acYEwgfOZ3a", "Bocl3eify-_", "wYGIIvrboCT", "qetK0H_SjBY", "lYQQviN4Fe9D", "lzO7zFs33a", "sxpjSNjn_Nr", "bM750cfG1AG", "qZrzk-giwzB", "A--hl_nPa6", "uJ5tS2CgQ3h" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your quick reply and clarification! The reply from the authors answered my question well. Although I still have some concerns about the impact of the findings, I agree with the novelty and contribution. Now I lean toward acceptance and will re-evaluate my rating after checking the paper again. ", " T...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "Bocl3eify-_", "sxpjSNjn_Nr", "wYGIIvrboCT", "qetK0H_SjBY", "uJ5tS2CgQ3h", "A--hl_nPa6", "qZrzk-giwzB", "bM750cfG1AG", "nips_2022_xLnfzQYSIue", "nips_2022_xLnfzQYSIue", "nips_2022_xLnfzQYSIue", "nips_2022_xLnfzQYSIue" ]
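A minimal top-two round, matching the abstract's observation that the Thompson-sampling leader can be replaced by the empirical best arm: pick the empirical-best leader, pick the challenger by a transportation-cost criterion, then randomize between them. The Gaussian-style cost below is purely illustrative (the paper's analysis covers bounded, possibly non-parametric distributions), and every arm is assumed to have been sampled at least once:

```python
import numpy as np

def top_two_step(means, counts, beta=0.5, rng=np.random):
    """One round of an empirical-best-leader top-two rule: the challenger
    minimizes a Gaussian-style transportation cost; the leader is then
    played with probability beta, otherwise the challenger."""
    leader = int(np.argmax(means))
    cost = np.full(len(means), np.inf)
    for a in range(len(means)):
        if a == leader:
            continue
        gap = means[leader] - means[a]
        # unit-variance Gaussian transportation cost between leader and arm a
        cost[a] = gap ** 2 / (2 * (1 / counts[leader] + 1 / counts[a]))
    challenger = int(np.argmin(cost))
    return leader if rng.random() < beta else challenger
```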
nips_2022_ZlCpRiZN7n
Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation
We propose a simple but effective source-free domain adaptation (SFDA) method. Treating SFDA as an unsupervised clustering problem and following the intuition that local neighbors in feature space should have more similar predictions than other features, we propose to optimize an objective of prediction consistency. This objective encourages local neighborhood features in feature space to have similar predictions while features farther away in feature space have dissimilar predictions, leading to efficient feature clustering and cluster assignment simultaneously. For efficient training, we seek to optimize an upper bound of the objective, resulting in two simple terms. Furthermore, we relate popular existing methods in domain adaptation, source-free domain adaptation, and contrastive learning via the perspective of discriminability and diversity. The experimental results demonstrate the superiority of our method, and our method can be adopted as a simple but strong baseline for future research in SFDA. Our method can also be adapted to source-free open-set and partial-set DA, which further shows the generalization ability of our method. Code is available at https://github.com/Albert0147/AaD_SFDA.
Accept
This paper proposes a source-free domain adaptation method based on unsupervised clustering. The main assumption is that the source-trained model can generate target-domain features whose predictions are smooth within a neighbourhood. The proposed method optimizes an upper bound of the objective of prediction consistency. Experimental results show that the proposed method outperforms pseudo-labeling and neighbourhood clustering methods. While the main idea is not significantly novel, the effectiveness of the proposed algorithm is demonstrated by solid experimental studies. This is again a simple and efficient deep learning method designed from intuition rather than strong theoretical evidence. I would recommend acceptance of this paper given its impressive performance and solid empirical study.
val
[ "cdCr6YibWuq", "N9G9S11bSi", "xJ0xNFAL1i", "vnnM2QO5cc", "4cNpzLzyFRT", "7EzMtodt57h", "vWNGIw2o0Z", "4WbWcB51ydLH", "9Rd5qxGTT_6", "YyfMI8eimeT", "bDfXXlpHtIf", "gkvyuGnRr9e", "yPIxMQ82M6w", "MeZ7DlfC0uC", "BFQNuOOaXA", "Z_7CrVS2EHB" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply, I updated my score from 6 to 7.", " Thanks for the authors' responses to my concerns about this paper. \n\nThe authors explained some experimental details and addressed my concern about the theoretical and empirical gap of the hyperparameter lambda. I will keep my original positive rat...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "xJ0xNFAL1i", "gkvyuGnRr9e", "vnnM2QO5cc", "YyfMI8eimeT", "7EzMtodt57h", "vWNGIw2o0Z", "bDfXXlpHtIf", "9Rd5qxGTT_6", "Z_7CrVS2EHB", "BFQNuOOaXA", "MeZ7DlfC0uC", "yPIxMQ82M6w", "nips_2022_ZlCpRiZN7n", "nips_2022_ZlCpRiZN7n", "nips_2022_ZlCpRiZN7n", "nips_2022_ZlCpRiZN7n" ]
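The attract/disperse objective in the AaD record above admits a compact sketch: encourage prediction agreement with feature-space nearest neighbors and discourage agreement with everything else. Normalization details and the exact dispersion set are simplifications, not the released implementation:

```python
import torch
import torch.nn.functional as F

def aad_loss(logits, feats, k=5, lam=1.0):
    """Attract-and-disperse sketch over a batch/bank: pull each sample's
    prediction toward those of its k nearest feature-space neighbors
    (attraction) and push it away from everyone else's on average
    (dispersion). Neighbor predictions are detached, as is common."""
    p = F.softmax(logits, dim=-1)                              # (N, C)
    f = F.normalize(feats, dim=-1)
    nn_idx = (f @ f.t()).topk(k + 1, dim=-1).indices[:, 1:]    # drop self
    attract = (p.unsqueeze(1) * p[nn_idx].detach()).sum(-1).mean()
    disperse = (p @ p.t().detach()).mean()
    return -attract + lam * disperse
```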
nips_2022_flBYpZkW6ST
Mingling Foresight with Imagination: Model-Based Cooperative Multi-Agent Reinforcement Learning
Recently, model-based agents have achieved better performance than model-free ones using the same computational budget and training time in single-agent environments. However, due to the complexity of multi-agent systems, it is difficult to learn a model of the environment. Significant compounding error may hinder the learning process when model-based methods are applied to multi-agent tasks. This paper proposes an implicit model-based multi-agent reinforcement learning method based on value decomposition methods. Under this method, agents can interact with the learned virtual environment and evaluate the current state value according to imagined future states in the latent space, giving the agents foresight. Our approach can be applied to any multi-agent value decomposition method. The experimental results show that our method improves sample efficiency in different partially observable Markov decision process domains.
Accept
The reviewers appreciated the paper's originality in its combination of existing components, solid theoretical motivation, clear writing, and evaluation on complex problems. For these reasons, I recommend acceptance.
train
[ "UdQmmHNMmBU", "ImMDf7RJW1", "d6d8WBxcFPL", "dD0g2v7hUXl", "lI-xfqMLJa", "jumptCyjLo", "F9_W-R4o6C-", "Gdt6spgrmxD" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the clarification, and these address my questions.", " The author-reviewer discussion period ends on Aug 9. We hope our responses have addressed the concerns of reviewers and are happy to engage in further discussion for follow-up concerns.\n\nIn this paper, we propose a novel model-base...
[ -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "d6d8WBxcFPL", "nips_2022_flBYpZkW6ST", "dD0g2v7hUXl", "lI-xfqMLJa", "Gdt6spgrmxD", "F9_W-R4o6C-", "nips_2022_flBYpZkW6ST", "nips_2022_flBYpZkW6ST" ]
nips_2022_dFs4d0kqs2
Generalization Analysis on Learning with a Concurrent Verifier
Machine learning technologies have been used in a wide range of practical systems. In practical situations, it is natural to expect the input-output pairs of a machine learning model to satisfy some requirements. However, it is difficult to obtain a model that satisfies requirements by just learning from examples. A simple solution is to add a module that checks whether the input-output pairs meet the requirements and then modifies the model's outputs. Such a module, which we call a {\em concurrent verifier} (CV), can give a certification, although it is unclear how the generalizability of the machine learning model changes when using a CV. This paper gives a generalization analysis of learning with a CV. We analyze how the learnability of a machine learning model changes with a CV and show a condition under which we can obtain a guaranteed hypothesis using a verifier only at inference time. We also show that typical error bounds based on Rademacher complexity will be no larger than those of the original model when using a CV in multi-class classification and structured prediction settings.
Accept
All reviewers liked the presented approach of using a concurrent verifier in both the learning and inference phases in the PAC setting. The paper also presents theoretical proofs for bounds when using such a verifier. The reviews provide many suggestions for improving both the correctness and clarity of the presentation, which would be great to incorporate in the next version of the paper.
train
[ "TqDBuTu5YqR", "tPe2kkYwIiT", "pl3cU5ftT8E", "qNzi0Vecd63", "7RISOB7vLjW", "92WSbPsVs9J", "N0fzTjMnaUc", "7TRXaOGfwla" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The primary concerns of the reviewer have been met. I am raising the score. ", " Thank you for reviewing our paper. We have updated the paper and the supplementary material to reflect comments from reviewers.", " Thank you for reviewing our paper. We are happy to hear that you appreciate the importance of th...
[ -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, 2, 4, 2 ]
[ "7RISOB7vLjW", "nips_2022_dFs4d0kqs2", "7TRXaOGfwla", "N0fzTjMnaUc", "92WSbPsVs9J", "nips_2022_dFs4d0kqs2", "nips_2022_dFs4d0kqs2", "nips_2022_dFs4d0kqs2" ]
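At inference time, the concurrent verifier in the record above is just a wrapper that rejects outputs violating the requirements. A toy multi-class instance follows; the `is_valid` predicate is task-specific and assumed given:

```python
def verified_predict(scores, x, is_valid):
    """Inference-time concurrent verifier: among the model's candidate
    outputs, return the highest-scoring one whose (input, output) pair
    passes the requirement check `is_valid(x, y)`."""
    ranked = sorted(range(len(scores)), key=lambda y: -scores[y])
    for y in ranked:
        if is_valid(x, y):
            return y
    raise ValueError("no output satisfies the requirements for this input")
```

The paper's analysis then asks how learnability and Rademacher-complexity-based bounds behave when the hypothesis class is composed with such a verifier.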
nips_2022_nE6vnoHz9--
Bridge the Gap Between Architecture Spaces via A Cross-Domain Predictor
Neural Architecture Search (NAS) can automatically design promising neural architectures without human design expertise. Though it achieves great success, a prohibitively high search cost is required to find a high-performance architecture, which blocks its practical application. Neural predictors can directly evaluate the performance of neural networks based on their architectures and thereby save much of the search budget. However, existing neural predictors require substantial annotated architectures trained from scratch, which still consumes considerable computational resources. To solve this issue, we propose a Cross-Domain Predictor (CDP), which is trained on the existing NAS benchmark datasets (e.g., NAS-Bench-101) but can be used to find high-performance architectures in large-scale search spaces. In particular, we propose a progressive subspace adaptation strategy to address the domain discrepancy between the source architecture space and the target space. Considering the large difference between the two architecture spaces, an assistant space is developed to smooth the transfer process. Compared with existing NAS methods, the proposed CDP is much more efficient. For example, CDP only requires a search cost of 0.1 GPU days to find architectures with 76.9% top-1 accuracy on ImageNet and 97.51% on CIFAR-10.
Accept
This paper introduces ideas from domain adaptation to improve NAS: leveraging existing NAS benchmarks to predict out-of-domain architectures' performance. This is an important question that has been overlooked, and the authors propose to learn a predictor by closing the domain feature gap between the source and target architecture spaces. Both the problem setting and the proposed method are novel. The authors also present good ablation studies, and the rebuttal was able to address a few clarification questions. After the rebuttal, all reviewers seem to be positive about this work, and the AC sides with them.
train
[ "6BD4zriDUvJ", "1H1Wi75b18", "IE26003xH3", "2P6KvIznveN", "0fHI-JpZ3rI", "RWo39nfI0ru", "PBqjio7dHAc", "8EksJ4FA8Yl", "H2ZEHAq3cG", "yVXPlAt7x-z", "AcuV3g4OE_J", "m2Dldj9zDU" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the feedback! We are grateful for the constructive comments and support.", " Thank you for your response. It is somewhat surprising that it is not necessary to measure GPU Days on the same device for benchmarking (it would be nice to specify the machine for each baseline, but if the convention is so,...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4, 5 ]
[ "1H1Wi75b18", "PBqjio7dHAc", "2P6KvIznveN", "0fHI-JpZ3rI", "m2Dldj9zDU", "AcuV3g4OE_J", "yVXPlAt7x-z", "H2ZEHAq3cG", "nips_2022_nE6vnoHz9--", "nips_2022_nE6vnoHz9--", "nips_2022_nE6vnoHz9--", "nips_2022_nE6vnoHz9--" ]
nips_2022_swIARHfCaUB
Online Frank-Wolfe with Arbitrary Delays
The online Frank-Wolfe (OFW) method has gained much popularity for online convex optimization due to its projection-free property. Previous studies show that OFW can attain an $O(T^{3/4})$ regret bound for convex losses and an $O(T^{2/3})$ regret bound for strongly convex losses. However, they assume that each gradient queried by OFW is revealed immediately, which may not hold in practice and limits the application of OFW. To address this limitation, we propose a delayed variant of OFW, which allows gradients to be delayed by arbitrary rounds. The main idea is to perform an update similar to OFW after receiving any delayed gradient, and play the latest decision for each round. Despite its simplicity, we prove that our delayed variant of OFW is able to achieve an $O(T^{3/4}+dT^{1/4})$ regret bound for convex losses and an $O(T^{2/3}+d\log T)$ regret bound for strongly convex losses, where $d$ is the maximum delay. This is quite surprising since under a relatively large amount of delay (e.g., $d=O(\sqrt{T})$ for convex losses and $d=O(T^{2/3}/\log T)$ for strongly convex losses), the delayed variant of OFW enjoys the same regret bound as that of the original OFW.
Accept
Following a discussion with the authors, all reviewers were in favor of accepting except for Reviewer 6sxN, whose main concern was the experimental evaluation. From my own look into the paper, I tend to agree with the reviewers: the paper is well-written; it solves a natural (even if a bit niche) problem with a simple algorithm; and the theoretical analysis is well-executed and even somewhat surprising: the effect of the delay turns out to be additive, rather than multiplicative as is the case in other online optimization settings. The concerns around experimental evaluation are valid, but I do not feel that strong experiments are crucial in such a theory-focused paper. All things considered, I gladly recommend accepting the paper.
train
[ "9dSG3pFnIYd", "Ni7rkPyXr0R", "ZzuMoigj8jJ", "KSFQLzxsFgF", "rB2YaYdjPOO", "dVMXgU5RxK", "xgbC8Lcn4F", "zgRv5OXgTK", "TElIIqZTtdu", "rTSLHRiYTl", "Ywo5ZnqNHah", "5NGMoolR0pg" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the suggestion. We will revise our paper accordingly.", " I would like to thank the authors for their complete response. The authors addressed my concerns.\n\nIn point 2, the authors mentioned that \"...our delayed OFW can keep the same regret bound as OFW mainly due to the additive effect between th...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 5 ]
[ "Ni7rkPyXr0R", "rB2YaYdjPOO", "xgbC8Lcn4F", "5NGMoolR0pg", "Ywo5ZnqNHah", "rTSLHRiYTl", "TElIIqZTtdu", "nips_2022_swIARHfCaUB", "nips_2022_swIARHfCaUB", "nips_2022_swIARHfCaUB", "nips_2022_swIARHfCaUB", "nips_2022_swIARHfCaUB" ]
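The delayed OFW update described in the abstract above is simple enough to sketch: play the latest decision each round, and take one Frank-Wolfe-style step per received (possibly delayed) gradient. The linear minimization oracle `lmo` and the constant step size are placeholders, not the paper's exact choices:

```python
import numpy as np

def delayed_ofw(lmo, grads_by_round, x0, eta=0.1):
    """Sketch of delayed OFW: play the latest decision every round, and take
    one Frank-Wolfe-style step per *received* gradient, however late it
    arrives. `grads_by_round[t]` lists the gradients arriving at round t;
    the bare-gradient surrogate here simplifies the paper's scheme."""
    x = np.asarray(x0, dtype=float).copy()
    played = []
    for arriving in grads_by_round:      # one entry per round t = 1..T
        played.append(x.copy())          # play the latest decision
        for g in arriving:               # delayed gradients arriving this round
            v = lmo(g)                   # argmin_{v in K} <g, v>
            x = x + eta * (v - x)        # convex-combination step stays in K
    return played

# toy usage: feasible set = probability simplex, whose LMO picks a vertex
def simplex_lmo(g):
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v
```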