Dataset columns (name: type, observed range):
paper_id: stringlengths (19 to 21)
paper_title: stringlengths (8 to 170)
paper_abstract: stringlengths (8 to 5.01k)
paper_acceptance: stringclasses (18 values)
meta_review: stringlengths (29 to 10k)
label: stringclasses (3 values)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
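The records below follow this schema, one field per line and one record per paper. A minimal sketch of how rows with this layout could be loaded and inspected using the Hugging Face `datasets` library; the dataset path used here is a hypothetical placeholder (not the real hub identifier), and the split name is an assumption, since the `label` column itself marks train/val/test membership.

```python
# Minimal sketch, assuming the data is hosted on the Hugging Face Hub.
# "some-org/openreview-meta-reviews" is a hypothetical placeholder path,
# not the actual identifier of this dataset.
from datasets import load_dataset

ds = load_dataset("some-org/openreview-meta-reviews", split="train")

row = ds[0]
print(row["paper_id"])          # e.g. "nips_2022_C7jm6YgJaT"
print(row["paper_title"])       # paper title string
print(row["paper_acceptance"])  # one of 18 decision strings, e.g. "Accept"
print(row["label"])             # "train", "val", or "test"

# The six review_* columns are parallel lists of equal length: entry i of
# review_writers, review_ratings, review_confidences, and review_reply_tos
# all describe the comment whose id is review_ids[i].
for rid, writer, rating in zip(row["review_ids"],
                               row["review_writers"],
                               row["review_ratings"]):
    print(rid, writer, rating)
```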
nips_2022_C7jm6YgJaT
Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Data-free Knowledge Distillation (DFKD) has attracted attention recently thanks to its appealing capability of transferring knowledge from a teacher network to a student network without using training data. The main idea is to use a generator to synthesize data for training the student. As the generator gets updated, the distribution of synthetic data will change. Such a distribution shift could be large if the generator and the student are trained adversarially, causing the student to forget the knowledge it acquired at the previous steps. To alleviate this problem, we propose a simple yet effective method called Momentum Adversarial Distillation (MAD) which maintains an exponential moving average (EMA) copy of the generator and uses synthetic samples from both the generator and the EMA generator to train the student. Since the EMA generator can be considered an ensemble of the generator's old versions and often undergoes a smaller change in updates compared to the generator, training on its synthetic samples can help the student recall the past knowledge and prevent the student from adapting too quickly to the new updates of the generator. Our experiments on six benchmark datasets including big datasets like ImageNet and Places365 demonstrate the superior performance of MAD over competing methods for handling the large distribution shift problem. Our method also compares favorably to existing DFKD methods and even achieves state-of-the-art results in some cases.
Accept
This paper trains a generator to produce synthetic data for knowledge distillation from a teacher model, thus allowing distillation without the need for the original training data. The reviewers generally liked and had positive things to say about the method as well as the presentation, and the discussion was mostly around clarification and having better comparisons. This seemed to have satisfied the reviewers who responded to the rebuttals (not all of them did), and from my reading the authors did a good job at responding to concerns of already mostly positive reviews. The one negative review, which I felt was a little off the mark, was addressed well by the rebuttals, but the reviewer dropped out afterwards and did not back up their ongoing criticisms. I therefore recommend acceptance of this paper to NeurIPS. Overall, discussion was rather limited, but this could be because the reviewers didn't have any serious concerns from the start and discussion was straightforward. I wish tq3D had contributed a little more as it would have been nice to arrive at a consensus.
train
[ "6LEEWFV5bs", "SOK9QVMRv9X", "YSYi3mGiky", "0jzPA-9qdbb", "tWpGyBvgzgq", "YWcs83r-fR", "EdysfoRhBB", "319ltvrhjLR", "HSxeEAy-T0L", "6A1-fOKbeb-", "Lkb8GLW0_9-", "wI7W-lwtlP9", "e4qc2f4aDRG" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. We really appreciate your time and consideration.", " I have carefully read the responses and other reviewers' comments. My concerns have been properly addressed. I think this paper is a good complement to the field of data-free distillation, and have raised the score to 7.", " Th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "SOK9QVMRv9X", "YWcs83r-fR", "e4qc2f4aDRG", "e4qc2f4aDRG", "wI7W-lwtlP9", "Lkb8GLW0_9-", "6A1-fOKbeb-", "6A1-fOKbeb-", "nips_2022_C7jm6YgJaT", "nips_2022_C7jm6YgJaT", "nips_2022_C7jm6YgJaT", "nips_2022_C7jm6YgJaT", "nips_2022_C7jm6YgJaT" ]
nips_2022_Fx7oXUVEPW
A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits
We study federated contextual linear bandits, where $M$ agents cooperate with each other to solve a global contextual linear bandit problem with the help of a central server. We consider the asynchronous setting, where all agents work independently and the communication between one agent and the server will not trigger other agents' communication. We propose a simple algorithm named FedLinUCB based on the principle of optimism. We prove that the regret of FedLinUCB is bounded by $\widetilde{\mathcal{O}}(d\sqrt{\sum_{m=1}^M T_m})$ and the communication complexity is $\widetilde{O}(dM^2)$, where $d$ is the dimension of the contextual vector and $T_m$ is the total number of interactions with the environment by agent $m$. To the best of our knowledge, this is the first provably efficient algorithm that allows fully asynchronous communication for federated linear bandits, while achieving the same regret guarantee as in the single-agent setting.
Accept
The reviewers all recommend acceptance (to varying extents), and the AC also shares their opinion. Regarding the experiments, please make sure that the final version of the paper complies with the relevant parts of the checklist (e.g., report error bars). It could also be interesting to see an experiment where $\theta^*$ is non-uniform.
train
[ "qasRiAwKrO", "SZQUjpBqaFR", "BCA4HY2rw-y", "_49LEJsQerI", "HGbB5VY7UW", "43JCxjpxDy1", "esajHkAqYJH", "vk1hz3pwML3", "sVXfMNai5-b", "Zz9JJ1-iE4i", "GMdZswhy5xR" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nSince the deadline for the author-reviewer discussion phase is fast approaching, we would like to follow up with you to see if you have any further questions. \n\nIn our rebuttal, we have addressed all your questions. In particular, per your suggestion, we have added numerical experiments to co...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "sVXfMNai5-b", "GMdZswhy5xR", "nips_2022_Fx7oXUVEPW", "GMdZswhy5xR", "GMdZswhy5xR", "Zz9JJ1-iE4i", "sVXfMNai5-b", "sVXfMNai5-b", "nips_2022_Fx7oXUVEPW", "nips_2022_Fx7oXUVEPW", "nips_2022_Fx7oXUVEPW" ]
nips_2022_bg7d_2jWv6
On Divergence Measures for Bayesian Pseudocoresets
A Bayesian pseudocoreset is a small synthetic dataset for which the posterior over parameters approximates that of the original dataset. While promising, the scalability of Bayesian pseudocoresets has not yet been validated in large-scale problems such as image classification with deep neural networks. On the other hand, dataset distillation methods similarly construct a small dataset such that the optimization with the synthetic dataset converges to a solution similar to optimization with full data. Although dataset distillation has been empirically verified in large-scale settings, the framework is restricted to point estimates, and its adaptation to Bayesian inference has not been explored. This paper casts two representative dataset distillation algorithms as approximations to methods for constructing pseudocoresets by minimizing specific divergence measures: reverse KL divergence and Wasserstein distance. Furthermore, we provide a unifying view of such divergence measures in Bayesian pseudocoreset construction. Finally, we propose a novel Bayesian pseudocoreset algorithm based on minimizing forward KL divergence. Our empirical results demonstrate that the pseudocoresets constructed from these methods reflect the true posterior even in large-scale Bayesian inference problems.
Accept
There was a consensus among reviewers that this paper should be accepted. The authors formulate dataset distillation methods as approximate Bayesian pseudocoreset procedures which appears to be a novel viewpoint. They further propose a new coreset procedure and show that it performs well. The paper appears to be well-written.
train
[ "vyfgk15uQSV", "iJ3R-OBEP4j", "jzVFAWa0z9I", "MGrBO8mlcpm", "Fs6Y5fvWuNo", "xgWgt-cLvwo", "g25Hk9zofNH", "h8QY3AMAkuO", "rRJboalgx30", "cdOoLvcJNtM", "sZWmVatGXi5", "U2JLGJvw_G6", "ddNdxSIEal", "k-FHeUUZ5qx", "hNvig7OzjC5", "zVt9vEI3sUu", "an6PijyU88E" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. We have included them in Figure 7. ", " Thanks for the new results.\nYou will need to include them to figure 7 to make the trend clear to readers\nBut the results looks great to me as they give an idea of how large the set is needed for a nearly perfect match.", " We appreciate yo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "iJ3R-OBEP4j", "Fs6Y5fvWuNo", "MGrBO8mlcpm", "U2JLGJvw_G6", "xgWgt-cLvwo", "rRJboalgx30", "nips_2022_bg7d_2jWv6", "nips_2022_bg7d_2jWv6", "an6PijyU88E", "zVt9vEI3sUu", "hNvig7OzjC5", "hNvig7OzjC5", "k-FHeUUZ5qx", "nips_2022_bg7d_2jWv6", "nips_2022_bg7d_2jWv6", "nips_2022_bg7d_2jWv6", ...
nips_2022_OMZG4vsKmm7
Domain Adaptation under Open Set Label Shift
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS), where the label distribution can change arbitrarily and a new class may arrive during deployment, but the class-conditional distributions $p(x|y)$ are domain-invariant. OSLS subsumes domain adaptation under label shift and Positive-Unlabeled (PU) learning. The learner's goals here are two-fold: (a) estimate the target label distribution, including the novel class; and (b) learn a target classifier. First, we establish the necessary and sufficient conditions for identifying these quantities. Second, motivated by advances in label shift and PU learning, we propose practical methods for both tasks that leverage black-box predictors. Unlike typical Open Set Domain Adaptation (OSDA) problems, which tend to be ill-posed and amenable only to heuristics, OSLS offers a well-posed problem amenable to more principled machinery. Experiments across numerous semi-synthetic benchmarks on vision, language, and medical datasets demonstrate that our methods consistently outperform OSDA baselines, achieving $10$--$25\%$ improvements in target domain accuracy. Finally, we analyze the proposed methods, establishing finite-sample convergence to the true label marginal and convergence to the optimal classifier for linear models in a Gaussian setup. Code is available at https://github.com/acmi-lab/Open-Set-Label-Shift.
Accept
The paper addresses an interesting domain adaptation question and proposes a novel and elegant solution supported with relevant theory. Although some issues have been raised, all reviewers agree that the paper is worth publishing, and we expect the authors to take into account the comments of the reviewers (e.g., discussing limitations of PULSE, checking positivity conditions...)
train
[ "OYfXrGIhrLh", "eC_-qIy3tHQ", "mhBEuO8mnNi", "JqKVglh1cj6", "Ix1AR9tAGUM", "fV_wcF2D0i", "_jhz1r9uyD", "aj9zuF1NMHf", "emBQqZQXugo", "Qi_RMEuoYxv", "SMNZ5P0Vg3f4", "rLUs7A2Ke9Z", "tphZ7Wen9MA", "CZjYFs7D1U", "YmMQwFyBxX8", "VLds9CgKWS", "8EymETiYcKF", "Zu4Nh3_JjY5" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi,\nThanks for the reply, your reply has covered most of my main concerns, and I will raise my assessment.", " Thanks again for your thoughtful review! Since the discussion window is closing, we just wanted to check in to see if our replies successfully addressed your primary concerns or if there is anything e...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "JqKVglh1cj6", "YmMQwFyBxX8", "Zu4Nh3_JjY5", "8EymETiYcKF", "SMNZ5P0Vg3f4", "_jhz1r9uyD", "Zu4Nh3_JjY5", "emBQqZQXugo", "Qi_RMEuoYxv", "8EymETiYcKF", "rLUs7A2Ke9Z", "VLds9CgKWS", "YmMQwFyBxX8", "nips_2022_OMZG4vsKmm7", "nips_2022_OMZG4vsKmm7", "nips_2022_OMZG4vsKmm7", "nips_2022_OMZG...
nips_2022_krV1UM7Uw1
Robust Bayesian Regression via Hard Thresholding
By combining robust regression and prior information, we develop an effective robust regression method that can resist adaptive adversarial attacks. Due to the widespread existence of noise and data corruption, it is necessary to recover the true regression parameters when a certain proportion of the response variables have been corrupted. Methods to overcome this problem often involve robust least-squares regression. However, few methods achieve good performance when dealing with severe adaptive adversarial attacks. Based on the combination of prior information and robust regression via hard thresholding, this paper proposes an algorithm that improves the breakdown point when facing adaptive adversarial attacks. Furthermore, to improve the robustness and reduce the estimation error caused by the inclusion of a prior, the idea of Bayesian reweighting is used to construct a more robust algorithm. We prove the theoretical convergence of proposed algorithms under mild conditions. Extensive experiments show that, under different dataset attacks, our algorithms achieve state-of-the-art results compared with other benchmark algorithms, demonstrating the robustness of the proposed approach.
Accept
The paper studies the problem of label-outlier robust regression with prior on the optimal parameter. The reviewers agree that the results are novel and significant. There is certainly a concern about the novelty of the method and about additional insights provided by the result. However, as the paper studies this relatively new problem and provides solid results for it, we recommend accepting it for publication.
test
[ "x_1Qhf8WL60", "gjCvuHnS8wP", "Umwh0mZu4e", "RwoDpda-x-g", "KqURNhZcGEC", "amVzaNH88-C", "yUZ71Qvw_5", "obPS0PGa756", "X5fZxRfRbUI" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. I appreciate the authors provide additional experiments as well as more clarification in the revision. I would keep my original evaluation leaning toward acceptance. ", " Thank you for the further comment and for raising our score! We want to address our thoughts on the breakdown poin...
[ -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "KqURNhZcGEC", "Umwh0mZu4e", "amVzaNH88-C", "X5fZxRfRbUI", "obPS0PGa756", "yUZ71Qvw_5", "nips_2022_krV1UM7Uw1", "nips_2022_krV1UM7Uw1", "nips_2022_krV1UM7Uw1" ]
nips_2022__WHs1ruFKTD
A Closer Look at the Adversarial Robustness of Deep Equilibrium Models
Deep equilibrium models (DEQs) refrain from the traditional layer-stacking paradigm and turn to find the fixed point of a single layer. DEQs have achieved promising performance on various applications with notable memory efficiency. At the same time, the adversarial vulnerability of DEQs raises concerns. Several works propose to certify robustness for monotone DEQs. However, limited efforts are devoted to studying empirical robustness for general DEQs. To this end, we observe that an adversarially trained DEQ requires more forward steps to arrive at the equilibrium state, or even violates its fixed-point structure. Moreover, the forward and backward tracks of DEQs are misaligned due to the black-box solvers. These facts cause gradient obfuscation when applying the ready-made attacks to evaluate or adversarially train DEQs. Given this, we develop approaches to estimate the intermediate gradients of DEQs and integrate them into the attacking pipelines. Our approaches facilitate fully white-box evaluations and lead to effective adversarial defense for DEQs. Extensive experiments on CIFAR-10 show that DEQs achieve adversarial robustness competitive with deep networks of similar sizes.
Accept
This paper studies the empirical robustness of the general deep equilibrium model (DEQ) in the traditional white-box attack-defense setting. As the topic is under-explored in the literature, the authors first pointed out the challenges of training robust DEQs. Then, they developed a method to estimate the intermediate gradients of DEQs and integrate them into the adversarial attack pipelines. The authors did a good job to address the reviewers' concerns in the author-reviewer discussion phase, and at the end, all reviewers unanimously support the acceptance. Although AC sees some limitations, e.g., limited advantages of using robust DEQs over deep CNNs, scalability to large-scale datasets and training instability, AC thinks the merits of this paper outweigh them: this paper can be a useful guideline when researchers pursue the under-explored problem in the future. Hence, AC recommends acceptance.
val
[ "RnJ_h7Wj4kB", "0Pq6Um1exW_", "l_EoOvaR4nV", "rz-JbmKVI2S", "JVJwXUl7P0", "XLQ4olDxPB5", "RemF4D-O-qq", "_9SkVBEluG", "12xzldSXidd" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for updating the score! Your feedback really helped us improve our work. We will further revise our paper with the added experiments and discussions.", " Thank you for the extra experiments and explanations. The extra results solved my concern and I would raise my score.", " Dear Reviewers,\n\nThank...
[ -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "0Pq6Um1exW_", "JVJwXUl7P0", "nips_2022__WHs1ruFKTD", "12xzldSXidd", "_9SkVBEluG", "RemF4D-O-qq", "nips_2022__WHs1ruFKTD", "nips_2022__WHs1ruFKTD", "nips_2022__WHs1ruFKTD" ]
nips_2022_z9cpLkoSNNh
Continual learning: a feature extraction formalization, an efficient algorithm, and fundamental obstructions
Continual learning is an emerging paradigm in machine learning, wherein a model is exposed in an online fashion to data from multiple different distributions (i.e. environments), and is expected to adapt to the distribution change. Precisely, the goal is to perform well in the new environment, while simultaneously retaining the performance on the previous environments (i.e. avoid ``catastrophic forgetting''). While this setup has enjoyed a lot of attention in the applied community, there hasn't been theoretical work that even formalizes the desired guarantees. In this paper, we propose a framework for continual learning through the lens of feature extraction---namely, one in which features, as well as a classifier, are being trained with each environment. When the features are linear, we design an efficient gradient-based algorithm $\mathsf{DPGrad}$, that is guaranteed to perform well on the current environment, as well as avoid catastrophic forgetting. In the general case, when the features are non-linear, we show such an algorithm cannot exist, whether efficient or not.
Accept
This paper provides a theoretical analysis of continual learning when the learner is modeled as a featurizer followed by a linear head. The analysis provides theoretical guarantees on learnability when the featurizer is linear, and is learned using doubly projected gradient descent. The guarantee ensures good accuracy on all environments and resilience to catastrophic forgetting. For a nonlinear featurizer, it is shown that continual learning is not possible in general: there exist scenarios in which, even when good features exist, either catastrophic forgetting or poor performance must occur. Reviewers raised questions about the implications of the theory for practical settings (s7RD, KsmC about how useful the results are in practice, cwgv on whether the current analysis based on quadratic activation functions can carry over to ReLU, and VAS6 about whether the analysis works for classification as well [instead of regression]). The authors responded to these. They highlighted how the algorithm in the linear setting provides an insight into Orthogonal Gradient Descent (OGD), which is a known algorithm for continual learning. The authors also explained the significance of the lower bounds for understanding what is fundamentally possible and impossible in continual learning. Moreover, the authors clarified that quadratic activation is not essential for the lower bound, and extended their proof to ReLU in the revised version of the submission. The authors also clarified that classification could be treated as a special case of regression, with target values being discrete, and hence the result is not limited to regression only (although I, the AC, add that for classification, L2 loss is less common, and by classification, we typically refer to loss functions like cross-entropy, but the results of the paper are interesting regardless). Final scores are all on the accept side, indicating reviewers have found the contributions strong enough for the submission to be published. Accordingly, I think this paper provides very interesting insights about some fundamental aspects of learnability in continual settings.
train
[ "c92O3Bd4jrA", "SMuSSxXao_3", "tQ8FjTd3gR", "ecVgBXczXK-", "uN4gJfGLIVt", "vAbyVMaJtlP", "yMPcFkT4_r3", "FKXlwydIBaQ", "M6b2lgOlOf7", "iFTTaHv1ICc", "u7MIlPyZ7nm", "ecvhmwIHZEJ", "Fe1mcwnsYhD", "dQ6hJWqK5V", "JhCWB08j5Cm", "LEq_Cpvo1AL", "LZhUTfWnGQ_", "akGyodr4ndn" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarifications! Indeed OGD seems to have a massive forgetting. Looking at the results with a variety of task orders, I have a little concern about this trade-off of plasticity v/s learnability of new tasks, which is in general a big question in the CL community. This problem is in general alleviat...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "SMuSSxXao_3", "FKXlwydIBaQ", "yMPcFkT4_r3", "uN4gJfGLIVt", "vAbyVMaJtlP", "JhCWB08j5Cm", "LEq_Cpvo1AL", "u7MIlPyZ7nm", "iFTTaHv1ICc", "akGyodr4ndn", "LZhUTfWnGQ_", "LEq_Cpvo1AL", "JhCWB08j5Cm", "JhCWB08j5Cm", "nips_2022_z9cpLkoSNNh", "nips_2022_z9cpLkoSNNh", "nips_2022_z9cpLkoSNNh",...
nips_2022_azBVn74t_2
DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data
Generative adversarial nets (GANs) have been remarkably successful at learning to sample from distributions specified by a given dataset, particularly if the given dataset is reasonably large compared to its dimensionality. However, given limited data, classical GANs have struggled, and strategies like output-regularization, data-augmentation, use of pre-trained models and pruning have been shown to lead to improvements. Notably, the applicability of these strategies is often constrained to particular settings, e.g., availability of a pretrained GAN, or increases training time, e.g., when using pruning. In contrast, we propose a Discriminator gradIent Gap regularized GAN (DigGAN) formulation which can be added to any existing GAN. DigGAN augments existing GANs by encouraging a small gap between the norm of the gradient of a discriminator's prediction w.r.t. real images and that w.r.t. the generated samples. We observe this formulation to avoid bad attractors within the GAN loss landscape, and we find DigGAN to significantly improve the results of GAN training when limited data is available.
Accept
The paper proposes a regularizer for limited-data GAN training. All three reviewers thought the experiments were adequate to demonstrate the method's usefulness and the writing was clear. The paper's biggest weakness seems to be unconvincing conceptual intuition and lack of theoretical justification (pointed out by reviewers Mjjy and Z5nH). This is a borderline paper but I recommend acceptance.
train
[ "os_tWnqu1MY", "dH7bPl81DlG", "FMwd0tdiWPo", "Ldjvz55cI-l", "sD7k7UOWTQ", "3_LaHeI1eho", "S6cRpamMaIs", "KI0Y-gbshqL", "ykSCEjDybf9", "DuqSHFBsYBc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate authors' effort on addressing my concerns. After checking the responses, as well as other reviewers' comments and authors feedback, my concerns have been well addressed. ", " Thank you for the detailed responses to my questions, and considering including the additional 2D experiments as parts of th...
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "sD7k7UOWTQ", "S6cRpamMaIs", "Ldjvz55cI-l", "DuqSHFBsYBc", "ykSCEjDybf9", "KI0Y-gbshqL", "KI0Y-gbshqL", "nips_2022_azBVn74t_2", "nips_2022_azBVn74t_2", "nips_2022_azBVn74t_2" ]
nips_2022_mE1QoOe5juz
Tiered Reinforcement Learning: Pessimism in the Face of Uncertainty and Constant Regret
We propose a new learning framework that captures the tiered structure of many real-world user-interaction applications, where the users can be divided into two groups based on their different tolerance for exploration risks and should be treated separately. In this setting, we simultaneously maintain two policies $\pi^{\text{O}}$ and $\pi^{\text{E}}$: $\pi^{\text{O}}$ (``O'' for ``online'') interacts with more risk-tolerant users from the first tier and minimizes regret by balancing exploration and exploitation as usual, while $\pi^{\text{E}}$ (``E'' for ``exploit'') exclusively focuses on exploitation for risk-averse users from the second tier utilizing the data collected so far. An important question is whether such a separation yields advantages over the standard online setting (i.e., $\pi^{\text{E}}=\pi^{\text{O}}$) for the risk-averse users. We individually consider the gap-independent vs.~gap-dependent settings. For the former, we prove that the separation is indeed not beneficial from a minimax perspective. For the latter, we show that if we choose Pessimistic Value Iteration as the exploitation algorithm to produce $\pi^{\text{E}}$, we can achieve a constant regret for risk-averse users independent of the number of episodes $K$, which is in sharp contrast to the $\Omega(\log K)$ regret for any online RL algorithms in the same setting, while the regret of $\pi^{\text{O}}$ (almost) maintains its online regret optimality and does not need to compromise for the success of $\pi^{\text{E}}$.
Accept
The new two-group RL framework is interesting, even though it is somewhat restrictive in assuming the exact same model for both groups. Both the gap-independent and the gap-dependent settings are discussed properly, with lower and upper bounds. Overall we believe that the paper is worth publishing at NeurIPS.
train
[ "4RK5prrO4M", "QJ71Z_ldNlV", "Pr3FalH_ZAV", "Nlm-GWrw32D", "MLrEi0xyc-v", "G91xWNFRILa", "IBlFAuSuNB2", "QddPZvX_AfA", "98UtLdZrC0", "xjL_YDd9r7F", "Rz9UXTOULf2", "366hMTHeNx", "WHHu5_Z7gf" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > plan 2 has high uncerntainty (large confidence interval) but its true expected value has chance to be higher than plan 1.\n\nSuch a type of uncertainty may be related to what is called \"ambiguity\" in decision-making theory. Anyway, people usually avoid an arm with large confidence interval (called ambiguity a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "QJ71Z_ldNlV", "Pr3FalH_ZAV", "MLrEi0xyc-v", "IBlFAuSuNB2", "WHHu5_Z7gf", "366hMTHeNx", "Rz9UXTOULf2", "xjL_YDd9r7F", "nips_2022_mE1QoOe5juz", "nips_2022_mE1QoOe5juz", "nips_2022_mE1QoOe5juz", "nips_2022_mE1QoOe5juz", "nips_2022_mE1QoOe5juz" ]
nips_2022_qbSB_cnFSYn
DEQGAN: Learning the Loss Function for PINNs with Generative Adversarial Networks
Solutions to differential equations are of significant scientific and engineering relevance. Physics-Informed Neural Networks (PINNs) have emerged as a promising method for solving differential equations, but they lack a theoretical justification for the use of any particular loss function. This work presents Differential Equation GAN (DEQGAN), a novel method for solving differential equations using generative adversarial networks to "learn the loss function" for optimizing the neural network. Presenting results on a suite of twelve ordinary and partial differential equations, including the nonlinear Burgers', Allen-Cahn, Hamilton, and modified Einstein's gravity equations, we show that DEQGAN can obtain multiple orders of magnitude lower mean squared errors than PINNs that use $L_2$, $L_1$, and Huber loss functions. We also show that DEQGAN achieves solution accuracies that are competitive with popular numerical methods. Finally, we present two methods to improve the robustness of DEQGAN to different hyperparameter settings.
Reject
This paper presents a new method for solving differential equations using generative adversarial networks to "learn the loss function" for optimizing the neural network. After the discussion, the reviewers still have a few major concerns: (1) The authors claim that existing methods lack theoretical justification. However, the paper does not provide a sufficient justification for the proposed method either, which makes the key motivation of the paper questionable. (2) Some important baseline methods are missing in the comparison as well as references. The authors should improve their literature survey. (3) The computational challenges of solving PDEs mainly lie in high dimensionality. Most existing deep-learning-based PDE solvers, including PINN, attempt to demonstrate the benefit of using deep neural networks for approximating high-dimensional functions or operators. However, the experiments only consider low-dimensional PDEs, which are not difficult to solve, and existing numerical methods can solve them efficiently without deep neural networks and complicated tuning.
train
[ "YVQyb5O5CYL", "OdxLMcOEbmq", "mwcqBji_mO3", "pdwWPEgD8g", "8EgSmS3n1I", "z-XMKOvAs7", "lgAcmR7myME", "jIkg2sL4vow", "8FmJF6QIkrr", "h2UadikiS3", "HQuIij8JYUa", "IaZwF7t__ax", "rOGwO5FbR9" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \n\nDear Reviewers,\n\nWe are entering the discussion phase, where the authors will be not involved in the discussion.\n\nI would like to request you to confirm that you have already read the rebuttal from the authors.\n\nBest\n\nAC\n", " Many thanks to the AC for the comments, and we'd be happy to clarify:\n\n...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "nips_2022_qbSB_cnFSYn", "mwcqBji_mO3", "nips_2022_qbSB_cnFSYn", "z-XMKOvAs7", "lgAcmR7myME", "jIkg2sL4vow", "h2UadikiS3", "rOGwO5FbR9", "IaZwF7t__ax", "HQuIij8JYUa", "nips_2022_qbSB_cnFSYn", "nips_2022_qbSB_cnFSYn", "nips_2022_qbSB_cnFSYn" ]
nips_2022_dmCyoqxEwHf
GenerSpeech: Towards Style Transfer for Generalizable Out-Of-Domain Text-to-Speech
Style transfer for out-of-domain (OOD) speech synthesis aims to generate speech samples with unseen style (e.g., speaker identity, emotion, and prosody) derived from an acoustic reference, while facing the following challenges: 1) The highly dynamic style features in expressive voice are difficult to model and transfer; and 2) the TTS models should be robust enough to handle diverse OOD conditions that differ from the source data. This paper proposes GenerSpeech, a text-to-speech model towards high-fidelity zero-shot style transfer of OOD custom voice. GenerSpeech decomposes the speech variation into the style-agnostic and style-specific parts by introducing two components: 1) a multi-level style adaptor to efficiently model a large range of style conditions, including global speaker and emotion characteristics, and the local (utterance, phoneme, and word-level) fine-grained prosodic representations; and 2) a generalizable content adaptor with Mix-Style Layer Normalization to eliminate style information in the linguistic content representation and thus improve model generalization. Our evaluations on zero-shot style transfer demonstrate that GenerSpeech surpasses the state-of-the-art models in terms of audio quality and style similarity. The extension studies to adaptive style transfer further show that GenerSpeech performs robustly in the few-shot data setting. Audio samples are available at \url{https://GenerSpeech.github.io/}.
Accept
All 3 reviewers agree that the paper is novel, technically strong, and experimentally convincing. This paper should be accepted.
train
[ "B5sIosrToDt", "j-A7kQMpFz", "MY2ihPzavOf", "NDrrOC3Epq", "f8Ye0XkoAHT", "aRRaax5q2B1", "qwHa5Nw91Nj", "fFb2Ys3ktVU", "NQC0pKGgWGw", "qoLu_LJwfQR", "n4naLMwC0HT", "_bhoGwgl5FG", "iCuxP45zsuc" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We greatly appreciate that you have raised your score. We believe that your valuable comments have improved the paper, and feel free to ask more questions if you have any time. Thank you again for raising the score.", " \nTo all reviewers, ACs, and PCs:\n\nWe thank all reviewers for the valuable suggestions wit...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "_bhoGwgl5FG", "nips_2022_dmCyoqxEwHf", "iCuxP45zsuc", "_bhoGwgl5FG", "nips_2022_dmCyoqxEwHf", "nips_2022_dmCyoqxEwHf", "iCuxP45zsuc", "NQC0pKGgWGw", "_bhoGwgl5FG", "n4naLMwC0HT", "nips_2022_dmCyoqxEwHf", "nips_2022_dmCyoqxEwHf", "nips_2022_dmCyoqxEwHf" ]
nips_2022_fHUBa3gQno
Improving Task-Specific Generalization in Few-Shot Learning via Adaptive Vicinal Risk Minimization
Recent years have witnessed the rapid development of meta-learning in improving the meta generalization over tasks in few-shot learning. However, the task-specific level generalization is overlooked in most algorithms. For a novel few-shot learning task where the empirical distribution likely deviates from the true distribution, the model obtained via minimizing the empirical loss can hardly generalize to unseen data. A viable solution to improving the generalization comes as a more accurate approximation of the true distribution; that is, admitting a Gaussian-like vicinal distribution for each of the limited training samples. Thereupon we derive the resulting vicinal loss function over vicinities of all training samples and minimize it instead of the conventional empirical loss over training samples only, favorably free from the exhaustive sampling of all vicinal samples. It remains challenging to obtain the statistical parameters of the vicinal distribution for each sample. To tackle this challenge, we further propose to estimate the statistical parameters as the weighted mean and variance of a set of unlabeled data visited by a random walk starting from the training samples. To verify the performance of the proposed method, we conduct experiments on four standard few-shot learning benchmarks and consolidate the superiority of the proposed method over state-of-the-art few-shot learning baselines.
Accept
In this paper, the authors study a few-shot learning setting where the training distribution deviates from the true distribution. To achieve a more accurate approximation of the true distribution, the authors propose assuming a Gaussian-like vicinal distribution around each training data point, which results in a vicinal loss. Extensive empirical results in the paper show that the proposed method improves over the baseline methods. While the vicinal loss has been used in other settings, this problem formulation and use of the vicinal loss is novel and the empirical advantage is significant. Therefore, I am recommending acceptance. Given reviewers' concerns about typos and presentation issues, I encourage the authors to make a few more passes over the paper and improve the writing and presentation.
train
[ "W-F8aJ7AdPh", "t84xwLxApNp", "j5cLwyippPY", "gONzKfDzsf-", "1guJIVdrNKD", "2jHpWnk33kC", "v2CUwi5TChq", "t6ZYC8uYMb", "sXlUVasBchW", "W0EFgUURBxn", "3KOP_E4_dty", "XC99y4zPiKzq", "2FKPpjZY3YV", "lFWDw_lo0r_", "I_4aQY7myNP", "tfFu1_6fukR", "kcK8sn5Pt6GC", "t5TRGib_LPB", "L2-A6Ebk...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ " First, we again post part II of the previous response here for your reference, in case that you have missed that part which is in the other post.\n\nSecond, if the following response still fails to address your concerns, we would very much appreciate if you elaborate the weakness of our experiments.\n\n**Q13: Exp...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "t84xwLxApNp", "gONzKfDzsf-", "L2-A6EbkRCV", "t6ZYC8uYMb", "t6ZYC8uYMb", "W0EFgUURBxn", "nips_2022_fHUBa3gQno", "1sei90iqGc", "QEGng-PHjpC", "lFWDw_lo0r_", "QEGng-PHjpC", "1sei90iqGc", "G2beXFko_sn", "G2beXFko_sn", "QEGng-PHjpC", "QEGng-PHjpC", "1sei90iqGc", "1sei90iqGc", "Qj2WXC...
nips_2022_hSxK-4KGLbI
Two-Stream Network for Sign Language Recognition and Translation
Sign languages are visual languages using manual articulations and non-manual elements to convey information. For sign language recognition and translation, the majority of existing approaches directly encode RGB videos into hidden representations. RGB videos, however, are raw signals with substantial visual redundancy, leading the encoder to overlook the key information for sign language understanding. To mitigate this problem and better incorporate domain knowledge, such as handshape and body movement, we introduce a dual visual encoder containing two separate streams to model both the raw videos and the keypoint sequences generated by an off-the-shelf keypoint estimator. To make the two streams interact with each other, we explore a variety of techniques, including bidirectional lateral connection, sign pyramid network with auxiliary supervision, and frame-level self-distillation. The resulting model is called TwoStream-SLR, which is competent for sign language recognition (SLR). TwoStream-SLR is extended to a sign language translation (SLT) model, TwoStream-SLT, by simply attaching an extra translation network. Experimentally, our TwoStream-SLR and TwoStream-SLT achieve state-of-the-art performance on SLR and SLT tasks across a series of datasets including Phoenix-2014, Phoenix-2014T, and CSL-Daily.
Accept
This paper extends models for sign language recognition and translation with a dual encoder where, first, keypoint sequences are estimated using an off-the-shelf model, then fused with the video sequence. It is a minor technical contribution to add the keypoint estimations as input since no new information was introduced; however, the authors demonstrated strong execution of experimental results. This paper can be categorized with pipeline/cascade approaches which rely on domain knowledge for engineered feature extraction and combination. The paper presents many experimental results for architecture changes to improve results: bidirectional lateral connection, sign pyramid network, and frame-level self-distillation. The authors convinced the reviewers with more experimental results during the rebuttal period leading to two solid, and one borderline accept votes.
train
[ "wiId_W7p6ig", "CSYiyPxddYc", "n6EEASGJFz8", "DY6J8WV676q", "VeWzAP4TXo", "94TX9W-RXeX", "ZDVqJqsQGLc", "FUPaH1sLtP9", "XpvHf8ZLxmf", "XHTUPkHeJs", "HonYeZFzRXx", "Oc5t4lxC-A" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I read the response and comments from other reviewers. The rebuttal has addressed all my concerns, as well as many points raised by other reviewers. This is a solid work with convincing experiments and studies. The newly added experiments to verity signer-independentrecognition and background change make the subm...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "XpvHf8ZLxmf", "HonYeZFzRXx", "HonYeZFzRXx", "HonYeZFzRXx", "HonYeZFzRXx", "XHTUPkHeJs", "XHTUPkHeJs", "XHTUPkHeJs", "Oc5t4lxC-A", "nips_2022_hSxK-4KGLbI", "nips_2022_hSxK-4KGLbI", "nips_2022_hSxK-4KGLbI" ]
nips_2022_GGi4igGZEB-
Characteristic Neural Ordinary Differential Equations
We propose Characteristic-Neural Ordinary Differential Equations (C-NODEs), a framework for extending Neural Ordinary Differential Equations (NODEs) beyond ODEs. While NODEs model the evolution of latent variables as the solution to an ODE, C-NODE models the evolution of the latent variables as the solution of a family of first-order quasi-linear partial differential equations (PDEs) along curves on which the PDEs reduce to ODEs, referred to as characteristic curves. This in turn allows the application of the standard frameworks for solving ODEs, namely the adjoint method. Learning optimal characteristic curves for given tasks improves the performance and computational efficiency, compared to state-of-the-art NODE models. We prove that the C-NODE framework extends the classical NODE on classification tasks by demonstrating explicit C-NODE representable functions not expressible by NODEs. Additionally, we present C-NODE-based continuous normalizing flows, which describe the density evolution of latent variables along multiple dimensions. Empirical results demonstrate the improvements provided by the proposed method for classification and density estimation on CIFAR-10, SVHN, and MNIST datasets under a similar computational budget as the existing NODE methods. The results also provide empirical evidence that the learned curves improve the efficiency of the system through a lower number of parameters and function evaluations compared with baselines.
Reject
This paper proposes to model the evolution of the latent variables along characteristic curves instead of via the original ODEs. The authors prove that the new method, C-NODE, is more expressive than the original NODE. Experiments are conducted on image classification tasks to demonstrate its effectiveness. It is a worthwhile exploration to leverage differential equation theory to improve NODE algorithms, and the insights from this exploration could help uncover breakthrough directions in operator learning. During the discussion phase, reviewers had rounds of debates about whether the method is demonstrated to be effective on standard tasks for NODEs. Although it is a high bar for exploration-style work to achieve SOTA results, we expect some insights from the investigation: for example, why the original NODE is not expressive enough in certain tasks, what factors in real tasks influence the expressiveness, and why the image classification task needs extra expressiveness. Simulations could also be included to show these insights in extreme cases.
train
[ "ZsALg8Va2k7", "IMdLb_77RiV", "VFTIdQnXwvy", "OEUkF7u4SEh", "JEKa7ccxv2uv", "sC79YfM_Dif", "prXvIt60A9q", "IEnHhF7UIY9", "MZvh1kGgZYo", "x213j0xqdXO", "ETJZyE1uWT4", "acm86c_GUX3" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer EacU\n\nWe apologize for any inconvenience that our message may cause in advance.\n\nAgain, we would like to thank you for the time you dedicated to reviewing our paper and for your valuable comments. We believe that we have addressed your concerns.\n\nSince the end of the discussion period is close...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "x213j0xqdXO", "x213j0xqdXO", "OEUkF7u4SEh", "IEnHhF7UIY9", "sC79YfM_Dif", "prXvIt60A9q", "acm86c_GUX3", "ETJZyE1uWT4", "x213j0xqdXO", "nips_2022_GGi4igGZEB-", "nips_2022_GGi4igGZEB-", "nips_2022_GGi4igGZEB-" ]
nips_2022_FQtku8rkp3
Pre-Trained Image Encoder for Generalizable Visual Reinforcement Learning
Learning generalizable policies that can adapt to unseen environments remains challenging in visual Reinforcement Learning (RL). Existing approaches try to acquire a robust representation via diversifying the appearances of in-domain observations for better generalization. Limited by the specific observations of the environment, these methods ignore the possibility of exploring diverse real-world image datasets. In this paper, we investigate how a visual RL agent would benefit from the off-the-shelf visual representations. Surprisingly, we find that the early layers in an ImageNet pre-trained ResNet model could provide rather generalizable representations for visual RL. Hence, we propose Pre-trained Image Encoder for Generalizable visual reinforcement learning (PIE-G), a simple yet effective framework that can generalize to the unseen visual scenarios in a zero-shot manner. Extensive experiments are conducted on DMControl Generalization Benchmark, DMControl Manipulation Tasks, Drawer World, and CARLA to verify the effectiveness of PIE-G. Empirical evidence suggests PIE-G improves sample efficiency and significantly outperforms previous state-of-the-art methods in terms of generalization performance. In particular, PIE-G boasts a 55% generalization performance gain on average in the challenging video background setting. Project Page: https://sites.google.com/view/pie-g/home.
Accept
This paper contains interesting findings on a research topic currently drawing a lot of interest from the community, i.e., RL with pretraining from large-scale general out-of-domain data. I think the use of low-level features and the batch-norm can be interesting to the community. As pointed out by reviewer UwhJ, however, I agree that the authors should moderate and clarify some claims in such a way as to acknowledge the fact that this line of research has already been studied recently in many works and thus it is not the first finding. I suggest updating the writing in the camera-ready version to focus on the specific contributions such as the low-level features and batch-norm. In particular, the contribution (1) in the last paragraph of the Introduction is wrong and thus should be changed, because it is already well known and was not first discovered in this paper.
train
[ "TSC1H7ofdx-", "ljbB5zTfJbJ", "B9ynDkTi07L", "W-ll74WETQe", "uo21N2xmao", "w70lo3TL7n", "U05iOyfsOx_", "bKb4rWGy8s", "ti4oo-Azay", "JJy4iFCt6Z", "cvJzfzyHFDB", "ySHR9FjkdO", "a6JdIjQ0dhh", "lHDksT-MJMo" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your efforts in reviewing our paper and your suggestions again.\n\nWe believe we have resolved all the concerns mentioned in the review. Please let us know if you have further concerns and we are more than happy to address them! Thank you very much !", " Thanks for the thorough respon...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "a6JdIjQ0dhh", "ti4oo-Azay", "lHDksT-MJMo", "a6JdIjQ0dhh", "ySHR9FjkdO", "a6JdIjQ0dhh", "ySHR9FjkdO", "nips_2022_FQtku8rkp3", "ySHR9FjkdO", "lHDksT-MJMo", "a6JdIjQ0dhh", "nips_2022_FQtku8rkp3", "nips_2022_FQtku8rkp3", "nips_2022_FQtku8rkp3" ]
nips_2022_q-tTkgjuiv5
Graphical Resource Allocation with Matching-Induced Utilities
Motivated by real-world applications, we study the fair allocation of graphical resources, where the resources are the vertices in a graph. Upon receiving a set of resources, an agent's utility equals the weight of the maximum matching in the induced subgraph. We care about maximin share (MMS) fairness and envy-freeness up to one item (EF1). Regarding MMS fairness, the problem does not admit a finite approximation ratio for heterogeneous agents. For homogeneous agents, we design constant-approximation polynomial-time algorithms, and also note that a significant amount of social welfare is inevitably sacrificed in order to ensure (approximate) MMS fairness. We then consider EF1 allocations whose existence is guaranteed. We show that for homogeneous agents, there is an EF1 allocation that ensures at least a constant fraction of the maximum possible social welfare. However, the social welfare guarantee of EF1 allocations degrades to $1/n$ for heterogeneous agents, where $n$ is the number of agents. Fortunately, for two special yet typical cases, namely binary-weight and two-agent, we are able to design polynomial-time algorithms ensuring a constant fraction of the maximum social welfare.
Reject
Reviewers agreed that the model is new and interesting and the theoretical results are solid. The main criticisms are about the model: some reviewers felt that it is too specific and there is not enough motivation. Some reviewers liked the technical depth while others felt that it is not enough to compensate for the lack of motivation. Overall, reviewers felt that the paper in its current form is not ready for publication at NeurIPS.
val
[ "QaEfojFbwD0", "MptEuRJWNkV", "h4qXVKVuMNq", "yVuM6w6C-LC", "fRofuS9NQ7c", "JnhexD57kPh", "QZm7Rt1sdGI", "DlfpV41Mdyz", "CPlFahNUQ3q", "NMLd8DxhEnT" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and revision of the paper!", " L143: Accordingly, people mostly $\\ldots$ $\\rightarrow$ Accordingly, researchers have sought $\\ldots$\n\nL145: The line claims that EF1 is widely accepted and studied. But no citations or references are provided. I suggest that this be motivated and d...
[ -1, -1, -1, -1, -1, -1, 5, 3, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "yVuM6w6C-LC", "h4qXVKVuMNq", "CPlFahNUQ3q", "NMLd8DxhEnT", "QZm7Rt1sdGI", "DlfpV41Mdyz", "nips_2022_q-tTkgjuiv5", "nips_2022_q-tTkgjuiv5", "nips_2022_q-tTkgjuiv5", "nips_2022_q-tTkgjuiv5" ]
nips_2022_2yvUYc-YNUH
Test Time Adaptation via Conjugate Pseudo-labels
Test-time adaptation (TTA) refers to adapting neural networks to distribution shifts, specifically with just access to unlabeled test samples from the new domain at test-time. Prior TTA methods optimize over unsupervised objectives such as the entropy of model predictions in TENT (Wang et al., 2021), but it is unclear what exactly makes a good TTA loss. In this paper, we start by presenting a surprising phenomenon: if we attempt to $\textit{meta-learn}$ the ``best'' possible TTA loss over a wide class of functions, then we recover a function that is $\textit{remarkably}$ similar to (a temperature-scaled version of) the softmax-entropy employed by TENT. This only holds, however, if the classifier we are adapting is trained via cross-entropy loss; if the classifier is trained via squared loss, a different ``best'' TTA loss emerges. To explain this phenomenon, we analyze test-time adaptation through the lens of the training loss's $\textit{convex conjugate}$. We show that under natural conditions, this (unsupervised) conjugate function can be viewed as a good local approximation to the original supervised loss and indeed, it recovers the ``best'' losses found by meta-learning. This leads to a generic recipe that can be used to find a good TTA loss for $\textit{any}$ given supervised training loss function of a general class. Empirically, our approach dominates other TTA alternatives over a wide range of domain adaptation benchmarks. Our approach is particularly of interest when applied to classifiers trained with $\textit{novel}$ loss functions, e.g., the recently-proposed PolyLoss (Leng et al., 2022) function, where it differs substantially from (and outperforms) an entropy-based loss. Further, we show that our conjugate-based approach can also be interpreted as a kind of self-training using a very specific soft label, which we refer to as the $\textit{conjugate pseudo-label}$. Overall, therefore, our method provides a broad framework for better understanding and improving test-time adaptation. Code is available at https://github.com/locuslab/tta_conjugate.
Accept
All reviewers agree this paper presents a novel and principled approach to test time adaptation losses. All reviewers find the paper clearly written and contributions meaningful. I suggest acceptance.
train
[ "whpPKH9MO4", "OfHOfisYexO", "CdoWUnJldN0", "r2XLsm8si2l", "zSI5xgZQKn2", "8hOnTCJwDCp", "HuEsDLBRg0X", "ktjUR66JVXF", "ZcX6kJOuMl2", "GThCu4NlqT", "raCnVBoxDC", "QyW0P-HWZSt", "0SJ_hZwUh9J", "TNOzB9HXvG4" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We will add a detailed discussion on the current limitations of our work and the interesting future directions (as also discussed in the response to reviewer JZZE) to the paper. \n\nWe again thank the reviewer for their detailed feedback and for kindly increasing the score after our response.", " Thanks for add...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "OfHOfisYexO", "ktjUR66JVXF", "HuEsDLBRg0X", "zSI5xgZQKn2", "8hOnTCJwDCp", "QyW0P-HWZSt", "0SJ_hZwUh9J", "ZcX6kJOuMl2", "TNOzB9HXvG4", "raCnVBoxDC", "nips_2022_2yvUYc-YNUH", "nips_2022_2yvUYc-YNUH", "nips_2022_2yvUYc-YNUH", "nips_2022_2yvUYc-YNUH" ]
nips_2022_MSBDFwGYwwt
TANKBind: Trigonometry-Aware Neural NetworKs for Drug-Protein Binding Structure Prediction
Illuminating interactions between proteins and small drug molecules is a long-standing challenge in the field of drug discovery. Despite the importance of understanding these interactions, most previous works are limited by hand-designed scoring functions and insufficient conformation sampling. The recently-proposed graph neural network-based methods provide alternatives to predict protein-ligand complex conformation in a one-shot manner. However, these methods neglect the geometric constraints of the complex structure and weaken the role of local functional regions. As a result, they might produce unreasonable conformations for challenging targets and generalize poorly to novel proteins. In this paper, we propose Trigonometry-Aware Neural networKs for binding structure prediction, TANKBind, which builds the trigonometry constraint into the model as a vigorous inductive bias and explicitly attends to all possible binding sites for each protein by segmenting the whole protein into functional blocks. We construct novel contrastive losses with local region negative sampling to jointly optimize the binding interaction and affinity. Extensive experiments show substantial performance gains in comparison to state-of-the-art physics-based and deep learning-based methods on commonly-used benchmark datasets for both binding structure and affinity predictions under various settings.
Accept
This is a borderline paper. All three reviewers liked the paper and appreciated the feedback from the authors. The numerical scores are 7, 5, and 5. The two 5s appear a bit on the low side given the comments. The paper tackles the difficult problem of protein-ligand docking with a geometrical GNN approach inspired in part by AlphaFold. This is an important problem and the paper presents a novel approach in a clear and convincing way. Acceptance is therefore recommended.
train
[ "nc-c12I299N", "ZYaMdfpboom", "268RSUf_xi9", "bA85oQTy4q3", "ghIBxakI6uZ", "A5haORLYM4B", "7xPVPUW9d5b", "GgqUUIBiah5", "-u8chF2HxmV", "UPeVGptF6fh", "HC9z9Xs4CZj", "B1IkRpfUwdS" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your constructive comments and encouragement. We are pleased that our responses have addressed your concerns. Please let us know if there is anything that could help improve your rating. Thank you!", " Thank the authors for the detailed responses and the revised supplementary. My concerns are address...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "ZYaMdfpboom", "GgqUUIBiah5", "ghIBxakI6uZ", "-u8chF2HxmV", "7xPVPUW9d5b", "7xPVPUW9d5b", "B1IkRpfUwdS", "HC9z9Xs4CZj", "UPeVGptF6fh", "nips_2022_MSBDFwGYwwt", "nips_2022_MSBDFwGYwwt", "nips_2022_MSBDFwGYwwt" ]
nips_2022_h1IHI5sV4UQ
Reconstruction on Trees and Low-Degree Polynomials
The study of Markov processes and broadcasting on trees has deep connections to a variety of areas including statistical physics, graphical models, phylogenetic reconstruction, Markov Chain Monte Carlo, and community detection in random graphs. Notably, the celebrated Belief Propagation (BP) algorithm achieves Bayes-optimal performance for the reconstruction problem of predicting the value of the Markov process at the root of the tree from its values at the leaves. Recently, the analysis of low-degree polynomials has emerged as a valuable tool for predicting computational-to-statistical gaps. In this work, we investigate the performance of low-degree polynomials for the reconstruction problem on trees. Perhaps surprisingly, we show that there are simple tree models with $N$ leaves and bounded arity where (1) nontrivial reconstruction of the root value is possible with a simple polynomial time algorithm and with robustness to noise, but not with any polynomial of degree $N^{c}$ for $c > 0$ a constant depending only on the arity, and (2) when the tree is unknown and given multiple samples with correlated root assignments, nontrivial reconstruction of the root value is possible with a simple Statistical Query algorithm but not with any polynomial of degree $N^c$. These results clarify some of the limitations of low-degree polynomials vs. polynomial time algorithms for Bayesian estimation problems. They also complement recent work of Moitra, Mossel, and Sandon who studied the circuit complexity of Belief Propagation. As a consequence of our main result, we are able to prove a result of independent interest regarding the performance of RBF kernel ridge regression for learning to predict the root coloration: for some $c' > 0$ depending only on the arity, $\exp(N^{c'})$ many samples are needed for the kernel regression to obtain nontrivial correlation with the true regression function (BP). We pose related open questions about low-degree polynomials and the Kesten-Stigum threshold.
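To make the contrast concrete, one simple polynomial-time estimator of the kind this abstract compares against low-degree polynomials is recursive majority, which propagates leaf values up a known tree by a plurality vote at each internal node. The sketch below is illustrative only; the tree encoding and vote rule are our assumptions, not the paper's exact algorithm.

```python
from collections import Counter

def recursive_majority(tree, leaf_values, node=0):
    """Estimate the root value of a broadcast process on a known tree by
    recursively taking a plurality vote over the children's estimates."""
    children = tree.get(node, [])
    if not children:                        # leaf: return its observed value
        return leaf_values[node]
    votes = [recursive_majority(tree, leaf_values, c) for c in children]
    return Counter(votes).most_common(1)[0][0]

# toy example: arity-3 tree of depth 2; the root value is recovered as 1
tree = {0: [1, 2, 3], 1: [4, 5, 6], 2: [7, 8, 9], 3: [10, 11, 12]}
leaves = {4: 1, 5: 1, 6: 0, 7: 0, 8: 0, 9: 1, 10: 1, 11: 1, 12: 1}
print(recursive_majority(tree, leaves))     # -> 1
```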
Accept
This paper studies the use of low-degree polynomials for analyzing statistical-computational gaps in high-dimensional inference problems and identifies average-case settings that exhibit such a gap. This is a nice paper and above the bar, though it may appeal only to a theoretical audience.
train
[ "vTJ4I8vcx2_", "QXeari9lRLu", "-Sog7wSLWIG", "cGpf3WOqVWa", "lK1RPfCR7sw", "krO3MHmJoE", "zSSiwCI-14j" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their feedback and answer questions below:\n\n> In Fig.1, is it a one-off run or are the results averaged over multiple samples ? It seems weird that the RecMaj algorithm is so inconsistent depending on \\lambda_2(M)\n\nGood question — the results for the RecMaj algorithm were averaged o...
[ -1, -1, -1, 7, 8, 5, 7 ]
[ -1, -1, -1, 3, 3, 1, 4 ]
[ "zSSiwCI-14j", "krO3MHmJoE", "cGpf3WOqVWa", "nips_2022_h1IHI5sV4UQ", "nips_2022_h1IHI5sV4UQ", "nips_2022_h1IHI5sV4UQ", "nips_2022_h1IHI5sV4UQ" ]
nips_2022_R7qthqYx3V1
Discovering Design Concepts for CAD Sketches
Sketch design concepts are recurring patterns found in parametric CAD sketches. Though rarely explicitly formalized by CAD designers, these concepts are implicitly used in design for modularity and regularity. In this paper, we propose a learning-based approach that discovers modular concepts by induction over raw sketches. We propose a dual implicit-explicit representation of concept structures that allows implicit detection and explicit generation, and a separation of structure generation and parameter instantiation for parameterized concept generation, enabling modular concepts to be learned by end-to-end training. We demonstrate design concept learning on a large-scale CAD sketch dataset and show its applications to design intent interpretation and auto-completion.
Accept
As summarized by reviewer 5G2f, this paper proposes a novel learning-based approach to discover modular concepts (i.e., modular structure) from raw CAD sketches. To tackle the problem, the authors first define a domain-specific language (DSL) such that modular concepts can be represented in a network-friendly manner. A Transformer-based detection module takes in a CAD sketch sequence and outputs a set of latent embeddings, which are further decoded to parameterized modular concepts by a generation module. The whole model is trained in an end-to-end self-supervised manner, using a reconstruction loss plus regularization terms. The authors perform experiments on a large-scale CAD sketch dataset and mainly demonstrate its applications for design intent interpretation (i.e., parsing modular concepts from a raw CAD sketch) and auto-completion (i.e., completing a partial CAD sketch). All reviewers recognize the novelty and contribution of this work, and the reviewer-author discussion was quite fruitful, as many points, ranging from designer/user interaction and comparison to baseline methods to issues with the library size, were discussed and addressed. With such a clear contribution and applicability to the CAD domain, I highly recommend the acceptance of this work.
train
[ "DNjsQEtz3pa", "6CMalp0hzfV", "fwGk4PVmMOq", "YJIpcHAjWKH", "V1IkzoIVY9", "2tLJG1GhMZ0", "BCDckhqesQ9", "toACMtioLL", "zgLl2uUZxKO", "7YEnpZ_wdKL", "AWefkvOr9Zw", "9KyYZKpahHF" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their efforts in responding to reviewer comments.\n\nThe new submission reads better than the original with more explanations. I appreciate the new histograms for concept complexity. They do provide some new insights into the model.\n\nI already had a high score for the paper, so I won't c...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "2tLJG1GhMZ0", "fwGk4PVmMOq", "9KyYZKpahHF", "AWefkvOr9Zw", "7YEnpZ_wdKL", "zgLl2uUZxKO", "zgLl2uUZxKO", "nips_2022_R7qthqYx3V1", "nips_2022_R7qthqYx3V1", "nips_2022_R7qthqYx3V1", "nips_2022_R7qthqYx3V1", "nips_2022_R7qthqYx3V1" ]
nips_2022_ITqTRTJ-nAg
HyperMiner: Topic Taxonomy Mining with Hyperbolic Embedding
Embedded topic models are able to learn interpretable topics even with large and heavy-tailed vocabularies. However, they generally rely on the Euclidean embedding space assumption, leading to a basic limitation in capturing hierarchical relations. To this end, we present a novel framework that introduces hyperbolic embeddings to represent words and topics. With the tree-likeness property of hyperbolic space, the underlying semantic hierarchy among words and topics can be better exploited to mine more interpretable topics. Furthermore, owing to the superiority of hyperbolic geometry in representing hierarchical data, tree-structured knowledge can also be naturally injected to guide the learning of a topic hierarchy. We therefore further develop a regularization term based on the idea of contrastive learning to inject prior structural knowledge efficiently. Experiments on both topic taxonomy discovery and document representation demonstrate that the proposed framework achieves improved performance over existing embedded topic models.
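For reference, the tree-likeness appealed to above is usually quantified through the hyperbolic metric itself. In the Poincaré ball model (a common choice; the abstract does not state which model is used), the distance between embeddings $\mathbf{u}$ and $\mathbf{v}$ is

$$ d_{\mathbb{B}}(\mathbf{u},\mathbf{v}) = \operatorname{arcosh}\left(1 + 2\,\frac{\lVert \mathbf{u}-\mathbf{v}\rVert^2}{(1-\lVert \mathbf{u}\rVert^2)(1-\lVert \mathbf{v}\rVert^2)}\right), $$

which grows rapidly near the boundary of the ball. This lets children of a node be placed far from each other yet close to their parent, which is why hierarchies embed with low distortion in such spaces.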
Accept
Hyperbolic embeddings were a fascinating alternative to Euclidean embeddings that never seemed to take off, despite having significant conceptual advantages in representing the oddities of semantics. I am happy to see more work on curved spaces as a tool for semantic analysis! This work has strong reviews, and reviewers were generally happy with the author responses. I'd like to see it published.
train
[ "k98ljqSD2UE", "xcGABl-COv", "HxqTc5I8xSb", "l6y4rrmFyDD", "rpJJWCv0LkS", "dY2NVL2AS24", "p5H_zxNobIj", "_UAM9yI8Asb" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' answers to my questions, and I agree with those, and hope that can improve the paper a bit more. The discussion on the limitation is on point, and I thought the same about antonymy and meronymy, so a caveat would be this might need to be used together with another embedding if those are ...
[ -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "l6y4rrmFyDD", "HxqTc5I8xSb", "_UAM9yI8Asb", "p5H_zxNobIj", "dY2NVL2AS24", "nips_2022_ITqTRTJ-nAg", "nips_2022_ITqTRTJ-nAg", "nips_2022_ITqTRTJ-nAg" ]
nips_2022_Lvlxq_H96lI
Learning Manifold Dimensions with Conditional Variational Autoencoders
Although the variational autoencoder (VAE) and its conditional extension (CVAE) are capable of state-of-the-art results across multiple domains, their precise behavior is still not fully understood, particularly in the context of data (like images) that lie on or near a low-dimensional manifold. For example, while prior work has suggested that the globally optimal VAE solution can learn the correct manifold dimension, a necessary (but not sufficient) condition for producing samples from the true data distribution, this has never been rigorously proven. Moreover, it remains unclear how such considerations would change when various types of conditioning variables are introduced, or when the data support is extended to a union of manifolds (e.g., as is likely the case for MNIST digits and related). In this work, we address these points by first proving that VAE global minima are indeed capable of recovering the correct manifold dimension. We then extend this result to more general CVAEs, demonstrating practical scenarios whereby the conditioning variables allow the model to adaptively learn manifolds of varying dimension across samples. Our analyses, which have practical implications for various CVAE design choices, are also supported by numerical results on both synthetic and real-world datasets.
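A common practical proxy for the number of latent dimensions a (C)VAE actually uses is to count the coordinates whose posterior mean still varies across the data; the sketch below uses that criterion with an illustrative threshold, which may differ from the paper's own measure of active dimensions.

```python
import torch

def count_active_dims(encoder, data_loader, eps=1e-2):
    """Count latent dimensions whose posterior mean varies across the dataset,
    i.e. dimensions that still carry information about the input.
    Assumes encoder(x) returns (mean, log_variance)."""
    means = []
    with torch.no_grad():
        for x, _ in data_loader:
            mu, _logvar = encoder(x)
            means.append(mu)
    means = torch.cat(means, dim=0)
    return int((means.var(dim=0) > eps).sum())
```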
Accept
**Summary**: This paper studies the behavior of variational auto-encoders (VAEs) and conditional VAEs (CVAEs) when trained on data that embeds a low-dimensional manifold into a higher-dimensional space. The authors demonstrate that VAEs are able to learn the intrinsic manifold dimension of the data at optimality. They also show that a similar result exists for CVAEs, and that effective conditioning should reduce the loss function at optimality. The paper then examines some common design choices in VAEs and CVAEs, observing that conditioned and unconditioned priors are theoretically equivalent, that learning the decoder variance should result in better performance, and that a common weight-sharing technique in autoregressive models should be avoided. The experiments section directly addresses their claims, particularly on the synthetic dataset with known ground-truth intrinsic manifold dimension. **Strengths**: Reviewers were in agreement that this is fairly original work that yields several interesting and important insights about the ability of VAEs to estimate the intrinsic dimension of a data manifold [R6JP,ZAHe,pxuR]. Reviewer [ZAHe] sees Theorem 1 as a major contribution; it demonstrates that well-optimized VAEs can estimate the intrinsic manifold dimension, under the assumption that we have a reliable method to estimate the number of active latent dimensions. The reviewer also notes that Theorem 2 part (i) is sensible and intuitive, in that adding good conditioning variables can reduce the loss, and that Theorem 3 is also an interesting result that appears to shed light on an issue with a common practical technique. Moreover, reviewer [ZAHe] notes that the empirical results are strong, that the introduction and motivations are very clear, and that the overall structure is easy to follow. **Weaknesses**: The main criticisms from reviewers focused on numerous issues with clarity, with many examples given by each reviewer [R6JP,pxuR]. Reviewer [R6JP] notes that the paper would be easier to follow if it included some form of visualization on a toy problem and some qualitative experiments, and that there are several aspects of the writing that could be improved. The introduction takes too long to explain what the contribution is. Captions could be more informative (examples given). Reviewer [ZAHe] finds that while the theoretical results link the intrinsic dimension of the data to activations in the latent space, there is no corresponding result that links activations in the latent space to the dimensionality of the generated data. This is true in both Theorem 1 and in Theorem 2 / Corollary 2.1. This reviewer also notes a large number of minor issues and/or addressable weaknesses (19 examples given). **Author Reviewer Discussion**: The authors provided clarifications on many points to reviewer [R6JP] and have updated the manuscript accordingly. In response to reviewer [ZAHe], they clarified how the dimensionality of the generator manifold follows from latent activations, fixed the error in Corollary 2.1 pointed out by the reviewer, and provided numerous responses to other questions and comments. Reviewer [pxuR] comments that while the contribution relative to (Dai & Wipf, ICML 2019) regarding active dimensions is a little oversold, the paper makes other contributions. Reviewer [R6JP] updated their score 5->6. After extensive discussion, reviewers [ZAHe] and [pxuR] also updated their scores 5->6 and 4->5. **Reviewer AC Discussion**: Reviewers were in consensus that the author responses had improved the paper.
All reviewers indicate that they consider this paper above the bar for acceptance, but they do think that this paper is somewhat borderline and could also be rejected. **Overall Recommendation**: The AC is of the opinion that the evaluation and discussion that has taken place for this paper is sufficiently thorough, and will follow the recommendations of the reviewers. This is a paper that is just above the bar for acceptance, but may also need to be rejected to make room for other papers.
train
[ "6ALF0EWxPJg", "q9N7qz4xiB0", "UqOW-2cG_Q6", "QpiNn56H5rj", "MCmbWia9lG1", "QQwswo4wtB", "pqFMFx2OCoyz", "qjj8Wk0f-he", "YoKN5Oc89qD", "IwIvHXuEUbs", "VDrJiWC5PZe", "ekGIh5H9vta", "uRcCqQaiJs", "UTYDEsYtWpm", "bNFFIb3L0iY", "1BAbEyvjfYN", "FCZg4spwdDb", "BDpk3S0cezf", "JYa-FgO2UD...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_re...
[ " Thanks for your continued engagement with our paper. And per the reviewer's suggestion, we can certainly update the paper to include all the discussed changes, noting that NeurIPS allows for an additional (10th) page to address reviewer comments in the final version.", " Thanks for your continued engagement wit...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "MCmbWia9lG1", "QQwswo4wtB", "QpiNn56H5rj", "1BAbEyvjfYN", "pqFMFx2OCoyz", "YoKN5Oc89qD", "ekGIh5H9vta", "IwIvHXuEUbs", "VDrJiWC5PZe", "UTYDEsYtWpm", "bNFFIb3L0iY", "uRcCqQaiJs", "JYa-FgO2UDh", "BDpk3S0cezf", "BDpk3S0cezf", "FCZg4spwdDb", "nips_2022_Lvlxq_H96lI", "nips_2022_Lvlxq_H...
nips_2022_Sw_zDFDTr4
APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction
In many web applications, deep learning-based CTR prediction models (deep CTR models for short) are widely adopted. Traditional deep CTR models learn patterns in a static manner, i.e., the network parameters are the same across all instances. However, such a manner can hardly characterize each instance, which may follow a different underlying distribution. This limits the representation power of deep CTR models, leading to sub-optimal results. In this paper, we propose an efficient, effective, and universal module, named Adaptive Parameter Generation network (APG), which can dynamically generate parameters for deep CTR models on the fly based on different instances. Extensive experimental results show that APG can be applied to a variety of deep CTR models and significantly improve their performance. Meanwhile, APG can reduce the time cost by 38.7\% and memory usage by 96.6\% compared to a regular deep CTR model. We have deployed APG in an industrial sponsored search system and achieved 3\% CTR gain and 1\% RPM gain, respectively.
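To illustrate the kind of decomposition the abstract alludes to, the sketch below generates only a small per-instance core matrix from a condition embedding while keeping two larger factor matrices shared; the exact decomposition and generation network in APG may differ, so treat this as an assumed, simplified variant.

```python
import torch
import torch.nn as nn

class AdaptiveLinear(nn.Module):
    """Instance-adaptive linear layer: the effective weight is U @ core(z) @ V,
    where U and V are shared and only the small core is generated per instance."""
    def __init__(self, d_in, d_out, d_z, rank=8):
        super().__init__()
        self.rank = rank
        self.U = nn.Parameter(torch.randn(d_out, rank) * 0.02)   # shared factor
        self.V = nn.Parameter(torch.randn(rank, d_in) * 0.02)    # shared factor
        self.core_gen = nn.Linear(d_z, rank * rank)              # instance-specific core

    def forward(self, x, z):
        # x: (batch, d_in) instance features, z: (batch, d_z) condition embedding
        core = self.core_gen(z).view(x.size(0), self.rank, self.rank)
        h = torch.einsum('bi,ri->br', x, self.V)       # compress the input
        h = torch.einsum('br,brs->bs', h, core)        # per-instance mixing
        return torch.einsum('bs,os->bo', h, self.U)    # expand to the output
```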
Accept
The paper focuses on the application of click-through rate (CTR) prediction and proposes input-aware model parameters that are dynamically generated in order to boost the representation power of deep CTR prediction models. To reduce time and memory complexity, the method decomposes the parameters and dynamically generates only part of the decomposed parameters. Improved results are shown on three public datasets and in A/B testing on an industrial system, as claimed. Overall, this is a nice application-focused work that applies the widely studied idea of parameter generation and decomposition to the new problem of CTR prediction.
train
[ "KpguolYzbB0", "qY4KA8Vrod", "ARoG9h9uSHp", "nULeKTP6QGV", "FgW6ok6RD9A", "FKJzQhgnkQ", "4AFod-XUlel", "uecTo0qKW8sI", "-mbAGqRnl5j", "8XEei-0D_pP", "72gezee88WT", "8YuC0fIwpl3", "_v42aLEn2S", "PxO8VzNqYr", "gWqCpdDspSP" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks and I don't have further questions.", " Thanks to the author for the detailed reply. Most of my concerns have been addressed. I keep my original rating.", " Thanks for your valuable comments. Please allow us to further address your concerns below. \n\n**1、[Novelty]**\n\nAs mentioned in points 2 and 3 i...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "-mbAGqRnl5j", "8XEei-0D_pP", "FKJzQhgnkQ", "FKJzQhgnkQ", "FKJzQhgnkQ", "uecTo0qKW8sI", "gWqCpdDspSP", "gWqCpdDspSP", "PxO8VzNqYr", "8YuC0fIwpl3", "_v42aLEn2S", "nips_2022_Sw_zDFDTr4", "nips_2022_Sw_zDFDTr4", "nips_2022_Sw_zDFDTr4", "nips_2022_Sw_zDFDTr4" ]
nips_2022_F8UV5CItyRG
On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias
We study the dynamics and implicit bias of gradient flow (GF) on univariate ReLU neural networks with a single hidden layer in a binary classification setting. We show that when the labels are determined by the sign of a target network with $r$ neurons, with high probability over the initialization of the network and the sampling of the dataset, GF converges in direction (suitably defined) to a network achieving perfect training accuracy and having at most $\mathcal{O}(r)$ linear regions, implying a generalization bound. Unlike many other results in the literature, under an additional assumption on the distribution of the data, our result holds even for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
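For context on the quantity being bounded, a univariate one-hidden-layer ReLU network is piecewise linear with kinks at the points where individual neurons switch on; the sketch below counts those breakpoints directly (it counts potential regions rather than the paper's notion of effective regions of the learned classifier).

```python
import numpy as np

def breakpoints(w, b):
    """Kinks of f(x) = sum_j v_j * relu(w_j * x + b_j): each neuron with w_j != 0
    contributes one breakpoint at x = -b_j / w_j."""
    w, b = np.asarray(w, dtype=float), np.asarray(b, dtype=float)
    active = w != 0
    return np.unique(-b[active] / w[active])

def num_linear_regions(w, b):
    # a univariate piecewise-linear function with k distinct kinks has k + 1 regions
    return breakpoints(w, b).size + 1

print(num_linear_regions(w=[1.0, -2.0, 0.5], b=[0.0, 1.0, -1.5]))  # -> 4
```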
Accept
Overall: The paper focuses on an end-to-end learning guarantee for gradient flow on shallow univariate neural networks in a binary classification setting. Reviews: The paper received four reviews: two strong accepts (confident and fairly confident), one accept (confident), and one borderline accept (fairly confident). It seems that there are at least three reviewers who will champion the paper for publication. The reviewers found the paper clear, with a clean presentation, and the findings interesting. The authors have provided extensive answers to the reviewers' comments, answering most of them successfully. After rebuttal: A subset of the reviewers reached a consensus that the paper should be accepted. Confidence of reviews: Overall, the reviewers are confident. We will put more weight on the reviews that engaged in the rebuttal discussion period.
train
[ "_r_J2cXwhy", "bMLdBT2KL3w", "1pdw8prR8aG", "Ju6rLXzKzRM", "AoZAALIASEG", "75GcvPS-ihv", "fDorkePMLi-", "iLeOXmHahX", "38AsjP-_mqZ", "PqiyswGS7p", "kTOkr8zQE36" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their response and clarifications to my questions. As this resolves my questions to the authors, I keep my score unchanged.", " Thank you for your response.", " Thanks for the response!", " Thank you for the positive feedback and support.\n\n\n1. “The results of this paper only apply...
[ -1, -1, -1, -1, -1, -1, -1, 5, 8, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "75GcvPS-ihv", "AoZAALIASEG", "Ju6rLXzKzRM", "kTOkr8zQE36", "PqiyswGS7p", "38AsjP-_mqZ", "iLeOXmHahX", "nips_2022_F8UV5CItyRG", "nips_2022_F8UV5CItyRG", "nips_2022_F8UV5CItyRG", "nips_2022_F8UV5CItyRG" ]
nips_2022_WHFgQLRdKf9
DNA: Proximal Policy Optimization with a Dual Network Architecture
This paper explores the problem of simultaneously learning a value function and policy in deep actor-critic reinforcement learning models. We find that the common practice of learning these functions jointly is sub-optimal due to an order-of-magnitude difference in noise levels between the two tasks. Instead, we show that learning these tasks independently, but with a constrained distillation phase, significantly improves performance. Furthermore, we find that policy gradient noise levels decrease when using a lower-\textit{variance} return estimate, whereas value learning noise levels decrease with a lower-\textit{bias} estimate. Together these insights inform an extension to Proximal Policy Optimization we call the \textit{Dual Network Architecture} (DNA), which significantly outperforms its predecessor. DNA also exceeds the performance of the popular Rainbow DQN algorithm on four of the five environments tested, even under more difficult stochastic control settings.
Accept
The reviewers found this to be a well-executed technical contribution, and all reviewers agree it meets the bar for acceptance. While this paper does not seem to provide a breakthrough novel insight, it does contribute useful information for the field, and I believe sharing with the community is beneficial. I recommend accepting this paper.
test
[ "njXheTdfOuN", "qa-y8k4_kQO", "SATyGHnbux4", "K5wsjC4l3w", "9lYBki-B37", "eEC4T3t3L4c", "W1xL2YV0Hk", "C0Qo8kb4AJ2", "55fSq-JlTBg", "5WZG58YkovP", "s_IRiwZ13w4", "RSVgUSKSYvK", "knm5U764qDf", "AxIeU3lsGyb", "Al6MZAjkI-2" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review and feedback. We have amended the language in the paper to clarify that our results show DNA outperforming PPG specifically on the Atari-5 benchmark rather than the previous (broader and erroneous) claim that it outperforms it in general. This change will appear in the camera-ready versi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "SATyGHnbux4", "K5wsjC4l3w", "s_IRiwZ13w4", "eEC4T3t3L4c", "RSVgUSKSYvK", "5WZG58YkovP", "55fSq-JlTBg", "55fSq-JlTBg", "nips_2022_WHFgQLRdKf9", "Al6MZAjkI-2", "AxIeU3lsGyb", "knm5U764qDf", "nips_2022_WHFgQLRdKf9", "nips_2022_WHFgQLRdKf9", "nips_2022_WHFgQLRdKf9" ]
nips_2022_hd5KRowT3oB
Self-Organized Group for Cooperative Multi-agent Reinforcement Learning
Centralized training with decentralized execution (CTDE) has achieved great success in cooperative multi-agent reinforcement learning (MARL) in practical applications. However, CTDE-based methods typically suffer from poor zero-shot generalization ability under dynamic team composition and varying partial observability. To tackle these issues, we propose a spontaneous grouping mechanism, termed Self-Organized Group (SOG), which features conductor election (CE) and message summary (MS). In CE, a certain number of conductors are elected every $T$ time-steps to temporarily construct groups, each with a conductor-follower consensus in which the followers are constrained to communicate only with their conductor. In MS, each conductor summarizes and distributes the received messages to all affiliated group members to maintain unified scheduling. SOG provides zero-shot generalization ability to a dynamic number of agents and varying partial observability. Extensive experiments on mainstream multi-agent benchmarks demonstrate the superiority of SOG.
Accept
The reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field. They generally expressed positive views on the proposed method but also pointed out a few possible limitations of this paper. One reviewer argued that the authors properly outlined the downsides of alternative approaches and the importance of communication as a way of dealing with partial observability. This reviewer, however, brought up one limitation: that the conductor selection may benefit from/require prior information. After reading the rebuttal, however, this reviewer said that the authors satisfactorily answered most of their questions, and further argued that the insights from this paper will most likely be interesting to the emergent communication community. Another reviewer claimed that this paper introduced a novel group-based communication scheme for MARL. They argued that the ideas explored here are intuitive and well-motivated. This reviewer initially believed that some of the experimental results were inconclusive (e.g., regarding the claims that RL-based selection of conductors improves performance w.r.t. random selection). The reviewer also commented, in their original review, on the possible lack of novelty: "out of four components that compose this method, two are novel, one may follow from [4], and one may follow from [A]". After carefully analyzing the authors' rebuttal, however, this reviewer increased their score: they believe that the authors' detailed rebuttal helped clarify minor concerns and that the reviewer's initially voiced major doubts (e.g., regarding the significance of this work) were mostly rectified. Overall, this reviewer believes (post-rebuttal) that this is indeed a novel and interesting paper introducing new ideas toward communication in MARL, all of which were well executed and appropriately studied. Another reviewer expressed concerns that the paper did not discuss important prior work [1-3], but was satisfied with the authors' responses and thanked them for adding more baselines as part of the experiments. Finally, one reviewer argued that even though this is an interesting method, they still had (pre-rebuttal) three main points of concern. After reading the authors' rebuttal, this reviewer said that "the authors gave detailed responses to address my concerns and I appreciate the additional experiments". Overall, thus, it is clear that all reviewers were positively impressed with the quality of this work and look forward to an updated version of the paper that addresses the suggestions mentioned in their reviews and during the discussion phase.
train
[ "Gw8Io6XHZ9H", "4FspeBs-L10", "4KwzlWXNp3d", "Ak9dJfsZswA", "Q1MLcLKgn0P", "MHcP7B4tEw", "1j7PCMvA1LHI", "BRpW2ueYOEv", "HBOGb5xOkK1", "eF5radKObCG", "d0nwoK4hbLY", "uk-w6pBHPG8", "Fo8ZepA2bCe", "mBPLnQ4Vqgo", "He2V3in1Cnq" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " <eom>", " Thank you for your revision and comment! We have included NCC and Gated-ACML in the experiment discussions, as well as the above mentioned paper in the related works.", " Thank you for your comment. We have included MAGIC and HetNet in the related works, and given a guidance of their experiment res...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "4KwzlWXNp3d", "Ak9dJfsZswA", "1j7PCMvA1LHI", "BRpW2ueYOEv", "MHcP7B4tEw", "eF5radKObCG", "HBOGb5xOkK1", "He2V3in1Cnq", "mBPLnQ4Vqgo", "Fo8ZepA2bCe", "uk-w6pBHPG8", "nips_2022_hd5KRowT3oB", "nips_2022_hd5KRowT3oB", "nips_2022_hd5KRowT3oB", "nips_2022_hd5KRowT3oB" ]
nips_2022_hMGSz9PNQes
MaskTune: Mitigating Spurious Correlations by Forcing to Explore
A fundamental challenge of over-parameterized deep learning models is learning meaningful data representations that yield good performance on a downstream task without over-fitting to spurious input features. This work proposes MaskTune, a masking strategy that prevents over-reliance on spurious (or a limited number of) features. MaskTune forces the trained model to explore new features during a single epoch of finetuning by masking previously discovered features. Unlike earlier approaches for mitigating shortcut learning, MaskTune does not require any supervision, such as annotating spurious features or labels for subgroup samples in a dataset. Our empirical results on biased MNIST, CelebA, Waterbirds, and ImageNet-9L datasets show that MaskTune is effective on tasks that often suffer from the existence of spurious correlations. Finally, we show that MaskTune outperforms or achieves similar performance to the competing methods when applied to the selective classification (classification with rejection option) task. Code for MaskTune is available at https://github.com/aliasgharkhani/Masktune.
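The masking step described above requires locating the input regions the initially trained model relies on. The sketch below uses a plain input-gradient saliency map and a quantile threshold as stand-ins; the attribution method and threshold actually used by MaskTune may differ, so both are assumptions here.

```python
import torch
import torch.nn.functional as F

def saliency_mask(model, x, y, keep_fraction=0.8):
    """Zero out the most salient input pixels so that a subsequent one-epoch
    finetuning pass is pushed toward previously unused features."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    saliency = x.grad.abs().sum(dim=1, keepdim=True)            # (B, 1, H, W)
    thresh = torch.quantile(saliency.flatten(1), keep_fraction, dim=1)
    mask = (saliency <= thresh.view(-1, 1, 1, 1)).float()       # 0 where most salient
    return x.detach() * mask
```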
Accept
The paper considers the problem of NN models' reliance on spurious correlations and proposes MaskTune, a method to alleviate this. MaskTune is a masking strategy that prevents over-reliance on spurious correlations: masking forces the model to reduce its reliance on spurious features and to learn new features from the masked examples. The empirical results show that MaskTune improves model performance on multiple datasets and has applications to the task of selective classification. The paper focuses on a very important current problem with DNN models, the proposed idea works on a number of benchmarks, and the paper is well written. A number of questions raised by the reviewers, along with requests for additional experiments, were addressed by the authors during the rebuttal period. I therefore vote for acceptance and ask the authors to update the paper accordingly for the camera-ready version.
train
[ "COH2OrZbHp", "Jo3dEKv5okk", "TaPy2AgnX8L", "rDTvn76szzvV", "OAHRQmygdIj", "q0fy5uGEYc3", "4oGdQ8-P8ZZ", "2POs4tIptwA", "u3mjW_Q9QIH", "FOUVKiMGol0" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response, I continue to recommend acceptance. ", " - *What happens in the synthetic MNIST experiment when there are two spurious correlations. E.g., two squares instead of one?* \n\nPlease see the general response section\n\n- *Why only fine-tune for one epoch? What happens when you do more?*\n\n...
[ -1, -1, -1, -1, -1, -1, 7, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "Jo3dEKv5okk", "FOUVKiMGol0", "u3mjW_Q9QIH", "2POs4tIptwA", "4oGdQ8-P8ZZ", "nips_2022_hMGSz9PNQes", "nips_2022_hMGSz9PNQes", "nips_2022_hMGSz9PNQes", "nips_2022_hMGSz9PNQes", "nips_2022_hMGSz9PNQes" ]
nips_2022_iAktFMVfeff
House of Cans: Covert Transmission of Internal Datasets via Capacity-Aware Neuron Steganography
In this paper, we present a capacity-aware neuron steganography scheme (i.e., Cans) to covertly transmit multiple private machine learning (ML) datasets via a scheduled-to-publish deep neural network (DNN) as the carrier model. Unlike existing steganography schemes, which treat the DNN parameters as bit strings, Cans for the first time exploits the learning capacity of the carrier model via a novel parameter-sharing mechanism. Extensive evaluation shows that Cans is the first working scheme which can covertly transmit over $10000$ real-world data samples within a carrier model that has $220\times$ fewer parameters than the total size of the stolen data, and simultaneously transmit multiple heterogeneous datasets within a single carrier model, under a trivial distortion rate ($<10^{-5}$) and with almost no utility loss on the carrier model ($<1\%$). Besides, Cans implements by-design redundancy to be resilient against common post-processing techniques applied to the carrier model before publishing.
Accept
The reviewers are generally positive, with Reviewer Lxtr being especially enthusiastic about the work. Both Reviewers Lxtr and 5E9b believe the work of encrypting and transmitting secret data within a DNN is novel and that the work is well executed, with thorough experimental results and technical details. The reviewers raised several questions about the related work and baselines, as well as algorithmic clarity. The authors should further address these points by incorporating additional discussions and results from the rebuttal into the revision.
train
[ "5cZMLUlDy0c", "cO5ibES8Zxm", "F4lv_Jk8M4", "0jFiDYh8jwN", "D2z4ZMMSjCW", "eas_tQVjUQi" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nThank you for the encouragement on our work. Below, we reply one by one to the comments.\n\n**W1.** ***The paper is missing an integration of the main algorithmic steps (Fill, Propagate, Decode) with the overarching flow diagram in Fig 1 which creates a gap in the presentation.***\n\n**Re:** We will refine Fig....
[ -1, -1, -1, 5, 5, 8 ]
[ -1, -1, -1, 2, 3, 4 ]
[ "eas_tQVjUQi", "0jFiDYh8jwN", "D2z4ZMMSjCW", "nips_2022_iAktFMVfeff", "nips_2022_iAktFMVfeff", "nips_2022_iAktFMVfeff" ]
nips_2022_0SVOleKNRAU
Mirror Descent Maximizes Generalized Margin and Can Be Implemented Efficiently
Driven by the empirical success and wide use of deep neural networks, understanding the generalization performance of overparameterized models has become an increasingly popular question. To this end, there has been substantial effort to characterize the implicit bias of the optimization algorithms used, such as gradient descent (GD), and the structural properties of their preferred solutions. This paper answers an open question in this literature: For the classification setting, what solution does mirror descent (MD) converge to? Specifically, motivated by its efficient implementation, we consider the family of mirror descent algorithms with potential function chosen as the $p$-th power of the $\ell_p$-norm, which is an important generalization of GD. We call this algorithm $p$-$\textsf{GD}$. For this family, we characterize the solutions it obtains and show that it converges in direction to a generalized maximum-margin solution with respect to the $\ell_p$-norm for linearly separable classification. While the MD update rule is in general expensive to compute and not suitable for deep learning, $p$-$\textsf{GD}$ is fully parallelizable in the same manner as SGD and can be used to train deep neural networks with virtually no additional computational overhead. Using comprehensive experiments with both linear and deep neural network models, we demonstrate that $p$-$\textsf{GD}$ can noticeably affect the structure and the generalization performance of the learned models.
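The appeal of $p$-GD noted above is that the mirror map of the potential $\psi(w)=\tfrac{1}{p}\lVert w\rVert_p^p$ acts coordinate-wise, so the update can be applied entrywise just like (S)GD. A minimal full-batch sketch of one step:

```python
import numpy as np

def p_gd_step(w, grad, lr, p=3.0):
    """One mirror-descent step with potential psi(w) = (1/p) * ||w||_p^p.
    The mirror map z = |w|^(p-1) * sign(w) is entrywise, so the whole update
    parallelizes across coordinates."""
    z = np.abs(w) ** (p - 1) * np.sign(w)                 # map to the dual space
    z = z - lr * grad                                     # gradient step in dual space
    return np.abs(z) ** (1.0 / (p - 1)) * np.sign(z)      # map back to the primal space
```

Setting $p = 2$ makes the mirror map the identity and recovers ordinary gradient descent.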
Accept
This paper studies mirror descent in the classification setting with exponential and logistic losses. The reviewers agreed that the problem is important, and the paper is clear and well written.
train
[ "xELAewaRd6", "0BYvWvCZA-", "9rlyzxqsxHv", "pBFAuX2w-F-", "_vRiw-Ero_", "ZtFoRhhV4qL", "jDqygQUjlk5", "vNbcF0mvbcR", "Af58mAVKfF", "tUUAjBY9IZS", "8IGO3iPzdo", "vJp4SeRPOPr", "-YG12kyj4hC", "jtHM-8KbZyH", "rOt8CnU0Klq", "O9DXIixpyAM", "mxq_CkMBun", "ldCobU8Rpm" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Once again, we thank all the reviewers for the insightful discussion, which helped us further improve the paper. We have uploaded a new revision and highlighted all changes in orange.", " Thank you again for your comments and for increasing your score. We agree that a deeper understanding of the effect of $p$ o...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "jtHM-8KbZyH", "_vRiw-Ero_", "Af58mAVKfF", "jDqygQUjlk5", "tUUAjBY9IZS", "vNbcF0mvbcR", "-YG12kyj4hC", "vJp4SeRPOPr", "8IGO3iPzdo", "ldCobU8Rpm", "mxq_CkMBun", "O9DXIixpyAM", "rOt8CnU0Klq", "nips_2022_0SVOleKNRAU", "nips_2022_0SVOleKNRAU", "nips_2022_0SVOleKNRAU", "nips_2022_0SVOleKN...
nips_2022_9u05zr0nhx
DIMES: A Differentiable Meta Solver for Combinatorial Optimization Problems
Recently, deep reinforcement learning (DRL) models have shown promising results in solving NP-hard combinatorial optimization (CO) problems. However, most DRL solvers can only scale to a few hundred nodes for combinatorial optimization problems on graphs, such as the Traveling Salesman Problem (TSP). This paper addresses the scalability challenge in large-scale combinatorial optimization by proposing a novel approach, namely DIMES. Unlike previous DRL methods, which suffer from costly autoregressive decoding or iterative refinement of discrete solutions, DIMES introduces a compact continuous space for parameterizing the underlying distribution of candidate solutions. Such a continuous space allows stable REINFORCE-based training and fine-tuning via massively parallel sampling. We further propose a meta-learning framework to enable effective initialization of model parameters in the fine-tuning stage. Extensive experiments show that DIMES outperforms recent DRL-based methods on large benchmark datasets for Traveling Salesman Problems and Maximal Independent Set problems.
Accept
This paper proposes a differentiable meta-solver applicable to large-scale combinatorial optimization. After a thorough discussion phase, all the reviewers are on the positive side for this paper. The reviewers appreciated the novelty of this paper and the importance of scaling neural combinatorial optimization to large-scale instances. Overall, I recommend acceptance of this paper. However, the reviewers also raised concerns about the presentation: the gap between generalization and testing performance is not clearly discussed, and the connection to prior works using a continuous latent space should be clearly stated. Since scalability is an important issue, it would also be useful to clear up the time/objective comparisons and unify the experimental settings, as suggested by Reviewers fQdp and fe3B.
train
[ "UWYjLP5Axu0", "-Glzk0IaDkB", "AUSaU0GY-a0", "FgHaW-LXrxv", "Od5NVv2NKaB", "tb_7mQqdP9M", "0FC8-ME0apIl", "OWLeMiuivgLR", "OB5icDMhwEp", "PLMwagGUXS_", "PVApuj_Kl5K2", "s3cS73Hhm6k", "1OJx9gpifSDB", "nj5C_rkr1LQ", "RCjO6sksdh", "0dxK52_x6w_", "2J0haKZfP9v", "DtfZJx_gy-c", "H2NQ-l...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " Thank you for your further response. \n\nFor point 1, I can understand the authors' viewpoint on the contribution, but still think a new decoding method over the same compact output matrix (and indeed the whole model structure) cannot fully support the claim that it \"introduces a compact continuous space for par...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "-Glzk0IaDkB", "tb_7mQqdP9M", "nj5C_rkr1LQ", "0FC8-ME0apIl", "s3cS73Hhm6k", "PLMwagGUXS_", "H2NQ-lK9GmK", "OB5icDMhwEp", "H2NQ-lK9GmK", "PVApuj_Kl5K2", "DtfZJx_gy-c", "1OJx9gpifSDB", "2J0haKZfP9v", "0dxK52_x6w_", "nips_2022_9u05zr0nhx", "nips_2022_9u05zr0nhx", "nips_2022_9u05zr0nhx",...
nips_2022_VdQWVdT_8v
LOG: Active Model Adaptation for Label-Efficient OOD Generalization
This work discusses how to achieve worst-case Out-Of-Distribution (OOD) generalization for a variety of distributions based on a relatively small labeling cost. The problem has broad applications, especially in non-i.i.d. open-world scenarios. Previous studies either rely on a large labeling cost or lack guarantees about worst-case generalization. In this work, we show for the first time that active model adaptation can achieve both good performance and robustness based on the invariant risk minimization principle. We propose \textsc{Log}, an interactive model adaptation framework with two sub-modules: active sample selection and causal invariant learning. Specifically, we formulate the active selection as a mixture distribution separation problem and present an unbiased estimator, which can find the samples that violate the current invariant relationship, with a provable guarantee. The theoretical analysis supports that both sub-modules contribute to generalization. Extensive experimental results confirm the promising performance of the new algorithm.
Accept
The reviews and the discussions converged on the consensus that the paper contains novel ideas and is theoretically solid. However, a discrepancy between the scores remains after the discussions due to different opinions on the experimental part, especially the lack of comparison with standard adaptation baselines in the computer vision community. I read the manuscript, and I agree with reviewer WaQo and nvJG that the experiments are sufficient to support the claims, considering that the proposed method does not naturally apply to image data. That being said, I kindly ask the authors to take into account reviewers' comments while preparing the camera-ready version.
train
[ "1acJK91L9Vb", "GIZGl1sXGgd", "Ba4_w6U-PO", "cGzwYsxMg9", "-ob_RAz3H-l", "zJd-Ryvr3FT", "Ib59l4YyOdJ", "je-HM8Gdfpb", "p43xOQCbbk3" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to authors for their reply. \n\nAs the paper tackles active adaptation, and all three active adaptation baselines considered in the paper [37, 28, 6] show results on datasets like DIGITS and OFFICE, I still believe it would make a stronger paper with comparisons on these datasets. I acknowledge that the re...
[ -1, -1, -1, -1, -1, 5, 5, 8, 8 ]
[ -1, -1, -1, -1, -1, 5, 3, 5, 4 ]
[ "Ba4_w6U-PO", "zJd-Ryvr3FT", "Ib59l4YyOdJ", "je-HM8Gdfpb", "p43xOQCbbk3", "nips_2022_VdQWVdT_8v", "nips_2022_VdQWVdT_8v", "nips_2022_VdQWVdT_8v", "nips_2022_VdQWVdT_8v" ]
nips_2022_9YQPaqVZKP
Neuron with Steady Response Leads to Better Generalization
Regularization can mitigate the generalization gap between training and inference by introducing inductive bias. Existing works have already proposed various inductive biases from diverse perspectives. However, none of them explores inductive bias from the perspective of the class-dependent response distribution of individual neurons. In this paper, we conduct a substantial analysis of the characteristics of such distributions. Based on the analysis results, we articulate the Neuron Steadiness Hypothesis: neurons with similar responses to instances of the same class lead to better generalization. Accordingly, we propose a new regularization method called Neuron Steadiness Regularization (NSR) to reduce neuron intra-class response variance. Based on the Complexity Measure, we theoretically guarantee the effectiveness of NSR for improving generalization. We conduct extensive experiments on Multilayer Perceptrons, Convolutional Neural Networks, and Graph Neural Networks with popular benchmark datasets from diverse domains, which show that our Neuron Steadiness Regularization consistently outperforms the vanilla versions of the models with significant gains and low additional computational overhead.
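A minimal batch-wise form of the intra-class variance penalty described above could look like the following; averaging over the classes present in the batch and the choice of layer are implementation assumptions rather than details taken from the paper.

```python
import torch

def neuron_steadiness_penalty(activations, labels):
    """Mean intra-class variance of neuron responses.
    activations: (batch, num_neurons) responses of one layer; labels: (batch,)."""
    penalty, classes = 0.0, labels.unique()
    for c in classes:
        acts_c = activations[labels == c]
        if acts_c.size(0) > 1:                    # variance needs at least 2 samples
            penalty = penalty + acts_c.var(dim=0, unbiased=False).mean()
    return penalty / classes.numel()

# usage: total_loss = task_loss + lambda_nsr * neuron_steadiness_penalty(h, y)
```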
Accept
This paper measures intra-class neuron response variance and shows that network performance is better when it is lower. The authors then use this term as a regularization target and show that it leads to improved model performance. Reviews were of high quality. Scores were between weak accept and accept, with one reviewer raising their score from weak accept to accept. The most significant concerns were experimental: around ablations, around the diversity and scale of models the technique was tested on, and around the tuning of baselines. However, the experiments seem fairly strong as is, and of course there are always more experimental conditions that can be requested. Based upon the reviewer consensus, I also recommend acceptance of this paper.
val
[ "09nlXyA_5Vq", "JSvSshk2kh-", "bnwmIeAsrlr", "wLHfVmOksL", "3zoesM5Pg5K", "Y4EI717ycbd", "PjDrZ2vMN1D", "v0sR9hRlHOS", "1R9D7O9_jh", "aOP4iU2d1yI", "zQpNKlUZT0d", "JjyfaUzpsB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your hard work in the reviewing period of NeurIPS'22 and we feel grateful for your helpful suggestions and questions. We are also very willing to further discuss with you if you have any questions or suggestions.", " Thank you for your response. \n\nWe compare the sensitivity propos...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "Y4EI717ycbd", "3zoesM5Pg5K", "wLHfVmOksL", "1R9D7O9_jh", "v0sR9hRlHOS", "PjDrZ2vMN1D", "JjyfaUzpsB", "zQpNKlUZT0d", "aOP4iU2d1yI", "nips_2022_9YQPaqVZKP", "nips_2022_9YQPaqVZKP", "nips_2022_9YQPaqVZKP" ]
nips_2022_nQcc_muJyFB
Improved Feature Distillation via Projector Ensemble
In knowledge distillation, previous feature distillation methods mainly focus on the design of loss functions and the selection of the distilled layers, while the effect of the feature projector between the student and the teacher remains under-explored. In this paper, we first discuss a plausible mechanism of the projector with empirical evidence and then propose a new feature distillation method based on a projector ensemble for further performance improvement. We observe that the student network benefits from a projector even if the feature dimensions of the student and the teacher are the same. Training a student backbone without a projector can be considered a multi-task learning process, namely achieving discriminative feature extraction for classification and feature matching between the student and the teacher for distillation at the same time. We hypothesize and empirically verify that without a projector, the student network tends to overfit the teacher's feature distributions despite having a different architecture and weight initialization. This leads to degradation in the quality of the student's deep features, which are eventually used in classification. Adding a projector, on the other hand, disentangles the two learning tasks and helps the student network focus better on the main feature extraction task while still being able to use the teacher features as guidance through the projector. Motivated by the positive effect of the projector in feature distillation, we propose an ensemble of projectors to further improve the quality of student features. Experimental results on different datasets with a series of teacher-student pairs illustrate the effectiveness of the proposed method. Code is available at https://github.com/chenyd7/PEFD.
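A minimal version of the projector-ensemble idea reads as follows: several independently initialized projectors map the student feature, their outputs are averaged, and the average is matched to the teacher feature. The two-layer MLP projectors and the cosine-style matching loss below are illustrative choices, not necessarily those of the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectorEnsemble(nn.Module):
    """Ensemble of MLP projectors from the student to the teacher feature space."""
    def __init__(self, s_dim, t_dim, num_projectors=3):
        super().__init__()
        self.projectors = nn.ModuleList(
            nn.Sequential(nn.Linear(s_dim, t_dim), nn.ReLU(), nn.Linear(t_dim, t_dim))
            for _ in range(num_projectors)
        )

    def forward(self, feat_s):
        # average the ensemble's projections of the student feature
        return torch.stack([p(feat_s) for p in self.projectors], dim=0).mean(dim=0)

def feature_distill_loss(proj_feat_s, feat_t):
    # align the direction of the averaged projection with the teacher feature
    return (1 - F.cosine_similarity(proj_feat_s, feat_t, dim=1)).mean()
```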
Accept
The paper received 5 positive reviews, and the reviewers increased or maintained their scores after the rebuttal. All the reviewers agree that the proposed method is simple yet effective and that the experiments are comprehensive. Overall, this work proposes an improved feature distillation method via a projector ensemble. I hope the authors will clearly discuss the computational costs introduced by the multiple projectors, as suggested by the reviewers.
train
[ "ACD68DBNYh6", "a2vyYxm6Xz8", "-LViqi-zNa0", "i7L3jl4tZei", "rqVDubYaTzq", "Ixcxy0JR_FX", "Dh8ccs1dM3W", "G2l6y3tmsFv", "GvnUx6oSngl", "8_SK4TNgaBM", "T5OMXV4g7lO", "1qHzafPrddc", "r6cCftIJiXw", "tYV7GxVJQMH", "C4xb4MDiPZ3" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate your support. In the table for Q2, we use teacher-student pair DenseNet201-ResNet18 on ImageNet for demonstration. In this table, we report the training times of one epoch of different methods and record the peak GPU (an NVIDIA V100 GPU) memory usages of different methods with batch size o...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5, 5 ]
[ "a2vyYxm6Xz8", "Dh8ccs1dM3W", "rqVDubYaTzq", "G2l6y3tmsFv", "GvnUx6oSngl", "C4xb4MDiPZ3", "tYV7GxVJQMH", "r6cCftIJiXw", "1qHzafPrddc", "T5OMXV4g7lO", "nips_2022_nQcc_muJyFB", "nips_2022_nQcc_muJyFB", "nips_2022_nQcc_muJyFB", "nips_2022_nQcc_muJyFB", "nips_2022_nQcc_muJyFB" ]
nips_2022_XxmOKCt8dO9
ConfounderGAN: Protecting Image Data Privacy with Causal Confounder
The success of deep learning is partly attributed to the availability of massive data downloaded freely from the Internet. However, it also means that users' private data may be collected by commercial organizations without consent and used to train their models. Therefore, it is important and necessary to develop a method or tool to prevent unauthorized data exploitation. In this paper, we propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners. Specifically, the noise produced by the generator for each image has the confounder property. It can build spurious correlations between images and labels, so that the model cannot learn the correct mapping from images to labels from this noise-added dataset. Meanwhile, the discriminator is used to ensure that the generated noise is small and imperceptible, thereby preserving the normal utility of the encrypted image for humans. The experiments are conducted on six image classification datasets, including three natural object datasets and three medical datasets. The results demonstrate that our method not only outperforms state-of-the-art methods in standard settings, but can also be applied to fast encryption scenarios. Moreover, we present a series of transferability and stability experiments to further illustrate the effectiveness and superiority of our method.
Accept
All the reviewers were excited by the idea and the efficient method for solving a very critical problem, supported by rigorous experiments. They all agreed that the paper is above the bar for publication. We hope the authors will further improve the paper for the camera-ready submission.
train
[ "5rTgAXT-4g", "8KliZsQTWsp", "ngcy_vaXJf", "vUE0LQV5sV8", "SUx0d-M1Kl", "8peBcxPt5gm", "BV9lH6mXQQ0", "J0bm0vzitKH", "RbM7os-79DKE", "xiXVb-8GP_", "tJUJl49fWu", "ZUrDSk7uP5F", "rcJgz2rPQnh" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your suggestion, although we believe that the current version of ConfounderGAN can be applied to most practical scenarios (i.e. as data owners usually don't reveal which encryption tool they use), we also note that it is important to design an encryption tool that strictly satisfies the Kerckhoffs's pr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "8KliZsQTWsp", "BV9lH6mXQQ0", "BV9lH6mXQQ0", "nips_2022_XxmOKCt8dO9", "rcJgz2rPQnh", "ZUrDSk7uP5F", "J0bm0vzitKH", "RbM7os-79DKE", "xiXVb-8GP_", "tJUJl49fWu", "nips_2022_XxmOKCt8dO9", "nips_2022_XxmOKCt8dO9", "nips_2022_XxmOKCt8dO9" ]
nips_2022_0GRBKLBjJE
A Fast Post-Training Pruning Framework for Transformers
Pruning is an effective way to reduce the huge inference cost of Transformer models. However, prior work on pruning Transformers requires retraining the models. This can add high training cost and high complexity to model deployment, making it difficult to use in many practical situations. To address this, we propose a fast post-training pruning framework for Transformers that does not require any retraining. Given a resource constraint and a sample dataset, our framework automatically prunes the Transformer model using structured sparsity methods. To retain high accuracy without retraining, we introduce three novel techniques: (i) a lightweight mask search algorithm that finds which heads and filters to prune based on the Fisher information; (ii) mask rearrangement that complements the search algorithm; and (iii) mask tuning that reconstructs the output activations for each layer. We apply our method to BERT-base and DistilBERT, and we evaluate its effectiveness on GLUE and SQuAD benchmarks. Our framework achieves up to 2.0x reduction in FLOPs and 1.56x speedup in inference latency, while maintaining < 1% loss in accuracy. Importantly, our framework prunes Transformers in less than 3 minutes on a single GPU, which is over two orders of magnitude faster than existing pruning approaches that retrain the models.
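The Fisher-information-based importance used by the mask search can be estimated from gradients on a small sample set. The sketch below accumulates squared gradients with respect to per-head gate variables, a common diagonal empirical-Fisher proxy; the model interface and the paper's exact estimator are assumptions here.

```python
import torch
import torch.nn.functional as F

def fisher_head_importance(model, head_gates, data_loader, num_batches=32):
    """Empirical-Fisher importance of attention heads, accumulated as squared
    gradients of the loss w.r.t. per-head gates. Assumes model(batch, head_gates)
    multiplies each head's output by its gate, and head_gates is a ones tensor
    with requires_grad=True."""
    importance = torch.zeros_like(head_gates)
    for i, (batch, labels) in enumerate(data_loader):
        if i >= num_batches:
            break
        loss = F.cross_entropy(model(batch, head_gates), labels)
        (grad,) = torch.autograd.grad(loss, head_gates)
        importance += grad.detach() ** 2
    return importance   # heads with the smallest scores are pruned first
```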
Accept
The authors deliver on what they promise: a fast post-training pruning framework for transformers. It reduces the inference costs of deploying transformers while preserving much or all of their accuracy on the standard range of academic downstream tasks. Moreover, it does so without the hefty costs that typically come with prune-and-retrain cycles. The paper is clearly written and well-presented, and the technique seems to work quite well. The authors seemed to satisfactorily address all reviewer concerns, and those concerns were minor at best. What more can you ask for? I look forward to visiting the poster at NeurIPS and trying this technique myself. The authors are to be especially commended for focusing on real-world speedup on real hardware. That's (sadly) still a rarity in pruning papers. This is something that appears genuinely useful, today, by practitioners.
train
[ "5XV3yhzitRi", "f07FbpOaNC_", "8jh5nAdISi_o", "8HsqMR-Xh-S", "P30SE6-eFQH", "dcRfHPda7sg", "Pw1aLo_ErZX", "vWG-YAjxQJL", "iQB9-BELUb", "sSV1Y2960CD", "7e5_6VgnVW", "rO9dYgiVg5P" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the author for the rebuttal. The author solves my concern.", " ==================post rebuttal============ \n\nMy concerns are addressed. I raise up my score. Thanks.", " Thanks for providing the feedback. The rebuttal addresses my concerns. I would like to keep my acceptance recommendation. ", " Th...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "vWG-YAjxQJL", "rO9dYgiVg5P", "dcRfHPda7sg", "rO9dYgiVg5P", "rO9dYgiVg5P", "7e5_6VgnVW", "sSV1Y2960CD", "iQB9-BELUb", "nips_2022_0GRBKLBjJE", "nips_2022_0GRBKLBjJE", "nips_2022_0GRBKLBjJE", "nips_2022_0GRBKLBjJE" ]
nips_2022_yI7i9yc3Upr
Controllable Text Generation with Neurally-Decomposed Oracle
We propose a general and efficient framework to control auto-regressive generation models with a NeurAlly-Decomposed Oracle (NADO). Given a pre-trained base language model and a sequence-level boolean oracle function, we aim to decompose the oracle function into token-level guidance to steer the base model in text generation. Specifically, the token-level guidance is provided by NADO, a neural model trained on examples sampled from the base model, demanding no additional auxiliary labeled data. Based on posterior regularization, we present the closed-form optimal solution to incorporate the decomposed token-level guidance into the base model for controllable generation. We further discuss how the neural approximation affects the quality of the solution. Experiments conducted on two different applications, (1) text generation with lexical constraints and (2) machine translation with formality control, demonstrate that our framework efficiently guides the base model towards the given oracle while maintaining high generation quality.
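Schematically, the closed-form combination mentioned above reweights the base model's next-token distribution by the decomposed oracle. Writing $R^C(x_{\le i})$ for the (neurally approximated) probability that a completion of the prefix $x_{\le i}$ will satisfy the constraint $C$, the guided distribution has the form

$$ q(x_i \mid x_{<i}) \;\propto\; p_{\mathrm{base}}(x_i \mid x_{<i}) \cdot \frac{R^C(x_{\le i})}{R^C(x_{< i})}, $$

so tokens that raise the estimated chance of eventually satisfying the oracle are up-weighted. The notation here is ours and is meant only to convey the structure of the solution, not its exact form in the paper.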
Accept
All three reviewers favored accepting the paper. The method of the paper is formulated as an optimization problem based on posterior regularization, and as such is quite different from existing paradigms in controllable NLG (e.g., lexically constrained beam search or modified probability sampling). The work's theoretical basis also offers a nice contrast with established methods in this area, as the existing methods are often applied in a post-hoc manner and without theoretical guarantees. The only significant downside of this paper is that its evaluation is not very standard and lacks human evaluation, and its model-based automated evaluation of attributes such as formality could have been affected by spurious correlations (note: the latter concern affects only one of two tasks of the paper). As the paper achieves some substantial gains on two very different tasks, the reviewers generally considered the method of the paper to be quite effective.
train
[ "lACN1pY0iw", "XBDHcf-mM3l", "XHt-lEZuacg", "T5jxIk4EvtU", "tiRY0_bvmGn", "o3rRcZnYKoy", "nsdAZHrWcW-", "LIQu9Dz999h", "oSED9QLbM-x" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed response, revision, and additional results. The authors have addressed most of my questions, and I am happy to increase my score (also apologize for this last-minute response)", " Dear Reviewer 2y5z,\n\nThanks for your valuable comments and we believe they help a lot in ou...
[ -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "XBDHcf-mM3l", "LIQu9Dz999h", "nips_2022_yI7i9yc3Upr", "oSED9QLbM-x", "LIQu9Dz999h", "nsdAZHrWcW-", "nips_2022_yI7i9yc3Upr", "nips_2022_yI7i9yc3Upr", "nips_2022_yI7i9yc3Upr" ]
nips_2022_C7cv9fh8m-b
Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models
Despite the great success of pre-trained language models (PLMs) in a large set of natural language processing (NLP) tasks, there has been a growing concern about their security in real-world applications. Backdoor attack, which poisons a small number of training samples by inserting backdoor triggers, is a typical threat to security. Trained on the poisoned dataset, a victim model would perform normally on benign samples but predict the attacker-chosen label on samples containing pre-defined triggers. The vulnerability of PLMs under backdoor attacks has been proved with increasing evidence in the literature. In this paper, we present several simple yet effective training strategies that could effectively defend against such attacks. To the best of our knowledge, this is the first work to explore the possibility of backdoor-free adaptation for PLMs. Our motivation is based on the observation that, when trained on the poisoned dataset, the PLM's adaptation follows a strict order of two stages: (1) a moderate-fitting stage, where the model mainly learns the major features corresponding to the original task instead of subsidiary features of backdoor triggers, and (2) an overfitting stage, where both features are learned adequately. Therefore, if we could properly restrict the PLM's adaptation to the moderate-fitting stage, the model would neglect the backdoor triggers but still achieve satisfying performance on the original task. To this end, we design three methods to defend against backdoor attacks by reducing the model capacity, training epochs, and learning rate, respectively. Experimental results demonstrate the effectiveness of our methods in defending against several representative NLP backdoor attacks. We also perform visualization-based analysis to attain a deeper understanding of how the model learns different features, and explore the effect of the poisoning ratio. Finally, we explore whether our methods could defend against backdoor attacks for the pre-trained CV model. The codes are publicly available at https://github.com/thunlp/Moderate-fitting.
Accept
The paper proposes an approach to defend against backdoor triggers by restricting the language model fine-tuning to the moderate-fitting stage. The paper also provides a nice analysis demonstrating the factors that impact the models’ vulnerability to backdoors. Overall, the paper is well-written and provides sufficient analyses to support the claims. The revision and rebuttal address the comments from the reviewers.
train
[ "BcLGRO2DOgT", "r8GS_mbsmh_", "XhgFaWGrx_2", "0DPAmRrnnA", "mCU71Fhdj8", "-C2TZ3XGqH", "amBUVhoQYcG", "cFy2WsGTCQ", "D4JBEjnwoqpd", "b-HT8cAxVsg", "vOXPvrXRUJl", "q4HiSW6jK0Q", "617vJbXmW6", "YM9fTYlktQM", "GVDTXlJxZBV", "iAp-SEi32uz", "j2NIHxIz82Z", "J7mnQuxkhEs", "_b-c_tGRUoO",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " We sincerely thank you for your effort to improve this paper and your decision.\n\n", " I appreciate the authors' response. I am glad to see that my questions are well answered, and the quality of the paper is improved. \n\nI maintain my original score.", " We sincerely thank you for your effort to improve th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 3 ]
[ "r8GS_mbsmh_", "9BRKB87G2I", "0DPAmRrnnA", "vOXPvrXRUJl", "-C2TZ3XGqH", "amBUVhoQYcG", "cFy2WsGTCQ", "b-HT8cAxVsg", "nips_2022_C7cv9fh8m-b", "_bBFi2OVyoy", "Kex7yor8r3Y", "617vJbXmW6", "yZgGlsKGa4G", "_bBFi2OVyoy", "_bBFi2OVyoy", "_bBFi2OVyoy", "u5GdGIz64u0", "u5GdGIz64u0", "u5Gd...
nips_2022_EENzpzcs4Vy
Unsupervised Learning of Shape Programs with Repeatable Implicit Parts
Shape programs encode shape structures by representing object parts as subroutines and constructing the overall shape by composing these subroutines. This usually involves the reuse of subroutines for repeatable parts, enabling the modeling of correlations among shape elements such as geometric similarity. However, existing learning-based shape programs suffer from limited representation capacity, because they use coarse geometry representations such as geometric primitives and low-resolution voxel grids. Further, their training requires manually annotated ground-truth programs, which are expensive to attain. We address these limitations by proposing Shape Programs with Repeatable Implicit Parts (ProGRIP). Using implicit functions to represent parts, ProGRIP greatly boosts the representation capacity of shape programs while preserving the higher-level structure of repetitions and symmetry. Meanwhile, we free ProGRIP from the need for inaccessible program supervision by devising a matching-based unsupervised training objective. Our empirical studies show that ProGRIP outperforms existing structured representations in both shape reconstruction fidelity and segmentation accuracy of semantic parts.
Accept
All reviewers recommend acceptance of this paper. They find the approach of repeatable parts innovative and the paper well written. The AC concurs.
train
[ "MnUAzCOqmlk", "g5E28o-af5X", "KtQb9WXAPG", "HhpiZnGRtT", "EWWMJug1u-r", "dxXbA56d-n", "zhE1qVkHCKU", "e2UiPf1CRdy", "gEYlckzKVF2", "Sb_z-g6MqKn", "jzWxZAOFlKw", "VN3Ex1064qy" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the answers. I believe that most concerns have been properly addressed. I am inclined to stick to my original rating.\n\n", " Thank you again for taking the time and care to consider both our original manuscript and our responses! This has certainly made the work stronger and clearer. We will ma...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "gEYlckzKVF2", "KtQb9WXAPG", "dxXbA56d-n", "nips_2022_EENzpzcs4Vy", "nips_2022_EENzpzcs4Vy", "jzWxZAOFlKw", "jzWxZAOFlKw", "VN3Ex1064qy", "Sb_z-g6MqKn", "nips_2022_EENzpzcs4Vy", "nips_2022_EENzpzcs4Vy", "nips_2022_EENzpzcs4Vy" ]
nips_2022_4u-oGqB4Lf6
Efficient Active Learning with Abstention
The goal of active learning is to achieve the same accuracy achievable by passive learning, while using much fewer labels. Exponential savings in terms of label complexity have been proved in very special cases, but fundamental lower bounds show that such improvements are impossible in general. This suggests a need to explore alternative goals for active learning. Learning with abstention is one such alternative. In this setting, the active learning algorithm may abstain from prediction and incur an error that is marginally smaller than random guessing. We develop the first computationally efficient active learning algorithm with abstention. Our algorithm provably achieves $\mathsf{polylog}(\frac{1}{\varepsilon})$ label complexity, without any low noise conditions. Such performance guarantee reduces the label complexity by an exponential factor, relative to passive learning and active learning that is not allowed to abstain. Furthermore, our algorithm is guaranteed to only abstain on hard examples (where the true label distribution is close to a fair coin), a novel property we term \emph{proper abstention} that also leads to a host of other desirable characteristics (e.g., recovering minimax guarantees in the standard setting, and avoiding the undesirable ``noise-seeking'' behavior often seen in active learning). We also provide novel extensions of our algorithm that achieve \emph{constant} label complexity and deal with model misspecification.
Accept
In this paper, the authors develop the first computationally efficient active learning algorithm with abstention, while maintaining the exponential savings in terms of label complexity. Furthermore, the proposed algorithm enjoys other nice properties, such as recovering minimax rates in the standard setting. The algorithm is based on novel applications of techniques from contextual bandits, and the analysis is nontrivial. On the other hand, the authors should improve their paper by addressing the reviewers' concerns, especially those regarding the realizability assumption.
train
[ "ERNnqqxssc", "4XZa2aCbsPt", "OjMhucNTMhl", "4yu5LDpBI3CR", "cLzTR24zjT", "yNaSPhHjDWY", "FQABQiNb6C", "edAwltDxlA7", "M8lRHkb8DUP", "80QUogWmHmK", "9ZJcTEYLLyD", "FDVTnCl_hAM", "QiZFKf9wJs" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply. \n\nWe believe it's hard to find *a single function class* that contains all possible true conditional probability (under *arbitrary* ${\\cal D_{XY}}$) and has a non-trivial disagreement coefficient (we can trivially bound $\\theta \\leq \\gamma/\\epsilon$ for any ${\\cal F}$; see Defini...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 3, 3 ]
[ "4XZa2aCbsPt", "FQABQiNb6C", "4yu5LDpBI3CR", "edAwltDxlA7", "80QUogWmHmK", "QiZFKf9wJs", "FDVTnCl_hAM", "M8lRHkb8DUP", "9ZJcTEYLLyD", "nips_2022_4u-oGqB4Lf6", "nips_2022_4u-oGqB4Lf6", "nips_2022_4u-oGqB4Lf6", "nips_2022_4u-oGqB4Lf6" ]
nips_2022_J-IZQLQZdYu
Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs. Researchers primarily operate from a privacy first perspective, setting strict privacy requirements and minimizing risk subject to these constraints. Practitioners often desire an accuracy first perspective, possibly satisfied with the greatest privacy they can get subject to obtaining sufficiently small error. Ligett et al. have introduced a "noise reduction" algorithm to address the latter perspective. The authors show that by adding correlated Laplace noise and progressively reducing it on demand, it is possible to produce a sequence of increasingly accurate estimates of a private parameter and only pay a privacy cost for the least noisy iterate released. In this work, we generalize noise reduction to the setting of Gaussian noise, introducing the Brownian mechanism. The Brownian mechanism works by first adding Gaussian noise of high variance corresponding to the final point of a simulated Brownian motion. Then, at the practitioner's discretion, noise is gradually decreased by tracing back along the Brownian path to an earlier time. Our mechanism is more naturally applicable to the common setting of bounded $\ell_2$-sensitivity, empirically outperforms existing work on common statistical tasks, and provides customizable control of privacy loss over the entire interaction with the practitioner. We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm that provides adaptive privacy guarantees. Overall, our results demonstrate that one can meet utility constraints while still maintaining strong levels of privacy.
Accept
The reviewers unanimously agreed that the paper is well-motivated and the theoretical results surrounding the proposed Brownian mechanism are interesting. Initial concerns regarding presentation and clarity were assuaged after the authors' responses to the reviews. Overall, the paper is a non-trivial and valuable extension of [Ligett et al., 2017] and should be presented at the conference.
test
[ "aNPB3djtXz", "lGvoS0YDHx-", "Yt8xyhmAkgxg", "a18_FK-mlub", "dql8zcrcXz6", "fecGNrQTpqx", "5aQdOF3GKo8", "NRqUWCBBDIt", "4GD4YmSkIZe", "QOynWzqWGgy", "1TMOCNZkudH", "9KOQudDLoiv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the clarifications and edits. I encourage you to give the paper one more round of editing to correct minor typos, e.g., $(Z_t)_{t \\ge 0}$ in Line 236, and note that the probabilities sum to 1 in the proof of Proposition C.1 so the total sum is $\\gamma/2$. Also, it could be helpful to inc...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 2, 4, 4 ]
[ "NRqUWCBBDIt", "5aQdOF3GKo8", "fecGNrQTpqx", "dql8zcrcXz6", "9KOQudDLoiv", "1TMOCNZkudH", "QOynWzqWGgy", "4GD4YmSkIZe", "nips_2022_J-IZQLQZdYu", "nips_2022_J-IZQLQZdYu", "nips_2022_J-IZQLQZdYu", "nips_2022_J-IZQLQZdYu" ]
nips_2022_ZL-XYsDqfQz
Make an Omelette with Breaking Eggs: Zero-Shot Learning for Novel Attribute Synthesis
Most of the existing algorithms for zero-shot classification problems typically rely on the attribute-based semantic relations among categories to realize the classification of novel categories without observing any of their instances. However, training the zero-shot classification models still requires attribute labeling for each class (or even instance) in the training dataset, which is also expensive. To this end, in this paper, we raise a new problem scenario: "Can we derive zero-shot learning for novel attribute detectors/classifiers and use them to automatically annotate the dataset for labeling efficiency?" Basically, given only a small set of detectors that are learned to recognize some manually annotated attributes (i.e., the seen attributes), we aim to synthesize the detectors of novel attributes in a zero-shot learning manner. Our proposed method, Zero-Shot Learning for Attributes (ZSLA), which is the first of its kind to the best of our knowledge, tackles this new research problem by applying set operations to first decompose the seen attributes into their basic attributes and then recombine these basic attributes into the novel ones. Extensive experiments are conducted to verify the capacity of our synthesized detectors for accurately capturing the semantics of the novel attributes and show their superior performance in terms of detection and localization compared to other baseline approaches. Moreover, we demonstrate the application of automatic annotation using our synthesized detectors on the Caltech-UCSD Birds-200-2011 dataset. Various generalized zero-shot classification algorithms trained upon the dataset re-annotated by ZSLA show comparable performance to those trained with the manual ground-truth annotations.
Accept
This paper proposes a method named Zero-Shot Learning for Attributes to deal with a research problem about novel attribute classification and attribute labeling. The reviewers had many questions in the initial round. After the rebuttal, the authors clarified most unclear points, and some reviewers raised their scores. In general, all the reviewers agree with the acceptance of this paper.
test
[ "gCI2tL0Y7I9", "vGdaMoFn531", "RarfWwRU9p4", "ylD_aGF0YyC", "xgWBTxpjYZ2", "UbgpGCeN64J", "FXwTwlBtyjt4", "XXcLeuFNUc8", "3n3tur_VPzL", "MYi-zGb27L-", "GCf7zIGA2RV", "Qd5jtXhB9TA", "TpMI-Igh8_", "Q4oUV-13MPm", "T7cIWpY4d-5", "Hh4vaDi34TT", "XiCbNtiISdv" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " First, we thank the reviewer for recognizing our efforts to provide clarification in the rebuttal as well as for his/her consideration to raise the rating on our work. Below we sequentially reply to the remaining concerns in the reviewer's comments.\n\n---\n\n**[Reply to Q1]**\n\nWe believe that the good performa...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "RarfWwRU9p4", "RarfWwRU9p4", "ylD_aGF0YyC", "XiCbNtiISdv", "Hh4vaDi34TT", "T7cIWpY4d-5", "Q4oUV-13MPm", "XiCbNtiISdv", "XiCbNtiISdv", "XiCbNtiISdv", "Hh4vaDi34TT", "T7cIWpY4d-5", "Q4oUV-13MPm", "nips_2022_ZL-XYsDqfQz", "nips_2022_ZL-XYsDqfQz", "nips_2022_ZL-XYsDqfQz", "nips_2022_ZL-...
nips_2022_WSAWRKVjr5K
All Politics is Local: Redistricting via Local Fairness
In this paper, we propose to use the concept of local fairness for auditing and ranking redistricting plans. Given a redistricting plan, a deviating group is a population-balanced contiguous region in which a majority of individuals are of the same interest and in the minority of their respective districts; such a set of individuals have a justified complaint with how the redistricting plan was drawn. A redistricting plan with no deviating groups is called locally fair. We show that the problem of auditing a given plan for local fairness is NP-complete. We present an MCMC approach for auditing as well as ranking redistricting plans. We also present a dynamic programming based algorithm for the auditing problem that we use to demonstrate the efficacy of our MCMC approach. Using these tools, we test local fairness on real-world election data, showing that it is indeed possible to find plans that are almost or exactly locally fair. Further, we show that such plans can be generated while sacrificing very little in terms of compactness and existing fairness measures such as competitiveness of the districts or seat shares of the plans.
Accept
The reviewers universally agreed that this paper is timely, interesting, and well written. It has two limitations whose resolution would make the paper even stronger. First, being upfront about the heuristic (rather than rigorous) nature of some of the statements, as highlighted by the reviewers. Second, investigating other datasets to further strengthen the empirical section. I urge the authors to make these changes for the camera-ready version.
train
[ "bFninMk6o6E", "CDivMVRwAW", "IdHTCgqXhV9", "nUF3yU5sSH3", "aKX9zjvUHhd", "vtUwBCGLxG8", "OROvo2Z8FK6", "2aAmgqDFGj", "zwtojYu-1W5", "6rmGlTgOETf", "Gr9rUq8LGVr", "K-0AK9M5pVl" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the additional response. I agree that counting all fractionally “wasted votes” from each VTD (full votes for people whose party lost in their district and fractional if their party won by more than necessary) would likely address the issue of packing. However, there are two emergent issues with this su...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "CDivMVRwAW", "IdHTCgqXhV9", "vtUwBCGLxG8", "K-0AK9M5pVl", "Gr9rUq8LGVr", "6rmGlTgOETf", "zwtojYu-1W5", "nips_2022_WSAWRKVjr5K", "nips_2022_WSAWRKVjr5K", "nips_2022_WSAWRKVjr5K", "nips_2022_WSAWRKVjr5K", "nips_2022_WSAWRKVjr5K" ]
nips_2022_I4XNmBm2h-E
Adaptive Oracle-Efficient Online Learning
The classical algorithms for online learning and decision-making have the benefit of achieving the optimal performance guarantees, but suffer from computational complexity limitations when implemented at scale. More recent sophisticated techniques, which we refer to as $\textit{oracle-efficient}$ methods, address this problem by dispatching to an $\textit{offline optimization oracle}$ that can search through an exponentially-large (or even infinite) space of decisions and select that which performed the best on any dataset. But despite the benefits of computational feasibility, most oracle-efficient algorithms exhibit one major limitation: while performing well in worst-case settings, they do not adapt well to friendly environments. In this paper we consider two such friendly scenarios, (a) "small-loss" problems and (b) IID data. We provide a new framework for designing follow-the-perturbed-leader algorithms that are oracle-efficient and adapt well to the small-loss environment, under a particular condition which we call $\textit{approximability}$ (which is spiritually related to sufficient conditions provided in (Dudík et al., 2020)). We identify a series of real-world settings, including online auctions and transductive online classification, for which approximability holds. We also extend the algorithm to an IID data setting and establish a "best-of-both-worlds" bound in the oracle-efficient setting.
Accept
The paper received reviews from experts in online learning, who all support acceptance following some clarifications provided by the authors. From my own look into the paper, I also firmly support acceptance: the paper makes a clear, solid and elegant contribution to a long line of research in online learning, and it is also very well written. I do however strongly encourage the authors to pay close attention to the suggestions in the reviews as to how to improve their presentation for the final version.
train
[ "PfHPfdl1SiU", "AeqJ4jvZmLJ", "5Z6-325N-Y", "lAdMNUGoXa8", "fFOZDSJYYg", "hYuiX4T8OJl", "-Qh9F33C3C", "pc9G09fczg1", "gVeNN1vmbZx", "5Cv5TjoSWhx" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your clarification.\nI now have a better understanding of the contributions of this study and have corrected some of my misunderstandings.\nI am generally satisfied with the responses and am now leaning towards increasing my score, but will update it after the reviewer discussion period.\n", " Dea...
[ -1, -1, -1, -1, -1, -1, 7, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "lAdMNUGoXa8", "nips_2022_I4XNmBm2h-E", "5Cv5TjoSWhx", "gVeNN1vmbZx", "pc9G09fczg1", "-Qh9F33C3C", "nips_2022_I4XNmBm2h-E", "nips_2022_I4XNmBm2h-E", "nips_2022_I4XNmBm2h-E", "nips_2022_I4XNmBm2h-E" ]
nips_2022_Zvh6lF5b26N
Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold
When training overparameterized deep networks for classification tasks, it has been widely observed that the learned features exhibit a so-called "neural collapse" phenomenon. More specifically, for the output features of the penultimate layer, for each class the within-class features converge to their means, and the means of different classes exhibit a certain tight frame structure, which is also aligned with the last layer's classifier. As feature normalization in the last layer becomes a common practice in modern representation learning, in this work we theoretically justify the neural collapse phenomenon under normalized features. Based on an unconstrained feature model, we simplify the empirical loss function in a multi-class classification task into a nonconvex optimization problem over the Riemannian manifold by constraining all features and classifiers over the sphere. In this context, we analyze the nonconvex landscape of the Riemannian optimization problem over the product of spheres, showing a benign global landscape in the sense that the only global minimizers are the neural collapse solutions while all other critical points are strict saddle points with negative curvature. Experimental results on practical deep networks corroborate our theory and demonstrate that better representations can be learned faster via feature normalization. Code for our experiments can be found at https://github.com/cjyaras/normalized-neural-collapse.
Accept
The paper studies a matrix decomposition problem and shows that the problem is of strict-saddle type. All the reviewers lean towards accepting the paper. I recommend acceptance.
train
[ "ZJGLE6_kNAD", "zwi1hJV59DS", "j62jr1Jvh79", "J4oh3rXl6LC", "DEzdu32vqYc", "5-zdAW4xFcV", "zZb2rQ_Kueb" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their helpful comments. Please find our response below. \n\n${\\bf Q1.}$ The main result of the paper is neither introduced nor motivated and no intuitions ... This part should be \"softened\". (see Weakness 1 for details)\n\n${\\bf A1.}$ We thank the reviewer for valuable feedback and ...
[ -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, 3, 2, 4 ]
[ "zZb2rQ_Kueb", "5-zdAW4xFcV", "DEzdu32vqYc", "nips_2022_Zvh6lF5b26N", "nips_2022_Zvh6lF5b26N", "nips_2022_Zvh6lF5b26N", "nips_2022_Zvh6lF5b26N" ]
nips_2022_H_xAgRM7I5N
Zero-Shot 3D Drug Design by Sketching and Generating
Drug design is a crucial step in the drug discovery cycle. Recently, various deep learning-based methods design drugs by generating novel molecules from scratch, avoiding traversing large-scale drug libraries. However, they depend on scarce experimental data or time-consuming docking simulation, leading to overfitting issues with limited training data and slow generation speed. In this study, we propose the zero-shot drug design method DESERT (Drug dEsign by SkEtching and geneRaTing). Specifically, DESERT splits the design process into two stages: sketching and generating, and bridges them with the molecular shape. The two-stage fashion enables our method to utilize the large-scale molecular database to reduce the need for experimental data and docking simulation. Experiments show that DESERT achieves a new state-of-the-art at a fast speed.
Accept
The paper makes a novel contribution to methods for generating novel molecules from scratch. The core idea is to generate a shape that fits the molecular pocket without looking at the protein structure. Two out of three reviewers recommended acceptance. Reviewers emphasize that the method is innovative and interesting, and the empirical performance is appealing (especially given that only the shape information is provided to the model). Strong performance is enabled by good design choices made across the paper, such as including the pretraining stage. The reviewer who recommended rejection raised issues related to the novelty and clarity of the paper. However, I believe the paper is sufficiently clear and novel to meet the bar for acceptance. Overall, it is my pleasure to recommend acceptance of the paper.
train
[ "G7pOqldthb3", "JhutK5xovvR", "LRscidv3AkN", "kycIxhJ47qE", "dV7ceMmRin9", "7hWwH2Kfzlf", "HY6QzzliAjz", "lAFBtsRJMkG", "XEk09eqgZEKi", "CMpzxqHMz5", "OVQHKzYu6iR", "myQ3TBGZyxg", "DY-yzObqlD", "bX0JP-inNtZ", "ZjoInd_rUL", "4tiaB0foaye", "Q4wZyHWAkZ", "naSFP3JZBsF", "3wbdXCKR0h",...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Thanks again for your comments. Did we fix your concern of this paper properly? If not, we are happy to take further questions!", " Thanks for the response. I understand such style of model and training approach is widely used in domains like machine translation, but I'm still a bit surprised that the spatial c...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "10s0g2G9yiA", "kycIxhJ47qE", "dV7ceMmRin9", "dV7ceMmRin9", "naSFP3JZBsF", "VVC9g0n2Ylt", "10s0g2G9yiA", "XEk09eqgZEKi", "OVQHKzYu6iR", "OVQHKzYu6iR", "DY-yzObqlD", "nips_2022_H_xAgRM7I5N", "MM1zPWNNZs", "MM1zPWNNZs", "MM1zPWNNZs", "10s0g2G9yiA", "10s0g2G9yiA", "VVC9g0n2Ylt", "VV...
nips_2022_-ZPeUAJlkEu
Why neural networks find simple solutions: The many regularizers of geometric complexity
In many contexts, simpler models are preferable to more complex models and the control of this model complexity is the goal for many methods in machine learning such as regularization, hyperparameter tuning and architecture design. In deep learning, it has been difficult to understand the underlying mechanisms of complexity control, since many traditional measures are not naturally suitable for deep neural networks. Here we develop the notion of geometric complexity, which is a measure of the variability of the model function, computed using a discrete Dirichlet energy. Using a combination of theoretical arguments and empirical results, we show that many common training heuristics such as parameter norm regularization, spectral norm regularization, flatness regularization, implicit gradient regularization, noise regularization and the choice of parameter initialization all act to control geometric complexity, providing a unifying framework in which to characterize the behavior of deep learning models.
Accept
All reviewers and the AC find this paper makes valuable contributions to the deep learning theory community. Thus, the AC recommends acceptance.
train
[ "bUhhZgWVgfC", "J3iVepPBAlk", "18OKVqOyXkj", "MYktCrQqXe", "C9W-IU3oUwg", "tMUYrRGx7Ob", "mnzKK-3b4os", "foRwX14gkdU", "GzJVdA7k3f-", "lkLMmU6YDWt", "6y6fpiPRv6u", "B25vBrdkiKy", "V7wRWuHw9xH" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We'd like to thank you for your thorough response. We are very glad that we were able to clear the misunderstanding and that you find the current work interesting. The question of fully understanding how GC affects generalization is certainly an interesting one as well and something to focus on for future work. T...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "18OKVqOyXkj", "GzJVdA7k3f-", "MYktCrQqXe", "C9W-IU3oUwg", "mnzKK-3b4os", "foRwX14gkdU", "V7wRWuHw9xH", "B25vBrdkiKy", "6y6fpiPRv6u", "nips_2022_-ZPeUAJlkEu", "nips_2022_-ZPeUAJlkEu", "nips_2022_-ZPeUAJlkEu", "nips_2022_-ZPeUAJlkEu" ]
nips_2022_-N-OYK2cY7
Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes
In this paper we study the problem of multiclass classification with a bounded number of different labels $k$, in the realizable setting. We extend the traditional PAC model to a) distribution-dependent learning rates, and b) learning rates under data-dependent assumptions. First, we consider the universal learning setting (Bousquet, Hanneke, Moran, van Handel and Yehudayoff, STOC'21), for which we provide a complete characterization of the achievable learning rates that holds for every fixed distribution. In particular, we show the following trichotomy: for any concept class, the optimal learning rate is either exponential, linear or arbitrarily slow. Additionally, we provide complexity measures of the underlying hypothesis class that characterize when these rates occur. Second, we consider the problem of multiclass classification with structured data (such as data lying on a low dimensional manifold or satisfying margin conditions), a setting which is captured by partial concept classes (Alon, Hanneke, Holzman and Moran, FOCS'21). Partial concepts are functions that can be undefined in certain parts of the input space. We extend the traditional PAC learnability of total concept classes to partial concept classes in the multiclass setting and investigate differences between partial and total concepts.
Accept
This work is an extension of the theories of partial concept classes and the universal learning framework to multi-class classification tasks. The reviewers have found the work well-rounded and correct, and of substantial interest to the learning theory sub-community at NeurIPS. A drawback is that this submission might appear to be a natural follow-up to previously established results.
train
[ "Zn0Kzf-7aJ", "pRZzW7XzVU", "eQYRKk8IX3W", "xssRoNtHchJ", "xIMe95M4VJd", "RynUY4pyKWV", "-HuGJfrOnA", "G91m2mrzi7z", "zYacsOVctZQ", "ztB1YW3s2XE", "MIgl4PBXcNQ", "agsJuVFU_cz", "U9ek9M0M6S-" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thanks the authors for their detailed response. I suggest that the authors include the aforementioned technical challenges in the final version of their paper.", " I am satisfied with the response of the authors. Regardless of the decision on this submission, I think the community will benefit f...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "G91m2mrzi7z", "eQYRKk8IX3W", "xssRoNtHchJ", "U9ek9M0M6S-", "RynUY4pyKWV", "-HuGJfrOnA", "agsJuVFU_cz", "zYacsOVctZQ", "ztB1YW3s2XE", "MIgl4PBXcNQ", "nips_2022_-N-OYK2cY7", "nips_2022_-N-OYK2cY7", "nips_2022_-N-OYK2cY7" ]
nips_2022_RjS0j6tsSrf
Diagonal State Spaces are as Effective as Structured State Spaces
Modeling long range dependencies in sequential data is a fundamental step towards attaining human-level performance in many modalities such as text, vision, audio and video. While attention-based models are a popular and effective choice in modeling short-range interactions, their performance on tasks requiring long range reasoning has been largely inadequate. In an exciting result, Gu et al. (ICLR 2022) proposed the $\textit{Structured State Space}$ (S4) architecture, delivering large gains over state-of-the-art models on several long-range tasks across various modalities. The core proposition of S4 is the parameterization of state matrices via a diagonal plus low rank structure, allowing efficient computation. In this work, we show that one can match the performance of S4 even without the low rank correction and thus assuming the state matrices to be diagonal. Our $\textit{Diagonal State Space}$ (DSS) model matches the performance of S4 on Long Range Arena tasks and on speech classification on the Speech Commands dataset, while being conceptually simpler and straightforward to implement.
Accept
This paper proposes a simpler alternative to S4 that achieves comparable performance. The method makes sense and the experiments are thorough. All reviewers agreed this is a good paper. I recommend acceptance.
train
[ "TIwx0cIIzm", "FRvkviIAAg", "NmFaLWbejf", "L_GiskA0Iq", "KF0tXNcEKz", "slkqtyzk2MI", "QPXWD0WKCDT", "dyBxYbIq243", "LluHel8dF1b", "Gwo-7YUrZWz" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for all the helpful suggestions and for increasing the score!!\n\nAs per your suggestion we have added the benchmaking results in Section A.4 in the attached updated version.", " Thanks for benchmarking the speed-up. I believe it would be a useful addition to the paper from an empirical pers...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "FRvkviIAAg", "NmFaLWbejf", "L_GiskA0Iq", "QPXWD0WKCDT", "Gwo-7YUrZWz", "LluHel8dF1b", "dyBxYbIq243", "nips_2022_RjS0j6tsSrf", "nips_2022_RjS0j6tsSrf", "nips_2022_RjS0j6tsSrf" ]
nips_2022_AQd4ugzALQ1
MetaTeacher: Coordinating Multi-Model Domain Adaptation for Medical Image Classification
In medical image analysis, we often need to build an image recognition system for a target scenario with access to a small amount of labeled data and abundant unlabeled data, as well as multiple related models pretrained on different source scenarios. This presents the combined challenges of multi-source-free domain adaptation and semi-supervised learning simultaneously. However, both problems are typically studied independently in the literature, and how to effectively combine existing methods is non-trivial in design. In this work, we introduce a novel MetaTeacher framework with three key components: (1) A learnable coordinating scheme for adaptive domain adaptation of individual source models, (2) A mutual feedback mechanism between the target model and source models for more coherent learning, and (3) A semi-supervised bilevel optimization algorithm for consistently organizing the adaptation of source models and the learning of the target model. It aims to leverage the knowledge of source models adaptively whilst maximizing their complementary benefits collectively to counter the challenge of limited supervision. Extensive experiments on five chest x-ray image datasets show that our method clearly outperforms all the state-of-the-art alternatives. The code is available at https://github.com/wongzbb/metateacher.
Accept
The paper proposes a model for a multiple-teacher, single-student setting for medical image classification. The reviewers were split, with two reviewers leaning towards accept and one leaning towards reject. The main criticism of the negative reviewer is that the proposed model only slightly outperforms the state of the art. Given the extensive experimental evaluation and the fact that the proposed method consistently outperforms the state of the art, the improvement should be statistically significant. The negative reviewer has acknowledged the improvement in the discussion phase. A second criticism was the lack of significance of the proposed learning setting. As the reviewers find this setting novel and due to the relevance in the medical domain, I vote to accept the paper.
test
[ "kPfZIknNYrq", "x9NAKpwdCOV", "mLb2CtAs_La", "LEXqXWKJcdi", "8FuxC7w2DD", "IPoK4F3J6j2", "Um71NsXGgjb", "LImTE0aCnT", "MXKag0Lf69V", "J37mpqVRwIs", "ZixDxZLLhfT" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your kind answer. We are glad that our responses could clarify all your concerns. In case there are no further questions, we would appreciate if you could improve your score.", " Really appreciate the reviewer for further feedback and interaction.\n\nFirst, we would like to stress that our propose...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "LEXqXWKJcdi", "mLb2CtAs_La", "IPoK4F3J6j2", "Um71NsXGgjb", "nips_2022_AQd4ugzALQ1", "J37mpqVRwIs", "MXKag0Lf69V", "ZixDxZLLhfT", "nips_2022_AQd4ugzALQ1", "nips_2022_AQd4ugzALQ1", "nips_2022_AQd4ugzALQ1" ]
nips_2022_pDUYkwrx__w
Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss
A central issue in machine learning is how to train models on sensitive user data. Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Stochastic Gradient Langevin Dynamics). However, foundational theoretical questions about this algorithm's privacy loss remain open---even in the seemingly simple setting of smooth convex losses over a bounded domain. Our main result resolves these questions: for a large range of parameters, we characterize the differential privacy up to a constant. This result reveals that all previous analyses for this setting have the wrong qualitative behavior. Specifically, while previous privacy analyses increase ad infinitum in the number of iterations, we show that after a small burn-in period, running SGD longer leaks no further privacy. Our analysis departs from previous approaches based on fast mixing, instead using techniques based on optimal transport (namely, Privacy Amplification by Iteration) and the Sampled Gaussian Mechanism (namely, Privacy Amplification by Sampling). Our techniques readily extend to other settings.
Accept
The paper solves a longstanding open problem of showing bounded privacy loss for releasing the last iterate of noisy SGD for convex problems. This improves upon previous work by going from GD to SGD and from strongly convex to convex. All reviewers agree this is a strong paper and should be accepted.
train
[ "wsPVY9dslk", "sRMdptcMyeJ", "FzrcHvOurZ", "Ok-why5_Qv", "-RT0GxhvVyw", "hdUjnppFgtk", "dlaSSxgZNoS", "hGBYIi9ctD" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are very grateful to the reviewer for the kind words that our result is very interesting and promising for private optimization. \n\n----> Response to concerns:\n\n1. The reviewer is correct that theoretically, minimax utility bounds for DP ERM and DP SCO are obtained already with existing analyses, i.e. where...
[ -1, -1, -1, -1, 6, 7, 6, 7 ]
[ -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "-RT0GxhvVyw", "hdUjnppFgtk", "dlaSSxgZNoS", "hGBYIi9ctD", "nips_2022_pDUYkwrx__w", "nips_2022_pDUYkwrx__w", "nips_2022_pDUYkwrx__w", "nips_2022_pDUYkwrx__w" ]
nips_2022_uloenYmLCAo
Block-Recurrent Transformers
We introduce the Block-Recurrent Transformer, which applies a transformer layer in a recurrent fashion along a sequence, and has linear complexity with respect to sequence length. Our recurrent cell operates on blocks of tokens rather than single tokens during training, and leverages parallel computation within a block in order to make efficient use of accelerator hardware. The cell itself is strikingly simple. It is merely a transformer layer: it uses self-attention and cross-attention to efficiently compute a recurrent function over a large set of state vectors and tokens. Our design was inspired in part by LSTM cells, and it uses LSTM-style gates, but it scales the typical LSTM cell up by several orders of magnitude. Our implementation of recurrence has the same cost in both computation time and parameter count as a conventional transformer layer, but offers dramatically improved perplexity in language modeling tasks over very long sequences. Our model out-performs a long-range Transformer XL baseline by a wide margin, while running twice as fast. We demonstrate its effectiveness on PG19 (books), arXiv papers, and GitHub source code. Our code has been released as open source.
Accept
This paper describes a modification to the transformer architecture to use block-recurrence to more accurately model very long sequences, borrowing some ideas from the LSTM. The idea is fairly simple to implement, as it doesn't require much code over a traditional transformer, and results seem good, if not overwhelmingly so. All reviewers voted to accept this paper and I agree. It's a fairly simple idea with fairly good results and adds to the body of knowledge regarding how to model very long sequences.
train
[ "7tM5c5YXm6", "i4hp1rRM9zD", "tImudqeIK5Q", "Y_6Cr77HBd", "P66WrRjyrS", "GvxS8tIeFk", "YNs2iRB3zQp", "nIBd7Q50bRe", "kQfGdZCFXNW", "tpQVp9-KSpl", "36mOpe5uDzf", "S3JysCuQnds", "7WOw4weawgr" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nMinor update to paper (64k Memorizing Transformer numbers, wording changes). \nNew sub-sections added to supplementary material, Appendix A, to address reviewer concerns.\n", " \nThanks for taking a second look! We're glad our clarifications helped.\n\n> Some of your answers did not result in changes in the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "Y_6Cr77HBd", "tImudqeIK5Q", "Y_6Cr77HBd", "P66WrRjyrS", "GvxS8tIeFk", "YNs2iRB3zQp", "nIBd7Q50bRe", "7WOw4weawgr", "S3JysCuQnds", "36mOpe5uDzf", "nips_2022_uloenYmLCAo", "nips_2022_uloenYmLCAo", "nips_2022_uloenYmLCAo" ]
nips_2022_AOSIbSmQJr
Markovian Interference in Experiments
We consider experiments in dynamical systems where interventions on some experimental units impact other units through a limiting constraint (such as a limited supply of products). Despite outsize practical importance, the best estimators for this `Markovian' interference problem are largely heuristic in nature, and their bias is not well understood. We formalize the problem of inference in such experiments as one of policy evaluation. Off-policy estimators, while unbiased, apparently incur a large penalty in variance relative to state-of-the-art heuristics. We introduce an on-policy estimator: the Differences-In-Q's (DQ) estimator. We show that the DQ estimator can in general have exponentially smaller variance than off-policy evaluation. At the same time, its bias is second order in the impact of the intervention. This yields a striking bias-variance tradeoff so that the DQ estimator effectively dominates state-of-the-art alternatives. From a theoretical perspective, we introduce three separate novel techniques that are of independent interest in the theory of Reinforcement Learning (RL). Our empirical evaluation includes a set of experiments on a city-scale ride-hailing simulator.
Accept
Please add proof outlines to the main body in the final version. Also add a discussion of Assumption 1 and provide more insight into the proposed method.
val
[ "gPaWOoUMPs", "dwnsmw30H5", "-4CdLmUCz3p", "Msz4ma75qx", "BA85mM_tu7", "7B6xaKFZqqL", "V65q1I1zUeB", "5SiCl3HYKd", "AS5aAErBlZX", "8v0JaIpi6s", "KY3vPI1zY1" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your clarification. Adding this discussion about the assumption and more intuition for the proposed method in the main paper can be beneficial. I have no other questions, and I have updated the score accordingly.", " Adding insights like that given in your proof outline will further strengthen the...
[ -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3 ]
[ "BA85mM_tu7", "V65q1I1zUeB", "7B6xaKFZqqL", "5SiCl3HYKd", "8v0JaIpi6s", "KY3vPI1zY1", "AS5aAErBlZX", "nips_2022_AOSIbSmQJr", "nips_2022_AOSIbSmQJr", "nips_2022_AOSIbSmQJr", "nips_2022_AOSIbSmQJr" ]
nips_2022_rZalM6vZ2J
DP-PCA: Statistically Optimal and Differentially Private PCA
We study the canonical statistical task of computing the principal component from i.i.d.~data under differential privacy. Although extensively studied in literature, existing solutions fall short on two key aspects: ($i$) even for Gaussian data, existing private algorithms require the number of samples $n$ to scale super-linearly with $d$, i.e., $n=\Omega(d^{3/2})$, to obtain non-trivial results while non-private PCA requires only $n=O(d)$, and ($ii$) existing techniques suffer from a large error even when the variance in each data point is small. We propose DP-PCA method that uses a single-pass minibatch gradient descent style algorithm to overcome the above limitations. For sub-Gaussian data, we provide nearly optimal statistical error rates even for $n=O(d \log d)$.
Accept
The paper studies PCA under the constraint of differential privacy, and presents an improved algorithm based on minibatch SGD, where each step performs a private top-eigenvalue estimation and a private mean estimation. The reviewers agree that the new algorithm is interesting and that the new results are important.
train
[ "EYmvoyJqi4w", "wA1BdJ6ksp_", "VpUxRZemalG", "qoZL8xfxMLM", "eaUjqpJlSsH", "RldcRcKcgyS", "VEqz6QKIykl", "HWBCtvlj7vo", "_r08z6kju59" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The current revise solves my questions. I will change my rating to 'accept'.", " - Re: Major weakness:\n - We thank the reviewer for their meticulous checking of the proof. We acknowledge that there were a few typos in the proof which we have now fixed. A corrected proof of ***the exactly the same statement o...
[ -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 2, 5, 3, 3 ]
[ "wA1BdJ6ksp_", "RldcRcKcgyS", "VEqz6QKIykl", "HWBCtvlj7vo", "_r08z6kju59", "nips_2022_rZalM6vZ2J", "nips_2022_rZalM6vZ2J", "nips_2022_rZalM6vZ2J", "nips_2022_rZalM6vZ2J" ]
nips_2022_a3ooPbW0Jzh
Differentially Private Learning with Margin Guarantees
We present a series of new differentially private (DP) algorithms with dimension-independent margin guarantees. For the family of linear hypotheses, we give a pure DP learning algorithm that benefits from relative deviation margin guarantees, as well as an efficient DP learning algorithm with margin guarantees. We also present a new efficient DP learning algorithm with margin guarantees for kernel-based hypotheses with shift-invariant kernels, such as Gaussian kernels, and point out how our results can be extended to other kernels using oblivious sketching techniques. We further give a pure DP learning algorithm for a family of feed-forward neural networks for which we prove margin guarantees that are independent of the input dimension. Additionally, we describe a general label DP learning algorithm, which benefits from relative deviation margin bounds and is applicable to a broad family of hypothesis sets, including that of neural networks. Finally, we show how our DP learning algorithms can be augmented in a general way to include model selection, to select the best confidence margin parameter.
Accept
After the internal discussion, all reviewers agreed that the paper should be accepted. Please take into account the reviewers' comments while preparing the camera-ready version of the paper.
train
[ "9NM6G1OnHy", "tUFG5kVi4F3", "hkGplcPVb7", "EYyz0gD-35", "cEd9xL_o2pt", "spzIy5XVtLO", "w2m6G-OMaP", "M7eRxNAyGmR" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the review. Please, see our response to your comments and questions below.\n\n- “There is no experiments to enhance the theoretical conclusions. It will be better even if the authors can run toy experiments on some small datasets.”\n\n\nOur contributions are theoretical and we believe that the fund...
[ -1, -1, -1, -1, 7, 7, 6, 5 ]
[ -1, -1, -1, -1, 2, 2, 4, 3 ]
[ "M7eRxNAyGmR", "w2m6G-OMaP", "spzIy5XVtLO", "cEd9xL_o2pt", "nips_2022_a3ooPbW0Jzh", "nips_2022_a3ooPbW0Jzh", "nips_2022_a3ooPbW0Jzh", "nips_2022_a3ooPbW0Jzh" ]
nips_2022_SbHxPRHPc2u
Oracle-Efficient Online Learning for Smoothed Adversaries
We study the design of computationally efficient online learning algorithms under smoothed analysis. In this setting, at every step, an adversary generates a sample from an adaptively chosen distribution whose density is upper bounded by $1/\sigma$ times the uniform density. Given access to an offline optimization (ERM) oracle, we give the first computationally efficient online algorithms whose sublinear regret depends only on the pseudo/VC dimension $d$ of the class and the smoothness parameter $\sigma$. In particular, we achieve \emph{oracle-efficient} regret bounds of $ O ( \sqrt{T d\sigma^{-1}} ) $ for learning real-valued functions and $ O ( \sqrt{T d\sigma^{-\frac{1}{2}} } )$ for learning binary-valued functions. Our results establish that online learning is computationally as easy as offline learning, under the smoothed analysis framework. This contrasts the computational separation between online learning with worst-case adversaries and offline learning established by [HK16]. Our algorithms also achieve improved bounds for some settings with binary-valued functions and worst-case adversaries. These include an oracle-efficient algorithm with $O ( \sqrt{T(d |\mathcal{X}|)^{1/2} })$ regret that refines the earlier $O ( \sqrt{T|\mathcal{X}|})$ bound of [DS16] for finite domains, and an oracle-efficient algorithm with $O(T^{3/4} d^{1/2})$ regret for the transductive setting.
Accept
As summarized very well in the reviews, this is a well-written paper that makes a solid and elegant contribution to the recently active line of work on smoothed online learning. The authors have successfully addressed the main concerns brought up in the discussion. I genuinely agree the paper should be accepted. As a side note to the authors: I honestly found your reaction to Reviewer gcMM’s comments rather aggressive and incongruous. Disagreements naturally arise in a discussion and should not be automatically considered as an attempt to “greatly harm the review process and the community at large”.
train
[ "P6He2jHprMG", "gz3N1lFcTkq", "ignCG8v_m6l", "p7s6E9aojN", "BcuClpFEO3u0", "3WhmVtaEchW", "XujFwR782kF", "wEVblKHIr1B", "_x8U29614BM", "2nGRVS1kuXx", "1MbunLhFtS", "skrB4rQe-e", "FpFk2N60bgG", "ULh1pjjqov0", "hKKHir1J_zD", "yfn2bHxH4z", "5fWU8BiM6b9", "_VOOMWKtTd", "mOzoo8DfMVU",...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " Thank you for your detailed response on the connection with classical settings ! I think explaining how your results recover/improve regret bounds in classic settings is very important: it convinces audiences that the smoothed analysis setting is legit, and I believe it's worth using a separate sub-section discus...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 9, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 3 ]
[ "gz3N1lFcTkq", "3WhmVtaEchW", "3WhmVtaEchW", "FpFk2N60bgG", "skrB4rQe-e", "XujFwR782kF", "1vjBDQ3vIP", "1vjBDQ3vIP", "1vjBDQ3vIP", "1vjBDQ3vIP", "mOzoo8DfMVU", "_VOOMWKtTd", "5fWU8BiM6b9", "nips_2022_SbHxPRHPc2u", "nips_2022_SbHxPRHPc2u", "nips_2022_SbHxPRHPc2u", "nips_2022_SbHxPRHPc...
nips_2022_qZUHvvtbzy
Systematic improvement of neural network quantum states using Lanczos
The quantum many-body problem lies at the center of the most important open challenges in condensed matter, quantum chemistry, atomic, nuclear, and high-energy physics. While quantum Monte Carlo, when applicable, remains the most powerful numerical technique capable of treating dozens or hundreds of degrees of freedom with high accuracy, it is restricted to models that are not afflicted by the infamous sign problem. A powerful alternative that has emerged in recent years is the use of neural networks as variational estimators for quantum states. In this work, we propose a symmetry-projected variational solution in the form of linear combinations of simple restricted Boltzmann machines. This construction allows one to explore states outside of the original variational manifold and increase the representation power with moderate computational effort. Besides allowing one to restore spatial symmetries, an expansion in terms of Krylov states using a Lanczos recursion offers a solution that can further improve the quantum state accuracy. We illustrate these ideas with an application to the Heisenberg $J_1-J_2$ model on the square lattice, a paradigmatic problem under debate in condensed matter physics, and achieve state-of-the-art accuracy in the representation of the ground state.
Accept
This submission proposes to apply Lanczos step improvements over a shallow neural quantum state based on restricted Boltzmann machines (RBM). The authors report empirical results that are highly competitive with the state of the art on one challenging benchmark: the J1-J2 model. The reviews are mixed for this contribution: while the provided empirical results look promising, a clear and comprehensive comparison with SOTA results was lacking in the initial submission, which was later improved during the interaction between the authors and the reviewers. We think these suggested changes should be included in the revision. Most of the reviewers see the novelty of using Lanczos step improvements and feel this technique could be generally applicable. However, the authors should also explicitly discuss the limitations of this technique in the revision, especially the size consistency of the approach and the fact that the returns will diminish on larger lattices.
train
[ "zHb3ePrRKgh", "v6z0-I7UhkM", "Q536r-Byego", "yi-yGJaN5PC", "IMFcS_QeO2t", "V_3rjcJFXNe", "CmEWNTjSg5k", "WRMzXWWcnwk", "roPhHarGQ0f", "AGhB3pdfPs5", "1dv4nm4lbFv", "YcXEikoiORm", "c1JqXXBGtCm", "sSG5VDTRell", "5EufKSw693", "GyurOK0ZrjP", "jgbrbPQ889k", "E1g6mE7PZUD", "6bMujXr9d_...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for bringing this data to our attention. It again shows the superiority of our method on both square and triangular lattice. But due to the approaching deadline, we don't have enough time to work on kagome and honeycomb lattices. We hope to encourage other researchers to use similar strategies on other qua...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 5 ]
[ "IMFcS_QeO2t", "yi-yGJaN5PC", "roPhHarGQ0f", "5EufKSw693", "sSG5VDTRell", "nips_2022_qZUHvvtbzy", "GyurOK0ZrjP", "GyurOK0ZrjP", "GyurOK0ZrjP", "E1g6mE7PZUD", "E1g6mE7PZUD", "E1g6mE7PZUD", "jgbrbPQ889k", "jgbrbPQ889k", "6bMujXr9d_n", "nips_2022_qZUHvvtbzy", "nips_2022_qZUHvvtbzy", "...
nips_2022_HjwK-Tc_Bc
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
When answering a question, humans utilize the information available across different modalities to synthesize a consistent and complete chain of thought (CoT). This process is normally a black box in the case of deep learning models like large-scale language models. Recently, science question benchmarks have been used to diagnose the multi-hop reasoning ability and interpretability of an AI system. However, existing datasets fail to provide annotations for the answers, or are restricted to the textual-only modality, small scales, and limited domain diversity. To this end, we present Science Question Answering (ScienceQA), a new benchmark that consists of ~21k multimodal multiple choice questions with a diverse set of science topics and annotations of their answers with corresponding lectures and explanations. We further design language models to learn to generate lectures and explanations as the chain of thought (CoT) to mimic the multi-hop reasoning process when answering ScienceQA questions. ScienceQA demonstrates the utility of CoT in language models, as CoT improves the question answering performance by 1.20% in few-shot GPT-3 and 3.99% in fine-tuned UnifiedQA. We also explore the upper bound for models to leverage explanations by feeding those in the input; we observe that it improves the few-shot performance of GPT-3 by 18.96%. Our analysis further shows that language models, similar to humans, benefit from explanations to learn from fewer data and achieve the same performance with just 40% of the data. The data and code are available at https://scienceqa.github.io.
Accept
The paper introduces a large new multimodal dataset for science question answering, and thoroughly evaluates a range of models, including a version of chain-of-thought. Reviewers agree that the paper is generally solid and well written, and the dataset is potentially useful. The major concerns are around the technical novelty of the contributions, which are somewhat incremental extensions to chain of thought (e.g., with fine-tuned and multimodal models). Some reviewers are also confused about why generating the answer first gives better chain-of-thought results, because this appears inconsistent with the step-by-step reasoning explanation of chain of thought, and this point could be better explained. Overall I think the submission is borderline, leaning towards acceptance.
train
[ "54ZdVR6yTRU", "q3skGccT4Kt", "j_ea9h4KkgSh", "kLXCh4yjFt", "PBf92acWI7K", "TOEZOy0L7s", "d9W1k42Ufiv", "plB92G82cJz", "FKEKiUXwSAg", "z8Jbgx0jTnj", "NvUg0ERHlkv", "AFO-egfAKG", "d16dvdCKVFt", "AC-siDjbjuQ", "m_Oz40h5RW", "RCR_IgwBocc", "lTNCLQCUTC0", "I6PJn9oScNg", "j0S21Fs_2QN"...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for your great efforts and time!\n\nIt is a great encouragement to our work, and we are glad to see that we have addressed most of your concerns. And many thanks for your feedback to make our paper stronger and more solid!", " Thank you for the extensive and thoughtful reply and additional res...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3, 4 ]
[ "q3skGccT4Kt", "PBf92acWI7K", "kLXCh4yjFt", "NvUg0ERHlkv", "j0S21Fs_2QN", "j0S21Fs_2QN", "j0S21Fs_2QN", "JTuc_wW0p4S", "I6PJn9oScNg", "I6PJn9oScNg", "I6PJn9oScNg", "lTNCLQCUTC0", "lTNCLQCUTC0", "RCR_IgwBocc", "nips_2022_HjwK-Tc_Bc", "nips_2022_HjwK-Tc_Bc", "nips_2022_HjwK-Tc_Bc", "...
nips_2022_rBCvMG-JsPd
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input. ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm where a small set of parameters are trained to enable a model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs. Along the way, we introduce a new PEFT method called (IA)^3 that scales activations by learned vectors, attaining stronger performance while only introducing a relatively tiny amount of new parameters. We also propose a simple recipe based on the T0 model called T-Few that can be applied to new tasks without task-specific tuning or modifications. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state-of-the-art by 6% absolute. All of the code used in our experiments will be publicly available.
Accept
This paper demonstrates that Few-shot Parameter-efficient Fine-tuning (PEFT) is more accurate and dramatically less computationally expensive than in-context learning (ICL), and introduces a new PEFT method that varies the activity level depending on the learned vector and achieves high performance with only a few parameters. In addition, this paper proposes a simple way to apply it to the T0 model and shows that the proposed fine-tuning method performs better than the baselines. This paper is well written. The proposed method provides a simple and practical recipe for few-shot learning and shows strong performance on popular and challenging benchmarks. All three reviewers had similar positive comments on this paper. Thus the meta-reviewer recommends it for acceptance.
train
[ "0cklhcyPO7A", "cKx9nkSQ9iQ", "qax_IGgtP0s", "lD6e6Kqq7tP", "YOj_EjKI5u", "YfzRP0Jhtfn", "Iyirer7BypB", "Hty0BCjtHa7", "jrwvckediRi" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks very much to all of the reviewers for their constructive suggestions. We have responded to all reviewer comments and questions below and have updated our draft accordingly. Changes made include:\n\n- Ran experiments on all the PEFT methods ablating the additional losses and added them to appendix D. SAID ...
[ -1, -1, -1, -1, -1, 7, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "nips_2022_rBCvMG-JsPd", "jrwvckediRi", "Hty0BCjtHa7", "Iyirer7BypB", "YfzRP0Jhtfn", "nips_2022_rBCvMG-JsPd", "nips_2022_rBCvMG-JsPd", "nips_2022_rBCvMG-JsPd", "nips_2022_rBCvMG-JsPd" ]
nips_2022_3LMI8CHDb0g
Reproducibility in Optimization: Theoretical Framework and Limits
We initiate a formal study of reproducibility in optimization. We define a quantitative measure of reproducibility of optimization procedures in the face of noisy or error-prone operations such as inexact or stochastic gradient computations or inexact initialization. We then analyze several convex optimization settings of interest such as smooth, non-smooth, and strongly-convex objective functions and establish tight bounds on the limits of reproducibility in each setting. Our analysis reveals a fundamental trade-off between computation and reproducibility: more computation is necessary (and sufficient) for better reproducibility.
Accept
The paper studies how the noise inherent in optimization affects “reproducibility,” which the authors measure by the Euclidean distance between two independent runs of the algorithm. The results of the paper reveal fundamental tradeoffs between computation (in terms of gradient oracle complexity) and the proposed notion of reproducibility. The reviewers have reached a clear consensus toward accepting this paper, citing its novelty and technical depth. I concur, and recommend acceptance as a spotlight presentation.
train
[ "RySvrFNvX4o", "pETK5pjI2Qv", "Zuk2LfL4Imw", "oDeuqcTdvoV1", "B-ZGNkFR8VS", "FIg92pwf0QG", "YWl0KASRF4q", "DFzycLzkCN", "ab1kjBP2hZ1" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for raising the score, and for giving us very useful feedback for improving the presentation of the paper (specifically, the discussion about $\\delta$, and the discussion about the role of $||x_f - x^*||^2$ in various settings). We’ll reflect your comments in our final version.", " Thank you for the ...
[ -1, -1, -1, -1, -1, -1, 6, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "pETK5pjI2Qv", "oDeuqcTdvoV1", "B-ZGNkFR8VS", "ab1kjBP2hZ1", "DFzycLzkCN", "YWl0KASRF4q", "nips_2022_3LMI8CHDb0g", "nips_2022_3LMI8CHDb0g", "nips_2022_3LMI8CHDb0g" ]
nips_2022_RNZ8JOmNaV4
Unsupervised Image-to-Image Translation with Density Changing Regularization
Unpaired image-to-image translation aims to translate an input image to another domain such that the output image looks like an image from another domain while important semantic information is preserved. Inferring the optimal mapping with unpaired data is impossible without making any assumptions. In this paper, we make a density changing assumption where image patches of high probability density should be mapped to patches of high probability density in another domain. Then we propose an efficient way to enforce this assumption: we train the flows as density estimators and penalize the variance of density changes. Despite its simplicity, our method achieves the best performance on benchmark datasets and needs only $56-86\%$ of the training time of the existing state-of-the-art method. The training and evaluation code are available at $$\url{https://github.com/Mid-Push/Decent}.$$
Accept
This paper addresses the density-mismatch problem in image-to-image translation by introducing a patch-wise variance constraint regularization. The approach is simple and effective, according to the reviewers. There were some general concerns about the validity of the assumption, but the authors appear to have sufficiently addressed those concerns. I would encourage the authors to make it clear that this is an inductive bias that they're relying on to make their method work: it's a valuable contribution, but I think it's worth being extra clear that this is a reasonable assumption they built into their model and it might not be the best one. I therefore recommend acceptance of this paper to NeurIPS. There was one negative review from zLZj that had some useful content, but the authors seemed to address those concerns fairly well. The reviewer showed skepticism towards the method that wasn't entirely clear to me and wanted to look at the code themselves but never followed through. I wasn't terribly convinced by the score being so low after the discussion and the author rebuttals, and I don't see evidence that the reviewer looked at other reviews or the discussion, so I believe the score does not accurately represent the paper's quality and I will treat the score (the discussion was good) as an outlier. Both wXXZ and o4oL did well as far as discussion and engagement.
train
[ "IMeUn7C1fjh", "800cDC3LSnG", "kp5SnwIlqLO", "9RRf8OzKfNx", "U0AxNPcPvO", "nLWMCrKxSL7", "CaStmvkeXHw", "b_X9-Sn88fO", "rlD7GGaewHD", "CoOhu0VGvQo", "JJMMXmuiXDn", "Vk5bZBehgYX", "qoGAoZhH7Yd", "UjK5pXvv39", "ZHcUB4MHQpD", "dsvHdbAIGI9" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi, we tried to download the checkpoints of selfie2anime of AttentionGAN to add the results in Table 2, but unfortunately we encounter the size mismatch problem, which is the same as https://github.com/Ha0Tang/AttentionGAN/issues/20 \n\nWe have included the results of horse2zebra in table 1. But the pretrained mo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 4 ]
[ "9RRf8OzKfNx", "9RRf8OzKfNx", "U0AxNPcPvO", "CoOhu0VGvQo", "JJMMXmuiXDn", "UjK5pXvv39", "ZHcUB4MHQpD", "nips_2022_RNZ8JOmNaV4", "dsvHdbAIGI9", "ZHcUB4MHQpD", "UjK5pXvv39", "qoGAoZhH7Yd", "nips_2022_RNZ8JOmNaV4", "nips_2022_RNZ8JOmNaV4", "nips_2022_RNZ8JOmNaV4", "nips_2022_RNZ8JOmNaV4" ...
nips_2022_aXf9V5Labm
Network change point localisation under local differential privacy
Network data are ubiquitous in our daily life, containing rich but often sensitive information. In this paper, we expand the current static analysis of privatised networks to a dynamic framework by considering a sequence of networks with potential change points. We investigate the fundamental limits in consistently localising change points under both node and edge privacy constraints, demonstrating interesting phase transition in terms of the signal-to-noise ratio condition, accompanied by polynomial-time algorithms. The private signal-to-noise ratio conditions quantify the costs of the privacy for change point localisation problems and exhibit a different scaling in the sparsity parameter compared to the non-private counterparts. Our algorithms are shown to be optimal under the edge LDP constraint up to log factors. Under node LDP constraint, a gap exists between our upper bound and lower bound and we leave it as an interesting open problem, echoing the challenges in high-dimensional statistical inference under LDP constraints.
Accept
This paper considers the important problem of change point detection in networks under local differential privacy. The paper provides bounds for the problem under both node and edge privacy constraints. While the bounds do not match in all cases, the results are timely and will be of interest to many researchers.
train
[ "c1pE1HQ2aB0", "zmOcyMkRRn", "wDEc_QZttEX", "2XUT2z39O6mB", "VCIBL4GbXDu", "Ls1YITo1ECa", "0TLJBwgDntU", "_O_8NHs8Y6" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your appreciation and constructive comments. We reply to all your comments and questions point-by-point in the following. We have submitted revised main text file and supplementary materials.\n\n**On the network models**\n\nThanks for giving us the opportunity to further elaborate on our m...
[ -1, -1, -1, -1, 6, 7, 6, 5 ]
[ -1, -1, -1, -1, 2, 5, 3, 2 ]
[ "_O_8NHs8Y6", "0TLJBwgDntU", "Ls1YITo1ECa", "VCIBL4GbXDu", "nips_2022_aXf9V5Labm", "nips_2022_aXf9V5Labm", "nips_2022_aXf9V5Labm", "nips_2022_aXf9V5Labm" ]
nips_2022_qfC1uDXfDJo
Annihilation of Spurious Minima in Two-Layer ReLU Networks
We study the optimization problem associated with fitting two-layer ReLU neural networks with respect to the squared loss, where labels are generated by a target network. Use is made of the rich symmetry structure to develop a novel set of tools for studying the mechanism by which over-parameterization annihilates spurious minima. Sharp analytic estimates are obtained for the loss and the Hessian spectrum at different minima, and it is shown that adding neurons can turn symmetric spurious minima into saddles through a local mechanism that does not generate new spurious minima; minima of smaller symmetry require more neurons. Using Cauchy's interlacing theorem, we prove the existence of descent directions in certain subspaces arising from the symmetry structure of the loss function. This analytic approach uses techniques, new to the field, from algebraic geometry, representation theory and symmetry breaking, and confirms rigorously the effectiveness of over-parameterization in making the associated loss landscape accessible to gradient-based methods. For a fixed number of neurons and inputs, the spectral results remain true under symmetry breaking perturbation of the target.
Accept
Thank you for your submission to NeurIPS. This paper is on the structure of critical points and local minima in over-parameterized two-layer ReLU neural networks. The reviewers and I, after the author response, are in agreement that there are interesting contributions in this work. However, the reviewers noted significant issues with the presentation (see below). Four knowledgeable reviewers recommend accept/borderline accept, and I concur, in light of the contributions made. The reviewers also noted several weaknesses in the presentation: in particular, they noted that (1) the technical terms and notation used in this paper are hard to follow, which makes the paper not easily accessible, (2) the paper assumes knowledge from previous works and is not highly self-readable, (3) the theoretical assumptions are strong, and (4) it is not clear whether the results can give insights on practical neural networks. Moreover, the analysis techniques are non-standard (to most ML theorists, in my opinion). The reviewers most likely did not check the proofs, but feel confident about the mathematical rigor. The statement of Theorem 1 looks informal; it should either be made more rigorous or explicitly flagged as an informal version of a formal result that appears later. Please take into account the updated reviewer comments when preparing the final version to accommodate the requested changes.
train
[ "CCTR6v2SVH", "qSphkv8c1E8", "ijxoEutrwl", "39VAmhwLhpO", "SAI2Rbc8dgO8", "FZXi_9BDoEw", "5bwg4Z9GpNW", "AszRdcIRv_", "oUF3J8JUMQs", "F7AEQ9qasMT", "XICjZLZTIZ", "62F1HRKHbgt", "rKPihoqlTHf", "N6Jg8TxcbqH", "GwXQfc4OBjd" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed responses.\n\n&nbsp;\n\nStatistical methods, often from theoretical physics, have so far not been particularly successful in explaining phenomena seen in neural networks. For example, explaining how adding neurons can remove spurious minima---the main focus of our paper. Our approach is to...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3, 2 ]
[ "qSphkv8c1E8", "ijxoEutrwl", "SAI2Rbc8dgO8", "oUF3J8JUMQs", "5bwg4Z9GpNW", "AszRdcIRv_", "GwXQfc4OBjd", "N6Jg8TxcbqH", "rKPihoqlTHf", "62F1HRKHbgt", "nips_2022_qfC1uDXfDJo", "nips_2022_qfC1uDXfDJo", "nips_2022_qfC1uDXfDJo", "nips_2022_qfC1uDXfDJo", "nips_2022_qfC1uDXfDJo" ]
nips_2022_4lw1XqPvLzT
Will Bilevel Optimizers Benefit from Loops
Bilevel optimization has arisen as a powerful tool for solving a variety of machine learning problems. Two current popular bilevel optimizers AID-BiO and ITD-BiO naturally involve solving one or two sub-problems, and consequently, whether we solve these problems with loops (that take many iterations) or without loops (that take only a few iterations) can significantly affect the overall computational efficiency. Existing studies in the literature cover only some of those implementation choices, and the complexity bounds available are not refined enough to enable rigorous comparison among different implementations. In this paper, we first establish unified convergence analysis for both AID-BiO and ITD-BiO that are applicable to all implementation choices of loops. We then specialize our results to characterize the computational complexity for all implementations, which enable an explicit comparison among them. Our result indicates that for AID-BiO, the loop for estimating the optimal point of the inner function is beneficial for overall efficiency, although it causes higher complexity for each update step, and the loop for approximating the outer-level Hessian-inverse-vector product reduces the gradient complexity. For ITD-BiO, the two loops always coexist, and our convergence upper and lower bounds show that such loops are necessary to guarantee a vanishing convergence error, whereas the no-loop scheme suffers from an unavoidable non-vanishing convergence error. Our numerical experiments further corroborate our theoretical results.
Accept
There seems to be a clear consensus among reviewers about the paper being well written and addressing relevant research questions pertaining to bi-level optimization, with a particular focus on two popular bilevel optimizers, AID-BiO and ITD-BiO. Furthermore, Reviewer Kzzc stressed that this work provides several interesting convergence results, with a practical echo in applications such as meta-learning, NAS, some HO problems, etc. Kzzc also pointed out that the convergence for ITD has not been well studied, and that the results on upper and lower bounds in this work can be a good complement. rZSp joined Kzzc by pointing out that the authors first establish unified convergence analyses that are applicable to all implementation choices of loops for both AID-BiO and ITD-BiO. While at first critical of some aspects of the work (notation, references, tests), Reviewer rZSp increased their score following the constructive discussion with the authors. The most critical reviewer was KmZV, who questioned the originality of the contributions compared to [17]. While the authors did give a detailed response during the discussion, this reviewer did not react. Overall, based on the reviews and the discussion, I assess the paper to be a valuable addition to the existing literature and recommend it to be accepted to NeurIPS 2022.
train
[ "iHtw9VFg_wR", "imXUZi8KBv", "CEt44Hf2HRB", "qWoEcKvZ8Aw", "Z14wm-tyVO0", "hMp_5ISSXZ8", "I8xYLV-H6xk", "Yf8kCaCuTnT", "raMhF83h_9", "2vdQ0vzNGz9", "9Z1WMQ-k08Bq", "bEQN7jz5xT2T", "mnDaV64Qv-f", "mRmOUHVWT9_", "HlY7w-_oVsW", "ujHGtMk21IS", "syBQsRgfTjB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We first truly thank all reviewers’ insightful and constructive suggestions, which helped to significantly improve our paper! We also thank the area chair very much for great efforts into handling our paper during the review process! \n\nUnfortunately, we regret that we have not received any response from Reviewe...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 5 ]
[ "nips_2022_4lw1XqPvLzT", "Z14wm-tyVO0", "HlY7w-_oVsW", "hMp_5ISSXZ8", "9Z1WMQ-k08Bq", "mnDaV64Qv-f", "HlY7w-_oVsW", "HlY7w-_oVsW", "HlY7w-_oVsW", "HlY7w-_oVsW", "syBQsRgfTjB", "ujHGtMk21IS", "mRmOUHVWT9_", "nips_2022_4lw1XqPvLzT", "nips_2022_4lw1XqPvLzT", "nips_2022_4lw1XqPvLzT", "ni...
nips_2022_G4VOQPYxBsI
Algorithms that Approximate Data Removal: New Results and Limitations
We study the problem of deleting user data from machine learning models trained using empirical risk minimization (ERM). Our focus is on learning algorithms which return the empirical risk minimizer and approximate unlearning algorithms that comply with deletion requests that come in an online manner. Leveraging the infinitesimal jackknife, we develop an online unlearning algorithm that is both computationally and memory efficient. Unlike prior memory efficient unlearning algorithms, we target ERM trained models that minimize objectives with non-smooth regularizers, such as the commonly used $\ell_1$, elastic net, or nuclear norm penalties. We also provide generalization, deletion capacity, and unlearning guarantees that are consistent with state of the art methods. Across a variety of benchmark datasets, our algorithm empirically improves upon the runtime of prior methods while maintaining the same memory requirements and test accuracy. Finally, we open a new direction of inquiry by proving that all approximate unlearning algorithms introduced so far fail to unlearn in problem settings where common hyperparameter tuning methods, such as cross-validation, have been used to select models.
Accept
Most of the reviewers agree that this paper is well written and provides a notable improvement over prior works on algorithms for data deletion. Some initial concerns regarding the proper motivation for the problem setting have been largely addressed.
train
[ "hgJ2qr4ArkX", "sDwLHN4J2A", "kMm0yuCHAp", "1ZlaaQH2tcR", "hfELfj2hR6W", "oJ9A6PKyB6X", "ogFRO7lcNSh", "Z9SO0ejEQQ8", "ZMfRt3PEL2i", "IhW1bi7qGJi", "zdN1NGChHmC", "y3M4bR0kRVd", "_GuFkNdzjPx2", "le39Jajh1Pi", "6gPRjpakWh-", "SKIq5VqZ3fI", "J1ylj7DYNWd" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for raising their score but we remain slightly confused. \n\n- The reviewer wanted us to add plots about the hyper-parameters and we have done so. What correctness is there to check? Shouldn't it be as easy as checking that we have done so? \n\n- If the reviewer hasn't checked correctness an...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 2 ]
[ "sDwLHN4J2A", "kMm0yuCHAp", "ZMfRt3PEL2i", "hfELfj2hR6W", "oJ9A6PKyB6X", "ogFRO7lcNSh", "Z9SO0ejEQQ8", "IhW1bi7qGJi", "y3M4bR0kRVd", "zdN1NGChHmC", "J1ylj7DYNWd", "SKIq5VqZ3fI", "6gPRjpakWh-", "nips_2022_G4VOQPYxBsI", "nips_2022_G4VOQPYxBsI", "nips_2022_G4VOQPYxBsI", "nips_2022_G4VOQ...
nips_2022_Epk1RQUpOj0
Online Minimax Multiobjective Optimization: Multicalibeating and Other Applications
We introduce a simple but general online learning framework in which a learner plays against an adversary in a vector-valued game that changes every round. Even though the learner's objective is not convex-concave (and so the minimax theorem does not apply), we give a simple algorithm that can compete with the setting in which the adversary must announce their action first, with optimally diminishing regret. We demonstrate the power of our framework by using it to (re)derive optimal bounds and efficient algorithms across a variety of domains, ranging from multicalibration to a large set of no-regret algorithms, to a variant of Blackwell's approachability theorem for polytopes with fast convergence rates. As a new application, we show how to ``(multi)calibeat'' an arbitrary collection of forecasters --- achieving an exponentially improved dependence on the number of models we are competing against, compared to prior work.
Accept
There is general agreement that this paper should be accepted.
train
[ "5j8F_Ty1j4J", "wpggp_zGXT", "KWanmtmTkRW", "WC0Xl9A64ZJ7", "XE5TjJTyXb3", "qBeMzf07gxN", "oJnL6CkRtzI", "HbgvZ59lU_", "0oirysySE9G" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank authors for their detailed explanation. I tend to keep my score.", " Thank you!", " Thanks for the response. This is a good paper, and I will continue supporting acceptance. Congrats for the good work! ", " Thank you for your review! We agree with you that the manuscript is long. Howev...
[ -1, -1, -1, -1, -1, -1, 5, 8, 9 ]
[ -1, -1, -1, -1, -1, -1, 2, 3, 2 ]
[ "qBeMzf07gxN", "KWanmtmTkRW", "XE5TjJTyXb3", "0oirysySE9G", "HbgvZ59lU_", "oJnL6CkRtzI", "nips_2022_Epk1RQUpOj0", "nips_2022_Epk1RQUpOj0", "nips_2022_Epk1RQUpOj0" ]
nips_2022_dpYhDYjl4O
No-regret learning in games with noisy feedback: Faster rates and adaptivity via learning rate separation
We examine the problem of regret minimization when the learner is involved in a continuous game with other optimizing agents: in this case, if all players follow a no-regret algorithm, it is possible to achieve significantly lower regret relative to fully adversarial environments. We study this problem in the context of variationally stable games (a class of continuous games which includes all convex-concave and monotone games), and when the players only have access to noisy estimates of their individual payoff gradients. If the noise is additive, the game-theoretic and purely adversarial settings enjoy similar regret guarantees; however, if the noise is \emph{multiplicative}, we show that the learners can, in fact, achieve \emph{constant} regret. We achieve this faster rate via an optimistic gradient scheme with \emph{learning rate separation} \textendash\ that is, the method's extrapolation and update steps are tuned to different schedules, depending on the noise profile. Subsequently, to eliminate the need for delicate hyperparameter tuning, we propose a fully adaptive method that smoothly interpolates between worst- and best-case regret guarantees.
Accept
Reviewers are all positive and appreciate the theoretical contributions of the paper. Great work! Please make sure you address all the reviewers' comments and incorporate them (and any new experimental results, if applicable) in your camera-ready.
train
[ "4sMo6j_LdIZ", "i2Jz_6M291V7", "Qu4VD5MKAf", "malrQ5-GhzJ", "Iu87QUocjklV", "JjRHWX1LSsg", "BcpW4qYywMA", "nx0qsnpRpD", "qXQGVZvSxIH", "hm76YActITq", "bIyemohKNY1" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I see your point about why the crude approach I suggested would indeed not work. Thanks for clarifying.", " Dear Reviewer,\n\nWe would like to thank you again for your valuable feedback and positive evaluation. We truly appreciate it. Below we briefly reply to the two points mentioned in your response.\n\n1. We...
[ -1, -1, -1, -1, -1, -1, -1, 8, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "Iu87QUocjklV", "Qu4VD5MKAf", "malrQ5-GhzJ", "bIyemohKNY1", "hm76YActITq", "qXQGVZvSxIH", "nx0qsnpRpD", "nips_2022_dpYhDYjl4O", "nips_2022_dpYhDYjl4O", "nips_2022_dpYhDYjl4O", "nips_2022_dpYhDYjl4O" ]
nips_2022_B3TOg-YCtzo
Physics-Embedded Neural Networks: Graph Neural PDE Solvers with Mixed Boundary Conditions
Graph neural network (GNN) is a promising approach to learning and predicting physical phenomena described in boundary value problems, such as partial differential equations (PDEs) with boundary conditions. However, existing models inadequately treat boundary conditions essential for the reliable prediction of such problems. In addition, because of the locally connected nature of GNNs, it is difficult to accurately predict the state after a long time, where interaction between vertices tends to be global. We present our approach termed physics-embedded neural networks that considers boundary conditions and predicts the state after a long time using an implicit method. It is built based on an $\mathrm{E}(n)$-equivariant GNN, resulting in high generalization performance on various shapes. We demonstrate that our model learns flow phenomena in complex shapes and outperforms a well-optimized classical solver and a state-of-the-art machine learning model in speed-accuracy trade-off. Therefore, our model can be a useful standard for realizing reliable, fast, and accurate GNN-based PDE solvers. The code is available at https://github.com/yellowshippo/penn-neurips2022.
Accept
The paper proposes an E(n)-equivariant neural PDE solver that provably satisfies boundary conditions. The reviewers acknowledged the importance of the studied problem setting and generally appreciated the results. The paper is nicely written and provides both strong experimental results and theory. Indeed, a range of interesting experiments demonstrates the effectiveness of the proposed method. I want to thank the authors for their detailed responses that helped in answering some of the reviewers' questions. (The reviewers have provided detailed feedback in their reviews, and we strongly encourage the authors to incorporate this feedback when preparing a revised version of the paper.) In summary, this paper is a clear accept. Well done!
train
[ "aLFezcWsT0", "dwnAOJI2Hs", "6_n0D2g4vln", "b0rX6FQLtix", "FIIVAGvhBWn", "V7MlegFgvMS", "MGz3GYDoc0ga", "1BcbRmxZvm4", "AHIS1AZtCtV", "jEYVG5qatn6", "E1uV2dOlHZ", "MJJCczQQeS", "Sl252AbE7tk" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification. ", " We appreciate the feedback given by the reviewers. We have performed additional experiments and updated the manuscript. Here we summarize our main updates:\n\n* We changed the title to \"Physics-Embedded Neural Networks: Graph Neural PDE Solvers with Mixed Boundary Conditions\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 3 ]
[ "MGz3GYDoc0ga", "nips_2022_B3TOg-YCtzo", "Sl252AbE7tk", "FIIVAGvhBWn", "V7MlegFgvMS", "MJJCczQQeS", "E1uV2dOlHZ", "AHIS1AZtCtV", "jEYVG5qatn6", "nips_2022_B3TOg-YCtzo", "nips_2022_B3TOg-YCtzo", "nips_2022_B3TOg-YCtzo", "nips_2022_B3TOg-YCtzo" ]
nips_2022_qHs3qeaQjgl
On Scalable Testing of Samplers
In this paper we study the problem of testing constrained samplers over high-dimensional distributions with $(\varepsilon,\eta,\delta)$ guarantees. Samplers are increasingly used in a wide range of safety-critical ML applications, and hence the testing problem has gained importance. For $n$-dimensional distributions, the existing state-of-the-art algorithm, $\mathsf{Barbarik2}$, has a worst-case query complexity exponential in $n$ and hence is not ideal for use in practice. Our primary contribution is an exponentially faster algorithm, $\mathsf{Barbarik3}$, that has a query complexity linear in $n$ and hence can easily scale to larger instances. We demonstrate our claim by implementing our algorithm and then comparing it against $\mathsf{Barbarik2}$. Our experiments on the samplers $\mathsf{wUnigen3}$ and $\mathsf{wSTS}$ find that $\mathsf{Barbarik3}$ requires $10\times$ fewer samples for $\mathsf{wUnigen3}$ and $450\times$ fewer samples for $\mathsf{wSTS}$ as compared to $\mathsf{Barbarik2}$.
Accept
This submission studies (a somewhat non-standard version of) tolerant closeness testing of distributions over the n-dimensional hypercube. Instead of only iid samples, it is assumed that the tester is able to efficiently evaluate the probability mass at any point in the domain and to sample from the distribution conditioned on any subset of size two of the domain. The main result is an algorithm with query complexity scaling near-linearly in the dimension. Using only iid samples, one would need exponential dependence on dimension. The algorithm is evaluated on synthetic and real-world datasets. It is experimentally shown that their algorithm outperforms a previous baseline, which in the worst case has complexity scaling exponentially in the dimension. Overall, this is an interesting work that appears to meet the bar for acceptance.
val
[ "P6Gg109u8e", "7Eq06oK7Aqp", "QsKoioC6jU", "BQZsYGnm5qc", "dzX_xj7ZtVA", "dZ9cXwTadWX", "TR_9c_WBF2j", "qbGY4gT-fOj", "YIXtAFE6OG" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nSampling from real-life distributions is computationally intractable in general; hence samplers providing guarantees are slow in practice. Guaranteed sampling techniques, such as FPRASes and compilation-based techniques(WAPS), can offer DUAL access; hence, in our experiments, we use WAPS as a DUAL oracle to $P$...
[ -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "7Eq06oK7Aqp", "BQZsYGnm5qc", "dzX_xj7ZtVA", "YIXtAFE6OG", "TR_9c_WBF2j", "qbGY4gT-fOj", "nips_2022_qHs3qeaQjgl", "nips_2022_qHs3qeaQjgl", "nips_2022_qHs3qeaQjgl" ]
nips_2022_Dqcoao24G8s
A Best-of-Both-Worlds Algorithm for Bandits with Delayed Feedback
We present a modified tuning of the algorithm of Zimmert and Seldin [2020] for adversarial multiarmed bandits with delayed feedback, which in addition to the minimax optimal adversarial regret guarantee shown by Zimmert and Seldin [2020] simultaneously achieves a near-optimal regret guarantee in the stochastic setting with fixed delays. Specifically, the adversarial regret guarantee is $\mathcal{O}(\sqrt{TK} + \sqrt{dT\log K})$, where $T$ is the time horizon, $K$ is the number of arms, and $d$ is the fixed delay, whereas the stochastic regret guarantee is $\mathcal{O}\left(\sum_{i \neq i^*}(\frac{1}{\Delta_i} \log(T) + \frac{d}{\Delta_{i}}) + d K^{1/3}\log K\right)$, where $\Delta_i$ are the suboptimality gaps. We also present an extension of the algorithm to the case of arbitrary delays, which is based on an oracle knowledge of the maximal delay $d_{max}$ and achieves $\mathcal{O}(\sqrt{TK} + \sqrt{D\log K} + d_{max}K^{1/3} \log K)$ regret in the adversarial regime, where $D$ is the total delay, and $\mathcal{O}\left(\sum_{i \neq i^*}(\frac{1}{\Delta_i} \log(T) + \frac{\sigma_{max}}{\Delta_{i}}) + d_{max}K^{1/3}\log K\right)$ regret in the stochastic regime, where $\sigma_{max}$ is the maximal number of outstanding observations. Finally, we present a lower bound that matches regret upper bound achieved by the skipping technique of Zimmert and Seldin [2020] in the adversarial setting.
Accept
The paper makes a solid technical contribution to the online learning literature, providing the first best-of-both-worlds algorithm for online learning with delayed feedback. Despite building heavily on existing algorithmic ideas, the paper involves some critical technical novelties that enable its results.
train
[ "P9R6EISV3Ma", "2T80BdIHnG4", "nDaDf2_cqW", "Ji0M3GKI9Us", "40D6jRAyjOP", "DYotiTsIirL", "4A75f3SHlu8" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks the authors for their response.\n\nI have read through other reviewers’ comments and as I said earlier in my initial comment, I think the paper makes novel contribution into analysing the regret bound of a modified version of a known algorithm in stochastic setting. I also acknowlege that the lower bo...
[ -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, 3, 4, 3 ]
[ "40D6jRAyjOP", "4A75f3SHlu8", "DYotiTsIirL", "40D6jRAyjOP", "nips_2022_Dqcoao24G8s", "nips_2022_Dqcoao24G8s", "nips_2022_Dqcoao24G8s" ]
nips_2022_45p8yDYVr5
Lower Bounds on Randomly Preconditioned Lasso via Robust Sparse Designs
Sparse linear regression with ill-conditioned Gaussian random covariates is widely believed to exhibit a statistical/computational gap, but there is surprisingly little formal evidence for this belief. Recent work has shown that, for certain covariance matrices, the broad class of Preconditioned Lasso programs provably cannot succeed on polylogarithmically sparse signals with a sublinear number of samples. However, this lower bound only holds against deterministic preconditioners, and in many contexts randomization is crucial to the success of preconditioners. We prove a stronger lower bound that rules out randomized preconditioners. For an appropriate covariance matrix, we construct a single signal distribution on which any invertibly-preconditioned Lasso program fails with high probability, unless it receives a linear number of samples. Surprisingly, at the heart of our lower bound is a new robustness result in compressed sensing. In particular, we study recovering a sparse signal when a few measurements can be erased adversarially. To our knowledge, this natural question has not been studied before for sparse measurements. We surprisingly show that standard sparse Bernoulli measurements are almost-optimally robust to adversarial erasures: if $b$ measurements are erased, then all but $O(b)$ of the coordinates of the signal are identifiable.
Accept
This paper studies the problem of sparse regression with ill-conditioned Gaussian covariates. Despite the simplicity of this problem formulation and the extensive studies of sparse linear regression, the potential existence of a statistical-computational gap for this problem has not been well understood. Taking a step towards understanding this problem, the authors provide theoretically rigorous evidence about the limitation of randomly preconditioned Lasso for this problem. The paper contains solid impossibility results, and hence I recommend acceptance. Note that one reviewer has suggested ways to improve the structure and readability of the paper, which I hope the authors can address in the final paper; the paper would also benefit from having more substantial experiments.
train
[ "wGOyZ9XsuBm", "pKArDpzBz8o", "y7fak8aAuBA", "wa8KSovewoq", "10wM7JUPABl", "uoQ4Yyr6o_f", "WRcuY0wAoki", "r90XMvDqx26", "yKguTVwWO8K" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I want to thank the authors for their detailed answers to my questions. I stick to my original evaluation.", " We thank the reviewer for their time. To address their questions:\n\n1. Indeed, ill-conditioned random-design sparse linear regression has no known reduction from planted clique. Of course, this is one...
[ -1, -1, -1, -1, -1, 6, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, 3, 2, 3, 4 ]
[ "pKArDpzBz8o", "yKguTVwWO8K", "r90XMvDqx26", "WRcuY0wAoki", "uoQ4Yyr6o_f", "nips_2022_45p8yDYVr5", "nips_2022_45p8yDYVr5", "nips_2022_45p8yDYVr5", "nips_2022_45p8yDYVr5" ]
nips_2022_c39zYHHgQmy
CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders
CLIPDraw is an algorithm that synthesizes novel drawings from natural language input. It does not require any additional training; rather, a pre-trained CLIP language-image encoder is used as a metric for maximizing similarity between the given description and a generated drawing. Crucially, CLIPDraw operates over vector strokes rather than pixel images, which biases drawings towards simpler human-recognizable shapes. Results compare CLIPDraw with other synthesis-through-optimization methods, as well as highlight various interesting behaviors of CLIPDraw.
Accept
This is a very interesting paper. While there are methods for generating text without training using CLIP (e.g., https://arxiv.org/abs/2205.02655), this paper introduces a method for generating stroke-based images based on the similarity between the text and the image. The performance of the method is quite impressive and the reviews are all positive. I therefore recommend acceptance of this paper.
train
[ "2QkycgxPica", "FMFlYkxpixW", "FV84LBINYZI", "ANOPEABuMkv" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you all for the valuable reviews. Since the feedback was largely positive, we will not be making any large changes to the work.\nHowever, some comments will be addressed in the revision:\n- Wall clock time is ~1 minute on a Colab GPU\n- The motivation for focusing on human-interpretable strokes (in contrast...
[ -1, 7, 6, 7 ]
[ -1, 5, 4, 3 ]
[ "nips_2022_c39zYHHgQmy", "nips_2022_c39zYHHgQmy", "nips_2022_c39zYHHgQmy", "nips_2022_c39zYHHgQmy" ]
nips_2022_FWMQYjFso-a
Pre-Trained Language Models for Interactive Decision-Making
Language model (LM) pre-training is useful in many language processing tasks. But can pre-trained LMs be further leveraged for more general machine learning problems? We propose an approach for using LMs to scaffold learning and generalization in general sequential decision-making problems. In this approach, goals and observations are represented as a sequence of embeddings, and a policy network initialized with a pre-trained LM predicts the next action. We demonstrate that this framework enables effective combinatorial generalization across different environments and supervisory modalities. We begin by assuming access to a set of expert demonstrations, and show that initializing policies with LMs and fine-tuning them via behavior cloning improves task completion rates by 43.6% in the VirtualHome environment. Next, we integrate an active data gathering procedure in which agents iteratively interact with the environment, relabel past "failed" experiences with new goals, and update their policies in a self-supervised loop. Active data gathering further improves combinatorial generalization, outperforming the best baseline by 25.1%. Finally, we explain these results by investigating three possible factors underlying the effectiveness of the LM-based policy. We find that sequential input representations (vs. fixed-dimensional feature vectors) and LM-based weight initialization are both important for generalization. Surprisingly, however, the format of the policy inputs encoding (e.g. as a natural language string vs. an arbitrary sequential encoding) has little influence. Together, these results suggest that language modeling induces representations that are useful for modeling not just language, but also goals and plans; these representations can aid learning and generalization even outside of language processing.
Accept
This paper adapts the "pretrain-then-finetune" framework to policy learning using large language models and demonstrates its effectiveness. It also develops an active data gathering approach for settings where no expert data is available. All reviewers find the empirical findings in the paper interesting and the work technically solid. This paper may spur more work in using pretrained language models in RL settings. I recommend acceptance.
train
[ "MdyCubzwkkA", "Q70XdolY7rx5", "lbOIWbQhAlY", "804cNQN3Bm3", "LDN-PwYWNT", "2NOYGiOrZFz", "H4yvufWWa_f", "XO_csgoccBE", "quNXA68q4rf", "bbkEQg8WQhZ", "DdNtH_2SWce", "L3h79bdrBSG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the authors' feedback and the updated version of the paper. I appreciate that the authors add additional experimental results on using bidirectional encoders from BART and show that it works even better. My main concern regarding reproducibility is also resolved by the provided code. However, I still ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "804cNQN3Bm3", "H4yvufWWa_f", "LDN-PwYWNT", "L3h79bdrBSG", "2NOYGiOrZFz", "DdNtH_2SWce", "XO_csgoccBE", "bbkEQg8WQhZ", "nips_2022_FWMQYjFso-a", "nips_2022_FWMQYjFso-a", "nips_2022_FWMQYjFso-a", "nips_2022_FWMQYjFso-a" ]
nips_2022_TJUNtiZiTKE
Diffusion-based Molecule Generation with Informative Prior Bridges
AI-based molecule generation provides a promising approach to a large area of biomedical sciences and engineering, such as antibody design, hydrolase engineering, or vaccine development. Because the molecules are governed by physical laws, a key challenge is to incorporate prior information into the training procedure to generate high-quality and realistic molecules. We propose a simple and novel approach to steer the training of diffusion-based generative models with physical and statistical prior information. This is achieved by constructing physically informed diffusion bridges, stochastic processes that guarantee to yield a given observation at the fixed terminal time. We develop a Lyapunov function based method to construct and determine bridges, and propose a number of informative prior bridges for both high-quality molecule generation and uniformity-promoted 3D point cloud generation. With comprehensive experiments, we show that our method provides a powerful approach to the 3D generation task, yielding molecule structures with better quality and stability scores and more uniformly distributed point clouds of high quality.
Accept
All reviewers agreed that this work has many positive aspects, such as originality of the idea, technical soundness, and practical relevance. In the initial reviews, some concerns about the experimental evaluation have been raised. In particular, one reviewer mentioned potential problems regarding the uniqueness of generated molecules. This issue, however, could be addressed reasonably well in the rebuttal. I do share the generally positive perception of this paper. Therefore, I recommend to accept the paper.
train
[ "JSQrVv5P6b_", "XlhP0RzHS3M", "KcCtuv62OS", "2ZEFOewRPm7", "FLG3aACL72Z", "QmyEhbGZCZ", "eN7lEljmTRS", "Ehgtu5mAFJv", "c9QB_2gGZu", "QBnQxxG-A4f", "kJKqmE6werF" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for raising the score and giving positive feedback on our work. We submit a revised version and trying our best to cover as much as clarities as possible in blue in this version. Since the page limit is still nine pages at the current stage, we are running out of space to cover all the clarification above....
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4 ]
[ "2ZEFOewRPm7", "Ehgtu5mAFJv", "eN7lEljmTRS", "FLG3aACL72Z", "QmyEhbGZCZ", "kJKqmE6werF", "QBnQxxG-A4f", "c9QB_2gGZu", "nips_2022_TJUNtiZiTKE", "nips_2022_TJUNtiZiTKE", "nips_2022_TJUNtiZiTKE" ]
nips_2022_fKXiO9sLubb
Learning from Stochastically Revealed Preference
We study the learning problem of revealed preference in a stochastic setting: a learner observes the utility-maximizing actions of a set of agents whose utility follows some unknown distribution, and the learner aims to infer the distribution through the observations of actions. The problem can be viewed as a single-constraint special case of the inverse linear optimization problem. Existing works all assume that all the agents share one common utility which can easily be violated under practical contexts. In this paper, we consider two settings for the underlying utility distribution: a Gaussian setting where the customer utility follows the von Mises-Fisher distribution, and a $\delta$-corruption setting where the customer utility distribution concentrates on one fixed vector with high probability and is arbitrarily corrupted otherwise. We devise Bayesian approaches for parameter estimation and develop theoretical guarantees for the recovery of the true parameter. We illustrate the algorithm performance through numerical experiments.
Accept
An interesting approach to stochastically revealed preferences
train
[ "MP0cNt_hjk3", "R_cXLkrwZa", "OrFtwcyN3gk", "mwzlSv-IOqY", "hfsDGjvl33K", "qcoHP_RhuhL", "ad5l_HZZD4p", "ESoJ82hv0B5", "Uuv4jvOlYNY", "8-9LCwThzW" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarification, and I will keep my rating for the paper.", " Thanks for the clarifications, most of my questions are addressed.\n\nI think it would be good to include the discussion on \"which distance to select\" the corresponding considerations in such selection; and a quantitative discussion...
[ -1, -1, -1, -1, -1, -1, 6, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 2, 4, 2 ]
[ "mwzlSv-IOqY", "OrFtwcyN3gk", "8-9LCwThzW", "ESoJ82hv0B5", "ad5l_HZZD4p", "Uuv4jvOlYNY", "nips_2022_fKXiO9sLubb", "nips_2022_fKXiO9sLubb", "nips_2022_fKXiO9sLubb", "nips_2022_fKXiO9sLubb" ]
nips_2022_mowt1WNhTC7
When does dough become a bagel? Analyzing the remaining mistakes on ImageNet
Image classification accuracy on the ImageNet dataset has been a barometer for progress in computer vision over the last decade. Several recent papers have questioned the degree to which the benchmark remains useful to the community, yet innovations continue to contribute gains to performance, with today's largest models achieving 90%+ top-1 accuracy. To help contextualize progress on ImageNet and provide a more meaningful evaluation for today's state-of-the-art models, we manually review and categorize every remaining mistake that a few top models make in order to provide insight into the long-tail of errors on one of the most benchmarked datasets in computer vision. We focus on the multi-label subset evaluation of ImageNet, where today's best models achieve upwards of 97% top-1 accuracy. Our analysis reveals that nearly half of the supposed mistakes are not mistakes at all, and we uncover new valid multi-labels, demonstrating that, without careful review, we are significantly underestimating the performance of these models. On the other hand, we also find that today's best models still make a significant number of mistakes (40%) that are obviously wrong to human reviewers. To calibrate future progress on ImageNet, we provide an updated multi-label evaluation set, and we curate ImageNet-Major: a 68-example "major error" slice of the obvious mistakes made by today's top models -- a slice where models should achieve near perfection, but today are far from doing so.
Accept
All reviewers are positive about this paper, leaning toward accept. The AC does not find sufficient grounds to overrule the consensus.
val
[ "mFDdn20E2WC", "lm2tSpWvgO", "zfdN50wlKD2", "lgcLsZ4aWA0", "p5nAIDdI-F1", "omzg4Japvu6", "pk9wpFta4kq", "1bK2OXEaQBO", "8SDNxkRPLJL", "2_TKKVrGq76" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the detailed response!\n\nI had concerns that\n1. It's unclear how to use ImageNet-M to evaluate models (qualitatively and quantitatively) and decide the next steps based on the evaluation (debugging, model selection, model comparison, ...)\n2. I wasn't entirely sure if ImageNet-M is suffi...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "zfdN50wlKD2", "p5nAIDdI-F1", "lgcLsZ4aWA0", "2_TKKVrGq76", "8SDNxkRPLJL", "nips_2022_mowt1WNhTC7", "1bK2OXEaQBO", "nips_2022_mowt1WNhTC7", "nips_2022_mowt1WNhTC7", "nips_2022_mowt1WNhTC7" ]
nips_2022_PzI4ow094E
Scalable Sensitivity and Uncertainty Analyses for Causal-Effect Estimates of Continuous-Valued Interventions
Estimating the effects of continuous-valued interventions from observational data is a critically important task for climate science, healthcare, and economics. Recent work focuses on designing neural network architectures and regularization functions to allow for scalable estimation of average and individual-level dose-response curves from high-dimensional, large-sample data. Such methodologies assume ignorability (observation of all confounding variables) and positivity (observation of all treatment levels for every covariate value describing a set of units), assumptions problematic in the continuous treatment regime. Scalable sensitivity and uncertainty analyses to understand the ignorance induced in causal estimates when these assumptions are relaxed are less studied. Here, we develop a continuous treatment-effect marginal sensitivity model (CMSM) and derive bounds that agree with the observed data and a researcher-defined level of hidden confounding. We introduce a scalable algorithm and uncertainty-aware deep models to derive and estimate these bounds for high-dimensional, large-sample observational data. We work in concert with climate scientists interested in the climatological impacts of human emissions on cloud properties using satellite observations from the past 15 years. This problem is known to be complicated by many unobserved confounders.
Accept
This paper extends the marginal sensitivity model to continuous treatments. Given the developments in the discrete treatment setting, none of the parts of the paper are surprising. Further, there are several simultaneous related works that carry out a generalization to continuous treatments. That being said, the work is sound and a polished contribution.
test
[ "EheJ_AzPleY", "VgLJ9k5p63Q", "ck50dmzHVyl", "9r2ifDgTB7O", "lbegvuLTfF-", "gRy8x-uVgoG", "IEssAyWPFeg", "vDD4-Aj61o6k", "9Dc8fod7zvU", "YBPA7kRCk7O", "NVYL-k_Qcjs", "Kxu6ruNeYBr", "BK05pRAKeZH", "TnLkNI1Mi6", "nV9D_dUnfY", "SM6ijFqeT0r", "15QWNOFV9ea", "ufB7-qd57G" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for taking the time to review our paper. We hope our response below has addressed your concerns. If you have any further suggestions, we would be happy to discuss them with you prior to your review confirmation.", " Thank you again for your feedback and corrections.", " Thank you again for you...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "15QWNOFV9ea", "IEssAyWPFeg", "gRy8x-uVgoG", "lbegvuLTfF-", "TnLkNI1Mi6", "BK05pRAKeZH", "YBPA7kRCk7O", "nips_2022_PzI4ow094E", "ufB7-qd57G", "ufB7-qd57G", "15QWNOFV9ea", "SM6ijFqeT0r", "SM6ijFqeT0r", "nV9D_dUnfY", "nips_2022_PzI4ow094E", "nips_2022_PzI4ow094E", "nips_2022_PzI4ow094E...
nips_2022_QedyATtQ1H
On the convergence of policy gradient methods to Nash equilibria in general stochastic games
Learning in stochastic games is a notoriously difficult problem because, in addition to each other's strategic decisions, the players must also contend with the fact that the game itself evolves over time, possibly in a very complicated manner. Because of this, the convergence properties of popular learning algorithms — like policy gradient and its variants — are poorly understood, except in specific classes of games (such as potential or two-player, zero-sum games). In view of this, we examine the long-run behavior of policy gradient methods with respect to Nash equilibrium policies that are second-order stationary (SOS) in a sense similar to the type of sufficiency conditions used in optimization. Our first result is that SOS policies are locally attracting with high probability, and we show that policy gradient trajectories with gradient estimates provided by the REINFORCE algorithm achieve an $\mathcal{O}(1/\sqrt{n})$ distance-squared convergence rate if the method's step-size is chosen appropriately. Subsequently, specializing to the class of deterministic Nash policies, we show that this rate can be improved dramatically and, in fact, policy gradient methods converge within a finite number of iterations in that case.
Accept
This paper analyzes the convergence of policy gradient algorithms in "generic" stochastic games. The authors provide local convergence guarantees for projected gradient descent with the REINFORCE gradient estimator. Reviewers were generally positive on this paper --- though I think it needs to be much better contextualized in the literature on gradient-based learning in games (of which this is a special case). Indeed --- while interesting in the context of MARL --- the results are not very surprising given that they seem very similar to other local analyses of (stochastic) gradient play in games (see e.g., [1]). Furthermore, the equivalence of equilibria follows from well-known manipulations of the single-agent RL loss function like those performed in [2], genericity arguments for Nash equilibria [3], as well as work on variational inequality approaches to learning in games [4]. The final version of the paper should really comment on these previous results. Nevertheless, due to the positive reviews and the relevance to MARL, I recommend this paper for acceptance. [1] Chasnov, Ratliff, Mazumdar, Burden; Convergence Analysis of Gradient-Based Learning in Continuous Games [2] Zhang, Ren, Li; Gradient play in stochastic games: stationary points, convergence, and sample complexity [3] Ratliff, Burden, Sastry; Characterization and computation of local Nash equilibria in continuous games [4] Mertikopoulos and Zhou; Learning in games with continuous action sets and unknown payoff functions
test
[ "Vq5a2AVtXAh", "L7WytOIyyyV", "xsD6iX_z_I", "WqGLheGukyw", "5neWKZ6Dd7N", "bOP7aInLmzqW", "TE-nkKpnCt7", "m_P5T8O1Q3C", "2qOqTc4lzsy", "CNSxhQVeBeD", "CpNZNNEsXbxJ", "GSK0CJ9wJ1i2", "kHs6nOnLq8z", "Cc8xgJi5eCx", "smqr1-UOlx8", "k69HUOgZk_o", "eVLDI79jYHu", "e2MeCNkI7ra" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your follow-up comments and your positive re-assessment! We reply to your two remarks point-by-point below:\n\n1. ***On the assumptions of Jin et al.*** \n\n The only Nash equilibrium convergence result of Jin et al. concerns two-player zero-sum stochastic games; by contrast, our paper treats ge...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 5 ]
[ "xsD6iX_z_I", "WqGLheGukyw", "kHs6nOnLq8z", "5neWKZ6Dd7N", "bOP7aInLmzqW", "e2MeCNkI7ra", "eVLDI79jYHu", "eVLDI79jYHu", "eVLDI79jYHu", "k69HUOgZk_o", "k69HUOgZk_o", "k69HUOgZk_o", "k69HUOgZk_o", "smqr1-UOlx8", "nips_2022_QedyATtQ1H", "nips_2022_QedyATtQ1H", "nips_2022_QedyATtQ1H", ...
nips_2022_cZ41U927n8m
Semi-Supervised Learning with Decision Trees: Graph Laplacian Tree Alternating Optimization
Semi-supervised learning seeks to learn a machine learning model when only a small amount of the available data is labeled. The most widespread approach uses a graph prior, which encourages similar instances to have similar predictions. This has been very successful with models ranging from kernel machines to neural networks, but has remained inapplicable to decision trees, for which the optimization problem is much harder. We solve this based on a reformulation of the problem which requires iteratively solving two simpler problems: a supervised tree learning problem, which can be solved by the Tree Alternating Optimization algorithm; and a label smoothing problem, which can be solved through a sparse linear system. The algorithm is scalable and highly effective even with very few labeled instances, and makes it possible to learn accurate, interpretable models based on decision trees in such situations.
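The abstract above describes alternating between a supervised tree fit and a label-smoothing step solved through a sparse linear system. Below is a minimal sketch of the label-smoothing step only, assuming a sparse symmetric affinity matrix W, a combinatorial graph Laplacian, and a quadratic smoothness penalty with weight gamma; the paper's exact objective and the TAO tree-fitting step are not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def smooth_labels(W, Y, gamma=1.0):
    """Solve (I + gamma * L) Z = Y for smoothed label scores Z.

    W: (n, n) sparse symmetric affinity matrix over all instances.
    Y: (n, k) array of one-hot labels (rows of zeros for unlabeled points).
    """
    n = W.shape[0]
    degrees = np.asarray(W.sum(axis=1)).ravel()
    L = sp.diags(degrees) - W                  # combinatorial graph Laplacian
    A = (sp.eye(n) + gamma * L).tocsc()        # sparse SPD system matrix
    # Solve one sparse linear system per class column.
    Z = np.column_stack([spsolve(A, Y[:, j]) for j in range(Y.shape[1])])
    return Z
```

In an alternating scheme of this kind, the smoothed scores Z would then serve as targets when refitting the tree on all instances.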
Accept
This paper extends graph-based semi-supervised learning to decision tree classifiers, where the optimization gets much more challenging. The proposed solution reformulates the problem with a new auxiliary variable, which leads naturally to an iterative solution alternating between 1) supervised learning on trees, and 2) label smoothing via a sparse linear system. High accuracy and favorable interpretability of the method are demonstrated in numerical experiments. All the reviewers, including myself, find the paper a solid contribution to the methodology and analysis. There are a few concerns such as computational complexity, and the rebuttal has done a good job addressing it (and other concerns). These additional results and insights can be included in the final version of the paper.
train
[ "qhXHRqvCh2m", "WGmJrmBhUI", "l6W-xTttnog", "ztPtB70Wo-gu", "_kmGVtKz5T", "65gkmFHlxNQ", "NU4wmworjUZ", "Ba63RflQfTq", "AFdX5nL_K1", "VTcYwyG-pXE", "JWNWEFTYSit", "SG2RJ6199SM", "Zt6D9XSpTum", "l9WTFGknNYF", "MAGrmfww_C7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' detailed explanations, and my main concern has been addressed. For that reason I decide to raise my score to 6. ", " I appreciated the author detailed response. Most questions have been resolved, and author should add EBBS for comparison in the revised version. I raised my score to 6. ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 5 ]
[ "Ba63RflQfTq", "_kmGVtKz5T", "VTcYwyG-pXE", "NU4wmworjUZ", "65gkmFHlxNQ", "l9WTFGknNYF", "MAGrmfww_C7", "AFdX5nL_K1", "Zt6D9XSpTum", "SG2RJ6199SM", "nips_2022_cZ41U927n8m", "nips_2022_cZ41U927n8m", "nips_2022_cZ41U927n8m", "nips_2022_cZ41U927n8m", "nips_2022_cZ41U927n8m" ]
nips_2022_303XqIQ5c_d
You Only Live Once: Single-Life Reinforcement Learning
Reinforcement learning algorithms are typically designed to learn a performant policy that can repeatedly and autonomously complete a task, usually starting from scratch. However, in many real-world situations, the goal might not be to learn a policy that can do the task repeatedly, but simply to perform a new task successfully once in a single trial. For example, imagine a disaster relief robot tasked with retrieving an item from a fallen building, where it cannot get direct supervision from humans. It must retrieve this object within one test-time trial, and must do so while tackling unknown obstacles, though it may leverage knowledge it has of the building before the disaster. We formalize this problem setting, which we call single-life reinforcement learning (SLRL), where an agent must complete a task within a single episode without interventions, utilizing its prior experience while contending with some form of novelty. SLRL provides a natural setting to study the challenge of autonomously adapting to unfamiliar situations, and we find that algorithms designed for standard episodic reinforcement learning often struggle to recover from out-of-distribution states in this setting. Motivated by this observation, we propose an algorithm, Q-weighted adversarial learning (QWALE), which employs a distribution matching strategy that leverages the agent's prior experience as guidance in novel situations. Our experiments on several single-life continuous control problems indicate that methods based on our distribution matching formulation are 20-60% more successful because they can more quickly recover from novel states.
Accept
The paper introduces a new formulation for single-life reinforcement learning, which is interesting. Moreover, an algorithm is presented for solving this RL scenario. The paper was evaluated positively by all reviewers. The two borderline reviews' main concerns were: - missing theoretical evidence / motivation for the algorithm (Reviewer muVA): This concern has been mostly addressed by the authors. They motivate their choice of the weights, but how to incorporate the weights into the algorithm is clear to me on an intuitive level and not so well backed by theory. - the algorithm was not illustrated for scenarios with changing goals (Reviewer jw3X): This concern was addressed by the rebuttal. Unfortunately, the two reviewers with the borderline scores did not respond to the rebuttal, but I think their concerns have been mostly addressed and they should have raised their scores. Hence, I recommend accepting the paper.
train
[ "c1h6L1oVe-C", "Zi70v7eiS-b", "WXYNtqV3I39", "N356D1MDDz", "PYy4GPycET", "ZsAsYYnh34xs", "efNo-qjh_d", "niD57hsm_Al", "aIfLQXrUG44", "br1j_6Jhtt", "6N3DYltWz0", "xldJeKRCJ2O", "ermtgKfcFLV", "G3k2E3nkTzq", "eKcMAjUBAMu", "j44HPvaKBo7", "KBDgrML8Zqr" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi reviewer jw3X, \n\nWe wanted to check in again. Can you let us know if our revisions and response address your concerns? If not, we would be happy to provide further revisions for remaining concerns. Thank you!", " Hi Reviewer bBp1,\n\nWe wanted to check in again to see if our revisions and response address ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 3 ]
[ "j44HPvaKBo7", "eKcMAjUBAMu", "G3k2E3nkTzq", "6N3DYltWz0", "KBDgrML8Zqr", "G3k2E3nkTzq", "j44HPvaKBo7", "eKcMAjUBAMu", "G3k2E3nkTzq", "nips_2022_303XqIQ5c_d", "KBDgrML8Zqr", "j44HPvaKBo7", "eKcMAjUBAMu", "nips_2022_303XqIQ5c_d", "nips_2022_303XqIQ5c_d", "nips_2022_303XqIQ5c_d", "nips...
nips_2022_KblXjniQCHY
Neural Circuit Architectural Priors for Embodied Control
Artificial neural networks for motor control usually adopt generic architectures like fully connected MLPs. While general, these tabula rasa architectures rely on large amounts of experience to learn, are not easily transferable to new bodies, and have internal dynamics that are difficult to interpret. In nature, animals are born with highly structured connectivity in their nervous systems shaped by evolution; this innate circuitry acts synergistically with learning mechanisms to provide inductive biases that enable most animals to function well soon after birth and learn efficiently. Convolutional networks inspired by visual circuitry have encoded useful biases for vision. However, it is unknown to what extent ANN architectures inspired by neural circuitry can yield useful biases for other AI domains. In this work, we ask what advantages biologically inspired ANN architecture can provide in the domain of motor control. Specifically, we translate C. elegans locomotion circuits into an ANN model controlling a simulated Swimmer agent. On a locomotion task, our architecture achieves good initial performance and asymptotic performance comparable with MLPs, while dramatically improving data efficiency and requiring orders of magnitude fewer parameters. Our architecture is interpretable and transfers to new body designs. An ablation analysis shows that constrained excitation/inhibition is crucial for learning, while weight initialization contributes to good initial performance. Our work demonstrates several advantages of biologically inspired ANN architecture and encourages future work in more complex embodied control.
Accept
This paper introduces the use of neural circuit architectural priors to build controllers for a physically simulated c-elegans-like swimmer implemented in MuJoCo as part of the DeepMind control suite. By leveraging the bio-inspired architectural priors, the controller starts with structured behavior (rather than highly erratic random movements as is commonly the starting point for embodied RL initial behavior). And the architectural prior supports continued learning from this starting point. The work is seen as original, interesting, and quite clear. The work is also nicely self-contained. That said, this paper has received mixed and borderline reviews (6, 4, 4, 6), and there were some concerns about scalability and utility to the AI community. This paper was discussed with the SAC, and we decided that despite some of these legitimate concerns, this paper should be accepted. This paper has clear goals and can help us rethink some of our approaches to architectures. Moreover the potential audience spans both neuroscience and AI. We (the AC and SAC) still highly encourage you to seriously consider comments from the reviewers. From both the positive-leaning and negative-leaning reviewers, there is respect for what was done as a work of modeling, but concerns about whether this constitutes only well-done computational modeling, or if it really amounts to anything that could be useful for AI more generally (and if it could scale to other bodies). You outlined some next steps, and how similar approaches could be used in other scenarios and with more complex bodies; we recommend that you include that discussion in this paper. We'd also strongly encourage you to avoid assertive claims about how neuroscience-inspired ideas can generally improve AI systems, and acknowledge limitations in this case study. While this case study is a provocative first step, the reviewers and AC tend to believe it will prove quite difficult to extend this strategy to more complex bodies. Overall, focusing on what was achieved in this paper, nice work.
train
[ "ImaErFFS6YH", "YUXi2C-uFCL", "1HIvb7CxFy2", "PcLmsQinVGi", "ot84N7AmhFX", "9MkKyMMvqmI", "3bLsGGywgz9", "eyU-kZF_umO", "uAI6X6hm1wo", "lNp1mAfUXhp", "qmyFXQbBUB4", "xx-clmzelKJ" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for trying out the requested experiment. I understand these are tight timelines, but I'm afraid I can't significantly vote for acceptance based on the assumption that very significant changes will be made between now and the camera-ready version. That said, I'll still increase my score (3-->4) to repres...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "1HIvb7CxFy2", "9MkKyMMvqmI", "PcLmsQinVGi", "3bLsGGywgz9", "xx-clmzelKJ", "qmyFXQbBUB4", "lNp1mAfUXhp", "uAI6X6hm1wo", "nips_2022_KblXjniQCHY", "nips_2022_KblXjniQCHY", "nips_2022_KblXjniQCHY", "nips_2022_KblXjniQCHY" ]
nips_2022_zUbMHIxszNp
Micro and Macro Level Graph Modeling for Graph Variational Auto-Encoders
Generative models for graph data are an important research topic in machine learning. Graph data comprise two levels that are typically analyzed separately: node-level properties such as the existence of a link between a pair of nodes, and global aggregate graph-level statistics, such as motif counts. This paper proposes a new multi-level framework that jointly models node-level properties and graph-level statistics, as mutually reinforcing sources of information. We introduce a new micro-macro training objective for graph generation that combines node-level and graph-level losses. We utilize the micro-macro objective to improve graph generation with a GraphVAE [41], a well-established model based on graph-level latent variables, that provides fast training and generation time for medium-sized graphs. Our experiments show that adding micro-macro modeling to the GraphVAE model improves graph quality scores up to 2 orders of magnitude on five benchmark datasets, while maintaining the GraphVAE generation speed advantage.
Accept
This paper proposes a new generative model for the generation of graphs. Different from most existing approaches, the proposed method considers both node- and graph-level properties to capture high-order connectivity and overcome the sparsity of any observed graph. The writing is generally clear and the results are convincing. The reviewers are overall positive, with some concerns on the motivation, which have been addressed well by the authors in the rebuttal. Some other questions raised by the reviewers are also appropriately addressed, which leads to the increase of some scores. The downside of the approach lies in the time complexity of collecting the macro-level statistics. But overall, it is a good paper worth accepting.
train
[ "VH0F61NuNt0", "sKVGOiAFJZy", "yZekHQodo7A", "WD4mLL53cR", "FnkSlUhCQBP", "2O0Lerhi0RJ", "BRaYTS4_Dqt", "HwV6yeHVfqa", "sDr2O_5pHY5", "3g2iLFWAbLb", "AKFpBkPsL4q", "qdKNOG0RoIy", "aj7J4lEPx-N" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for re-evaluating our work and increasing their rating. Below are responses to the follow-up questions.\n\n**Q.** Can you explain why your model performs poorly when you use only one graph statistic? I wonder why the performance improved rapidly when you use all of three statistics. Does th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "sKVGOiAFJZy", "aj7J4lEPx-N", "qdKNOG0RoIy", "aj7J4lEPx-N", "aj7J4lEPx-N", "aj7J4lEPx-N", "3g2iLFWAbLb", "3g2iLFWAbLb", "AKFpBkPsL4q", "nips_2022_zUbMHIxszNp", "nips_2022_zUbMHIxszNp", "nips_2022_zUbMHIxszNp", "nips_2022_zUbMHIxszNp" ]
nips_2022_grzlF-EOxPA
Conformal Frequency Estimation with Sketched Data
A flexible conformal inference method is developed to construct confidence intervals for the frequencies of queried objects in very large data sets, based on a much smaller sketch of those data. The approach is data-adaptive and requires no knowledge of the data distribution or of the details of the sketching algorithm; instead, it constructs provably valid frequentist confidence intervals under the sole assumption of data exchangeability. Although our solution is broadly applicable, this paper focuses on applications involving the count-min sketch algorithm and a non-linear variation thereof. The performance is compared to that of frequentist and Bayesian alternatives through simulations and experiments with data sets of SARS-CoV-2 DNA sequences and classic English literature.
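For context, here is a minimal count-min sketch, the data structure the abstract refers to; the conformal calibration that turns sketch queries into confidence intervals is not shown, and the hashing scheme below is only illustrative.

```python
import numpy as np

class CountMinSketch:
    """Minimal count-min sketch: returns upper bounds on item frequencies."""

    def __init__(self, width=2048, depth=4, seed=0):
        rng = np.random.default_rng(seed)
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)
        # One salt per row, used to derive independent-looking hash functions.
        self.salts = [int(s) for s in rng.integers(1, 2**31 - 1, size=depth)]

    def _columns(self, item):
        return [hash((salt, item)) % self.width for salt in self.salts]

    def update(self, item, count=1):
        for row, col in enumerate(self._columns(item)):
            self.table[row, col] += count

    def query(self, item):
        # With non-negative counts, this never underestimates the true frequency.
        return int(min(self.table[row, col]
                       for row, col in enumerate(self._columns(item))))
```

The conformal layer described in the abstract would be built on top of such queries, using an exchangeable calibration set to turn the deterministic upper bounds into valid confidence intervals.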
Accept
The paper proposes a method based on conformal inference in order to obtain confidence intervals for the frequencies of queried objects in very large data sets, based on sketched data. The applicability of the method relies solely on the exchangeability assumption for the data, not on the sketching procedure nor on the data distribution, and is therefore very general, as emphasized by all reviewers. The reviewers have done a great job and this should be (and has been) acknowledged by the authors. There have been some objections concerning the applicability of the main assumption (exchangeability), the meaningfulness of the experimental comparison with prior work and the interpretation of the resulting plots, and the amount of theoretical content of the paper. But the post-review discussion appears to have been very active and fruitful. It overall gives me the impression that the authors took very seriously the comments and will improve the manuscript accordingly, and that many objections could be answered by a more appropriate exposition. Given that this paper lies on the edge of the acceptance threshold, this improvement is very important, as the reviewers' concerns (which have some strong overlap) will otherwise probably be shared by the wider audience of readers. This is especially true given the statistics flavor of the paper which does not target the main NeurIPS audience, implying that an even greater effort has to be put on the presentation. The authors have provided very detailed answers, which I find meaningful from a layman's perspective, and not all of their content will fit in the additional page. There is thus an important work of selection and re-writing ahead of the authors before publication.
train
[ "nAvgHpZlJLq", "hQR6degg0FO", "ZnBjGUF5eI", "dQu7kLuCmYm", "ZIfNxxjMQ7", "9Q_R4AiyNWc", "PXGTZflxzI", "ZgaFnabHy2V", "T4D8qGoXfV", "NqFLIxqgctm", "QsDri2zsT-Q", "dPQ2-h_x4K", "X1X0smeWk6f", "Ild25872VD" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer vGGR,\n\nThank you for taking the time to read our rather long response and for continuing the discussion. Please let us clarify that we did not mean to suggest any misunderstanding might be due to any fault on your side. By contrast, we were very pleased to see you read our paper carefully. What we...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "hQR6degg0FO", "ZnBjGUF5eI", "Ild25872VD", "nips_2022_grzlF-EOxPA", "9Q_R4AiyNWc", "PXGTZflxzI", "Ild25872VD", "T4D8qGoXfV", "NqFLIxqgctm", "X1X0smeWk6f", "dPQ2-h_x4K", "nips_2022_grzlF-EOxPA", "nips_2022_grzlF-EOxPA", "nips_2022_grzlF-EOxPA" ]
nips_2022_fDDTJakJKR7
A Single-timescale Analysis for Stochastic Approximation with Multiple Coupled Sequences
Stochastic approximation (SA) with multiple coupled sequences has found broad applications in machine learning such as bilevel learning and reinforcement learning (RL). In this paper, we study the finite-time convergence of nonlinear SA with multiple coupled sequences. Different from existing multi-timescale analysis, we seek scenarios where a fine-grained analysis can provide a tight performance guarantee for single-timescale multi-sequence SA (STSA). At the heart of our analysis is the smoothness property of the fixed points in multi-sequence SA that holds in many applications. When all sequences have strongly monotone increments, we establish the iteration complexity of $\mathcal{O}(\epsilon^{-1})$ to achieve $\epsilon$-accuracy, which improves the existing $\mathcal{O}(\epsilon^{-1.5})$ complexity for two coupled sequences. When the main sequence does not have a strongly monotone increment, we establish the iteration complexity of $\mathcal{O}(\epsilon^{-2})$. We showcase the power of our result by applying it to stochastic bilevel and compositional optimization problems, as well as RL problems, all of which recover the best known or lead to improvements over their existing guarantees.
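As a concrete illustration of the update structure in the abstract above, here is a generic single-timescale template with one main and one auxiliary sequence; the increment functions, constant step sizes, and noise model are placeholders rather than the paper's specific choices.

```python
import numpy as np

def single_timescale_sa(v, w, x0, y0, steps=10_000, alpha=1e-2, beta=1e-2):
    """Run two coupled stochastic-approximation sequences with constant step sizes.

    v(x, y): noisy increment for the main sequence x.
    w(x, y): noisy increment for the auxiliary sequence y, which tracks its fixed point y*(x).
    """
    x = np.asarray(x0, dtype=float)
    y = np.asarray(y0, dtype=float)
    for _ in range(steps):
        x = x - alpha * v(x, y)   # main sequence update
        y = y - beta * w(x, y)    # auxiliary update (one common variant uses the new x here)
    return x, y
```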
Accept
This paper provides a convergence analysis for nonlinear stochastic approximation with a "multi-sequence" update structure motivated by applications in reinforcement learning and bilevel learning. When all sequences have strongly monotone increments, the authors provide an iteration complexity of O(\epsilon^{-1}) to achieve \epsilon-accuracy, which improves the existing O(\epsilon^{-1.5}) complexity for two coupled sequences. When the main sequence does not have strongly monotone increments, they establish an iteration complexity of O(\epsilon^{-2}). The reviewers agreed that the techniques in this paper are novel, and that it is well-written. In addition, the paper improves upon existing results when applied to problems in reinforcement learning and bilevel optimization, and hence is likely to have broader impact. However, the reviewers felt that, for the final version, the discussion of the smoothness assumption needs to be expanded, and the comparison with prior work needs to be improved.
train
[ "-1AWB3SJd1", "m0UZA7y0CvS", "NsUnMQusuaPe", "LhkKOE_W1mU", "cT_yovly1H4", "8r5g45Feb_g", "eI81sw7wkp-", "rSNw_JkC-U", "K998hbgvd90", "y--yDJCt1mA" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the responses. Overall, I think the writing of the current submission is not sufficiently clear due to the lack of the above important discussions. In my understanding, I still feel that Lipchitz assumption for $y*$ is a kind of stronger assumption, which gives a chance to directly get a single-timesca...
[ -1, -1, -1, -1, -1, -1, 6, 6, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 1, 5, 3 ]
[ "8r5g45Feb_g", "cT_yovly1H4", "y--yDJCt1mA", "K998hbgvd90", "rSNw_JkC-U", "eI81sw7wkp-", "nips_2022_fDDTJakJKR7", "nips_2022_fDDTJakJKR7", "nips_2022_fDDTJakJKR7", "nips_2022_fDDTJakJKR7" ]
nips_2022_Sxk8Bse3RKO
Reconstructing Training Data From Trained Neural Networks
Understanding to what extent neural networks memorize training data is an intriguing question with practical and theoretical implications. In this paper we show that in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier. We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods. To the best of our knowledge, our results are the first to show that reconstructing a large portion of the actual training samples from a trained neural network classifier is generally possible. This has negative implications on privacy, as it can be used as an attack for revealing sensitive training data. We demonstrate our method for binary MLP classifiers on a few standard computer vision datasets.
Accept
This paper proposes a new algorithm to reconstruct a subset of training examples from a trained homogeneous binary classification neural network. Although there are still some limitations, such as the zero training loss and homogeneity assumptions, as well as limited experiments beyond MLPs, the reviewers also acknowledge that this result is very interesting and reveals an important property of deep neural networks that could potentially have far-reaching implications for privacy and security.
train
[ "ucD4TqIqVWh", "9XXT7psdBh", "3HeztmcDzNH", "63EXePafW12", "yOGfW7ISlDM", "Y__-glNWavz", "bbZp3CXgreN", "h9fJEFxx75J", "7eCYGUJ9cPA", "P6-vzallHhk" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'd like to thank the authors for their answers and I have no further questions. While I understand the concerns of other reviewers, I'm weighting the conceptual contribution stronger than the practical limitations such that I am keeping my score. ", " Thanks for your elaboration. I updated my rating after read...
[ -1, -1, -1, -1, -1, -1, 7, 8, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "yOGfW7ISlDM", "3HeztmcDzNH", "P6-vzallHhk", "7eCYGUJ9cPA", "h9fJEFxx75J", "bbZp3CXgreN", "nips_2022_Sxk8Bse3RKO", "nips_2022_Sxk8Bse3RKO", "nips_2022_Sxk8Bse3RKO", "nips_2022_Sxk8Bse3RKO" ]
nips_2022_36Yz37cEN_Q
Redeeming intrinsic rewards via constrained policy optimization
State-of-the-art reinforcement learning (RL) algorithms typically use random sampling (e.g., $\epsilon$-greedy) for exploration, but this method fails in hard exploration tasks like Montezuma's Revenge. To address the challenge of exploration, prior works incentivize the agent to visit novel states using an exploration bonus (also called intrinsic rewards), which led to excellent results on some hard exploration tasks. However, recent studies show that on many other tasks intrinsic rewards can bias policy optimization, leading to poor performance compared to optimizing only the environment reward. The low performance results from the agent seeking intrinsic rewards and performing unnecessary exploration even when sufficient environment reward is provided. This inconsistency in performance across tasks prevents widespread use of intrinsic rewards with RL algorithms. We propose a principled constrained policy optimization procedure to eliminate the detrimental effects of intrinsic rewards while preserving their merits when applicable. Our method automatically tunes the importance of intrinsic reward: it suppresses intrinsic rewards when they are not needed and increases them when exploration is required. The end result is a superior exploration algorithm that does not require manual tuning to balance intrinsic rewards against environment rewards. Experimental results across 61 Atari games validate our claim.
Accept
Balancing between extrinsic rewards and intrinsic rewards is an important challenge for exploration in RL. This paper proposes a simple yet effective way to automatically adjust the balance between them. The large-scale empirical result across 61 Atari games shows a strong improvement over the baseline approaches. All of the reviewers agreed that the proposed method is novel, and the empirical results are convincing. The reviewers had no major concern about the paper. Thus, I recommend accepting this paper.
train
[ "UMz5Vj70Qdn", "tATNbHr971", "73vPV5OxzK1", "5SGXaYZIb2_T", "NNp0Mv5M4s_", "Q-j0W2KkmeG", "bpPQQl6sbLj", "4WnFu4IOfSd" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response.\n\nI must have missed those explanations in the paper. Indeed it sufficiently covers my concerns.\nOverall I liked this work, very intuitive and simple scheme.", " I thank the authors for their response and for clarifying my question. ", " We’re happy to hear that you enjoyed the bre...
[ -1, -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "5SGXaYZIb2_T", "NNp0Mv5M4s_", "4WnFu4IOfSd", "bpPQQl6sbLj", "Q-j0W2KkmeG", "nips_2022_36Yz37cEN_Q", "nips_2022_36Yz37cEN_Q", "nips_2022_36Yz37cEN_Q" ]
nips_2022_NdpUjzwsHp
S-PIFu: Integrating Parametric Human Models with PIFu for Single-view Clothed Human Reconstruction
We present three novel strategies to incorporate a parametric body model into a pixel-aligned implicit model for single-view clothed human reconstruction. Firstly, we introduce ray-based sampling, a novel technique that transforms a parametric model into a set of highly informative, pixel-aligned 2D feature maps. Next, we propose a new type of feature based on blendweights. Blendweight-based labels serve as soft human parsing labels and help to improve the structural fidelity of reconstructed meshes. Finally, we show how we can extract and capitalize on body part orientation information from a parametric model to further improve reconstruction quality. Together, these three techniques form our S-PIFu framework, which significantly outperforms state-of-the-arts methods in all metrics. Our code is available at https://github.com/kcyt/SPIFu.
Accept
All reviewers consider this a novel and effective contribution to the increasingly important subfield of 3D human reconstruction, particularly from unusual poses, or, as exposed in the rebuttal, with loose clothing. The key technical questions of the reviewers (both positive and negative) were about dependence on accurate pose parameters, and dependence on accurate surface fits, which would be incorrect for e.g. baggy clothing. The rebuttal does a thorough and convincing job in exploring these questions, Reviewer FRso says: - The three proposed methods "seem more like tricks". This does not refute their novelty - that would be achieved by pointing to specific prior art. - "What happens if [16] fails?" This is now well answered in the rebuttal, and the answer is satisfactory - "ablated versions perform worse" - the new tables show this can be the case on some datasets, but not on others. Of course, it would be ideal if some mechanism could downweight these contributions where appropriate, but that is not a task for this paper. Reviewer 49L7 says: - " It is not clear how this approach performs for the pixels that do not belong to SMPLX body". Now answered well in the rebuttal. - "seems to require very accurate underlying SMPLX fitting". Now answered well in the rebuttal. - "misses one important work, ICON". As the rebuttal notes, the code for this work was released very shortly before the deadline. The rebuttal is careful to give the timeline for the code release, rather than just using the CVPR conference date. The rebuttal also includes a preliminary but useful comparison to ICON, showing that in fact the paper outperforms ICON (when trained on similar data), but also noting that they are very different architectures, which further argues for both being exposed to the community. I agree with the authors that the later objections of R1 are "moving the goalposts". I would not necessarily dismiss those later objections if they were fundamental, but again, the rebuttal answers them convincingly. Reviewer sxwr was overall in favour of accept, but had some queries, again well responded to in the rebuttal.
train
[ "mAfKWp0r5XT", "ZEQAawtee2R", "kShovhKxQYF", "avud2F5kti", "y7xF-QlO7YN", "w_lQjI1sF9F", "tN6dcbIXXKu", "cn0NpekAirx", "hxvH918cbbk_", "VRMU-rVXc1r", "Fvcrhtf5wnM", "RmT7tLbq56L" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you, we take your comments very seriously. But ray-based sampling alone (i.e. PIFu + M + C) is not sufficient for a paper because it is only able to cleanly outperform the SOTA (PIFuHD) in the THuman2.0 dataset and not in the BUFF dataset (See below; lower values are better for all metrics shown). \n\n| ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "ZEQAawtee2R", "w_lQjI1sF9F", "nips_2022_NdpUjzwsHp", "tN6dcbIXXKu", "cn0NpekAirx", "hxvH918cbbk_", "RmT7tLbq56L", "Fvcrhtf5wnM", "VRMU-rVXc1r", "nips_2022_NdpUjzwsHp", "nips_2022_NdpUjzwsHp", "nips_2022_NdpUjzwsHp" ]
nips_2022_48Js-sP8wnv
Use-Case-Grounded Simulations for Explanation Evaluation
A growing body of research runs human subject evaluations to study whether providing users with explanations of machine learning models can help them with practical real-world use cases. However, running user studies is challenging and costly, and consequently each study typically only evaluates a limited number of different settings, e.g., studies often only evaluate a few arbitrarily selected model explanation methods. To address these challenges and aid user study design, we introduce Simulated Evaluations (SimEvals). SimEvals involve training algorithmic agents that take as input the information content (such as model explanations) that would be presented to the user, to predict answers to the use case of interest. The algorithmic agent's test set accuracy provides a measure of the predictiveness of the information content for the downstream use case. We run a comprehensive evaluation on three real-world use cases (forward simulation, model debugging, and counterfactual reasoning) to demonstrate that SimEvals can effectively identify which explanation methods will help humans for each use case. These results provide evidence that SimEvals can be used to efficiently screen an important set of user study design decisions, e.g., selecting which explanations should be presented to the user, before running a potentially costly user study.
Accept
The paper proposes simulated evaluations (SimEvals) to guide explainable AI (XAI) researchers about what explanations to include in a user study. All the reviewers agreed that this is a novel contribution to a significant and timely problem. There were common questions around the empirical evaluations that the authors clarified during the feedback phase. The reviewers have acknowledged the authors' responses and have confirmed that their questions were adequately addressed. By adding the new table contrasting prior work that the authors included in their feedback, as well as the clarifications from the reviewer discussion, the paper will be substantially stronger.
test
[ "CyG677CxgF", "x7tA1bTydLY", "cu_SZo7zEeg-", "gzQ0CNWLLed", "QZf7i2ltfhm", "YzB7Zo23iyb", "6eAy9Pc8lho", "gGp5kZnJ74V", "ZCEroW-EtQq" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review and for acknowledging the strengths of this work. \n\nWe first respond to the weaknesses below:\n\n**Snippet:** *“Test accuracy of the agent may not be a wholistic measure of the performance, might need to augment with other metrics as well.”*\n- We agree that including additional metri...
[ -1, -1, -1, -1, -1, 5, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "ZCEroW-EtQq", "gGp5kZnJ74V", "6eAy9Pc8lho", "YzB7Zo23iyb", "nips_2022_48Js-sP8wnv", "nips_2022_48Js-sP8wnv", "nips_2022_48Js-sP8wnv", "nips_2022_48Js-sP8wnv", "nips_2022_48Js-sP8wnv" ]
nips_2022_s_PJMEGIUfa
LIFT: Language-Interfaced Fine-Tuning for Non-language Machine Learning Tasks
Fine-tuning pretrained language models (LMs) without making any architectural changes has become a norm for learning various language downstream tasks. However, for non-language downstream tasks, a common practice is to employ task-specific designs for input, output layers, and loss functions. For instance, it is possible to fine-tune an LM into an MNIST classifier by replacing the word embedding layer with an image patch embedding layer, the word token output layer with a 10-way output layer, and the word prediction loss with a 10-way classification loss, respectively. A natural question arises: Can LM fine-tuning solve non-language downstream tasks without changing the model architecture or loss function? To answer this, we propose Language-Interfaced Fine-Tuning (LIFT) and study its efficacy and limitations by conducting an extensive empirical study on a suite of non-language classification and regression tasks. LIFT does not make any changes to the model architecture or loss function, and it solely relies on the natural language interface, enabling "no-code machine learning with LMs." We find that LIFT performs comparably well across a wide range of low-dimensional classification and regression tasks, matching the performances of the best baselines in many cases, especially for the classification tasks. We also report experimental results on the fundamental properties of LIFT, including inductive bias, robustness, and sample complexity. We also analyze the effect of pretraining on LIFT and a few properties/techniques specific to LIFT, e.g., context-aware learning via appropriate prompting, calibrated predictions, data generation, and two-stage fine-tuning. Our code is available at https://github.com/UW-Madison-Lee-Lab/LanguageInterfacedFineTuning.
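To make the language-interface idea concrete, here is a toy serialization of one tabular example into a text prompt and completion; the exact template, tokenization, and fine-tuning setup used in the paper may differ.

```python
def row_to_prompt(feature_names, values, target_name="label"):
    """Turn one feature vector into a natural-language question for an LM."""
    clauses = [f"{name} is {value}" for name, value in zip(feature_names, values)]
    return "Given that " + ", ".join(clauses) + f", what is the {target_name}?"

def example_to_text(feature_names, values, target_value, target_name="label"):
    """Prompt plus completion, forming one fine-tuning example."""
    return row_to_prompt(feature_names, values, target_name) + f" {target_value}"

# e.g. example_to_text(["age", "income"], [37, 52000], "approved")
# -> "Given that age is 37, income is 52000, what is the label? approved"
```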
Accept
The paper demonstrates that pre-trained language models can be competitive at classifying non-textual data, where the input features are linearized into a text-like sequence and used as the conditional prefix for the language model. While the method is still not competitive with supervised learning methods, the fact that LLMs are able to do the task is intriguing. During the rebuttal, the authors also provided convincing answers to when one might prefer this approach (which is computationally expensive) over traditional methods. This paper provides timely insights into empirical research on LLMs. Therefore, I recommend acceptance.
train
[ "tEHI8v4XalB", "68IvMrXZe8K", "h7cWz5dA44Y", "tdXrxhZCzj", "MtKK6QgsXHH", "07YqmvObNK-", "8982ANhJXUt", "tGxoeI5btJ", "5Z_sESvb0Bi", "IbS_25gelwj", "NoHiMg3LN0", "8n7ke4l1jpT", "TOvSLEmNzwbV", "NXIFhzvcTVw", "oTIK6sgQfiI", "ixShrhTCz73", "aAiGrns56xR", "pO_xKfI8Dt_", "stqW0ZaxOU"...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the Reviewer for increasing the score. \nWe appreciate your valuable feedbacks for our work and we will happily keep improving our paper.\nBest, ", " We thank the Reviewer and really appreciate your support for our paper. ", " I have read all the comments by the authors, and they mostly clear up my q...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "h7cWz5dA44Y", "tdXrxhZCzj", "MtKK6QgsXHH", "tGxoeI5btJ", "07YqmvObNK-", "8982ANhJXUt", "x5t4o414sia", "stqW0ZaxOU", "IbS_25gelwj", "pO_xKfI8Dt_", "8n7ke4l1jpT", "aAiGrns56xR", "NXIFhzvcTVw", "oTIK6sgQfiI", "ixShrhTCz73", "nips_2022_s_PJMEGIUfa", "nips_2022_s_PJMEGIUfa", "nips_2022...
nips_2022_aqLugNVQqRw
Class-Aware Adversarial Transformers for Medical Image Segmentation
Transformers have made remarkable progress towards modeling long-range dependencies within the medical image analysis domain. However, current transformer-based models suffer from several disadvantages: (1) existing methods fail to capture the important features of the images due to the naive tokenization scheme; (2) the models suffer from information loss because they only consider single-scale feature representations; and (3) the segmentation label maps generated by the models are not accurate enough without considering rich semantic contexts and anatomical textures. In this work, we present CASTformer, a novel type of adversarial transformers, for 2D medical image segmentation. First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations. We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures. Lastly, we utilize an adversarial training strategy that boosts segmentation accuracy and correspondingly allows a transformer-based discriminator to capture high-level semantically correlated contents and low-level anatomical features. Our experiments demonstrate that CASTformer dramatically outperforms previous state-of-the-art transformer-based approaches on three benchmarks, obtaining 2.54%-5.88% absolute improvements in Dice over previous models. Further qualitative experiments provide a more detailed picture of the model’s inner workings, shed light on the challenges in improved transparency, and demonstrate that transfer learning can greatly improve performance and reduce the size of medical image datasets in training, making CASTformer a strong starting point for downstream medical image analysis tasks.
Accept
The paper proposes a generative adversarial approach to 2D medical image segmentation. The problem is a standard one for MRI and improvements in this direction can have real-world impact. The reviewers were on the whole positive in their opinions of the paper. They found the design to be well-motivated and the paper to be well-written and easy to follow. The improvement to current methods was significant enough to be a good reason for acceptance. The reviewers generally found the feedback period to be helpful in swaying them in a more positive direction for accepting the paper.
train
[ "TTugiUuBw0d", "AYv8WOKK9MY", "sqrA2i7lj46", "OVMg6d44IA", "_Ov2FEMMg8L", "qshAlPFP8M9", "yqRNFAW7UV", "ys85Fb7fOQ", "WxHiFk6leKy", "PIefzigE9aI", "u4vrvyFOVG_", "45GIIcbQkHl", "FbG4KGpMrQ", "Bm0igmgj4dw" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nI thank the authors for responding to the concerns and questions I have raised.\nI carefully read the rebuttals and some of them convince me.\nI would like to increase my score to weak acceptance based on my evaluation.\n\nThank you for your replies and I have no more concerns.", " We thank the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5 ]
[ "OVMg6d44IA", "sqrA2i7lj46", "PIefzigE9aI", "Bm0igmgj4dw", "45GIIcbQkHl", "u4vrvyFOVG_", "ys85Fb7fOQ", "WxHiFk6leKy", "Bm0igmgj4dw", "FbG4KGpMrQ", "45GIIcbQkHl", "nips_2022_aqLugNVQqRw", "nips_2022_aqLugNVQqRw", "nips_2022_aqLugNVQqRw" ]
nips_2022_p4xLHcTLRwh
SALSA: Attacking Lattice Cryptography with Transformers
Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers. Consequently, "quantum resistant" cryptosystems are in high demand, and lattice-based cryptosystems, based on a hard problem known as Learning With Errors (LWE), have emerged as strong contenders for standardization. In this work, we train transformers to perform modular arithmetic and mix half-trained models and statistical cryptanalysis techniques to propose SALSA: a machine learning attack on LWE-based cryptographic schemes. SALSA can fully recover secrets for small-to-mid size LWE instances with sparse binary secrets, and may scale to attack real world LWE-based cryptosystems.
Accept
The authors propose SALSA: a machine learning attack on cryptographic schemes based on Learning With Errors (LWE) as the underlying hard problem. They show that SALSA recovers secrets of small and medium size LWE instances with sparse binary secrets. The main selling point is that if the attack could scale up, it may pose a real threat to real-world LWE-based cryptosystems. The reviewers found the use of transformers to perform modular arithmetic interesting. At the same time, I agree with the reviewers that the current computational complexity of the attack and its poor scalability in the dimension of the lattice could potentially prevent its deployment against real-world systems. As such, despite the merit of the paper, I find it to be a borderline submission.
train
[ "forYYc9TBq", "MqZhiJd1St", "Do1MYIaVEao", "53_rj-Q1_uE", "2_f2VmaBThl", "R__DxQYS5t", "r09HBhTDzA4", "1DzGu1REf-k", "d7xBDCaFugg", "cGzR-4LYfB5", "_VYZ-fH3v5M" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are happy to add the number of possible secrets as a baseline for Table 2.\n\nFor Table 4, we believe the number of possible secrets for a given $n$/density would be the most appropriate baseline, since our experiments in Table 4 restrict the range of values in $\\mathbf{a}$ but not the number of possible secr...
[ -1, -1, -1, -1, -1, -1, -1, 7, 3, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 3 ]
[ "Do1MYIaVEao", "2_f2VmaBThl", "R__DxQYS5t", "_VYZ-fH3v5M", "cGzR-4LYfB5", "d7xBDCaFugg", "1DzGu1REf-k", "nips_2022_p4xLHcTLRwh", "nips_2022_p4xLHcTLRwh", "nips_2022_p4xLHcTLRwh", "nips_2022_p4xLHcTLRwh" ]
nips_2022_huT1G2dtSr
Robust Imitation via Mirror Descent Inverse Reinforcement Learning
Recently, adversarial imitation learning has shown a scalable reward acquisition method for inverse reinforcement learning (IRL) problems. However, estimated reward signals often become uncertain and fail to train a reliable statistical model since the existing methods tend to solve hard optimization problems directly. Inspired by a first-order optimization method called mirror descent, this paper proposes to predict a sequence of reward functions, which are iterative solutions for a constrained convex problem. IRL solutions derived by mirror descent are tolerant to the uncertainty incurred by target density estimation since the amount of reward learning is regulated with respect to local geometric constraints. We prove that the proposed mirror descent update rule ensures robust minimization of a Bregman divergence in terms of a rigorous regret bound of $\mathcal{O}(1/T)$ for step sizes $\{\eta_t\}_{t=1}^{T}$. Our IRL method was applied on top of an adversarial framework, and it outperformed existing adversarial methods in an extensive suite of benchmarks.
Accept
This work proposes imitation learning via the route of mirror descent inverse RL. Mirror descent is a well-understood optimization algorithm, and framing IRL via it is a good theoretical exercise. Using an expert to help schedule learning is a novel theoretical contribution in the context of adversarial imitation learning, and it directly guides the design of the approach. The current concern is that the experimental results are not statistically significant; even though the theoretical properties of mirror descent are nice to potentially leverage, they are not coming through strongly yet. One suggestion is to drastically increase the number of random seeds (say 25) and report 2*standard error instead of standard deviation, especially when comparing to RAIRL. The promising innovation is the idea of multiple discriminators, which can better account for distribution shift. The authors are encouraged to bolster the experiments with this in mind and frame this as the central point of the work, with the theory of MD as supporting evidence. The writing of the paper can also be improved, and the idea of estimating experts for a curriculum can be highlighted better, as this is a significant contribution and is currently a bit buried in the text. A significant refactoring of Section 4 and a running example that connects Figs. 2 and 3 to the algorithm in Section 5 will greatly help. Lines 130-151 are not the main contribution and can be moved to the appendix or cut. This is also mainly an imitation learning paper and not an IRL paper, as the reviewers have noted. While the naming follows the convention of other IL papers like GAIL and RAIRL, it can be a bit misleading. Perhaps the authors can reconsider the name.
train
[ "oWZ5U2fDJri", "QcWr32wiGG3", "bFOeN0qjuR", "aFcxAnSYa1f", "TvGtDv5kW9", "EZ__lX6cylc", "gAaa1K94fV-", "3qUVzyq3xRy", "pqwWbhvJAId", "TM4-bftkWIc", "I7Hgr3Y1oL", "y2QhDF3YA3w", "BqRHebuJYdH", "e9JDF8MnZp3", "kKdmrpsVyPT", "zqPT3-RfnTGE", "xvHvoKEKEV5c", "GgjNxHw0lqY", "nRy16duSBn...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_...
[ " Thank you again for your insightful comments and discussion.\n\n\nSincerely, Authors.", " Thanks for the clarifications. I will take these into account when discussing the paper with the other reviewers. I don't have any additional questions at this point.", " We are very grateful that our response cleared ma...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 2, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 2, 4 ]
[ "QcWr32wiGG3", "bFOeN0qjuR", "aFcxAnSYa1f", "TvGtDv5kW9", "EZ__lX6cylc", "gAaa1K94fV-", "y2QhDF3YA3w", "pqwWbhvJAId", "kKdmrpsVyPT", "e9JDF8MnZp3", "y2QhDF3YA3w", "BqRHebuJYdH", "GgjNxHw0lqY", "nRy16duSBny", "zqPT3-RfnTGE", "pN2lz8SZ53U", "GgjNxHw0lqY", "wwC2puPOSNZ", "GM5MhZ5GFJ...
nips_2022_q2nJyb3cvR9
Near-Optimal Randomized Exploration for Tabular Markov Decision Processes
We study algorithms using randomized value functions for exploration in reinforcement learning. This type of algorithm enjoys appealing empirical performance. We show that when we use 1) a single random seed in each episode, and 2) a Bernstein-type magnitude of noise, we obtain a worst-case $\widetilde{O}\left(H\sqrt{SAT}\right)$ regret bound for episodic time-inhomogeneous Markov Decision Processes, where $S$ is the size of the state space, $A$ is the size of the action space, $H$ is the planning horizon and $T$ is the number of interactions. This bound polynomially improves all existing bounds for algorithms based on randomized value functions, and for the first time, matches the $\Omega\left(H\sqrt{SAT}\right)$ lower bound up to logarithmic factors. Our result highlights that randomized exploration can be near-optimal, which was previously achieved only by optimistic algorithms. To achieve the desired result, we develop 1) a new clipping operation to ensure both the probability of being optimistic and the probability of being pessimistic are lower bounded by a constant, and 2) a new recursive formula for the absolute value of estimation errors to analyze the regret.
Accept
We thank the authors for their submission. The paper studies regret minimization in finite-horizon tabular Markov decision processes. It is the first to show an optimal (up to logarithmic factors) regret bound of $\widetilde{O}(H \sqrt{|S| |A| T})$ for Thompson sampling-type algorithms. It is a good addition to the TS literature, showing another case in which TS algorithms can have the same regret guarantees as optimistic algorithms. The paper is well-written.
train
[ "2gBPAfynl", "hlQfaBo3hff", "YY1Bv_Hmyp", "wo_6Qf7zMW0", "oZhQNNvPMnh", "-Mu4Md_-_b8", "v9xSAjHSmkyN", "-4QwtcZaSTL", "uzsIcSCgKP", "QPBaSHHJDHfI", "k6BF1ho4tLj", "U-qVArdYYAn", "ZmbnBhdkmj1", "K0e7uoBMN7e", "68owXVCI6hv", "HJniB9FQwtJ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your response! Please find our answers to your questions below:\n- **Non-stationary MDP**: In RL theory, **non-stationary MDP** means that transition probability $P$ and reward function $r$ change from episodes to episodes, which is a notion of non-stationarity slightly different from the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "hlQfaBo3hff", "QPBaSHHJDHfI", "wo_6Qf7zMW0", "-4QwtcZaSTL", "HJniB9FQwtJ", "K0e7uoBMN7e", "ZmbnBhdkmj1", "HJniB9FQwtJ", "68owXVCI6hv", "K0e7uoBMN7e", "ZmbnBhdkmj1", "nips_2022_q2nJyb3cvR9", "nips_2022_q2nJyb3cvR9", "nips_2022_q2nJyb3cvR9", "nips_2022_q2nJyb3cvR9", "nips_2022_q2nJyb3cv...
nips_2022_Kf8sfv0RckB
TTOpt: A Maximum Volume Quantized Tensor Train-based Optimization and its Application to Reinforcement Learning
We present a novel procedure for optimization based on the combination of an efficient quantized tensor train representation and a generalized maximum matrix volume principle. We demonstrate the applicability of the new Tensor Train Optimizer (TTOpt) method for various tasks, ranging from the minimization of multidimensional functions to reinforcement learning. Our algorithm compares favorably to popular gradient-free methods and outperforms them in terms of the number of function evaluations or execution time, often by a significant margin.
Accept
The basic ideas and contributions of this paper have been positively evaluated by the reviewers. There were a few questions, but many of them were resolved by the authors' careful replies. One reviewer felt that the comparison on reinforcement learning is inadequate, but since it is an application, this is not a major problem.
test
[ "o0ZgIThEsJg", "4NP8YtwYwfE", "R_VdFHwyg2s", "9F06vASsC80", "wpDmnjWrIl", "8j4ZppLpaHh", "K30GlSvoISn", "JfixnKgbJl", "_y3rl5vR9X8", "H6gQmfL3NC", "xdBrKMf4aiC", "OV3IVPn_h6bz", "hkSyXhEox5", "nwrDELUK36x", "1tGJR3TVFMQ", "7wJHCc1_DjM", "14gXs9r50fP", "LydYjFhWZUM", "vm7r4WDPzP6"...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " Thanks for the detailed rebuttal which clarifies some of my concern. I decide to increase the score.", " Thanks a lot! We are very pleased with your appreciation of our work. We will describe the discretization process in more detail in the text, if it will be accepted for publication.", " Thank you very much...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4, 3 ]
[ "PxkiIF4lg-d", "wpDmnjWrIl", "8j4ZppLpaHh", "B6tHEXKhuFw", "_y3rl5vR9X8", "OV3IVPn_h6bz", "xrM4jNQf70T", "B6tHEXKhuFw", "Y0Y0gy2pAo6", "xdBrKMf4aiC", "F7693w2o3n4", "1tGJR3TVFMQ", "nwrDELUK36x", "SLsQf3P0tJ", "V-X-boocn_g", "F7693w2o3n4", "F7693w2o3n4", "F7693w2o3n4", "F7693w2o3n...
nips_2022_gnc2VJHXmsG
RKHS-SHAP: Shapley Values for Kernel Methods
Feature attribution for kernel methods is often heuristic and not individualised for each prediction. To address this, we turn to the concept of Shapley values (SV), a coalition game theoretical framework that has previously been applied to the interpretation of different machine learning models, such as linear models, tree ensembles and deep networks. By analysing SVs from a functional perspective, we propose RKHS-SHAP, an attribution method for kernel machines that can efficiently compute both Interventional and Observational Shapley values using kernel mean embeddings of distributions. We show theoretically that our method is robust with respect to local perturbations - a key yet often overlooked desideratum for consistent model interpretation. Further, we propose the Shapley regulariser, applicable to a general empirical risk minimisation framework, which allows learning while controlling the level of a specific feature's contributions to the model. We demonstrate that the Shapley regulariser enables learning which is robust to covariate shift of a given feature and fair learning which controls the SVs of sensitive features.
Accept
The authors propose a novel method for calculating Shapley values for kernel-based models. The paper includes both a theoretical analysis and an extensive experimental evaluation. A majority of reviewers are in support of accepting the paper, and the rebuttal/discussion period helped to clear up (most of) the reviewers' concerns.
train
[ "kEUXuxoaOZ", "A0o7UDfIt2I", "tpHPtvg4O7", "EqIrzt4K6zH", "1dIm8ebBlq", "hr8Ik92pGuT", "SMNpQ99P-kN", "XqRRnGR2P4I", "0T40I4ufnUz", "kXS4yqLmpX0", "QbqAtLBdf-", "W6ebe9cAD9", "j_JA91rKjn7", "mxgZzXTKTbFV", "1xfDLTrUICd", "ZNQ77wARFeD", "hDThH9TEbQ5", "8DKggMTVyjs", "F0_qSbb8MAl",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_r...
[ " We will forward the message we recieved from the Program Chairs:\n\n\n- During this decision making phase, the authors are not allowed to interact with the reviewers, area chairs nor senior area chairs. Reviewers, area chairs and senior area chairs will work with what they have at this moment, including your manu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "A0o7UDfIt2I", "1dIm8ebBlq", "EqIrzt4K6zH", "XqRRnGR2P4I", "hr8Ik92pGuT", "SMNpQ99P-kN", "kXS4yqLmpX0", "0T40I4ufnUz", "W6ebe9cAD9", "QbqAtLBdf-", "j_JA91rKjn7", "mxgZzXTKTbFV", "hDThH9TEbQ5", "ZNQ77wARFeD", "nips_2022_gnc2VJHXmsG", "BQkeSBmKUYh", "EFtuInxqJe", "Pja6xWdaSkc", "ni...
nips_2022_3LBxVcnsEkV
GREED: A Neural Framework for Learning Graph Distance Functions
Similarity search in graph databases is one of the most fundamental operations in graph analytics. Among various distance functions, graph and subgraph edit distances (GED and SED, respectively) are two of the most popular and expressive measures. Unfortunately, exact computations for both are NP-hard. To overcome this computational bottleneck, neural approaches that learn and predict edit distance in polynomial time have received much interest. While considerable progress has been made, there exist limitations that need to be addressed. First, the efficacy of an approximate distance function lies not only in its approximation accuracy, but also in the preservation of its properties. To elaborate, although GED is a metric, its neural approximations do not provide such a guarantee. This prohibits their usage in higher-order tasks that rely on metric distance functions, such as clustering or indexing. Second, several existing frameworks for GED do not extend to SED due to SED being asymmetric. In this work, we design a novel siamese graph neural network called Greed, which, through a carefully crafted inductive bias, learns GED and SED in a property-preserving manner. Through extensive experiments across $10$ real graph datasets containing up to $7$ million edges, we establish that Greed is not only more accurate than the state of the art, but also up to $3$ orders of magnitude faster. Even more significantly, because the triangle inequality is preserved, the generated embeddings are indexable and, consequently, even in a CPU-only environment, Greed is up to $50$ times faster than GPU-powered computations of the closest baseline.
Accept
The paper studies the important problem of computing distance functions across graphs, which is NP-hard to solve in the worst case. The authors provide a theoretical analysis of certain properties of the algorithm and show its relevance in practice. The reviewers pointed out some weaknesses, but the rebuttal helped resolve some of those. Please address those comments for the camera-ready version. The paper has weak accept votes, but in light of the significance of the topic, I also lean toward accepting the paper.
train
[ "7aFIQMYq6s", "Gs5yx7FwlzT", "WpWCiWF_1WO", "oMnb7bJdSFp", "btf4IOEF5Zg", "AdHGQK3DlMs", "vq6t53578Uz", "xLCfCVzYAC3", "My--wnMAr8", "IwFh1flPYHgC", "PviUGBtsnrv", "tTUwZ99aE4K", "wbGHWop2Ky", "nKdbUmWg7qU", "0ptXGHvK7Nz", "jX8ADOCPjD", "JYiOfhBLo4Z", "-4SHMCrFgu" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hello dear authors,\nThank you for your comments and addressing the issues pointed out by all reviewers. I am updating my score to weak accept.", " Based on your latest responses, my major concerns are addressed. \n\nI would like to improve my ratings to weak accept.", " Dear Reviewer ap6M,\n\nWe thank you fo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "WpWCiWF_1WO", "oMnb7bJdSFp", "My--wnMAr8", "btf4IOEF5Zg", "AdHGQK3DlMs", "tTUwZ99aE4K", "nKdbUmWg7qU", "PviUGBtsnrv", "wbGHWop2Ky", "nips_2022_3LBxVcnsEkV", "tTUwZ99aE4K", "-4SHMCrFgu", "JYiOfhBLo4Z", "0ptXGHvK7Nz", "jX8ADOCPjD", "nips_2022_3LBxVcnsEkV", "nips_2022_3LBxVcnsEkV", "...
nips_2022_cLx3kbl2AI
Context-Based Dynamic Pricing with Partially Linear Demand Model
In today’s data-rich environment, context-based dynamic pricing has gained much attention. To model the demand as a function of price and context, the existing literature either adopts a parametric model or a non-parametric model. The former is easier to implement but may suffer from model mis-specification, whereas the latter is more robust but does not leverage many structural properties of the underlying problem. This paper combines these two approaches by studying context-based dynamic pricing with online learning, where the unknown expected demand admits a semi-parametric partially linear structure. Specifically, we consider two demand models, whose expected demand at price $p$ and context $x \in \mathbb{R}^d$ is given by $bp+g(x)$ and $f(p) + a^\top x$, respectively. We assume that $g(x)$ is $\beta$-H{\"o}lder continuous in the first model, and $f(p)$ is $k$th-order smooth with an additional parameter $\delta$ in the second model. For both models, we design an efficient online learning algorithm with provable regret upper bounds, and establish matching lower bounds. This enables us to characterize the statistical complexity for the two learning models, whose optimal regret rates are $\widetilde \Theta(\sqrt T \vee T^{\frac{d}{d+2\beta}})$ and $\widetilde \Theta(\sqrt T \vee (\delta T^{k+1})^{\frac{1}{2k+1}})$, respectively. The numerical results demonstrate that our learning algorithms are more effective than benchmark algorithms, and also reveal the effects of parameters $d$, $\beta$ and $\delta$ on the algorithms' empirical regret, which are consistent with our theoretical findings.
Accept
The reviewers found the paper to be novel and interesting. The introduced model was found to be innovative and to lead to cleaner/better regret bounds. The only major concern that was raised and not resolved was a lack of technical novelty. However, it seems that this work provides new and relevant results beyond the existing literature, and that clean, fundamental techniques are indeed adequate for achieving the result of this paper.
train
[ "XLFiBUKlhq", "WCHf7BnQCk9", "27vMXe6rFuPd", "2Hzdt7q2EAv", "qacRqSGC1eG", "Yr6aIwp4xxE", "f6JxRhvYivU", "3pUhOSUJ-7j", "XhGJ4KUSswu", "4gkKhNYgmgr", "sj6nd52yf78", "vK_7fkv4bs", "D6BGwpXZave" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We really appreciate the reviewer’s insightful comments that help our paper become stronger. We have learned quite a lot from the reviewer’s professionalism and patience. Thank you so much for your valuable time.", " The authors' response clarifies all of the points that I did not understand. As for [30] (now [...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "WCHf7BnQCk9", "XhGJ4KUSswu", "vK_7fkv4bs", "vK_7fkv4bs", "vK_7fkv4bs", "4gkKhNYgmgr", "4gkKhNYgmgr", "D6BGwpXZave", "sj6nd52yf78", "nips_2022_cLx3kbl2AI", "nips_2022_cLx3kbl2AI", "nips_2022_cLx3kbl2AI", "nips_2022_cLx3kbl2AI" ]
nips_2022_SPiQQu2NmO9
Target alignment in truncated kernel ridge regression
Kernel ridge regression (KRR) has recently attracted renewed interest due to its potential for explaining the transient effects, such as double descent, that emerge during neural network training. In this work, we study how the alignment between the target function and the kernel affects the performance of the KRR. We focus on the truncated KRR (TKRR), which utilizes an additional parameter that controls the spectral truncation of the kernel matrix. We show that for polynomial alignment, there is an over-aligned regime, in which TKRR can achieve a faster rate than what is achievable by full KRR. The rate of TKRR can improve all the way to the parametric rate, while that of full KRR is capped at a sub-optimal value. This shows that target alignment can be better leveraged by utilizing spectral truncation in kernel methods. We also consider the bandlimited alignment setting and show that the regularization surface of TKRR can exhibit transient effects, including multiple descent and non-monotonic behavior. Our results show that there is a strong and quantifiable relation between the shape of the alignment spectrum and the generalization performance of kernel methods, both in terms of rates and in finite samples.
Accept
The reviewers all found the paper interesting and agreed that it contributes new results. The reviewers made several useful, constructive comments that the authors are strongly encouraged to take into account.
train
[ "hkPv73gKmGP", "jsC6hlSCx8", "Xl434e-o_tH", "NOFXFiYc0ZK", "JKSMQfxZ3Yb", "hOUiiul7DS", "3CU0fDCyu7t", "kmkZ2yojcJ9", "7gphLjflgPu", "ZPnP0C07US", "35rU0hPgGK1", "12-wvZ6iVLH", "4V9UQLrP4IX", "B2swk8lUO8Z", "Hl0l0xPwHaN" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you! We are happy that our comments clarified your questions. Yes, we will add the clarifying statements to the revised paper.\n", " > thank you for your reply, it clarified my confusions and if you include some of these points in the manuscript, I believe it will help the readers as well.\n\nWe are glad ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 2 ]
[ "JKSMQfxZ3Yb", "hOUiiul7DS", "NOFXFiYc0ZK", "kmkZ2yojcJ9", "7gphLjflgPu", "ZPnP0C07US", "nips_2022_SPiQQu2NmO9", "4V9UQLrP4IX", "Hl0l0xPwHaN", "B2swk8lUO8Z", "12-wvZ6iVLH", "nips_2022_SPiQQu2NmO9", "nips_2022_SPiQQu2NmO9", "nips_2022_SPiQQu2NmO9", "nips_2022_SPiQQu2NmO9" ]
nips_2022_eUAw7dwaOg8
Bridging the Gap: Unifying the Training and Evaluation of Neural Network Binary Classifiers
While neural network binary classifiers are often evaluated on metrics such as Accuracy and $F_1$-Score, they are commonly trained with a cross-entropy objective. How can this training-evaluation gap be addressed? While specific techniques have been adopted to optimize certain confusion matrix based metrics, it is challenging or impossible in some cases to generalize the techniques to other metrics. Adversarial learning approaches have also been proposed to optimize networks via confusion matrix based metrics, but they tend to be much slower than common training methods. In this work, we propose a unifying approach to training neural network binary classifiers that combines a differentiable approximation of the Heaviside function with a probabilistic view of the typical confusion matrix values using soft sets. Our theoretical analysis shows the benefit of using our method to optimize for a given evaluation metric, such as $F_1$-Score, with soft sets, and our extensive experiments show the effectiveness of our approach in several domains.
Accept
When training binary classifiers, one usually minimizes the (surrogate) binary cross-entropy loss (BCE), but evaluates on metrics such as F1, AUROC, or other confusion matrix-based scores. The authors propose to instead combine a differentiable approximation of the Heaviside function with a probabilistic view of the typical confusion matrix values using soft sets to directly optimize F1 and AUROC at training time. The authors show that, under certain assumptions, the metrics computed over the soft-set confusion matrix values are asymptotically similar to the underlying true metrics. Finally, the authors evaluate the proposed approximations on several unbalanced datasets and show competitive performance with respect to optimizing BCE. While the reviewers outlined some weaknesses, they agreed that this work is relevant to the larger research community and presents a novel approach with potential to be used in practice. During the rebuttal phase, the authors addressed the main remaining issues and I will recommend acceptance. Please incorporate all the information presented during the rebuttal phase into the manuscript.
train
[ "6JbhRiK7B7", "_P2xqELLNZ0", "774qly9buWH", "4n_e_1MDzqs", "8kr3uokaeAP", "j9oeWCjwo8D", "5YQQD33Tf6f", "u_hKUtEUs3", "GeR3HJw0fCZ", "Vocby5Ynvyl" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed response. I also read your response to other reviewers. I decide to maintain my positive rating. I hope this paper could include an analysis of generalization errors, studies on multiclass classification, and a comparison with other surrogate losses in the main content of the paper.",...
[ -1, -1, -1, -1, -1, -1, 6, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 2, 4 ]
[ "j9oeWCjwo8D", "774qly9buWH", "Vocby5Ynvyl", "GeR3HJw0fCZ", "u_hKUtEUs3", "5YQQD33Tf6f", "nips_2022_eUAw7dwaOg8", "nips_2022_eUAw7dwaOg8", "nips_2022_eUAw7dwaOg8", "nips_2022_eUAw7dwaOg8" ]
nips_2022_pGLFkjgVvVe
Uncertainty Estimation Using Riemannian Model Dynamics for Offline Reinforcement Learning
Model-based offline reinforcement learning approaches generally rely on bounds of model error. Estimating these bounds is usually achieved through uncertainty estimation methods. In this work, we combine parametric and nonparametric methods for uncertainty estimation through a novel latent-space-based metric. In particular, we build upon recent advances in Riemannian geometry of generative models to construct a pullback metric of an encoder-decoder-based forward model. Our proposed metric measures both the quality of out-of-distribution samples and the discrepancy of examples in the data. We leverage our combined method for uncertainty estimation in a pessimistic model-based framework, showing a significant improvement upon contemporary model-based offline approaches on continuous control and autonomous driving benchmarks.
Accept
Unanimous accept from 3 reviewers. I'm uncertain about "accept" given that reviewers XJeV's and Jba5's reviews are on the short and vague side. Reviewer vqqM never responded, even though they would have been a great reviewer for this work (I reminded them once and they confirmed, but I forgot to follow up again). Reviewer TKTA's review was the most useful, a borderline accept. There is reviewer consensus on the novelty of the paper and on it being well written, with convincing results on MuJoCo and a highway environment. I myself am unfamiliar with Riemannian metrics/manifolds; however, after reading up on the subject, I worried this paper might have been too close to "Latent Space Oddity: on the Curvature of Deep Generative Models" in how it learns (or computes) latent space metrics. However, this work differs by (1) using a variational *forwards* model to consider dynamics, (2) using an ensemble of models to consider epistemic uncertainty, and (3) tying both these aleatoric and epistemic forms of uncertainty into an offline-RL setting, where rewards are pessimistically estimated under uncertainty. While it seems a little ad hoc to suggest this particular method _for_ a particular application (offline RL), which confuses the narrative and motivation a bit, it does seem to give better RL performance in these settings than L2 and ensembling/bootstrapping in Figure 4. This is the most borderline paper I've seen as AC this NeurIPS, but if forced to make a decision, I lean accept.
train
[ "RXlHkvM-Wqk", "fnx0M2So99v", "TthC12cx8D", "_8MT67_zqkO", "-SlAbh2D8ci", "p36z4tfMEgn", "3JXdgqEgqoJ", "6baQMHgPFD_", "Haq7LVvG7mv" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your time and effort in revising the paper. Regarding the geodesic solve: My apologies for not having seen the supplementary material, yes it is clearly explained there, thank you for the detailed description. I'm not quite certain if Corollary 1 is of enough importance in the main paper, but thank ...
[ -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "fnx0M2So99v", "TthC12cx8D", "Haq7LVvG7mv", "6baQMHgPFD_", "3JXdgqEgqoJ", "nips_2022_pGLFkjgVvVe", "nips_2022_pGLFkjgVvVe", "nips_2022_pGLFkjgVvVe", "nips_2022_pGLFkjgVvVe" ]
nips_2022_cJ006qBE8Uv
Adversarial Unlearning: Reducing Confidence Along Adversarial Directions
Supervised learning methods trained with maximum likelihood objectives often overfit on training data. Most regularizers that prevent overfitting look to increase confidence on additional examples (e.g., data augmentation, adversarial training), or reduce it on training data (e.g., label smoothing). In this work we propose a complementary regularization strategy that reduces confidence on self-generated examples. The method, which we call RCAD (Reducing Confidence along Adversarial Directions), aims to reduce confidence on out-of-distribution examples lying along directions adversarially chosen to increase training loss. In contrast to adversarial training, RCAD does not try to robustify the model to output the original label, but rather regularizes it to have reduced confidence on points generated using much larger perturbations than in conventional adversarial training. RCAD can be easily integrated into training pipelines with a few lines of code. Despite its simplicity, we find on many classification benchmarks that RCAD can be added to existing techniques (e.g., label smoothing, MixUp training) to increase test accuracy by 1-3% in absolute value, with more significant gains in the low data regime. We also provide a theoretical analysis that helps to explain these benefits in simplified settings, showing that RCAD can provably help the model unlearn spurious features in the training data.
Accept
All reviewers have expressed a clear opinion in favour of acceptance, with one improving their score after the rebuttal and discussion. I’m happy to recommend acceptance.
train
[ "LZcCVKCWf9j", "foX3bQsYJNF", "87-7efj3Lty", "qA_eRffTCKDI", "19T43H5Xd_e", "DTNo9rXE_nF", "aliucaa53Lq", "bO3cWpPad5I", "-jBnMNFNrtc", "UFUX8uuN2I", "krC2iE9Ro4D", "2qejYrxrXtA", "iGD2qWm9S5k", "Xq1hI-Nxb8T", "mebmQRVJUx8", "xD9cONEUN_f" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The additional results on VAT partly address my concern and I have increased my score. But the effectiveness of the proposed method still worries me given the marginal improvement.", " Dear Reviewer,\n\nThank you for the suggestions for improving the paper. We have additional experiments comparing RCAD with the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "19T43H5Xd_e", "19T43H5Xd_e", "qA_eRffTCKDI", "krC2iE9Ro4D", "2qejYrxrXtA", "aliucaa53Lq", "UFUX8uuN2I", "-jBnMNFNrtc", "xD9cONEUN_f", "mebmQRVJUx8", "Xq1hI-Nxb8T", "iGD2qWm9S5k", "nips_2022_cJ006qBE8Uv", "nips_2022_cJ006qBE8Uv", "nips_2022_cJ006qBE8Uv", "nips_2022_cJ006qBE8Uv" ]