paper_id: string (length 19-21)
paper_title: string (length 8-170)
paper_abstract: string (length 8-5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29-10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
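The records below follow this schema: scalar paper-level fields (id, title, abstract, decision, meta-review, split label) plus parallel per-review lists. As a minimal sketch of how such a dump could be loaded and inspected with the Hugging Face `datasets` library, the snippet below assumes the rows are stored locally as JSON Lines under the hypothetical file name `peer_reviews.jsonl`; no published Hub identifier is confirmed here.

```python
# Minimal sketch, assuming the rows are stored as JSON Lines in "peer_reviews.jsonl"
# (hypothetical file name); field names follow the schema above.
from datasets import load_dataset

ds = load_dataset("json", data_files="peer_reviews.jsonl", split="train")

row = ds[0]
print(row["paper_id"], row["paper_acceptance"], row["label"])
print(len(row["review_ids"]), "forum posts")

# The ratings/confidences lists appear to use -1 for posts that are not
# scored official reviews (e.g. author responses), so filter those out.
scored = [
    (rating, conf)
    for rating, conf in zip(row["review_ratings"], row["review_confidences"])
    if rating != -1
]
print("official review ratings/confidences:", scored)
```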
nips_2022_cUY5OkP3VR
Hyperparameter Sensitivity in Deep Outlier Detection: Analysis and a Scalable Hyper-Ensemble Solution
Outlier detection (OD) literature exhibits numerous algorithms as it applies to diverse domains. However, given a new detection task, it is unclear how to choose an algorithm to use, nor how to set its hyperparameter(s) (HPs) in unsupervised settings. HP tuning is an ever-growing problem with the arrival of many new detectors based on deep learning, which usually come with a long list of HPs. Surprisingly, the issue of model selection in the outlier mining literature has been “the elephant in the room”; a significant factor in unlocking the utmost potential of deep methods, yet little said or done to systematically tackle the issue. In the first part of this paper, we conduct the first large-scale analysis on the HP sensitivity of deep OD methods, and through more than 35,000 trained models, quantitatively demonstrate that model selection is inevitable. Next, we design a HP-robust and scalable deep hyper-ensemble model called ROBOD that assembles models with varying HP configurations, bypassing the choice paralysis. Importantly, we introduce novel strategies to speed up ensemble training, such as parameter sharing, batch/simultaneous training, and data subsampling, that allow us to train fewer models with fewer parameters. Extensive experiments on both image and tabular datasets show that ROBOD achieves and retains robust, state-of-the-art detection performance as compared to its modern counterparts, while taking only 2-10% of the time by the naïve hyper-ensemble with independent training.
Accept
This paper empirically demonstrates the sensitivity of unsupervised OD methods to hyperparameters and proposes ensembles of models with differing hyperparameters, along with training techniques based on weight sharing that make this efficient. The authors provided additional experiments to answer some reviewers' major concerns regarding the (meta-)HP robustness of the ensemble methods. While there are natural fluctuations with respect to the (meta-)hyperparameters, including a larger number of hyperparameters usually resulted in close-to-optimal AUROC. Another concern for two reviewers was the extendability of the efficient ensemble training techniques to other models such as GANs. As the authors replied, even though skip connections might not be adaptable to other techniques, e.g. for ensembling various widths, zero-masked layers with BatchEnsemble would also be broadly applicable to ensemble-learn other unsupervised representation learning models. Perhaps in the final version, the authors could further discuss, with a short experiment, how the AE-specific scaling techniques contribute to performance. A further concern was the lack of theoretical underpinning for the (meta-)HP robustness. Given that the reviewers agree that this paper provides ample empirical evidence and praise the experimental value, we think theoretical work (providing HP sensitivity results is highly nontrivial in general, let alone for neural networks) can be part of future work, but the lack of it in the current manuscript should not prevent the publication of this work.
train
[ "rPyTM7BcZwe", "usXgNasl0iL", "Pp8Jsx5bi5v", "byeTDTavVRi", "wpgHVw3k4k", "SxMaR-RLd0", "eAud8qLCZCG", "XW-n7AHFHNO", "noDoXpHbYQx", "Bf4k_U_Zap1", "E3u4GkgSAYz", "a_yKn6KHeh", "2IA-YQaCu3n", "kwY64Gnbi2e", "8jiCOSgp1N6", "af-ylfAQgpk", "1tQeVZCOQGk" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you to the authors for answering my questions. I particularly appreciate the new experiments on the muti-width / multi-depth autoencoder, which show that reconstruction loss is not always improved with the multi-width / multi-depth design. I also appreciated the clarification about why sensitivity to hyperp...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "af-ylfAQgpk", "Pp8Jsx5bi5v", "kwY64Gnbi2e", "E3u4GkgSAYz", "XW-n7AHFHNO", "2IA-YQaCu3n", "nips_2022_cUY5OkP3VR", "noDoXpHbYQx", "Bf4k_U_Zap1", "af-ylfAQgpk", "a_yKn6KHeh", "1tQeVZCOQGk", "kwY64Gnbi2e", "8jiCOSgp1N6", "nips_2022_cUY5OkP3VR", "nips_2022_cUY5OkP3VR", "nips_2022_cUY5OkP...
nips_2022_yJV9zp5OKAY
Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach
Data augmentation is a critical contributing factor to the success of deep learning but heavily relies on prior domain knowledge which is not always available. Recent works on automatic data augmentation learn a policy to form a sequence of augmentation operations, which are still pre-defined and restricted to limited options. In this paper, we show that a prior-free autonomous data augmentation's objective can be derived from a representation learning principle that aims to preserve the minimum sufficient information of the labels. Given an example, the objective aims at creating a distant ``hard positive example'' as the augmentation, while still preserving the original label. We then propose a practical surrogate to the objective that can be optimized efficiently and integrated seamlessly into existing methods for a broad class of machine learning tasks, e.g., supervised, semi-supervised, and noisy-label learning. Unlike previous works, our method does not require training an extra generative model but instead leverages the intermediate layer representations of the end-task model for generating data augmentations. In experiments, we show that our method consistently brings non-trivial improvements to the three aforementioned learning tasks from both efficiency and final performance, either or not combined with pre-defined augmentations, e.g., on medical images when domain knowledge is unavailable and the existing augmentation techniques perform poorly. Code will be released publicly.
Accept
This paper presents a new augmentation method (though it is similar to adversarial training) that brings significant improvements across various benchmarks, methods, and tasks. In addition, the authors provide an information-theoretic grounding for the proposed method. The AC appreciates the technical contribution to the community. In the rebuttal, the authors addressed most of the concerns. The AC recommends acceptance. The AC would also like to suggest that the authors comprehensively compare their method with adversarial training in the revision.
val
[ "nneDPYwS5Lw", "O2oUku-41N0", "qkG4vIaG0d", "6V0YygzdOX4", "Bqzcr4Wbi4X", "-Cyf62abLqK", "Ynqf3HwJvp", "trEkA_QJrHd", "UjFRyTBzV4c", "7yAD0tt9Hsu", "l1k3eaxRU_p", "5nRdRO9MdhY", "AIQEQ4-hcgS", "9riZwP5yRLk" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate the reviewer for taking the time to review our responses and make valuable suggestions. We will discuss more about the relationship to adversarial training and keep running more experiments using different pre-trained models to improve the quality of the work.", " Thanks the authors for giving the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 4, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "O2oUku-41N0", "6V0YygzdOX4", "5nRdRO9MdhY", "AIQEQ4-hcgS", "-Cyf62abLqK", "9riZwP5yRLk", "trEkA_QJrHd", "AIQEQ4-hcgS", "5nRdRO9MdhY", "l1k3eaxRU_p", "nips_2022_yJV9zp5OKAY", "nips_2022_yJV9zp5OKAY", "nips_2022_yJV9zp5OKAY", "nips_2022_yJV9zp5OKAY" ]
nips_2022_e2M4CNa-UOS
Efficient Sequence Packing without Cross-contamination: Accelerating Large Language Models without Impacting Performance
Effective training of today's large language models (LLMs) depends on large batches and long sequences for throughput and accuracy. To handle variable-length sequences on hardware accelerators, it is common practice to introduce padding tokens, so that all sequences in a batch have the same length. We show in this paper that the variation in sequence lengths in common NLP datasets is such that up to 50% of all tokens can be padding. In less common, but not extreme, cases (e.g. GLUE-COLA with sequence length 128), the ratio is up to 89%. Existing methods to address the resulting inefficiency are complicated by the need to avoid "cross-contamination" in self-attention, by a reduction in accuracy when sequence ordering information is lost, or by customized kernel implementations only valid for specific accelerators. This paper introduces a new formalization of sequence packing in the context of the well-studied bin packing problem, and presents new algorithms based on this formulation which, for example, confer a 2x speedup for phase 2 pretraining in BERT while preserving downstream performance. We show how existing models can be adapted to ensure mathematical equivalence between the original and packed models, meaning that packed models can be trained with existing pre-training and fine-tuning practices.
Reject
This paper draws attention to the importance of good packing to avoid padding when creating batches. This problem is indeed important in practice, and the paper does a good job studying it. That being said, the machine learning novelty seems limited. One reviewer was strongly supportive of acceptance while the other two thought this paper was below the cut-off. The meta-reviewer thinks that there isn't sufficient ML novelty for NeurIPS.
train
[ "7WfcpqANmi", "0mcUWQc6MM2", "z5xG4y9J8PP", "4St0dkHIVSO", "CmD5X6om-43", "HbsncaMrxf", "9TqegVtut1Z", "5gLAhUBUAb1", "SP3fEgzoE6L" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for responding to my questions and concerns. I have reviewed the response and will discuss with fellow reviewers in the next stage.", " These reviews make great suggestions for clarification and improvement of this paper. We believe we have answered the various technical quest...
[ -1, -1, -1, -1, -1, -1, 4, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "5gLAhUBUAb1", "nips_2022_e2M4CNa-UOS", "SP3fEgzoE6L", "CmD5X6om-43", "5gLAhUBUAb1", "9TqegVtut1Z", "nips_2022_e2M4CNa-UOS", "nips_2022_e2M4CNa-UOS", "nips_2022_e2M4CNa-UOS" ]
nips_2022_9xVWIHFSyfl
Hybrid Neural Autoencoders for Stimulus Encoding in Visual and Other Sensory Neuroprostheses
Sensory neuroprostheses are emerging as a promising technology to restore lost sensory function or augment human capabilities. However, sensations elicited by current devices often appear artificial and distorted. Although current models can predict the neural or perceptual response to an electrical stimulus, an optimal stimulation strategy solves the inverse problem: what is the required stimulus to produce a desired response? Here, we frame this as an end-to-end optimization problem, where a deep neural network stimulus encoder is trained to invert a known and fixed forward model that approximates the underlying biological system. As a proof of concept, we demonstrate the effectiveness of this Hybrid Neural Autoencoder (HNA) in visual neuroprostheses. We find that HNA produces high-fidelity patient-specific stimuli representing handwritten digits and segmented images of everyday objects, and significantly outperforms conventional encoding strategies across all simulated patients.
Accept
This paper formulates the problem of learning how to stimulate a visual neuroprosthesis as a hybrid autoencoder. While the decoder can be taken as a known and fixed model that describes how stimuli produce percepts, the encoder needs to be learned. Once learned, the encoder maps target percepts into stimuli that can be passed into the device (decoder). The motivation and formulation of the problem are especially clear and strong. The paper is well written, and the reviewers and I appreciated the nice solution strategy for a potentially impactful application area. There were some concerns about how generally applicable the approach is. However, the results presented likely do advance the state of the art in this setting. Given my own reading of the paper and the consistently positive reviewer scores, I'm very comfortable endorsing this paper for acceptance.
val
[ "5YX5EpDby0i", "dmUCKo3Mnn_", "bjkvzjNU5Ge", "FfG1ndchCOQ", "9sE2rELZre9", "XcWXX2XA57k", "WO58tIsp7J" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are grateful to the reviewer for their encouraging comments and insightful analysis. \n\n> *The model of the biology, and treatment of the problem of phosphenes is very elegant, and works well. It does suggest that generalizability of the methodology requires equally detailed biological forward models in other...
[ -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "WO58tIsp7J", "XcWXX2XA57k", "9sE2rELZre9", "nips_2022_9xVWIHFSyfl", "nips_2022_9xVWIHFSyfl", "nips_2022_9xVWIHFSyfl", "nips_2022_9xVWIHFSyfl" ]
nips_2022_mzze3bubjk
The Franz-Parisi Criterion and Computational Trade-offs in High Dimensional Statistics
Many high-dimensional statistical inference problems are believed to possess inherent computational hardness. Various frameworks have been proposed to give rigorous evidence for such hardness, including lower bounds against restricted models of computation (such as low-degree functions), as well as methods rooted in statistical physics that are based on free energy landscapes. This paper aims to make a rigorous connection between the seemingly different low-degree and free-energy based approaches. We define a free-energy based criterion for hardness and formally connect it to the well-established notion of low-degree hardness for a broad class of statistical problems, namely all Gaussian additive models and certain models with a sparse planted signal. By leveraging these rigorous connections we are able to: establish that for Gaussian additive models the "algebraic" notion of low-degree hardness implies failure of "geometric" local MCMC algorithms, and provide new low-degree lower bounds for sparse linear regression which seem difficult to prove directly. These results provide both conceptual insights into the connections between different notions of hardness, as well as concrete technical tools such as new methods for proving low-degree lower bounds.
Accept
The paper establishes new bridges between two fundamental notions for the study of average-case hardness of hypothesis testing problems, low-degree functions and the Franz-Parisi free energy, and also connects them to MCMC hardness. The paper has been very well received by the reviewers, who have done a great job. I agree with the reviewers on the quality of the paper. I also think that despite NeurIPS not being the most obvious venue for such a paper, there indeed is an active sub-community of NeurIPS readers very much enthusiastic about such results, connections with statistical physics, etc. All issues raised seem to have been properly answered by the authors, so this is a clear accept for a high-quality paper full of rich notions and new fundamental connections.
val
[ "8X-cft1KiO", "7wh6vBRL8-L", "-7e8S0ESbgR", "KaIMQtffFW", "b-W0dlghSs2", "R-FHjcUXqTb", "izEILp_j2iZ", "YhoYDLaSsd", "yG1bMBH2jXa" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for raising your score. (No need to edit the review -- we just wanted to emphasize that the main point is not just MCMC lower bounds.)", " I thank the authors for their response. Since they have addressed some of my concerns, I'm willing to raise my score. I believe that as a reader of this ...
[ -1, -1, -1, -1, -1, -1, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "7wh6vBRL8-L", "KaIMQtffFW", "b-W0dlghSs2", "YhoYDLaSsd", "yG1bMBH2jXa", "izEILp_j2iZ", "nips_2022_mzze3bubjk", "nips_2022_mzze3bubjk", "nips_2022_mzze3bubjk" ]
nips_2022_6qdUJblMHqy
Toward Efficient Robust Training against Union of $\ell_p$ Threat Models
The overwhelming vulnerability of deep neural networks to carefully crafted perturbations known as adversarial attacks has led to the development of various training techniques to produce robust models. While the primary focus of existing approaches has been directed toward addressing the worst-case performance achieved under a single-threat model, it is imperative that safety-critical systems are robust with respect to multiple threat models simultaneously. Existing approaches that address worst-case performance under the union of such threat models ($\ell_{\infty}, \ell_2, \ell_1$) either utilize adversarial training methods that require multi-step attacks which are computationally expensive in practice, or rely upon fine-tuning of pre-trained models that are robust with respect to a single-threat model. In this work, we show that by carefully choosing the objective function used for robust training, it is possible to achieve similar, or improved worst-case performance over a union of threat models while utilizing only single-step attacks, thereby achieving a significant reduction in computational resources necessary for training. Furthermore, prior work showed that adversarial training specific to the $\ell_1$ threat model is relatively difficult, to the extent that even multi-step adversarially trained models were shown to be prone to gradient-masking. However, the proposed method—when applied on the $\ell_1$ threat model specifically—enables us to obtain the first $\ell_1$ robust model trained solely with single-step adversaries. Finally, to demonstrate the merits of our approach, we utilize a modern set of attack evaluations to better estimate the worst-case performance under the considered union of threat models.
Accept
This paper received mixed recommendations that range from borderline reject to strong accept. After examining all reviewers' comments, the author rebuttal, and the paper itself, I lean toward acceptance mainly because: (i) it seems that the author responses have addressed the reviewers' main concerns (although some reviewers did not respond to the rebuttal); (ii) the contributions made by the paper, as summarized in the paper and author rebuttal, are significant enough and could potentially facilitate the development of adversarial defenses. Nevertheless, the organization and presentation of the paper need to be further improved. For instance, the novelty/contributions over some existing works (e.g., [1, 2]), the computational cost of the proposed method, and the possible limitations are insufficiently discussed in the main text of the original submission. I urge the authors to consider these issues and take careful note of the reviewers' comments and suggestions when preparing the final version.
[1] F. Croce and M. Hein. Adversarial robustness against multiple lp-threat models at the price of one and how to quickly fine-tune robust models to another threat model.
[2] G. Sriramanan, S. Addepalli, A. Baburaj, and V. B. Radhakrishnan. Towards efficient and effective adversarial training. In NeurIPS 2021.
train
[ "OGToiik9EcT", "tn6vi6VSvMw", "xHRqcprisU", "eFMbva3C9Vy", "WnVVfCSMo9n", "qSohmTWEZT0", "TdU4Ya9Gcps", "Ynz7M9oPoY2", "Ul9LHJjCzvu", "fA6ctxY2JIt", "0szqu3Re-Vt", "xwsDux01S6V", "NHelPKP1ZMV", "nlEqewxUrMp", "2oKFM6kLDW", "N7oJjUqwEkK" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer again for their response. We address the specific comments below:\n\n- **Efficacy on Threat Models other than $\\ell_1$**\n\n As discussed in the rebuttal above, NCAT is seen to be highly effective even in cases without the inclusion of the $\\ell_1$ norm. With $\\ell_{\\infty}$ based tra...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 5 ]
[ "xHRqcprisU", "TdU4Ya9Gcps", "Ul9LHJjCzvu", "nlEqewxUrMp", "NHelPKP1ZMV", "N7oJjUqwEkK", "2oKFM6kLDW", "Ul9LHJjCzvu", "nlEqewxUrMp", "0szqu3Re-Vt", "NHelPKP1ZMV", "nips_2022_6qdUJblMHqy", "nips_2022_6qdUJblMHqy", "nips_2022_6qdUJblMHqy", "nips_2022_6qdUJblMHqy", "nips_2022_6qdUJblMHqy"...
nips_2022_yJE7iQSAep
On the Parameterization and Initialization of Diagonal State Space Models
State space models (SSM) have recently been shown to be very effective as a deep learning layer as a promising alternative to sequence models such as RNNs, CNNs, or Transformers. The first version to show this potential was the S4 model, which is particularly effective on tasks involving long-range dependencies by using a prescribed state matrix called the HiPPO matrix. While this has an interpretable mathematical mechanism for modeling long dependencies, it also requires a custom representation and algorithm that makes the model difficult to understand and implement. On the other hand, a recent variant of S4 called DSS showed that restricting the state matrix to be fully diagonal can still preserve the performance of the original model when using a specific initialization based on approximating S4's matrix. This work seeks to systematically understand how to parameterize and initialize diagonal state space models. While it follows from classical results that almost all SSMs have an equivalent diagonal form, we show that the initialization is critical for performance. First, we explain why DSS works mathematically, as the diagonal approximation to S4 surprisingly recovers the same dynamics in the limit of infinite state dimension. We then systematically describe various design choices in parameterizing and computing diagonal SSMs, and perform a controlled empirical study ablating the effects of these choices. Our final model S4D is a simple diagonal version of S4 whose kernel computation requires just 3 lines of code and performs comparably to S4 in almost all settings, with state-of-the-art results in image, audio, and medical time-series domains, and 85\% average on the Long Range Arena benchmark.
Accept
All reviewers agree that the paper proposes an interesting approach to examining diagonal state space models. Although some reviewers had technical concerns in their initial reviews, those have largely been resolved by the authors' responses. Thus, although some points should still be modified from the current form, I think we can expect the authors to revise the paper for the camera-ready by reflecting the discussion. Based on this, I recommend acceptance of this paper.
train
[ "fe53CF_7Nbc", "EQDWEKVtKdU", "9b7Lo_4SDlh", "Tzk5-j-PHbn", "kGeQFRdU0tr", "t512rdas768", "RmAdY5a_zFQJ", "Mg3NdN8lzvt", "UX7u1k_c-B", "fEoH7WWkxQ1", "EYK0ZszoA39", "Sgta6uGnnc2" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate the response and further suggestions.\n\n> For theorem 6 in the supplementary material is there a proof anywhere? I don't think its extremely obvious that there should be no proof/reference.\n\nThe proof is in the reference \"How to Train Your HiPPO\" cited right before Theorem 6. A copy of the manu...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 2 ]
[ "EQDWEKVtKdU", "kGeQFRdU0tr", "Sgta6uGnnc2", "EYK0ZszoA39", "fEoH7WWkxQ1", "fEoH7WWkxQ1", "UX7u1k_c-B", "nips_2022_yJE7iQSAep", "nips_2022_yJE7iQSAep", "nips_2022_yJE7iQSAep", "nips_2022_yJE7iQSAep", "nips_2022_yJE7iQSAep" ]
nips_2022_kQgLvIFLyIu
Coreset for Line-Sets Clustering
The input to the \emph{line-sets $k$-median} problem is an integer $k \geq 1$, and a set $\mathcal{L} = \{L_1,\dots,L_n\}$ that contains $n$ sets of lines in $\mathbb{R}^d$. The goal is to compute a set $C$ of $k$ centers (points in $\mathbb{R}^d$) that minimizes the sum $\sum_{L \in \mathcal{L}}\min_{\ell\in L, c\in C}\mathrm{dist}(\ell,c)$ of Euclidean distances from each set to its closest center, where $\mathrm{dist}(\ell,c):=\min_{x\in \ell}\|x-c\|_2$. An \emph{$\varepsilon$-coreset} for this problem is a weighted subset of sets in $\mathcal{L}$ that approximates this sum up to a $1 \pm \varepsilon$ multiplicative factor, for every set $C$ of $k$ centers. We prove that \emph{every} such input set $\mathcal{L}$ has a small $\varepsilon$-coreset, and provide the first coreset construction for this problem and its variants. The coreset consists of $O(\log^2 n)$ weighted line-sets from $\mathcal{L}$, and is constructed in $O(n\log n)$ time for every fixed $d, k\geq 1$ and $\varepsilon \in (0,1)$. The main technique is based on a novel reduction to a ``fair clustering'' of colored points to colored centers. We then provide a coreset for this coloring problem, which may be of independent interest. Open source code and experiments are also provided.
Accept
The paper gives new coresets for sets of lines in a high-dimensional space. The problem comes from modeling an input data point with 2 missing attributes, one continuous and one discrete, as a set of lines. Previous results only worked with missing continuous attributes. The reviewers appreciate the technical contribution of the paper, and it might lead to additional results for data with more general patterns of missing attributes. On the other hand, the contribution seems somewhat limited, as the previous works for missing continuous attributes can handle a general number of missing attributes, up to 10-20, whereas the formulation here is very specific, albeit with 1 discrete attribute. More experimental evaluation comparing with previous coreset results in common special cases would also strengthen the paper.
train
[ "6BduyS6w_G", "--RUVSnkAWo", "YQZhnDZsfDl", "indBe3fdiq9", "wiK5xIXdvqN", "0C24WbCHwv6", "KiGklIz6tP", "zP_Rxh7xfYr", "fHlL6q-TAPn", "rda-cZ5iAZp", "obbvruhSv1J", "1U9YBBX1F_s", "RvU4LBfoqKR", "X1YgTE7geyl" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers, ACs, and SACs,\n\nAs the discussion period is coming to an end, we would like to express our deep gratitude for spending the time to review and assess our paper, as well as for providing us with insightful comments throughout the review process. Your feedback has already helped us to improve our p...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "nips_2022_kQgLvIFLyIu", "wiK5xIXdvqN", "0C24WbCHwv6", "X1YgTE7geyl", "rda-cZ5iAZp", "zP_Rxh7xfYr", "nips_2022_kQgLvIFLyIu", "RvU4LBfoqKR", "X1YgTE7geyl", "1U9YBBX1F_s", "nips_2022_kQgLvIFLyIu", "nips_2022_kQgLvIFLyIu", "nips_2022_kQgLvIFLyIu", "nips_2022_kQgLvIFLyIu" ]
nips_2022_SyD-b2m2meG
Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve
We find a surprising connection between multitask learning and robustness to neuron failures. Our experiments show that bilingual language models retain higher performance under various neuron perturbations, such as random deletions, magnitude pruning and weight noise. Our study is motivated by research in cognitive science showing that symptoms of dementia and cognitive decline appear later in bilingual speakers compared to monolingual patients with similar brain damage, a phenomenon called bilingual cognitive reserve. Our language model experiments replicate this phenomenon on bilingual GPT-2 and other models. We provide a theoretical justification of this robustness by mathematically analyzing linear representation learning and showing that multitasking creates more robust representations. We open-source our code and models at the following URL: https://github.com/giannisdaras/multilingual_robustness.
Accept
This paper has received positive reviews. There was some initial concern by one reviewer about:
* Missing discussion of neuroscience research
* Why noise is added to model weights

But these issues were clarified in the author response and the reviewer has updated their evaluation. There was additional discussion on the connection to neuroscience research, with one reviewer suggesting not to push the authors to make stronger links than they see fit. The limitations in Appendix E are appropriate and can even feature more prominently in the main body. In any case, these weaknesses are relatively minor compared to the paper's strengths:
* Interesting motivation and connections between bilingual models and people, and how deficits impact them
* Insightful theoretical analysis
* Experiments with multiple settings

I therefore recommend acceptance. And in any case, I encourage the authors to take into account all comments by the reviewers when preparing their revision, especially the following:
* Comments and questions by Reviewer WfPh, especially about the experimental paradigm.
* The comment and discussion by Reviewer 1N4w on the theoretical analysis and L2 regularization as a comparison.
train
[ "y7wHTIOx-GB", "P94CnDsi56j", "Au9d466oWP", "F02cOGknGY-", "DZPzf6mAau5", "iEhdClwk_6H", "fKg2JUCoPBH", "WnaTPw9xBi", "5DoTcTAp5ZR7", "SbkkuBSC1k4", "vVUrGeBt-pk", "gzRreSwoiU4", "Zr_w0CgYfpn", "s2NiViaG7DD", "3J6wbCFiBgF" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarifications. One last point regarding the title. You propose \"A Simple Model for Bilingual Cognitive Reserve\". The issue perhaps is that this title continues to claim that the model you propose *explains* a neuroscientific phenomenon (Bilingual Cognitive Reserve), something for which no dir...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 4, 3 ]
[ "P94CnDsi56j", "Au9d466oWP", "iEhdClwk_6H", "DZPzf6mAau5", "3J6wbCFiBgF", "fKg2JUCoPBH", "Zr_w0CgYfpn", "5DoTcTAp5ZR7", "s2NiViaG7DD", "gzRreSwoiU4", "nips_2022_SyD-b2m2meG", "nips_2022_SyD-b2m2meG", "nips_2022_SyD-b2m2meG", "nips_2022_SyD-b2m2meG", "nips_2022_SyD-b2m2meG" ]
nips_2022_w-Aq4vmnTOP
On Learning and Refutation in Noninteractive Local Differential Privacy
We study two basic statistical tasks in non-interactive local differential privacy (LDP): *learning* and *refutation*: learning requires finding a concept that best fits an unknown target function (from labelled samples drawn from a distribution), whereas refutation requires distinguishing between data distributions that are well-correlated with some concept in the class, versus distributions where the labels are random. Our main result is a complete characterization of the sample complexity of agnostic PAC learning for non-interactive LDP protocols. We show that the optimal sample complexity for any concept class is captured by the approximate $\gamma_2$ norm of a natural matrix associated with the class. Combined with previous work, this gives an *equivalence* between agnostic learning and refutation in the agnostic setting.
Accept
This paper provides a characterization of the sample complexity of agnostic learning under local differential privacy (LDP) and show that it is equivalent to refutation. The mathematical insights from the paper might be valuable in future research. It is a clean contribution, with positive feedback from all reviewers.
train
[ "wjaDSTG238", "JMBXL17Z3Tj", "PeI2s1KgYPh", "v0lUVcuG3sg", "oMXHb2FxHZM", "9-c3DK2lgnq", "9t4FhAwJnSM", "75e4VtJpwJi", "jy_Fnz6vafD" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers for their time and their thoughtful comments. We have submitted a revision of the paper, in which we fixed several typos, and did our best to address reviewer comments within the space limits. ", " We thank the reviewer for their interest in our paper.\n\nWe have added a definition of the...
[ -1, -1, -1, -1, -1, 7, 7, 8, 6 ]
[ -1, -1, -1, -1, -1, 2, 2, 3, 2 ]
[ "nips_2022_w-Aq4vmnTOP", "jy_Fnz6vafD", "75e4VtJpwJi", "9t4FhAwJnSM", "9-c3DK2lgnq", "nips_2022_w-Aq4vmnTOP", "nips_2022_w-Aq4vmnTOP", "nips_2022_w-Aq4vmnTOP", "nips_2022_w-Aq4vmnTOP" ]
nips_2022_StlwkcFsjaZ
Learning to Follow Instructions in Text-Based Games
Text-based games present a unique class of sequential decision making problem in which agents interact with a partially observable, simulated environment via actions and observations conveyed through natural language. Such observations typically include instructions that, in a reinforcement learning (RL) setting, can directly or indirectly guide a player towards completing reward-worthy tasks. In this work, we study the ability of RL agents to follow such instructions. We conduct experiments that show that the performance of state-of-the-art text-based game agents is largely unaffected by the presence or absence of such instructions, and that these agents are typically unable to execute tasks to completion. To further study and address the task of instruction following, we equip RL agents with an internal structured representation of natural language instructions in the form of Linear Temporal Logic (LTL), a formal language that is increasingly used for temporally extended reward specification in RL. Our framework both supports and highlights the benefit of understanding the temporal semantics of instructions and in measuring progress towards achievement of such a temporally extended behaviour. Experiments with 500+ games in TextWorld demonstrate the superior performance of our approach.
Accept
This paper proposes a linear temporal logic-based solution to some of the common failure modes of existing agents addressing text-based games, stemming from an analysis of these failure modes. There was disagreement about whether the work is publishable based on the limited range of domains LTL supports. That said, from reading the discussion, it seems that the emerging consensus is that it is sufficiently novel and interesting for this work to prove the concept in text domains, and that further work may seek to extend such approaches (or draw inspiration from them) in a wider range of domains. I support acceptance.
train
[ "dKdAaYIL5i", "hMNUkMjyRxo", "sJfb7g8lNFj", "XeRp-Aar9C", "p3uf_jWZOoV", "5ykfsNV1SSQ", "AU0mjE2n96", "EqiIEJhWMBO", "gy-_9cWdn5X", "E4TkIqK4S4L", "gtBzqpqg7L9", "DZNRCtcA595" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for addressing my questions. I agree and am excited about the potential to incorporate formal semantics in interactive reasoning tasks.\n\n- Novelty: Thanks for the clarification. Using belief graphs for progress monitoring instead of an event detector is an important difference ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 4 ]
[ "XeRp-Aar9C", "p3uf_jWZOoV", "AU0mjE2n96", "E4TkIqK4S4L", "DZNRCtcA595", "gy-_9cWdn5X", "gtBzqpqg7L9", "nips_2022_StlwkcFsjaZ", "nips_2022_StlwkcFsjaZ", "nips_2022_StlwkcFsjaZ", "nips_2022_StlwkcFsjaZ", "nips_2022_StlwkcFsjaZ" ]
nips_2022_wph_3smhuec
Non-Convex Bilevel Games with Critical Point Selection Maps
Bilevel optimization problems involve two nested objectives, where an upper-level objective depends on a solution to a lower-level problem. When the latter is non-convex, multiple critical points may be present, leading to an ambiguous definition of the problem. In this paper, we introduce a key ingredient for resolving this ambiguity through the concept of a selection map which allows one to choose a particular solution to the lower-level problem. Using such maps, we define a class of hierarchical games between two agents that resolve the ambiguity in bilevel problems. This new class of games requires introducing new analytical tools in Morse theory to extend implicit differentiation, a technique used in bilevel optimization resulting from the implicit function theorem. In particular, we establish the validity of such a method even when the latter theorem is inapplicable due to degenerate critical points. Finally, we show that algorithms for solving bilevel problems based on unrolled optimization solve these games up to approximation errors due to finite computational power. A simple correction to these algorithms is then proposed for removing these errors.
Accept
This paper studies bilevel optimization problems and proposes techniques for disambiguating cases where the lower-level objective has multiple optimal solutions.

The main points the reviewers raise in favor of acceptance are that:
1. The paper provides theoretical justification for techniques used to solve ambiguous bilevel optimization in practice, and this theory leads to algorithmic improvements.
2. The paper is well written.
3. The topic and results are of interest to the NeurIPS community.

Reviewer oofy argues strongly for reject with the following main concerns:
1. The notation of the paper is confusing (and in particular, there are concerns about the use of the variable y when introducing the bilevel games with selection (BGS)). Reviewer gXUJ also had some confusion about the BGS setup, and reviewer tW4t agreed that the use of y is inconsistent in the BGS setup.
2. Introducing a selection map doesn't really resolve ambiguity in the optimization problem because choosing the selection map is essentially resolving the ambiguity by hand.

In the rebuttal, the authors argue that the theoretical analysis is still interesting and useful because many algorithms used to solve bilevel optimization problems in practice are implicitly making a choice of selection map, and the theoretical analysis of the paper allows us to understand what those techniques are really optimizing for. Reviewer tW4t also found the example selection maps provided in the paper to be compelling. I am convinced by the author rebuttal and reviewer tW4t that the selection map is useful. As for the notational concerns in the introduction of BGS, I think the paper would benefit from added discussion on the role of y in the upper level. In particular, from the author responses and later sections of the paper, it appears that y plays the role of a warm start or initialization for the agent optimizing in the lower level. This discussion should be included near the introduction of BGS, since otherwise it is confusing why the selection map is not a function from X -> Y.
train
[ "IrvcVPV6Sh5", "irzheYFQHvS", "PjXsMS02VO", "Hrur_-vFMB4", "0na3WOHFt3J", "CS472QCekrE", "TqD3oCBIjS-", "gKdRm8VI18V", "LpvQL3bpT3D", "KfQgVxE5wUX", "zhfQsEJkwuZ", "Dx6l7MeUwea" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers, \n\nThank you again for your time and effort in assessing our submission.\n\nWe also thank you reviewer (gXUJ) for your acknowledgment of the response and we are happy to see that we answered all your questions.\n\nWe understand that this is the summer break but we hope to hear from the rest of th...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 2 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 4, 3, 4 ]
[ "nips_2022_wph_3smhuec", "0na3WOHFt3J", "Hrur_-vFMB4", "nips_2022_wph_3smhuec", "LpvQL3bpT3D", "KfQgVxE5wUX", "zhfQsEJkwuZ", "Dx6l7MeUwea", "nips_2022_wph_3smhuec", "nips_2022_wph_3smhuec", "nips_2022_wph_3smhuec", "nips_2022_wph_3smhuec" ]
nips_2022_agihaAKJ89X
Differentially Private Generalized Linear Models Revisited
We study the problem of $(\epsilon,\delta)$-differentially private learning of linear predictors with convex losses. We provide results for two subclasses of loss functions. The first case is when the loss is smooth and non-negative but not necessarily Lipschitz (such as the squared loss). For this case, we establish an upper bound on the excess population risk of $\tilde{O}\left(\frac{\Vert w^*\Vert}{\sqrt{n}} + \min\left\{\frac{\Vert w^* \Vert^2}{(n\epsilon)^{2/3}},\frac{\sqrt{d}\Vert w^*\Vert^2}{n\epsilon}\right\}\right)$, where $n$ is the number of samples, $d$ is the dimension of the problem, and $w^*$ is the minimizer of the population risk. Apart from the dependence on $\Vert w^\ast\Vert$, our bound is essentially tight in all parameters. In particular, we show a lower bound of $\tilde{\Omega}\left(\frac{1}{\sqrt{n}} + {\min\left\{\frac{\Vert w^*\Vert^{4/3}}{(n\epsilon)^{2/3}}, \frac{\sqrt{d}\Vert w^*\Vert}{n\epsilon}\right\}}\right)$. We also revisit the previously studied case of Lipschitz losses \cite{SSTT21}. For this case, we close the gap in the existing work and show that the optimal rate is (up to log factors) $\Theta\left(\frac{\Vert w^*\Vert}{\sqrt{n}} + \min\left\{\frac{\Vert w^*\Vert}{\sqrt{n\epsilon}},\frac{\sqrt{\text{rank}}\Vert w^*\Vert}{n\epsilon}\right\}\right)$, where $\text{rank}$ is the rank of the design matrix. This improves over existing work in the high privacy regime. Finally, our algorithms involve a private model selection approach that we develop to enable attaining the stated rates without a-priori knowledge of $\Vert w^*\Vert$.
Accept
This work studies the problem of learning GLMs (generalized linear models) in the differentially private setting. The main results are nearly tight upper and lower bounds on the sample complexity of this task in a number of settings, including the smooth nonnegative case and the Lipschitz case. The reviewers appreciated the theoretical contribution and agreed that this paper is a good fit for the conference.
train
[ "FZRgTDtdaKU", "TkIQNYgO20", "xPb_Ebb2dY", "807ukJGATOk", "y4IhkKh6iBj", "9mmhF_P3uj" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for these observations. We will add a clarification about the rank bound in the paper. We also observed this coincidence with [AFKT21], but could not find any principled explanation for this coincidence, and so did not comment on it in the paper. With regards to [LT18], we in fact use the generalized ex...
[ -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, 2, 3, 5 ]
[ "9mmhF_P3uj", "y4IhkKh6iBj", "807ukJGATOk", "nips_2022_agihaAKJ89X", "nips_2022_agihaAKJ89X", "nips_2022_agihaAKJ89X" ]
nips_2022_IUikebJ1Bf0
Autoformalization with Large Language Models
Autoformalization is the process of automatically translating from natural language mathematics to formal specifications and proofs. A successful autoformalization system could advance the fields of formal verification, program synthesis, and artificial intelligence. While the long-term goal of autoformalization seemed elusive for a long time, we show large language models provide new prospects towards this goal. We make the surprising observation that LLMs can correctly translate a significant portion ($25.3\%$) of mathematical competition problems perfectly to formal specifications in Isabelle/HOL. We demonstrate the usefulness of this process by improving a previously introduced neural theorem prover via training on these autoformalized theorems. Our methodology results in a new state-of-the-art result on the MiniF2F theorem proving benchmark, improving the proof rate from~$29.6\%$ to~$35.2\%$.
Accept
The reviewers appreciated the importance of the problem studied. The reviewers had concerns regarding novelty of the proposed solution but were convinced by a detailed empirical evaluation. Overall, this is a clear accept. Congratulations to the authors; please take all the feedback into account while preparing the final version.
train
[ "g-3sUPGkDrp", "4WhBw6d1UaJ", "SzzQUVPMgvJ", "gEtw3GMobFF", "VR6ztCwMg5Q", "Jeg0L_4nQ4o", "kZj6LBXCTwJ", "d1WEkDf4ZM", "knCoFUZS9_z", "FJD1LXQUGM", "NrjbIbsRXkp", "7Vt_brf98x", "pBBItC-yAKr", "n5VjlBlXisE", "yMu7RWA9U1", "ywDd6mGkSB" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **\"encourage the authors to rewrite lines 141-145 while taking into account the results of the new experiments\"**\n\nIn line 141-145, we did not claim that the model learned the syntax from the few-shot prompts. Our claims emphasize the fact that there is a small amount of Isabelle proofs on the internet and it...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "4WhBw6d1UaJ", "FJD1LXQUGM", "VR6ztCwMg5Q", "Jeg0L_4nQ4o", "d1WEkDf4ZM", "knCoFUZS9_z", "nips_2022_IUikebJ1Bf0", "knCoFUZS9_z", "yMu7RWA9U1", "pBBItC-yAKr", "ywDd6mGkSB", "n5VjlBlXisE", "nips_2022_IUikebJ1Bf0", "nips_2022_IUikebJ1Bf0", "nips_2022_IUikebJ1Bf0", "nips_2022_IUikebJ1Bf0" ]
nips_2022_XzeTJBq1Ce2
The Role of Baselines in Policy Gradient Optimization
We study the effect of baselines in on-policy stochastic policy gradient optimization, and close the gap between the theory and practice of policy optimization methods. Our first contribution is to show that the \emph{state value} baseline allows on-policy stochastic \emph{natural} policy gradient (NPG) to converge to a globally optimal policy at an $O(1/t)$ rate, which was not previously known. The analysis relies on two novel findings: the expected progress of the NPG update satisfies a stochastic version of the non-uniform \L{}ojasiewicz (N\L{}) inequality, and with probability 1 the state value baseline prevents the optimal action's probability from vanishing, thus ensuring sufficient exploration. Importantly, these results provide a new understanding of the role of baselines in stochastic policy gradient: by showing that the variance of natural policy gradient estimates remains unbounded with or without a baseline, we find that variance reduction \emph{cannot} explain their utility in this setting. Instead, the analysis reveals that the primary effect of the value baseline is to \textbf{reduce the aggressiveness of the updates} rather than their variance. That is, we demonstrate that a finite variance is \emph{not necessary} for almost sure convergence of stochastic NPG, while controlling update aggressiveness is both necessary and sufficient. Additional experimental results verify these theoretical findings.
Accept
The paper studies the effect of subtracting baselines in natural policy gradient methods for reinforcement learning. The authors offer a new perspective on update aggressiveness that is both intuitive and technically sound. All the reviewers appreciated the exposition around "vicious circle" and "virtuous circle" of updates, and how this is different from variance reduction. There were concerns that the empirical evaluations are limited, but the authors pointed to experiments in depth-4 tree MDPs to provide additional evidence. There were several clarifying questions from the reviews that the authors addressed comprehensively during the feedback phase. Please include these clarifications in a paper revision to strengthen the current exposition.
train
[ "sbWKVwobAt", "jY5_2peIreg", "SgZFh34wlPP", "W_OMIvuPEcG", "JfdQLhuGMjB", "G762KpyUXKT", "60uJlKzd0qh", "p33itjGE8l", "rgwQ8yyziY", "yv5qkTTkXY", "234aq1gEboI", "6oVzBib59LP", "gamALzpA3Ct", "YZKr_vylr1AQ", "W3bs_lZMdZd", "d1wgKkKms3A", "cvvBfdw-fBh", "APLLx-AWbwz", "rOnO-esmI4z"...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Dear Reviewer aDcz,\n\nThank you for reconsidering the rating! \n\nAs you suggested, we will give more background for NPG and point relevant appendix sections in subsequent versions.", " My apologies for the late response. Thank you very much for the detailed response. I am changing my rating to a 6.\n\nI would...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "jY5_2peIreg", "yv5qkTTkXY", "l_47Ol5sZIW", "rgwQ8yyziY", "G762KpyUXKT", "60uJlKzd0qh", "d1wgKkKms3A", "gamALzpA3Ct", "rOnO-esmI4z", "234aq1gEboI", "6oVzBib59LP", "l_47Ol5sZIW", "YZKr_vylr1AQ", "W3bs_lZMdZd", "sxsn2h61hao", "cvvBfdw-fBh", "APLLx-AWbwz", "PgS5ZQl-Zkd", "nips_2022_...
nips_2022_NoAZRVthZL
Learning Mixed Multinomial Logits with Provable Guarantees
A mixture of multinomial logits (MMNL) generalizes the single logit model, which is commonly used in predicting the probabilities of different outcomes. While extensive algorithms have been developed in the literature to learn MMNL models, theoretical results are limited. Built on the Frank-Wolfe (FW) method, we propose a new algorithm that learns both mixture weights and component-specific logit parameters with provable convergence guarantees for an arbitrary number of mixtures. Our algorithm utilizes historical choice data to generate a set of candidate choice probability vectors, each being close to the ground truth with a high probability. We further provide a sample complexity analysis to show that only a polynomial number of samples is required to secure the performance guarantee of our algorithm. Finally, we conduct simulation studies to evaluate the performance and demonstrate how to apply our algorithm to real-world applications.
Accept
This paper proposes a Frank-Wolfe method for learning mixtures of multinomial logit models. The algorithm is novel and the first of its kind to be applied to this problem. This is an important addition to the widely studied topic of preference learning, and this result extends the capability of what can be efficiently learned under realistic scenarios with mixture models. Mixture models for ranking are notoriously hard problems, and a novel approach such as the one introduced in this paper is critical in bridging the gap to making preference learning practical. All the reviewers' concerns have been addressed adequately in the rebuttal. Given the good quality of the paper, it will be quite beneficial for the readers if the paper spends some more time explaining how the result compares to the extensive work that has been done on mixtures of multinomial logit models. Such a nicely written related work section will only make the paper more interesting and broaden the impact of the results. Some important and closely related works that predate [Chierichetti et al. 2018] are missing, as are some more recent related works. Here is a sampling of such related work that should be compared with:
- Learning Mixtures of Random Utility Models with Features from Incomplete Preferences. Zhibing Zhao, Ao Liu, Lirong Xia (IJCAI-22)
- On the Identifiability of Mixtures of Ranking Models. Xiaomin Zhang, Xucheng Zhang, Po-Ling Loh, Yingyu Liang. https://arxiv.org/abs/2201.13132
- Learning Mixtures of Plackett-Luce Models from Structured Partial Orders. Zhibing Zhao, Lirong Xia (NeurIPS 2019)
- Learning Plackett-Luce Mixtures from Partial Preferences. Ao Liu, Zhibing Zhao, Chao Liao, Pinyan Lu, Lirong Xia (AAAI-19)
- Learning Mixtures of Plackett-Luce Models. Zhibing Zhao, Peter Piech, Lirong Xia (ICML-16)
- Collaboratively Learning Preferences from Ordinal Data. Sewoong Oh, Kiran K. Thekumparampil, Jiaming Xu (NIPS 2015)
- A Topic Modeling Approach to Ranking. Weicong Ding, Prakash Ishwar, Venkatesh Saligrama (AISTATS-15)
- Learning Mixed Multinomial Logit Model from Ordinal Data. Sewoong Oh, Devavrat Shah (NIPS-14)
train
[ "Fwg3TkkFZ8h", "YY8WbM8YBcC", "TIaCvvS_6ax", "DaivbXv8Rq1", "i-QnHbPgAz", "2rH5j9lgXY0", "BEbfepqi7fGn", "v1gXtfqqPl", "_sq3ncLsWk", "y-3_jOs5MvB", "ux6_lknUqvz", "yJiVCvW85G", "F4ZGmCG3Fu", "vReLcLI_wQL", "yZXbjS5n9eV", "N6imrnvFwFN", "KXOc8nPKdq", "8hasLbOqoOD" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for his/her time to read through our response. Please let us know if you have any further questions.", " We thank the reviewer for his/her time to read through our response. Please let us know if you have any further questions.", " We thank the reviewer for his/her time to read through o...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 3 ]
[ "i-QnHbPgAz", "BEbfepqi7fGn", "2rH5j9lgXY0", "nips_2022_NoAZRVthZL", "vReLcLI_wQL", "_sq3ncLsWk", "ux6_lknUqvz", "8hasLbOqoOD", "8hasLbOqoOD", "KXOc8nPKdq", "KXOc8nPKdq", "N6imrnvFwFN", "yZXbjS5n9eV", "yZXbjS5n9eV", "nips_2022_NoAZRVthZL", "nips_2022_NoAZRVthZL", "nips_2022_NoAZRVthZ...
nips_2022_hTCZbhKaDJz
Phase transitions in when feedback is useful
Sensory observations about the world are invariably ambiguous. Inference about the world's latent variables is thus an important computation for the brain. However, computational constraints limit the performance of these computations. These constraints include energetic costs for neural activity and noise on every channel. Efficient coding is one prominent theory that describes how such limited resources can best be used. In one incarnation, this leads to a theory of predictive coding, where predictions are subtracted from signals, reducing the cost of sending something that is already known. This theory does not, however, account for the costs or noise associated with those predictions. Here we offer a theory that accounts for both feedforward and feedback costs, and noise in all computations. We formulate this inference problem as message-passing on a graph whereby feedback serves as an internal control signal aiming to maximize how well an inference tracks a target state while minimizing the costs of computation. We apply this novel formulation of inference as control to the canonical problem of inferring the hidden scalar state of a linear dynamical system with Gaussian variability. The best solution depends on architectural constraints, such as Dale's law, the ubiquitous law that each neuron makes solely excitatory or inhibitory postsynaptic connections. This biological structure can create asymmetric costs for feedforward and feedback channels. Under such conditions, our theory predicts the gain of optimal predictive feedback and how it is incorporated into the inference computation. We show that there is a non-monotonic dependence of optimal feedback gain as a function of both the computational parameters and the world dynamics, leading to phase transitions in whether feedback provides any utility in optimal inference under computational constraints.
Accept
Sound and clearly presented contribution to the field of predictive coding.
train
[ "LRhHFw8OcXS", "k7Xa-1GmJfG", "4oaT8MhUpD3", "nGvPb_nimzA", "2JIcmu4YRV", "Bo0HRbFKwcm", "b8bCl_Wxxlm", "4Q6SvJ2zipx", "gdb9PsU7dTq", "iPkTI7eLgE", "U1SDuuGGYWT" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am happy with the answers provided by the authors, especially about the plan to answer Weakness 2. I remain positive about this paper, as shown by my (unchanged) evaluation.\n\nNote: I modified in Q1 the incomplete sentence \"where the estimation is.\" to \"where the estimation takes place under constraints (fo...
[ -1, -1, -1, -1, -1, -1, -1, 5, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "b8bCl_Wxxlm", "2JIcmu4YRV", "Bo0HRbFKwcm", "4Q6SvJ2zipx", "gdb9PsU7dTq", "iPkTI7eLgE", "U1SDuuGGYWT", "nips_2022_hTCZbhKaDJz", "nips_2022_hTCZbhKaDJz", "nips_2022_hTCZbhKaDJz", "nips_2022_hTCZbhKaDJz" ]
nips_2022_q_AeTuxv02D
OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs
This work provides the first theoretical study on the ability of graph Message Passing Neural Networks (gMPNNs) ---such as Graph Neural Networks (GNNs)--- to perform inductive out-of-distribution (OOD) link prediction tasks, where deployment (test) graph sizes are larger than training graphs. We first prove non-asymptotic bounds showing that link predictors based on permutation-equivariant (structural) node embeddings obtained by gMPNNs can converge to a random guess as test graphs get larger. We then propose a theoretically-sound gMPNN that outputs structural pairwise (2-node) embeddings and prove non-asymptotic bounds showing that, as test graphs grow, these embeddings converge to embeddings of a continuous function that retains its ability to predict links OOD. Empirical results on random graphs show agreement with our theoretical results.
Accept
This paper makes a compelling contribution to the study of size generalization in the inductive link prediction setting. The authors extend previous results on size generalization for graph classification in a non-trivial manner and propose a new MPNN architecture with improved theoretical properties. Though the analysis is based on contrived random graph models (graphons), it's very encouraging to see that the proposed changes to the MPNN architecture lead to better experimental results. The authors also made a good effort in the rebuttal to explain their work and to enhance their experiments following the reviewers' feedback.
train
[ "Wqm2xP-yUXV", "2asLXoZ_lh4", "35XnWR5nwJR", "4BY8nFcvxRp", "8E8PR0C2USk", "-11vhhbte4", "Snx-5M4nhgu", "9wtiTfcUC7U", "KTFBqLLrhIk", "jZioER42Kdq", "JycshRMilR", "KZPpmJRugQj", "CF4EU3ZiLmV", "LDJ5MaScuk3", "ZopqTs1SNdL", "Zj1FM_jczSI" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nThanks for your detailed reply. Some of the concerns are addressed. However, the major concern that \"A large portion of the (theoretical) analyses are based on the SBM model with isomorphic blocks\" is not addressed. Hence, I will adjust my rating to 5. ", " Dear authors,\n\nThanks for your de...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "8E8PR0C2USk", "KTFBqLLrhIk", "-11vhhbte4", "Snx-5M4nhgu", "9wtiTfcUC7U", "KTFBqLLrhIk", "JycshRMilR", "KZPpmJRugQj", "Zj1FM_jczSI", "ZopqTs1SNdL", "LDJ5MaScuk3", "CF4EU3ZiLmV", "nips_2022_q_AeTuxv02D", "nips_2022_q_AeTuxv02D", "nips_2022_q_AeTuxv02D", "nips_2022_q_AeTuxv02D" ]
nips_2022_HnIQrSY7vPI
Sample-Efficient Reinforcement Learning of Partially Observable Markov Games
This paper considers the challenging tasks of Multi-Agent Reinforcement Learning (MARL) under partial observability, where each agent only sees her own individual observations and actions that reveal incomplete information about the underlying state of system. This paper studies these tasks under the general model of multiplayer general-sum Partially Observable Markov Games (POMGs), which is significantly larger than the standard model of Imperfect Information Extensive-Form Games (IIEFGs). We identify a rich subclass of POMGs---weakly revealing POMGs---in which sample-efficient learning is tractable. In the self-play setting, we prove that a simple algorithm combining optimism and Maximum Likelihood Estimation (MLE) is sufficient to find approximate Nash equilibria, correlated equilibria, as well as coarse correlated equilibria of weakly revealing POMGs, in a polynomial number of samples when the number of agents is small. In the setting of playing against adversarial opponents, we show that a variant of our optimistic MLE algorithm is capable of achieving sublinear regret when being compared against the optimal maximin policies. To our best knowledge, this work provides the first line of sample-efficient results for learning POMGs.
Accept
While most reviewers were positive about the paper, one reviewer expressed concerns about the assumption and the technical novelty of the paper. After reading the discussion, the AC believes that the main assumption of weakly revealing is standard. This is commonly used in control and learning in HMMs/PSRs (we just cannot learn efficiently without this type of assumption). The AC is also satisfied by the authors' response to the technical novelty. The results presented in this paper are certainly more than just replacing max by min-max in the Bellman equation. In summary, the AC would recommend acceptance in this case.
train
[ "4aGf1F5ENg", "NeRHL6tGvQh", "r-Iv8vC51n", "OtRJGL7Ow-c", "nfhJE7RnsMy", "xVtH0JR7FoC", "NFXZnKzPzsh", "u_8QJoR2PU", "a_VMY2r2m2C", "_G7-cVBGUhq", "aAFCqs2KJ_s", "XEqDV8hnGiT", "1VrivWBlJbR" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Q**. Once an MDP (or POMDP) problem is solved, solving zero-sum games just requires replacing the max operator with a minmax. And almost all the results go through without too much difficulty.\n\n**A**. We respectfully but strongly disagree with this comment:\n1. This paper considers multiplayer general-sum set...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 2, 3 ]
[ "NeRHL6tGvQh", "xVtH0JR7FoC", "u_8QJoR2PU", "NFXZnKzPzsh", "xVtH0JR7FoC", "_G7-cVBGUhq", "aAFCqs2KJ_s", "XEqDV8hnGiT", "1VrivWBlJbR", "nips_2022_HnIQrSY7vPI", "nips_2022_HnIQrSY7vPI", "nips_2022_HnIQrSY7vPI", "nips_2022_HnIQrSY7vPI" ]
nips_2022_K1NPDQ7E-Cl
Collaborative Learning of Discrete Distributions under Heterogeneity and Communication Constraints
In modern machine learning, users often have to collaborate to learn distributions that generate the data. Communication can be a significant bottleneck. Prior work has studied homogeneous users---i.e., whose data follow the same discrete distribution---and has provided optimal communication-efficient methods. However, these methods rely heavily on homogeneity, and are less applicable in the common case when users' discrete distributions are heterogeneous. Here we consider a natural and tractable model of heterogeneity, where users' discrete distributions only vary sparsely, on a small number of entries. We propose a novel two-stage method named SHIFT: First, the users collaborate by communicating with the server to learn a central distribution; relying on methods from robust statistics. Then, the learned central distribution is fine-tuned to estimate the individual distributions of users. We show that our method is minimax optimal in our model of heterogeneity and under communication constraints. Further, we provide experimental results using both synthetic data and $n$-gram frequency estimation in the text domain, which corroborate its efficiency.
Accept
With 4, 7, and 7, the paper got a wide range of scores. The reviewer who assigned score 4 did not identify any clear flaws in the paper, and did not engage with the authors or other reviewers. Hence I put less weight on the lower review and recommend accepting the paper.
train
[ "zlkIgjhgk-P", "ibQxYpTIwA_", "dafa3CE1HNL", "a2fUfiQWONA", "BQ8H9-dWAZE", "nLvbMwR4Do", "5wjAcsi6L3V", "oD0l60IuVcM", "VqdTlP1odTB", "PGuWcRgtrAh", "iwwke2MCQjHb", "0AYoZkkFbET", "kvG1x820Tm0J", "dxodu9IfvBY", "0YGVv5k2Vdz", "oxhKa6TztL", "SLC0xttmo7n" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \\\nThank you very much for your decision and the valuable comments. Since the current page limit does not allow adding much content, we will definitely include the discussion in the later revision (one more page will be allowed once the paper gets accepted). ", " I thank the authors for their detailed response...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "ibQxYpTIwA_", "0AYoZkkFbET", "SLC0xttmo7n", "0YGVv5k2Vdz", "nips_2022_K1NPDQ7E-Cl", "5wjAcsi6L3V", "SLC0xttmo7n", "VqdTlP1odTB", "SLC0xttmo7n", "oxhKa6TztL", "oxhKa6TztL", "oxhKa6TztL", "0YGVv5k2Vdz", "0YGVv5k2Vdz", "nips_2022_K1NPDQ7E-Cl", "nips_2022_K1NPDQ7E-Cl", "nips_2022_K1NPDQ...
nips_2022_pBJe5yu41Pq
On Uncertainty, Tempering, and Data Augmentation in Bayesian Classification
Aleatoric uncertainty captures the inherent randomness of the data, such as measurement noise. In Bayesian regression, we often use a Gaussian observation model, where we control the level of aleatoric uncertainty with a noise variance parameter. By contrast, for Bayesian classification we use a categorical distribution with no mechanism to represent our beliefs about aleatoric uncertainty. Our work shows that explicitly accounting for aleatoric uncertainty significantly improves the performance of Bayesian neural networks. We note that many standard benchmarks, such as CIFAR-10, have essentially no aleatoric uncertainty. Moreover, we show that data augmentation in approximate inference softens the likelihood, leading to underconfidence and misrepresenting our beliefs about aleatoric uncertainty. Accordingly, we find that a cold posterior, tempered by a power greater than one, often more honestly reflects our beliefs about aleatoric uncertainty than no tempering --- providing an explicit link between data augmentation and cold posteriors. We further show that we can match or exceed the performance of posterior tempering by using a Dirichlet observation model, where we explicitly control the level of aleatoric uncertainty, without any need for tempering.
Accept
This paper studies Bayesian neural networks and tempered posteriors and study the link between data augmentation and cold posteriors. Overall, after the rebuttal and discussion, all reviewers unanimously agreed to accept the paper. Reviewers appreciated the exploration of the link between data augmentation and the cold posterior effect and found the empirical results to be convincing. One reviewer had concerns about the method but after discussion was convinced to accept the paper provided that the paper will be rewritten to increase clarity and that "they will add discussion on (a) the softmax temperature prior approach and (b) previous work on modeling aleatoric uncertainty in classifiers with Dirichlet distributions." Another reviewer stated that the discussion of limitations should be expanded in the paper. Please revise the paper to address the remaining concerns of the reviewers in the final version.
train
[ "dSfmWlw-sKF", "vp233OewFYi", "W1bXsSBbaE_", "9FHYQuc15S8p", "uRrv0t84Mdy", "3fh_gqThEid", "zmk0yiOFwtW", "O1s3lmSFVIO", "xvCsBeorLT7Y", "dglDSnm5QCd", "BIbrV3t-k0k", "z-6JiMv3anuj", "jusScJYrbNAy", "8pstoJ5d_DPP", "JXidXgqg4ga", "vznRweEx9DR", "n6ITPM7cYKT", "wA8EeD1M1YC", "b4ML...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Thanks for the clarifications. I'm not sure I agree with the comments on asymptotic behavior, but I understand this is somewhat outside the scope of the paper. \n\nI've updated my original review and raised the score. You have convinced me that this is a correct, nontrivial and interesting contribution. As discus...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "vp233OewFYi", "W1bXsSBbaE_", "9FHYQuc15S8p", "uRrv0t84Mdy", "3fh_gqThEid", "3DrPXZVUMKJ4", "n6ITPM7cYKT", "BIbrV3t-k0k", "dglDSnm5QCd", "JXidXgqg4ga", "8pstoJ5d_DPP", "jusScJYrbNAy", "nips_2022_pBJe5yu41Pq", "RApvfNHnIkV", "vznRweEx9DR", "GJuVZyRmH_W", "wA8EeD1M1YC", "b4MLJJafSXP"...
nips_2022_i-k6J4VkCDq
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
In recent years, machine learning models have been shown to be vulnerable to backdoor attacks. Under such attacks, an adversary embeds a stealthy backdoor into the trained model such that the compromised models will behave normally on clean inputs but will misclassify according to the adversary's control on maliciously constructed input with a trigger. While these existing attacks are very effective, the adversary's capability is limited: given an input, these attacks can only cause the model to misclassify toward a single pre-defined or target class. In contrast, this paper exploits a novel backdoor attack with a much more powerful payload, denoted as Marksman, where the adversary can arbitrarily choose which target class the model will misclassify given any input during inference. To achieve this goal, we propose to represent the trigger function as a class-conditional generative model and to inject the backdoor in a constrained optimization framework, where the trigger function learns to generate an optimal trigger pattern to attack any target class at will while simultaneously embedding this generative backdoor into the trained model. Given the learned trigger-generation function, during inference, the adversary can specify an arbitrary backdoor attack target class, and an appropriate trigger causing the model to classify toward this target class is created accordingly. We show empirically that the proposed framework achieves high attack performance (e.g., 100% attack success rates in several experiments) while preserving the clean-data performance in several benchmark datasets, including MNIST, CIFAR10, GTSRB, and TinyImageNet. The proposed Marksman backdoor attack can also easily bypass existing backdoor defenses that were originally designed against backdoor attacks with a single target class. Our work takes another significant step toward understanding the extensive risks of backdoor attacks in practice.
Accept
This paper introduces a backdoor attack that allows an attacker to cause an input to become any target class, as opposed to just one target class. The reviewers mostly liked this paper and found the attack interesting and useful. The main weakness was in the comparison to prior work, but here the authors have addressed much of this in the comments (and I hope they will make the necessary adjustments in the final version of the paper). The paper also has a limited evaluation against backdoor defenses, but given the goal of this attack is to show a new attack technique and not evade existing defenses I believe it is okay to push this to future work.
train
[ "eKUMd5YYWtx", "IfojfSsNqAV", "eJ1cduqD-KH", "uzyNY1fgun", "z9v4K7jmqLn", "J-WwVGF24b", "wGlJEL4vDAq", "9iKNp9a6Ngn", "mBe3l4Pqtl4", "_IBRh6lYGSj", "_MM779Qz8fH", "3hSr6bbkrR3", "nAHHlu0r2_I", "mkNdUMuJse" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your efforts in answering the questions and addressing the concerns. They will be taken into full consideration.", " Thank you again for your valuable comments. If you have any additional questions/comments, please kindly let us know, and we will try to answer them accordingly.", " Tha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "eJ1cduqD-KH", "_MM779Qz8fH", "9iKNp9a6Ngn", "z9v4K7jmqLn", "mBe3l4Pqtl4", "mkNdUMuJse", "mkNdUMuJse", "nAHHlu0r2_I", "3hSr6bbkrR3", "_MM779Qz8fH", "nips_2022_i-k6J4VkCDq", "nips_2022_i-k6J4VkCDq", "nips_2022_i-k6J4VkCDq", "nips_2022_i-k6J4VkCDq" ]
nips_2022_yHFATHaIDN
MABSplit: Faster Forest Training Using Multi-Armed Bandits
Random forests are some of the most widely used machine learning models today, especially in domains that necessitate interpretability. We present an algorithm that accelerates the training of random forests and other popular tree-based learning methods. At the core of our algorithm is a novel node-splitting subroutine, dubbed MABSplit, used to efficiently find split points when constructing decision trees. Our algorithm borrows techniques from the multi-armed bandit literature to judiciously determine how to allocate samples and computational power across candidate split points. We provide theoretical guarantees that MABSplit improves the sample complexity of each node split from linear to logarithmic in the number of data points. In some settings, MABSplit leads to 100x faster training (a 99% reduction in training time) without any decrease in generalization performance. We demonstrate similar speedups when MABSplit is used across a variety of forest-based variants, such as Extremely Random Forests and Random Patches. We also show our algorithm can be used in both classification and regression tasks. Finally, we show that MABSplit outperforms existing methods in generalization performance and feature importance calculations under a fixed computational budget. All of our experimental results are reproducible via a one-line script at https://github.com/ThrunGroup/FastForest.
Accept
The reviewers agree that, from a decision tree induction point of view, this paper provides a solid methodological approach in contrast to prior heuristics-based approaches. They found the author feedback satisfying; in particular, the additional experiments showcase that the proposed method most likely generalizes beyond the development datasets. We ask that the authors make a pass over the paper in the final version to incorporate the author-reviewer discussions.
train
[ "g1R090y7WIw", "8_CacPEVeU-", "McYqFmJ60bt", "Sc5CMc4bS27", "necjOYV5Va9", "izoVui9zd5F", "gmnfRn6ksGy", "jrlr0EvFSk", "3m5GU_u9GJR", "TEQTiDSR8zG", "hi9fKZwGCQ5", "NTyDnk-JSEa", "2phGwJyJFbJ", "T87Vk3O0wCh", "JkOZpnnj2g", "No2V_yNfdMT", "aeU-Cm_J1H4" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank Reviewer wtDT once again for their time and feedback on our paper. Please let us know if there are additional questions that we may address during the author-reviewer discussion period. We are happy to engage in further discussion as the discussion period allows.\n\n", " I thank the autho...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "T87Vk3O0wCh", "hi9fKZwGCQ5", "NTyDnk-JSEa", "jrlr0EvFSk", "No2V_yNfdMT", "nips_2022_yHFATHaIDN", "JkOZpnnj2g", "JkOZpnnj2g", "aeU-Cm_J1H4", "aeU-Cm_J1H4", "aeU-Cm_J1H4", "No2V_yNfdMT", "T87Vk3O0wCh", "nips_2022_yHFATHaIDN", "nips_2022_yHFATHaIDN", "nips_2022_yHFATHaIDN", "nips_2022_...
nips_2022_zz0FC7qBpkh
The Missing Invariance Principle found -- the Reciprocal Twin of Invariant Risk Minimization
Machine learning models often generalize poorly to out-of-distribution (OOD) data as a result of relying on features that are spuriously correlated with the label during training. Recently, the technique of Invariant Risk Minimization (IRM) was proposed to learn predictors that only use invariant features by conserving the feature-conditioned label expectation $\mathbb{E}_e[y|f(x)]$ across environments. However, more recent studies have demonstrated that IRM-v1, a practical version of IRM, can fail in various task settings. Here, we identify a fundamental flaw of IRM formulation that causes the failure. We then introduce a complementary notion of invariance, MRI, based on conserving the label-conditioned feature expectation $\mathbb{E}_e[f(x)|y]$ across environments, which is free of this flaw. Further, we introduce a simplified, practical version of the MRI formulation called MRI-v1. We note that this constraint is convex which confers it with an advantage over IRM-v1, which imposes non-convex constraints. We prove that in a general linear problem setting, MRI-v1 can guarantee invariant predictors given sufficient environments. We also empirically demonstrate that MRI strongly out-performs IRM and consistently achieves near-optimal OOD generalization in image-based nonlinear problems.
Accept
The paper theoretically considers the formulation of IRM for OOD generalization, and proposes a new formulation, MRI, as a fix to the flaw of IRM. The proposed method is empirically shown to give better test loss. The reviewers' evaluations were widely split and, unfortunately, did not converge. The paper provides an interesting theoretical analysis of IRM and a comparison with MRI, which is a good contribution to the field of invariant learning. However, some concerns have also been raised. There may be some overclaiming about the relation between IRM and MRI; the underlying assumptions are not identical, and the pros and cons of the two formulations should be compared with care. The evaluation with the test accuracy and test loss should be demonstrated clearly. These concerns should be addressed carefully in the paper.
train
[ "EQtZaqku-9U", "tyalCeptVl0", "GGT3OoOc-K1", "_56M1bGXFYh", "WRV08vHzdZx", "kCYgXvH76F5", "3SEhe0irF2o1", "NyUpol5cxNQ", "gMuI7jC1Y15", "LUfBmzNJ5uT", "k4CTrvHje5", "WpTZ45rBdfg", "Bsyj5X9vN8p", "DmV5vS06BZQ", "bRAlmHuuIUi", "JoounTZ52vj", "MNr7yu8VLqS", "tED2nVPgUx" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have improved my score but I have the following minor comments.\n1. It would be helpful to have test accuracies reported in Table 2 (instead of Appendix). Even if it is a coarse metric for measuring invariance, it is still the ultimate goal. \n2. I am not sure why the authors converted VLCS and TI to binary cla...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "DmV5vS06BZQ", "Bsyj5X9vN8p", "DmV5vS06BZQ", "kCYgXvH76F5", "kCYgXvH76F5", "k4CTrvHje5", "NyUpol5cxNQ", "nips_2022_zz0FC7qBpkh", "LUfBmzNJ5uT", "WpTZ45rBdfg", "MNr7yu8VLqS", "tED2nVPgUx", "JoounTZ52vj", "bRAlmHuuIUi", "nips_2022_zz0FC7qBpkh", "nips_2022_zz0FC7qBpkh", "nips_2022_zz0FC...
nips_2022_isPnnaTZaP5
LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning
Fine-tuning large pre-trained models on downstream tasks has been adopted in a variety of domains recently. However, it is costly to update the entire parameter set of large pre-trained models. Although recently proposed parameter-efficient transfer learning (PETL) techniques allow updating a small subset of parameters (e.g. only using 2% of parameters) inside a pre-trained backbone network for a new task, they only reduce the training memory requirement by up to 30%. This is because the gradient computation for the trainable parameters still requires back-propagation through the large pre-trained backbone model. To address this, we propose Ladder Side-Tuning (LST), a new PETL technique that can reduce training memory requirements by more substantial amounts. Unlike existing parameter-efficient methods that insert additional parameters inside backbone networks, we train a ladder side network, a small and separate network that takes intermediate activations as input via shortcut connections (ladders) from backbone networks and makes predictions. LST has significantly lower memory requirements than previous methods, because it does not require back-propagation through the backbone network, but instead only through the side network and ladder connections. We evaluate our method with various models (T5 and CLIP-T5) on both natural language processing (GLUE) and vision-and-language (VQA, GQA, NLVR2, MSCOCO) tasks. LST saves 69% of the memory costs to fine-tune the whole network, while other methods only save 26% of that in similar parameter usages (hence, 2.7x more memory savings). Moreover, LST achieves higher accuracy than Adapter and LoRA in a low-memory regime. To further show the advantage of this better memory efficiency, we also apply LST to larger T5 models (T5-large, T5-3B), attaining better GLUE performance than full fine-tuning and other PETL methods. The trend also holds in the experiments on vision-and-language tasks, where LST achieves similar accuracy to other PETL methods when training a similar number of parameters while also having 2.7x more memory savings. Our code is available at: https://github.com/ylsung/Ladder-Side-Tuning.
Accept
The paper contributes a novel training methodology based on a ladder network for a scenario of fine-tuning under limited memory constraints. Two out of three reviewers recommended rejecting the paper. Reviewer L9hL noted that the paper is related to distillation and network pruning, and that this relation is not discussed sufficiently. The method is related to, but not the same as, distillation or network pruning; both of these techniques require an additional computation step. I agree with Reviewer tNTu that it would strengthen the paper to make a one-to-one comparison with representative distillation or network pruning methods, but I also feel it is not strictly necessary as they have (slightly) different use cases. Hence, I feel it is more of an issue with writing than with the core contribution. It was also brought up that Side Tuning is a similar method from '19. My understanding is that Figure 8 and the rebuttal properly discuss this point and show that the design choices made by the Authors (e.g. injecting the representation at multiple stages) are crucial innovations, and do not diminish novelty in my opinion. All in all, I believe this is a solid contribution that is another tool that will help democratize large-scale models. It is my pleasure to recommend acceptance. Please remember to address the reviewers' remarks, and please pay special attention to better contextualizing your work in the broader field of memory-efficient training of neural networks.
test
[ "vBPRjxqQcXh", "24OyuMqB4pZ", "TIU9WNm6m5k", "EPB_UoAe6Hm", "z0A_rSIC1gO", "NdMJo_A62y", "8H_l8j4JT-D", "rTqsVJ0L6XV", "Byu2zaXabVE", "siS69gvX-l1", "bLEUtvqa6xT", "8QqYclhLT4", "7LHSjUSG72T", "gDsnoe8t7cJ", "nRd5s9X_Xs", "y2bmju4-4S-", "13h2MYaNu1M", "NvRSDjtYT7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the comments. Please see our clarifications below.\n\n> I think using different pre-trained models makes it difficult to compare different methods\n\nAs we mentioned in our previous response, this is a different type of **fair comparison** where we want to show our memory savings of LST can allow a bi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "EPB_UoAe6Hm", "NdMJo_A62y", "z0A_rSIC1gO", "bLEUtvqa6xT", "Byu2zaXabVE", "gDsnoe8t7cJ", "nips_2022_isPnnaTZaP5", "nips_2022_isPnnaTZaP5", "NvRSDjtYT7", "NvRSDjtYT7", "13h2MYaNu1M", "y2bmju4-4S-", "y2bmju4-4S-", "nRd5s9X_Xs", "nips_2022_isPnnaTZaP5", "nips_2022_isPnnaTZaP5", "nips_20...
nips_2022_wKhUPzqVap6
On Feature Learning in the Presence of Spurious Correlations
Deep classifiers are known to rely on spurious features — patterns which are correlated with the target on the training data but not inherently relevant to the learning problem, such as the image backgrounds when classifying the foregrounds. In this paper we evaluate the amount of information about the core (non-spurious) features that can be decoded from the representations learned by standard empirical risk minimization (ERM) and specialized group robustness training. Following recent work on Deep Feature Reweighting (DFR), we evaluate the feature representations by re-training the last layer of the model on a held-out set where the spurious correlation is broken. On multiple vision and NLP problems, we show that the features learned by simple ERM are highly competitive with the features learned by specialized group robustness methods targeted at reducing the effect of spurious correlations. Moreover, we show that the quality of learned feature representations is greatly affected by the design decisions beyond the training method, such as the model architecture and pre-training strategy. On the other hand, we find that strong regularization is not necessary for learning high-quality feature representations. Finally, using insights from our analysis, we significantly improve upon the best results reported in the literature on the popular Waterbirds, CelebA hair color prediction and WILDS-FMOW problems, achieving 97\%, 92\% and 50\% worst-group accuracies, respectively.
Accept
The paper shows that empirical risk minimization is sufficient to obtain good worst-group accuracies and that specialized group robustness methods do not appear to provide additional benefits. The reviewers pointed out that the current work depends on DFR, which seems to require some additional data compared to group robustness methods. The reviewers also note that the NLP experiments did not use more recent models, and the authors addressed these issues. Generally, the reviewers think this is a well-executed paper on an important problem, and are unanimous in accepting it.
test
[ "-xEbUw4GgRL", "9VxFOA_bWpm", "USD52RwnfCS", "F_9OmQ1eEk", "9pOX9zhu_oo", "IKmjK3yVDcy", "Kx-dH7OveFG", "3imPpVvUQ1P", "DrxN8TLFe_Y", "r2HHULhF8mp", "yjAiE9JhXZG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response and the additional new content.", " Thanks for the response! I have revised my recommendation in light of this new information.", " Appreciate the additional experiments using DeBERTa, and the additions/clarification to your paper as offered in the updated version. I feel confident...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "F_9OmQ1eEk", "9pOX9zhu_oo", "IKmjK3yVDcy", "yjAiE9JhXZG", "r2HHULhF8mp", "DrxN8TLFe_Y", "3imPpVvUQ1P", "nips_2022_wKhUPzqVap6", "nips_2022_wKhUPzqVap6", "nips_2022_wKhUPzqVap6", "nips_2022_wKhUPzqVap6" ]
nips_2022_yKYCwTvl8eU
Learning Concept Credible Models for Mitigating Shortcuts
During training, models can exploit spurious correlations as shortcuts, resulting in poor generalization performance when shortcuts do not persist. In this work, assuming access to a representation based on domain knowledge (i.e., known concepts) that is invariant to shortcuts, we aim to learn robust and accurate models from biased training data. In contrast to previous work, we do not rely solely on known concepts, but allow the model to also learn unknown concepts. We propose two approaches for mitigating shortcuts that incorporate domain knowledge, while accounting for potentially important yet unknown concepts. The first approach is two-staged. After fitting a model using known concepts, it accounts for the residual using unknown concepts. While flexible, we show that this approach is vulnerable when shortcuts are correlated with the unknown concepts. This limitation is addressed by our second approach that extends a recently proposed regularization penalty. Applied to two real-world datasets, we demonstrate that both approaches can successfully mitigate shortcut learning.
Accept
The paper considers the important problem of shortcut learning and its existing solutions, and suggests learning unknown concepts besides the known ones (prior knowledge) in order to better tackle shortcut learning. The authors build upon Concept Bottleneck Models (CBM) to also learn the unknown concepts. The proposed approach is simple and easy to understand, with a detailed analysis presented for linear models. The authors have tested the method on the CUB (birds) dataset and edema prediction from x-ray images. The considered problem is important and the experimental results validate the effectiveness of the proposed approach. Moreover, the authors were able to address the reviewers' questions and concerns during the rebuttal period. Therefore, I suggest that the paper be accepted.
test
[ "ntb8cnbYp68", "0r1aWrvvGCO", "5gkRM9Dmk-0", "8tgKALkLtr", "RNDppo2-tY5", "zlyZZTHamB", "wnO-LhKmDH3", "9JHMYWL1Pj" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I have revised my rating. This work will be useful for the spurious correlation/shortcuts community.", " The authors have adequately answered my questions. I think visual checks will be a valuable addition. And yes, while more datasets would have helped, I do not think they are absol...
[ -1, -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "8tgKALkLtr", "5gkRM9Dmk-0", "9JHMYWL1Pj", "wnO-LhKmDH3", "zlyZZTHamB", "nips_2022_yKYCwTvl8eU", "nips_2022_yKYCwTvl8eU", "nips_2022_yKYCwTvl8eU" ]
nips_2022_JoukmNwGgsn
Peer Prediction for Learning Agents
Peer prediction refers to a collection of mechanisms for eliciting information from human agents when direct verification of the obtained information is unavailable. They are designed to have a game-theoretic equilibrium where everyone reveals their private information truthfully. This result holds under the assumption that agents are Bayesian and they each adopt a fixed strategy across all tasks. Human agents however are observed in many domains to exhibit learning behavior in sequential settings. In this paper, we explore the dynamics of sequential peer prediction mechanisms when participants are learning agents. We first show that the notion of no regret alone for the agents’ learning algorithms cannot guarantee convergence to the truthful strategy. We then focus on a family of learning algorithms where strategy updates only depend on agents’ cumulative rewards and prove that agents' strategies in the popular Correlated Agreement (CA) mechanism converge to truthful reporting when they use algorithms from this family. This family of algorithms is not necessarily no-regret, but includes several familiar no-regret learning algorithms (e.g multiplicative weight update and Follow the Perturbed Leader) as special cases. Simulation of several algorithms in this family as well as the $\epsilon$-greedy algorithm, which is outside of this family, shows convergence to the truthful strategy in the CA mechanism.
Accept
The reviews are mostly positive and consider that the paper solves an interesting problem with non-trivial techniques. For further improvement, the reviewers mentioned some concerns about the problem setup being restricted, and gave some suggestions for improving presentation.
test
[ "1Zy-59rjH01", "-6YdwDsttyO", "eK5odKhbeFE", "rNEXwnvyVpk", "-jAkpui0x3y", "hpTwjbrGtmr", "e_JpNpqUNhc", "Q_VZXm1fJGY" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your detailed review.  We have incorporated many of your suggestions into our updated version.  \n\n**Why CA**\n\nThe CA mechanism is the first multi-task peer prediction mechanism, and the CA mechanism possesses a similar structure with several follow-up mechanisms (pairing mechanism, DMI...
[ -1, -1, -1, -1, 7, 6, 5, 5 ]
[ -1, -1, -1, -1, 3, 4, 1, 3 ]
[ "Q_VZXm1fJGY", "e_JpNpqUNhc", "hpTwjbrGtmr", "-jAkpui0x3y", "nips_2022_JoukmNwGgsn", "nips_2022_JoukmNwGgsn", "nips_2022_JoukmNwGgsn", "nips_2022_JoukmNwGgsn" ]
nips_2022_dYhB_alLyCO
Mean Estimation with User-level Privacy under Data Heterogeneity
A key challenge in many modern data analysis tasks is that user data is heterogeneous. Different users may possess vastly different numbers of data points. More importantly, it cannot be assumed that all users sample from the same underlying distribution. This is true, for example in language data, where different speech styles result in data heterogeneity. In this work we propose a simple model of heterogeneous user data that differs in both distribution and quantity of data, and we provide a method for estimating the population-level mean while preserving user-level differential privacy. We demonstrate asymptotic optimality of our estimator and also prove general lower bounds on the error achievable in our problem.
Accept
Addressing heterogeneity in differentially private aggregation and estimation is a practically important topic that shows up in many real-world settings. This paper takes a first step toward closing the gap between existing sophisticated algorithms that work under the homogeneous setting and the practical scenarios with heterogeneous users. The reviewers agree that this is an important paper that opens up several exciting research directions.
val
[ "JtcxOTwH0Js", "Ky2BkJYEIa", "hIxz-EMfL-4", "4QPmyJ84bOs", "sPD3ZjoNXQ", "IzJ7btywdOf", "Yn6cZzRvgki", "XEIkoDiejG", "KbTp0U4OndS", "syNATuOn9I_", "9h4aFN6RVba" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the other reviewers' comments and the authors' responses. Thanks to the authors for their feedback. All of my questions are addressed.", " Thank you for clarifying my concerns. I also read other reviews and responses to them. I increased my score accordingly.", " Thank you for the clear example an...
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 2 ]
[ "Yn6cZzRvgki", "sPD3ZjoNXQ", "4QPmyJ84bOs", "9h4aFN6RVba", "syNATuOn9I_", "KbTp0U4OndS", "XEIkoDiejG", "nips_2022_dYhB_alLyCO", "nips_2022_dYhB_alLyCO", "nips_2022_dYhB_alLyCO", "nips_2022_dYhB_alLyCO" ]
nips_2022_VeQBBm1MmTZ
CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network Inference
Recently, cloud-based graph convolutional network (GCN) has demonstrated great success and potential in many privacy-sensitive applications such as personal healthcare and financial systems. Despite its high inference accuracy and performance on the cloud, maintaining data privacy in GCN inference, which is of paramount importance to these practical applications, remains largely unexplored. In this paper, we make an initial attempt towards this and develop CryptoGCN--a homomorphic encryption (HE) based GCN inference framework. A key to the success of our approach is to reduce the tremendous computational overhead for HE operations, which can be orders of magnitude higher than their counterparts in the plaintext space. To this end, we develop a solution that can effectively take advantage of the sparsity of matrix operations in GCN inference to significantly reduce the encrypted computational overhead. Specifically, we propose a novel Adjacency Matrix-Aware (AMA) data formatting method along with the AMA assisted patterned sparse matrix partitioning, to exploit the complex graph structure and perform efficient matrix-matrix multiplication in HE computation. In this way, the number of HE operations can be significantly reduced. We also develop a co-optimization framework that can explore the trade-offs among the accuracy, security level, and computational overhead by judicious pruning and polynomial approximation of activation modules in GCNs. Based on the NTU-XVIEW skeleton joint dataset, i.e., the largest dataset evaluated homomorphically that we are aware of, our experimental results demonstrate that CryptoGCN outperforms state-of-the-art solutions in terms of the latency and number of homomorphic operations, i.e., achieving as much as a 3.10$\times$ speedup on latency and reducing the total Homomorphic Operation Count (HOC) by 77.4\% with a small accuracy loss of 1-1.5$\%$. Our code is publicly available at https://github.com/ranran0523/CryptoGCN.
Accept
The reviewers appreciate the importance of both GCNs as an application, as well as the application of FHE to make their computations secure. The authors are strongly encouraged to: 1) Include the comparison to the prior work that reviewer RkGQ identified in the camera ready version. 2) Discuss complexity, and include the extended table, at the very least in the supplement. 3) Update Fig. 3, and surface the main takeaways from responses to issues raised by Reviewer YLFb to the main text.
train
[ "G2fwEzLV77_", "Tp9DKtwcWUP", "PhImm8v3JqU", "8EGN9qKXLey", "RrFaw8nsZ3", "5nz8_aXiZT07", "B_JbUhsd4kK", "8iexl8bqwtq", "4zuwppfBRy", "lBjXQUDR17", "G0Qu3K_M7Po" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate your great support to our work, and would like to incorporate the comparison and relevant discussion into the revised version as you suggested. In particular, this will be included in Section 5.2 AMA-format effectiveness. Also, we will add a public GitHub link of our source code in the rev...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "Tp9DKtwcWUP", "8EGN9qKXLey", "RrFaw8nsZ3", "8iexl8bqwtq", "5nz8_aXiZT07", "lBjXQUDR17", "G0Qu3K_M7Po", "4zuwppfBRy", "nips_2022_VeQBBm1MmTZ", "nips_2022_VeQBBm1MmTZ", "nips_2022_VeQBBm1MmTZ" ]
nips_2022_JLweqJeqhSq
Tight Analysis of Extra-gradient and Optimistic Gradient Methods For Nonconvex Minimax Problems
Despite the established convergence theory of Optimistic Gradient Descent Ascent (OGDA) and Extragradient (EG) methods for the convex-concave minimax problems, little is known about the theoretical guarantees of these methods in nonconvex settings. To bridge this gap, for the first time, this paper establishes the convergence of OGDA and EG methods under the nonconvex-strongly-concave (NC-SC) and nonconvex-concave (NC-C) settings by providing a unified analysis through the lens of single-call extra-gradient methods. We further establish lower bounds on the convergence of GDA/OGDA/EG, shedding light on the tightness of our analysis. We also conduct experiments supporting our theoretical results. We believe our results will advance the theoretical understanding of OGDA and EG methods for solving complicated nonconvex minimax real-world problems, e.g., Generative Adversarial Networks (GANs) or robust neural networks training.
Accept
The reviewers appreciate the novel theoretical contribution to the convergence analysis of OGDA and EG methods in the settings of the nonconvex-strongly-concave (NC-SC) and nonconvex-concave (NC-C) as well as the lower bounds showing the tightness of their convergence guarantees. I recommended its acceptance accordingly. The suggestions of the reviewers should be included in the revised version.
train
[ "XZCD0a-JZmu", "i2aJoE2wXt1", "L5QhHe0iOlVf", "KQuzLfl1643", "iE8xqQl7hiz", "uaY1GiSmOz3", "XmH4q0fnOEU", "cXcaaJpZqnj", "KquMzAGQL2t", "l7wol_e9s9b", "aTy7CiH9ZQ8" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I decided to maintain my score, but like the reviewer C83W, I also think the authors could have done better by providing some additional preliminary experimental results (at least in text). Regarding Q2, I now better understand what authors were trying to say here, and I think it was impossible to deduce such exp...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "XmH4q0fnOEU", "L5QhHe0iOlVf", "iE8xqQl7hiz", "KquMzAGQL2t", "aTy7CiH9ZQ8", "l7wol_e9s9b", "cXcaaJpZqnj", "nips_2022_JLweqJeqhSq", "nips_2022_JLweqJeqhSq", "nips_2022_JLweqJeqhSq", "nips_2022_JLweqJeqhSq" ]
nips_2022_oUigTwc7Cw5
Efficient coding, channel capacity, and the emergence of retinal mosaics
Among the most striking features of retinal organization is the grouping of its output neurons, the retinal ganglion cells (RGCs), into a diversity of functional types. Each of these types exhibits a mosaic-like organization of receptive fields (RFs) that tiles the retina and visual space. Previous work has shown that many features of RGC organization, including the existence of ON and OFF cell types, the structure of spatial RFs, and their relative arrangement, can be predicted on the basis of efficient coding theory. This theory posits that the nervous system is organized to maximize information in its encoding of stimuli while minimizing metabolic costs. Here, we use efficient coding theory to present a comprehensive account of mosaic organization in the case of natural videos as the retinal channel capacity---the number of simulated RGCs available for encoding---is varied. We show that mosaic density increases with channel capacity up to a series of critical points at which, surprisingly, new cell types emerge. Each successive cell type focuses on increasingly high temporal frequencies and integrates signals over larger spatial areas. In addition, we show theoretically and in simulation that a transition from mosaic alignment to anti-alignment across pairs of cell types is observed with increasing output noise and decreasing input noise. Together, these results offer a unified perspective on the relationship between retinal mosaics, efficient coding, and channel capacity that can help to explain the stunning functional diversity of retinal cell types.
Accept
This paper received 1 accept, 2 strong accepts and 1 reject. All reviewers agree that the proposed model is elegant and that the technical work is impressive (even the negative reviewer). The main criticism of the negative reviewer is that the main takeaway is not clear. The authors submitted a revised version of the manuscript. Sadly, the reviewer did not read the rebuttal and/or engage in a discussion post-rebuttal. The AC considers that the main criticism of this reviewer was addressed. In light of this, the AC recommends that the paper be accepted.
val
[ "hJO6Nv1ruHm", "UQs79VCbLoPp", "T17u7_PBzPN", "HSQxwZRXe8x", "ZBvl3R4Bmho", "KJXS0P6zJay", "nPLz-lWImZ", "-ZVX5juynEn", "zncaXdnYI4v", "Vwy2sA7Dc5q", "XVNS3tXM_wT", "wSHtHwY_Pil", "8cGQI1jHlEU", "r3tCvcWoGd", "FOPUuxp3IAV", "qhGTII3kWLI", "0xGSKfAShiE", "nzxCMK48atS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your efforts in addressing the points I raised in my review, particularly the relationship with Ocko et al, the reference to specific empirical observations that are reproduced by the model, and the reorganization of Section 3. I think these changes have made the paper more accessible, and I will up...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "XVNS3tXM_wT", "zncaXdnYI4v", "nPLz-lWImZ", "nzxCMK48atS", "0xGSKfAShiE", "qhGTII3kWLI", "nzxCMK48atS", "0xGSKfAShiE", "0xGSKfAShiE", "qhGTII3kWLI", "qhGTII3kWLI", "FOPUuxp3IAV", "FOPUuxp3IAV", "FOPUuxp3IAV", "nips_2022_oUigTwc7Cw5", "nips_2022_oUigTwc7Cw5", "nips_2022_oUigTwc7Cw5", ...
nips_2022_eV4JI-MMeX
Amortized Inference for Causal Structure Learning
Inferring causal structure poses a combinatorial search problem that typically involves evaluating structures with a score or independence test. The resulting search is costly, and designing suitable scores or tests that capture prior knowledge is difficult. In this work, we propose to amortize causal structure learning. Rather than searching over structures, we train a variational inference model to predict the causal structure from observational or interventional data. This allows us to bypass both the search over graphs and the hand-engineering of suitable score functions. Instead, our inference model acquires domain-specific inductive biases for causal discovery solely from data generated by a simulator. The architecture of our inference model emulates permutation invariances that are crucial for statistical efficiency in structure learning, which facilitates generalization to significantly larger problem instances than seen during training. On synthetic data and semisynthetic gene expression data, our models exhibit robust generalization capabilities when subject to substantial distribution shifts and significantly outperform existing algorithms, especially in the challenging genomics domain. Our code and models are publicly available at: https://github.com/larslorch/avici
Accept
In this paper, the authors introduce an amortized inference approach to learn causal structures from observational/interventional data. Most of the reviewers consider the proposed method to be sound and novel, and their questions have been well addressed.
train
[ "OoAO1mr2jEa", "QKusHtfbZgd", "C_zOaI5JXcP", "Caf8dE5uaFy", "ce09lyL-3Y", "jWcV9QkU_Bs", "JuRNKMgl8Mb", "QW8c0GaiO5O", "rZHMpm0e9UK", "mzG6eFj84Va", "MOdGs7oPwLj", "O6j38FqYIxo" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for getting back to us. We are glad that we addressed most of your concerns and that our explanations updated your evaluation of our work. \n\n**Relation to [1] (Löwe et al., 2022):** (In our reply, we accidentally denoted this work by [2] since you mentioned [2] in the context of time series. Apologies...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "C_zOaI5JXcP", "Caf8dE5uaFy", "QW8c0GaiO5O", "O6j38FqYIxo", "O6j38FqYIxo", "O6j38FqYIxo", "MOdGs7oPwLj", "mzG6eFj84Va", "nips_2022_eV4JI-MMeX", "nips_2022_eV4JI-MMeX", "nips_2022_eV4JI-MMeX", "nips_2022_eV4JI-MMeX" ]
nips_2022_MHE27tjD8m3
Robust Neural Posterior Estimation and Statistical Model Criticism
Computer simulations have proven a valuable tool for understanding complex phenomena across the sciences. However, the utility of simulators for modelling and forecasting purposes is often restricted by low data quality, as well as practical limits to model fidelity. In order to circumvent these difficulties, we argue that modellers must treat simulators as idealistic representations of the true data generating process, and consequently should thoughtfully consider the risk of model misspecification. In this work we revisit neural posterior estimation (NPE), a class of algorithms that enable black-box parameter inference in simulation models, and consider the implication of a simulation-to-reality gap. While recent works have demonstrated reliable performance of these methods, the analyses have been performed using synthetic data generated by the simulator model itself, and have therefore only addressed the well-specified case. In this paper, we find that the presence of misspecification, in contrast, leads to unreliable inference when NPE is used naïvely. As a remedy we argue that principled scientific inquiry with simulators should incorporate a model criticism component, to facilitate interpretable identification of misspecification and a robust inference component, to fit ‘wrong but useful’ models. We propose robust neural posterior estimation (RNPE), an extension of NPE to simultaneously achieve both these aims, through explicitly modelling the discrepancies between simulations and the observed data. We assess the approach on a range of artificially misspecified examples, and find RNPE performs well across the tasks, whereas naïvely using NPE leads to misleading and erratic posteriors.
Accept
The reviewers, based on their scores, are not in consensus about this paper. However, there has been an extensive amount of conversation among the reviewers, both amongst themselves and with the authors, who have made significant updates to the manuscript that have been appreciated by the reviewers. Two reviewers champion the paper, while another reviewer, who maintained his/her score based on the original version of the manuscript, voices only a mild concern around significance upon reading the updated version. The final reviewer who voted negatively simply presents novelty concerns. After a careful read of the manuscript myself, I would like to break the vote towards acceptance. This is a topic that is often overlooked in this field, and I think that the authors present an interesting approach and, perhaps more importantly, that this will lead to interesting discussion and further work in this field. I appreciate the authors' careful attention to revising their manuscript and thank the reviewers for their balanced approach to reviewing and discussing this paper. I hope that the authors address any remaining concerns in preparing the final version of their manuscript.
train
[ "aOoJx1OiCzp", "y7mbbaYQb7j", "jm_YRqArah", "rZvSFCE7MlO", "K0oITYfZFVv", "e1w2MsywP2v", "frwb4Zfs8fS", "tCU8oJmSmx", "z_FevG2jNQH", "hpsqQ686Frpc", "AhZwyxUVC1Ss", "Ogo8yz_md01", "3Hk3q6n026X", "aFwB3ExaXXN", "grh0IM_pxQE", "OirsgWzNTS", "m8nhERpyOZP", "_fJOwxBCOu6", "YvQr5PtgBN...
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " Thank you for your comment.\n\nRegarding scaling to higher dimensional problems, due to the tight turnaround, we did not have time to include a higher dimensional task before the end of the discussion period. However, one thing we would like to note is that the individual components of RNPE (HMC and normalising f...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 2 ]
[ "tCU8oJmSmx", "tCU8oJmSmx", "nips_2022_MHE27tjD8m3", "K0oITYfZFVv", "hpsqQ686Frpc", "frwb4Zfs8fS", "z_FevG2jNQH", "OirsgWzNTS", "grh0IM_pxQE", "AhZwyxUVC1Ss", "Ogo8yz_md01", "3Hk3q6n026X", "aFwB3ExaXXN", "SSSdiqOkL7o", "-TESMfdRVt-", "YvQr5PtgBNJ", "_fJOwxBCOu6", "nips_2022_MHE27tj...
nips_2022_X3RuacCx1R
Expected Frequency Matrices of Elections: Computation, Geometry, and Preference Learning
We use the "map of elections" approach of Szufa et al. (AAMAS 2020) to analyze several well-known vote distributions. For each of them, we give an explicit formula or an efficient algorithm for computing its frequency matrix, which captures the probability that a given candidate appears in a given position in a sampled vote. We use these matrices to draw the "skeleton map" of distributions, evaluate its robustness, and analyze its properties. We further develop a general and unified framework for learning the distribution of real-world preferences using the frequency matrices of established vote distributions.
Accept
This paper works to identify relationships among different vote distributions. This is done by applying (previously introduced) "frequency matrices" to the vote distributions themselves, and the paper gives explicit formulas or efficient algorithms for computing these. The resulting "map of elections" seems to have especially strong real-world potential.
train
[ "8ulxxjE4SbM", "AT9bxTKWB9h", "KA_AEAGrC0", "OYnSJvuynmA", "uAtytejH_e4m", "xzrl88-jJ2v", "Llf2TFQJf9F", "YqroHDzKOsb" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comments!", " Thank you for the response and clarification! I have updated my review score.", " *Maps and experiments*\n\nThe map shows relations between elections and between statistical cultures. As elections close on the map often have similar properties (see Boehmer et al. 2021a+b; Szuf...
[ -1, -1, -1, -1, -1, 8, 7, 5 ]
[ -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "OYnSJvuynmA", "KA_AEAGrC0", "YqroHDzKOsb", "Llf2TFQJf9F", "xzrl88-jJ2v", "nips_2022_X3RuacCx1R", "nips_2022_X3RuacCx1R", "nips_2022_X3RuacCx1R" ]
nips_2022_Q_WPshXgGI9
Confident Approximate Policy Iteration for Efficient Local Planning in $q^\pi$-realizable MDPs
We consider approximate dynamic programming in $\gamma$-discounted Markov decision processes and apply it to approximate planning with linear value-function approximation. Our first contribution is a new variant of Approximate Policy Iteration (API), called Confident Approximate Policy Iteration (CAPI), which computes a deterministic stationary policy with an optimal error bound scaling linearly with the product of the effective horizon $H$ and the worst-case approximation error $\epsilon$ of the action-value functions of stationary policies. This improvement over API (whose error scales with $H^2$) comes at the price of an $H$-fold increase in memory cost. Unlike Scherrer and Lesner [2012], who recommended computing a non-stationary policy to achieve a similar improvement (with the same memory overhead), we are able to stick to stationary policies. This allows for our second contribution, the application of CAPI to planning with local access to a simulator and $d$-dimensional linear function approximation. As such, we design a planning algorithm that applies CAPI to obtain a sequence of policies with successively refined accuracies on a dynamically evolving set of states. The algorithm outputs an $\tilde O(\sqrt{d}H\epsilon)$-optimal policy after issuing $\tilde O(dH^4/\epsilon^2)$ queries to the simulator, simultaneously achieving the optimal accuracy bound and the best known query complexity bound, while earlier algorithms in the literature achieve only one of them. This query complexity is shown to be tight in all parameters except $H$. These improvements come at the expense of a mild (polynomial) increase in memory and computational costs of both the algorithm and its output policy.
Accept
All reviewers and the AC believe this paper makes valuable contributions to the theoretical reinforcement learning community.
train
[ "Xp7VviWs0gi", "CVL6QhK9z0", "tEr_aW-927", "e0hr_xHXVBT", "skEhu4Mcbke", "cVBo0-BzLXD", "Ffco2wfnBsC", "Lh6sHmuX2uI", "WGJJrQZC1P", "pLbYbcMqrw", "lsim9qy4PC2", "X8_7cuI1bAT" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " While this work is for the batch RL setting, we thank the reviewer for pointing out this connection. We certainly did not intend to suggest that our work is the first where the suboptimality scales with the horizon, as we point out in Table 1, even for our exact setting, Confident MC-POLITEX achieves the same sub...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "X8_7cuI1bAT", "X8_7cuI1bAT", "X8_7cuI1bAT", "lsim9qy4PC2", "pLbYbcMqrw", "pLbYbcMqrw", "pLbYbcMqrw", "pLbYbcMqrw", "pLbYbcMqrw", "nips_2022_Q_WPshXgGI9", "nips_2022_Q_WPshXgGI9", "nips_2022_Q_WPshXgGI9" ]
nips_2022_jXgbJdQ2YIy
Escaping from the Barren Plateau via Gaussian Initializations in Deep Variational Quantum Circuits
Variational quantum circuits have been widely employed in quantum simulation and quantum machine learning in recent years. However, quantum circuits with random structures have poor trainability due to the exponentially vanishing gradient with respect to the circuit depth and the qubit number. This result leads to a general standpoint that deep quantum circuits would not be feasible for practical tasks. In this work, we propose an initialization strategy with theoretical guarantees for the vanishing gradient problem in general deep quantum circuits. Specifically, we prove that under proper Gaussian initialized parameters, the norm of the gradient decays at most polynomially when the qubit number and the circuit depth increase. Our theoretical results hold for both the local and the global observable cases, where the latter was believed to have vanishing gradients even for very shallow circuits. Experimental results verify our theoretical findings in quantum simulation and quantum chemistry.
Accept
The authors propose a new random initialization of quantum neural networks which could avoid generating vanishing gradients. Specifically, the new random (Gaussian) initialization scheme depends on the shape of the ansatz so that the norm of the gradient decays at most polynomially when the qubit number and the circuit depth increase. This finding is also supported by the associated empirical study. The reviewers consider this an important step toward the understanding of the trainability of variational quantum circuits. However, some limitations of the proposal are also discussed in the reviews, and we hope the authors will explicitly discuss these limitations in the final version.
train
[ "fo-7IkHqjYg", "MS6yv0sqiLX", "foGQ_3E1lOQ", "Z1Tz_aqNQzj", "LLYVP0-ucdf", "_3VvkdpHCME", "eEF-cJ9wJKa", "-tqRfqfkoS", "V1cjEhNBho4D", "k0zaNxBrlBz", "0yxspWKwCV", "r9DUBkuejwK6", "JixLc-bpIl4", "x9V4_bahkun", "EZK8Kr7J79N", "_PXEyTGDLDc", "6NY8puwAtI", "X_cW57iXlHF" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to express our appreciation to all reviewers for their constructive questions and helpful feedback through the rebuttal period! The manuscript and the appendix have been refined following suggestions. Here we summarize the main improvements.\n\n1. We have added more discussions about conditions when...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "nips_2022_jXgbJdQ2YIy", "Z1Tz_aqNQzj", "LLYVP0-ucdf", "_3VvkdpHCME", "-tqRfqfkoS", "eEF-cJ9wJKa", "r9DUBkuejwK6", "V1cjEhNBho4D", "0yxspWKwCV", "nips_2022_jXgbJdQ2YIy", "6NY8puwAtI", "X_cW57iXlHF", "_PXEyTGDLDc", "EZK8Kr7J79N", "nips_2022_jXgbJdQ2YIy", "nips_2022_jXgbJdQ2YIy", "nips...
nips_2022_mCzMqeWSFJ
ComENet: Towards Complete and Efficient Message Passing for 3D Molecular Graphs
Many real-world data can be modeled as 3D graphs, but learning representations that incorporate 3D information completely and efficiently is challenging. Existing methods either use partial 3D information, or suffer from excessive computational cost. To incorporate 3D information completely and efficiently, we propose a novel message passing scheme that operates within a 1-hop neighborhood. Our method guarantees full completeness of 3D information on 3D graphs by achieving global and local completeness. Notably, we propose the important rotation angles to fulfill global completeness. Additionally, we show that our method is orders of magnitude faster than prior methods. We provide rigorous proof of completeness and analysis of time complexity for our methods. As molecules are in essence quantum systems, we build the \underline{com}plete and \underline{e}fficient graph neural network (ComENet) by combining quantum inspired basis functions and the proposed message passing scheme. Experimental results demonstrate the capability and efficiency of ComENet, especially on real-world datasets that are large in both numbers and sizes of graphs. Our code is publicly available as part of the DIG library (\url{https://github.com/divelab/DIG}).
Accept
This paper presents useful MP for molecular graphs with better completeness properties. I encourage the authors to consider addressing the reviewers' comments while preparing their revision. In particular, discuss relation to previous work such as PaiNN, DimeNet++. Clarify splits. Consider adding "completeness". Address exposition issues (eaXa, SSbD).
train
[ "0dzp3NMjpNe", "810_2t2Zxfz", "DqogcKrFT-t", "OP7WapmrUtA", "fbZClPUsj_P", "kB-csV8hyT", "QshNmjYELEI", "QYetnocw9zI", "GvjaU1vv0GN", "FOzg5QB1kSV", "MSvxXjWlCD", "bve07EOjAY8n", "38vUS7X-uyz", "S-CMriiGWf", "y7SQ_tKezy", "_9_9eHBhiti", "8PRxu-Ckzb4", "siMPN7EwnoM", "9PzoEdgkYzt"...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "officia...
[ " Dear Reviewer tzSo,\n\nThank you very much for your reply! We are happy to know that `your points are well-addressed` in the revision and responses. We further summarize the novelty and impact of our paper here.\n\n> **Novelty** \n\nThe main novelty of this paper is the design of a **complete and efficient** mess...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "DqogcKrFT-t", "7IAvicH5ns", "0BBm6uDwHW", "fbZClPUsj_P", "kB-csV8hyT", "GvjaU1vv0GN", "UJ-UBsR6XcN", "7IAvicH5ns", "7FtHF3m4lIM", "MSvxXjWlCD", "8PRxu-Ckzb4", "8PRxu-Ckzb4", "nips_2022_mCzMqeWSFJ", "7IAvicH5ns", "7IAvicH5ns", "r_o9SeEHW_t", "r_o9SeEHW_t", "UJ-UBsR6XcN", "UJ-UBsR...
nips_2022_8gjwWnN5pfy
Fairness without Demographics through Knowledge Distillation
Most existing work on fairness assumes available demographic information in the training set. In practice, due to legal or privacy concerns, when demographic information is not available in the training set, it is crucial to find alternative objectives to ensure fairness. Existing work on fairness without demographics follows Rawlsian Max-Min fairness objectives. However, such constraints could be too strict to improve group fairness, and could lead to a great decrease in accuracy. In light of these limitations, in this paper, we propose to solve the problem from a new perspective, i.e., through knowledge distillation. Our method uses soft labels from an overfitted teacher model as an alternative, and we show from preliminary experiments that soft labelling is beneficial for improving fairness. We theoretically analyze the fairness of our method, and we show that our method can be treated as an error-based reweighing. Experimental results on three datasets show that our method outperforms state-of-the-art alternatives, with notable improvements in group fairness and with a relatively small decrease in accuracy.
Accept
Overall the reviews are positive, leaning towards accept. The reviewers agree that the main idea is interesting and the paper is well written and structured. Also several issues raised are properly addressed and checked, and I think that the remaining issues are also taken care of. Hence, I recommend the acceptance of this paper.
test
[ "w7jV34w9kVR", "Ktk3PUSTqKk", "hVVae6O0uVP", "DutMEvEsoD", "dM-sLxha5jI", "6tZA1QlNZEF", "Oi7aFzcDTEM", "ZSsOz-LAfg7", "WmB3klgK_d", "oFbzGbOyU6A", "iY5AMU62LSy" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response.\n\n**Legibility:** We first empirically show the effectiveness of label smoothing through experiments on new Adult dataset in Tab. 1, and we theoretically discuss the connection between our knowledge distillation method for fairness and label smoothing in Section 3.4. We provide theore...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "Ktk3PUSTqKk", "Oi7aFzcDTEM", "DutMEvEsoD", "6tZA1QlNZEF", "nips_2022_8gjwWnN5pfy", "iY5AMU62LSy", "oFbzGbOyU6A", "WmB3klgK_d", "nips_2022_8gjwWnN5pfy", "nips_2022_8gjwWnN5pfy", "nips_2022_8gjwWnN5pfy" ]
nips_2022_pV7f1Rq71I5
Estimation of Entropy in Constant Space with Improved Sample Complexity
Recent work of Acharya et al.~(NeurIPS 2019) showed how to estimate the entropy of a distribution $\mathcal D$ over an alphabet of size $k$ up to $\pm\epsilon$ additive error by streaming over $(k/\epsilon^3) \cdot \text{polylog}(1/\epsilon)$ i.i.d.\ samples and using only $O(1)$ words of memory. In this work, we give a new constant memory scheme that reduces the sample complexity to $(k/\epsilon^2)\cdot \text{polylog}(1/\epsilon)$. We conjecture that this is optimal up to $\text{polylog}(1/\epsilon)$ factors.
Accept
This paper improves the state-of-the-art in estimating the entropy of a discrete distribution under memory constraints. The reviewers agreed that the presented result is elegant and non-trivial and that the ideas are described well. There is some concern that the paper is incremental due to the absence of lower bounds, but the algorithmic contribution is strong enough to merit acceptance to NeurIPS.
test
[ "B2di5ydpk5", "UsSqJtQ7y5Y", "JWtnB7BOHhS", "tAX-KbnhK5l", "XtklfvX50O4", "zKztM8zKBee", "AstPb9V_FKK", "9RS8yzL5ynJ", "oneZ_S_Ozkb" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I have no further clarifications. ", " We thank the reviewer for their feedback. \n \nQuestion: If a worst-case sample complexity bound is required, we could use the fact any algorithm that uses $\\mu$ samples in expectation into one with $c\\mu$ worst case sample complexity by term...
[ -1, -1, -1, -1, -1, 7, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, 5, 3, 4, 5 ]
[ "XtklfvX50O4", "oneZ_S_Ozkb", "9RS8yzL5ynJ", "AstPb9V_FKK", "zKztM8zKBee", "nips_2022_pV7f1Rq71I5", "nips_2022_pV7f1Rq71I5", "nips_2022_pV7f1Rq71I5", "nips_2022_pV7f1Rq71I5" ]
nips_2022_6rVXMHImDzv
Communication Efficient Distributed Learning for Kernelized Contextual Bandits
We tackle the communication efficiency challenge of learning kernelized contextual bandits in a distributed setting. Despite the recent advances in communication-efficient distributed bandit learning, existing solutions are restricted to simple models like multi-armed bandits and linear bandits, which hamper their practical utility. In this paper, instead of assuming the existence of a linear reward mapping from the features to the expected rewards, we consider non-linear reward mappings, by letting agents collaboratively search in a reproducing kernel Hilbert space (RKHS). This introduces significant challenges in communication efficiency as distributed kernel learning requires the transfer of raw data, leading to a communication cost that grows linearly w.r.t. the time horizon $T$. We address this issue by equipping all agents to communicate via a common Nystr\"{o}m embedding that gets updated adaptively as more data points are collected. We rigorously prove that our algorithm can attain a sub-linear rate in both regret and communication cost.
Accept
The paper studies the distributed kernelized bandit problem and applies the Nyström approximation to achieve communication efficiency, which is a novel technique appreciated by the reviewers. After some discussion, including adding an extra reviewer, I believe the paper has enough contribution and is worth publishing at NeurIPS'2022. However, the authors do need to take all review comments seriously and give the paper a thorough revision, including comparisons to other possible approaches, such as federated learning. The authors also need to make it very clear why studying the kernelized bandit and distributed bandit settings is important for applications.
train
[ "_9XIfNz7j0-", "OsgJ3nKlzWw", "iWRLOnOZ85N", "1zJob3Pyv0", "hVLR9T1TmoU", "wrZ17WZEv89", "33SgUwAvpdA", "2pgDGUasRJpU", "4_nxCqxw98I", "fI-WFmTDRC", "JG4XoteA7I9", "i1fM5yUQHEp" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The current paper studies the distributed kernelized bandit, a generalization of the distributed linear bandit. Different from the distributed linear bandit [25], one cannot upload/download sufficient statistics (i.e., Gram matrix) for total $O(\\log T)$ communication rounds, since for later communication rounds,...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "nips_2022_6rVXMHImDzv", "2pgDGUasRJpU", "wrZ17WZEv89", "wrZ17WZEv89", "wrZ17WZEv89", "33SgUwAvpdA", "i1fM5yUQHEp", "JG4XoteA7I9", "fI-WFmTDRC", "nips_2022_6rVXMHImDzv", "nips_2022_6rVXMHImDzv", "nips_2022_6rVXMHImDzv" ]
nips_2022_dgWo-UyVEsa
Linear Label Ranking with Bounded Noise
Label Ranking (LR) is the supervised task of learning a sorting function that maps feature vectors $x \in \mathbb{R}^d$ to rankings $\sigma(x) \in \mathbb S_k$ over a finite set of $k$ labels. We focus on the fundamental case of learning linear sorting functions (LSFs) under Gaussian marginals: $x$ is sampled from the $d$-dimensional standard normal and the ground truth ranking $\sigma^\star(x)$ is the ordering induced by sorting the coordinates of the vector $W^\star x$, where $W^\star \in \mathbb{R}^{k \times d}$ is unknown. We consider learning LSFs in the presence of bounded noise: assuming that a noiseless example is of the form $(x, \sigma^\star(x))$, we observe $(x, \pi)$, where for any pair of elements $i \neq j$, the probability that the order of $i, j$ is different in $\pi$ than in $\sigma^\star(x)$ is at most $\eta < 1/2$. We design efficient non-proper and proper learning algorithms that learn hypotheses within normalized Kendall's Tau distance $\epsilon$ from the ground truth with $N= \widetilde{O}(d\log(k)/\epsilon)$ labeled examples and runtime $\mathrm{poly}(N, k)$. For the more challenging top-$r$ disagreement loss, we give an efficient proper learning algorithm that achieves $\epsilon$ top-$r$ disagreement with the ground truth with $N = \widetilde{O}(d k r /\epsilon)$ samples and $\mathrm{poly}(N)$ runtime.
Accept
The reviewers are unanimous in their strong positive opinion on this paper. The authors have given the first efficient algorithms with theoretical guarantees for learning noisy linear sorting functions, a relevant and useful problem setup for the NeurIPS community. The reviewers consider the paper clear and well-presented, and thus this is a natural accept.
train
[ "q7Dxi3tM6Pl", "ybGdV0nqe5i", "E6K1UaAH1Mk", "NbBWZHolKUb", "pFlb05RCE1K", "Tg7tz1w7jN2", "IO_lPHqsEFz", "HVZg7O2tF9e" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for carefully reading our manuscript and for the useful and positive feedback!", " We thank the reviewer for carefully reading our manuscript and providing insightful feedback (and also some interesting questions!).\n\n\"The current work studies bounded noise type. Is there a...
[ -1, -1, -1, -1, 7, 8, 8, 7 ]
[ -1, -1, -1, -1, 2, 3, 4, 3 ]
[ "HVZg7O2tF9e", "IO_lPHqsEFz", "Tg7tz1w7jN2", "pFlb05RCE1K", "nips_2022_dgWo-UyVEsa", "nips_2022_dgWo-UyVEsa", "nips_2022_dgWo-UyVEsa", "nips_2022_dgWo-UyVEsa" ]
nips_2022_rUb6iKYrgXQ
Data-Driven Conditional Robust Optimization
In this paper, we study a novel approach for data-driven decision-making under uncertainty in the presence of contextual information. Specifically, we solve this problem from a Conditional Robust Optimization (CRO) point of view. We propose an integrated framework that designs the conditional uncertainty set by jointly learning the partitions in the covariate data space and simultaneously constructing partition-specific deep uncertainty sets for the random vector that perturbs the CRO problem. We also provide theoretical guarantees for the coverage of the uncertainty sets and the value-at-risk performance obtained using the proposed CRO approach. Finally, we use simulated and real-world data to show the implementation of our approach and compare it against two non-contextual benchmark approaches to demonstrate the value of exploiting contextual information in robust optimization.
Accept
The paper proposes a conditional robust optimization framework for solving contextual optimization problems in a risk-averse setting. All three reviewers seem to agree on the usefulness and originality of the proposed approach. While Reviewer 8P9p finds the story to be complete, the analysis to be comprehensive and the problem to be well-motivated with sound and concrete solutions, Reviewer k1N2 points out that the approach leverages very well results from diverse fields such as clustering, outlier detection, robust optimization, and mixed-integer linear optimization. Reviewer 34vX stresses that the method reduces the out-of-sample value at risk considerably compared to the conventional non-contextual optimization methods. While k1N2 originally raised some concerns relating to the experimental setting, in particular the lack of benchmarks from recent and relevant literature, the author response led k1N2 to a score increase. Similarly, 8P9p stated following the author response that, from the extra explanations and extra experimental results, the reviewer gained confidence in the soundness of the proposed approach, also resulting in a score increase. Reviewer 34vX put to the fore that the experimental aspects of the paper are well documented. Also, the IDCC algorithm of the paper is illustrated using the US stock market dataset, showing empirically that the approach reduces the out-of-sample VaR considerably compared to non-contextual RO schemes when the level of protection needed is high. From a methodological perspective, it is also pointed out by 34vX that, as neural network-based approaches, the considered DDRO and IDCC approaches are capable of identifying non-convex uncertainty sets, a desirable property. Overall, this paper was found to be a very well written and original research work on a timely topic, with methodological and empirical results of potentially strong interest to the NeurIPS readership and beyond. For all these reasons, I am recommending the paper to be accepted.
train
[ "LD0GCdtRlgp", "1oXUpE5njJM", "RAiAO876C04", "sAxVObaaORR", "OC4hAnpygma", "8Ds-j4AiN_u", "oBiw7PuyF1J", "Qr3UrUWi053", "4CgxU89hqPs", "8VFXUX1SZqi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'm happy with the explanation and the fact that the method is amenable to different clustering techniques. I have raised my score accordingly.", " I am not an expert in this specific field but I read the other reviewers' comments as well as your responses. From your extra explanations and extra experimental re...
[ -1, -1, -1, -1, -1, -1, -1, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 2, 5 ]
[ "8VFXUX1SZqi", "8Ds-j4AiN_u", "oBiw7PuyF1J", "nips_2022_rUb6iKYrgXQ", "8VFXUX1SZqi", "4CgxU89hqPs", "Qr3UrUWi053", "nips_2022_rUb6iKYrgXQ", "nips_2022_rUb6iKYrgXQ", "nips_2022_rUb6iKYrgXQ" ]
nips_2022_lWq3KDEIXIE
Learning Tractable Probabilistic Models from Inconsistent Local Estimates
Tractable probabilistic models such as cutset networks, which admit exact linear-time posterior marginal inference, are often preferred in practice over intractable models such as Bayesian and Markov networks. This is because although tractable models, when learned from data, are slightly inferior to the intractable ones in terms of goodness-of-fit measures such as log-likelihood, they do not use approximate inference at prediction time and as a result exhibit superior predictive performance. In this paper, we consider the problem of improving a tractable model using a large number of local probability estimates, each defined over a small subset of variables that are available either from experts or via an external process. Given a model learned from a fully-observed but small amount of possibly noisy data, the key idea in our approach is to update the parameters of the model via a gradient descent procedure that seeks to minimize a convex combination of two quantities: one that enforces closeness via KL divergence to the local estimates and another that enforces closeness to the given model. We show that although the gradients are NP-hard to compute on arbitrary graphical models, they can be efficiently computed over tractable models. We show via experiments that our approach yields tractable models that are significantly superior to the ones learned from a small amount of possibly noisy data, even when the local estimates are inconsistent.
Accept
This submission did not reach a full agreement among PC members. I will not repeat the arguments here as they can be read in the reviews (please see e.g. review WY8E, which according to the authors captures well their intention). The main open criticism is the lack of a real application where the type of assumption used in the paper is present (the other important one being the comparison with other approaches, which seems to have been justified by the authors to a good extent). The concern was originally about the existence of those applications, but later this was resolved (it remained as a concern that the real application is not shown in this work). I consider that this is small when weighted against the positive comments in all reviews.
val
[ "MMfGuf11NLj", "PXS60xroR8b", "H-y9-4Cn9Hw", "1F0vZHK2dcf", "fql9Aq14UeN", "FsK3b2d4_G9", "m9WHgg7jY-", "JWqxalbH_wF", "nPIxjrkQXCE", "CrZ-o8EPMn", "tSYx22SO8Xf", "Ne5__m48apa", "YNbkBhpyzWx", "Yh0nrEzBw9" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for updating your review. We will have an extra page if the paper is accepted. Thank you for the suggestion: we will include motivating applications in more detail and also why existing methods will be impractical. We will also include a detailed discussion in the supplement about these issues.", " Ou...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "1F0vZHK2dcf", "H-y9-4Cn9Hw", "fql9Aq14UeN", "nPIxjrkQXCE", "FsK3b2d4_G9", "m9WHgg7jY-", "Yh0nrEzBw9", "YNbkBhpyzWx", "Ne5__m48apa", "tSYx22SO8Xf", "nips_2022_lWq3KDEIXIE", "nips_2022_lWq3KDEIXIE", "nips_2022_lWq3KDEIXIE", "nips_2022_lWq3KDEIXIE" ]
nips_2022_KMaI40_UaGw
Learning from a Sample in Online Algorithms
We consider three central problems in optimization: the restricted assignment load-balancing problem, the Steiner tree network design problem, and facility location clustering. We consider the online setting, where the input arrives over time, and irrevocable decisions must be made without knowledge of the future. For all these problems, any online algorithm must incur a cost that is approximately $\log |I|$ times the optimal cost in the worst case, where $|I|$ is the length of the input. But can we go beyond the worst case? In this work we give algorithms that perform substantially better when a $p$-fraction of the input is given as a sample: the algorithms use this sample to \emph{learn} a good strategy to use for the rest of the input.
Accept
This paper gives improved bounds for the assignment load-balancing problem, the Steiner tree network design problem, and facility location clustering in the case when some of the input is given as a sample. This is a strong paper at the intersection of online algorithms and learning.
train
[ "MrykXrGba9w", "04r-abaQzkD", "GNVPa3gSxD", "C8pSZ531lLw", "zjCZj5Io7QR", "dQZh1kASIV", "4WNSj7sSqB", "Me3O4sQdwQ", "M1TtnF7FlTQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I wasn't seriously concerned with the gap between upper bound and lower bound. The paper contributes sufficiently as it stands.", " Thanks a lot for the helpful response.", " Thank you for your response. I will retain my score of 7 as I feel this is a sufficiently good paper for an acceptance.", " Thank you...
[ -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "dQZh1kASIV", "C8pSZ531lLw", "zjCZj5Io7QR", "M1TtnF7FlTQ", "Me3O4sQdwQ", "4WNSj7sSqB", "nips_2022_KMaI40_UaGw", "nips_2022_KMaI40_UaGw", "nips_2022_KMaI40_UaGw" ]
nips_2022_NgwrhCBPTVk
(Optimal) Online Bipartite Matching with Degree Information
We propose a model for online graph problems where algorithms are given access to an oracle that predicts (e.g., based on modeling assumptions or past data) the degrees of nodes in the graph. Within this model, we study the classic problem of online bipartite matching, and a natural greedy matching algorithm called MinPredictedDegree, which uses predictions of the degrees of offline nodes. For the bipartite version of a stochastic graph model due to Chung, Lu, and Vu where the expected values of the offline degrees are known and used as predictions, we show that MinPredictedDegree stochastically dominates any other online algorithm, i.e., it is optimal for graphs drawn from this model. Since the "symmetric" version of the model, where all online nodes are identical, is a special case of the well-studied "known i.i.d. model", it follows that the competitive ratio of MinPredictedDegree on such inputs is at least 0.7299. For the special case of graphs with power law degree distributions, we show that MinPredictedDegree frequently produces matchings almost as large as the true maximum matching on such graphs. We complement these results with an extensive empirical evaluation showing that MinPredictedDegree compares favorably to state-of-the-art online algorithms for online matching.
Accept
The paper studies the online bipartite matching problem under a random graph model, and shows that using the expected mean of the degrees can achieve a certain optimal performance under their graph model. The authors complement the theoretical study with an empirical evaluation, demonstrating that estimating the degrees results in good performance in online bipartite matching. The reviewers agree that the paper contains good technical contributions to the problem and is worth publishing. There is some concern about whether the paper is related to ML/AI or is a purely combinatorial optimization paper. In particular, the reviewers share the concern that the theoretical algorithm takes the expected degrees as the input, not predicted degrees that would suggest an estimation with errors. After some discussions among the reviewers, here is my conclusion and recommendation: 1. The technical algorithm is like a combinatorial optimization algorithm, but it strongly indicates that degree estimation could be helpful in algorithm design. This is further complemented by the empirical study of the paper. Therefore, the study fits into the data-driven optimization and algorithm design paradigm that would interest the ML/AI community. 2. The use of the term "predicted degrees" in the title/abstract/intro is indeed misleading. It gives a strong impression that the algorithm is using a prediction (or estimation) that contains estimation error, but it actually does not use such predicted degrees, and instead uses the expected degrees as input. Of course, expected degrees are still not the actual random degrees, but they are not a "prediction" result in the usual sense. I strongly suggest that the authors properly change the title/abstract/intro to more accurately reflect what they are doing and to reduce confusion (if 3 out of 4 reviewers raise this issue, and I also have this concern, the authors should expect significant confusion if they do not revise the presentation). The authors should clearly state that their theoretical algorithm is for expected degrees, and that this may indicate that using predicted degrees may be helpful, but the latter is not part of the theoretical result. The following is my try at a title: (Optimal) Online Bipartite Matching with Known Expected Degrees on Random Graphs --- A Theoretical Justification of the Effectiveness of Algorithms Based on Ordering of Predicted Degrees. It is certainly not elegant, but hopefully it prompts the authors to spend some effort to give a more accurate title/abstract/intro in their presentation. Also, in terms of their theoretical result, their Appendix D does provide some results regarding the noise in the prediction. But it looks like the presented result does not concern the prediction error between the prediction and the ground truth in the usual sense. I also suggest that the authors substantiate this part and perhaps move it into the main text so that the paper indeed has some treatment of predicted degrees. Overall, with the above comments and suggestions, I believe the paper has good contributions to be accepted at NeurIPS.
train
[ "DWbI5mriIS", "ZFHZvMoLIh", "Pscu9XVrnxH", "ERp6AUyBDl", "YUaQ3MOs_B", "L_Xt6sVWv0F", "82mwFKJfHUE", "aUyrlRmFku3", "ipXNQCmCMQG", "x7c3hlT9LdE", "CcmJF9V4PEW", "DOxKRN5-WOt", "JekQ_HTzE2", "cDpGBS32rSM", "jRzLaMFgRP5", "1Ml73XGQI5u" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed response. It answers my questions. ", " Dear Reviewer o3xo,\n\nThanks again for your review! We wanted to follow up to see if our responses helped to address your concerns.", " ### 1) Ranking algorithm\n\nIt is not true that our algorithm is the same as the Ranking algorithm. They are ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "ZFHZvMoLIh", "DOxKRN5-WOt", "YUaQ3MOs_B", "CcmJF9V4PEW", "L_Xt6sVWv0F", "aUyrlRmFku3", "x7c3hlT9LdE", "ipXNQCmCMQG", "1Ml73XGQI5u", "jRzLaMFgRP5", "cDpGBS32rSM", "JekQ_HTzE2", "nips_2022_NgwrhCBPTVk", "nips_2022_NgwrhCBPTVk", "nips_2022_NgwrhCBPTVk", "nips_2022_NgwrhCBPTVk" ]
nips_2022_TVlKuUk-uj9
Can Adversarial Training Be Manipulated By Non-Robust Features?
Adversarial training, originally designed to resist test-time adversarial examples, has been shown to be promising in mitigating training-time availability attacks. This defense ability, however, is challenged in this paper. We identify a novel threat model named stability attack, which aims to hinder robust availability by slightly manipulating the training data. Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation. Further, we analyze the necessity of enlarging the defense budget to counter stability attacks. Finally, comprehensive experiments demonstrate that stability attacks are harmful on benchmark datasets, and thus the adaptive defense is necessary to maintain robustness.
Accept
This paper proposes a new threat model called the stability attack, which aims to hinder a model from being robust to adversarial attacks. The authors propose hypocritical perturbation as a method for the stability attack and show that hypocritical perturbation can indeed decrease the adversarial robustness of a model trained in a simple Gaussian mixture setting. The reviewers agree that the problem being studied is interesting, the proposed method is well motivated, and the experiments are mostly convincing. The authors are encouraged to merge the new results from the rebuttal into the publication and to discuss the efficiency of the proposed method in more detail.
train
[ "fmb0wTURUS", "GYj5MB-AlBX", "ZMhorQN1fQW", "CR2kAnjQh4m", "I31j5QkGflR", "5jbM57TXyhx", "_Neon8rRcBWi", "c-SiYxkZUAW", "2fNTOgsODuA", "PZ7cOUmhNfh", "PxDUjz5AEAy", "g29jNJGZsS5", "Yk4cSbJlznN", "2cS4y-9BNdS", "qafGsJrn0Wp", "lZ_EFY4qxmd", "NZXEX58rUQc", "SbwxwXDdHi", "K89wUs1jNC...
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We again thank all reviewers for their valuable feedback.\n\nWe updated our submission with the following key changes:\n\n- [Section 2.1] We added a more detailed description of the origins of hypocritical perturbations (Reviewer bFP2 and Reviewer UnFX).\n- [Section 5] We added a description of compared methods (...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 2 ]
[ "nips_2022_TVlKuUk-uj9", "ZMhorQN1fQW", "2cS4y-9BNdS", "SbwxwXDdHi", "hXC1-SbXJKP", "_Neon8rRcBWi", "PZ7cOUmhNfh", "fyGZh3-Tg8B", "fyGZh3-Tg8B", "fyGZh3-Tg8B", "hXC1-SbXJKP", "hXC1-SbXJKP", "hXC1-SbXJKP", "hXC1-SbXJKP", "K89wUs1jNCE", "SbwxwXDdHi", "SbwxwXDdHi", "nips_2022_TVlKuUk-...
nips_2022_8xccCiF9JQ6
A Fast Scale-Invariant Algorithm for Non-negative Least Squares with Non-negative Data
Nonnegative (linear) least squares problems are a fundamental class of problems that is well-studied in statistical learning and for which solvers have been implemented in many of the standard programming languages used within the machine learning community. The existing off-the-shelf solvers view the non-negativity constraint in these problems as an obstacle and, compared to unconstrained least squares, expend additional effort to address it. However, in many of the typical applications, the data itself is nonnegative as well, and we show that the nonnegativity in this case makes the problem easier. In particular, while the worst-case dimension-independent oracle complexity of unconstrained least squares problems necessarily scales with one of the data matrix constants (typically the spectral norm) and these problems are solved to additive error, we show that nonnegative least squares problems with nonnegative data are solvable to multiplicative error and with complexity that is independent of any matrix constants. The algorithm we introduce is accelerated and based on a primal-dual perspective. We further show how to provably obtain linear convergence using adaptive restart coupled with our method and demonstrate its effectiveness on large-scale data via numerical experiments.
Accept
The reviewers generally agreed that the paper has novel and solid contributions. Moreover, they were satisfied with the authors' responses. Please incorporate the reviewers' suggestions in your revision.
val
[ "oVVD5JfJpMb", "BUhGcV9iqfL", "fJfn1wMA79", "G0Yoid4ieGP", "76igigNeYWV", "F_BdzGE76o_", "l7c8h5KNWBa", "Wuba-4JHyQX", "iOFk2zm4NGs", "7wdgtNPmASe", "9nyJqSmMBQ", "YWZOeg9Fol", "0VAgyW432wc", "IXjnm3XSQXL", "f9leBAjz-sO", "tvqq39bsURR", "8-4aPLGx_b", "fqThdIcku_I" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for considering our response and increasing your score; we very much appreciate it. \n\nWe do plan to make the code publicly available on github once we have had the chance to clean it up after the added code from the rebuttals. If you have suggestions for generating synthetic data on which it...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "BUhGcV9iqfL", "iOFk2zm4NGs", "l7c8h5KNWBa", "Wuba-4JHyQX", "F_BdzGE76o_", "fqThdIcku_I", "8-4aPLGx_b", "tvqq39bsURR", "7wdgtNPmASe", "9nyJqSmMBQ", "f9leBAjz-sO", "0VAgyW432wc", "IXjnm3XSQXL", "nips_2022_8xccCiF9JQ6", "nips_2022_8xccCiF9JQ6", "nips_2022_8xccCiF9JQ6", "nips_2022_8xccC...
nips_2022_FR--mkQu0dw
When Does Differentially Private Learning Not Suffer in High Dimensions?
Large pretrained models can be fine-tuned with differential privacy to achieve performance approaching that of non-private models. A common theme in these results is the surprising observation that high-dimensional models can achieve favorable privacy-utility trade-offs. This seemingly contradicts known results on the model-size dependence of differentially private convex learning and raises the following research question: When does the performance of differentially private learning not degrade with increasing model size? We identify that the magnitudes of gradients projected onto subspaces are a key factor that determines performance. To precisely characterize this for private convex learning, we introduce a condition on the objective that we term restricted Lipschitz continuity and derive improved bounds for the excess empirical and population risks that are dimension-independent under additional conditions. We empirically show that in private fine-tuning of large language models, gradients obtained during fine-tuning are mostly controlled by a few principal components. This behavior is similar to conditions under which we obtain dimension-independent bounds in convex settings. Our theoretical and empirical results together provide a possible explanation for the recent success of large-scale private fine-tuning. Code to reproduce our results can be found at https://github.com/lxuechen/private-transformers/tree/main/examples/classification/spectral_analysis.
Accept
The paper considers DP convex optimization in high-dimension, providing bounds independent of the model size on the empirical and population risk (extending prior work that assumes that gradients belong to a fixed low-rank space). All the reviewers agree that this is an important problem and the results are interesting, and support accepting this paper.
val
[ "_xEsrwaJ9u", "3IzdbWecxkg", "PwWL7JIHRbD", "WLvm9mqCEm5", "SNh_WAtzJH", "cXI5djzNe2k", "7ZRyFc4jmv9", "Ih6Pj2nO_jx", "O7gUCpAGalZ", "FClQd0arrpK", "7g5oIjyt09e" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their quick response and again for their time and feedback! ", " Thanks for the authors' replies, which address most of my concerns. I have updated my score.", " We thank the reviewers again for their time in reading our draft and providing detailed feedback. We have uploaded a new r...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "3IzdbWecxkg", "WLvm9mqCEm5", "nips_2022_FR--mkQu0dw", "SNh_WAtzJH", "7g5oIjyt09e", "FClQd0arrpK", "O7gUCpAGalZ", "nips_2022_FR--mkQu0dw", "nips_2022_FR--mkQu0dw", "nips_2022_FR--mkQu0dw", "nips_2022_FR--mkQu0dw" ]
nips_2022_znbTxnBPlx
Training stochastic stabilized supralinear networks by dynamics-neutral growth
There continues to be a trade-off between the biological realism and performance of neural networks. Contemporary deep learning techniques allow neural networks to be trained to perform challenging computations at (near) human-level, but these networks typically violate key biological constraints. More detailed models of biological neural networks can incorporate many of these constraints but typically suffer from subpar performance and trainability. Here, we narrow this gap by developing an effective method for training a canonical model of cortical neural circuits, the stabilized supralinear network (SSN), that in previous work had to be constructed manually or trained with undue constraints. SSNs are particularly challenging to train for the same reasons that make them biologically realistic: they are characterized by strongly-connected excitatory cells and expansive firing rate non-linearities that together make them prone to dynamical instabilities unless stabilized by appropriately tuned recurrent inhibition. Our method avoids such instabilities by initializing a small network and gradually increasing network size via the dynamics-neutral addition of neurons during training. We first show how SSNs can be trained to perform typical machine learning tasks by training an SSN on MNIST classification. We then demonstrate the effectiveness of our method by training an SSN on the challenging task of performing amortized Markov chain Monte Carlo-based inference under a Gaussian scale mixture generative model of natural image patches with a rich and diverse set of basis functions -- something that was not possible with previous methods. These results open the way to training realistic cortical-like neural networks on challenging tasks at scale.
Accept
This paper proposes a method for training stochastic stabilized supralinear networks (stochastic SSNs), a theoretical model that has shed light on how circuits of excitatory and inhibitory neurons perform various computations. Compared to RNNs commonly used in machine learning, SSNs are trickier to train since they have unbounded nonlinearities and no gating mechanisms. The authors devise a training scheme that adds neurons one at a time while preserving stability, and they demonstrate how their method can learn networks that perform inference in a commonly-studied Gaussian scale mixture model. The reviewers improved their scores over the course of the discussion, and with the authors’ revisions the paper should be accepted. The main criticisms were that the audience is somewhat narrow and the method is a bit heuristic. Neither is a "terminal flaw," to use one reviewer's words, but I would encourage the authors to consider ways to improve on these aspects for the final version.
train
[ "VW_PxNmbVtD", "r3o1ee4gyF", "6oA-xdlQTqL", "xz85jNqKuP", "2JUWwsUqsqw", "FdQAi2CDV-g", "aijyMX5k-c", "LTGgf_3Fkk8", "OMYTOmMsc7", "Xed93zt1AcQ", "Y3GScK8t_r", "lHyEq24bOcN", "TgN6XeMejEa", "LRXvqBoH8MI", "a7-9HTwuywi", "V16a02WvkT" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarification. We will make the best out of the additional page if permitted, and prioritize including more information about training over the introductory section as required.\n\n", " Thank you to the authors. Just to clarify, I still think there is some work to be done in further understan...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "r3o1ee4gyF", "xz85jNqKuP", "TgN6XeMejEa", "2JUWwsUqsqw", "Xed93zt1AcQ", "LTGgf_3Fkk8", "Y3GScK8t_r", "OMYTOmMsc7", "lHyEq24bOcN", "LRXvqBoH8MI", "nips_2022_znbTxnBPlx", "V16a02WvkT", "a7-9HTwuywi", "nips_2022_znbTxnBPlx", "nips_2022_znbTxnBPlx", "nips_2022_znbTxnBPlx" ]
nips_2022_51f5sPXJD_E
Sparse Fourier Backpropagation in Cryo-EM Reconstruction
Electron cryo-microscopy (cryo-EM) is a powerful method for investigating the structures of protein molecules, with important implications for understanding the molecular processes of life and drug development. In this technique, many noisy, two-dimensional projection images of protein molecules in unknown poses are combined into one or more three-dimensional reconstructions. The presence of multiple structural states in the data represents a major bottleneck in existing processing pipelines, often requiring expert user supervision. Variational auto-encoders (VAEs) have recently been proposed as an attractive means for learning the data manifold of data sets with a large number of different states. These methods are based on a coordinate-based approach, similar to Neural Radiance Fields (NeRF), to make volumetric reconstructions from 2D image data in Fourier-space. Although NeRF is a powerful method for real-space reconstruction, many of the benefits of the method do not transfer to Fourier-space, e.g. inductive bias for spatial locality. We present an approach where the VAE reconstruction is expressed on a volumetric grid, and demonstrate how this model can be trained efficiently through a novel backpropagation method that exploits the sparsity of the projection operation in Fourier-space. We achieve improved results on a simulated data set and at least equivalent results on an experimental data set when compared to the coordinate-based approach, while also substantially lowering computational cost. Our approach is computationally more efficient, especially in inference, enabling interactive analysis of the latent space by the user.
Accept
The paper presents a new cryoEM reconstruction algorithm for data with multiple structural states for the same protein. In contrast to previous approaches which use a coordinate-based implicit representation of reconstructed density, this paper reconstructs an explicit volumetric grid and the optimization is formulated in the Fourier domain. Good accuracy and faster convergence is demonstrated, in comparison to prior methods. While all reviewers agreed that the method constituted an important contribution to the field, they also pointed to shortcomings in the evaluation of the method. In response, the authors added analysis of the convergence based on FSC-based resolution estimates, and offered to add a comparison to the 3DVA which uses a similar representation to the proposed method. Based on the reviews and author responses, I recommend acceptance of this paper. I urge the authors to follow through on "writing a more expansive comparison to 3DVA, which will be included in the camera-ready version of the manuscript".
val
[ "RhC5Q4jCC7N", "yuHBHeFdF6", "4TgQPPNWuv1", "iHTb1i5N7k", "wScqgSo5JK2", "ipJQCvpsJJJ", "gLJr5uFJMRD", "VFfNRz72uA", "aHcK4Otsegx", "7YBxSqWPrOz", "p3h9TPKrYZd", "RbMxbfyQVv", "-GZvGuaze_Z", "bDmn2CPyhvi" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Average FSC\n==============================\n| epoch | cryoDRGN large | cryoDRGN small | sparse backprop |\n|-------|----------------|----------------|-----------------|\n| 1 | 0.4026649 | 0.36271226 | 0.41322434 |\n| 2 | 0.42484552 | 0.37943053 | 0.4662058 |\n| 3 | 0.43243...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "yuHBHeFdF6", "nips_2022_51f5sPXJD_E", "aHcK4Otsegx", "ipJQCvpsJJJ", "gLJr5uFJMRD", "VFfNRz72uA", "p3h9TPKrYZd", "7YBxSqWPrOz", "bDmn2CPyhvi", "-GZvGuaze_Z", "RbMxbfyQVv", "nips_2022_51f5sPXJD_E", "nips_2022_51f5sPXJD_E", "nips_2022_51f5sPXJD_E" ]
nips_2022_zKBbP3R86oc
Not All Bits have Equal Value: Heterogeneous Precisions via Trainable Noise
We study the problem of training deep networks while quantizing parameters and activations into low-precision numeric representations, a setting central to reducing energy consumption and inference time of deployed models. We propose a method that learns different precisions, as measured by bits in numeric representations, for different weights in a neural network, yielding a heterogeneous allocation of bits across parameters. Learning precisions occurs alongside learning weight values, using a strategy derived from a novel framework wherein the intractability of optimizing discrete precisions is approximated by training per-parameter noise magnitudes. We broaden this framework to also encompass learning precisions for hidden state activations, simultaneously with weight precisions and values. Our approach exposes the objective of constructing a low-precision inference-efficient model to the entirety of the training process. Experiments show that it finds highly heterogeneous precision assignments for CNNs trained on CIFAR and ImageNet, improving upon previous state-of-the-art quantization methods. Our improvements extend to the challenging scenario of learning reduced-precision GANs.
Accept
The paper addresses an important yet less explored problem (optimizing numeric precision per parameter/activation in architectures) with thorough and impressive experiments over multiple architectures (ResNet and Transformers in the rebuttal) and different tasks (ImageNet and machine translation). Reviewers unanimously think the paper should be accepted.
train
[ "jzbFF4eIOKm", "2ptdR8_t9co", "SFo5HLUivCk", "jchPhqcAjOj", "8Ni02nN2tnXk", "zUQM5AMvFo1", "8N4sohP9qh", "qkjbi1ECTGN", "UkfwZkGnbWP" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed response!\n\nI'm happy to bump up my score. The results are good (especially when expanded). In principle I would like to see a bit more towards per-layer/per-filter quantization, but there are already some experiments in this direction.\n\nI do think that making a connection between quant...
[ -1, -1, -1, -1, -1, -1, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "2ptdR8_t9co", "SFo5HLUivCk", "UkfwZkGnbWP", "qkjbi1ECTGN", "8N4sohP9qh", "nips_2022_zKBbP3R86oc", "nips_2022_zKBbP3R86oc", "nips_2022_zKBbP3R86oc", "nips_2022_zKBbP3R86oc" ]
nips_2022_CIaUMANM6gQ
Detecting Abrupt Changes in Sequential Pairwise Comparison Data
The Bradley-Terry-Luce (BTL) model is a classic and very popular statistical approach for eliciting a global ranking among a collection of items using pairwise comparison data. In applications in which the comparison outcomes are observed as a time series, it is often the case that data are non-stationary, in the sense that the true underlying ranking changes over time. In this paper we are concerned with localizing the change points in a high-dimensional BTL model with piece-wise constant parameters. We propose novel and practicable algorithms based on dynamic programming that can consistently estimate the unknown locations of the change points. We provide consistency rates for our methodology that depend explicitly on the model parameters, the temporal spacing between two consecutive change points and the magnitude of the change. We corroborate our findings with extensive numerical experiments and a real-life example.
Accept
In *Detecting Abrupt Changes in Sequential Pairwise Comparison Data* the authors consider the problem of detecting multiple change points in sequential pairwise comparison data. The reviewers in general consider the model both well-motivated and novel. For these reasons I recommend that this paper be accepted.
train
[ "0vlDlw5OAxy", "0vlDCS27JD-", "wHMfwZP2zL2", "XxicsAm1I1", "x7nXk1oG2MF", "fccB0SknTYh", "iGoydVVoZBt", "HrmdiU6FTfp" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are very grateful to all three reviewers for their helpful reviews and constructive comments. In our revision, we have responded to all the comments and questions point-by-point and conducted additional experiments on simulated and real data to address these comments and questions.\n\nWe would also like to emp...
[ -1, -1, -1, -1, -1, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, 3, 2, 2 ]
[ "nips_2022_CIaUMANM6gQ", "iGoydVVoZBt", "iGoydVVoZBt", "fccB0SknTYh", "HrmdiU6FTfp", "nips_2022_CIaUMANM6gQ", "nips_2022_CIaUMANM6gQ", "nips_2022_CIaUMANM6gQ" ]
nips_2022_QYhUhMOI4C
Uniqueness and Complexity of Inverse MDP Models
What is the action sequence aa'a" that was likely responsible for reaching state s"' (from state s) in 3 steps? Addressing such questions is important in causal reasoning and in reinforcement learning. Inverse "MDP" models p(aa'a"|ss"') can be used to answer them. In the traditional "forward" view, transition "matrix" p(s'|sa) and policy π(a|s) uniquely determine "everything": the whole dynamics p(as'a's"a"...|s), and with it, the action-conditional state process p(s's"...|saa'a"), the multi-step inverse models p(aa'a"...|ss^i), etc. If the latter is our primary concern, a natural question, analogous to the forward case is to which extent 1-step inverse model p(a|ss') plus policy π(a|s) determine the multi-step inverse models or even the whole dynamics. In other words, can forward models be inferred from inverse models or even be side-stepped. This work addresses this question and variations thereof, and also whether there are efficient decision/inference algorithms for this.
Reject
In this paper, the authors are concerned with the question of whether or not the 1-step inverse model, along with the policy, can determine the multi-step inverse model - or whether the forward dynamics of an environment can be inferred from the inverse model along with the policy. The authors also delve into other related questions. The authors provide insightful answers and discussions for these questions, but the paper is unfortunately written in a very unconventional format, which makes it very hard to read. I hope the authors find the reviewer comments below helpful in restructuring the paper to better enlighten the readers.
train
[ "lvXW9KzuFht", "zx01U_jl-CD", "cX3bbkf3Wll", "h3tpwMYibPE", "94kRccVaVpG", "Z4cIP69LC9", "fcZ0tXyhtVM", "BdAxCaOIo7u", "NekApHbLmm5", "jqFTL5Ya0vW", "pZUBFRmGxVP", "28lVMlmDQL", "2kxzc0J6Zj" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > Many of the results are fairly predictable / evident. It is not surprising that in many (most?) cases, inverse models cannot be recovered.\n\nIs it really obvious that, given a k-step inverse model, you can't (in general) recover the (k+1)-step inverse model? We proved this in Appendix K, and would appreciate s...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 2 ]
[ "cX3bbkf3Wll", "BdAxCaOIo7u", "Z4cIP69LC9", "Z4cIP69LC9", "fcZ0tXyhtVM", "fcZ0tXyhtVM", "jqFTL5Ya0vW", "28lVMlmDQL", "pZUBFRmGxVP", "2kxzc0J6Zj", "nips_2022_QYhUhMOI4C", "nips_2022_QYhUhMOI4C", "nips_2022_QYhUhMOI4C" ]
nips_2022_e8PVEkSa4Fq
Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models
Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization in many downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts using the training data from downstream tasks. While effective, training on domain-specific data reduces a model's generalization capability to unseen new domains. In this work, we propose test-time prompt tuning (TPT), a method that can learn adaptive prompts on the fly with a single test sample. TPT optimizes the prompt by minimizing the entropy with confidence selection so that the model has consistent predictions across different augmented views of each test sample. In evaluating generalization to natural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP by 3.6\% on average, surpassing previous prompt tuning approaches that require additional task-specific training data. In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data.
Accept
This paper proposes a technique for training prompts for open-vocabulary vision models (e.g. CLIP) at test time, i.e. without any labeled data. The model is trained to minimize the entropy of the average prediction of many augmented views of a test image. This improves performance to varying degrees without requiring any additional labeling. The method and approach are interesting and some useful analysis is provided. Reviewers agreed that the paper should be accepted. Beyond the suggested changes made by reviewers, I'd recommend some additional references to better situate the paper with respect to past work on consistency regularization, entropy minimization, and transductive learning.
val
[ "YDnZ8jIxdS", "UBclGJJlJTK", "ZYurQ6GmfM", "NrWqe0TqAVE", "6aYjMo8561m", "hVnAPSMubGd", "H6YCuWv_iDm", "o4mB1kvgSw", "tMxgG11NJIQ", "2E9Itys1ZT", "O2C2aFqNx8Q", "lSrkNQP0Gbs", "Cy0Wd7pL5kC", "cDsUnSAkizv", "Y4V1VI8FkG", "_eo62iYHb9H", "4K7z8ZpUeGn", "tXTHngj6q6H" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your reply and recommendation. We will include a discussion of [1, 2] in our related work section. We also plan to further investigate the per-class performance on both dropped and improved classes. We will include additional analysis and qualitative study in our final version. \n\nIn response to your ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "UBclGJJlJTK", "hVnAPSMubGd", "O2C2aFqNx8Q", "H6YCuWv_iDm", "2E9Itys1ZT", "H6YCuWv_iDm", "o4mB1kvgSw", "Cy0Wd7pL5kC", "lSrkNQP0Gbs", "tXTHngj6q6H", "4K7z8ZpUeGn", "_eo62iYHb9H", "Y4V1VI8FkG", "Y4V1VI8FkG", "nips_2022_e8PVEkSa4Fq", "nips_2022_e8PVEkSa4Fq", "nips_2022_e8PVEkSa4Fq", "...
nips_2022_RQ8X_iK3HT5
When Combinatorial Thompson Sampling meets Approximation Regret
We study the Combinatorial Thompson Sampling policy (CTS) for combinatorial multi-armed bandit problems (CMAB), within an approximation regret setting. Although CTS has attracted a lot of interest, it has a drawback that other usual CMAB policies do not have when considering non-exact oracles: for some oracles, CTS has a poor approximation regret (scaling linearly with the time horizon $T$) [Wang and Chen, 2018]. A study is therefore needed to identify the oracles for which CTS can learn. This study was started by Kong et al. [2021]: they gave the first approximation regret analysis of CTS for the greedy oracle, obtaining an upper bound of order $\mathcal{O}{\left(\log(T)/\Delta^2\right)}$, where $\Delta$ is some minimal reward gap. In this paper, our objective is to push this study further than the simple case of the greedy oracle. We provide the first $\mathcal{O}{\left(\log(T)/\Delta\right)}$ approximation regret upper bound for CTS, obtained under a specific condition on the approximation oracle, allowing a reduction to the exact oracle analysis. We thus term this condition Reduce2Exact, and observe that it is satisfied in many concrete examples. Moreover, it can be extended to the probabilistically triggered arms setting, thus capturing even more problems, such as online influence maximization.
Accept
We thank the authors for their submission. This work considers combinatorial multi-armed bandits, in which the agent chooses a subset of arms at each round and receives some combinatorial function of the mean rewards of her chosen arms. Specifically, it studies Thompson sampling algorithms in which the only access to the underlying combinatorial problem is via an offline approximation oracle. The authors present novel no-approximate-regret algorithms whose regret guarantees hold under mild assumptions on the oracle (applicable for many common combinatorial problems). The paper, however, has some apparent writing issues, which we ask the authors to address in the camera-ready version.
train
[ "WNr6LUG7-nn", "wLBcLek3ktT", "W8FryATg_Wu", "DK6U8d7T87k", "e31Oyxowad_", "l10JYTeNCW", "Sgl4L3wOE34", "5TaHipOkxER", "cOD-w6mkoUW", "1U3F0pWthJK" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your suggestions. \n\n(1) Yes, it is clear that the Greedy Oracle is a good example to give the insights of the assumptions. When you say at the beginning, do you mean at the beginning of section 4, or at the beginning of the article in the \"Contributions\" paragraph?\n\n(2) Talking about limitatio...
[ -1, -1, -1, -1, -1, -1, 4, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 5, 1 ]
[ "wLBcLek3ktT", "DK6U8d7T87k", "1U3F0pWthJK", "cOD-w6mkoUW", "5TaHipOkxER", "Sgl4L3wOE34", "nips_2022_RQ8X_iK3HT5", "nips_2022_RQ8X_iK3HT5", "nips_2022_RQ8X_iK3HT5", "nips_2022_RQ8X_iK3HT5" ]
nips_2022_Cntmos_Ndf0
Lifting Weak Supervision To Structured Prediction
Weak supervision (WS) is a rich set of techniques that produce pseudolabels by aggregating easily obtained but potentially noisy label estimates from various sources. WS is theoretically well-understood for binary classification, where simple approaches enable consistent estimation of pseudolabel noise rates. Using this result, it has been shown that downstream models trained on the pseudolabels have generalization guarantees nearly identical to those trained on clean labels. While this is exciting, users often wish to use WS for \emph{structured prediction}, where the output space consists of more than a binary or multi-class label set: e.g. rankings, graphs, manifolds, and more. Do the favorable theoretical properties of WS for binary classification lift to this setting? We answer this question in the affirmative for a wide range of scenarios. For labels taking values in a finite metric space, we introduce techniques new to weak supervision based on pseudo-Euclidean embeddings and tensor decompositions, providing a nearly-consistent noise rate estimator. For labels in constant-curvature Riemannian manifolds, we introduce new invariants that also yield consistent noise rate estimation. In both cases, when using the resulting pseudolabels in concert with a flexible downstream model, we obtain generalization guarantees nearly identical to those for models trained on clean data. Several of our results, which can be viewed as robustness guarantees in structured prediction with noisy labels, may be of independent interest.
Accept
The paper gives theoretical guarantees for weak supervision for more general problems than binary classification. The reviewers were all positive about this work and felt this paper introduces new techniques to this space that may be useful for other problems as well.
train
[ "VnjDVDJS4jo", "ykbR5YKEyFX", "j1OB8l9FBwB", "CHbDXyJPpgo", "K2ld5qkHsW", "1ungwqqeiqj", "c1Vjb6jwnwG", "ji_cRAURqxM", "7lG3wa_nhKi", "vj0TsTODuD", "3jSEH_2_1gA" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank authors hard work on addressing my concerns and continuously improving the paper. My main concerns have been addressed properly.\n- Unnecessariness of assuming the prior is known.\n- The favorable properties of the chosen exponential family model.\n- The uniqueness of the pseudo-Euclidean em...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "CHbDXyJPpgo", "c1Vjb6jwnwG", "nips_2022_Cntmos_Ndf0", "K2ld5qkHsW", "3jSEH_2_1gA", "vj0TsTODuD", "7lG3wa_nhKi", "nips_2022_Cntmos_Ndf0", "nips_2022_Cntmos_Ndf0", "nips_2022_Cntmos_Ndf0", "nips_2022_Cntmos_Ndf0" ]
nips_2022_wYGIxXZ_sZx
What is a Good Metric to Study Generalization of Minimax Learners?
Minimax optimization has served as the backbone of many machine learning problems. Although the convergence behavior of optimization algorithms has been extensively studied in minimax settings, their generalization guarantees, i.e., how the model trained on empirical data performs on the unseen testing data, have been relatively under-explored. A fundamental question remains elusive: What is a good metric to study generalization of minimax learners? In this paper, we aim to answer this question by first showing that primal risk, a universal metric to study generalization in minimization problems, fails in simple examples of minimax problems. Furthermore, another popular metric, the primal-dual risk, also fails to characterize the generalization behavior for minimax problems with nonconvexity, due to non-existence of saddle points. We thus propose a new metric to study generalization of minimax learners: the primal gap, to circumvent these issues. Next, we derive generalization bounds for the primal gap in nonconvex-concave settings. As byproducts of our analysis, we also solve two open questions: establishing generalization bounds for primal risk and primal-dual risk in this setting, and in the strong sense, i.e., without assuming that the maximization and expectation can be interchanged. Finally, we leverage this new metric to compare the generalization behavior of two popular algorithms - gradient descent-ascent (GDA) and gradient descent-max (GDMax) in minimax optimization.
Accept
This paper addresses the important problem of deriving generalization guarantees for minimax optimization algorithms. It was shown using concrete examples that the existing measures of generalization (primal risk and the primal-dual risk) may fail to characterize the generalization behavior for minimax problems with nonconvexity. It then proposed a slightly modified new metric called the primal gap. Furthermore, the paper resolved an open question in the literature on how to establish generalization bounds for primal risk and primal-dual risk in the strong sense, i.e., without strong concavity. In the rebuttal, it was shown that the obtained rates are indeed optimal. All reviewers acknowledged these novel contributions. I recommend its acceptance accordingly. As reviewers RYz4 and VSNm mentioned, the presentation and writing of the paper cause some difficulty in reading it and in recognizing its novel contributions. I strongly encourage the authors to take their suggestions into account in their revised version.
train
[ "aIBajXTMPdI", "6HPyVVBtOU2", "G-WyQO5j7Nv", "AWHc75Dh6jU", "F2a_cIfg5UO", "MmIdcyfnJzO", "H57JX4PvKmJ", "mi5fXWNyJ4Q", "1Cb0u3JbBa3", "rczi7-ZrdZg", "FErlON-53p", "-NM06olnI3R", "ruYWb09K1Ro", "bF5gs80yNZJo", "9YCMiZluBDV", "V45zIfWqb42", "FKzaeq1bfFI", "vDNmGSJHUMH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nThank the authors for the detailed response. The response has addressed my concerns well. I increase my score to 6 considering this paper has made solid contributions to nonconvex minimax problems.\n\nminor issue: \n\nLine 193: missing a pair of parentheses", " I would like to thank the author for addressing ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "mi5fXWNyJ4Q", "FErlON-53p", "AWHc75Dh6jU", "F2a_cIfg5UO", "H57JX4PvKmJ", "nips_2022_wYGIxXZ_sZx", "rczi7-ZrdZg", "1Cb0u3JbBa3", "vDNmGSJHUMH", "FKzaeq1bfFI", "-NM06olnI3R", "V45zIfWqb42", "bF5gs80yNZJo", "9YCMiZluBDV", "nips_2022_wYGIxXZ_sZx", "nips_2022_wYGIxXZ_sZx", "nips_2022_wYG...
nips_2022_JRXgTMqESS
Learning Audio-Visual Dynamics Using Scene Graphs for Audio Source Separation
There exists an unequivocal distinction between the sound produced by a static source and that produced by a moving one, especially when the source moves towards or away from the microphone. In this paper, we propose to use this connection between audio and visual dynamics for solving two challenging tasks simultaneously, namely: (i) separating audio sources from a mixture using visual cues, and (ii) predicting the 3D visual motion of a sounding source using its separated audio. Towards this end, we present Audio Separator and Motion Predictor (ASMP) -- a deep learning framework that leverages the 3D structure of the scene and the motion of sound sources for better audio source separation. At the heart of ASMP is a 2.5D scene graph capturing various objects in the video and their pseudo-3D spatial proximities. This graph is constructed by registering together 2.5D monocular depth predictions from the 2D video frames and associating the 2.5D scene regions with the outputs of an object detector applied on those frames. The ASMP task is then mathematically modeled as the joint problem of: (i) recursively segmenting the 2.5D scene graph into several sub-graphs, each associated with a constituent sound in the input audio mixture (which is then separated) and (ii) predicting the 3D motions of the corresponding sound sources from the separated audio. To empirically evaluate ASMP, we present experiments on two challenging audio-visual datasets, viz. Audio Separation in the Wild (ASIW) and Audio Visual Event (AVE). Our results demonstrate that ASMP achieves a clear improvement in source separation quality, outperforming prior works on both datasets, while also estimating the direction of motion of the sound sources better than other methods.
Accept
All four reviewers agree that this paper demonstrates strong improvements over prior methods, and there is broad agreement among the reviewers that the model is well motivated and novel, and that the paper is clearly written. There was a good discussion between the authors and reviewers on a number of perceived weaknesses in the paper, and the authors were able to address these concerns with additional experiments and proposed revisions, prompting two reviewers to raise their scores. In the end, all reviewers recommend acceptance to some degree, and in my judgement, the most negative reviewer (who recommends borderline accept) has missed the point, made both in the paper and during the discussion, that the estimation of source direction from single-channel audio depends not only on audio cues, but also on video cues.
test
[ "pIE0IkS8HK", "ufI4M0Jxdf", "0iaQXuxb4iH", "q3MdwOnqxgY", "pKFnihxba_O", "lAu3ZQ_Bj3", "TIilasamNU", "W5LkSa5aVkv", "fgOuhl5KyRk", "UUVptq29QF", "LJoPTzm05q", "WzCM_GNMdP6", "p0_7wWnODqe", "mKHxegsaaKW", "kk70udYGpK5", "c8arGvRKPey", "G5mRgTWeiUi", "tanejKplrqk", "a2Z62e74uo9", ...
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_re...
[ " We sincerely appreciate the insightful suggestions and the encouraging response from the reviewer!", " We are sincerely grateful to the reviewer for acknowledging the points made in our rebuttal response and for providing further feedback. Please see our responses below to the follow-up comments/concerns.\n\n**...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "0iaQXuxb4iH", "W5LkSa5aVkv", "TIilasamNU", "fgOuhl5KyRk", "UUVptq29QF", "LJoPTzm05q", "p0_7wWnODqe", "G5mRgTWeiUi", "LJoPTzm05q", "mKHxegsaaKW", "c8arGvRKPey", "nips_2022_JRXgTMqESS", "kk70udYGpK5", "1UsWJG5Y8b", "qjPGyYTE6d8", "a2Z62e74uo9", "tanejKplrqk", "nips_2022_JRXgTMqESS",...
nips_2022_7yUxTNWyQGf
Lost in Latent Space: Examining failures of disentangled models at combinatorial generalisation
Recent research has shown that generative models with highly disentangled representations fail to generalise to unseen combinations of generative factor values. These findings contradict earlier research which showed improved performance in out-of-training distribution settings when compared to entangled representations. Additionally, it is not clear if the reported failures are due to (a) encoders failing to map novel combinations to the proper regions of the latent space, or (b) novel combinations being mapped correctly but the decoder being unable to render the correct output for the unseen combinations. We investigate these alternatives by testing several models on a range of datasets and training settings. We find that (i) when models fail, their encoders also fail to map unseen combinations to correct regions of the latent space and (ii) when models succeed, it is either because the test conditions do not exclude enough examples, or because excluded cases involve combinations of object properties with its shape. We argue that to generalise properly, models not only need to capture factors of variation, but also understand how to invert the process that causes the visual stimulus.
Accept
All reviewers acknowledge that the paper conducts an extensive study of the combinatorial generalization of models, and the additional experiments on CascadeVAE and LieGroupVAE conducted by the authors convinced the reviewers. Despite the lack of a proposed solution, I believe this paper provides novel and interesting insights into the failure modes of disentanglement models (for example, the fact that if the factors affect different parts of the image, combinatorial generalisation is much more likely to occur). In agreement with the reviewers, we recommend acceptance.
train
[ "oC2M9Q-EWkC", "JBV0RxfWyA1", "dsUZ2zVICF7", "UgXGMC62hX8", "p401FjsXgyg", "4A3EDvJoB0_l", "qx575QT27U0A", "3oRnpoTrzfw", "nMelF1we-oWJ", "KFr9p1mlU9", "mjMFlpDHHQTa", "3NLRHktGJ3G-", "WojWzDdV2MD", "3SznSJjnvij", "mSTJdKq0rpe", "XU4SCpgsJ98", "BUSDzfCrG6O", "Qc5ZQ89he_t" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. The extra experiments are very informative and addressed my concerns. I am happy to raise my rating.", " We thank the reviewers for a constructive rebuttal period and we are pleased that we have managed to address their concerns.\n\nWe have added a substantial amount of extra simulatio...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "dsUZ2zVICF7", "nips_2022_7yUxTNWyQGf", "p401FjsXgyg", "KFr9p1mlU9", "nMelF1we-oWJ", "qx575QT27U0A", "3NLRHktGJ3G-", "mjMFlpDHHQTa", "Qc5ZQ89he_t", "BUSDzfCrG6O", "XU4SCpgsJ98", "mSTJdKq0rpe", "3SznSJjnvij", "nips_2022_7yUxTNWyQGf", "nips_2022_7yUxTNWyQGf", "nips_2022_7yUxTNWyQGf", "...
nips_2022_2DZ9R7GXLY
TVLT: Textless Vision-Language Transformer
In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text. Our code and checkpoints are available at: https://github.com/zinengtang/TVLT
Accept
Reviewers acknowledge that the proposed method is simple, performs well, and has the potential to present an interesting research direction for future work. The authors responded actively to the review comments, and one reviewer raised their score after the author response. It is a good paper, and I recommend acceptance. The authors should follow the reviewers' suggestions and update the paper to address some of the questions.
train
[ "BVJk19TU-AU", "DsA7OkBD9U", "cLMwm1GHqS2", "QifOmPTobP", "jh6FxoQizXO", "sheNnIUXy59", "yJqfYhsVV-z", "Ois6nQBN5y" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for appreciating our claims and increasing the score!\n\n> 1. Regarding the encoders: Thanks for the additional results. However, I was referring to actual architecture. Meaning did the authors try using different model architecture for speech and image? or only the same architecture but separate encoders?...
[ -1, -1, -1, -1, -1, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "DsA7OkBD9U", "jh6FxoQizXO", "Ois6nQBN5y", "yJqfYhsVV-z", "sheNnIUXy59", "nips_2022_2DZ9R7GXLY", "nips_2022_2DZ9R7GXLY", "nips_2022_2DZ9R7GXLY" ]
nips_2022_D-X3kH-BkpN
Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity
The success of reinforcement learning in a variety of challenging sequential decision-making problems has been much discussed, but often ignored in this discussion is the consideration of how the choice of reward function affects the behavior of these algorithms. Most practical RL algorithms require copious amounts of reward engineering in order to successfully solve challenging tasks. The idea of this type of ``reward-shaping'' has been often discussed in the literature and is used in practical instantiations, but there is relatively little formal characterization of how the choice of reward shaping can yield benefits in sample complexity for RL problems. In this work, we build on the framework of novelty-based exploration to provide a simple scheme for incorporating shaped rewards into RL along with an analysis tool to show that particular choices of reward shaping provably improve sample efficiency. We characterize the class of problems where these gains are expected to be significant and show how this can be connected to practical algorithms in the literature. We show that these results hold in practice in experimental evaluations as well, providing an insight into the mechanisms through which reward shaping can significantly improve the complexity of reinforcement learning while retaining asymptotic performance.
Accept
The reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field. Overall, the reviewers had a generally positive impression of this paper. One reviewer argued that this paper addresses a relevant and valuable question and makes an important step towards a better understanding of regret when reward shaping is used. Even though this paper makes assumptions that were of some concern to other reviewers, this reviewer argued that the paper is nonetheless an important milestone for the community. Another reviewer acknowledged that this paper conducted a formal theoretical investigation of the impact of reward shaping methods on the sample complexity of RL algorithms and argued that all proofs seem to be sound. This reviewer had a few technical questions, which were all addressed by the authors. Post-rebuttal, the reviewer encouraged the authors to incorporate the corresponding details (such as those discussed in the rebuttal) in the updated version of their draft. A third reviewer emphasized that this paper shed light on reward shaping from a theoretical perspective. They argued that the quality and scientific soundness of the paper are objectively excellent, that the paper is original, and that it deserves merit. The reviewer pointed out one main weakness, however, regarding the assumption of the type of the shaped reward function. They wondered whether this assumption could limit the impact and applicability of the paper's results. After reading the authors' thorough rebuttal, however, the reviewer stated that they were satisfied with all responses and updated their score accordingly. Finally, a fourth reviewer also had an overall positive view of this work but pointed out, as a weakness, the seemingly strong assumption that the shaping signal is an approximation of the optimal value function. After reading the authors' response, however, the reviewer stated that the assumptions made in this paper were not as restrictive as they initially thought, and updated their score. Overall, thus, it seems like most reviewers were positively impressed with the quality of this work. They look forward to an updated paper version addressing the suggestions mentioned in their reviews and during the discussion phase.
train
[ "SmOuK9hteC_", "zaDPOjxVnU", "uPlMk1BHOPx", "2qQt09HCawG", "CknVXmgghkO", "uAPvFzTh_h5", "6oOO-EcSw_a3", "uwJ4KZWxty0", "LzWHeBK0zqo", "jSU748MPJzm", "uqCj56Ds3WY", "27KDei9-MQR", "CyHtHcf2qZL", "n0xPCkHZ5yj", "RzSXtLxQZ", "lLqLX-ejr0d", "d4Lrp1AWIZ", "PHLkOvlKXI1" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks so much for your answer. We will incorporate all these into our final manuscript for which we'll have an extra page. ", " Hi Reviewer 3Wkq,\n\nWe understand reviewer load is high and we thank you again for your time!\n\n\nWe just wanted to flag that other reviewers shared your concerns about our 'sandwi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "uPlMk1BHOPx", "2qQt09HCawG", "CknVXmgghkO", "jSU748MPJzm", "uqCj56Ds3WY", "LzWHeBK0zqo", "CyHtHcf2qZL", "PHLkOvlKXI1", "d4Lrp1AWIZ", "lLqLX-ejr0d", "27KDei9-MQR", "RzSXtLxQZ", "n0xPCkHZ5yj", "nips_2022_D-X3kH-BkpN", "nips_2022_D-X3kH-BkpN", "nips_2022_D-X3kH-BkpN", "nips_2022_D-X3kH...
nips_2022_mwIPkVDeFg
Distributed Optimization for Overparameterized Problems: Achieving Optimal Dimension Independent Communication Complexity
Decentralized optimization plays an important role in applications such as training large machine learning models, among others. Despite its superior practical performance, there has been a lack of fundamental understanding of its theoretical properties. In this work, we address the following open research question: to train an overparameterized model over a set of distributed nodes, what is the {\it minimum} communication overhead (in terms of the bits exchanged) that the system needs to sustain, while still achieving (near) zero training loss? We show that for a class of overparameterized models where the number of parameters $D$ is much larger than the total number of data samples $N$, the best possible communication complexity is ${\Omega}(N)$, which is independent of the problem dimension $D$. Further, for a few specific overparameterized models (i.e., linear regression, and certain multi-layer neural networks with one wide layer), we develop a set of algorithms that use linear compression followed by adaptive quantization, and show that they achieve dimension-independent, and sometimes near-optimal, communication complexity. To our knowledge, this is the first time that dimension-independent communication complexity has been shown for distributed optimization.
Accept
This paper considers the following problem in distributed optimization: To train an overparameterized model over a set of distributed nodes, what is the minimum number of bits required to reach zero loss. The paper gives lower bounds on the bit complexity for two settings: non-convex functions satisfying a PL condition and overparameterized quadratics. The authors then give an algorithm that (1) for PL objectives, has optimal communication complexity (up to logarithmic terms in the dimension of the problem) and (2) for quadratic overparameterized objectives, attains optimal communication complexity (up to logarithmic terms) with high probability. This paper generated significant discussion in the initial author-reviewer discussion period. Most of the reviewers were quite positive on the paper, and found the results to be interesting and relevant to the community, and found the paper to be well-written. They found the results, which provide near-optimal sample complexity for two settings (PL functions and overparameterized quadratics) to be technically strong, and were impressed by the tightness of the results. One reviewer took issue with certain limitations of the paper, including: - A limitation of the lower bounds in the paper is that they concern only deterministic methods. - The results are only tight with respect to the parameters $D$, $N$, and $\epsilon$, and are not necessarily tight with respect to $L$ and $\mu$. - Some related work can be discussed in more detail. These limitations do not seem to take away from the novelty, and neither I nor the other reviewers were convinced by the other issues raised in the discussion. As a result, I believe the paper is worth accepting as a starting point for future research in this direction. Nonetheless, the authors are encouraged to expand the discussion around these issues and limitations in the final version of the paper, as well as expand the comparison to related work.
train
[ "7Ms3xDLZeEE", "Sh0UE2SwTo", "2FhwXAciFcB", "WpfNtOVPiTGJ", "ulJ5FSXkxMV", "ogtz8zdflyk", "PKk3Opb9Ir", "faGprCTX8tY", "pocUdhadadi", "aCOsidCaH5B", "M3KZgos4Up", "udlhsrLGE2H", "5bXDJmy7Cy", "b8IyMI59gQ", "gwC3M25VAwM", "mFDwDAweXex", "oO16LRRTtVr", "9FaYq_bJd5h", "0RjsdSL0iOF",...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " Thanks to the authors for the response, but I'm still not satisfied with the rebuttal.\n\n1) The estimates of the paper offers still seem bad to me. The authors suggest turning a blind eye to $L$, $\\mu$, and $\\kappa$ and treating them as constants, but to me this is strange. It seems like we are playing a game:...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "OfrObbcIaR", "9FaYq_bJd5h", "PKk3Opb9Ir", "pocUdhadadi", "pocUdhadadi", "faGprCTX8tY", "udlhsrLGE2H", "b8IyMI59gQ", "5bXDJmy7Cy", "0RjsdSL0iOF", "0RjsdSL0iOF", "OfrObbcIaR", "OfrObbcIaR", "OfrObbcIaR", "9FaYq_bJd5h", "9FaYq_bJd5h", "9FaYq_bJd5h", "nips_2022_mwIPkVDeFg", "nips_20...
nips_2022_NiCJDYpKaBj
Staircase Attention for Recurrent Processing of Sequences
Attention mechanisms have become a standard tool for sequence modeling tasks, in particular by stacking self-attention layers over the entire input sequence as in the Transformer architecture. In this work we introduce a novel attention procedure called staircase attention that, unlike self-attention, operates across the sequence (in time), recurrently processing the input by adding another step of processing. A step in the staircase comprises backward tokens (encoding the sequence seen so far) and forward tokens (ingesting a new part of the sequence). Thus our model can trade off performance and compute by increasing the amount of recurrence through time and depth. Due to this recurrence, staircase attention is shown to be able to solve tasks that involve tracking, which conventional Transformers cannot. Further, it is shown to provide improved modeling power for the same size model (number of parameters) compared to self-attentive Transformers on large language modeling and dialogue tasks, yielding significant perplexity gains.
Accept
The paper proposes staircase attention to model recurrence with the Transformer architecture. Reviewers are generally on the acceptance side, as they believe the proposed approach is interesting and achieves good empirical performance.
train
[ "iSsL1nVh7BC", "El7soVuQwpO", "14aQ4Ypp-n98", "f63-U9HfNT", "BAAm1dgygbF", "sr5VGghHzl", "7mH6gZigCno", "445A9OD8zBw", "L20BvTUnz5", "EAlZyjAe_2J" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors for their willingness to update the paper with additional results and the explanations for the above questions.", " Most of my concerns have been addressed, I have updated my review accordingly.", " Thank you for the very detailed review and recommendations. However, we would like to ad...
[ -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 5, 4 ]
[ "BAAm1dgygbF", "f63-U9HfNT", "445A9OD8zBw", "EAlZyjAe_2J", "L20BvTUnz5", "7mH6gZigCno", "nips_2022_NiCJDYpKaBj", "nips_2022_NiCJDYpKaBj", "nips_2022_NiCJDYpKaBj", "nips_2022_NiCJDYpKaBj" ]
nips_2022_zD65Zdh6ZhI
On Computing Probabilistic Explanations for Decision Trees
Formal XAI (explainable AI) is a growing area that focuses on computing explanations with mathematical guarantees for the decisions made by ML models. Inside formal XAI, one of the most studied cases is that of explaining the choices taken by decision trees, as they are traditionally deemed as one of the most interpretable classes of models. Recent work has focused on studying the computation of sufficient reasons, a kind of explanation in which given a decision tree $T$ and an instance $x$, one explains the decision $T(x)$ by providing a subset $y$ of the features of $x$ such that for any other instance $z$ compatible with $y$, it holds that $T(z) = T(x)$, intuitively meaning that the features in $y$ are already enough to fully justify the classification of $x$ by $T$. It has been argued, however, that sufficient reasons constitute a restrictive notion of explanation. For such a reason, the community has started to study their probabilistic counterpart, in which one requires that the probability of $T(z) = T(x)$ must be at least some value $\delta \in (0, 1]$, where $z$ is a random instance that is compatible with $y$. Our paper settles the computational complexity of $\delta$-sufficient-reasons over decision trees, showing that both (1) finding $\delta$-sufficient-reasons that are minimal in size, and (2) finding $\delta$-sufficient-reasons that are minimal inclusion-wise, do not admit polynomial-time algorithms (unless P = NP). This is in stark contrast with the deterministic case ($\delta = 1$) where inclusion-wise minimal sufficient-reasons are easy to compute. By doing this, we answer two open problems originally raised by Izza et al., and extend the hardness of explanations for Boolean circuits presented by W{\"a}ldchen et al. to the more restricted case of decision trees. On the positive side, we identify structural restrictions of decision trees that make the problem tractable, and show how SAT solvers might be able to tackle these problems in practical settings.
Accept
The paper studies the complexity of discovering probabilistic explanations for decision trees. The authors show that computing minimum/minimal probabilistic explanations is an NP-hard problem. On the other hand, they also identify structural conditions that make the problem efficiently solvable. The reviewers appreciated the challenges in proving the complexity results. Although the experimental results are not so convincing, we evaluated the paper mainly on its theoretical aspects and found it suitable for acceptance.
test
[ "dnC5Mx4ml4v", "RMnWASqyyKQ", "tSHs78__Tfz", "A3wvFJenh-n", "4NAFkw_aRoe", "FQmVj_96RI8", "TPMCf6p6UbL", "eCIo451hD7L", "JwP59BxcXnE", "vCE9U9rdVp4", "MkzqlbydPdn" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the suggestions!\n\nGiven what you mention, it might be possible to improve our encoding for instances that have more features than nodes. Perhaps this could allow for probabilistic explanations for the Gisette dataset. We appreciate this idea.\n\nIndeed testing our encoding in more real world datasets...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "RMnWASqyyKQ", "tSHs78__Tfz", "A3wvFJenh-n", "eCIo451hD7L", "TPMCf6p6UbL", "MkzqlbydPdn", "vCE9U9rdVp4", "JwP59BxcXnE", "nips_2022_zD65Zdh6ZhI", "nips_2022_zD65Zdh6ZhI", "nips_2022_zD65Zdh6ZhI" ]
nips_2022_nyCr6-0hinG
Tensor Program Optimization with Probabilistic Programs
Automatic optimization for tensor programs becomes increasingly important as we deploy deep learning in various environments, and efficient optimization relies on a rich search space and effective search. Most existing efforts adopt a search space that domain experts cannot efficiently grow. This paper introduces MetaSchedule, a domain-specific probabilistic programming language abstraction for constructing a rich search space of tensor programs. Our abstraction allows domain experts to analyze the program and easily propose stochastic choices in a modular way to compose program transformations accordingly. We also build an end-to-end learning-driven framework to find an optimized program for a given search space. Experimental results show that MetaSchedule can cover the search space used in state-of-the-art tensor program optimization frameworks in a modular way. Additionally, it empowers domain experts to conveniently grow the search space and modularly enhance the system, which brings a 48% speedup on end-to-end deep learning workloads.
Accept
With many of the changes proposed by the authors in response to reviewer feedback, I think this will be a very good paper, arguably warranting even higher scores than the updated reviewer scores would indicate, which are already unanimously in favor of acceptance post-rebuttal. The performance improvements obtained here seem very significant, and the paper represents an enormous engineering effort with potentially great value to researchers and practitioners training large end-to-end deep learning models. Despite the paper's significant focus on engineering and systems concerns, I think the potential value to machine learning here is quite clear, and the paper is therefore clearly a fit for NeurIPS.
train
[ "Eb0bo4x83YL", "X8hyW10Bby", "ln5DdE7-4IK", "WAnmDL7nWZK", "QYjjNXlg7-", "vfZbwH-bQs", "ZTPjkXLEwMI", "8rBAePaM43", "BWdjhpuFA_", "JcXMEQ8gzP", "xr9pBAzI5Nz", "L9WGw3QQc3d", "IV3j1vSrMJz", "gaiV3wXWnkY", "EVJeBl0CmRt" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your valuable suggestion, and they are very helpful in making the paper accessible to broader audiences! \n\nThe relation can be further clarified as follows:\nAutoTVM's search space is template-based with tuning knobs of loop variables. We can declare these tuning knobs as random variables of stoch...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "X8hyW10Bby", "JcXMEQ8gzP", "WAnmDL7nWZK", "QYjjNXlg7-", "ZTPjkXLEwMI", "nips_2022_nyCr6-0hinG", "BWdjhpuFA_", "EVJeBl0CmRt", "gaiV3wXWnkY", "IV3j1vSrMJz", "L9WGw3QQc3d", "nips_2022_nyCr6-0hinG", "nips_2022_nyCr6-0hinG", "nips_2022_nyCr6-0hinG", "nips_2022_nyCr6-0hinG" ]
nips_2022_PQFr7FbGbO
A Multi-Resolution Framework for U-Nets with Applications to Hierarchical VAEs
U-Net architectures are ubiquitous in state-of-the-art deep learning; however, their regularisation properties and relationship to wavelets are understudied. In this paper, we formulate a multi-resolution framework which identifies U-Nets as finite-dimensional truncations of models on an infinite-dimensional function space. We provide theoretical results which prove that average pooling corresponds to projection within the space of square-integrable functions and show that U-Nets with average pooling implicitly learn a Haar wavelet basis representation of the data. We then leverage our framework to identify state-of-the-art hierarchical VAEs (HVAEs), which have a U-Net architecture, as forward Euler discretisations of multi-resolution diffusion processes which flow from a point mass, introducing sampling instabilities. We also demonstrate that HVAEs learn a representation of time which allows for improved parameter efficiency through weight-sharing. We use this observation to achieve state-of-the-art HVAE performance with half the number of parameters of existing models, exploiting the properties of our continuous-time formulation.
Accept
The paper makes the observation that average pooling in U-Nets implicitly learns a Haar wavelet basis representation and builds a theory for hierarchical VAEs (HVAEs) on top of it. The proposed interpretation of HVAEs leads to modifications of HVAEs that reduce the number of parameters and improve stability. I think the Haar wavelet basis representation is somewhat obvious, but the analysis of HVAEs looks nontrivial. I would recommend accepting this paper with the following suggestions: I suggest the authors focus on the HVAE part and improve the presentation. More specifically, the definition of the U-Net architectures used in HVAEs should be in the main text. Also, make more connections between HVAEs and diffusion models. For example, time information is explicitly handled in diffusion models, whereas this paper suggests that it is handled implicitly in HVAEs. Parameter sharing is common for diffusion models. At the end of the day, the proposed improvements to HVAEs seem to make HVAEs closer to diffusion models.
val
[ "6CcB6gDRZS", "8rH7WWn1RBD", "D-OQPKvN5IZ", "Iy6ns4xxIkt", "QoeZA9CWwmm", "nktMoOZrlT0", "RArP2J7zVZ3", "xJYACBZAwCQ", "PnGtkC7xAr", "srwK1vPtyrw", "3UswA0e7LLq", "r9b3RzPyNIW", "BsmLcLZ9cG8l", "DXbjCvj7FZXI", "YsXN5jCqZiC", "Sa-CyqGda0m", "sC9tUtbFrHx", "5G5HoGi87p-", "EHTJBnxHa...
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " > “Can you give an instantiation of Eq.(B.106), i.e., specifying $F_j, B_j, P_j, E_j$, that explicitly includes the skip connection in the expression of U?”\n\nWe are happy to provide such an instantiation.\nConsider the following scenario: \nAssume the data lives in $\\mathbb{R}^2$ and we use the notation $(x,y)...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 2 ]
[ "8rH7WWn1RBD", "Iy6ns4xxIkt", "PnGtkC7xAr", "k3Nqgy_jg3m", "srwK1vPtyrw", "3UswA0e7LLq", "r9b3RzPyNIW", "BsmLcLZ9cG8l", "YsXN5jCqZiC", "EHTJBnxHa7k", "EHTJBnxHa7k", "_H4yi9YO-l", "NmjtwTOp2PMU", "nips_2022_PQFr7FbGbO", "Sa-CyqGda0m", "6CsDVe0vNOe", "5G5HoGi87p-", "EHTJBnxHa7k", "...
nips_2022_GyWsthkJ1E2
Instability and Local Minima in GAN Training with Kernel Discriminators
Generative Adversarial Networks (GANs) are a widely-used tool for generative modeling of complex data. Despite their empirical success, the training of GANs is not fully understood due to the joint training of the generator and discriminator. This paper analyzes these joint dynamics when the true samples, as well as the generated samples, are discrete, finite sets, and the discriminator is kernel-based. A simple yet expressive framework for analyzing training called the $\textit{Isolated Points Model}$ is introduced. In the proposed model, the distance between true samples greatly exceeds the kernel width so that each generated point is influenced by at most one true point. The model enables precise characterization of the conditions for convergence both to good and bad minima. In particular, the analysis explains two common failure modes: (i) an approximate mode collapse and (ii) divergence. Numerical simulations are provided that predictably replicate these behaviors.
Accept
All reviewers agree this paper presents interesting analysis and results on GAN training dynamics. However, they also note several limitations of the proposed setting, namely that the assumptions on the kernel width and the isolated points model restrict directly applying the results to practical settings. In the response, the authors have done a good job of explaining how one can use the observations from the analysis to improve the stability of real-world GAN training. Overall, I think this work offers some promising directions towards improving our understanding of GAN training; hence I suggest acceptance.
train
[ "q1TQauQP8p", "nJGSujA8MW8", "_HeYE4fF5x", "-tVCBqBUend", "Vtnr6DhdHHm", "hYqJ6qtoiGC", "cgwMWpm-los", "P5xeXv-d2rq", "HsD9qC0prXo", "DTSHaIGlzj0", "xZgQT3EA1ZF", "gAm8eg4YNWl", "GXWh0gXfBq", "Lek-bw6N7b" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for recognizing the difficulty of the problem we are trying to tackle. We would really appreciate it if you could share your perspective on the hardness of this problem with the other reviewers. Although applying our abstraction to real-world datasets is desirable, it is currently beyond the scope of th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "cgwMWpm-los", "Vtnr6DhdHHm", "hYqJ6qtoiGC", "nips_2022_GyWsthkJ1E2", "HsD9qC0prXo", "P5xeXv-d2rq", "DTSHaIGlzj0", "Lek-bw6N7b", "GXWh0gXfBq", "gAm8eg4YNWl", "nips_2022_GyWsthkJ1E2", "nips_2022_GyWsthkJ1E2", "nips_2022_GyWsthkJ1E2", "nips_2022_GyWsthkJ1E2" ]
nips_2022_ok-SB1kz67Z
Introspective Learning : A Two-Stage approach for Inference in Neural Networks
In this paper, we advocate for two stages in a neural network's decision making process. The first is the existing feed-forward inference framework where patterns in given data are sensed and associated with previously learned patterns. The second stage is a slower reflection stage where we ask the network to reflect on its feed-forward decision by considering and evaluating all available choices. Together, we term the two stages as introspective learning. We use gradients of trained neural networks as a measurement of this reflection. A simple three-layered Multi Layer Perceptron is used as the second stage that predicts based on all extracted gradient features. We perceptually visualize the post-hoc explanations from both stages to provide a visual grounding to introspection. For the application of recognition, we show that an introspective network is 4% more robust and 42% less prone to calibration errors when generalizing to noisy data. We also illustrate the value of introspective networks in downstream tasks that require generalizability and calibration including active learning, out-of-distribution detection, and uncertainty estimation. Finally, we ground the proposed machine introspection to human introspection for the application of image quality assessment.
Accept
The paper proposes an approach for reflecting on model predictions in a classification task. The approach is novel, and empirical evaluations show significant improvement over standard predictive networks. One of the reviewers is critical of the paper because this is not the first two-stage approach proposed. I do not think that this is fair criticism given the technical details of the current paper, and the existing approach the reviewer points out is significantly different, as the reviewer themselves agrees. Questions were raised about practical applicability when the number of classes is large and about generalization of the conclusions to other datasets; these are fair points in my opinion, and addressing them would make for a stronger contribution.
train
[ "DaIdrGX2Tc9", "OkVkiHBqjVj", "jlHmfE5_1-", "KljodtUWnFR", "qV4mDcz4BJX", "WO8uUwNwPdz", "p11Knc5yCuF", "iMgWwDeKYwG", "oUOnvTH897e", "pP7K04N0QC", "iGvu6oM2Xrk", "vs38SslVtDY", "WGZE5JfjLeE", "RxwdP3QnXfS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your reponse. It is good and also raises a few general issues like what should be the page limit.\n\nKey points should be in the paper. For me adding a few lines showing more datasets is among them. The discussion of them could be in an Appendix.\n\nAlso I think one should not be very forgiving in term...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "qV4mDcz4BJX", "KljodtUWnFR", "p11Knc5yCuF", "WGZE5JfjLeE", "WO8uUwNwPdz", "RxwdP3QnXfS", "vs38SslVtDY", "oUOnvTH897e", "iGvu6oM2Xrk", "nips_2022_ok-SB1kz67Z", "nips_2022_ok-SB1kz67Z", "nips_2022_ok-SB1kz67Z", "nips_2022_ok-SB1kz67Z", "nips_2022_ok-SB1kz67Z" ]
nips_2022_GBEimWWM9ii
MMRR: Unsupervised Anomaly Detection through Multi-Level Masking and Restoration with Refinement
Recent state-of-the-art anomaly detection algorithms mainly adopt generative models or approaches based on deep one-class classification. These approaches have hyperparameters to balance the adversarial framework of the generative adversarial network and to determine the decision boundary of the classifier. Both methods show good performance, but their performance suffers from hyperparameter sensitivity. A new category of anomaly detection methods has been proposed that utilizes prior knowledge about abnormal data or pretrained features, but it is more generic not to use such side information. In this study, we propose "Multi-Level Masking and Restoration with Refinement (MMRR)", an unsupervised-learning-based anomaly detection method based on a generative model that overcomes hyperparameter sensitivity and the need for side information. MMRR learns the salient features of normal data distributions through restoration from restricted information via masking, resulting in a better restoration of in-distribution data than out-of-distribution data. To overcome hyperparameter sensitivity, we ensemble restoration results from information restricted to predefined multiple levels instead of finding a single optimal restriction level, and propose a novel mask generation and refinement method to achieve hyperparameter robustness. Extensive experimental evaluation on common benchmarks (i.e. MNIST, FMNIST, CIFAR10, MVTecAD) demonstrates the efficacy of the MMRR.
Reject
The paper proposes a Multi-Level Masking and Restoration with Refinement method to solve the hyperparameter sensitivity problem in anomaly detection studies. Reviewers had some concerns regarding this work, including limited novelty, numerical evaluation inconsistent with prior works, a lack of discussion of the benefits of using prior knowledge, etc. I appreciate that the authors were active during the rebuttal period in addressing these concerns, but I think the paper needs a bit more work before being accepted.
train
[ "pHhnMdsljjx", "MV-KFW6ZNKA", "mdJCkXUx082", "_HfUzACKEE", "qSKMNi2aafx", "cNc9vWDhx75", "DIrAwDx_1kX", "1oAscr2qXU6", "FPmllbkeB3", "3ZWYBFjW6L6", "bw9WGLfYMeF", "oLI_lKrtrcH", "I_4Pj2uUgB3", "GR-3oZurMIE", "45s-v4cXP0j", "JKmKig1LuO" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nAuthor and reviewer discussion ended. Could you please go over our rebuttal and check the responses? We believe that we have addressed all your concerns and that including these discussions will further strengthen our paper. We hope you reflect this in your final review and the score. We thank you again for you...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "GR-3oZurMIE", "mdJCkXUx082", "JKmKig1LuO", "JKmKig1LuO", "cNc9vWDhx75", "DIrAwDx_1kX", "45s-v4cXP0j", "45s-v4cXP0j", "nips_2022_GBEimWWM9ii", "nips_2022_GBEimWWM9ii", "JKmKig1LuO", "nips_2022_GBEimWWM9ii", "GR-3oZurMIE", "nips_2022_GBEimWWM9ii", "nips_2022_GBEimWWM9ii", "nips_2022_GBE...
nips_2022_AJzrFyqP0ci
Formalizing Consistency and Coherence of Representation Learning
In the study of reasoning in neural networks, recent efforts have sought to improve consistency and coherence of sequence models, leading to important developments in the area of neuro-symbolic AI. In symbolic AI, the concepts of consistency and coherence can be defined and verified formally, but for neural networks these definitions are lacking. The provision of such formal definitions is crucial to offer a common basis for the quantitative evaluation and systematic comparison of connectionist, neuro-symbolic and transfer learning approaches. In this paper, we introduce formal definitions of consistency and coherence for neural systems. To illustrate the usefulness of our definitions, we propose a new dynamic relation-decoder model built around the principles of consistency and coherence. We compare our results with several existing relation-decoders using a partial transfer learning task based on a novel data set introduced in this paper. Our experiments show that relation-decoders that maintain consistency over unobserved regions of representation space retain coherence across domains, whilst achieving better transfer learning performance.
Accept
This paper has slightly mixed reviews, trending toward a weak accept overall. The paper's topic and approach are overall novel, supported by good experiments. It's an interesting paper that will likely spark further investigation and inspire other research. There is still some disagreement among the reviewers, with recommendations ranging from accept to weak reject. In particular, the reviewers believe that, while overall well written, the paper could still use some improvements in presenting the relationship among the ideas and the architecture. In particular, the authors are encouraged to revise their presentation to better present "the architecture as merely a testbed for the proposed losses", as suggested in the discussion below. These revisions are easily made. The reviews also suggested improvements to a working example, which the authors stated already exists in Fig 3 and admit is rather dense. It would benefit the paper to include a more detailed and clear walk-through of the example, possibly in summary in the main paper and then in detail in the appendix. Side note: the authors' privately expressed concerns were taken into consideration by the meta-reviewer in evaluating this paper.
train
[ "Vp7lw7cVgGE", "4zwnlHGIgaM", "mpmICza776S", "DlWTUpGq-q8y", "9jMnxPWhNU9", "1yigdf6LVTg", "jli35JAv8b5", "mJKaS6ccBMv", "dNebk0POBvH", "18AWvMV2_uR", "3hf7PTg9Ft", "lPgN7TvGEBl", "f1720jwckyS", "gNMGwYxx6p1", "a8Pijh_I2B2D", "oV1rtQBQfL", "KLK4EibLH13", "--9nMNXFp4" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer kLPG,\n\nThank you for reconsidering your rating. We would like to note briefly that results using different fragments within the concept of ordinality (but having broader relevance) are already included in the paper. Figure 3 shows consistency losses for the transitivity, reflexivity and asymmetry ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 4 ]
[ "mpmICza776S", "jli35JAv8b5", "3hf7PTg9Ft", "nips_2022_AJzrFyqP0ci", "jli35JAv8b5", "mJKaS6ccBMv", "dNebk0POBvH", "f1720jwckyS", "--9nMNXFp4", "3hf7PTg9Ft", "KLK4EibLH13", "oV1rtQBQfL", "gNMGwYxx6p1", "nips_2022_AJzrFyqP0ci", "nips_2022_AJzrFyqP0ci", "nips_2022_AJzrFyqP0ci", "nips_20...
nips_2022_4L2zYEJ9d_
CARD: Classification and Regression Diffusion Models
Learning the distribution of a continuous or categorical response variable y given its covariates x is a fundamental problem in statistics and machine learning. Deep neural network-based supervised learning algorithms have made great progress in predicting the mean of y given x, but they are often criticized for their limited ability to accurately capture the uncertainty of their predictions. In this paper, we introduce classification and regression diffusion (CARD) models, which combine a denoising diffusion-based conditional generative model and a pre-trained conditional mean estimator, to accurately predict the distribution of y given x. We demonstrate the outstanding ability of CARD in conditional distribution prediction with both toy examples and real-world datasets, the experimental results on which show that CARD, in general, outperforms state-of-the-art methods, including a Bayesian neural network-based one designed for uncertainty estimation, especially when the conditional distribution of y given x is multi-modal. In addition, we utilize the stochastic nature of the generative model outputs to obtain a finer granularity in model confidence assessment at the instance level for classification tasks.
Accept
This paper proposes a different approach to uncertainty quantification in regression and classification problems. It extends denoising diffusion probabilistic models in combination with a pre-trained conditional mean model to provide a conditional generative model. The proposed approach is evaluated on UCI regression tasks and CIFAR10 classification. Additional results on CIFAR100 and ImageNet, along with additional analyses, were presented during the rebuttal period. The paper received some (relatively weak) support among reviewers. All the reviewers agree the paper proposes an interesting and novel formulation of uncertainty-quantified regression and classification models. The main issues raised by the reviewers were (i) conducting experiments on larger datasets, (ii) more analysis in the classification case, and (iii) the effect of conditioning on x in the diffusion processes. I believe the authors have provided additional significant empirical evidence of the benefits of their approach and some of the reviewers updated their scores accordingly. Given this, the potential impact of the paper, the novel nature of the contribution, and despite the seemingly weak scores, I believe there are no significant concerns remaining about this paper and it is worth presenting in its current form to the NeurIPS community.
val
[ "ssWpCD51Vv", "ildsTdUmgKd", "hCMT3tEhMbp", "lymUf1-XW6T", "SUmygzzvjpg", "A6l0w1VD4Kz", "WxwCFRePlSx", "IcODM6yOLfg", "GxUIf-SVfy", "30u4J5D4gV", "U7OWMDV4Pf7", "saPxZBG9N-t", "E45vjur3Tjx", "nb6t7kxRpit", "hiC7hS0Z3l", "awsXskw-DrB", "ApbRu4wybcv" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Due to the number of classes besides the CIFAR-10 dataset, we do not report the metrics for every class. Instead, for PIW, we report the PIW of true label for correct and incorrect predictions, for the entire test set as well as only the most and the least accurate class; for $t$-test results, we only report the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "nips_2022_4L2zYEJ9d_", "nips_2022_4L2zYEJ9d_", "saPxZBG9N-t", "E45vjur3Tjx", "30u4J5D4gV", "nips_2022_4L2zYEJ9d_", "nips_2022_4L2zYEJ9d_", "ApbRu4wybcv", "ApbRu4wybcv", "U7OWMDV4Pf7", "awsXskw-DrB", "hiC7hS0Z3l", "nb6t7kxRpit", "nips_2022_4L2zYEJ9d_", "nips_2022_4L2zYEJ9d_", "nips_202...
nips_2022_f2MyWR-6HrQ
Learning Modular Simulations for Homogeneous Systems
Complex systems are often decomposed into modular subsystems for engineering tractability. Although various equation based white-box modeling techniques make use of such structure, learning based methods have yet to incorporate these ideas broadly. We present a modular simulation framework for modeling homogeneous multibody dynamical systems, which combines ideas from graph neural networks and neural differential equations. We learn to model the individual dynamical subsystem as a neural ODE module. Full simulation of the composite system is orchestrated via spatio-temporal message passing between these modules. An arbitrary number of modules can be combined to simulate systems of a wide variety of coupling topologies. We evaluate our framework on a variety of systems and show that message passing allows coordination between multiple modules over time for accurate predictions and in certain cases, enables zero-shot generalization to new system configurations. Furthermore, we show that our models can be transferred to new system configurations with lower data requirement and training effort, compared to those trained from scratch.
Accept
The manuscript describes a novel combination of message passing / graph neural network ideas with NODE. The reviewers agree that this is an innovative contribution. The authors have provided various non-trivial experiments to demonstrate its performance.
train
[ "RRvnsI3boMM", "BaTk-5-SxrZ", "3h8GwjDiaY2", "9FTbd0LqaJu", "dGu_vvAZzm", "Pm_eEpABk1", "7FQEs0LqR1B" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for responding! While I'm sure the variance was small, for reproducibility and fairness of comparisons, error bars are essential! Moreover, they inspire confidence in the approach as it makes it less likely that the random seed was optimized for instance. ", " We thank the reviewer for their detailed rev...
[ -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "9FTbd0LqaJu", "Pm_eEpABk1", "dGu_vvAZzm", "7FQEs0LqR1B", "nips_2022_f2MyWR-6HrQ", "nips_2022_f2MyWR-6HrQ", "nips_2022_f2MyWR-6HrQ" ]
nips_2022_HXCPA2GXf_
Concept Embedding Models
Deploying AI-powered systems requires trustworthy models supporting effective human interactions, going beyond raw prediction accuracy. Concept bottleneck models promote trustworthiness by conditioning classification tasks on an intermediate level of human-like concepts. This enables human interventions which can correct mispredicted concepts to improve the model's performance. However, existing concept bottleneck models are unable to find optimal compromises between high task accuracy, robust concept-based explanations, and effective interventions on concepts---particularly in real-world conditions where complete and accurate concept supervisions are scarce. To address this, we propose Concept Embedding Models, a novel family of concept bottleneck models which goes beyond the current accuracy-vs-interpretability trade-off by learning interpretable high-dimensional concept representations. Our experiments demonstrate that Concept Embedding Models (1) attain better or competitive task accuracy w.r.t. standard neural models without concepts, (2) provide concept representations capturing meaningful semantics including and beyond their ground truth labels, (3) support test-time concept interventions whose effect in test accuracy surpasses that in standard concept bottleneck models, and (4) scale to real-world conditions where complete concept supervisions are scarce.
Accept
This paper proposes Concept Embedding Models, which learn interpretable high-dimensional concept representations to exploit the tradeoff between accuracy, interpretability, and interventions on concepts. Reviewers vote for accepting this paper. The authors are encouraged to further improve this work based on reviewers’ comments in the camera-ready version and put the new experiments and discussions from the author-reviewer discussion phase into the final revision, in particular the following: - Add statistical significance tests of experimental results - Compare training costs and model sizes - Better justify the proposed CAS mechanism - Investigate the robustness of learned concepts - Address the fairness concerns raised by reviewers in comparison with baselines
train
[ "PZxL2EOndXe", "eDM9IF__UsH", "8diazdg7p9b", "xJN8g_qGzJw", "QZP7BQAW6GC", "-mcPKv-uPpp", "HzF5k6srd0k", "uH5FgWm9kzn", "60XuULHqMQ6", "TPpLKHbuJqC", "Vn4BTcn1ga", "31HAwAfX9wk", "bcx3OWyfRFS", "DtgfwGlD8Oz", "1rl95z5gdlz", "F6MNtMUlTGE" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 6on2,\n\nWe would like to thank you for taking the time to respond to our rebuttal and for updating your score. We are happy to hear that our rebuttal and updated submission were able to address some of your concerns. Furthermore, we appreciate your very insightful feedback and look forward to explo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "QZP7BQAW6GC", "xJN8g_qGzJw", "-mcPKv-uPpp", "Vn4BTcn1ga", "uH5FgWm9kzn", "60XuULHqMQ6", "F6MNtMUlTGE", "F6MNtMUlTGE", "1rl95z5gdlz", "DtgfwGlD8Oz", "bcx3OWyfRFS", "nips_2022_HXCPA2GXf_", "nips_2022_HXCPA2GXf_", "nips_2022_HXCPA2GXf_", "nips_2022_HXCPA2GXf_", "nips_2022_HXCPA2GXf_" ]
nips_2022__gn5djJHKzj
Online Learning and Pricing for Network Revenue Management with Reusable Resources
We consider a price-based network revenue management problem with multiple products and multiple reusable resources. Each randomly arriving customer requests a product (service) that needs to occupy a sequence of reusable resources (servers). We adopt an incomplete information setting where the firm does not know the price-demand function for each product and the goal is to dynamically set prices of all products to maximize the total expected revenue of serving customers. We propose novel batched bandit learning algorithms for finding near-optimal pricing policies, and show that they admit a near-optimal cumulative regret bound of $\tilde{O}(J\sqrt{XT})$, where $J$, $X$, and $T$ are the numbers of products, candidate prices, and service periods, respectively. As part of our regret analysis, we develop the first finite-time mixing time analysis of an open network queueing system (i.e., the celebrated Jackson Network), which could be of independent interest. Our numerical studies show that the proposed approaches perform consistently well.
Accept
Executive summary: The paper studies a price-based network revenue management problem with multiple products and multiple reusable resources, with an a priori unknown price-demand function. The authors approach this as a batched bandit learning problem, and their main result is an algorithm with cumulative regret $\tilde{O}(J\sqrt{XT})$ where J is the number of products, X is the number of prices, and T is the number of rounds. The dependence on X and T is best possible. Discussion and recommendation: This paper is a bit out of my comfort zone, so I am relying on the reviews for the decision. It seems that all reviewers appreciated the general model and the theoretical results and experimental evaluation. Questions were raised regarding the various (seemingly) strong assumptions, but these were addressed in the rebuttal. Weak accept.
train
[ "YmmimS4znR", "8sNlXkDxFGI", "xZxdV0Za3EL", "_07Vqh5NLn", "KLyG6BfPV_", "refMO_1x5wKm", "0nQ2_F83FAx", "_Nm8Ywx_e78", "4TtxAJajTU", "KpYtO8eN8UH", "GxJOLe7nr6r", "7RLTNTSFHAp", "yKsceizST1_V", "RXDd_URbcOP", "0A3Ijpqucy", "g1r7UrBNSLC", "cUreP8Tc-uZ", "dHAIQIFvuvo", "qIy41C_POTx"...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response.\n\nI appreciate the response to Comment 1 which addresses my concern on Assumption 5.\n\nOverall, I think this paper has merits (especially the bound for the loss of nonstationarity), but the scope is a bit limited to learning in the Jackson Network. Thus, I'd like to keep my score.", ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "0A3Ijpqucy", "refMO_1x5wKm", "_07Vqh5NLn", "yKsceizST1_V", "0nQ2_F83FAx", "KpYtO8eN8UH", "GxJOLe7nr6r", "qIy41C_POTx", "qIy41C_POTx", "qIy41C_POTx", "qIy41C_POTx", "dHAIQIFvuvo", "dHAIQIFvuvo", "cUreP8Tc-uZ", "g1r7UrBNSLC", "nips_2022__gn5djJHKzj", "nips_2022__gn5djJHKzj", "nips_2...
nips_2022_bx2roi8hca8
MAgNet: Mesh Agnostic Neural PDE Solver
The computational complexity of classical numerical methods for solving Partial Differential Equations (PDE) scales significantly as the resolution increases. As an important example, climate predictions require fine spatio-temporal resolutions to resolve all turbulent scales in the fluid simulations. This makes the task of accurately resolving these scales computationally out of reach even with modern supercomputers. As a result, current numerical modelers solve PDEs on grids that are too coarse (3km to 200km on each side), which hinders the accuracy and usefulness of the predictions. In this paper, we leverage the recent advances in Implicit Neural Representations (INR) to design a novel architecture that predicts the spatially continuous solution of a PDE given a spatial position query. By augmenting coordinate-based architectures with Graph Neural Networks (GNN), we enable zero-shot generalization to new non-uniform meshes and long-term predictions up to 250 frames ahead that are physically consistent. Our Mesh Agnostic Neural PDE Solver (MAgNet) is able to make accurate predictions across a variety of PDE simulation datasets and compares favorably with existing baselines. Moreover, our model generalizes well to different meshes and resolutions up to four times those trained on.
Accept
The paper proposes an architecture that maps from an input function to an output function and that can handle unstructured meshes. In a set of extensive experiments, the effectiveness and robustness of the model are shown. As in other models (like Fourier Neural Operator) the PDE itself is not present in the loss, so 'solution' has to be used with caution. However, the overall design and presentation are impressive, and there is general agreement among the reviewers about the importance of the work.
train
[ "bFKL3AirfNY", "1tKlryzJfR7", "9kv4QPgSRD", "LG_p7XtciLg", "W7lP5IYr9Rv", "eA2XMxYvY5S", "i8b1YX4UsPod", "XRRBQNJmMaG", "XLWU2wSkibg", "V2FOeCgodPD", "R_9NxRYLr7X", "iEAd7bewNPp", "B9SpXIiSVtq", "neUesiJ5l2Z", "sLMmIKYklw9" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \n\n\nWe thank again the reviewer for the additional feedback. \n\nThe main advantage of MagNet is that it is generally **more performant in irregular meshes**, as it can be observed for both 1D and 2D experiments (specially with regards to MPNN, as it is more performant than FNO on almost all counts, except for ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "9kv4QPgSRD", "W7lP5IYr9Rv", "LG_p7XtciLg", "XRRBQNJmMaG", "XLWU2wSkibg", "XLWU2wSkibg", "R_9NxRYLr7X", "V2FOeCgodPD", "sLMmIKYklw9", "neUesiJ5l2Z", "B9SpXIiSVtq", "nips_2022_bx2roi8hca8", "nips_2022_bx2roi8hca8", "nips_2022_bx2roi8hca8", "nips_2022_bx2roi8hca8" ]
nips_2022_U2s1GFDDihU
Society of Agents: Regrets Bounds of Concurrent Thompson Sampling
We consider the concurrent reinforcement learning problem where $n$ agents simultaneously learn to make decisions in the same environment by sharing experience with each other. Existing works in this emerging area have empirically demonstrated that Thompson sampling (TS) based algorithms provide a particularly attractive alternative for inducing cooperation, because each agent can independently sample a belief environment (and compute a corresponding optimal policy) from the joint posterior computed by aggregating all agents' data, which induces diversity in exploration among agents while benefiting from the shared experience of all agents. However, theoretical guarantees in this area remain under-explored; in particular, no regret bound is known for TS-based concurrent RL algorithms. In this paper, we fill in this gap by considering two settings. In the first, we study the simple finite-horizon episodic RL setting, where TS is naturally adapted into the concurrent setup by having each agent sample from the current joint posterior at the beginning of each episode. We establish a $\tilde{O}(HS\sqrt{\frac{AT}{n}})$ per-agent regret bound, where $H$ is the horizon of the episode, $S$ is the number of states, $A$ is the number of actions, $T$ is the number of episodes and $n$ is the number of agents. In the second setting, we consider the infinite-horizon RL problem, where a policy is measured by its long-run average reward. Here, despite not having natural episodic breakpoints, we show that by a doubling-horizon schedule, we can adapt TS to the infinite-horizon concurrent learning setting to achieve a regret bound of $\tilde{O}(DS\sqrt{ATn})$, where $D$ is the standard notion of diameter of the underlying MDP and $T$ is the number of timesteps. Note that in both settings, the per-agent regret decreases at an optimal rate of $\Theta(\frac{1}{\sqrt{n}})$, which manifests the power of cooperation in concurrent RL.
Accept
We thank the authors for their submission. This well-written paper presents no-Bayesian-regret algorithms for multi-agent cooperative RL. This is the first work to study Thompson sampling algorithms in the context of concurrent RL - where agents interact with the MDP simultaneously. An experimental evaluation is also provided.
train
[ "xEnvCG0zAgP", "UQm2NT6wMon", "xUaTBrQXKaF7", "rYDltSpn1F", "0Zg7BrULnjn", "xXDRPEEZtSU", "L02uadxVMng", "uEFOWV3D5KG", "b465fPP2_EO", "1zwzRYWvlJ9", "uQf4-mSv9NG", "rXTOHJdJMlq", "Cy6B1R9y3dS", "6ktfbUfiwh" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your continued efforts in the review and for your support!", " Thank you, I am quite happy with the authors' improvements and have decided to increase my score to reflect this.", " We really appreciate your support and all the valuable suggestions. In the following we'll continue to explore the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "UQm2NT6wMon", "L02uadxVMng", "rYDltSpn1F", "0Zg7BrULnjn", "xXDRPEEZtSU", "uEFOWV3D5KG", "uQf4-mSv9NG", "rXTOHJdJMlq", "Cy6B1R9y3dS", "6ktfbUfiwh", "nips_2022_U2s1GFDDihU", "nips_2022_U2s1GFDDihU", "nips_2022_U2s1GFDDihU", "nips_2022_U2s1GFDDihU" ]
nips_2022_zSkYVeX7bC4
Exploring Length Generalization in Large Language Models
The ability to extrapolate from short problem instances to longer ones is an important form of out-of-distribution generalization in reasoning tasks, and is crucial when learning from datasets where longer problem instances are rare. These include theorem proving, solving quantitative mathematics problems, and reading/summarizing novels. In this paper, we run careful empirical studies exploring the length generalization capabilities of transformer-based language models. We first establish that naively finetuning transformers on length generalization tasks shows significant generalization deficiencies independent of model scale. We then show that combining pretrained large language models' in-context learning abilities with scratchpad prompting (asking the model to output solution steps before producing an answer) results in a dramatic improvement in length generalization. We run careful failure analyses on each of the learning modalities and identify common sources of mistakes that highlight opportunities in equipping language models with the ability to generalize to longer problems.
Accept
This paper studies the problem of length generalization for LLMs, and shows that fine-tuning and prompting alone are not sufficient to address it. It introduces a "scratchpad" extension to LLM generation, which prompts the model to produce and condition on intermediate values during reasoning tasks, as a promising way of addressing this issue. The reviewers were unanimous in recommending acceptance, and thus I am happy to follow suit.
train
[ "z5IiE4OwSjQ", "pA8NGkngvVr", "68HQAaLrYc8", "17O2gVhaust", "1anBiiFBTuz", "9C_yJmkFomw", "2_pDCcls3N", "mhsHs93kh6f", "uqrWJ9XgIkF", "EW8oAY2UCLM", "qqjv8sqO8Ve", "lzqXsQU2LdI", "66-le9Jbgeq" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The response is very detailed and helpful. Particularly the inclusion of missing training details, further clarifications about experimental design decisions, and discussion of results.\n\nTo me the fact that the prompt does not always lead to {0, 1} output space suggests that the model has a lot of flexibility f...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ "17O2gVhaust", "66-le9Jbgeq", "66-le9Jbgeq", "66-le9Jbgeq", "lzqXsQU2LdI", "qqjv8sqO8Ve", "EW8oAY2UCLM", "EW8oAY2UCLM", "nips_2022_zSkYVeX7bC4", "nips_2022_zSkYVeX7bC4", "nips_2022_zSkYVeX7bC4", "nips_2022_zSkYVeX7bC4", "nips_2022_zSkYVeX7bC4" ]
nips_2022_TItRK4VP9X2
Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets
Deep neural networks (DNNs) typically require massive data to train on, which is a hurdle for numerous practical domains. Facing the data shortfall, one viable option is to acquire domain-specific training data from external uncensored sources, such as open webs or third-party data collectors. However, the quality of such acquired data is often not rigorously scrutinized, and one cannot easily rule out the risk of `"poisoned" examples being included in such unreliable datasets, resulting in unreliable trained models which pose potential risks to many high-stake applications. While existing options usually suffer from high computational costs or assumptions on clean data access, this paper attempts to detect backdoors for potential victim models with minimal prior knowledge. In particular, provided with a trained model, users are assumed to (1) have no prior knowledge of whether it is already poisoned, or what the target class/percentage of samples is poisoned, and (2) have no access to a clean sample set from the same training distribution, nor any trusted model trained on such clean data. To tackle this challenging scenario, we first observe the contrasting channel-level statistics between the backdoor trigger and clean image features, and consequently, how they can be differentiated by progressive channel shuffling. We then propose the randomized channel shuffling method for backdoor-targeted class detection, which requires only a few feed-forward passes. It thus incurs minimal overheads and demands no clean sample nor prior knowledge. We further explore a “full” clean data-free setting, where neither the target class detection nor the trigger recovery can access the clean data. Extensive experiments are conducted with three datasets (CIFAR-10, GTSRB, Tiny ImageNet), three architectures (AlexNet, ResNet-20, SENet-18), and three attacks (BadNets, clean label attack, and WaNet). Results consistently endorse the effectiveness of our proposed technique in backdoor model detection, with margins of 0.291 ~ 0.640 AUROC over the current state-of-the-arts. Codes are available at https://github.com/VITA-Group/Random-Shuffling-BackdoorDetect.
Accept
This work proposes channel shuffling as a way to distinguish between backdoor and clean examples, based on the hypothesis that trigger features are sparsely encoded and activated in only a few channels. Reviewers all agreed that it is a pretty intuitive yet effective method and that it had solid evaluations after the rebuttal. Reviewer oFCu had the concern that this paper is entirely empirical and has no supporting theory. I think this is OK given the precedent of papers in this field and also because the framing of security and privacy is a more practically oriented one anyway. The most critical reviewer (PjHg) pointed out that the benchmarks and related work contextualization were severely lacking. After the rebuttal period, these concerns were mostly alleviated. Given the strength of the evaluations and the novelty of the idea, I believe this paper should be accepted. That said, please address the following for the camera-ready version: Please improve the writing as this was brought up by several reviewers. There are a lot of grammatical errors. Please improve the discussion of the limitations section of this detection method, as suggested by reviewer PjHg.
test
[ "yZM6gKw1rw", "8KP3RILgeRC", "aPmNbSz_YZW", "2s5v2zQTwHj", "lmttsuTLFQk", "hsYO5n_39tW4", "ykYovz2EJJz", "v_Ar0AQqe1x", "QYKm_KjhO18", "zg4XCNVpRmN", "r5mlU-vjlCw", "YYOPNBwZEJO", "usBN4WOO7uo", "FzvsH1S6fmg", "XJ20Ap7QjA", "2W2lo_V5BxC", "AOPa77iOyr-", "WAK_xSL09o", "dosRZyRzV5"...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " Dear Reviewer PjHg,\n\nWe appreciate the valuable advice and the positive assessment. \n\nIn the revised version, we have made several revisions: \n- Provide results of the larger dataset and more recent architectures (line 310-314 & table 1); \n- Discuss our method does not need retraining (line 137-147); \n- Fu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "lmttsuTLFQk", "hsYO5n_39tW4", "2s5v2zQTwHj", "2W2lo_V5BxC", "v_Ar0AQqe1x", "XJ20Ap7QjA", "QYKm_KjhO18", "_VVTrEV0IP1", "usBN4WOO7uo", "nips_2022_TItRK4VP9X2", "_VVTrEV0IP1", "_VVTrEV0IP1", "WAK_xSL09o", "dosRZyRzV5", "dosRZyRzV5", "AOPa77iOyr-", "nips_2022_TItRK4VP9X2", "nips_2022...
nips_2022_j9JL96S8Vl
Towards Practical Few-shot Query Sets: Transductive Minimum Description Length Inference
Standard few-shot benchmarks are often built upon simplifying assumptions on the query sets, which may not always hold in practice. In particular, for each task at testing time, the classes effectively present in the unlabeled query set are known a priori, and correspond exactly to the set of classes represented in the labeled support set. We relax these assumptions and extend current benchmarks, so that the query-set classes of a given task are unknown, but just belong to a much larger set of possible classes. Our setting could be viewed as an instance of the challenging yet practical problem of extremely imbalanced $K$-way classification, $K$ being much larger than the values typically used in standard benchmarks, and with potentially irrelevant supervision from the support set. Expectedly, our setting incurs drops in the performances of state-of-the-art methods. Motivated by these observations, we introduce a \textbf{P}rim\textbf{A}l \textbf{D}ual Minimum \textbf{D}escription \textbf{LE}ngth (\textbf{PADDLE}) formulation, which balances data-fitting accuracy and model complexity for a given few-shot task, under supervision constraints from the support set. Our constrained MDL-like objective promotes competition among a large set of possible classes, preserving only effective classes that befit better the data of a few-shot task. It is hyper-parameter free, and could be applied on top of any base-class training. Furthermore, we derive a fast block coordinate descent algorithm for optimizing our objective, with convergence guarantee, and a linear computational complexity at each iteration. Comprehensive experiments over the standard few-shot datasets and the more realistic and challenging \textit{i-Nat} dataset show highly competitive performances of our method, more so when the numbers of possible classes in the tasks increase. Our code is publicly available at \url{https://github.com/SegoleneMartin/PADDLE}.
Accept
The reviewers have raised some concerns on the baselines used in the empirical evaluation. However, the additional experiments by the authors seem to have convinced the reviewers. Please make sure to add these additional experiments in the camera-ready version of the paper.
train
[ "7UU3jY3ZNx2", "m-kyG40WvgE", "Rk1D8VlvNg", "RuYkXg4g7AV", "jrRlqBEoX4", "5VzJMxqCYaB", "HYqMDi4qV9e", "TyRamMl96d5", "FD6QH-zS57c", "qPFKXA1Ex_2", "Obg_TOewt6m", "CE3bHArjAW" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have no other concerns. I would keep my score.", " \n**Q:** The optimization objective of transductive few-shot learning methods has a cross-entropy loss and entropy-based loss, but equation (2) has data-fitting accuracy and partition complexity. What is the relationship between them? }\n\n**A:** In fact, unl...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 3 ]
[ "HYqMDi4qV9e", "Rk1D8VlvNg", "CE3bHArjAW", "Obg_TOewt6m", "5VzJMxqCYaB", "qPFKXA1Ex_2", "FD6QH-zS57c", "nips_2022_j9JL96S8Vl", "nips_2022_j9JL96S8Vl", "nips_2022_j9JL96S8Vl", "nips_2022_j9JL96S8Vl", "nips_2022_j9JL96S8Vl" ]
nips_2022_Fhty8PgFkDo
Differentially Private Graph Learning via Sensitivity-Bounded Personalized PageRank
Personalized PageRank (PPR) is a fundamental tool in unsupervised learning of graph representations such as node ranking, labeling, and graph embedding. However, while data privacy is one of the most important recent concerns, existing PPR algorithms are not designed to protect user privacy. PPR is highly sensitive to the input graph edges: the difference of only one edge may cause a big change in the PPR vector, potentially leaking private user data. In this work, we propose an algorithm which outputs an approximate PPR and has provably bounded sensitivity to input edges. In addition, we prove that our algorithm achieves similar accuracy to non-private algorithms when the input graph has large degrees. Our sensitivity-bounded PPR directly implies private algorithms for several tools of graph learning, such as, differentially private (DP) PPR ranking, DP node classification, and DP node embedding. To complement our theoretical analysis, we also empirically verify the practical performances of our algorithms.
Accept
The paper studies computing PPR in the differential privacy setting. Given the importance of PPR in real-world applications, we recommend accepting the paper as it brings an important problem to the DP community. However, we encourage the authors to incorporate the comments from the reviewers, make sure that all the details of the proofs are made available in the final version, and clarify any comments the reviewers raised. In particular, we encourage the authors to adequately address the criticisms of @Reviewer MDNv in the true spirit of science.
test
[ "BSYeev9FCb", "mCnyBcEog9", "pbxYJoPYjON2", "AYVTm2at-9F", "hsq98U-sJPs", "7j9MPkEGsaJ", "9RdaPU74JYY", "mnTfsxI_dki", "bh5ispKosWr", "irV9zIqcdL1" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your comments. We think we addressed all your major concerns in our response. Would be great if you can comment on our responses. Thanks.", " We would like to thank all the reviewers again for their comments. \n\nAs the discussion period ends today, we would like gently to request the reviewers...
[ -1, -1, -1, -1, -1, -1, 3, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5, 2 ]
[ "7j9MPkEGsaJ", "nips_2022_Fhty8PgFkDo", "irV9zIqcdL1", "bh5ispKosWr", "mnTfsxI_dki", "9RdaPU74JYY", "nips_2022_Fhty8PgFkDo", "nips_2022_Fhty8PgFkDo", "nips_2022_Fhty8PgFkDo", "nips_2022_Fhty8PgFkDo" ]
nips_2022_4G1Sfp_1sz7
Generating Training Data with Language Models: Towards Zero-Shot Language Understanding
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural language processing tasks: Unidirectional PLMs (e.g., GPT) are well known for their superior text generation capabilities; bidirectional PLMs (e.g., BERT) have been the prominent choice for natural language understanding (NLU) tasks. While both types of models have achieved promising few-shot learning performance, their potential for zero-shot learning has been underexplored. In this paper, we present a simple approach that uses both types of PLMs for fully zero-shot learning of NLU tasks without requiring any task-specific data: A unidirectional PLM generates class-conditioned texts guided by prompts, which are used as the training data for fine-tuning a bidirectional PLM. With quality training data selected based on the generation probability and regularization techniques (label smoothing and temporal ensembling) applied to the fine-tuning stage for better generalization and stability, our approach demonstrates strong performance across seven classification tasks of the GLUE benchmark (e.g., 72.3/73.8 on MNLI-m/mm and 92.8 on SST-2), significantly outperforming zero-shot prompting methods and achieving even comparable results to strong few-shot approaches using 32 training samples per class.
Accept
The paper introduces three theoretical analyses explaining the effectiveness of the transformer on long sequences. The reviewers raise three aspects in which the paper can be further improved. First is the writing of the paper, which may be too dense and hard to follow. Second, the authors may want to elaborate a bit more on what deeper insights the proofs can bring to the community, beyond serving as a proof of the transformer’s effectiveness on long sequences. Third, the authors may consider adding empirical results from the experiments to support the theory. For these reasons, I think the paper is not ready to be accepted for now.
test
[ "U6XGzBNvOGU", "7ZsyAu-jaNy", "MTDgf_AEgaT", "VOyWzGeNizv", "G3SSQjQ4A83", "__HEzceqQ-7", "gBDUYdkfB9V", "X1ogFUw0Kmp", "pvXmo2qxWXv", "EZGYE44wVzG", "qBnOLGP2-m", "YQwpb5EpLhV6", "gNCiiUlsq_5n", "nZtDcO-jV2p", "LQkDloDQOBd", "RwoArVe8WHT", "u_jhedqFr3M" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The statement in the update doesn't recognize that the synthetic data is the task-specific data for the classifier. The whole paper will probably need a few rounds of revision for reframing the entire story and shifting it away from calling it zero-shot (starting with updating the title). This is not a suggestion...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "VOyWzGeNizv", "gBDUYdkfB9V", "__HEzceqQ-7", "G3SSQjQ4A83", "YQwpb5EpLhV6", "X1ogFUw0Kmp", "qBnOLGP2-m", "pvXmo2qxWXv", "gNCiiUlsq_5n", "nips_2022_4G1Sfp_1sz7", "u_jhedqFr3M", "RwoArVe8WHT", "LQkDloDQOBd", "nips_2022_4G1Sfp_1sz7", "nips_2022_4G1Sfp_1sz7", "nips_2022_4G1Sfp_1sz7", "ni...
nips_2022_Uynr3iPhksa
Recurrent Memory Transformer
Transformer-based models show their effectiveness across multiple domains and tasks. Self-attention makes it possible to combine information from all sequence elements into context-aware representations. However, global and local information has to be stored mostly in the same element-wise representations. Moreover, the length of an input sequence is limited by the quadratic computational complexity of self-attention. In this work, we propose and study a memory-augmented segment-level recurrent Transformer (RMT). Memory makes it possible to store and process local and global information, as well as to pass information between segments of a long sequence with the help of recurrence. We implement a memory mechanism with no changes to the Transformer model by adding special memory tokens to the input or output sequence. Then the model is trained to control both memory operations and the processing of sequence representations. Results of experiments show that RMT performs on par with the Transformer-XL on language modeling for smaller memory sizes and outperforms it for tasks that require longer sequence processing. We show that adding memory tokens to Tr-XL improves its performance. This makes Recurrent Memory Transformer a promising architecture for applications that require learning long-term dependencies and general-purpose memory processing, such as algorithmic tasks and reasoning.
Accept
The paper proposes a memory-augmented architecture for transformers to deal with segment-based processing of long sequences. The idea is simple and easy to follow. A special memory token is added to the transformer and corresponding memory operations are introduced to control the storage of information from previous segments. The experiments show the comparison between the proposed method and the Transformer-XL architecture. The reviews are overall positive.
train
[ "xh3hWNlSDh", "gvVJKk0RJBO", "n7Uz2HFJ777", "TXhmduzzLJN", "qw9EWphwGEG", "WEP0iwWmBhW" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Q1. We seriously take your concerns about reproducibility and have added more details about the models, optimizer, tokenization and training procedure to the updated version of the paper. We also provide a full set of hyperparameters used in each of experimental runs in supplementary materials. Hopefully, provid...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "WEP0iwWmBhW", "qw9EWphwGEG", "TXhmduzzLJN", "nips_2022_Uynr3iPhksa", "nips_2022_Uynr3iPhksa", "nips_2022_Uynr3iPhksa" ]
nips_2022_lArVAWWpY3
Statistical, Robustness, and Computational Guarantees for Sliced Wasserstein Distances
Sliced Wasserstein distances preserve properties of classic Wasserstein distances while being more scalable for computation and estimation in high dimensions. The goal of this work is to quantify this scalability from three key aspects: (i) empirical convergence rates; (ii) robustness to data contamination; and (iii) efficient computational methods. For empirical convergence, we derive fast rates with explicit dependence of constants on dimension, subject to log-concavity of the population distributions. For robustness, we characterize minimax optimal, dimension-free robust estimation risks, and show an equivalence between robust sliced 1-Wasserstein estimation and robust mean estimation. This enables lifting statistical and algorithmic guarantees available for the latter to the sliced 1-Wasserstein setting. Moving on to computational aspects, we analyze the Monte Carlo estimator for the average-sliced distance, demonstrating that larger dimension can result in faster convergence of the numerical integration error. For the max-sliced distance, we focus on a subgradient-based local optimization algorithm that is frequently used in practice, albeit without formal guarantees, and establish an $O(\epsilon^{-4})$ computational complexity bound for it. Our theory is validated by numerical experiments, which altogether provide a comprehensive quantitative account of the scalability question.
Accept
All the reviewers are positive about the paper; they found that it is well written and provides very interesting theoretical as well as practical contributions that are relevant for machine learning practice.
val
[ "GRKRbV1SCA9", "WnWNHReAeW_", "OD_BtP8VkwV", "Y2SFFhWhciW", "KZEgiUScBh4", "ZIyolsNRS-7g", "RaG5AGauW2s", "zzzhm4GpIsup", "vvRTfWVQE8t", "sOynOTLUIt", "gp-p-2DrZkc", "3zNruuxDMJ9", "J4iF4gR_aOK", "w-KgCZpQHtj", "k-Ep5xL9mqr", "JTBdZrOIjf1" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate the reviewer reading our rebuttal and for their positive feedback. Regarding Remark 2, the $O_d(1)$ eigenvalue bound only holds for specific choices of $m$, e.g. $m(j) = a^j$, so that the sum is bounded independently of $d$. Of course, not every circular matrix has bounded eigenvalues; we just meant...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 2 ]
[ "WnWNHReAeW_", "w-KgCZpQHtj", "3zNruuxDMJ9", "KZEgiUScBh4", "vvRTfWVQE8t", "RaG5AGauW2s", "zzzhm4GpIsup", "JTBdZrOIjf1", "k-Ep5xL9mqr", "w-KgCZpQHtj", "w-KgCZpQHtj", "J4iF4gR_aOK", "nips_2022_lArVAWWpY3", "nips_2022_lArVAWWpY3", "nips_2022_lArVAWWpY3", "nips_2022_lArVAWWpY3" ]
nips_2022_xz-2eyIh7u
Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds
We consider a linear stochastic bandit problem involving $M$ agents that can collaborate via a central server to minimize regret. A fraction $\alpha$ of these agents are adversarial and can act arbitrarily, leading to the following tension: while collaboration can potentially reduce regret, it can also disrupt the process of learning due to adversaries. In this work, we provide a fundamental understanding of this tension by designing new algorithms that balance the exploration-exploitation trade-off via carefully constructed robust confidence intervals. We also complement our algorithms with tight analyses. First, we develop a robust collaborative phased elimination algorithm that achieves $\tilde{O}\left(\alpha+ 1/\sqrt{M}\right) \sqrt{dT}$ regret for each good agent; here, $d$ is the model-dimension and $T$ is the horizon. For small $\alpha$, our result thus reveals a clear benefit of collaboration despite adversaries. Using an information-theoretic argument, we then prove a matching lower bound, thereby providing the first set of tight, near-optimal regret bounds for collaborative linear bandits with adversaries. Furthermore, by leveraging recent advances in high-dimensional robust statistics, we significantly extend our algorithmic ideas and results to (i) the generalized linear bandit model that allows for non-linear observation maps; and (ii) the contextual bandit setting that allows for time-varying feature vectors.
Accept
This work studies an interesting collaborative linear bandits problem and makes solid technical contributions (efficient algorithms, optimal regret, and generalization to other settings). Clear accept. Please do address the minor issues pointed out by the reviewers in the final version.
test
[ "Cu5IV-3geo4", "7v02fwjdQeF", "Ul3dL9PrYLu", "HqV1XAlR7ZM", "NhPkGL_APhS", "zfYwfzfQefT", "JwvoB54o8Tn", "jchOyhyIK6-", "kCt-i26uZ7a", "Exlo2u4UIfU", "h3BeGCyTM_", "PZp1Lo5z68", "MGgPrtXpdQF" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. It addresses my questions/comments. I will keep my already positive rating unchanged.", " Thank you for the detailed response! The response addresses my questions. I would like to keep my score unchanged.", " Thank you very much for your insightful review of our paper, and for your...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "jchOyhyIK6-", "HqV1XAlR7ZM", "Exlo2u4UIfU", "NhPkGL_APhS", "MGgPrtXpdQF", "JwvoB54o8Tn", "h3BeGCyTM_", "kCt-i26uZ7a", "PZp1Lo5z68", "nips_2022_xz-2eyIh7u", "nips_2022_xz-2eyIh7u", "nips_2022_xz-2eyIh7u", "nips_2022_xz-2eyIh7u" ]
nips_2022_gIGeujOKfyV
Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules
Neural ordinary differential equations (ODEs) have attracted much attention as continuous-time counterparts of deep residual neural networks (NNs), and numerous extensions for recurrent NNs have been proposed. Since the 1980s, ODEs have also been used to derive theoretical results for NN learning rules, e.g., the famous connection between Oja's rule and principal component analysis. Such rules are typically expressed as additive iterative update processes which have straightforward ODE counterparts. Here we introduce a novel combination of learning rules and Neural ODEs to build continuous-time sequence processing nets that learn to manipulate short-term memory in rapidly changing synaptic connections of other nets. This yields continuous-time counterparts of Fast Weight Programmers and linear Transformers. Our novel models outperform the best existing Neural Controlled Differential Equation based models on various time series classification tasks, while also addressing their fundamental scalability limitations. Our code is public.
Accept
A novel blend of linear transformers and "Fast Weight Programmers" is proposed, in which slow weights generate the embedding and learning-rule parameterizations for a Neural ODE-based evolution of the "fast weights" of another network. The new architectures appear to provide superior performance on several benchmarks in comparison to NCDE/NRDE baselines. The reviewers agree on the core contributions and recommend strengthening some presentation aspects of the paper, possibly reorganizing the main and supplementary materials so the paper is more self-contained.
train
[ "p3DjD0hoJ4U", "KG0qVNmpikp", "T_7RXOMEwl", "UpOg_amiEyk", "zLGX0rKHo1g", "q5Fcuj0I_iF", "INQQ2zrZqeE", "K_tONgc0GAi", "wRQ5ziL6uR2", "MxCBXwgE4Q", "W7zCtR0O5FP", "NPUtbV9bVxC", "-kZuL0AUMz3", "__VurSc46CQ", "3C4nr_B30M" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your response and the updated score!", " Thank you to the author(s) for their response. In view of the responses and clarification, I am amending my score from 6 to 7.", " This is just a friendly reminder about the NeurIPS rebuttal deadline today. Thank you!", " This is just a friend...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "KG0qVNmpikp", "T_7RXOMEwl", "INQQ2zrZqeE", "wRQ5ziL6uR2", "q5Fcuj0I_iF", "K_tONgc0GAi", "3C4nr_B30M", "__VurSc46CQ", "MxCBXwgE4Q", "W7zCtR0O5FP", "-kZuL0AUMz3", "nips_2022_gIGeujOKfyV", "nips_2022_gIGeujOKfyV", "nips_2022_gIGeujOKfyV", "nips_2022_gIGeujOKfyV" ]
nips_2022_xOK40an4ag1
Operative dimensions in unconstrained connectivity of recurrent neural networks
Recurrent Neural Networks (RNN) are commonly used models to study neural computation. However, a comprehensive understanding of how dynamics in RNN emerge from the underlying connectivity is largely lacking. Previous work derived such an understanding for RNN fulfilling very specific constraints on their connectivity, but it is unclear whether the resulting insights apply more generally. Here we study how network dynamics are related to network connectivity in RNN trained without any specific constraints on several tasks previously employed in neuroscience. Despite the apparent high-dimensional connectivity of these RNN, we show that a low-dimensional, functionally relevant subspace of the weight matrix can be found through the identification of \textit{operative} dimensions, which we define as components of the connectivity whose removal has a large influence on local RNN dynamics. We find that a weight matrix built from only a few operative dimensions is sufficient for the RNN to operate with the original performance, implying that much of the high-dimensional structure of the trained connectivity is functionally irrelevant. The existence of a low-dimensional, operative subspace in the weight matrix simplifies the challenge of linking connectivity to network dynamics and suggests that independent network functions may be placed in specific, separate subspaces of the weight matrix to avoid catastrophic forgetting in continual learning.
Accept
This paper defines a new way to identify a low-dimensional subspace in recurrent weight matrices of a recurrent neural network that is important for computation. All reviewers agreed the approach is novel and insightful.
train
[ "1TcmOoIQ0J", "0ECezAlkGUy", "pM9pnWhFOmp", "oAHBowLHtR", "1mA_Rgk5SU", "8-g0K3A0Lly", "OW5EnrUw16k", "MpMKQybqw1i", "bOnyIfQdhg_", "Ftw1q8fF3DK", "nHa-U3EV8wq", "Lq--nD4ty4U", "BBw5NgkBKzN", "8y1lY04VfPI", "m_CfEF-4Xp6", "LqIy6HjE81V", "ydd29rWKkgF" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for taking our response and the revised manuscript into account. \nWe agree with the reviewer’s comment that the low-dimensional subspaces spanned by operative dimensions might be harder to interpret than the low-rank solutions presented by Mastrogiuseppe et al., which have yet lower dimensi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "8-g0K3A0Lly", "MpMKQybqw1i", "OW5EnrUw16k", "1mA_Rgk5SU", "Lq--nD4ty4U", "bOnyIfQdhg_", "nHa-U3EV8wq", "bOnyIfQdhg_", "ydd29rWKkgF", "LqIy6HjE81V", "m_CfEF-4Xp6", "8y1lY04VfPI", "nips_2022_xOK40an4ag1", "nips_2022_xOK40an4ag1", "nips_2022_xOK40an4ag1", "nips_2022_xOK40an4ag1", "nips...
nips_2022_06OVtS901hF
Multi-Class $H$-Consistency Bounds
We present an extensive study of $H$-consistency bounds for multi-class classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set $H$, expressed in terms of the surrogate loss estimation error of that predictor. They are stronger and more significant guarantees than Bayes-consistency, $H$-calibration or $H$-consistency, and more informative than excess error bounds derived for $H$ being the family of all measurable functions. We give a series of new $H$-consistency bounds for surrogate multi-class losses, including max losses, sum losses, and constrained losses, both in the non-adversarial and adversarial cases, and for different differentiable or convex auxiliary functions used. We also prove that no non-trivial $H$-consistency bound can be given in some cases. To our knowledge, these are the first $H$-consistency bounds proven for the multi-class setting. Our proof techniques are also novel and likely to be useful in the analysis of other such guarantees.
Accept
This work studies the generalization error in the multi-class classification setting for the ERM minimizer with a surrogate loss. The authors present non-asymptotic guarantees using the concept of H-consistency. The present work extends a previous work which considered only binary classification. The reviewers have found the contribution important and relevant. They also found that the presentation could be improved (e.g., by better referencing known proof techniques) to make the paper more accessible to non-experts. I do recommend acceptance and ask the authors to revise the paper accordingly for the camera-ready version.
train
[ "Pp2adMIzji", "vCLFsErGxwU", "kDhXLRzgPTb", "4aN1d1d3gqg", "S5TP5jdpnmB", "WlkS5mfRci0", "wqe4oBJct-", "La3Nx1C_MDn" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the comprehensive answer. I maintain my score for the paper, but as suggested by other reviewers, it would be better to add more explanations or discussions to the results.", " Dear reviewers,\n\nThank you all for your useful feedback. We have carefully addressed all the questions raised. Please l...
[ -1, -1, -1, -1, -1, 7, 7, 4 ]
[ -1, -1, -1, -1, -1, 3, 1, 3 ]
[ "S5TP5jdpnmB", "nips_2022_06OVtS901hF", "La3Nx1C_MDn", "wqe4oBJct-", "WlkS5mfRci0", "nips_2022_06OVtS901hF", "nips_2022_06OVtS901hF", "nips_2022_06OVtS901hF" ]
nips_2022_zdmYnIRXvKS
Biologically plausible solutions for spiking networks with efficient coding
Understanding how the dynamics of neural networks is shaped by the computations they perform is a fundamental question in neuroscience. Recently, the framework of efficient coding proposed a theory of how spiking neural networks can compute low-dimensional stimulus signals with high efficiency. Efficient spiking networks are based on time-dependent minimization of a loss function related to information coding with spikes. To inform the understanding of the function and dynamics of biological networks in the brain, however, the mathematical models have to be informed by biology and obey the same constraints as biological networks. Currently, spiking network models of efficient coding have been extended to include some features of biological plausibility, such as architectures with excitatory and inhibitory neurons. However, biological realism of efficient coding theories is still limited to simple cases and does not include single neuron and network properties that are known to be key in biological circuits. Here, we revisit the theory of efficient coding with spikes to develop spiking neural networks that are closer to biological circuits. Namely, we find a biologically plausible spiking model realizing efficient coding in the case of a generalized leaky integrate-and-fire network with excitatory and inhibitory units, equipped with fast and slow synaptic currents, local homeostatic currents such as spike-triggered adaptation, hyperpolarization-activated rebound current, heterogeneous firing thresholds and resets, heterogeneous postsynaptic potentials, and structured, low-rank connectivity. We show how the complexity of E-E connectivity matrix shapes network responses.
Accept
This paper proposes a derivation of spiking networks from an efficient signal reconstruction cost function. The derivation leads to more biological features than previous ones, including fast and slow synaptic currents and rebound currents. This is a nice addition to the growing literature in this field. The negative review below mostly focused on the formatting of the paper. The AC did not see a violation of the NeurIPS formatting policy, but agrees that the presentation could be clearer.
test
[ "_Dnsi9qElx", "Qx9TJXNcXQX", "3D6xpj2PuH7", "AcsBAOi5o8", "fQP7oLcVhQH", "5Wfeo37RjpA", "GxCJT4D_IX", "xHt-0IUOcu4", "deglsnKD_x1", "VqBRWjbngi", "da3fI9o5fpHQ", "gOD6rToJtTA", "eDMULuZ-SoV", "M6wVAmZHsCn", "CJfHDyybbOo", "i2ms-tfr2Jy" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am happy with the author's response. I think they have made an effort to address all my comments. Given the authors response and the other reviews, I see no reason to change my ratings. I still think this paper could be accepted in the conference.", " We apologise, it seems that something went wrong with the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "eDMULuZ-SoV", "3D6xpj2PuH7", "AcsBAOi5o8", "5Wfeo37RjpA", "5Wfeo37RjpA", "gOD6rToJtTA", "xHt-0IUOcu4", "deglsnKD_x1", "VqBRWjbngi", "da3fI9o5fpHQ", "M6wVAmZHsCn", "i2ms-tfr2Jy", "CJfHDyybbOo", "nips_2022_zdmYnIRXvKS", "nips_2022_zdmYnIRXvKS", "nips_2022_zdmYnIRXvKS" ]
nips_2022_HLzjd09oRx
Improving Multi-Task Generalization via Regularizing Spurious Correlation
Multi-Task Learning (MTL) is a powerful learning paradigm to improve generalization performance via knowledge sharing. However, existing studies find that MTL could sometimes hurt generalization, especially when two tasks are less correlated. One possible reason that hurts generalization is spurious correlation, i.e., some knowledge is spurious and not causally related to task labels, but the model could mistakenly utilize them and thus fail when such correlation changes. In MTL setup, there exist several unique challenges of spurious correlation. First, the risk of having non-causal knowledge is higher, as the shared MTL model needs to encode all knowledge from different tasks, and causal knowledge for one task could be potentially spurious to the other. Second, the confounder between task labels brings in a different type of spurious correlation to MTL. Given such label-label confounders, we theoretically and empirically show that MTL is prone to taking non-causal knowledge from other tasks. To solve this problem, we propose Multi-Task Causal Representation Learning (MT-CRL) framework. MT-CRL aims to represent multi-task knowledge via disentangled neural modules, and learn which module is causally related to each task via MTL-specific invariant regularization. Experiments show that MT-CRL could enhance MTL model's performance by 5.5% on average over Multi-MNIST, MovieLens, Taskonomy, CityScape, and NYUv2, and show it could indeed alleviate spurious correlation problem.
Accept
This work studies how to alleviate potential spurious correlations in multi-task learning, and proposes a Multi-Task Causal Representation Learning (MT-CRL) framework, which aims to represent multi-task knowledge via disentangled neural modules, and learns which module is causally related to each task via MTL-specific invariant regularization. The proposed method sounds reasonable, the results are solid, and the paper is well written. Overall, it is a good piece of work.
train
[ "RGe_IU26k_", "K9FxtLDZtT", "wjCgdhWqbVr", "VaDEGByrqDs", "9eF63q_hyMS", "ZlFputa1uL8", "Tl48GdiwjiM", "U_YU-31Tbl7", "9H26FVKrYJM7", "zUazOO2dm8", "ZSlwh_ol_Dp", "r6V0hsyDaHt", "sBGjwXVBDLG", "Vf3pMSzPmMn", "7X7WkbHPpaT", "r_zrnPdTvZW", "nCjItRNZ4sh" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We really appreciate your acknowledgment of our work and your intuitive comments that helped us to improve our paper.", " Thank you for your response. The authors provide experimental evidence that addressed my concerns, I am willing to increase the score to 7.", " Sincerely thanks all the reviewers for their...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "K9FxtLDZtT", "VaDEGByrqDs", "nips_2022_HLzjd09oRx", "9eF63q_hyMS", "nCjItRNZ4sh", "Tl48GdiwjiM", "U_YU-31Tbl7", "r_zrnPdTvZW", "zUazOO2dm8", "ZSlwh_ol_Dp", "r6V0hsyDaHt", "7X7WkbHPpaT", "Vf3pMSzPmMn", "nips_2022_HLzjd09oRx", "nips_2022_HLzjd09oRx", "nips_2022_HLzjd09oRx", "nips_2022...
nips_2022_GeT7TSy1_hL
Online Bipartite Matching with Advice: Tight Robustness-Consistency Tradeoffs for the Two-Stage Model
We study the two-stage vertex-weighted online bipartite matching problem of Feng, Niazadeh, and Saberi (SODA ‘21) in a setting where the algorithm has access to a suggested matching that is recommended in the first stage. We evaluate an algorithm by its robustness $R$, which is its performance relative to that of the optimal offline matching, and its consistency $C$, which is its performance when the advice or the prediction given is correct. We characterize for this problem the Pareto-efficient frontier between robustness and consistency, which is rare in the literature on advice-augmented algorithms, yet necessary for quantifying such an algorithm to be optimal. Specifically, we propose an algorithm that is $R$-robust and $C$-consistent for any $(R,C)$ with $0 \leq R \leq \frac{3}{4}$ and $\sqrt{1-R} + \sqrt{1-C} = 1$, and prove that no other algorithm can achieve a better tradeoff.
Accept
The paper studies two-stage matching with advice and completely characterizes the tradeoff between robustness and consistency. The online matching problem is a central problem in online algorithms with numerous applications such as assigning jobs to machines, impressions to advertisers, etc. The model studied here is a very simplified online model where there are only two stages. On the other hand, the paper is a rare case in advice-augmented algorithms where the tradeoff between robustness and consistency is fully understood. The reviewers all appreciate the tight characterization. There is a minor concern that the paper does not include experimental evaluation. The authors are encouraged to include the simple result for the edge-weighted case (in the author response) to provide more context to the main result.
train
[ "3mLbHZLtptM", "j89cstTDELT", "_6qhHqU3AET", "Kaa2aQ3FJGp", "PMchWdP9QES", "Hxv3TedR5B7", "GLCHmG30mVD", "aenEvMeRxid", "2MczSQdymyM", "7kKkoyxTYC3" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clear explanation.", " Thanks for the detailed and satisfactory response. Although I still find the model of predicting the whole of D1 and having only two stages a bit unnatural and it is arguably a very restricted version of the \"online\" setting, I see your point and have updated my score fro...
[ -1, -1, -1, -1, -1, -1, 7, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Kaa2aQ3FJGp", "_6qhHqU3AET", "7kKkoyxTYC3", "2MczSQdymyM", "aenEvMeRxid", "GLCHmG30mVD", "nips_2022_GeT7TSy1_hL", "nips_2022_GeT7TSy1_hL", "nips_2022_GeT7TSy1_hL", "nips_2022_GeT7TSy1_hL" ]
nips_2022_9XQa6cgLo21
Safety Guarantees for Neural Network Dynamic Systems via Stochastic Barrier Functions
Neural Networks (NNs) have been successfully employed to represent the state evolution of complex dynamical systems. Such models, referred to as NN dynamic models (NNDMs), use iterative noisy predictions of NN to estimate a distribution of system trajectories over time. Despite their accuracy, safety analysis of NNDMs is known to be a challenging problem and remains largely unexplored. To address this issue, in this paper, we introduce a method of providing safety guarantees for NNDMs. Our approach is based on stochastic barrier functions, whose relation with safety are analogous to that of Lyapunov functions with stability. We first show a method of synthesizing stochastic barrier functions for NNDMs via a convex optimization problem, which in turn provides a lower bound on the system's safety probability. A key step in our method is the employment of the recent convex approximation results for NNs to find piece-wise linear bounds, which allow the formulation of the barrier function synthesis problem as a sum-of-squares optimization program. If the obtained safety probability is above the desired threshold, the system is certified. Otherwise, we introduce a method of generating controls for the system that robustly minimize the unsafety probability in a minimally-invasive manner. We exploit the convexity property of the barrier function to formulate the optimal control synthesis problem as a linear program. Experimental results illustrate the efficacy of the method. Namely, they show that the method can scale to multi-dimensional NNDMs with multiple layers and hundreds of neurons per layer, and that the controller can significantly improve the safety probability.
Accept
This paper introduces a new approach to synthesizing barrier certificates for the verification of systems whose dynamics are defined by a neural network. There was some confusion on the exact contribution, but it was clarified in the rebuttal that while there are many prior works that aim to verify systems with neural controllers, this is the first paper that aims to provide guarantees for systems where the dynamics themselves are modeled with a neural network. This is also connected to another concern expressed in the reviews, which is the absence of a proper baseline. As the authors pointed out, the lack of any competing systems that can handle NN dynamic models makes such a baseline comparison difficult, but nevertheless, the authors added an additional baseline as part of the rebuttal process. Overall, I agree with reviewers D3ig and JTrX that this is a technically solid paper acceptable for publication at NeurIPS.
train
[ "gtoXi6nXDqf", "aDSABIYe4Az", "nL-0MN8Xrk4", "TBb8RmwRdG2", "09wVqyX0Uam", "PuKL3GP498pO", "qMalHsj8dq", "iQh1722l2Vp", "tAk-Zdh59Rs", "bgmsKfEUNnh", "hfqQfCPk863", "WVdIfnyWnfg" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their reply.\n\n> 1. The main confusion of the dynamics (3) is due to the fact that g is not a neural network, not to mention that g is totally independent of the state . I think it is reasonable to consider affine control systems. However, since g is just a constant matrix here and the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "nL-0MN8Xrk4", "TBb8RmwRdG2", "iQh1722l2Vp", "09wVqyX0Uam", "PuKL3GP498pO", "tAk-Zdh59Rs", "WVdIfnyWnfg", "hfqQfCPk863", "bgmsKfEUNnh", "nips_2022_9XQa6cgLo21", "nips_2022_9XQa6cgLo21", "nips_2022_9XQa6cgLo21" ]
nips_2022_OFsja-NZGbY
Zonotope Domains for Lagrangian Neural Network Verification
Neural network verification aims to provide provable bounds for the output of a neural network for a given input range. Notable prior works in this domain have either generated bounds using abstract domains, which preserve some dependency between intermediate neurons in the network; or framed verification as an optimization problem and solved a relaxation using Lagrangian methods. A key drawback of the latter technique is that each neuron is treated independently, thereby ignoring important neuron interactions. We provide an approach that merges these two threads and uses zonotopes within a Lagrangian decomposition. Crucially, we can decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks. While each of these problems is provably hard, we provide efficient relaxation methods that are amenable to efficient dual ascent procedures. Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques in both time and bound tightness.
Accept
The authors develop a novel approach for verifying input-output properties of neural networks, such as adversarial robustness, combining the advantages of earlier work on abstract interpretation and Lagrangian decomposition. The reviewers agree that the paper presents a novel, sound, and significant contribution to the literature on NN verification. One reviewer had concerns regarding whether the approach actually shows substantial benefits when used as part of a complete verifier in terms of the number of verified instances, but agreed that this is likely problem-dependent and does not take away from the significance of the work. Another reviewer raised more serious concerns that the authors should address in the final version of the paper. In particular: 1) Fairness of comparing against baselines with weaker intermediate bounds: the authors should revise the paper to state that the baselines they compare against depend on the quality of the intermediate bounds, and that the SOTA versions of these methods use tighter intermediate bounds than the ones used by the authors. 2) Highlight the tradeoff between tightness of verification and speed of verification in the context of complete verification (i.e., branch and bound), and discuss how the authors' approach might compare with approaches like alpha-beta CROWN.
train
[ "5-fWDDLYoS", "IKOSw1Zu5gf", "LHSi9alwjN", "LPDjSIZaHzB", "67sx2OBR3li", "4w7kiNyAb-", "jqheEWGjicN", "BqDJLj581r4", "SxYqP2xJHm_", "40Y3uuSRqXs", "jp4UtRICOWR", "-L00SBK4rpx", "PtQYh8RbcQ", "bSGisz_ik-I", "FE7DQI-Wlmk" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for continuing the discussion. While fair evaluation is quite complicated and nuanced when controlling for all variables, we certainly see the value in attempting to modify competing methods to control for similar runtimes (possibly by doing some limited branching with faster baselines, or stopping earl...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 5, 4, 5 ]
[ "LPDjSIZaHzB", "BqDJLj581r4", "jqheEWGjicN", "40Y3uuSRqXs", "FE7DQI-Wlmk", "bSGisz_ik-I", "PtQYh8RbcQ", "SxYqP2xJHm_", "-L00SBK4rpx", "jp4UtRICOWR", "nips_2022_OFsja-NZGbY", "nips_2022_OFsja-NZGbY", "nips_2022_OFsja-NZGbY", "nips_2022_OFsja-NZGbY", "nips_2022_OFsja-NZGbY" ]
nips_2022_39XK7VJ0sKG
Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions
Integrating functions on discrete domains into neural networks is key to developing their capability to reason about discrete objects. But, discrete domains are (1) not naturally amenable to gradient-based optimization, and (2) incompatible with deep learning architectures that rely on representations in high-dimensional vector spaces. In this work, we address both difficulties for set functions, which capture many important discrete problems. First, we develop a framework for extending set functions onto low-dimensional continuous domains, where many extensions are naturally defined. Our framework subsumes many well-known extensions as special cases. Second, to avoid undesirable low-dimensional neural network bottlenecks, we convert low-dimensional extensions into representations in high-dimensional spaces, taking inspiration from the success of semidefinite programs for combinatorial optimization. Empirically, we observe benefits of our extensions for unsupervised neural combinatorial optimization, in particular with high-dimensional representations.
Accept
This paper proposes a new neural set function extensions methodology that allows approximating functions on sets with neural networks in a way that is amenable to standard gradient-based optimization techniques. The proposed methodology is evaluated on a number of discrete combinatorial optimization problems, such as finding maximum cliques in a graph, on a number of existing graph benchmarks. Allowing neural networks (and other ML methods) to natively handle discrete functions is important in a variety of applications. This paper is a step towards a more robust solution to this problem and a worthy contribution to the conference.
train
[ "ox3LM8Pga6", "PM4Ak1wkWK-", "Ws3BUbQKnRk", "TuoxLto85b", "sLVl82eJmoz", "dOhDS6yIza_", "c4r7Zpt0BEd", "OPesk2RtSqt", "5D7h5_g0Ohf", "lBrB1cckjad" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " \"Thank you very much for your consideration of our response, and for adjusting your score!\"", " We apologise for neglecting to provide more details regarding runtime and memory costs. Here are some clarifications:\n \n- For runtime, scalar SFEs are comparable to the other methods (Erdos and REINFORCE). \nNeur...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 2 ]
[ "TuoxLto85b", "Ws3BUbQKnRk", "OPesk2RtSqt", "dOhDS6yIza_", "nips_2022_39XK7VJ0sKG", "c4r7Zpt0BEd", "5D7h5_g0Ohf", "lBrB1cckjad", "nips_2022_39XK7VJ0sKG", "nips_2022_39XK7VJ0sKG" ]
nips_2022_wwWCZ7sER_C
Algorithms with Prediction Portfolios
The research area of algorithms with predictions has seen recent success showing how to incorporate machine learning into algorithm design to improve performance when the predictions are correct, while retaining worst-case guarantees when they are not. Most previous work has assumed that the algorithm has access to a single predictor. However, in practice, there are many machine learning methods available, often with incomparable generalization guarantees, making it hard to pick a best method a priori. In this work we consider scenarios where multiple predictors are available to the algorithm and the question is how to best utilize them. Ideally, we would like the algorithm's performance to depend on the quality of the {\em best} predictor. However, utilizing more predictions comes with a cost, since we now have to identify which prediction is best. We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling, which have been well-studied in the single predictor setting. For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.
Accept
The paper was generally well-received by the reviewers, who highlighted the significance of the setup, the strength of the results, and the enjoyable writing in their initial reviews. The additional experimental results provided during the rebuttal phase were particularly appreciated. We eventually reached a consensus that the paper is clearly worthy of publication at NeurIPS 2022. I encourage the authors to work the additional experiments into the final version of the paper, and to take all the remaining comments of the reviewers into account.
train
[ "nPT4-t84n0", "jcq9Z2Blh-U", "bW-H-Qk0u_", "jghHHAyK78k", "KYADr6FKZkz", "tlJzTRHjyvD", "CdnjGDFkvDWh", "G5Gh4tHnn-e", "YUcRu4ODT_B", "vjje0k8H5J", "GBLJLDICo9o", "kNNn0j9kbb", "Bcuplni9mjo", "fhdGeMq5W5V", "W_K2i42aJO3", "3f44Y_VXTT" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Ah, ok! This discussion and example has made clearer how to interpret the combination of results in each section (first one proving best prediction is recovered, second one proving oracle best k predictions is approximately recovered). I have updated my score from 5 to 6 in light of this, although I think the pap...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4, 3 ]
[ "jcq9Z2Blh-U", "bW-H-Qk0u_", "tlJzTRHjyvD", "KYADr6FKZkz", "GBLJLDICo9o", "CdnjGDFkvDWh", "G5Gh4tHnn-e", "3f44Y_VXTT", "W_K2i42aJO3", "fhdGeMq5W5V", "Bcuplni9mjo", "nips_2022_wwWCZ7sER_C", "nips_2022_wwWCZ7sER_C", "nips_2022_wwWCZ7sER_C", "nips_2022_wwWCZ7sER_C", "nips_2022_wwWCZ7sER_C...
nips_2022_bot35zOudq
Pushing the limits of fairness impossibility: Who's the fairest of them all?
The impossibility theorem of fairness is a foundational result in the algorithmic fairness literature. It states that outside of special cases, one cannot exactly and simultaneously satisfy all three common and intuitive definitions of fairness - demographic parity, equalized odds, and predictive rate parity. This result has driven most works to focus on solutions for one or two of the metrics. Rather than follow suit, in this paper we present a framework that pushes the limits of the impossibility theorem in order to satisfy all three metrics to the best extent possible. We develop an integer-programming based approach that can yield a certifiably optimal post-processing method for simultaneously satisfying multiple fairness criteria under small violations. We show experiments demonstrating that our post-processor can improve fairness across the different definitions simultaneously with minimal model performance reduction. We also discuss applications of our framework for model selection and fairness explainability, thereby attempting to answer the question: Who's the fairest of them all?
Accept
This paper provides an interesting framework that investigates trade-offs of multiple fairness criteria. The reviewers agree that the proposed techniques are interesting and make meaningful contributions to algorithmic fairness. Please incorporate the reviewers' suggestions in your next revision (e.g., not using "tractable" to describe MIP).
train
[ "M07JlKufM3", "n4B4GAUJgEC", "tFEqaF4Rp8", "fftOR1XWYd9", "P-6D-xAOIwS", "2gUsGDdYzpB", "SBxJuEMqAJG", "HiS_X4--oMx" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for reading our response - we are glad that we could address your concerns. If we have the opportunity to further adjust the paper for the camera-ready version, we will incorporate your suggestion. ", " I would like to thank the authors for their responses which address most of my concerns. However, I...
[ -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "n4B4GAUJgEC", "tFEqaF4Rp8", "HiS_X4--oMx", "SBxJuEMqAJG", "2gUsGDdYzpB", "nips_2022_bot35zOudq", "nips_2022_bot35zOudq", "nips_2022_bot35zOudq" ]
nips_2022_b-WnRS7kSEN
Tsetlin Machine for Solving Contextual Bandit Problems
This paper introduces an interpretable contextual bandit algorithm using Tsetlin Machines, which solves complex pattern recognition tasks using propositional (Boolean) logic. The proposed bandit learning algorithm relies on straightforward bit manipulation, thus simplifying computation and interpretation. We then present a mechanism for performing Thompson sampling with Tsetlin Machine, given its non-parametric nature. Our empirical analysis shows that Tsetlin Machine as a base contextual bandit learner outperforms other popular base learners on eight out of nine datasets. We further analyze the interpretability of our learner, investigating how arms are selected based on propositional expressions that model the context.
Accept
Thank you for submitting your paper to NeurIPS! The authors introduce a new contextual bandit algorithm using propositional logic (Tsetlin Machines), offering a more interpretable learning approach. The method interfaces with existing bandit algorithms like Thompson Sampling and ε-greedy, and achieves favorable empirical results in an extensive suite of experiments. While the paper does not provide any technical regret guarantees, the review team found the empirical results intriguing and believes they could lead to future work. I am pleased to recommend acceptance.
train
[ "hRCVtRwP46_", "Q6WlbRGrF9y", "RuUQ2h_vZWD", "z2lcFshwE6f", "aeLmz-1M-tF", "5JSMDOK7R-1", "uz5hg3dZ-Xs", "-gfmJolGdWM" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the responses. I keep my score. ", " Thank you for the response. I have updated the score. It will be great to add more details of TM, in particular, how it is compatible with different types of data.", " We thank the reviewer for the valuable feedback.\n\nThe complexity of the algorithm is proport...
[ -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "z2lcFshwE6f", "aeLmz-1M-tF", "-gfmJolGdWM", "uz5hg3dZ-Xs", "5JSMDOK7R-1", "nips_2022_b-WnRS7kSEN", "nips_2022_b-WnRS7kSEN", "nips_2022_b-WnRS7kSEN" ]
nips_2022_e8EkYPDHrsY
Learning with convolution and pooling operations in kernel methods
Recent empirical work has shown that hierarchical convolutional kernels inspired by convolutional neural networks (CNNs) significantly improve the performance of kernel methods in image classification tasks. A widely accepted explanation for their success is that these architectures encode hypothesis classes that are suitable for natural images. However, understanding the precise interplay between approximation and generalization in convolutional architectures remains a challenge. In this paper, we consider the stylized setting of covariates (image pixels) uniformly distributed on the hypercube, and characterize exactly the RKHS of kernels composed of single layers of convolution, pooling, and downsampling operations. We use this characterization to compute sharp asymptotics of the generalization error for any given function in high-dimension. In particular, we quantify the gain in sample complexity brought by enforcing locality with the convolution operation and approximate translation invariance with average pooling. Notably, these results provide a precise description of how convolution and pooling operations trade off approximation with generalization power in one layer convolutional kernels.
Accept
The reviewers appreciated the contributions. Several concerns were raised, and the authors addressed them in their submitted response. Accept.
train
[ "BHvkG8_zDX", "RVrJUAJDTBv", "Cn4ccjrzyo", "FA3gFzET5gN", "Co-Ddu2zE3c", "u5kguzp3gKg", "ycjztoA1SK4", "73-7O0wctIX", "8GPat8j52mK" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for his comments and suggestions.\n\nThe kernel (CK-AP-DS) can be viewed as the CNTK of the one-layer CNN (CNN-AP-DS). However it is not necessary, one can introduce such a kernel directly without mentioning CNTK (as it was done in [35,36,45]). We will make it more clear in the abstract and ...
[ -1, -1, -1, -1, -1, 4, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "8GPat8j52mK", "73-7O0wctIX", "ycjztoA1SK4", "Co-Ddu2zE3c", "u5kguzp3gKg", "nips_2022_e8EkYPDHrsY", "nips_2022_e8EkYPDHrsY", "nips_2022_e8EkYPDHrsY", "nips_2022_e8EkYPDHrsY" ]
nips_2022_5HaIds3ux5O
QUARK: Controllable Text Generation with Reinforced Unlearning
Large-scale language models often learn behaviors that are misaligned with user expectations. Generated text may contain offensive or toxic language, contain significant repetition, or be of a different sentiment than desired by the user. We consider the task of unlearning these misalignments by fine-tuning the language model on signals of what not to do. We introduce Quantized Reward Konditioning (Quark), an algorithm for optimizing a reward function that quantifies an (un)wanted property, while not straying too far from the original model. Quark alternates between (i) collecting samples with the current language model, (ii) sorting them into quantiles based on reward, with each quantile identified by a reward token prepended to the language model’s input, and (iii) using a standard language modeling loss on samples from each quantile conditioned on its reward token, while remaining nearby the original language model via a KL-divergence penalty. By conditioning on a high-reward token at generation time, the model generates text that exhibits less of the unwanted property. For unlearning toxicity, negative sentiment, and repetition, our experiments show that Quark outperforms both strong baselines and state-of-the-art reinforcement learning methods like PPO, while relying only on standard language modeling primitives.
Accept
This paper proposes Quantized Reward Konditioning (Quark), an algorithm for (un)learning language model misalignments. This is an important research direction given the importance of developing better-aligned large language models, and the paper does a good job at presenting why it matters and how it is related to prior work. The reviewers think that the paper is well written and clear, and that the approach is novel, sound and interesting. After the rebuttal, all reviewers vote to accept.
val
[ "4uX691eUTs-", "CO1RKHAaVuE", "DosQGPtIert", "_zYJIncbvjw", "h6aBmnQaz-N", "1iDi2sp0DWsH", "cHgTZX5I006", "qgkB6zaczAB", "yxH_Mjb0WcQ", "dRKalgfGtQ_" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your thoughtful and constructive review as well as encouraging comments!\n\n### Significance tests? \nThank you for the suggestion, we will add a full significance analysis in our final draft. For example, we apply the Wilcoxon signed-rank rank test between Quark and PPO for in-domain toxicity (Tabl...
[ -1, -1, -1, -1, -1, -1, 6, 8, 7, 9 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "dRKalgfGtQ_", "yxH_Mjb0WcQ", "qgkB6zaczAB", "h6aBmnQaz-N", "cHgTZX5I006", "nips_2022_5HaIds3ux5O", "nips_2022_5HaIds3ux5O", "nips_2022_5HaIds3ux5O", "nips_2022_5HaIds3ux5O", "nips_2022_5HaIds3ux5O" ]
nips_2022_monPF76G5Uv
Global Normalization for Streaming Speech Recognition in a Modular Framework
We introduce the Globally Normalized Autoregressive Transducer (GNAT) for addressing the label bias problem in streaming speech recognition. Our solution admits a tractable exact computation of the denominator for the sequence-level normalization. Through theoretical and empirical results, we demonstrate that by switching to a globally normalized model, the word error rate gap between streaming and non-streaming speech-recognition models can be greatly reduced (by more than 50% on the Librispeech dataset). This model is developed in a modular framework which encompasses all the common neural speech recognition models. The modularity of this framework enables controlled comparison of modelling choices and creation of new models. A JAX implementation of our models has been open sourced.
Accept
Reviewers acknowledge that this paper has good contributions, including addressing the label bias problem, mitigating the performance gap between streaming and non-streaming systems, and proposing a unified WFST theoretical framework, which should have value to the ASR community. However, reviewer j8Qd reported that he/she did not receive notifications of the author response. He/she then posted new concerns about the revised paper on Aug. 26, when the authors could no longer respond. The concerns mainly relate to the comparison with prior work on the mismatch between training and inference. The reviewer argued that there are imprecise comments/claims. Given this situation, I give a recommendation of acceptance but strongly ask the authors to carefully look into these problems and address them with full effort.
train
[ "eHDmZX5qITJ", "1GjpnHKn3Bu", "M5DrHK_vCw", "-SYUdFlDTdK", "Njcy8c8pzxT", "qLUyuo85k9C", "1f6liZzzPP2", "v5RkW88ZWwF", "eAsgjaAUG8y", "UBBu-FwifHd", "ys2bj6k_0w9", "INAxQUNmiMI", "c9oitDHXM-c", "VWhe6JiCtFw", "_AIxKAPPZhN" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers,\nWe are happy to respond to any additional questions & comments. We hope our continued effort to improve the submission strengthens the generally favorable view we believe we have heard from our reviewers, addresses many of the concerns they have pointed out and leads to its full acceptance.", ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 3 ]
[ "nips_2022_monPF76G5Uv", "M5DrHK_vCw", "-SYUdFlDTdK", "Njcy8c8pzxT", "c9oitDHXM-c", "1f6liZzzPP2", "INAxQUNmiMI", "eAsgjaAUG8y", "VWhe6JiCtFw", "_AIxKAPPZhN", "nips_2022_monPF76G5Uv", "nips_2022_monPF76G5Uv", "nips_2022_monPF76G5Uv", "nips_2022_monPF76G5Uv", "nips_2022_monPF76G5Uv" ]