Schema (column: type):
paper_id: string (length 19-21)
paper_title: string (length 8-170)
paper_abstract: string (length 8-5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29-10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
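Each row pairs one submission with its reviews and meta-review, and the six review_* columns are parallel lists indexed by comment. A minimal consistency-check sketch for one row, assuming only the column names above (the exact set of label values and the 1-10 rating scale are our assumptions, not stated in the listing):

```python
# Hedged sketch: per-row consistency check for the schema listed above.
# Column names come from the listing; the label set and rating range are
# assumptions for illustration.

REVIEW_COLUMNS = [
    "review_ids", "review_writers", "review_contents",
    "review_ratings", "review_confidences", "review_reply_tos",
]

def validate_record(rec: dict) -> bool:
    """Check one row: paper_id length in range, known split label, and all
    review_* lists index-aligned (same length, parallel by position)."""
    assert 19 <= len(rec["paper_id"]) <= 21
    assert rec["label"] in {"train", "validation", "test"}  # 3 classes assumed
    lengths = {len(rec[col]) for col in REVIEW_COLUMNS}
    assert len(lengths) == 1, "review_* lists must be index-aligned"
    # -1 marks entries (author replies, comments) without a rating/confidence
    for rating in rec["review_ratings"]:
        assert rating == -1 or 1 <= rating <= 10
    return True
```

Running a check like this over every row before training catches the most common corruption in parallel-list datasets: review lists that have drifted out of alignment.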
nips_2022_Bqk9c0wBNrZ
Semi-Parametric Neural Image Synthesis
Novel architectures have recently improved generative image synthesis, leading to excellent visual quality in various tasks. Much of this success is due to the scalability of these architectures and hence to a dramatic increase in model complexity and in the computational resources invested in training these models. Our work questions the underlying paradigm of compressing large training data into ever-growing parametric representations. We rather present an orthogonal, semi-parametric approach. We complement comparably small diffusion or autoregressive models with a separate image database and a retrieval strategy. During training we retrieve a set of nearest neighbors from this external database for each training instance and condition the generative model on these informative samples. While the retrieval approach provides the (local) content, the model focuses on learning the composition of scenes based on this content. As demonstrated by our experiments, simply swapping the database for one with different contents transfers a trained model post-hoc to a novel domain. The evaluation shows competitive performance on tasks which the generative model has not been trained on, such as class-conditional synthesis, zero-shot stylization or text-to-image synthesis without requiring paired text-image data. With negligible memory and computational overhead for the external database and retrieval, we can significantly reduce the parameter count of the generative model and still outperform the state of the art.
Accept
This paper tackles the general image synthesis problem (unconditional, conditional, text-guided) in a semi-parametric manner. It first retrieves relevant samples from an external dataset, and uses them as additional conditions for image generation. It is verified with different image synthesis frameworks, e.g. diffusion-based and autoregressive-based models. The comprehensive experiments demonstrate the effectiveness of the proposed semi-parametric image generation method, compared with baselines. The paper received all positive review ratings after some discussions, leading to an ``Accept'' decision overall.
train
[ "cFu7gyFVL8", "zoZBZ3JSSOs", "zok2Okjzhbs", "0t-xPHY-RE", "lnffzeMXzL", "sE1ExaQRCM-2", "xg10eqjy-S-", "pjtFQ8nK2Cf", "KjRE7lCYZT", "3qaUXIqtG0r", "xtiedZZIeYn", "ifltF6cptq", "IpJRHqnckp", "k4X14VFGaHZ", "wkHCZW12AWu", "tD0hRvAra13" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As suggested by B19f we will include the ablations on D and X in the final version, just as all the other additional experiments presented here.", " Thank you for raising the score, we are pleased to see that the reviewer is satisfied with our answers .\nHere are two further clarifications:\n\n**Size of databas...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "zoZBZ3JSSOs", "zok2Okjzhbs", "pjtFQ8nK2Cf", "xg10eqjy-S-", "sE1ExaQRCM-2", "nips_2022_Bqk9c0wBNrZ", "pjtFQ8nK2Cf", "IpJRHqnckp", "wkHCZW12AWu", "xtiedZZIeYn", "tD0hRvAra13", "k4X14VFGaHZ", "nips_2022_Bqk9c0wBNrZ", "nips_2022_Bqk9c0wBNrZ", "nips_2022_Bqk9c0wBNrZ", "nips_2022_Bqk9c0wBNr...
nips_2022_Yc4MjP2Mnob
Recommender Forest for Efficient Retrieval
Recommender systems (RS) have to select the top-N items from a massive item set. For the sake of efficient recommendation, RS usually represents users and items as latent embeddings, and relies on approximate nearest neighbour search (ANNs) to retrieve the recommendation result. Despite the reduction of running time, the representation learning is independent of the ANNs index construction; thus, the two operations can be incompatible, which results in potential loss of recommendation accuracy. To overcome the above problem, we propose the Recommender Forest (a.k.a. RecForest), which jointly learns latent embedding and index for efficient and high-fidelity recommendation. RecForest consists of multiple k-ary trees, each of which is a partition of the item set via hierarchical balanced clustering such that each item is uniquely represented by a path from the root to a leaf. Given such a data structure, an encoder-decoder based routing network is developed: it first encodes the context, i.e., user information, into hidden states; then, leveraging a transformer-based decoder, it identifies the top-N items via beam search. Compared with the existing methods, RecForest brings in the following advantages: 1) the false partition of the boundary items can be effectively alleviated by the use of multiple trees; 2) the routing operation becomes much more accurate thanks to the powerful transformer decoder; 3) the tree parameters are shared across different tree levels, making the index extremely memory-efficient. The experimental studies are performed on five popular recommendation datasets: with a significantly simplified training cost, RecForest outperforms competitive baseline approaches in terms of both recommendation accuracy and efficiency.
Accept
The paper introduces a method for top-n item recommendation based on approximate nearest neighbor search (ANN). The authors formulate ANN as a sequence to sequence problem, the input being the user profile and activity, and the output being the top-n recommendations. The focus of the paper is on the computational efficiency of the ANN process. The proposed method jointly learns a tree-based index for organizing the items and a transformer based decoder for the top-n recommendation. The index is composed of multiple trees. Experiments are performed on classical benchmarks. The reviewers consider that this is an original contribution with a convincing experimental evaluation. The authors have added several complementary experiments, including additional baselines, during the rebuttal and responded satisfactorily to the reviewers’ comments and questions. All the reviewers recommend acceptance.
train
[ "6IYbf8Kvmm", "K53RJVxQSWN", "w0l6hi_Qa6m", "bvGrTrVWU3t", "pDqEeB3hRD6", "aOlOx-G83bJ", "FUoe71pC5k_R", "FgV3oHvljA", "vfP6keg_9Sr", "YAG_bhRrLD" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors answered all my questions and addressed all my comments. Hence, I updated my overall rating.", " Thanks for clarifying the experimental settings. I would like to raise my evaluation score and vote for acceptance on this work.", " Thanks for your approval of our work and insightful suggestions.\n\n...
[ -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "aOlOx-G83bJ", "FUoe71pC5k_R", "bvGrTrVWU3t", "pDqEeB3hRD6", "YAG_bhRrLD", "vfP6keg_9Sr", "FgV3oHvljA", "nips_2022_Yc4MjP2Mnob", "nips_2022_Yc4MjP2Mnob", "nips_2022_Yc4MjP2Mnob" ]
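The review_reply_tos column links each comment to its parent: top-level reviews point at the paper_id, while replies point at another entry of review_ids. A small sketch (our own helper, not part of the dataset) that rebuilds the discussion tree for one row:

```python
from collections import defaultdict

def build_thread(review_ids, reply_tos):
    """Map each parent id to the ids replying to it, preserving order.
    Entries whose parent equals the paper_id are top-level reviews/comments;
    the two input lists are assumed index-aligned, as in the schema."""
    children = defaultdict(list)
    for rid, parent in zip(review_ids, reply_tos):
        children[parent].append(rid)
    return children
```

For the RecForest row above, the three entries replying directly to nips_2022_Yc4MjP2Mnob are exactly the three scored reviews (ratings 8, 6, 6); every other entry is a reply within one of those threads.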
nips_2022_lkrnoLxX1Do
Self-Supervised Image Restoration with Blurry and Noisy Pairs
When taking photos in an environment with insufficient light, the exposure time and the sensor gain usually need to be carefully chosen to obtain images with satisfying visual quality. For example, images with high ISO usually have inescapable noise, while long-exposure ones may be blurry due to camera shake or object motion. Existing solutions generally seek a balance between noise and blur, and learn denoising or deblurring models under either full- or self-supervision. However, the real-world training pairs are difficult to collect, and the self-supervised methods that merely rely on blurry or noisy images are limited in performance. In this work, we tackle this problem by jointly leveraging the short-exposure noisy image and the long-exposure blurry image for better image restoration. Such a setting is practically feasible because short-exposure and long-exposure images can be either acquired by two individual cameras or synthesized from a long burst of images. Moreover, the short-exposure images are hardly blurry, and the long-exposure ones have negligible noise. Their complementarity makes it feasible to learn a restoration model in a self-supervised manner. Specifically, the noisy images can be used as the supervision information for deblurring, while the sharp areas in the blurry images can be utilized as the auxiliary supervision information for self-supervised denoising. By learning in a collaborative manner, the deblurring and denoising tasks in our method can benefit each other. Experiments on synthetic and real-world images show the effectiveness and practicality of the proposed method. Codes are available at https://github.com/cszhilu1998/SelfIR.
Accept
All three reviewers voted to accept the paper, and the detailed rebuttals from the authors helped to clarify the reviewers' original concerns. One remaining concern from one of the reviewers is whether this method should be referred to as "self-supervised". However, the authors clarified that it is reasonable to consider this method self-supervised for real-world data. I am therefore fine with leaving "self-supervised" in the title.
test
[ "ZMlHAarxFeW", "_O2a_5l-x6", "_SOZYsJjO_", "rcNtLLEUwOI", "syFZN9MQ_N", "-j__9z6ZPg", "OPLmQYySNXu", "hhYKpl26-ag", "oTf5v00B7F2", "idCelgilBbs", "jtGKw-IbBRq" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nThanks for your more detailed comments.\n\n1. GoPro dataset\n\nThank you for your more detailed explanation.\nWe acknowledge that the sharp images in the GoPro dataset are not defect-free, and will modify the relevant descriptions in the revision.\n\n\n2. Self-supervised learning\n\nTo begin with, we respect di...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "_SOZYsJjO_", "rcNtLLEUwOI", "OPLmQYySNXu", "hhYKpl26-ag", "jtGKw-IbBRq", "jtGKw-IbBRq", "idCelgilBbs", "oTf5v00B7F2", "nips_2022_lkrnoLxX1Do", "nips_2022_lkrnoLxX1Do", "nips_2022_lkrnoLxX1Do" ]
nips_2022_dO11Niyc225
A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning
Temporal-difference learning is a popular algorithm for policy evaluation. In this paper, we study the convergence of the regularized non-parametric TD(0) algorithm, in both the independent and Markovian observation settings. In particular, when TD is performed in a universal reproducing kernel Hilbert space (RKHS), we prove convergence of the averaged iterates to the optimal value function, even when it does not belong to the RKHS. We provide explicit convergence rates that depend on a source condition relating the regularity of the optimal value function to the RKHS. We illustrate this convergence numerically on a simple continuous-state Markov reward process.
Accept
The paper studies the convergence of non-parametric temporal-difference learning in the non-asymptotic regime. All referees agree that the paper is technically sound and the result is important to further our theoretical understanding of reinforcement learning. The paper merits acceptance to the conference.
train
[ "OOK4cQTPZU", "PooSwFNR2N", "1Im3RJag0Rl", "C5BpPaOWcqoz", "V4G7_1YDXY5", "idXA8HdCq5P", "LDmqG6fJ_A", "k7gKyLg1dfc", "3uOZl6pYbBU", "dFwiUsE6doS" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank Reviewer XY19 for his or her further comments.\n\nConcerning the $\\ell_\\infty$-norm analysis, given the further references that you provided on stochastic approximation, we indeed believe that the analysis could be extended to this change of norm. We will add a comment on this, and cite t...
[ -1, -1, -1, -1, -1, -1, 7, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "PooSwFNR2N", "1Im3RJag0Rl", "dFwiUsE6doS", "3uOZl6pYbBU", "k7gKyLg1dfc", "LDmqG6fJ_A", "nips_2022_dO11Niyc225", "nips_2022_dO11Niyc225", "nips_2022_dO11Niyc225", "nips_2022_dO11Niyc225" ]
nips_2022__zPG0ShaZTc
The Unreasonable Effectiveness of Fully-Connected Layers for Low-Data Regimes
Convolutional neural networks were the standard for solving many computer vision tasks until recently, when Transformer- and MLP-based architectures started to show competitive performance. These architectures typically have a vast number of weights and need to be trained on massive datasets; hence, they are not suitable for use in low-data regimes. In this work, we propose a simple yet effective framework to improve generalization from small amounts of data. We augment modern CNNs with fully-connected (FC) layers and show the massive impact this architectural change has in low-data regimes. We further present an online joint knowledge-distillation method to utilize the extra FC layers at train time but avoid them during test time. This allows us to improve the generalization of a CNN-based model without any increase in the number of weights at test time. We perform classification experiments for a large range of network backbones and several standard datasets on supervised learning and active learning. Our augmented networks significantly outperform those without fully-connected layers, reaching a relative improvement of up to $16\%$ validation accuracy in the supervised setting without adding any extra parameters during inference.
Accept
The paper shows that using final fully-connected layers helps the generalization of convolutional neural networks in low-data regimes. The addition of these layers significantly improves model quality, resulting in a network with the same number of parameters and better generalization performance. Initially, reviewers had mixed evaluations of the paper. All the reviewers saw that the proposed method is simple and easy to follow, while providing clear improvements over baselines. They also agreed that the results are "significant" and the effect "surprising". There were some concerns raised by the reviewers, but the authors' rebuttal addressed most of them and improved the paper with substantially more experiments and analysis supporting the main claim. Reviewer `DX6o` mentioned that there are a few updates promised by the authors which can't be validated until the camera-ready, but this does not seem to warrant blocking publication. The Author-Reviewer discussion period was active, the authors did a great job clearing up various concerns and questions, and all reviewers agreed to support acceptance of the paper. The paper demonstrates a simple yet effective method for the small-data regime which would be interesting to the broad NeurIPS audience, practitioners and researchers alike.
train
[ "5xpGjnbpXI0", "awEMoUbhdb", "9E_O0g6kbHC", "Vjd9z4ofbokZ", "XLy_0Y4_lyZ", "fNQdvc_5JdFr", "K1ySILlZgun", "XZE5cxXhoF", "Co7yqI9Lk-lp", "HgwLBAxd8aT", "oaoe_rofvKr", "u3gCz2oCFAh", "7rqP4Wtg7oP", "PR_35vcPefv", "CQQF6n6Op0" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are glad to see that all the reviewer's concerns have been resolved and that the reviewer increased the score of the paper.\nWe assure the reviewer that we will update the paper with the new experiments for the final revision.", " Thank you for the response! The authors did a great job of dealing with all th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "awEMoUbhdb", "9E_O0g6kbHC", "Vjd9z4ofbokZ", "HgwLBAxd8aT", "fNQdvc_5JdFr", "Co7yqI9Lk-lp", "XZE5cxXhoF", "oaoe_rofvKr", "CQQF6n6Op0", "PR_35vcPefv", "7rqP4Wtg7oP", "nips_2022__zPG0ShaZTc", "nips_2022__zPG0ShaZTc", "nips_2022__zPG0ShaZTc", "nips_2022__zPG0ShaZTc" ]
nips_2022_rUc8peDIM45
The alignment property of SGD noise and how it helps select flat minima: A stability analysis
The phenomenon that stochastic gradient descent (SGD) favors flat minima has played a critical role in understanding the implicit regularization of SGD. In this paper, we provide an explanation of this striking phenomenon by relating the particular noise structure of SGD to its \emph{linear stability} (Wu et al., 2018). Specifically, we consider training over-parameterized models with square loss. We prove that if a global minimum $\theta^*$ is linearly stable for SGD, then it must satisfy $\|H(\theta^*)\|_F\leq O(\sqrt{B}/\eta)$, where $\|H(\theta^*)\|_F, B,\eta$ denote the Frobenius norm of Hessian at $\theta^*$, batch size, and learning rate, respectively. Otherwise, SGD will escape from that minimum \emph{exponentially} fast. Hence, for minima accessible to SGD, the sharpness---as measured by the Frobenius norm of the Hessian---is bounded \emph{independently} of the model size and sample size. The key to obtaining these results is exploiting the particular structure of SGD noise: The noise concentrates in sharp directions of local landscape and the magnitude is proportional to loss value. This alignment property of SGD noise provably holds for linear networks and random feature models (RFMs), and is empirically verified for nonlinear networks. Moreover, the validity and practical relevance of our theoretical findings are also justified by extensive experiments on CIFAR-10 dataset.
Accept
The paper investigates an important topic of why SGD converges to flat minima. Overall the reviewers felt that this is a nicely written paper with a nice contribution to the state of the art.
test
[ "GF-ItH1NC0x", "oZBxQX5Xqj6", "ZDUhVQGhorb", "TKpvsrf-gpe", "NdzThbOBv8", "oAzU-qxmMU9", "nIC2VFvJYnY", "s7z2AvfPLQY", "82YsEyHgU0F", "HX13j1t7OxR", "jY_TjjKEX1", "SnHw2PTXL4L", "_dDVsiMs_v4", "GMT7By_-dnb", "XLRA9tAYX4M", "Tgkm-OYaoIV", "IuekMrQS3Aw", "45jEQ9uAYkA" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nWe truly appreciate your comment and partially understand your considerations. However, we respectfully disagree with you on most points as explained below. \n\n---\n\n > \"stability analysis seems to provide only a small picture on why GD/SGD generalises well\"\n\n We agree that the stability...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "oZBxQX5Xqj6", "ZDUhVQGhorb", "TKpvsrf-gpe", "HX13j1t7OxR", "nips_2022_rUc8peDIM45", "45jEQ9uAYkA", "45jEQ9uAYkA", "82YsEyHgU0F", "45jEQ9uAYkA", "IuekMrQS3Aw", "IuekMrQS3Aw", "IuekMrQS3Aw", "Tgkm-OYaoIV", "Tgkm-OYaoIV", "Tgkm-OYaoIV", "nips_2022_rUc8peDIM45", "nips_2022_rUc8peDIM45",...
nips_2022_DSoFfnmUSjS
Recommender Transformers with Behavior Pathways
Sequential recommendation requires the recommender to capture the evolving behavior characteristics from logged user behavior data for accurate recommendations. However, user behavior sequences are viewed as a script with multiple ongoing threads intertwined. We find that only a small set of pivotal behaviors can be evolved into the user's future action. As a result, the future behavior of the user is hard to predict. We conclude this characteristic for sequential behaviors of each user as the \textit{Behavior Pathway}. Different users have their unique behavior pathways. Among existing sequential models, transformers have shown great capacity in capturing global-dependent characteristics. However, these models mainly provide a dense distribution over all previous behaviors using the self-attention mechanism, making the final predictions overwhelmed by the trivial behaviors not adjusted to each user. In this paper, we build the \textit{Recommender Transformer} (RETR) with a novel \textit{Pathway Attention} mechanism. RETR can dynamically plan the behavior pathway specified for each user, and sparingly activate the network through this behavior pathway to effectively capture evolving patterns useful for recommendation. The key design is a learned binary route to prevent the behavior pathway from being overwhelmed by trivial behaviors. We empirically verify the effectiveness of RETR on seven real-world datasets and RETR yields state-of-the-art performance.
Reject
This paper presents the Recommender Transformer (RETR) with a pathway attention mechanism that can dynamically zero out interactions (e.g., the trivial/noisy ones) in transformer-based sequential recommender systems. Extensive experimental results demonstrate the effectiveness of the proposed architecture. Overall this paper received mixed reviews with borderline scores. The reviewers raised concerns around baselines and evaluations, some of which the authors promptly addressed in the revision during the rebuttal period. I also read the paper in detail myself. I do agree with some of the concerns from the reviewers, but I don't think a method needs to beat every other published paper to be published (and I think the current baselines are more than thorough enough). My biggest complaint about the paper is around the writing, specifically, how the proposed idea is presented. This paper tries to tackle an important question, which is that in sequential recommendation, not every interaction is useful for predicting future interactions. The self-attention mechanism in transformers addresses this problem, but in a softer fashion via attention weights. This paper presents a simple yet effective method to introduce a pathway mechanism that adaptively zeroes out some of the interactions via a binary pathway router. In order to train such a model end-to-end, Gumbel-softmax sampling is utilized. The most important part of the contribution to me is that this is an improvement to the transformer architecture, as opposed to a new model, which is what this paper's writing suggests -- the proposed approach is effectively model-agnostic and is not married to a particular loss function or finer-grained architectural choices (number of layers, etc.). Currently there are many baselines in the paper, but each made different model/architecture choices, which could contribute to the difference in performance (or not, but we wouldn't know).
An ideal evaluation would have been to take all the transformer-based baselines currently in the paper, add this pathway mechanism without changing anything else, and show that the results improve over the plain transformer architecture. In this way, we would know the improvements come exactly from introducing the pathway. The authors might argue that some of the current results already support this argument, but my point is to emphasize it very explicitly rather than leaving it for the readers to infer. From what I read in this paper, I truly believe this pathway idea has potential. Therefore, I would especially want the authors to further refine the presentation to better convey the idea, which in turn will hopefully increase the impact of this paper once it is eventually published. Some minor comments: * The way the paper is currently written seems to suggest there are only three types of pathways and the network is capable of capturing all of them. I am personally not a big fan of over-interpreting what a neural net is trying to do. Therefore, I wouldn't overly focus on the characterization of different pathways and would only show the qualitative examples at the end as a high-level demonstration. * In Eq. 2, "softmax" should really be "sigmoid" if a 0-1 prediction is made there. Then "logit" in the following line is probably not the right word. * The qualitative examples at the end (Figure 3) could be more carefully examined/labeled. For example, the current categorization is quite ambiguous -- "Indie" refers to the type of developer while "JPG" refers to the genre of the game; they are certainly not mutually exclusive.
train
[ "OyTyyFxFg8T", "eTCW1-PTT45", "M6T4QDHnx-e", "XphfpovZP6m", "D3X61dzZzOh", "pvdGHBNZflV", "HOfwrrKEyyt", "Wz5DpVDGW-1", "A63DIh6aa0_", "HhY7CtxlkgP", "8uLC8gDdkDW", "ebsumYgZ77D", "VpzkKJJBzq3", "dcTPOURa5CG", "RrmwhJNYTUJ", "NDRCG5J6_5N", "yyw3FJodPHI", "kGNUEpkZKjL", "VfnnsSrHm...
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nWe are sincerely looking forward to your efforts in reviewing our paper. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to have a further discussion with you about whether your concerns have been clarified or not. Please let us know if ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 3 ]
[ "kGNUEpkZKjL", "D3X61dzZzOh", "D3X61dzZzOh", "kGNUEpkZKjL", "pvdGHBNZflV", "Wz5DpVDGW-1", "kGNUEpkZKjL", "yyw3FJodPHI", "nips_2022_DSoFfnmUSjS", "8uLC8gDdkDW", "ebsumYgZ77D", "OoQ970eZ6CA", "VfnnsSrHmB", "kGNUEpkZKjL", "NDRCG5J6_5N", "yyw3FJodPHI", "nips_2022_DSoFfnmUSjS", "nips_20...
nips_2022_VYYf6S67pQc
Mildly Conservative Q-Learning for Offline Reinforcement Learning
Offline reinforcement learning (RL) defines the task of learning from a static logged dataset without continually interacting with the environment. The distribution shift between the learned policy and the behavior policy makes it necessary for the value function to stay conservative such that out-of-distribution (OOD) actions will not be severely overestimated. However, existing approaches, penalizing the unseen actions or regularizing with the behavior policy, are too pessimistic, which suppresses the generalization of the value function and hinders the performance improvement. This paper explores mild but enough conservatism for offline learning while not harming generalization. We propose Mildly Conservative Q-learning (MCQ), where OOD actions are actively trained by assigning them proper pseudo Q values. We theoretically show that MCQ induces a policy that behaves at least as well as the behavior policy and no erroneous overestimation will occur for OOD actions. Experimental results on the D4RL benchmarks demonstrate that MCQ achieves remarkable performance compared with prior work. Furthermore, MCQ shows superior generalization ability when transferring from offline to online, and significantly outperforms baselines. Our code is publicly available at https://github.com/dmksjfl/MCQ.
Accept
All reviewers are generally positive or borderline about this paper. Reviewers note that the method is theoretically sound and practical to implement. Even though all of the components have been explored previously, the authors combine them in a novel approach that convincingly improves over prior works. Major concerns have been addressed by the authors' response; however, I agree with reviewer fVHB that per-dataset tuning of $\lambda$ muddies the comparison with previous approaches that do not do the same. I would encourage the authors to additionally report the best performance with a single setting across datasets to make the comparison clearer.
train
[ "ypA5GtDqVr7", "2GuLaPPxO23", "KRleZFwWeqk", "-7SBQjaf_06", "kiiCzzSgY7u", "pF_XLYDaMrj", "bF4CSSWbGxd", "o_wyfat7EZ", "CeYgin_IRf8", "NRgVb0baYUD", "QdRDMtPlRyj", "DBVROBYbvFJ", "camBAIVvBX", "mXjlR7JWkHw", "rckYN96AOBhD", "fZGFLJluKdJ", "t-QEeKXSLD", "ywstjKTuh4e", "3XX0PKsyDpd...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " We thank the reviewer for the kind reply! We think many of the suggestions and comments from the reviewer are of great value to make our paper stronger. We are more than happy to include the discussion part of the CVAE into our revision.\n\nWe apologize that we misunderstand the comments from the reviewer (we thi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "2GuLaPPxO23", "DBVROBYbvFJ", "NeJy062t6gK", "kiiCzzSgY7u", "pF_XLYDaMrj", "Esk2paR47UX", "o_wyfat7EZ", "CeYgin_IRf8", "NRgVb0baYUD", "kxA_ykyvzHV", "mXjlR7JWkHw", "camBAIVvBX", "rckYN96AOBhD", "7JZH26DsJ6S", "fZGFLJluKdJ", "t-QEeKXSLD", "ywstjKTuh4e", "NeJy062t6gK", "E0lc9o6_UgO...
nips_2022_kOIaB1hzaLe
Contrastive Neural Ratio Estimation
Likelihood-to-evidence ratio estimation is usually cast as either a binary (NRE-A) or a multiclass (NRE-B) classification task. In contrast to the binary classification framework, the current formulation of the multiclass version has an intrinsic and unknown bias term, making otherwise informative diagnostics unreliable. We propose a multiclass framework free from the bias inherent to NRE-B at optimum, leaving us in the position to run diagnostics that practitioners depend on. It also recovers NRE-A in one corner case and NRE-B in the limiting case. For fair comparison, we benchmark the behavior of all algorithms in both familiar and novel training regimes: when jointly drawn data is unlimited, when data is fixed but prior draws are unlimited, and in the commonplace fixed data and parameters setting. Our investigations reveal that the highest performing models are distant from the competitors (NRE-A, NRE-B) in hyperparameter space. We make a recommendation for hyperparameters distinct from the previous models. We suggest a bound on the mutual information as a performance metric for simulation-based inference methods, without the need for posterior samples, and provide experimental results.
Accept
The three reviewers agreed that the work is a valuable contribution to its field, and presents extensive experiments. For the readers' benefit, I kindly ask the authors to take into account the reviewers' comments while preparing the camera-ready version. In particular, the revised version should include: - the updated results table (across seeds, per dataset); - a clearer formatting of Figure 5; - expanded discussion points on (i) how their method compares against learning the bias of NRE-B (Ma and Collins, 2018) and (ii) clarifying the part on sequential vs. amortized methods.
train
[ "qYlHxpLa2js", "A8qE-TRVEXt", "2qomx5vJA8", "kM7EsBT6K7v", "-_v160043W", "5xT8SfxBvYZ", "6rYxWp0mWyL4", "KYpLgNSy2b", "2H-UDrkBoJt", "_9rJJ8Gr0vk", "pkCqexw5lHw", "xyHuBNmqJux", "HcHqPEXBJV", "fk-Qoh_W8M9", "MCrx4moNLnV" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their clarifications of my issues and misunderstandings. I remain positive about this paper, and I keep my score. I suggest acceptance of this paper.", " **Conclusion**: I trust the reviewers will address the points above in the final version of the paper. Apart from the updated table wh...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "6rYxWp0mWyL4", "2qomx5vJA8", "2H-UDrkBoJt", "fk-Qoh_W8M9", "nips_2022_kOIaB1hzaLe", "MCrx4moNLnV", "MCrx4moNLnV", "fk-Qoh_W8M9", "fk-Qoh_W8M9", "pkCqexw5lHw", "HcHqPEXBJV", "nips_2022_kOIaB1hzaLe", "nips_2022_kOIaB1hzaLe", "nips_2022_kOIaB1hzaLe", "nips_2022_kOIaB1hzaLe" ]
nips_2022_QotmVXC-8T
Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging
Decentralized optimization is increasingly popular in machine learning for its scalability and efficiency. Intuitively, it should also provide better privacy guarantees, as nodes only observe the messages sent by their neighbors in the network graph. But formalizing and quantifying this gain is challenging: existing results are typically limited to Local Differential Privacy (LDP) guarantees that overlook the advantages of decentralization. In this work, we introduce pairwise network differential privacy, a relaxation of LDP that captures the fact that the privacy leakage from a node u to a node v may depend on their relative position in the graph. We then analyze the combination of local noise injection with (simple or randomized) gossip averaging protocols on fixed and random communication graphs. We also derive a differentially private decentralized optimization algorithm that alternates between local gradient descent steps and gossip averaging. Our results show that our algorithms amplify privacy guarantees as a function of the distance between nodes in the graph, matching the privacy-utility trade-off of the trusted curator, up to factors that explicitly depend on the graph topology. Remarkably, these factors become constant for expander graphs. Finally, we illustrate our privacy gains with experiments on synthetic and real-world datasets.
Accept
The paper eventually received a perfectly consistent evaluation from all the reviewers (four "accept" ratings), so I can only recommend acceptance.
test
[ "xEPnu8tHqWl", "wqJFwV0JyZu", "AozFp0HTU3b", "0mvslFIC-kZ", "_sW5dRImSf", "wID1iLRvmvn", "_3Paw2ftGCD", "kBm9l3V731H", "WWpQQm-u2Tb", "1XzKVVN3Fe", "nUZSEMxiXpz", "ZR-ec3vvHEh" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I really appreciate the authors for their response. I think they have answered my questions. \n\nI believe the reason for no group privacy result for the shuffle model also follows from a lack of proper adversarial definition. It has been true even in cryptography from where the shuffle model is borrowed (IKOS pa...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 1 ]
[ "wID1iLRvmvn", "_sW5dRImSf", "_3Paw2ftGCD", "nips_2022_QotmVXC-8T", "ZR-ec3vvHEh", "nUZSEMxiXpz", "1XzKVVN3Fe", "WWpQQm-u2Tb", "nips_2022_QotmVXC-8T", "nips_2022_QotmVXC-8T", "nips_2022_QotmVXC-8T", "nips_2022_QotmVXC-8T" ]
nips_2022_IPcgkUgw3t1
UniGAN: Reducing Mode Collapse in GANs using a Uniform Generator
Despite the significant progress that has been made in training Generative Adversarial Networks (GANs), the mode collapse problem, which refers to a lack of diversity in generative samples, remains a major challenge. In this paper, we propose a new type of generative diversity named uniform diversity, which relates to a newly proposed type of mode collapse named $u$-mode collapse, in which the generative samples are distributed nonuniformly over the data manifold. From a geometric perspective, we show that uniform diversity is closely related to the generator uniformity property, and that the maximum uniform diversity is achieved if the generator is uniform. To learn a uniform generator, we propose UniGAN, a generative framework with a Normalizing Flow based generator and a simple yet sample-efficient generator uniformity regularization, which can be easily adapted to any other generative framework. A new diversity metric named udiv is also proposed to estimate the uniform diversity of a given set of generative samples in practice. Experimental results verify the effectiveness of our UniGAN in learning a uniform generator and improving uniform diversity.
Accept
This paper proposes UniGAN to alleviate mode collapse in GANs. The authors encourage a uniform distribution by arguing that samples on the manifold are equally accepted as real samples when training GANs. The paper is comprehensive in both theory and experimental results. It received an average rating score of 6, leading to an ``Accept'' decision. To further improve the impact of this paper, I suggest the authors study it in the context of modern SoTA image generation models in the future. Hopefully, it may help the GAN-based model family [1,2,3] improve its performance in the competition with diffusion models and auto-regressive models. References: - [1] Alias-Free Generative Adversarial Networks (StyleGAN3) - [2] LAFITE: Towards Language-Free Training for Text-to-Image Generation - [3] ViTGAN: Training GANs with Vision Transformers
train
[ "0etdHGExFvA", "etzdautH8bl", "I7SZo037XEL", "qtmx3I2MNzP", "EkB0jdaIAO5", "odMnfhFaBX4", "tNW-Gvk0YYR", "_hIcZnLecnT", "1nL94wTTmf6", "l-KnXYFu58c" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your questions. \n\nIn terms of the FID across different datasets, we provide quantitative results on natural image datasets in Table 12-17 in supplementary. Our NF-based model can achieve the FID scores of 8.22 (CelebA), 11.22 (FFHQ), 9.16 (LSUN Car), 8.20 (LSUN Bedroom), 9.83 (LSUN Church). Though St...
[ -1, -1, -1, -1, -1, -1, 4, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 3, 3 ]
[ "etzdautH8bl", "odMnfhFaBX4", "l-KnXYFu58c", "1nL94wTTmf6", "_hIcZnLecnT", "tNW-Gvk0YYR", "nips_2022_IPcgkUgw3t1", "nips_2022_IPcgkUgw3t1", "nips_2022_IPcgkUgw3t1", "nips_2022_IPcgkUgw3t1" ]
nips_2022_1WZyphXPLwC
Split-kl and PAC-Bayes-split-kl Inequalities for Ternary Random Variables
We present a new concentration of measure inequality for sums of independent bounded random variables, which we name a split-kl inequality. The inequality combines the combinatorial power of the kl inequality with the ability to exploit low variance. While for Bernoulli random variables the kl inequality is tighter than the Empirical Bernstein, for random variables taking values inside a bounded interval and having low variance, the Empirical Bernstein inequality is tighter than the kl. The proposed split-kl inequality yields the best of both worlds. We discuss an application of the split-kl inequality to bounding excess losses. We also derive a PAC-Bayes-split-kl inequality and use a synthetic example and several UCI datasets to compare it with the PAC-Bayes-kl, PAC-Bayes Empirical Bernstein, PAC-Bayes Unexpected Bernstein, and PAC-Bayes Empirical Bennett inequalities.
Accept
This meta review is based on the reviews, the authors rebuttal and the discussion with the reviewers, and ultimately my own judgement on the paper. There was a consensus that the paper contributes an interesting new concentration of measure inequality and derive a useful PAC-Bayes inequality. I feel this work deserves to be featured at NeurIPS and will attract interest from the community. I would like to personally invite the authors to carefully revise their manuscript to take into account the remarks and suggestions made by reviewers. Congratulations!
train
[ "PMVWj1D8O_t", "x24RGeTkp-", "sHWM1XeHyjm", "eDgKMH-eRB3", "CkGhOeHDRSw", "iibT8VcK1Z3", "3B-ve311uRZ", "YdibMKOq_Vf", "s6EPBuWhoI9", "_jF2YYdoMrp", "H49zjkyUXdk", "kP0NaxeyLF3", "AfiSdGZind", "16VjHPwOpBj" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The following points raised in the discussion are not within the main focus of the paper.\n\nContinuous distributions: while the split-kl can be applied to continuous distributions, it is not designed for them, just as the kl is not designed for them. If a continuous distribution happens to be close to ternary it...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "x24RGeTkp-", "nips_2022_1WZyphXPLwC", "iibT8VcK1Z3", "CkGhOeHDRSw", "s6EPBuWhoI9", "_jF2YYdoMrp", "16VjHPwOpBj", "AfiSdGZind", "kP0NaxeyLF3", "H49zjkyUXdk", "nips_2022_1WZyphXPLwC", "nips_2022_1WZyphXPLwC", "nips_2022_1WZyphXPLwC", "nips_2022_1WZyphXPLwC" ]
nips_2022_CTqkruS5Bb
Unsupervised Object Detection Pretraining with Joint Object Priors Generation and Detector Learning
Unsupervised pretraining methods for object detection aim to learn object discrimination and localization abilities from large amounts of images. Typically, recent works design pretext tasks that supervise the detector to predict defined object priors. They normally leverage heuristic methods to produce the object priors, \emph{e.g.,} selective search, which separates prior generation from detector learning and leads to sub-optimal solutions. In this work, we propose a novel object detection pretraining framework that generates object priors and learns detectors jointly, with accurate object priors produced by the model itself. Specifically, region priors are extracted from the attention maps of the encoder, which highlight foregrounds. Instance priors are the selected high-quality output bounding boxes of the detection decoder. By treating objects as instances in the foreground, we can generate object priors from both region and instance priors. Moreover, our object priors are jointly refined along with the detector optimization. With better object priors as supervision, the model achieves better detection capability, which in turn promotes object prior generation. Our method improves on competitive approaches by \textbf{+1.3 AP} and \textbf{+1.7 AP} in the 1\% and 10\% COCO low-data regimes, respectively.
Accept
The paper received mixed reviews. Three reviewers rated it borderline accept and one reviewer rated it borderline reject. The authors provided detailed responses to the raised concerns/questions and supported their responses with an additional ablation study and experimental results on a new dataset (e.g., VOC). For reviewer fpzy (who gave borderline reject), the requested additional analyses have been provided by the authors. The major remaining issue is "The improvement over DETReg is somewhat limited". The results presented in the paper did show consistent improvement over DETReg in three settings, with at least 1 mAP improvement. After reading the reviews and the responses, while there is no enthusiastic support from the reviewers, the AC does not find sufficient grounds to reject the paper. This paper introduces new ideas for unsupervised object detection pretraining and shows consistent improvement over the baselines across three evaluation settings. The AC believes that this work would benefit the community and thus recommends acceptance.
train
[ "YL_om5K4GnC", "WHWDYmoFg1U", "Xs_9cuktjD4", "BzZgece8050", "IeB9talZ2RX", "640H0jAukClf", "0I_SeBO-yhd", "vQMD4yoe7oM3", "UBctz5KNh1Py", "j156RgO_Pi7", "sUZh5uwjY3z", "FrISZOINau", "91chIjSkys8", "fSYYf8bI2pK", "GAvIsoJW7oA", "j5e9zzbwX2t" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your time and efforts in reviewing our paper!\n\nWe kindly remind you that the discussion period will end in half a day, and thus we just wonder whether we could have the last chance to address your further concerns or questions (if you have any). We are sincerely glad to improve our paper under you...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "nips_2022_CTqkruS5Bb", "Xs_9cuktjD4", "j156RgO_Pi7", "nips_2022_CTqkruS5Bb", "640H0jAukClf", "0I_SeBO-yhd", "GAvIsoJW7oA", "UBctz5KNh1Py", "j5e9zzbwX2t", "fSYYf8bI2pK", "FrISZOINau", "91chIjSkys8", "nips_2022_CTqkruS5Bb", "nips_2022_CTqkruS5Bb", "nips_2022_CTqkruS5Bb", "nips_2022_CTqk...
nips_2022_2EufPS5ABlJ
Spherical Sliced-Wasserstein
Many variants of the Wasserstein distance have been introduced to reduce its original computational burden. In particular the Sliced-Wasserstein distance (SW), which leverages one-dimensional projections for which a closed-form solution of the Wasserstein distance is available, has received a lot of interest. Yet, it is restricted to data living in Euclidean spaces, while the Wasserstein distance has been studied and used recently on manifolds. We focus more specifically on the sphere, for which we define a novel SW discrepancy, which we call spherical Sliced-Wasserstein, making a first step towards defining SW discrepancies on manifolds. Our construction is notably based on closed-form solutions of the Wasserstein distance on the circle, together with a new spherical Radon transform. Along with efficient algorithms and the corresponding implementations, we illustrate its properties in several machine learning use cases where spherical representations of data are at stake: density estimation on the sphere, variational inference or hyperspherical auto-encoders.
Reject
This paper has generated a long discussion, and although it has strong theoretical merits, we all concur that the paper lacks empirical motivation as well as strong empirical evaluations against distances between distributions that do not exploit the manifold structure and those defined on a manifold. Hence, we believe that at this point it would be preferable to have such empirical evidence (ideally with quantitative results on real-world problems) before accepting the paper. Given that, we are sure that the paper will be much stronger and of broader interest to the ML community.
test
[ "cRH9RMOp31", "dNwnRqc6u1B", "qsHaJcMp7Rs", "z1X4aAbrOJp", "ZSt3kcwv8D2", "LGoymtd-MkP", "GBue9mdgco", "849608OqCzR", "5pl2ad_ugwe", "bL6DxqGgeG", "iWA6ch7jlko", "eFJBUYzZ1m", "nggcRNFe9Cb", "M1FZmheaQd", "Rk714vC5SSK" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their replies. In summary, I like the theory, even it is not mathematically challenging to build up those theory. For the practical side, the theory needs good examples to demonstrate its advantages, which is not shown in the paper. Hence, I would like to keep my score unchan...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 3 ]
[ "849608OqCzR", "qsHaJcMp7Rs", "GBue9mdgco", "ZSt3kcwv8D2", "5pl2ad_ugwe", "nips_2022_2EufPS5ABlJ", "eFJBUYzZ1m", "nggcRNFe9Cb", "bL6DxqGgeG", "M1FZmheaQd", "Rk714vC5SSK", "nips_2022_2EufPS5ABlJ", "nips_2022_2EufPS5ABlJ", "nips_2022_2EufPS5ABlJ", "nips_2022_2EufPS5ABlJ" ]
nips_2022_xvLWypz8p8
On Margins and Generalisation for Voting Classifiers
We study the generalisation properties of majority voting on finite ensembles of classifiers, proving margin-based generalisation bounds via the PAC-Bayes theory. These provide state-of-the-art guarantees on a number of classification tasks. Our central results leverage the Dirichlet posteriors studied recently by Zantedeschi et al. (2021) for training voting classifiers; in contrast to that work our bounds apply to non-randomised votes via the use of margins. Our contributions add perspective to the debate on the ``margins theory'' proposed by Schapire et al. (1998) for the generalisation of ensemble classifiers.
Accept
All reviewers uniformly agree on the paper being interesting and worth publishing -- a very fine read. While the authors have already uploaded an updated version of their paper with minor revisions, I encourage them to use the camera-ready version to make further improvements, taking into account all the reviews.
train
[ "Vjmxlj0Qi6A", "rDancjdY8tw", "6Rn6uOxWsyO", "VrBS5Ja8h39", "AEJc-8C8QBJ", "v6QZpr6ZQ2K", "R1BFkTfGPui", "vaPHJxQNAom" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for answering my question and I am glad that my suggestion could improve the results! I am happy to see this paper accepted. ", " We thank the reviewer again for the strong support shown for our paper. Please see also our general response.\n\nWe will tidy up the references (thanks for pointing these o...
[ -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "rDancjdY8tw", "vaPHJxQNAom", "R1BFkTfGPui", "v6QZpr6ZQ2K", "nips_2022_xvLWypz8p8", "nips_2022_xvLWypz8p8", "nips_2022_xvLWypz8p8", "nips_2022_xvLWypz8p8" ]
nips_2022_LC1jyMUalIA
Transferring Textual Knowledge for Visual Recognition
Transferring knowledge from task-agnostic pre-trained deep models to downstream tasks is an important topic in computer vision research. Along with the growth of computational capacity, we now have open-source Vision-Language pre-trained models that are large in both model architecture and amount of training data. In this study, we focus on transferring knowledge for vision classification tasks. Conventional methods randomly initialize the linear classifier head for vision classification, but they leave the usage of the text encoder for downstream visual recognition tasks unexplored. In this paper, we revise the role of the linear classifier and replace the classifier with the embedded language representations of the object categories. These language representations are initialized from the text encoder of the vision-language pre-trained model to further utilize its well-pretrained language model parameters. The empirical study shows that our method improves both the performance and the training speed of video classification, with a negligible change to the model. In particular, our paradigm achieves state-of-the-art accuracy of 87.3% on Kinetics-400.
Reject
The paper aims to study the idea of transferring textual knowledge from vision-language pre-trained models to visual recognition, or specifically the adaptation of CLIP for downstream visual recognition tasks. The authors proposed to revise the role of the linear classifier and replace the classifier with the embedded language representations of the object categories. The idea is simple (and somewhat trivial), and the authors demonstrated some promising results in experiments. Despite the positive aspects, there are several major concerns with this paper: 1) the technical depth of the method is weak (the paper only makes a minor change to the paradigm of using a vision-language pretrained model), 2) the novelty of the idea is limited; in fact, the idea of transferring text knowledge or zero-shot/few-shot adaptation of CLIP for downstream visual recognition tasks has been extensively studied in CLIP and many of its variants, but comparisons with that work are lacking. 3) the empirical study is not convincing and the comparisons are not extensive (many CLIP variants and related baselines are not compared); also, the related work section was poorly written, missing many recent advances in CLIP and video-related CLIP variants. Overall, the paper has an interesting simple idea that may be worth further investigation, but it is not strong enough for publication.
train
[ "ACsO_ukx6Y", "O4zIrsOlnLt", "Onbjioc9EP6", "3G-usGXgFHl", "zvVUBZrF2KM", "1I1eqPxwr0z", "ntsFd6fJfgE", "6deBIOduCBn", "acrOG2KO5IM1", "uPxRObFqIPf", "jyRpDCMioFO", "ynEPHFueR1", "TWK_fWZTy9i", "ZlhQgurCs1y", "v2XIz1L4zQn" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer L6WZ:\n\nWe are glad that our responses addressed all your concerns and resolved the questions! And you said you will update your review accordingly.\n\nHowever, we observe that the score has not been updated yet.\n\nJust a friendly reminder that the deadline for updating your review is in two hour...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "3G-usGXgFHl", "zvVUBZrF2KM", "3G-usGXgFHl", "uPxRObFqIPf", "ynEPHFueR1", "acrOG2KO5IM1", "ZlhQgurCs1y", "v2XIz1L4zQn", "jyRpDCMioFO", "ZlhQgurCs1y", "TWK_fWZTy9i", "v2XIz1L4zQn", "nips_2022_LC1jyMUalIA", "nips_2022_LC1jyMUalIA", "nips_2022_LC1jyMUalIA" ]
nips_2022_NL05_JGVg99
Open-Ended Reinforcement Learning with Neural Reward Functions
Inspired by the great success of unsupervised learning in Computer Vision and Natural Language Processing, the Reinforcement Learning community has recently started to focus more on unsupervised discovery of skills. Most current approaches, like DIAYN or DADS, optimize some form of mutual information objective. We propose a different approach that uses reward functions encoded by neural networks. These are trained iteratively to reward more complex behavior. In high-dimensional robotic environments our approach learns a wide range of interesting skills including front-flips for Half-Cheetah and one-legged running for Humanoid. It is the first skill discovery algorithm that can learn such skills without relying on any form of feature engineering. In the pixel-based Montezuma's Revenge environment our method also works with minimal changes and it learns complex skills that involve interacting with items and visiting diverse locations.
Accept
After a strong rebuttal from the authors and an extensive discussion among the reviewers, I believe the paper's pros outweigh its cons and this paper will be a valuable contribution to NeurIPS. I recommend it for acceptance and encourage the authors to address the reviewers' comments for the camera-ready version of the paper, especially regarding the newly added baselines and other comparisons to SOTA approaches listed by the reviewers.
train
[ "lTiBmhc4k9J", "Vh87CsY9FCa", "Eleen-Hzvx", "ULSg9jBPqts", "aGkfLxzJ8ge", "kQDMm2GYomX", "1RvKXNmM9Sw", "T49wu7SFhzFn", "9IWGK-2IYYvf", "-sIn_t_Rnx8", "Xxn2dR6f4eO", "zjZChC2B8XWl", "txkvjuM64kg", "Nw1JwwvdI8o", "LLsbYkdKD4o", "TgQvw8hdmKd", "ykWrjbSYuJD", "S6cSnxiJBg" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate that reviewers have thoroughly gone over the paper, our responses and gave very useful feedback. We'll continue working hard to add another Intrinsic Motivation baseline with checkpointing for the final version of the paper. On top of improving our baselines, we believe this builds new connections b...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "aGkfLxzJ8ge", "Xxn2dR6f4eO", "ULSg9jBPqts", "kQDMm2GYomX", "Nw1JwwvdI8o", "ykWrjbSYuJD", "T49wu7SFhzFn", "txkvjuM64kg", "TgQvw8hdmKd", "ykWrjbSYuJD", "S6cSnxiJBg", "TgQvw8hdmKd", "LLsbYkdKD4o", "nips_2022_NL05_JGVg99", "nips_2022_NL05_JGVg99", "nips_2022_NL05_JGVg99", "nips_2022_NL0...
nips_2022_AdK9_GTEvG
LeRaC: Learning Rate Curriculum
Most curriculum learning methods require an approach to sort the data samples by difficulty, which is often cumbersome to perform. In this work, we propose a novel curriculum learning approach termed Learning Rate Curriculum (LeRaC), which leverages the use of a different learning rate for each layer of a neural network to create a data-free curriculum during the initial training epochs. More specifically, LeRaC assigns higher learning rates to neural layers closer to the input, gradually decreasing the learning rates as the layers are placed farther away from the input. The learning rates increase at various paces during the first training iterations, until they all reach the same value. From this point on, the neural model is trained as usual. This creates a model-level curriculum learning strategy that does not require sorting the examples by difficulty and is compatible with any neural network, generating higher performance levels regardless of the architecture. We conduct comprehensive experiments on eight datasets from the computer vision (CIFAR-10, CIFAR-100, Tiny ImageNet), language (BoolQ, QNLI, RTE) and audio (ESC-50, CREMA-D) domains, considering various convolutional (ResNet-18, Wide-ResNet-50, DenseNet-121), recurrent (LSTM) and transformer (CvT, BERT, SepTr) architectures, comparing our approach with the conventional training regime. Moreover, we also compare with Curriculum by Smoothing (CBS), a state-of-the-art data-free curriculum learning approach. Unlike CBS, our performance improvements over the standard training regime are consistent across all datasets and models. Furthermore, we significantly surpass CBS in terms of training time (there is no additional cost over the standard training regime for LeRaC). Our code is freely available at: http://github.com/link.hidden.for.review.
Reject
The paper proposes a model-level curriculum learning strategy, which assigns higher initial learning rates to shallow layers than to deep ones and continues increasing all learning rates until they reach the same value during the training process. It is a model- and task-agnostic approach. Reviewers appreciated the simplicity of the approach, as well as its effectiveness on multiple domains and different neural networks. Main concerns which remain after the rebuttal are: - There is insufficient analysis of why the method works. Some intuition was given in the rebuttal, but all reviewers felt more analysis should be given to reach NeurIPS standards. - There is insufficient comparison with previous work on optimization. In addition, a more minor concern (in the AC's opinion) remains: - On vision data, evaluations using augmentations are missing. While it can be argued that the effects of augmentations and LeRaC might be orthogonal, it remains unclear how one can improve on strong baselines there.
train
[ "A0VVxrBFgJW", "DyUmYVDGGsK", "INgXwBvjk-rt", "NibZxik_ezpl", "JJlVny7Fjo1", "DjemtwtuUVB", "vKTb3_86uSv", "5PLuUQxHU_", "P1H-gRb4qUd", "zTGkdPy_ld-", "oHN2mSRsyIH", "8UhSYbeWXCz", "SDetO11sdpE", "TMmDnXOtthQ", "4Ts1fDU6UZ", "czGPqu3OEg6" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for taking the time to read our rebuttal. We address the additional concerns below:\n- Explore the best possible performance for the chosen Dataset-DNN combination and push to improve over it. \nRe: We thank the reviewer for this suggestion. We will use it to improve our results in the final...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "DyUmYVDGGsK", "oHN2mSRsyIH", "NibZxik_ezpl", "JJlVny7Fjo1", "DjemtwtuUVB", "vKTb3_86uSv", "5PLuUQxHU_", "zTGkdPy_ld-", "czGPqu3OEg6", "4Ts1fDU6UZ", "TMmDnXOtthQ", "SDetO11sdpE", "nips_2022_AdK9_GTEvG", "nips_2022_AdK9_GTEvG", "nips_2022_AdK9_GTEvG", "nips_2022_AdK9_GTEvG" ]
nips_2022_pcgMNVhRslj
Alignment-guided Temporal Attention for Video Action Recognition
Temporal modeling is crucial for various video learning tasks. Most recent approaches employ either factorized (2D+1D) or joint (3D) spatial-temporal operations to extract temporal contexts from the input frames. While the former is more efficient in computation, the latter often obtains better performance. In this paper, we attribute this to a dilemma between the sufficiency and the efficiency of interactions among various positions in different frames. These interactions affect the extraction of task-relevant information shared among frames. To resolve this issue, we prove that frame-by-frame alignments have the potential to increase the mutual information between frame representations, thereby including more task-relevant information to boost effectiveness. Then we propose Alignment-guided Temporal Attention (ATA) to extend 1-dimensional temporal attention with parameter-free patch-level alignments between neighboring frames. It can act as a general plug-in for image backbones to conduct the action recognition task without any model-specific design. Extensive experiments on multiple benchmarks demonstrate the superiority and generality of our module.
Accept
The paper was reviewed by four reviewers, receiving 2 x Borderline Reject and 2 x Weak Accept. Importantly, post rebuttal, [1mVh] mentioned upgrading the rating from Borderline Reject to Borderline Accept (though this is not reflected in the final ratings). The general concerns raised by the reviewers included limited improvements over the baselines and a lack of certain ablations / comparisons. Many of these concerns have been addressed by the various new experiments and discussions provided during the rebuttal period. [1mVh], [Ppgh] and [Fgtq] have all acknowledged that their concerns were largely resolved. [Shks], who remained the only negative reviewer, neither participated in the discussion nor acknowledged reading the author responses. The AC has gone through the responses to the comments of [Shks] and found them partially convincing. Overall, given the broadly positive assessment of the reviewers and the generality of the proposed approach, which can be combined with a variety of architectures, the work will make a fine contribution to NeurIPS.
train
[ "rSvwyf9AEd6", "b0psO9G7rLZ", "-yO3DWKDtP", "p4z5VLrQhoE", "2K2U1TO0wa", "eqiHNNRMDcb", "N1P8dQfPYjH", "2gOZ_O-bMYi", "6PwBXkVpsBI" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the response from the authors who answer my questions. I have a favorable opinion of the paper and will keep my initial \"Weak Accept\" rating. I think it is a solid enough submission.\n\n", " We really appreciate your valuable comments. Below please find our specific answers to the questions. We w...
[ -1, -1, -1, -1, -1, 4, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "-yO3DWKDtP", "6PwBXkVpsBI", "2gOZ_O-bMYi", "N1P8dQfPYjH", "eqiHNNRMDcb", "nips_2022_pcgMNVhRslj", "nips_2022_pcgMNVhRslj", "nips_2022_pcgMNVhRslj", "nips_2022_pcgMNVhRslj" ]
nips_2022_WyQAmQ8WIU
SlateFree: a Model-Free Decomposition for Reinforcement Learning with Slate Actions
We consider the problem of sequential recommendations, where at each step an agent proposes a slate of $N$ distinct items to a user from a much larger catalog of size $K \gg N$. The user has unknown preferences towards the recommendations, and the agent takes sequential actions that optimise (in our case minimise) some action-related cost, with the help of Reinforcement Learning. The number of possible item combinations for a slate is $\binom{K}{N}$, an enormous number that renders value iteration methods intractable. We prove that the slate-MDP can actually be decomposed using just $K$ item-related $Q$ functions per state, which describe the problem in a more compact and efficient way. Based on this, we propose novel model-free SARSA and Q-learning algorithms that perform $N$ parallel iterations per step, without any prior user knowledge. We call this method SlateFree, i.e. free-of-slates, and we show numerically that it converges very fast to the exact optimum for arbitrary user profiles, and that it outperforms alternatives from the literature.
Reject
This paper considers reinforcement learning with unordered slate recommendations and shows that this problem can be decomposed into one Q-value per available item as compared to one value per possible slate in existing work. The authors derive a Bellman equation for this formulation and propose model-free algorithms based on it. They show on small synthetic tasks that these methods converge and perform favorably compared to existing methods. The reviewers appreciated the new decomposition and its potential to enable significantly more efficient algorithms. However, several reviewers also voiced concerns about the clarity of the presentation, the practicality of the approach and the strength of the assumptions. The authors were able to remove a key limiting assumption (costs only depend on state) in the rebuttal revision of the paper. This was viewed very positively and alleviated the concerns about the strong assumptions. However, the concerns about clarity and practicality could not be fully addressed by the authors' response. For this reason, the paper is recommended to be rejected. Based on the reviewers' comments, the discussions and the AC's own reading of the paper, the following suggestions would make this a very strong paper: * In the fully general setting where costs are action-dependent, the costs in the Bellman equations are policy dependent and therefore change throughout the execution of the algorithms. As the authors acknowledge, this makes it unclear whether the algorithms provably converge. The authors demonstrate good empirical behavior but their experiments are limited to small toy problems in the absence of function approximation. However, the combination of function approximation and changing (policy-dependent) cost functions may lead to less stable algorithms, a major concern in practice. A theoretical convergence analysis or empirical results with function approximation on more realistic problems would be extremely valuable here. 
* The paper lacks a more thorough discussion of its relation to prior works and settings. The questions raised in the reviews around the generality and assumptions of this paper show that readers are left wondering what exactly enables the results as compared to prior work. A better discussion of the exact setting and a comparison to other works would be very valuable and a better use of space in the main paper than the short proofs (which could be moved to the appendix). * The addition of simple illustrative examples would greatly help convey the intuition behind the decomposition. * The setting in this paper considers the slates as unordered, and ordering effects seem not to be captured by this formulation. This is in contrast to existing work. The reader may wonder whether this is crucial for the proposed decomposition. Unordered slates are certainly a deviation from previous works in this area and from most practical recommender-system settings, which would limit the applicability of this approach. Carefully discussing this and, if necessary / possible, extending the method to ordered slates would strengthen the paper.
train
[ "8g89dNE-98t", "khJ61qWjc5", "PcLhW95zkqP", "Bjl_ELZZmfd", "3XduNou_bgT", "19_C-aUvMuL", "5d5180FiQfC", "ON874jfFsEI", "CvwUcDCfdf", "beN5PpN3RI", "6SAgFcKTOyV", "WujJHVMsZUo" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are grateful to the reviewer for the very interesting comments. \n\n- We understand the reviewer's worries in the case when the cost depends on both state and action. But, in fact, the situation is very simple:\n\n(case SARSA) Indeed, in the case of SlateFree-SARSA one needs to calculate $c(s,j)$ at each step....
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "PcLhW95zkqP", "Bjl_ELZZmfd", "19_C-aUvMuL", "5d5180FiQfC", "WujJHVMsZUo", "6SAgFcKTOyV", "beN5PpN3RI", "CvwUcDCfdf", "nips_2022_WyQAmQ8WIU", "nips_2022_WyQAmQ8WIU", "nips_2022_WyQAmQ8WIU", "nips_2022_WyQAmQ8WIU" ]
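The SlateFree record above describes replacing one Q-value per slate with $K$ item-level Q-values per state, updated with $N$ parallel iterations per step. The following is an illustrative tabular sketch of that decomposition, not the authors' implementation: the toy environment, the shared slate-cost target, and all dimensions are hypothetical stand-ins.

```python
import numpy as np

# Illustrative sketch of the per-item decomposition: instead of one Q-value
# per slate (K choose N entries), keep K item-level Q-values per state and
# build the slate greedily from the N lowest-cost items (costs are minimised).
rng = np.random.default_rng(0)
S, K, N = 4, 10, 3            # states, catalog size, slate size (hypothetical)
Q = np.zeros((S, K))
alpha, gamma = 0.1, 0.9

def greedy_slate(q_row, eps=0.1):
    """Epsilon-greedy slate: N distinct items with the smallest Q-values."""
    if rng.random() < eps:
        return rng.choice(K, size=N, replace=False)
    return np.argsort(q_row)[:N]

def update(s, slate, cost, s_next):
    """One SlateFree-style step: N parallel item-level updates."""
    target = cost + gamma * np.sort(Q[s_next])[:N].mean()
    for j in slate:                       # update every item in the slate
        Q[s, j] += alpha * (target - Q[s, j])

# A toy user model: recommending item j in state s incurs cost |s - (j mod S)|.
for _ in range(2000):
    s = rng.integers(S)
    slate = greedy_slate(Q[s])
    cost = np.mean([abs(s - (j % S)) for j in slate])
    update(s, slate, cost, rng.integers(S))

final_slate = greedy_slate(Q[0], eps=0.0)
```

Because updates move only $K$ values per state, memory and per-step compute are linear in the catalog size rather than combinatorial in the slate size, which is the efficiency argument of the abstract.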
nips_2022_zzDrPqn57DL
BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework
Fusing camera and LiDAR information has become a de-facto standard for 3D object detection tasks. Current methods rely on point clouds from the LiDAR sensor as queries to leverage features from the image space. However, this underlying assumption makes current fusion frameworks unable to produce any prediction when there is a LiDAR malfunction, whether minor or major. This fundamentally limits the deployment capability in realistic autonomous driving scenarios. In contrast, we propose a surprisingly simple yet novel fusion framework, dubbed BEVFusion, whose camera stream does not depend on the input of LiDAR data, thus addressing the downside of previous methods. We empirically show that our framework surpasses the state-of-the-art methods under the normal training settings. Under the robustness training settings that simulate various LiDAR malfunctions, our framework significantly surpasses the state-of-the-art methods by 15.7% to 28.9% mAP. To the best of our knowledge, we are the first to handle realistic LiDAR malfunctions and can be deployed in realistic scenarios without any post-processing procedure.
Accept
The paper proposes a method to fuse two sources of information for Bird’s Eye View (BEV) detection, namely multi-view images and LiDAR data, in a way that data defects in one source of information do not affect the other. Most existing camera-LiDAR fusion works decorate LiDAR points with image features and then perform detection in 3D/BEV space. This work leverages the recent Lift-Splat-Shoot work for cameras, which allows one to map both camera and LiDAR inputs to BEV space before fusing and applying the detection head. The reviewers appreciate the identification of the problem that present fusion methods are susceptible to damage in one of the two sources of information, the simplicity of the method, and its good empirical performance. They raise concerns regarding its novelty, given the fairly obvious design choices of the method. The rebuttal submitted by the authors presents more empirical results and ablations. Most reviewers appreciate the contribution of the paper, and the paper is suggested for publication.
train
[ "xsk31P8dGT", "pqUNCBau8I", "6xbmrA2vqKY", "3XFfOqOiijp", "SOvZaGyaBk", "xncPIadu0wG", "lHXMdlupFF", "M4JNZ-SY-vp", "Hd6Ce57Ircd", "uVn6Us0aP_", "HNAOCsCSzZw", "B8g021IaaAm", "VrENUT1h6-" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " #### `Q6: Can you please provide results on at least one more dataset with high quality Lidar such as the Waymo Open Dataset?`\nA6: Thanks for your suggestion. We train BEVFusion equipped with PointPillars as LiDAR stream on WaymoD5-3classes and it barely improves the baseline. Due to the time constraints of rebu...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4, 3 ]
[ "B8g021IaaAm", "nips_2022_zzDrPqn57DL", "B8g021IaaAm", "VrENUT1h6-", "VrENUT1h6-", "HNAOCsCSzZw", "uVn6Us0aP_", "Hd6Ce57Ircd", "nips_2022_zzDrPqn57DL", "nips_2022_zzDrPqn57DL", "nips_2022_zzDrPqn57DL", "nips_2022_zzDrPqn57DL", "nips_2022_zzDrPqn57DL" ]
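The BEVFusion record above hinges on one structural idea: map camera and LiDAR to BEV features through independent streams, so that a missing LiDAR input degrades rather than breaks the pipeline. The sketch below is schematic only; the stand-in feature extractors, shapes, and averaging fusion rule are hypothetical, not the paper's architecture.

```python
import numpy as np

# Schematic of the decoupled-streams idea: camera and LiDAR are each mapped to
# bird's-eye-view (BEV) features independently, then fused, so a missing LiDAR
# input degrades but does not break the pipeline.
H, W, C = 8, 8, 16

def camera_to_bev(images):        # stand-in for a lift-splat-style projection
    return np.ones((C, H, W)) if images is not None else None

def lidar_to_bev(points):         # stand-in for a voxelized LiDAR encoder
    return np.full((C, H, W), 2.0) if points is not None else None

def fuse(cam_bev, lidar_bev):
    """Average whichever BEV feature maps are available."""
    feats = [f for f in (cam_bev, lidar_bev) if f is not None]
    return np.mean(feats, axis=0)

normal = fuse(camera_to_bev("imgs"), lidar_to_bev("pts"))      # both sensors
lidar_down = fuse(camera_to_bev("imgs"), lidar_to_bev(None))   # LiDAR failure
```

The contrast with point-decoration methods is that here `camera_to_bev` never consumes LiDAR points, so `lidar_down` is still a valid feature map.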
nips_2022_-V1ITIKPH6
Active Learning for Multiple Target Models
We describe and explore a novel setting of active learning (AL), where there are multiple target models to be learned simultaneously. In many real applications, the machine learning system is required to be deployed on diverse devices with varying computational resources (e.g., workstation, mobile phone, edge devices, etc.), which leads to the demand for training multiple target models on the same labeled dataset. However, it is generally believed that AL is model-dependent and untransferable, i.e., the data queried by one model may be less effective for training another model. This phenomenon naturally raises a question: "Does there exist an AL method that is effective for multiple target models?" In this paper, we answer this question by theoretically analyzing the label complexity of active and passive learning under the setting with multiple target models, and conclude that AL does have the potential to achieve better label complexity under this novel setting. Based on this insight, we further propose an agnostic AL sampling strategy to select the examples located in the joint disagreement regions of different target models. The experimental results on the OCR benchmarks show that the proposed method can significantly surpass the traditional active and passive learning methods under this challenging setting.
Accept
This paper studies a novel active learning setting adapted to learning multiple target models. The authors propose a strategy that can benefit all tasks by focusing on regions with high disagreement. This contribution shows, in a sense, that the active learning procedure can be transferable to multiple tasks. A theoretical analysis is provided in the form of a bound on label complexity. Experimental results support the claims. The reviewers have globally appreciated the contribution and most of the comments raised in the reviews have been addressed in the rebuttal. The overall evaluation of the paper is positive and I propose acceptance. I recommend the authors to take into consideration the last comments of the reviewers, in particular for improving the presentation of the paper.
train
[ "QDb499zjsFs", "bF1zVNvy40X", "0zsszkObiX", "l_Ebu2_Uu47", "DFQoi1zcTlb", "bBWtcq4Duj", "symrIuZYaKc", "4ouN-UijRv7", "ePfJKVYItT3", "yen6I419NJX" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I want to keep my initial rating.", " Thank you for the detailed response. The authors appropriately addressed my questions and I updated my rating from 4 to 5 accordingly. I believe that larger-scale experiments would strengthen the algorithmic side of this paper, although this is...
[ -1, -1, -1, -1, -1, -1, 6, 8, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3, 1 ]
[ "l_Ebu2_Uu47", "DFQoi1zcTlb", "4ouN-UijRv7", "yen6I419NJX", "ePfJKVYItT3", "symrIuZYaKc", "nips_2022_-V1ITIKPH6", "nips_2022_-V1ITIKPH6", "nips_2022_-V1ITIKPH6", "nips_2022_-V1ITIKPH6" ]
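The record above proposes querying examples located in the joint disagreement regions of several target models. A minimal sketch of that selection rule follows; the "models" are just prediction vectors and the pairwise-disagreement score is a hypothetical instantiation, not the paper's exact criterion.

```python
import numpy as np

# Hypothetical sketch of the core selection rule: query the unlabeled points
# that fall in the joint disagreement region of several target models.
rng = np.random.default_rng(1)
n_pool, n_models = 100, 3
preds = rng.integers(0, 2, size=(n_models, n_pool))   # class predictions

def disagreement_scores(preds):
    """Fraction of model pairs that disagree on each pooled example."""
    m = preds.shape[0]
    pairs = m * (m - 1) / 2
    dis = np.zeros(preds.shape[1])
    for i in range(m):
        for j in range(i + 1, m):
            dis += preds[i] != preds[j]
    return dis / pairs

def select_batch(preds, budget):
    """Query the examples with the highest joint disagreement."""
    return np.argsort(-disagreement_scores(preds))[:budget]

batch = select_batch(preds, budget=10)
```

Examples on which every model already agrees score zero and are never prioritized, which is the intuition behind "joint disagreement regions" being informative for all target models at once.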
nips_2022_bZzS_kkJes
Neural Matching Fields: Implicit Representation of Matching Fields for Visual Correspondence
Existing pipelines of semantic correspondence commonly include extracting high-level semantic features for invariance against intra-class variations and background clutter. This architecture, however, inevitably results in a low-resolution matching field that additionally requires an ad-hoc interpolation process as post-processing for converting it into a high-resolution one, certainly limiting the overall performance of matching results. To overcome this, inspired by the recent success of implicit neural representation, we present a novel method for semantic correspondence, called Neural Matching Field (NeMF). However, the complexity and high dimensionality of a 4D matching field are the major hindrances; to address them, we propose a cost embedding network that processes a coarse cost volume to use as guidance for establishing a high-precision matching field through the following fully-connected network. Nevertheless, learning a high-dimensional matching field remains challenging mainly due to computational complexity, since a na\"ive exhaustive inference would require querying all pixels in the 4D space to infer pixel-wise correspondences. To overcome this, we propose adequate training and inference procedures: in the training phase, we randomly sample matching candidates, and in the inference phase, we iteratively perform PatchMatch-based inference and coordinate optimization at test time. With these combined, competitive results are attained on several standard benchmarks for semantic correspondence. Code and pre-trained weights are available at~\url{https://ku-cvlab.github.io/NeMF/}.
Accept
The paper concerns itself with computing high-resolution matchings. The authors propose to represent matchings as maxima of neural "matching" fields, which is a novel and interesting theoretical contribution that allows one to obtain high-resolution matchings with a fixed representation size of the neural field. The matchings are extracted from the neural field via coordinate optimization. State-of-the-art performance is attained on a variety of semantic correspondence benchmarks. Reviewers also acknowledge that the paper is well written and easy to follow. On the downside are larger computational costs. Also, newer versions of CATS, i.e. CATS++ (which is concurrent work), outperform the presented method. Overall, the interesting theoretical contribution that might be useful in other domains and the strong empirical performance make the paper a good fit for NeurIPS. In a final version the reviewer recommendations must be taken into account.
train
[ "Bl2rIqY8__", "lAkJFw7axgg", "gGqQzsDoKt9y", "8oOtz7mKmc3", "CJhVjSnGRJE", "eriM8cVkEM", "svlPLHZYana", "wPqusEvVrX", "bcgNmwuE0tm", "mxiqeWtajp3", "WCk0WCu-O96", "ft_ME3go5uS", "jJkA9bAdLPb" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers,\n\nSince the rebuttal discussion is about to end soon, if there is any other concern that we did not adequately address or is not resolved, please let us know, and we will come back to you as soon as possible if we can. \n\nThank you and best regards,\n\nThe authors of Paper 2300.\n", " Thanks t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "nips_2022_bZzS_kkJes", "wPqusEvVrX", "nips_2022_bZzS_kkJes", "jJkA9bAdLPb", "ft_ME3go5uS", "ft_ME3go5uS", "WCk0WCu-O96", "WCk0WCu-O96", "mxiqeWtajp3", "nips_2022_bZzS_kkJes", "nips_2022_bZzS_kkJes", "nips_2022_bZzS_kkJes", "nips_2022_bZzS_kkJes" ]
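The NeMF record above represents correspondences as a continuous field $f(x, y, x', y')$ over 4D coordinates, queried by sampling rather than exhaustively. The toy stand-in below uses a random two-layer MLP and random candidate sampling purely to illustrate the interface; the weights, sizes, and sampling scheme are hypothetical, and a trained field would score true pixel correspondences highly.

```python
import numpy as np

# A toy stand-in for a neural matching field: an MLP f(x, y, x', y') -> score
# defined continuously over 4D coordinates, so matching resolution is not tied
# to a feature-map grid.  Weights here are random, not learned.
rng = np.random.default_rng(6)
W1, b1 = rng.normal(size=(64, 4)), np.zeros(64)
W2, b2 = rng.normal(size=(1, 64)), np.zeros(1)

def field(coords):
    """Score a batch of 4D match candidates (x, y, x2, y2) in [0, 1]^4."""
    h = np.maximum(W1 @ coords.T + b1[:, None], 0.0)    # ReLU hidden layer
    return (W2 @ h + b2[:, None]).ravel()

# Inference-time idea from the abstract: instead of scoring all pixel pairs
# exhaustively, score a random sample of candidates and keep the best one
# (the paper refines such candidates with PatchMatch and coordinate descent).
cands = rng.random(size=(4096, 4))
best = cands[np.argmax(field(cands))]
```

Because `field` accepts arbitrary real-valued coordinates, the argmax can later be refined continuously, which is what makes the implicit representation resolution-free.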
nips_2022_ReB7CCByD6U
Beyond Mahalanobis Distance for Textual OOD Detection
As the number of AI systems keeps growing, it is fundamental to implement and develop efficient control mechanisms to ensure the safe and proper functioning of machine learning (ML) systems. Reliable out-of-distribution (OOD) detection aims to detect test samples that are statistically far from the training distribution, as they might cause failures of in-production systems. In this paper, we propose a new detector called TRUSTED. Different from previous works, TRUSTED's key components (i) include a novel OOD score relying on the concept of statistical data depth, and (ii) exploit the full potential of the idea that all hidden layers of the network carry information regarding OOD. Our extensive experiments, comparing over 51k model configurations including different checkpoints, seeds and various datasets, demonstrate that TRUSTED achieves state-of-the-art performance, producing an improvement of over 3 AUROC points.
Accept
The paper proposes an out-of-distribution detection approach using integrated rank-weighted (IRW) depth. Its main novel feature is leveraging information from all layers of the model for this task. The detector can be applied to new transformer models without any training, as opposed to data-driven methods. The method is assessed in a comprehensive evaluation and code is provided for reproducibility. One limitation, however, is the difficulty of comparing the proposed method with related work. The presentation, especially of the technical content, should be improved in the final version. The AC disagrees with the authors' complaint about the biasedness of some reviews. Indeed, two reviewers had critical remarks on several aspects of the paper, yet this criticism appears to be fair and driven by the scientific discourse. The answers provided in the rebuttal have clarified most of the reviewers' concerns.
train
[ "d1L-1Ox2f0l", "rAG1fgcKgKY", "J0KVVOI3tId", "GACG2RnqUXA", "GhThF3ZnGcF", "1wKbi3XvSv6", "0cUHrj3GuT2", "GCU8yl9nYK5", "EqF3V14QmAB", "zrYNrqNvnCf", "uIcbAnI9-E" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you, the authors have addressed my questions during rebuttal.\n", " Let us thank reviewer o5Ud for their detailed answer to our response. We are glad they are acknowledging that a direct comparison against [36] and [85] is either not realistic in our setting, or outside of the scope of the paper.\n\n\nIn ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "1wKbi3XvSv6", "J0KVVOI3tId", "0cUHrj3GuT2", "uIcbAnI9-E", "zrYNrqNvnCf", "EqF3V14QmAB", "GCU8yl9nYK5", "nips_2022_ReB7CCByD6U", "nips_2022_ReB7CCByD6U", "nips_2022_ReB7CCByD6U", "nips_2022_ReB7CCByD6U" ]
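The TRUSTED record above scores OOD samples via statistical data depth. The sketch below is a simplified, single-layer Monte-Carlo version of an integrated rank-weighted (IRW) style depth; the direction count, feature dimensions, and the toy in-/out-of-distribution points are assumptions, and the actual method aggregates depths over all hidden layers of a transformer.

```python
import numpy as np

# A simplified sketch of a data-depth OOD score: project features onto random
# unit directions and average the one-sided empirical rank of the test point.
# Lower depth = more atypical relative to the training features.
rng = np.random.default_rng(2)

def irw_depth(x, train_feats, n_dirs=256):
    d = train_feats.shape[1]
    u = rng.normal(size=(n_dirs, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)     # random unit directions
    proj_train = train_feats @ u.T                    # (n_train, n_dirs)
    proj_x = x @ u.T                                  # (1, n_dirs)
    rank = (proj_train <= proj_x).mean(axis=0)        # empirical CDF at x
    return np.minimum(rank, 1.0 - rank).mean()        # one-sided depth

train = rng.normal(size=(500, 8))      # stand-in for hidden-layer features
in_dist = np.zeros((1, 8))             # near the center of the training data
ood = 10.0 * np.ones((1, 8))           # far outside the training support
```

A central point gets rank near 0.5 in every direction (depth near 0.5), while a far-away point is extreme in most directions (depth near 0), so thresholding the depth yields an OOD detector.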
nips_2022_KWN3I1koJsU
Learning Generalizable Risk-Sensitive Policies to Coordinate in Decentralized Multi-Agent General-Sum Games
While various multi-agent reinforcement learning methods have been proposed in cooperative settings, few works investigate how self-interested learning agents achieve mutual coordination in decentralized general-sum games and generalize pre-trained policies to non-cooperative opponents during execution. In this paper, we present a generalizable and sample-efficient algorithm for multi-agent coordination in decentralized general-sum games without any access to other agents' rewards or observations. Specifically, we first learn the distributions over the returns of individual agents and estimate a dynamic risk-seeking bonus to encourage agents to discover risky coordination strategies. Furthermore, to avoid overfitting to opponents' coordination strategies during training, we propose an auxiliary opponent modeling task so that agents can infer their opponents' type and dynamically alter the corresponding strategies during execution. Empirically, we show that agents trained via our method can achieve mutual coordination during training and avoid being exploited by non-cooperative opponents during execution, outperforming other baseline methods and reaching the state of the art.
Reject
The paper presents a novel approach for improving coordination in general-sum games by using risk-sensitive policies based on distributional RL. While the idea is promising, there are significant questions about the paper. For example, there is concern about the lack of theoretical guarantees and intuition about when the approach will work well. There should also be a more extensive discussion of related work. For example, distributional and risk-sensitive RL have been used in multi-agent RL but more should be said about how the proposed method differs and why they can't be included in the experiments (e.g., the decentralized training methods should have no problem running in the general sum case). More extensive experiments would also be helpful. Additional domains and baselines would more clearly show the benefits of the method (and when it could potentially fail).
train
[ "9oV-tfXEqpH", "u8bRigwc4ac", "a11V2OBJQ3r", "TU1cEFlql5", "v0Jz9rqgkH8", "ky_eTfkW7v4", "c0H-uHlGSzM", "39aZAfKvS_x", "RLfacDDqeXR", "HUXyJ8upECy", "kixNcKJvWcR", "6uP1QuKpoWc", "iYs4YxM-znk", "v5eu84Qyxf2", "wd8cCfhu2Ek", "84xlBFWtHX0", "ufJX92hwkqn", "FgEud9uUcYI", "8hlgyAgOZf...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Dear Reviewer DkS2,\n\nWe have supplemented the comparison experiments with pre-existing literature, and our responses with reviewer ZHUo may eliminate some of your confusion. As the response system will be closed soon within one day. We thank you again for your comments. We hope our detailed responses could addr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9, 3, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "iYs4YxM-znk", "a11V2OBJQ3r", "TU1cEFlql5", "v0Jz9rqgkH8", "39aZAfKvS_x", "M6PCWk4DHe6", "RLfacDDqeXR", "8hlgyAgOZfX", "ufJX92hwkqn", "nips_2022_KWN3I1koJsU", "iYs4YxM-znk", "iYs4YxM-znk", "v5eu84Qyxf2", "D5HCzOrkU0x", "FgEud9uUcYI", "8hlgyAgOZfX", "M6PCWk4DHe6", "nips_2022_KWN3I1k...
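The record above derives a risk-seeking bonus from a learned return distribution. The following minimal sketch shows one hypothetical way to turn quantile estimates of the return into such a bonus; the quantile values, the tail fraction `tau`, and the toy safe/risky actions are all assumptions, not the paper's exact formulation.

```python
import numpy as np

# A minimal sketch of a risk-seeking bonus from a return distribution: with
# quantile estimates of the return, the mean of the upper tail minus the
# overall mean rewards actions whose return distribution has a heavy upper
# tail -- i.e., "risky" coordination strategies worth exploring.
def risk_seeking_bonus(quantiles, tau=0.75):
    q = np.sort(np.asarray(quantiles, dtype=float))
    upper = q[int(tau * len(q)):].mean()     # mean of the upper tail
    return upper - q.mean()                  # >= 0; larger for risky actions

safe  = [1.0, 1.0, 1.0, 1.0]                 # deterministic return
risky = [0.0, 0.0, 0.0, 4.0]                 # same mean, heavy upper tail
```

Both toy actions have mean return 1, but only the risky one earns a positive bonus, which is the qualitative behavior a risk-seeking exploration term needs.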
nips_2022_T7114JzrwB
ZeroC: A Neuro-Symbolic Model for Zero-shot Concept Recognition and Acquisition at Inference Time
Humans have the remarkable ability to recognize and acquire novel visual concepts in a zero-shot manner. Given a high-level, symbolic description of a novel concept in terms of previously learned visual concepts and their relations, humans can recognize novel concepts without seeing any examples. Moreover, they can acquire new concepts by parsing and communicating symbolic structures using learned visual concepts and relations. Endowing machines with these capabilities is pivotal in improving their generalization capability at inference time. In this work, we introduce Zero-shot Concept Recognition and Acquisition (ZeroC), a neuro-symbolic architecture that can recognize and acquire novel concepts in a zero-shot way. ZeroC represents concepts as graphs of constituent concept models (as nodes) and their relations (as edges). To allow inference-time composition, we employ energy-based models (EBMs) to model concepts and relations. We design the ZeroC architecture so that it allows a one-to-one mapping between a symbolic graph structure of a concept and its corresponding EBM, which, for the first time, allows acquiring new concepts, communicating their graph structure, and applying it to classification and detection tasks (even across domains) at inference time. We introduce algorithms for learning and inference with ZeroC. We evaluate ZeroC on a challenging grid-world dataset which is designed to probe zero-shot concept recognition and acquisition, and demonstrate its capability.
Accept
The focus of this work is on the introduction of a compositional reasoning model that enables zero-shot generalization. There are a number of limitations (e.g., the small domain and limited concepts), but reviewers were content that the demonstrated results on low-resolution image domains showed the approach can scale to more realistic task complexity. The primary open challenge for richer tasks is the identification and training of elementary concepts -- a classification that may not hold.
train
[ "Vp47R6oICO", "q1DcCYxK9z", "kEZPH8TYM6u", "7La6nOELeiNV", "SfTYA_59mHr", "3_epOAZNHDqY", "rDH4IkDeStZX", "KM8k4S_jgHe", "fI-KZesxY3B", "cOsKipe1VuS", "gVO1MIoibrv", "d2Wj0zftKrc", "-SE8DRdE5kR", "JSFjZTQPI17" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks very much for you review! In our response, we have attempted to addressed your concerns about scalability, few-shot learning datasets and question about learning real-world concepts. We have also added Appendix A.14 in the revised version to discuss about scalability, which our 2D to 3D domain adaptation e...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "rDH4IkDeStZX", "3_epOAZNHDqY", "nips_2022_T7114JzrwB", "JSFjZTQPI17", "-SE8DRdE5kR", "-SE8DRdE5kR", "d2Wj0zftKrc", "gVO1MIoibrv", "gVO1MIoibrv", "gVO1MIoibrv", "nips_2022_T7114JzrwB", "nips_2022_T7114JzrwB", "nips_2022_T7114JzrwB", "nips_2022_T7114JzrwB" ]
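The ZeroC record above composes a novel concept's energy from constituent concept energies (graph nodes) plus relation energies (graph edges). The toy below mirrors that additive composition with hand-written energies over line segments instead of learned EBMs; the concepts ("vertical", "horizontal", "touching") and the "L-shape" composition are illustrative assumptions.

```python
# Schematic of zero-shot composition in the spirit of the abstract: the energy
# of a composite concept is the sum of constituent concept energies (nodes)
# plus relation energies (edges).  These toy energies are hand-written, not
# learned energy-based models.
def e_vertical(seg):      # low energy iff the segment is vertical
    (x0, y0), (x1, y1) = seg
    return abs(x1 - x0)

def e_horizontal(seg):    # low energy iff the segment is horizontal
    (x0, y0), (x1, y1) = seg
    return abs(y1 - y0)

def e_touching(a, b):     # relation: the two segments share an endpoint
    return 0.0 if set(map(tuple, a)) & set(map(tuple, b)) else 1.0

def e_lshape(s1, s2):     # composite "L" = vertical + horizontal + touching
    return e_vertical(s1) + e_horizontal(s2) + e_touching(s1, s2)

good_L = ([(0, 0), (0, 2)], [(0, 0), (2, 0)])   # valid L shape
bad_L  = ([(0, 0), (2, 2)], [(0, 0), (2, 0)])   # first segment is diagonal
```

Nothing about "L-shape" was trained: a new symbolic graph (two nodes, one edge) immediately yields a new recognizer by summing existing energies, which is the zero-shot acquisition idea.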
nips_2022_xvZtgp5wyYT
Learning to Accelerate Partial Differential Equations via Latent Global Evolution
Simulating the time evolution of Partial Differential Equations (PDEs) of large-scale systems is crucial in many scientific and engineering domains such as fluid dynamics, weather forecasting and their inverse optimization problems. However, both classical solvers and recent deep learning-based surrogate models are typically extremely computationally intensive, because of their local evolution: they need to update the state of each discretized cell at each time step during inference. Here we develop Latent Evolution of PDEs (LE-PDE), a simple, fast and scalable method to accelerate the simulation and inverse optimization of PDEs. LE-PDE learns a compact, global representation of the system and efficiently evolves it fully in the latent space with learned latent evolution models. LE-PDE achieves speedup by having a much smaller latent dimension to update during long rollouts as compared to updating in the input space. We introduce new learning objectives to effectively learn such latent dynamics to ensure long-term stability. We further introduce techniques for speeding up inverse optimization of boundary conditions for PDEs via backpropagation through time in latent space, and an annealing technique to address the non-differentiability and sparse interaction of boundary conditions. We test our method on a 1D benchmark of nonlinear PDEs, 2D Navier-Stokes flows into the turbulent phase and an inverse optimization of boundary conditions in 2D Navier-Stokes flow. Compared to state-of-the-art deep learning-based surrogate models and other strong baselines, we demonstrate up to 128x reduction in the dimensions to update, and up to 15x improvement in speed, while achieving competitive accuracy.
Accept
The paper presents a new method for accelerating the simulation and inverse optimization of partial differential equations (PDEs) of large-scale systems. The proposed approach learns the evolution of dynamics in a “global” latent space (i.e., with fixed dimensionality). The reviewers agree the proposed approach is novel and empirically competitive. Issues regarding experiments have largely been addressed by the authors in their rebuttal. The authors are expected to add some extended discussion (if possible) on (theoretical) properties of PDEs where their approach is expected to succeed. Some of the reviewers increased their scores after the rebuttal period.
train
[ "avDG43zgnev", "xPwUQ94rGg-", "wbEcYR-aolD", "eH3kTFgV2LH", "MCGgTho68yQ", "qTGpjLv61dN", "1z7NbJMK-x4", "rJVs6e2b8f5", "OaBuxhNcF1n", "tYVtii66alA", "rbncqNtP_Em", "ubV8IilYRYb", "2IsWw94qSKx", "46LZzNNWGtX", "fS8fUeS5F_8", "lvMVLrupf7", "PkpJ78nSN0", "XDFxOpXsw-", "VqGOaHv9wJp"...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ " Thanks the reviewers for the response! To give an intuitive answer to your question of \"which family of PDEs can be applied the proposed method\", we can think of a PDE as a ground-truth model that evolves the ***state*** of a physical system. Typically, the states show more global, dominant features, and can be...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3 ]
[ "qTGpjLv61dN", "wbEcYR-aolD", "MCGgTho68yQ", "o_mqVYBZFUG", "XDFxOpXsw-", "tYVtii66alA", "KIsnb7cTFc", "o_mqVYBZFUG", "o_mqVYBZFUG", "KIsnb7cTFc", "KIsnb7cTFc", "Yvsk7vvSCsb", "Yvsk7vvSCsb", "VqGOaHv9wJp", "nips_2022_xvZtgp5wyYT", "VqGOaHv9wJp", "VqGOaHv9wJp", "VqGOaHv9wJp", "nip...
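The LE-PDE record above evolves a small global latent instead of the full discretized state. The toy rollout below conveys only the dimensional argument: the encoder, decoder, and latent stepper are random linear maps standing in for learned networks, and all sizes are hypothetical.

```python
import numpy as np

# A toy sketch of the LE-PDE idea: encode once, evolve entirely in a small
# latent space, decode only when outputs are needed.  Per-step cost drops
# from O(n_cells^2) (input-space update) to O(z_dim^2) (latent update).
rng = np.random.default_rng(3)
n_cells, z_dim, steps = 4096, 32, 100

enc = rng.normal(size=(z_dim, n_cells)) / np.sqrt(n_cells)   # g: u -> z
dec = rng.normal(size=(n_cells, z_dim)) / np.sqrt(z_dim)     # h: z -> u
step = np.eye(z_dim) * 0.99                                  # latent evolution

def rollout(u0, n_steps):
    z = enc @ u0                      # encode once
    for _ in range(n_steps):
        z = step @ z                  # evolve entirely in latent space
    return dec @ z                    # decode only at the end

u0 = rng.normal(size=n_cells)
u_T = rollout(u0, steps)
```

With `z_dim = 32` against `n_cells = 4096`, the state updated per step is 128x smaller, which is exactly the kind of reduction the abstract reports.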
nips_2022_dcmp81De77k
Localized Curvature-based Combinatorial Subgraph Sampling for Large-scale Graphs
This paper introduces a subgraph sampling method based on curvature to train graph neural networks on large-scale graphs via mini-batch training. Owing to the difficulty of sampling globally optimal subgraphs from large graphs, we sample the subgraphs to minimize a distributional metric with combinatorial sampling. In particular, we define a combinatorial metric that distributionally measures the similarity between an original graph and all possible node and edge combinations of the subgraphs. Further, we prove that subgraphs sampled using the probability model proportional to the discrete Ricci curvature (i.e., Ollivier-Ricci curvature) of the edges can minimize the proposed metric. Moreover, as accurate calculation of the curvature on a large graph is challenging, we propose to use a localized curvature considering only 3-cycles on the graph, showing that this is a sufficient approximation of the curvature on sparse graphs. In addition, we show that the probability models of conventional sampling methods are related to coarsely approximated curvatures with no cycles, implying that the curvature is closely related to subgraph sampling. The experimental results confirm the feasibility of integrating the proposed curvature-based sampling method into existing graph neural networks to improve performance.
Reject
The majority of reviewers consider that this paper should be rejected. Their concerns include the clarity of the presentation, the lack of comparison to previous work, and a number of individual points which were not addressed in the rebuttal period.
train
[ "_uqh6Dev8VS", "wlXw2BZy1Z", "W4Sv97WtfEs", "opDziE1qWu5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The paper proposes a curvature-based graph subsampling method that aims at sampling structurally representative subgraphs via Olliver's Ricci curvature. *Strength*\nThe paper addresses an important topic with geometric tools that are not very well explored in this context. \n\n*Weaknesses*\n- The writing should b...
[ 3, 4, 5, 3 ]
[ 4, 4, 3, 4 ]
[ "nips_2022_dcmp81De77k", "nips_2022_dcmp81De77k", "nips_2022_dcmp81De77k", "nips_2022_dcmp81De77k" ]
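The record above samples edges with probability proportional to a localized, 3-cycle-based curvature. As a hypothetical stand-in for the localized Ollivier-Ricci curvature, the sketch below uses a simple triangle-count proxy (edges supported by more triangles get higher weight) and samples edges proportionally; the positivity shift and the toy graph are assumptions.

```python
import numpy as np

# Hypothetical sketch of curvature-proportional edge sampling.  The localized
# curvature of the paper considers only 3-cycles; here we use the number of
# triangles through each edge as a crude positive proxy for it.
rng = np.random.default_rng(4)

def triangle_counts(adj):
    """Number of 3-cycles through each edge of an undirected graph."""
    paths2 = adj @ adj                       # 2-step walk counts
    return paths2 * adj                      # keep entries that close a cycle

def sample_edges(adj, n_edges):
    """Sample distinct edges with probability proportional to the proxy."""
    i, j = np.triu_indices_from(adj, k=1)
    mask = adj[i, j] > 0
    i, j = i[mask], j[mask]
    w = 1.0 + triangle_counts(adj)[i, j]     # shift so weights are positive
    p = w / w.sum()
    idx = rng.choice(len(i), size=n_edges, replace=False, p=p)
    return list(zip(i[idx], j[idx]))

# Triangle 0-1-2 plus a pendant edge 2-3: triangle edges should dominate.
adj = np.zeros((4, 4))
for a, b in [(0, 1), (1, 2), (0, 2), (2, 3)]:
    adj[a, b] = adj[b, a] = 1
edges = sample_edges(adj, n_edges=2)
```

Counting triangles only needs one sparse matrix product per batch, which is why restricting the curvature to 3-cycles makes it tractable on large graphs.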
nips_2022_JY6fLgR8Yq
Graph Self-supervised Learning with Accurate Discrepancy Learning
Self-supervised learning of graph neural networks (GNNs) aims to learn an accurate representation of the graphs in an unsupervised manner, to obtain transferable representations of them for diverse downstream tasks. Predictive learning and contrastive learning are the two most prevalent approaches for graph self-supervised learning. However, they have their own drawbacks. While the predictive learning methods can learn the contextual relationships between neighboring nodes and edges, they cannot learn global graph-level similarities. While contrastive learning can learn global graph-level similarities, its objective of maximizing the similarity between two differently perturbed graphs may result in representations that cannot discriminate between two similar graphs with different properties. To tackle such limitations, we propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA). Specifically, we create multiple perturbations of the given graph with varying degrees of similarity, and train the model to predict whether each graph is the original graph or a perturbed one. Moreover, we further aim to accurately capture the amount of discrepancy for each perturbed graph using the graph edit distance. We validate our D-SLA on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which ours largely outperforms relevant baselines.
Accept
This paper proposes a novel self-supervised learning strategy by considering the quantitative discrepancy of two perturbed graphs, which is measured by graph edit distance. The major concerns come from the motivation of the proposed approach. This has been well addressed in the authors’ rebuttal, with additional new experiments. The authors have done a great job in addressing this main concern and other questions raised by reviewers, such as ablation studies on major hyperparameters. The contribution of incorporating a graph-level quantitative metric as an additional self-supervision signal is clear. Although there are divided ratings in the end, I still recommend acceptance of this paper.
train
[ "bRvq5uXKUdC", "cKWIfYnr41N", "mdqOPwdQbjJ", "-7wlig-d9EH", "pOpSgOPGcpc", "RU-M4cpu85T", "XAEpr-WZmTs", "OfRPwcAJko_", "n6wK--vGMy", "LI4XPn5dt2m", "pvIgLBmrWb", "lGXLuaqFND7", "Ek4znY_Lnrf", "q498xxH2nua", "VuVio-U37T", "swKw59daFPz", "hmPQnk8ugzp", "bEIpzfZ-84h", "iXDH0ijbxeX"...
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "officia...
[ " Q1. This work is not well motivated, since there exist works [1, 2, 3] that perform SSL on graphs without perturbing graphs. \n\nA1. This is a critical misunderstanding of our motivations. Please note that one of our main motivations is to **learn the exact discrepancies between different graphs**, and we use gra...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "cKWIfYnr41N", "Ek4znY_Lnrf", "0KdXGuPIKVP", "c83OwEyH1bn", "qpdmGV6iegN", "ULVLpe7cYqR", "nips_2022_JY6fLgR8Yq", "ULVLpe7cYqR", "ULVLpe7cYqR", "ULVLpe7cYqR", "qpdmGV6iegN", "qpdmGV6iegN", "c83OwEyH1bn", "c83OwEyH1bn", "0KdXGuPIKVP", "0KdXGuPIKVP", "0KdXGuPIKVP", "0KdXGuPIKVP", "...
nips_2022_EQgPNPwREa
Tikhonov Regularization is Optimal Transport Robust under Martingale Constraints
Distributionally robust optimization (DRO) has been shown to offer a principled way to regularize learning models. In this paper, we find that Tikhonov regularization is distributionally robust in an optimal transport sense (i.e. if an adversary chooses distributions in a suitable optimal transport neighborhood of the empirical measure), provided that suitable martingale constraints are also imposed. Further, we introduce a relaxation of the martingale constraints which not only provide a unified viewpoint to a class of existing robust methods but also lead to new regularization tools. To realize these novel tools, provably efficient computational algorithms are proposed. As a byproduct, the strong duality theorem proved in this paper can be potentially applied to other problems of independent interest.
Accept
This work focuses on robust stochastic optimization (under a Wasserstein constraint), and shows the efficiency of Tikhonov regularization for this problem. There has been a lively and constructive discussion between authors and reviewers, and ultimately all agree that this work should be accepted, and so do I.
train
[ "wd3Gj6UGLhT", "O_Sqlh669hC", "R7fZKTh07Tj", "_IMNYsEX7n", "JwvZ4Jbbmty", "vzagP25VMLy", "EQzST1Pw6eb", "wsbWNGZYin", "X-a29-DTyc4T", "3bbPknWuOWaO", "KkC59h5nRif", "thxpobdwEgf", "KTvkpdojJ0N", "7D9AJBIMTJI", "ICrAzen_8AmG", "VisaTnf5oj", "G6rjjqEpOA3", "ZU-ioDR86v5", "0OE_OLvTH...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "...
[ " Dear Reviewer Wc7X, \n\nThanks for keeping an open mind and for agreeing to change the score.\n \nFollowing your suggestions regarding the experiments, we have done a new set of experiments that reveals an intriguing qualitative difference in the structure of the adversarial optimal coupling. Please see Figure 2(...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "_IMNYsEX7n", "VisaTnf5oj", "_IMNYsEX7n", "thxpobdwEgf", "KkC59h5nRif", "wsbWNGZYin", "X-a29-DTyc4T", "7D9AJBIMTJI", "KTvkpdojJ0N", "ZU-ioDR86v5", "ICrAzen_8AmG", "nips_2022_EQgPNPwREa", "ZU-ioDR86v5", "ZU-ioDR86v5", "G6rjjqEpOA3", "0OE_OLvTHwa", "nips_2022_EQgPNPwREa", "nips_2022_...
nips_2022_4maAiUt0A4
Boosting Out-of-distribution Detection with Typical Features
Out-of-distribution (OOD) detection is a critical task for ensuring the reliability and safety of deep neural networks in real-world scenarios. Different from most previous OOD detection methods that focus on designing OOD scores or introducing diverse outlier examples to retrain the model, we delve into the factors that obstruct OOD detection from the perspective of typicality and regard the high-probability region of the deep model's features as the feature's typical set. We propose to rectify the feature into its typical set and calculate the OOD score with the typical features to achieve reliable uncertainty estimation. The feature rectification can be conducted as a plug-and-play module with various OOD scores. We evaluate the superiority of our method on both the commonly used benchmark (CIFAR) and the more challenging high-resolution benchmark with a large label space (ImageNet). Notably, our approach outperforms state-of-the-art methods by up to 5.11% in the average FPR95 on the ImageNet benchmark.
Accept
This paper received unanimous recommendations of acceptance. Concerns were expressed regarding the similarity between the proposed method and ReAct, but the concerns were addressed by the authors. The AC agrees with the reviewer regarding the contribution of this paper and recommends acceptance.
train
[ "I2EShbkd1zq", "QfXwhWynOJ", "UHcqo1pgi6", "a__knB5nRhx", "8WbO_cs3487", "U7aO4LU03uq", "lVluesMW8_", "X1viltWeRcF", "tJuTeBTVJMj", "55jL2nHVpRB", "gekuhA1C0s", "V0t8Vts3Lin", "I63AIRc1IJE", "KNxNbCXW8Z9", "b1mqeKlgKQ6", "dolKJVYH_g", "SaLBXfH9Ko_", "duB0j2MLET", "xoWdLbKcVo", ...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " Thanks again for your appreciation of our paper and the valuable comments. Best regards.", " Thanks to the authors for their thorough answers to my comments and to all other reviewers. I think the responses and the modifications to the manuscript cover my questions appropriately and I found some of the answers ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "QfXwhWynOJ", "duB0j2MLET", "a__knB5nRhx", "tJuTeBTVJMj", "U7aO4LU03uq", "55jL2nHVpRB", "X1viltWeRcF", "SaLBXfH9Ko_", "IntM5JJIdZ9", "gekuhA1C0s", "xoWdLbKcVo", "nips_2022_4maAiUt0A4", "IntM5JJIdZ9", "IntM5JJIdZ9", "IntM5JJIdZ9", "IntM5JJIdZ9", "2l43FfUKsw", "lPvDOez8Vt5", "QmWTM...
nips_2022_W4ZlZZwsQmt
Symplectic Spectrum Gaussian Processes: Learning Hamiltonians from Noisy and Sparse Data
Hamiltonian mechanics is a well-established theory for modeling the time evolution of systems with conserved quantities (called Hamiltonian), such as the total energy of the system. Recent works have parameterized the Hamiltonian by machine learning models (e.g., neural networks), allowing Hamiltonian dynamics to be obtained from state trajectories without explicit mathematical modeling. However, the performance of existing models is limited as we can observe only noisy and sparse trajectories in practice. This paper proposes a probabilistic model that can learn the dynamics of conservative or dissipative systems from noisy and sparse data. We introduce a Gaussian process that incorporates the symplectic geometric structure of Hamiltonian systems, which is used as a prior distribution for estimating Hamiltonian systems with additive dissipation. We then present its spectral representation, Symplectic Spectrum Gaussian Processes (SSGPs), for which we newly derive random Fourier features with symplectic structures. This allows us to construct an efficient variational inference algorithm for training the models while simulating the dynamics via ordinary differential equation solvers. Experiments on several physical systems show that SSGP offers excellent performance in predicting dynamics that follow the energy conservation or dissipation law from noisy and sparse data.
Accept
Learning from continuous-time physical systems when input data is noisy & sparse, and without access to time derivatives, is a hard problem. The authors propose a novel algorithm using Gaussian Processes, guided by physical knowledge. Reviewers agreed that the work was original. One reviewer raised concerns about the readability of the paper. The authors' responses will likely address most of those concerns. Other reviewers also suggested a number of improvements, which the authors took on board and will easily implement. Despite relatively simple experiment scenarios, this new algorithm demonstrated some advantages in the low data regime, where it improves on previous known algorithms.
train
[ "sn2D8i2xwd0", "dRac-XFXM_V", "8TQyc_K1Gbi", "8oRhqkZEoA2", "lFDT0GHnqB", "S6zy_NmMrp", "fJ2xJRzCGU", "VefeF463zlI", "6aXYvNflSH8", "YWoqnYSMGYCZ", "A0lOAi7fO9U", "IX1wMp1mZuV", "J52f-JfDEFk", "Ss5EeGWdAP8", "2B_lqqSf5oH", "PMixBamr35", "lNRmTzSObe7", "EZEIt_lCCG5", "Arzzd5TqHWK"...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " I am glad that your concerns have been addressed. Your comments will help us to revise our manuscript even better.", " You are correct. We will clarify it as you commented.", " I appreciate your reply. I am glad that your concerns have been addressed. Your comments will help us to revise our manuscript even b...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4, 4 ]
[ "8oRhqkZEoA2", "lFDT0GHnqB", "S6zy_NmMrp", "fJ2xJRzCGU", "lNRmTzSObe7", "PMixBamr35", "6aXYvNflSH8", "YWoqnYSMGYCZ", "Ss5EeGWdAP8", "A0lOAi7fO9U", "3hyDxvzW2bv", "3hyDxvzW2bv", "GPu7yR7LP7W", "GPu7yR7LP7W", "Arzzd5TqHWK", "EZEIt_lCCG5", "EZEIt_lCCG5", "nips_2022_W4ZlZZwsQmt", "ni...
nips_2022_L9YayWPcHA_
Plan To Predict: Learning an Uncertainty-Foreseeing Model For Model-Based Reinforcement Learning
In Model-based Reinforcement Learning (MBRL), model learning is critical since an inaccurate model can bias policy learning via generating misleading samples. However, learning an accurate model can be difficult since the policy is continually updated and the induced distribution over visited states used for model learning shifts accordingly. Prior methods alleviate this issue by quantifying the uncertainty of model-generated samples. However, these methods only quantify the uncertainty passively after the samples are generated, rather than foreseeing the uncertainty before model trajectories fall into those highly uncertain regions. The resulting low-quality samples can induce unstable learning targets and hinder the optimization of the policy. Moreover, while being learned to minimize one-step prediction errors, the model is generally used to predict for multiple steps, leading to a mismatch between the objectives of model learning and model usage. To this end, we propose Plan To Predict (P2P), an MBRL framework that treats the model rollout process as a sequential decision making problem by reversely considering the model as a decision maker and the current policy as the dynamics. In this way, the model can quickly adapt to the current policy and foresee the multi-step future uncertainty when generating trajectories. Theoretically, we show that the performance of P2P can be guaranteed by approximately optimizing a lower bound of the true environment return. Empirical results demonstrate that P2P achieves state-of-the-art performance on several challenging benchmark tasks.
Accept
All the reviewers agree that this is a good paper. The idea is original and the paper has good empirical results. There were some confusions, which were resolved during the discussions and in the revised paper. I recommend this paper to be accepted, possibly as a spotlight presentation. I list a few concerns below, so that the authors can improve their paper. Some of them are by the reviewers, and some of them are by myself. - Be clearer about how $R^m$ is estimated. This is discussed in Appendix B.3 of the revised version, but given its importance to the algorithm, the authors may want to consider discussing it in the main body. - Reviewer q2Vv mentioned their concerns about preventing the agent from going to the uncertain regions, which may prevent exploration. The authors answered that "the exploration-exploitation tradeoff in RL mainly works on real environments instead of the approximate models". This is not entirely accurate. Methods based on optimism in the face of uncertainty, such as UCRL, actually try to exploit the uncertainty of the model. If the model's promise of a large return turns out to be false, due to its large uncertainty, we have gathered useful information and decreased our uncertainty. - I feel that there is a gap between the theoretical results and the algorithm. It does not seem that the optimizer of the model MDP (Definition 1), which is optimized on L5 of Algorithm 1, is the same as the optimizer of the upper bound in Theorem 1, used to justify the algorithm. For example, the $e^m_t$ term is based on the maximum over actions of the TV error between the model and the true environment, weighted according to the state distribution induced by the model. On the other hand, $R^m$ in the model MDP (after taking the expectation over $s_{t+1}$ coming from distribution $P^m$) seems to be the chi-squared divergence, which is also weighted by the policy $\pi$. They are not the same.
It is OK if they are slightly different for practical purposes, as long as one can show their relation and be clear about it. - The inequality at the beginning of Section 3.2 (Theoretical Results) requires $|J^\hat{P}(\pi) - J(\pi)|$ to be smaller than $C$ for both the new policy $\pi$ and the old policy $\pi_D$. Disregarding the previous issue (or assuming that it can be resolved), solving the optimization problem defined on L5 of Algorithm 1 only guarantees $|J^\hat{P}(\pi) - J(\pi)|$ to be small for the current policy $\pi_D$, and not the optimized one. As such, the inequality is not satisfied, even if the algorithm works well. Am I missing something? Please clarify it in the revised paper. - It is claimed on L154 that monotonic policy improvement can be achieved by solving the update rule (2). I don't think it is correct. We need the value of the maximizer to be larger than $J(\pi_D)$, which may not always be the case.
train
[ "YNxOIEXg4gf", "c44lGI6mfnN", "4WM3j6k1ARb", "7kOH2dODMQ5", "C35nZlnc9KM", "kpUkUXMeJA", "BI3hvTzIkMi", "eHjIv4ew63T", "uhJof5nBp4k", "hOgrJ_wmFYw", "dfqfZr8qvHa", "OPWc0Cz20ux", "-jyFoE-CbgV", "hMFxbZSxiHt", "T-l9Juofeb", "Rtzaz04-dOz", "A_YB_J1pfmD", "QE7DLdPRSaz", "NlaBpnCwz5g...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their detailed responses, which corroborate my positive assessment of this paper.", " We thank the reviewer for the valuable suggestions and for updating the score. We will further expand the description of Figure 1 in our future revision to improve the readability.", " ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "-jyFoE-CbgV", "4WM3j6k1ARb", "dfqfZr8qvHa", "C35nZlnc9KM", "kpUkUXMeJA", "BI3hvTzIkMi", "T-l9Juofeb", "A_YB_J1pfmD", "QE7DLdPRSaz", "NlaBpnCwz5g", "OPWc0Cz20ux", "QE7DLdPRSaz", "hMFxbZSxiHt", "A_YB_J1pfmD", "Rtzaz04-dOz", "nips_2022_L9YayWPcHA_", "nips_2022_L9YayWPcHA_", "nips_202...
nips_2022_zTQdHSQUQWc
FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting
Recent studies have shown that deep learning models such as RNNs and Transformers have brought significant performance gains for long-term forecasting of time series because they effectively utilize historical information. We found, however, that there is still great room for improvement in how to preserve historical information in neural networks while avoiding overfitting to noise present in the history. Addressing this allows better utilization of the capabilities of deep learning models. To this end, we design a \textbf{F}requency \textbf{i}mproved \textbf{L}egendre \textbf{M}emory model, or {\bf FiLM}: it applies Legendre polynomial projections to approximate historical information, uses Fourier projection to remove noise, and adds a low-rank approximation to speed up computation. Our empirical studies show that the proposed FiLM significantly improves the accuracy of state-of-the-art models in multivariate and univariate long-term forecasting by (\textbf{19.2\%}, \textbf{22.6\%}), respectively. We also demonstrate that the representation module developed in this work can be used as a general plugin to improve the long-term prediction performance of other deep learning modules. Code is available at https://github.com/tianzhou2011/FiLM/.
Accept
The paper provides a time series modeling technique combining the use of Legendre polynomials for projections and frequency-based low-rank approximation/selection. The reviewers found the paper to be interesting, and the results convincing and possibly usable in other sequence modeling tasks. Some questions were raised by nNQa about the baselines/comparisons, which I felt were addressed appropriately by the authors. Other questions that were raised about the details of the experiments, including the datasets, the ablations performed, and comparisons to alternatives (such as lagged inputs in LSTMs, and comparisons to n-Hits) seem to have been well addressed by the authors.
train
[ "iYHCxZE4GUY", "2PBHualu9cZ", "NFm5GM8F3U", "M6wx3aVzwRJ", "ss23jqooBgL", "96kGH0vwoK7", "IQ9kWTxy7qo", "7pHU1Rq4Pt3", "CiHej5tkjtL", "bdXdn2TCeL", "ZSLt0-1jUCs" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer Pwiu\n\nWe want to thank your valuable comments sincerely. Indeed as you point out, the low-rank approximation is not a stable improvement design for all datasets. On the contrary, it will hurt our performance in the heaviest compression version. But, it might be used as a building block for a futur...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "2PBHualu9cZ", "7pHU1Rq4Pt3", "ss23jqooBgL", "7pHU1Rq4Pt3", "96kGH0vwoK7", "ZSLt0-1jUCs", "bdXdn2TCeL", "CiHej5tkjtL", "nips_2022_zTQdHSQUQWc", "nips_2022_zTQdHSQUQWc", "nips_2022_zTQdHSQUQWc" ]
nips_2022_3MZnNARib5
SAPipe: Staleness-Aware Pipeline for Data Parallel DNN Training
Data parallelism across multiple machines is widely adopted for accelerating distributed deep learning, but it is hard to achieve linear speedup due to the heavy communication. In this paper, we propose SAPipe, a performant system that pushes the training speed of data parallelism to its fullest extent. By introducing partial staleness, communication overlaps computation with minimal staleness in SAPipe. To mitigate additional problems incurred by staleness, SAPipe adopts staleness compensation techniques including weight prediction and delay compensation with provably lower error bounds. Additionally, SAPipe presents an algorithm-system co-design with runtime optimization to minimize system overhead for the staleness training pipeline and staleness compensation. We have implemented SAPipe in the BytePS framework, compatible with both TensorFlow and PyTorch. Our experiments show that SAPipe achieves up to 157% speedups over BytePS (non-stale), and outperforms PipeSGD in accuracy by up to 13.7%.
Accept
This paper proposes a new algorithm to speed up data-parallel distributed training, focused on mitigating staleness-induced issues that arise when limiting communication between nodes. All reviewers and myself agree this is a worthwhile contribution, which is backed by both convincing empirical and theoretical results. I consider that the potential novelty concerns that were raised in initial reviews have been addressed by the authors. The main remaining concerns are related to the limitations of the proposed method, that comes with some trade-offs and may not apply to all situations. I believe the authors have adequately answered these concerns by being upfront about these limitations during the discussion period, and I encourage them to make sure this is also clear in the final version of the paper. In spite of these limitations, I believe the novelty and significance of this work meet the bar for acceptance at NeurIPS, since speeding up distributed computations is a very relevant and challenging problem in modern deep learning.
train
[ "wp9hUUHRggo", "W5X2HMWyl8o", "MDyh08MKiqD", "5coHNxf2M98", "Xw8XA41UHYk", "9HLYAPbik5C", "aVTCU2HyZAV", "iDXkG3LgeAv", "G98JD3T5ApH", "Pusx9gIXlTG", "z_UFq0MAZO8", "eWfRcdWc7QG", "hEajH28cCFX" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response and suggestion. \n\n1. why does SAPipe perform (marginally) better than fully synchronous for some tasks (VGG-16 and ResNet-50)? \n\n **A**: We do observe that in a few cases, SAPipe performs slightly better than fully synchronous SGD. For example, in Table 2, SAPipe-WP-OPT1 has a littl...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "W5X2HMWyl8o", "MDyh08MKiqD", "Xw8XA41UHYk", "9HLYAPbik5C", "iDXkG3LgeAv", "G98JD3T5ApH", "hEajH28cCFX", "hEajH28cCFX", "eWfRcdWc7QG", "z_UFq0MAZO8", "nips_2022_3MZnNARib5", "nips_2022_3MZnNARib5", "nips_2022_3MZnNARib5" ]
nips_2022_6hzH8pohyPY
Batch-Size Independent Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms or Independent Arms
In this paper, we study the combinatorial semi-bandits (CMAB) and focus on reducing the dependency on the batch-size $K$ in the regret bound, where $K$ is the total number of arms that can be pulled or triggered in each round. First, for the setting of CMAB with probabilistically triggered arms (CMAB-T), we discover a novel (directional) triggering probability and variance modulated (TPVM) condition that can replace the previously-used smoothness condition for various applications, such as cascading bandits, online network exploration and online influence maximization. Under this new condition, we propose a BCUCB-T algorithm with variance-aware confidence intervals and conduct a regret analysis which reduces the $O(K)$ factor to $O(\log K)$ or $O(\log^2 K)$ in the regret bound, significantly improving the regret bounds for the above applications. Second, for the setting of non-triggering CMAB with independent arms, we propose a SESCB algorithm which leverages the non-triggering version of the TPVM condition and completely removes the dependency on $K$ in the leading regret. As a valuable by-product, the regret analysis used in this paper can improve several existing results by a factor of $O(\log K)$. Finally, experimental evaluations show our superior performance compared with benchmark algorithms in different applications.
Accept
Thank the authors for their submission. The paper studies combinatorial multi-armed bandits with probabilistically triggered arms. This is an MAB setting in which, at each round, the learner chooses a subset of the arms and obtains a reward that is some function of the expected rewards of the chosen arms. In addition, the learner only observes feedback on a random subset of her chosen arms (triggered arms). The paper relaxes a smoothness assumption made in a previous work, and further improves the dependence on K in the regret bound, where K is the batch size (the maximum number of triggered arms). The authors provide computationally efficient algorithms that are based on a Bernstein concentration inequality, facilitating the improved bounds. The paper is well-written and organized, and the theoretical results are sound.
test
[ "_VghkKDIDN", "7Gjv3zvTLS", "7qo0wvfkwrF", "OC06JGccak", "tIJnQp3RQgBS", "9zSpvcBfSFd", "ET6xAcroMFx", "3riG_pFDWbI", "vwuK5VKM1ma", "guea8oQS_V5", "JQioC-fcJPp" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nWe wonder if our response has addressed your question about the $(\\alpha,\\beta)$-approximation regret and the experiments. We are happy to have a further discussion if you have more questions.", " Thank you for the response. This will be a good addition to the final version of the paper. ", ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "vwuK5VKM1ma", "ET6xAcroMFx", "OC06JGccak", "JQioC-fcJPp", "vwuK5VKM1ma", "JQioC-fcJPp", "guea8oQS_V5", "vwuK5VKM1ma", "nips_2022_6hzH8pohyPY", "nips_2022_6hzH8pohyPY", "nips_2022_6hzH8pohyPY" ]
nips_2022_H-6iczs__Ro
A Unified Diversity Measure for Multiagent Reinforcement Learning
Promoting behavioural diversity is of critical importance in multi-agent reinforcement learning, since it helps the agent population maintain robust performance when encountering unfamiliar opponents at test time, or when the game is highly non-transitive in the strategy space (e.g., Rock-Paper-Scissors). While a myriad of diversity metrics have been proposed, there are no widely accepted or unified definitions in the literature, making the consequent diversity-aware learning algorithms difficult to evaluate and the insights elusive. In this work, we propose a novel metric called the Unified Diversity Measure (UDM) that offers a unified view of existing diversity metrics. Based on UDM, we design the UDM-Fictitious Play (UDM-FP) and UDM-Policy Space Response Oracle (UDM-PSRO) algorithms as efficient solvers for normal-form games and open-ended games. In theory, we prove that UDM-based methods can enlarge the gamescape by increasing the response capacity of the strategy pool, and have a convergence guarantee to the two-player Nash equilibrium. We validate our algorithms on games that show strong non-transitivity, and empirical results show that our algorithms achieve better performance than strong PSRO baselines in terms of exploitability and population effectivity.
Accept
This paper provides a unifying framework for promoting diverse behaviors in multi-agent RL. The framework---the unified diversity measure---is general enough to capture several other recently proposed measures as special cases (associated with specific kernel functions). The paper then extends two MARL algorithms (PSRO and fictitious play) to make use of UDM for promoting diverse behaviors in MARL, shows that they converge asymptotically to relevant equilibria, and provides numerical examples. Reviewers were generally positive on the paper, finding it well written and its idea for promoting diversity in MARL interesting and intuitive.
train
[ "RjPmnhNkGNy", "qSaoXsc-_FWK", "7HOtGmgmQJC", "an9W-MnErtO", "x5AkIfL8-hk", "oCx8P4dSmrq", "p40Jbc8MVnf", "1VoJI3gaWA", "5BsVUTlAulr", "pgECdCfBuT" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response and most of my concerns are addressed. I would raise my evaluation.", " Thanks for the response and the additional experiments! ", " Thank you for the answers.", " **Q6: \"If a game has NE, why do we need to explore the diversity, especially when we can get the whole payoff matrix.\...
[ -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "an9W-MnErtO", "p40Jbc8MVnf", "oCx8P4dSmrq", "x5AkIfL8-hk", "5BsVUTlAulr", "1VoJI3gaWA", "pgECdCfBuT", "nips_2022_H-6iczs__Ro", "nips_2022_H-6iczs__Ro", "nips_2022_H-6iczs__Ro" ]
nips_2022_wiBEFdAvl8L
GLIPv2: Unifying Localization and Vision-Language Understanding
We present GLIPv2, a grounded VL understanding model that serves both localization tasks (e.g., object detection, instance segmentation) and Vision-Language (VL) understanding tasks (e.g., VQA, image captioning). GLIPv2 elegantly unifies localization pre-training and Vision-Language Pre-training (VLP) with three pre-training tasks: phrase grounding as a VL reformulation of the detection task, region-word contrastive learning as a novel region-word level contrastive learning task, and masked language modeling. This unification not only simplifies the previous multi-stage VLP procedure but also achieves mutual benefits between localization and understanding tasks. Experimental results show that a single GLIPv2 model (all model weights are shared) achieves near-SoTA performance on various localization and understanding tasks. The model also shows (1) strong zero-shot and few-shot adaptation performance on open-vocabulary object detection tasks and (2) superior grounding capability on VL understanding tasks.
Accept
All three reviewers provided positive reviews and scores for this paper. They were happy to see the strong empirical evaluations and improvements over GLIP, were impressed by the zero-shot results, and found the new combination of pre-training objectives interesting. A few questions and concerns brought up by reviewers had to do with differentiation from the GLIP paper and model. These concerns, including novelty of the loss term, the tasks accomplished, and the need for detection boxes at training time, were well addressed by the authors. The reviewers also acknowledged that their questions were answered. Given these positive reviews and discussions, I recommend acceptance. Note to authors: Please address the comments raised by the ethics reviewer in your final manuscript. Thank you.
train
[ "hzoykbU2tqD", "4EKJULcZ7PM", "PSr22ue1bHo", "0g9F3DQbHPg", "7tSku1rn58a", "DeNN53stpdV", "6VEWeQvTtp", "NmGPYCze6iB", "Ll2znDYLTLm", "5ElAx_bDHgL", "cTZCcEuHSIE" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank Reviewer WQgW for the reply! \nWe will include the ablation of text encoder initialization, e.g., text-only pretraining model vs clip/unicl-like multimodal pretrained model, in the final version. As we presented in the rebuttal, their performance are nearly the same. \n\nFor the second point, if the reviewe...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "PSr22ue1bHo", "NmGPYCze6iB", "7tSku1rn58a", "nips_2022_wiBEFdAvl8L", "cTZCcEuHSIE", "6VEWeQvTtp", "5ElAx_bDHgL", "Ll2znDYLTLm", "nips_2022_wiBEFdAvl8L", "nips_2022_wiBEFdAvl8L", "nips_2022_wiBEFdAvl8L" ]
nips_2022_08Yk-n5l2Al
Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g., T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
Accept
This paper proposes Imagen, which uses large transformer language models and diffusion models for text-to-image generation. The major finding is that large language models pretrained only on text data are effective as text encoders. Dynamic thresholding and an Efficient U-Net architecture are proposed to improve the training effectiveness and efficiency of the diffusion model. It received scores of 5, 7, and 8. All the reviewers agree that the image generation results are impressive, and the zero-shot results on COCO are strong. This paper also proposes a new benchmark for comprehensively evaluating text-to-image tasks. On the other hand, Reviewer 9tx5 pointed out that one major concern is that the novelty is quite limited. Overall, the AC thinks that the paper presents impressive results and has great significance; therefore, the AC would like to recommend acceptance of the paper.
train
[ "AJACV6zQL--", "OgTNkT8GesN", "fc0GBiwGFdA", "ChSsMWIrEU", "7O7bFM0O8K1", "mJLdxx1Rtj6", "BapW13bEVBU", "sh_fnD7Jdx6S", "dZzP9BPUpki", "y_Zz3qo1oYm", "qr7gHw8LDvd" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The rebuttal addressed some of my concerns, and I like the results shown in this paper. I have raised the score.\n\nHowever, I am not convinced by the author's response that the proposed idea is novel.\n\nFor example, my question `Could the authors explain, except using the massive data and large models, what is ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "mJLdxx1Rtj6", "ChSsMWIrEU", "nips_2022_08Yk-n5l2Al", "sh_fnD7Jdx6S", "BapW13bEVBU", "qr7gHw8LDvd", "y_Zz3qo1oYm", "dZzP9BPUpki", "nips_2022_08Yk-n5l2Al", "nips_2022_08Yk-n5l2Al", "nips_2022_08Yk-n5l2Al" ]
nips_2022_wKd2XtSRsjl
Mutual Information Divergence: A Unified Metric for Multimodal Generative Models
Text-to-image generation and image captioning have recently emerged as a new experimental paradigm to assess machine intelligence. They predict continuous quantities accompanied by their sampling techniques in the generation, making evaluation complicated and making it intractable to obtain marginal distributions. Based on a recent trend in which multimodal generative evaluations exploit a vision-and-language pre-trained model, we propose the negative Gaussian cross-mutual information using the CLIP features as a unified metric, coined Mutual Information Divergence (MID). To validate it, we extensively compare it with competing metrics using carefully-generated or human-annotated judgments in text-to-image generation and image captioning tasks. The proposed MID significantly outperforms the competing methods by having consistency across benchmarks, sample parsimony, and robustness toward the exploited CLIP model. We look forward to seeing the underrepresented implications of the Gaussian cross-mutual information in multimodal representation learning and future works based on this novel proposition.
Accept
The paper studies the evaluation metric for multimodal generation models. The authors propose a method MID based on estimating mutual information of visual and text embeddings at sample and distribution level. From experiments, the MID correlates with human evaluation on multiple tasks (text-to-image and image captioning). The authors provide theoretical intuition and analysis of MID and relation to other divergence scores. Experiments are solid and convincing. The reliance on CLIP is discussed though other multimodal encoders than CLIP are not evaluated in experiments. Author discussion with reviewers are helpful to better understand the paper. Overall, it is a solid paper with a clearly described, simple, and effective method.
val
[ "koWWaPwRv5e", "xM0lScaUkcb", "-jn2Ex_eCeb", "Q00fAUjt4Cb", "yootlWiiFSp", "yt6S6ezXu0w", "E2ttyYYCFcF", "C9IlKnmEfH", "KskACgyUx2g", "DAxFk28Yff", "xvsw8z5WRyD", "_oJvIlmFgNt", "QrI68_w7YN", "aJ_QBYMN52C", "NkN4kZD9ZdF" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Most of my concerns have been addressed, and I will raise my vote to \"weak accept\".", " The author's response and revisions largely address my concerns. I think CLIP-reliance could still be an issue where a CLIP-like model is assessed, but given the array of potential applications of the method, it is okay to...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 2, 3 ]
[ "yt6S6ezXu0w", "yootlWiiFSp", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl", "NkN4kZD9ZdF", "aJ_QBYMN52C", "QrI68_w7YN", "_oJvIlmFgNt", "xvsw8z5WRyD", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl", "nips_202...
nips_2022_8RKJj1YDBJT
Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
We propose Neural-DynamicReconstruction (NDR), a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera. In NDR, we adopt the neural implicit function for surface representation and rendering such that the captured color and depth can be fully utilized to jointly optimize the surface and deformations. To represent and constrain the non-rigid deformations, we propose a novel neural invertible deforming network such that the cycle consistency between arbitrary two frames is automatically satisfied. Considering that the surface topology of dynamic scene might change over time, we employ a topology-aware strategy to construct the topology-variant correspondence for the fused frames. NDR also further refines the camera poses in a global optimization manner. Experiments on public datasets and our collected dataset demonstrate that NDR outperforms existing monocular dynamic reconstruction methods.
Accept
This paper received consistently positive reviews from all reviewers, and the weaknesses that were raised were addressed coherently by the authors. I recommend this paper be accepted.
train
[ "KhgpaKdsgR", "TRrKICKJp5", "-tzpDb1mkRG", "CjATPCB5gNB", "Jio87ct4NhX", "Q5DSgGmSKZW", "jSO1LeE3ojK", "J-QYDerZ9hK", "4B3aiP76Oyj", "773pF6PMVdj", "XA7eSlEmvBk", "pgHMs_GX8dg", "bkvyjmr7ex7", "YzdM1Cu4YEQ", "o-S3mRtToq", "aec0hGZTsmH", "m9v1Y24WmXi", "ch-IGmlL8n5", "_Lw-xdHJm0b"...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer NfVp,\n\nThank you for your quick reply and review comments!\n\nBest regards, Paper2241 authors", " Thank you for addressing my concerns.\nI modified my rating.\n\nbest", " Thank you for your quick reply and review comments. For BANMo results shown in Fig.5 of the main paper:\n\n- Before submiss...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "TRrKICKJp5", "pgHMs_GX8dg", "CjATPCB5gNB", "jSO1LeE3ojK", "Q5DSgGmSKZW", "773pF6PMVdj", "XA7eSlEmvBk", "ch-IGmlL8n5", "m9v1Y24WmXi", "aec0hGZTsmH", "_Lw-xdHJm0b", "ch-IGmlL8n5", "m9v1Y24WmXi", "aec0hGZTsmH", "nips_2022_8RKJj1YDBJT", "nips_2022_8RKJj1YDBJT", "nips_2022_8RKJj1YDBJT", ...
nips_2022_K2PTuvVTF1L
Variational inference via Wasserstein gradient flows
Along with Markov chain Monte Carlo (MCMC) methods, variational inference (VI) has emerged as a central computational approach to large-scale Bayesian inference. Rather than sampling from the true posterior $\pi$, VI aims at producing a simple but effective approximation $\hat \pi$ to $\pi$ for which summary statistics are easy to compute. However, unlike the well-studied MCMC methodology, algorithmic guarantees for VI are still relatively less well-understood. In this work, we propose principled methods for VI, in which $\hat \pi$ is taken to be a Gaussian or a mixture of Gaussians, which rest upon the theory of gradient flows on the Bures--Wasserstein space of Gaussian measures. Akin to MCMC, it comes with strong theoretical guarantees when $\pi$ is log-concave.
Accept
This paper proposes a novel method for variational inference based on Wasserstein flows. The key contribution is perhaps the rigorous guarantees that are derived from an assumption of log-concavity. While the initial submission overlooked some existing work on VI that derives guarantees from similar log-concavity or smoothness assumptions, the proof strategy that is given uses novel technical methods and is thus of interest in any case. Readers would benefit from a detailed discussion that contextualizes this work relative to previous work, which the authors have committed to providing.
train
[ "xSpQOUOJTAc", "DiyQue009Mb", "0Mco7yh3oAX", "KLEoD4SEK1X", "XkUsLBxJ_na", "HplH0lKp61l", "CXVIwlMkPV", "TnJPSTWDwyK", "ex0_EqVR9Nx", "CbhN-frIqd1" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors address most of my concerns and I raise the score.", " Thank you; I appreciate your commitment to improving your excellent work further.", " Thank you for your kind review. We are glad that you enjoyed reading our submission.\n\n> If I have one point of criticism towards the paper, it would be tha...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "KLEoD4SEK1X", "0Mco7yh3oAX", "CbhN-frIqd1", "XkUsLBxJ_na", "ex0_EqVR9Nx", "TnJPSTWDwyK", "nips_2022_K2PTuvVTF1L", "nips_2022_K2PTuvVTF1L", "nips_2022_K2PTuvVTF1L", "nips_2022_K2PTuvVTF1L" ]
nips_2022_GCNIm4cKoRx
Finite-Time Analysis of Adaptive Temporal Difference Learning with Deep Neural Networks
Temporal difference (TD) learning with function approximation (linear functions or neural networks) has achieved remarkable empirical success, giving impetus to the development of finite-time analysis. As an accelerated version of TD, adaptive TD has been proposed and proved to enjoy finite-time convergence under linear function approximation. Existing numerical results have demonstrated the superiority of adaptive algorithms over vanilla ones. Nevertheless, the performance guarantee of adaptive TD with neural network approximation remains largely unknown. This paper establishes the finite-time analysis for adaptive TD with multi-layer ReLU network approximation whose samples are generated from a Markov decision process. Our established theory shows that if the width of the deep neural network is large enough, adaptive TD using neural network approximation can find the (optimal) value function with high probability under the same iteration complexity as TD in general cases. Furthermore, we show that adaptive TD using neural network approximation, with the same width and searching area, can achieve theoretical acceleration when the stochastic semi-gradients decay fast.
Accept
The reviewers agree that the theoretical results presented in the paper are solid and advance our understanding of the behavior of temporal difference (TD) methods, which are at the core of most reinforcement learning algorithms. The contributions of the paper can be summarized in two main results: - Adaptive TD combined with a ReLU neural network converges when the width of the network is sufficiently large; - Adaptive TD combined with a ReLU neural network converges faster than its non-adaptive counterpart. Both results are important and novel. One consistent complaint among the reviewers was the paper presentation, which was considered slightly sloppy and not very accessible. We strongly encourage the authors to perform a thorough revision of the paper, paying special attention to the definition and consistency of the notation adopted. We also suggest the authors add intuitive explanations wherever possible to make the paper accessible to a wider audience.
val
[ "xpFAo13JjMP", "k6AvAalI9KR", "xZFg2ravERtE", "GWIUvemofC", "wpf-RpFgc3b", "mC1Gpk0Kcbl", "4-wA-BkHQXk", "PNOdrntt5xJ", "gevetM6LZLI", "c75PLnSOj-s" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear AC and reviewers,\n\nThanks for your thoughtful reviews and valuable comments, which have helped us improve the paper significantly. We are encouraged by the endorsements that: 1) The main result of our paper is significant and highly non-trivial (tHmB), which is the first analysis of the convergence of adap...
[ -1, -1, -1, -1, -1, -1, 6, 7, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 1, 4 ]
[ "nips_2022_GCNIm4cKoRx", "xZFg2ravERtE", "c75PLnSOj-s", "gevetM6LZLI", "PNOdrntt5xJ", "4-wA-BkHQXk", "nips_2022_GCNIm4cKoRx", "nips_2022_GCNIm4cKoRx", "nips_2022_GCNIm4cKoRx", "nips_2022_GCNIm4cKoRx" ]
nips_2022_Yul402KcD5d
Multi-Granularity Cross-modal Alignment for Generalized Medical Visual Representation Learning
Learning medical visual representations directly from paired radiology reports has become an emerging topic in representation learning. However, existing medical image-text joint learning methods are limited to instance- or local-level supervision, ignoring disease-level semantic correspondences. In this paper, we present a novel Multi-Granularity Cross-modal Alignment (MGCA) framework for generalized medical visual representation learning by harnessing the naturally exhibited semantic correspondences between medical images and radiology reports at three different levels, i.e., pathological region-level, instance-level, and disease-level. Specifically, we first incorporate the instance-wise alignment module by maximizing the agreement between image-report pairs. Further, for token-wise alignment, we introduce a bidirectional cross-attention strategy to explicitly learn the matching between fine-grained visual tokens and text tokens, followed by contrastive learning to align them. More importantly, to leverage the high-level inter-subject semantic correspondences (e.g., disease), we design a novel cross-modal disease-level alignment paradigm to enforce cross-modal cluster assignment consistency. Extensive experimental results on seven downstream medical image datasets covering image classification, object detection, and semantic segmentation tasks demonstrate the stable and superior performance of our framework.
Accept
A multi-granularity cross-modal alignment framework is proposed, which learns data representations from medical scans paired with the corresponding text reports. The reviewers find the approach novel and the paper well-written with an overall clear structure. Extensive experimental results show the effectiveness of the proposed model, and experimental details are provided. After the discussion with the authors, all reviewers vote towards acceptance of the paper.
train
[ "Vmi1_HERtqZ", "mrdTA_98M_", "qVUocUgDYEK", "xrkjDSUqGCc", "0nFuglSgXcZ", "l_IOABEa1qI", "nEYk53267b3", "FhN9uD6Q4m", "ON7N8BsTmho", "7gg7j4H-KsM", "FN6dNtlrMUZ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer ZGvT,\n\nThanks a lot for your time and valuable feedback! We will include your suggested experimental results in our paper. ", " Thank you for the response. I appreciate the extra ablation study regarding the dense prediction task. It is indeed an interesting finding that further corroborates the...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "mrdTA_98M_", "nEYk53267b3", "ON7N8BsTmho", "ON7N8BsTmho", "ON7N8BsTmho", "FN6dNtlrMUZ", "7gg7j4H-KsM", "7gg7j4H-KsM", "nips_2022_Yul402KcD5d", "nips_2022_Yul402KcD5d", "nips_2022_Yul402KcD5d" ]
nips_2022_-3Pg7QNIF1S
An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning
Semi-supervised few-shot learning consists of training a classifier to adapt to new tasks with limited labeled data and a fixed quantity of unlabeled data. Many sophisticated methods have been developed to address the challenges this problem presents. In this paper, we propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective, and then augment the extremely label-constrained support set in few-shot classification tasks. Our approach can be implemented in just a few lines of code using only off-the-shelf operations, yet it is able to outperform state-of-the-art methods on four benchmark datasets.
Accept
This paper aims to improve semi-supervised few-shot learning by utilizing negative pseudo-labels. The authors report significant improvement over the previous methods in this setting. The reviewers originally had concerns about the significance of the results, but after the discussion period they all supported acceptance more than they supported rejection. Given the simplicity of the method, the size of the improvements, and the unanimous agreement from the reviewers, I support the acceptance of this paper. While the authors improved the paper significantly during the discussion stage, I would urge them to keep working on the presentation and writing for the camera-ready version. There are still writing mistakes throughout the paper, and the meaning of some sentences is not clear.
train
[ "i-52FAUiE-o", "8GejR_tbRcF", "VJn-iSHXyF", "F2PXPP-gfOT", "6tjlFl725k5", "85BShUdciPD", "Z-R_ws7GJW8", "kBgTkJ8pHHf0", "KghI6PaRzGO", "gXrrUoc9G1s", "KovXDvRR4R4", "_i89ciFLXVq", "1bpoBdoXfwh", "jjGSW4yO_z_", "BFq-3JfeWUk" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After careful consideration of the rebuttal, the author's comments regarding my concerns, and the discussions pursued by other reviews, I will be maintaining my current recommendation for acceptance.", " Thank you so much for the valuable feedback and your recommendation to accept our work. We will clarify thes...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "gXrrUoc9G1s", "VJn-iSHXyF", "85BShUdciPD", "6tjlFl725k5", "KovXDvRR4R4", "Z-R_ws7GJW8", "kBgTkJ8pHHf0", "BFq-3JfeWUk", "jjGSW4yO_z_", "1bpoBdoXfwh", "_i89ciFLXVq", "nips_2022_-3Pg7QNIF1S", "nips_2022_-3Pg7QNIF1S", "nips_2022_-3Pg7QNIF1S", "nips_2022_-3Pg7QNIF1S" ]
nips_2022_FhWQzNY2UYR
Geo-SIC: Learning Deformable Geometric Shapes in Deep Image Classifiers
Deformable shapes provide important and complex geometric features of objects presented in images. However, such information is oftentimes missing or underutilized as implicit knowledge in many image analysis tasks. This paper presents Geo-SIC, the first deep learning model to learn deformable shapes in a deformation space for an improved performance of image classification. We introduce a newly designed framework that (i) simultaneously derives features from both image and latent shape spaces with large intra-class variations; and (ii) gains increased model interpretability by allowing direct access to the underlying geometric features of image data. In particular, we develop a boosted classification network, equipped with an unsupervised learning of geometric shape representations characterized by diffeomorphic transformations within each class. In contrast to previous approaches using pre-extracted shapes, our model provides a more fundamental approach by naturally learning the most relevant shape features jointly with an image classifier. We demonstrate the effectiveness of our method on both simulated 2D images and real 3D brain magnetic resonance (MR) images. Experimental results show that our model substantially improves the image classification accuracy with an additional benefit of increased model interpretability. Our code is publicly available at https://github.com/jw4hv/Geo-SIC.
Accept
Although there were a couple of initial questions/concerns about certain aspects of the paper, all reviewers appreciated the approach, the quality of presentation and the empirical results. After reading all responses by the authors, my impression is that all questions have been answered satisfactorily during the rebuttal period. Hence, I do recommend acceptance of this paper.
train
[ "U1wrvfGxlha", "VoGoCqDLJt", "3UfWWNE_x5b", "8NfQ2I1GdN2", "XpfoBEiI_vg", "_AS5C0J-Sj0" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank R3 for all the positive comments and constructive feedback. We will add (i) more details of the geometric learning (atlas building-based) loss function in the supplementary material, (ii) add descriptions of the CNN model parameters, and (iii) clarify that the network error propagates iteratively in the ...
[ -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, 4, 5, 3 ]
[ "_AS5C0J-Sj0", "XpfoBEiI_vg", "8NfQ2I1GdN2", "nips_2022_FhWQzNY2UYR", "nips_2022_FhWQzNY2UYR", "nips_2022_FhWQzNY2UYR" ]
nips_2022_zGvRdBW06F5
On-Device Training Under 256KB Memory
On-device training enables the model to adapt to new data collected from the sensors by fine-tuning a pre-trained model. Users can benefit from customized AI models without having to transfer the data to the cloud, protecting their privacy. However, the training memory consumption is prohibitive for IoT devices that have tiny memory resources. We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to low bit-precision and the lack of normalization; (2) the limited hardware resources (memory and computation) do not allow full backpropagation. To cope with the optimization difficulty, we propose Quantization-Aware Scaling to calibrate the gradient scales and stabilize 8-bit quantized training. To reduce the memory footprint, we propose Sparse Update to skip the gradient computation of less important layers and sub-tensors. The algorithm innovation is implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offloads the runtime auto-differentiation to compile time. Our framework is the first practical solution for on-device transfer learning of visual recognition on tiny IoT devices (e.g., a microcontroller with only 256KB SRAM), using less than 1/1000 of the memory of PyTorch and TensorFlow while matching the accuracy. Our study enables IoT devices not only to perform inference but also to continuously adapt to new data for on-device lifelong learning. A video demo can be found here: https://youtu.be/XaDCO8YtmBw.
Accept
In this work the authors propose a framework for training CV models on tiny IoT devices with very limited memory. The reviewers agreed that the paper is well written and represents a valuable contribution to the area of efficient / on-device ML. Questions raised by reviewers were sufficiently addressed in the response.
test
[ "EVrNEZEm6HG", "-kprvNikHnq", "GvACt6b4VsY", "EAvL32MkiUR", "Iqvc6cecH7", "B8DEmvoaQ_v", "8MOdtHxw1YV", "g_FQeq5bHtH", "F_HROsgu5Ic", "Tbs8ORY1yk7", "N7RIhEOalVK", "W547II4O81w", "Uuc9sZ_FtYa", "3j7oseeqOvI", "P5n4a2nqXk4", "99ibPJaWTBI" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Authors,\nThank you for your response!\nBased on your replies and your promise of opensourcing the code. I am raising my score to 6.\nGood luck!\n\n", " Dear Reviewer Dz6f,\n\nThanks again for your insightful suggestions and comments. We have not heard from you and the rebuttal window is going to close. We...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 5 ]
[ "-kprvNikHnq", "Uuc9sZ_FtYa", "Tbs8ORY1yk7", "N7RIhEOalVK", "W547II4O81w", "3j7oseeqOvI", "nips_2022_zGvRdBW06F5", "nips_2022_zGvRdBW06F5", "Uuc9sZ_FtYa", "99ibPJaWTBI", "P5n4a2nqXk4", "3j7oseeqOvI", "nips_2022_zGvRdBW06F5", "nips_2022_zGvRdBW06F5", "nips_2022_zGvRdBW06F5", "nips_2022_...
nips_2022_StzAAh8RuD
Independence Testing for Bounded Degree Bayesian Networks
We study the following independence testing problem: given access to samples from a distribution $P$ over $\{0,1\}^n$, decide whether $P$ is a product distribution or whether it is $\varepsilon$-far in total variation distance from any product distribution. For arbitrary distributions, this problem requires $\exp(n)$ samples. We show in this work that if $P$ has a sparse structure, then in fact only linearly many samples are required. Specifically, if $P$ is Markov with respect to a Bayesian network whose underlying DAG has in-degree bounded by $d$, then $\tilde{\Theta}(2^{d/2}\cdot n/\varepsilon^2)$ samples are necessary and sufficient for independence testing.
Accept
The manuscript studies the independence testing problem, given samples from a distribution over several binary random variables. While the sample complexity is exponential (in the number of variables) in general, this paper shows that when the distribution is a Bayesian network with small in-degree, the sample complexity is linear. All reviewers asked for clarification of the motivation, and some reviewers asked for comparisons to the literature and/or possible alternatives. The authors addressed this well during the rebuttal phase. I recommend adding this to the camera-ready version of the paper, as well as the other discussions and clarifications raised by all the reviewers.
train
[ "jHzqcRhTHc", "KV7CmgAITG", "UWQxNchdiLT", "143DfUKRo1J", "KhO50N2O3xn", "9_yfcEaLaut", "JTWXV-H20F", "U_qQi_-erNE" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ... we will make sure to incorporate these to the paper.", " I would like to thank the authors for their detailed reply. \n\nI feel my comments have been properly addressed. I would suggest incorporating your reply to certain points into the manuscript, especially on point 1(ii) (why focusing on the total varia...
[ -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "KV7CmgAITG", "143DfUKRo1J", "U_qQi_-erNE", "JTWXV-H20F", "9_yfcEaLaut", "nips_2022_StzAAh8RuD", "nips_2022_StzAAh8RuD", "nips_2022_StzAAh8RuD" ]
nips_2022_yewD_qbYifc
PCRL: Priority Convention Reinforcement Learning for Microscopically Sequencable Multi-agent Problems
Reinforcement learning (RL) has played an important role in tackling the decision problems emerging from agent fields. However, RL still has challenges in tackling multi-agent large-discrete-action-space (LDAS) problems, often resulting from large numbers of agents. At each decision step, a multi-agent LDAS problem is often faced with an unaffordable number of candidate actions. Existing work has mainly tackled these challenges using indirect approaches such as continuation relaxation and sub-sampling, which may lack solution quality guarantees from continuation to discretization. In this work, we propose to embed agreed priority conventions into reinforcement learning (PCRL) to directly tackle microscopically sequenceable multi-agent LDAS problems. Priority conventions include position-based agent priority to break symmetries and prescribed action priority to break ties. In a microscopically sequenceable multi-agent problem, the centralized planner, at each decision step of the whole system, generates an action vector (each component of the vector is for an agent and is generated in a micro-step) by considering the conventions. The action vector is generated sequentially when viewed microscopically; such generation will not miss the optimal action vector and can guide RL's exploitation toward the lexicographically smallest optimal action vector. Proper learning schemes and action-selection schemes have been designed to make the embedding a reality. The effectiveness and superiority of PCRL have been validated by experiments on multi-agent applications, including the multi-agent complete coverage planning application (involving up to $4^{18}>6.8\times 10^{10}$ candidate actions at each decision step) and the cooperative pong game (state-based and pixel-based, respectively), showing PCRL's ability to handle LDAS and its higher optimality-finding ability compared to joint-action RL methods and heuristic algorithms.
Reject
While the ideas in this paper are promising, there are issues with the paper's presentation and experimental results. The paper needs to be (further) updated to clarify the proposed method and discuss additional related work. More extensive experimental results are also needed to show the benefits of the proposed approach.
val
[ "s9l-GsDV5JT", "pZ656sf2bdq", "EXIzaCEW8eM", "ejvZv4lIcqE", "-7v8OzRsCJS", "OedQjZPIxbh", "IZfFewvZaH3", "drCSeutgpmU" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the suggestions and help. We will do as recommended.\n\nFor issue 1: The question is very good. Since we did not express it clearly, we are sorry for the misunderstanding. We will modify the corresponding parts of the manuscript to make the expression more accurate, clearer, and more backed up. \n\nFor...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "pZ656sf2bdq", "-7v8OzRsCJS", "IZfFewvZaH3", "OedQjZPIxbh", "drCSeutgpmU", "nips_2022_yewD_qbYifc", "nips_2022_yewD_qbYifc", "nips_2022_yewD_qbYifc" ]
nips_2022_9GXoMs__ckJ
On the Effect of Pre-training for Transformer in Different Modality on Offline Reinforcement Learning
We empirically investigate how pre-training on data of different modalities, such as language and vision, affects fine-tuning of Transformer-based models on Mujoco offline reinforcement learning tasks. Analysis of the internal representations reveals that the pre-trained Transformers acquire largely different representations before and after pre-training, but acquire less information from the data during fine-tuning than a randomly initialized one. A closer look at the parameter changes of the pre-trained Transformers reveals that their parameters do not change much and that the bad performance of the model pre-trained with image data could partially come from large gradients and gradient clipping. To study what information the Transformer pre-trained with language data utilizes, we fine-tune this model with no context provided, finding that the model learns efficiently even without context information. Subsequent follow-up analysis supports the hypothesis that pre-training with language data is likely to make the Transformer acquire context-like information and utilize it to solve the downstream task.
Accept
The paper unanimously received positive ratings thanks to its strong motivation and interesting results. As the reviews express satisfaction with the authors' feedback, the final draft should incorporate it accordingly, for example, regarding the limitations of this research.
train
[ "kkdyDv2XqhE", "CySwqYtQqw_", "wDZSGV701W", "2-4wzhJhzvb", "kEQVXVQlyA2", "MWM3nIsoBWd", "FcEs-_BBTEQ", "tKQXHngFi_A", "7_nJ9oNMlDKB", "qiTK1XqKwc", "W09RvgbxlOQ", "rU_0tC1Ux2h", "EBYe_O1W4X", "j43DHndu18D", "549CGljMccL", "PWjgJYjylPh" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate your positive evaluation of our response. We changed a sentence in the limitation section a bit so that we emphasize the importance of studying the average result of many more seeds.", " We deeply appreciate the positive feedback on our response. We would gladly open source the code for ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "2-4wzhJhzvb", "wDZSGV701W", "7_nJ9oNMlDKB", "W09RvgbxlOQ", "MWM3nIsoBWd", "FcEs-_BBTEQ", "tKQXHngFi_A", "7_nJ9oNMlDKB", "PWjgJYjylPh", "549CGljMccL", "rU_0tC1Ux2h", "j43DHndu18D", "nips_2022_9GXoMs__ckJ", "nips_2022_9GXoMs__ckJ", "nips_2022_9GXoMs__ckJ", "nips_2022_9GXoMs__ckJ" ]
nips_2022_BK0O0xLntFM
Estimating and Explaining Model Performance When Both Covariates and Labels Shift
Deployed machine learning (ML) models often encounter new user data that differs from their training data. Therefore, estimating how well a given model might perform on the new data is an important step toward reliable ML applications. This is very challenging, however, as the data distribution can change in flexible ways, and we may not have any labels on the new data, which is often the case in monitoring settings. In this paper, we propose a new distribution shift model, Sparse Joint Shift (SJS), which considers the joint shift of both labels and a few features. This unifies and generalizes several existing shift models including label shift and sparse covariate shift, where only marginal feature or label distribution shifts are considered. We describe mathematical conditions under which SJS is identifiable. We further propose SEES, an algorithmic framework to characterize the distribution shift under SJS and to estimate a model’s performance on new data without any labels. We conduct extensive experiments on several real-world datasets with various ML models. Across different datasets and distribution shifts, SEES achieves significant (up to an order of magnitude) shift estimation error improvements over existing approaches.
Accept
The authors study the important problem of distribution shift under a new SJS model. Identifiability results are proved and empirical experiments illustrate the value of the proposed model. During discussion, some concerns on the experiments were addressed. Overall, there was a weak consensus to accept this paper, which I concur with.
train
[ "kvB-2y1jjpB", "82LgxpouEx1", "gHHJn1gdB1-", "QuDCKfDxJh", "Gn4xPAJ4rl8K", "hE8t1eeXRg", "ZYQa9pmpT_K", "7MpvQhIyNbC", "naBd5YC48jC", "EwuaoMnRAuN", "CXRKpmiTQpz", "o6o11tnV0mB", "54nPj1Q9xg", "_aTdShEtStJ" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the additional sensitivity analysis. I have increased my score based on the current paper status and the following reasons.\n\n* Based on Figure 6 of Appendix, the SEED-d is robust if there is small parameter mismatch. I believe that the proposed algorithm can be further improved by increasing the sear...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "gHHJn1gdB1-", "CXRKpmiTQpz", "QuDCKfDxJh", "ZYQa9pmpT_K", "_aTdShEtStJ", "ZYQa9pmpT_K", "_aTdShEtStJ", "54nPj1Q9xg", "o6o11tnV0mB", "CXRKpmiTQpz", "nips_2022_BK0O0xLntFM", "nips_2022_BK0O0xLntFM", "nips_2022_BK0O0xLntFM", "nips_2022_BK0O0xLntFM" ]
nips_2022_GWcdXz0M6a
PopArt: Efficient Sparse Regression and Experimental Design for Optimal Sparse Linear Bandits
In sparse linear bandits, a learning agent sequentially selects an action from a fixed action set and receives reward feedback, and the reward function depends linearly on a few coordinates of the covariates of the actions. This has applications in many real-world sequential decision making problems. In this paper, we devise a simple, novel sparse linear estimation method called $\textrm{PopArt}$ that enjoys a tighter $\ell_1$ recovery guarantee compared to Lasso (Tibshirani, 1996). Our bound naturally motivates an experimental design criterion that is convex and thus computationally efficient to solve. Based on our novel estimator and design criterion, we derive sparse linear bandit algorithms that enjoy improved regret upper bounds upon the state of the art (Hao et al., 2020), especially in terms of the geometry of the given action set. Finally, we prove a matching lower bound for sparse linear bandits in the data-poor regime, which closes the gap between upper and lower bounds in prior work.
Accept
The paper is motivated by the design of low-regret algorithms for high-dimensional sparse linear bandit problems. The challenge is to obtain regret guarantees even in the data-poor regime where the number of samples the learner can gather may be smaller than the dimension. This challenge had been investigated in [12] with a regret scaling as $(sn/C_{min})^{2/3}$ ($s$ is the sparsity of the problem, $n$ the number of samples, and $C_{min}$ is the maximum over all possible arm distributions of the resulting average variance). The authors propose a scheme whose regret scales at most as $(sn H)^{2/3}$ where $H$ is a new (minimax) constant, proven to be smaller than $1/C_{min}$. The paper also presents a matching minimax regret lower bound. To achieve this improved regret upper bound, the authors develop a new parameter estimation procedure, based notably on Catoni’s estimator (this kind of estimator has been recently advocated in RL with linear function approximation, see “Reward-Free RL is No Harder Than Reward-Aware RL in Linear Markov Decision Processes”, Wagenmaker et al., ICML 2022, and the authors could mention this paper and stress the differences in the use of this estimator). The derivation of the lower bound also relies on new techniques (as mentioned by one of the reviewers). Overall, this is a solid contribution, even though compared to [12] the improvement is not that spectacular.
val
[ "zAL3v9R2WKK", "L-BRE2ZL2PD", "qFNMb5fYTXW", "pvzfPSWUj4f", "OTVrWM6s6oY", "dGo1AfQc-ew", "9q_S2Q6hjZ", "pQAhh0KTwKv" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for following up!\nI guess saying 'comparison between PopArt and Lasso' was a bit confusing. It's more like comparing how one can do design of experiments with PopArt vs Lasso.\n\nTo answer your question, it depends on what 'algorithm' you mean. Two possibilities are: (a) an algorithm that computes an e...
[ -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "L-BRE2ZL2PD", "qFNMb5fYTXW", "pQAhh0KTwKv", "9q_S2Q6hjZ", "dGo1AfQc-ew", "nips_2022_GWcdXz0M6a", "nips_2022_GWcdXz0M6a", "nips_2022_GWcdXz0M6a" ]
nips_2022_UDmPRm-P1nL
Distinguishing Learning Rules with Brain Machine Interfaces
Despite extensive theoretical work on biologically plausible learning rules, clear evidence about whether and how such rules are implemented in the brain has been difficult to obtain. We consider biologically plausible supervised- and reinforcement-learning rules and ask whether changes in network activity during learning can be used to determine which learning rule is being used. Supervised learning requires a credit-assignment model estimating the mapping from neural activity to behavior, and, in a biological organism, this model will inevitably be an imperfect approximation of the ideal mapping, leading to a bias in the direction of the weight updates relative to the true gradient. Reinforcement learning, on the other hand, requires no credit-assignment model and tends to make weight updates following the true gradient direction. We derive a metric to distinguish between learning rules by observing changes in the network activity during learning, given that the mapping from brain to behavior is known by the experimenter. Because brain-machine interface (BMI) experiments allow for precise knowledge of this mapping, we model a cursor-control BMI task using recurrent neural networks, showing that learning rules can be distinguished in simulated experiments using only observations that a neuroscience experimenter would plausibly have access to.
Accept
This paper explores the question of experimentally distinguishing between different hypothesized classes of learning rules in the brain (specifically biased supervised learning and unbiased reinforcement learning). It derives a metric to distinguish between such learning rules based on changes in neural activity seen during learning with a brain-computer interface. The authors show that this metric can be used to identify which learning rules are the best account of the observed activity changes. The reviewers agreed that this paper makes an original and important contribution to the field, and the decision to accept was unanimous.
train
[ "kEXE-gptXR", "8Lbyxh7Edz", "9yNyxqRstZ3", "Awvv_1e5zxN", "Mei7dKLAIlh", "vn73Ji5GrDz", "mlB6ZrCMaVo", "QcRhljZmtPA", "09yURtam7T1", "LULcw35yl0", "0pslcBXGOHQ", "jtHQyhnNI9X", "k59hCcLXDy", "BeeVVG38BK" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the clarifications they provided and agreeing to incorporate some of my suggestions. I am glad I could help improve the quality of this work. \n\nThe points about noise is clear to me and given the author's updates during the rebuttal+discussion period, I believe my concerns ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "8Lbyxh7Edz", "Mei7dKLAIlh", "LULcw35yl0", "QcRhljZmtPA", "mlB6ZrCMaVo", "09yURtam7T1", "BeeVVG38BK", "0pslcBXGOHQ", "jtHQyhnNI9X", "k59hCcLXDy", "nips_2022_UDmPRm-P1nL", "nips_2022_UDmPRm-P1nL", "nips_2022_UDmPRm-P1nL", "nips_2022_UDmPRm-P1nL" ]
nips_2022_wYgRIJ-oK6M
BiT: Robustly Binarized Multi-distilled Transformer
Modern pre-trained transformers have rapidly advanced the state-of-the-art in machine learning, but have also grown in parameters and computational complexity, making them increasingly difficult to deploy in resource-constrained environments. Binarization of the weights and activations of the network can significantly alleviate these issues; however, it is technically challenging from an optimization perspective. In this work, we identify a series of improvements that enable binary transformers at a much higher accuracy than what was possible previously. These include a two-set binarization scheme, a novel elastic binary activation function with learned parameters, and a method to quantize a network to its limit by successively distilling higher precision models into lower precision students. These approaches allow, for the first time, fully binarized transformer models that are at a practical level of accuracy, approaching a full-precision BERT baseline on the GLUE language understanding benchmark within as little as 5.9%. Code and models are available at: https://github.com/facebookresearch/bit.
Accept
This paper proposes an innovative pipeline for quantizing transformers for extremely low precision (1-2) bits, while reducing the gap of previous methods to full precision by ~3X. This result has important implications for resource-restricted inference, especially if memory is of concern, but 1-bit quantization has significant effect on inference latency as well. This work reaches these strong results by careful normalization, separate quantization for non-negative activations and a combinatorial optimization over various distillation paths. Overall, the paper demonstrates an important albeit incremental advance in the field and is of general interest to the wider community, therefore I propose its acceptance at NeurIPS 2022.
train
[ "ApH2Z7_c9za", "4-mSzBoMFgR", "_8mdxKRA93O", "fmT20z7_wUl", "XxOB92px-n", "X7rLd5FK-jB", "QnFRmiWhdz9", "xNeokIJRg8O", "ZB2GSywGDFN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for the response. These clarifications are important to understand the contribution of this work. I now see that small changes to existing proposals can lead to significant improvements in quality. I will raise my rating to borderline accept.", " Thank you, the authors have addressed my questi...
[ -1, -1, -1, -1, -1, -1, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "XxOB92px-n", "fmT20z7_wUl", "nips_2022_wYgRIJ-oK6M", "ZB2GSywGDFN", "xNeokIJRg8O", "QnFRmiWhdz9", "nips_2022_wYgRIJ-oK6M", "nips_2022_wYgRIJ-oK6M", "nips_2022_wYgRIJ-oK6M" ]
nips_2022_1C36tFZn7sR
Learning Chaotic Dynamics in Dissipative Systems
Chaotic systems are notoriously challenging to predict because of their sensitivity to perturbations and errors due to time stepping. Despite this unpredictable behavior, for many dissipative systems the statistics of the long term trajectories are governed by an invariant measure supported on a set, known as the global attractor; for many problems this set is finite dimensional, even if the state space is infinite dimensional. For Markovian systems, the statistical properties of long-term trajectories are uniquely determined by the solution operator that maps the evolution of the system over arbitrary positive time increments. In this work, we propose a machine learning framework to learn the underlying solution operator for dissipative chaotic systems, showing that the resulting learned operator accurately captures short-time trajectories and long-time statistical behavior. Using this framework, we are able to predict various statistics of the invariant measure for the turbulent Kolmogorov Flow dynamics with Reynolds numbers up to $5000$.
Accept
This paper proposes a neural network-based approach to estimate the Markov operator of dissipative chaotic systems. It introduces a novel combination of Sobolev and dissipativity losses. While the reviewers had initial concerns about clarity, assumption and application condition, and the choice of learning Markov operator versus modelling continuous dynamics, the author-reviewer discussion addressed most concerns, and all reviewers agree this work exceeds the bar for publication. I would encourage the authors to take into consideration the remaining concerns from the reviewers, incorporate key conclusions of the discussions and the limitation of the work in their final version.
train
[ "AdlRoTm8K9", "QZ8ayFQmA5", "JjhgkMSyuXZ", "SyqxVADB2BM", "4HLdAwuoTbb", "U1ZCj5yqUcO", "i7Al0vvN2xU", "LS6w473Eo0v", "l1LTOEv0wf", "kJGAOd3AdzN", "QhriKhkTBpj", "KsH2jVcBq1m", "nxkhXiCnVag", "a1Bgv70IakJ", "wXJKJIZnS8sF", "jUPWRjV3sOR", "gmdLSnUEqSB", "SUPNpaxV3Zs", "Q1-_MnTeh6_...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Thank you for your response. Your feedback helped us make the paper stronger!\n\nThe authors-reviewers discussion period may have ended, but allow us to post a brief response regarding the slope. \n\n7. If we understand correctly, the slope of spectrum is $k^{-5/3}$ in the inverse cascade range ($k_a << k << k_f$...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 5 ]
[ "QZ8ayFQmA5", "QhriKhkTBpj", "SyqxVADB2BM", "4HLdAwuoTbb", "U1ZCj5yqUcO", "LS6w473Eo0v", "kJGAOd3AdzN", "a1Bgv70IakJ", "nips_2022_1C36tFZn7sR", "KsH2jVcBq1m", "FWQNWdTEl-", "Q1-_MnTeh6_", "a1Bgv70IakJ", "SUPNpaxV3Zs", "gmdLSnUEqSB", "nips_2022_1C36tFZn7sR", "nips_2022_1C36tFZn7sR", ...
nips_2022_9YasTgzma8c
Trading off Image Quality for Robustness is not Necessary with Regularized Deterministic Autoencoders
The susceptibility of Variational Autoencoders (VAEs) to adversarial attacks indicates the necessity to evaluate the robustness of the learned representations along with the generation performance. The vulnerability of VAEs has been attributed to the limitations associated with their variational formulation. Deterministic autoencoders could overcome the practical limitations associated with VAEs and offer a promising alternative for image generation applications. In this work, we propose an adversarially robust deterministic autoencoder with superior performance in terms of both generation and robustness of the learned representations. We introduce a regularization scheme to incorporate adversarially perturbed data points to the training pipeline without increasing the computational complexity or compromising the generation fidelity by leveraging a loss based on the two-point Kolmogorov–Smirnov test between representations. We conduct extensive experimental studies on popular image benchmark datasets to quantify the robustness of the proposed approach based on the adversarial attacks targeted at VAEs. Our empirical findings show that the proposed method achieves significant performance in both robustness and fidelity when compared to the robust VAE models.
Accept
This paper received generally positive reviews that, after discussion, all backed acceptance. The paper was praised for its empirical evaluations, potential significance, clarity, and applicability. While some questions and lower-level issues were raised, I do not feel that the reviewers raised any significant issues that would be a barrier to acceptance, with the small number of issues that were raised well addressed by the authors' responses. My own personal view of the work is also very positive (perhaps more so than the reviewers themselves): I think this is strong work that will be of significant interest to the community. The empirical results are a particular highlight, both in terms of the performance shown, the comprehensive set of experiments considered, and the numerous and appropriate baselines compared to. While I do have some suggestions and minor gripes (see below) that I would like addressed in the final version of the paper, I have no hesitation in enthusiastically recommending its acceptance. Suggestions and minor issues: - My most important complaint with the paper is that the title is too strong and overclaiming. It suggests a trade-off will never occur and that the result applies to all deterministic auto-encoders, rather than the specific type considered. I do not think either of these are true: just because robustness has been improved with minimal change in FID score, does not mean there will not be a trade-off with future developments (i.e. there may well be ways of improving the image quality of [26] that would no longer give good robustness when combined with the suggested approach). Please, therefore, change the title to something more measured and precise for the camera-ready paper. - It would be good to provide more explicit timing information about the training times of all the different models, rather than just SE. Claims are made in the intro and conclusions about speed, but, unless I have missed something, I did not really feel these were properly supported. - While I think the paper is mostly quite clear and well written, I do think the writing could be improved in some of the key technical sections; I generally found Section 3 to be the worst written part of the paper. In particular, I think more high-level explanation was required. While the maths itself is not too difficult to follow, it took me quite a few reads through to get a feel for the intuitions. To give an example of a specific issue, the right-hand side of Figure 1 comes too early and lacks context: the reader will naturally assume that they should be able to understand what is happening when the figure is first referenced, but actually they need to get to Section 3.3 first to get an idea of what is going on. - It might be useful for the authors to have a look at https://openreview.net/forum?id=nzvbBD_3J-g because it actively argues against using GMM priors in the more conventional VAE setting, on the basis that such inductive bias can more effectively be incorporated through a customised decoder architecture (and not treating the latents as the representation itself), than through regularisation. Of course, their insights may well not carry over to the deterministic auto-encoder setting, but it does hint at an interesting alternative approach and may be worth discussing or at least acknowledging. - Please increase the text size in the figures, these are very difficult to see at present.
train
[ "5gOb7pzb_3g", "sViJMGPIp2m", "RJXXR_dm8L-", "0CkDvT5aw4x", "ecOjbnUm7Px", "rAs_cm_2uDi", "9HRpqM3UHC8", "DSYg4e-kFa8", "gZuiP5RtPrM", "fpnzsgQp1Xj", "_BaGPVntLGM", "dPg7UjEUsms", "ywT6DgyjiYa", "HolsdAfBbUs" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Got it, thank you for the clarification.", " The general trend observed in Saseendran et.al is that the FID improves when the number of modes in the prior is increased. It can be also observed from their sensitivity experiments, that even when the number of modes is further increased from 10, the generation per...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 2, 3 ]
[ "sViJMGPIp2m", "RJXXR_dm8L-", "DSYg4e-kFa8", "rAs_cm_2uDi", "nips_2022_9YasTgzma8c", "HolsdAfBbUs", "ywT6DgyjiYa", "dPg7UjEUsms", "_BaGPVntLGM", "nips_2022_9YasTgzma8c", "nips_2022_9YasTgzma8c", "nips_2022_9YasTgzma8c", "nips_2022_9YasTgzma8c", "nips_2022_9YasTgzma8c" ]
nips_2022_Upt5wsECVJe
Mean Estimation in High-Dimensional Binary Markov Gaussian Mixture Models
We consider a high-dimensional mean estimation problem over a binary hidden Markov model, which illuminates the interplay between memory in data, sample size, dimension, and signal strength in statistical inference. In this model, an estimator observes $n$ samples of a $d$-dimensional parameter vector $\theta_{*}\in\mathbb{R}^{d}$, multiplied by a random sign $ S_i $ ($1\le i\le n$), and corrupted by isotropic standard Gaussian noise. The sequence of signs $\{S_{i}\}_{i\in[n]}\in\{-1,1\}^{n}$ is drawn from a stationary homogeneous Markov chain with flip probability $\delta\in[0,1/2]$. As $\delta$ varies, this model smoothly interpolates two well-studied models: the Gaussian Location Model for which $\delta=0$ and the Gaussian Mixture Model for which $\delta=1/2$. Assuming that the estimator knows $\delta$, we establish a nearly minimax optimal (up to logarithmic factors) estimation error rate, as a function of $\|\theta_{*}\|,\delta,d,n$. We then provide an upper bound to the case of estimating $\delta$, assuming a (possibly inaccurate) knowledge of $\theta_{*}$. The bound is proved to be tight when $\theta_{*}$ is an accurately known constant. These results are then combined to an algorithm which estimates $\theta_{*}$ with $\delta$ unknown a priori, and theoretical guarantees on its error are stated.
Accept
The paper addresses the problem of high-dimensional statistical inference from dependent samples. This is a recently emerging area, and the authors establish nearly tight minimax error rate bounds for a basic statistical model (gaussian hidden markov model). The reviewers appreciated the technical strength of the paper, but there were some questions about the framing and context of the problem. The authors clarified these issues suitably in their rebuttal, clearing the way for acceptance.
train
[ "8bMLQJcoBY3", "FVPxv6qd50", "VVavxvCiec", "RK3slRY4jB7", "YuG_etspwmx", "B4KDbVC2HbL", "PfK6GX0hMP4", "tpfslCGR8W-", "OJSIeDUewxw", "iASiU_dxlR8", "sVzEXvtPVe", "RKG6GFv5PU2", "ok2GPY4Los9", "24hh7f15xMQ", "YAfcjBTVfv8" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have added a short paragraph (due to space constraint) to the end of the Contribution section to reflect the reviewer's suggestions. \nIn particular, the usefulness of our techniques in more general models and the connections/differences with prior related work are highlighted. \nWe thank the reviewer for help...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 2, 4, 2 ]
[ "FVPxv6qd50", "VVavxvCiec", "RK3slRY4jB7", "OJSIeDUewxw", "B4KDbVC2HbL", "iASiU_dxlR8", "nips_2022_Upt5wsECVJe", "YAfcjBTVfv8", "24hh7f15xMQ", "ok2GPY4Los9", "RKG6GFv5PU2", "nips_2022_Upt5wsECVJe", "nips_2022_Upt5wsECVJe", "nips_2022_Upt5wsECVJe", "nips_2022_Upt5wsECVJe" ]
nips_2022_d0stFTU2dTI
Exploration via Planning for Information about the Optimal Trajectory
Many potential applications of reinforcement learning (RL) are stymied by the large numbers of samples required to learn an effective policy. This is especially true when applying RL to real-world control tasks, e.g. in the sciences or robotics, where executing a policy in the environment is costly. In popular RL algorithms, agents typically explore either by adding stochasticity to a reward-maximizing policy or by attempting to gather maximal information about environment dynamics without taking the given task into account. In this work, we develop a method that allows us to plan for exploration while taking both the task and the current knowledge about the dynamics into account. The key insight to our approach is to plan an action sequence that maximizes the expected information gain about the optimal trajectory for the task at hand. We demonstrate that our method learns strong policies with 2x fewer samples than strong exploration baselines and 200x fewer samples than model free methods on a diverse set of low-to-medium dimensional control tasks in both the open-loop and closed-loop control settings.
Accept
All reviewers acknowledged to have read the rebuttal. Reviewer iWun's reply isn't visible to the authors (posted too late), see end of metareview. The most important concerns of the reviewers have been addressed by extensive replies and additional experiments. Overall the method is sound and performs well. As acknowledged by the authors, the method comes with inherent limitations to low-to-medium dimensional problems through the use of GPs. The method is useful on its own, and serves as proof-of-concept for the overall idea also for different function approximators - but replacing GPs will require quite a bit of additional work. *** 18 Aug 2022, NeurIPS 2022 Conference Paper2167 Reviewer iWun "Thanks to the authors for the response. I still don't like non-strict mathematical language of the paper, but this doesn't seem to be a problem for other reviewers. In addition, I like the results of the paper, and therefore I increase my score."
train
[ "th6NMaJiFE", "I7hDU6H56hf", "w5O8NSRq0dw", "wbNOuxQFFqd", "8ejPJ5IVSsm", "JEDREWMG-KE", "WWOvxLYDyk", "H7KhpTwO041", "Y9a5ft_Mh4d", "T_3EZFbjBn", "nKGEJ4gQnNl", "Q1MYZ6Bbuio", "giDOo5jADpd", "qYZjBE37trX", "ma7gU0XI80W" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for pointing that out. I am suggesting that you perform a thorough evaluation of your method which would make your paper more useful for readers. I am updating my score to weak accept.", " Thank you for your reply! Please note that one of the two suggested environments that you listed in your reply, cart...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "I7hDU6H56hf", "w5O8NSRq0dw", "Q1MYZ6Bbuio", "Y9a5ft_Mh4d", "H7KhpTwO041", "ma7gU0XI80W", "qYZjBE37trX", "Y9a5ft_Mh4d", "giDOo5jADpd", "Q1MYZ6Bbuio", "nips_2022_d0stFTU2dTI", "nips_2022_d0stFTU2dTI", "nips_2022_d0stFTU2dTI", "nips_2022_d0stFTU2dTI", "nips_2022_d0stFTU2dTI" ]
nips_2022_UZJHudsQ7d
Robust Calibration with Multi-domain Temperature Scaling
Uncertainty quantification is essential for the reliable deployment of machine learning models to high-stakes application domains. Uncertainty quantification is all the more challenging when training distribution and test distribution are different, even if the distribution shifts are mild. Despite the ubiquity of distribution shifts in real-world applications, existing uncertainty quantification approaches mainly study the in-distribution setting where the train and test distributions are the same. In this paper, we develop a systematic calibration model to handle distribution shifts by leveraging data from multiple domains. Our proposed method---multi-domain temperature scaling---uses the heterogeneity in the domains to improve calibration robustness under distribution shift. Through experiments on three benchmark data sets, we find our proposed method outperforms existing methods as measured on both in-distribution and out-of-distribution test sets.
Accept
Reviewers find the paper original, useful, thorough in its numerics (in the revision), and clearly written.
test
[ "Co-zHTguCs", "Gj5QtyPQs-U", "MsSI0qVIriB", "3TbWl_n8M5", "ZLvUmmIqjXG", "zaAewhEVLso", "0UoOIq10f8", "TZINaXREZsc", "yU6GRpgT9zj", "-TcaRtWdNYVF", "vpovcWJjAMb", "HXaWa_JkgYe", "uGzvs24tUfF" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank you again for your thoughtful review and valuable feedback!", " I appreciate the additional work that the authors have done during the review session. All of the concerns have been dealt by the authors.", " Thank you for engaging with us and helping us improve the paper. \n\nThank you for your sugges...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "Gj5QtyPQs-U", "zaAewhEVLso", "3TbWl_n8M5", "0UoOIq10f8", "HXaWa_JkgYe", "HXaWa_JkgYe", "uGzvs24tUfF", "uGzvs24tUfF", "vpovcWJjAMb", "uGzvs24tUfF", "nips_2022_UZJHudsQ7d", "nips_2022_UZJHudsQ7d", "nips_2022_UZJHudsQ7d" ]
nips_2022_Q6DJ12oQjrp
Sparse Interaction Additive Networks via Feature Interaction Detection and Sparse Selection
There is currently a large gap in performance between the statistically rigorous methods like linear regression or additive splines and the powerful deep methods using neural networks. Previous works attempting to close this gap have failed to fully consider the exponentially growing number of feature combinations which deep networks consider automatically during training. In this work, we develop a tractable selection algorithm to efficiently identify the necessary feature combinations by leveraging techniques in feature interaction detection. Our proposed Sparse Interaction Additive Networks (SIAN) construct a bridge from these simple and interpretable models to a fully connected neural network. SIAN achieves competitive performance against state-of-the-art methods across multiple large-scale tabular datasets and consistently finds an optimal tradeoff between the modeling capacity of neural networks and the generalizability of simpler methods.
Accept
This paper proposes a scheme to augment a trained neural network (considering in particular the case of unstructured, tabular data) by extending generalized additive models to the multi-layer neural setting in an unusual manner, using higher-order derivatives from an initial deep neural network to select a sparse set of higher-order feature interactions on which to fit the augmented network. Reviewers considered the paper well written and easy to follow, found the method sound and well-motivated, and praised the experiments as thorough and the level of detail as adequate for reproducibility. Q2gD wondered specifically how the interpretability of these models measures against post-hoc DNN interpretation methods; the authors responded with a new section in the appendix, causing Q2gD to raise their score. FZvZ had questions about the selection of the model order and how that might affect interpretability, which were adequately addressed in rebuttal. wcBj points to "relatively weak direct technical novelty", to which the authors reasonably respond that their forward selection method for interaction terms stands apart from typical approaches that involve backward selection or pruning; responses to several confusing aspects and a suggested explanatory visualization comprise a new section in the Appendix. The experimental results seem well chosen and are convincing; even if the proposed method SIAN is not uniformly the best method on all tasks considered, it is a strong contender overall and in several cases handily outperforms the DNN baseline. The selection procedure seems well motivated and clever, while the architecture of the resultant additive model seems like one particular choice in a sea of possibilities. I doubt this paper will be the last word on the matter. Nonetheless, this seems like a valuable contribution to the literature on applying DNNs to tabular data and the intersection of GAM techniques with deep learning. I thus recommend acceptance.
train
[ "YO1lK65iRu9", "5gV1n3W2km", "y3MIeTLDesi", "eHLiaLYS4Ot", "vcx_6iB2oA_", "ha_tllF_MEy", "e7-B-aRBi", "TH2GxjpHDRm", "TuPSaTJ8CFg", "jSfws9BiuJk", "lME5n4dCg5", "i4AOlX6HnQA" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for helping us improve the manuscript with your suggestions. We hope we were able to sufficiently answer the majority of your questions.", " We are happy to have addressed all of your major concerns.", " We greatly appreciate your reconsideration and are glad to have addressed your concerns regardi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "eHLiaLYS4Ot", "vcx_6iB2oA_", "ha_tllF_MEy", "TH2GxjpHDRm", "TuPSaTJ8CFg", "e7-B-aRBi", "jSfws9BiuJk", "i4AOlX6HnQA", "lME5n4dCg5", "nips_2022_Q6DJ12oQjrp", "nips_2022_Q6DJ12oQjrp", "nips_2022_Q6DJ12oQjrp" ]
nips_2022_vfR3gtIFd8Y
Fast variable selection makes scalable Gaussian process BSS-ANOVA a speedy and accurate choice for tabular and time series regression
Many approaches for scalable GPs have focused on using a subset of data as inducing points. Another promising approach is the Karhunen-Loève (KL) decomposition, in which the GP kernel is represented by a set of basis functions which are the eigenfunctions of the kernel operator. Such kernels have the potential to be very fast, and do not depend on the selection of a reduced set of inducing points. However KL decompositions lead to high dimensionality, and variable selection thus becomes paramount. This paper reports a new method of forward variable selection, enabled by the ordered nature of the basis functions in the KL expansion of the Bayesian Smoothing Spline ANOVA kernel (BSS-ANOVA), coupled with fast Gibbs sampling in a fully Bayesian approach. It quickly and effectively limits the number of terms, yielding a method with competitive accuracies, training and inference times for tabular datasets of low feature set dimensionality. The new algorithm determines how high the orders of included terms should reach, balancing model fidelity with model complexity using $L^0$ penalties inherent in Bayesian and Akaike information criteria. The inference speed and accuracy makes the method especially useful for modeling dynamic systems, by modeling the derivative in a dynamic system as a static problem, then integrating the learned dynamics using a high-order scheme. The methods are demonstrated on two dynamic datasets: a 'Susceptible, Infected, Recovered' (SIR) toy problem, with the transmissibility used as forcing function, along with the experimental 'Cascaded Tanks' benchmark dataset. Comparisons on the static prediction of derivatives are made with a random forest (RF), a residual neural network (ResNet), and the Orthogonal Additive Kernel (OAK) inducing points scalable GP, while for the timeseries prediction comparisons are made with LSTM and GRU recurrent neural networks (RNNs). 
The GP outperforms the RF and ResNet on the static estimation, and is comparable to OAK. In dynamic systems modeling it outperforms both RNNs, while performing many orders of magnitude fewer calculations. For the SIR test, which involved prediction for a set of forcing functions qualitatively different from those appearing in the training set, BSS-ANOVA captured the correct dynamics while the neural networks failed to do so.
Reject
Reading the reviews, I think there are ultimately two challenges for the authors to address in this work. The first, I think, ends up being a somewhat simple "background for the community" problem, as both several reviewers and the authors in their general comments point out: significantly more background on KL decomposed kernels may be warranted, and this lack of background (along with perhaps some simple notational differences) led to what felt like some real challenges in understanding the full paper. With that being said, I don't think the above alone should be sufficient to result in rejection, despite this lack of context likely being a primary contributor to final review scores. However, I do agree with reviewers' concerns that some concrete comparisons to existing scalable GP literature are missing. I think the inclusion of OAK + inducing points is a start, but e.g. the use of m=40 inducing points in section 3.4 is surprising to me, partly perhaps because it's not clear which task this was a problem for (you state early that you use m=200 for the cascaded tanks task), and partly because none of the dataset sizes involved appear to me to be even remotely beyond the capability of these existing scalable GP approximations in the literature -- even up to m=512 or m=1024 inducing points is fairly standard practice. Given the reasonably good performance of OAK in some of the new results even with limited inducing point sets, I think that clearly further investigation is warranted there. Beyond inducing point methods, there are also NNGP / Vecchia models (which have also been recently made variational via Wu et al., 2022), and even exact GPs seem readily applicable to some of the tasks considered here with access to even a single moderately powerful GPU.
val
[ "KZ0p14mfw_y", "LfVjcjOs-H1", "SZuXSsx9HTJ", "e4uiJsv_T3Sw", "y23PVFjYMS9", "VDLdbBSuJc_", "W1-GD1btXlB", "NdJeKCUoCFK" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the perceptive assessment of our work. We have tried to improve the presentation to make it more germane for the GP community in machine learning, and we have added comparisons to state-of-the-art inducing points-based scalable GPs.\n\n1. $\\vartheta$ is commonly used as notation for model inputs in...
[ -1, -1, -1, -1, 3, 4, 5, 4 ]
[ -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "NdJeKCUoCFK", "W1-GD1btXlB", "VDLdbBSuJc_", "nips_2022_vfR3gtIFd8Y", "nips_2022_vfR3gtIFd8Y", "nips_2022_vfR3gtIFd8Y", "nips_2022_vfR3gtIFd8Y", "nips_2022_vfR3gtIFd8Y" ]
nips_2022_Rqe-fJQtExY
Efficient and Effective Multi-task Grouping via Meta Learning on Task Combinations
As a longstanding learning paradigm, multi-task learning has been widely applied into a variety of machine learning applications. Nonetheless, identifying which tasks should be learned together is still a challenging fundamental problem because the possible task combinations grow exponentially with the number of tasks, and existing solutions heavily relying on heuristics may probably lead to ineffective groupings with severe performance degradation. To bridge this gap, we develop a systematic multi-task grouping framework with a new meta-learning problem on task combinations, which is to predict the per-task performance gains of multi-task learning over single-task learning for any combination. Our underlying assumption is that no matter how large the space of task combinations is, the relationships between task combinations and performance gains lie in some low-dimensional manifolds and thus can be learnable. Accordingly, we develop a neural meta learner, MTG-Net, to capture these relationships, and design an active learning strategy to progressively select meta-training samples. In this way, even with limited meta samples, MTG-Net holds the potential to produce reasonable gain estimations on arbitrary task combinations. Extensive experiments on diversified multi-task scenarios demonstrate the efficiency and effectiveness of our method. Specifically, in a large-scale evaluation with $27$ tasks, which produce over one hundred million task combinations, our method almost doubles the performance obtained by the existing best solution given roughly the same computational cost. Data and code are available at https://github.com/ShawnKS/MTG-Net.
Accept
The overall idea of using a meta-learning network with an active learner for grouped multi-task learning is interesting. The experimental results provided in the original submission and rebuttal are extensive and verify the effectiveness of the proposed method. A major limitation of the proposed method is the high computational cost, especially when each task has its own dataset. Overall, this is a well-written paper that presents an interesting idea for multi-task learning.
train
[ "Csn_spxolPI", "jxVdHnzNfxW", "GO6Fku8luy", "jWO46h8SPXzV", "j_vR85lEt6P", "enD1QLhNxsx", "Tg79K3Lkfd", "SH8eaYqntKf", "oNS8uq9Ug4H", "Q4KvrQYyeu5", "ZvCqYW2p9_9", "Yt8ze0G-Yzh", "gNoQo1Vo6f7", "JAwIpZKHAA1", "vj14BMNbwXq", "HQ9wRwGJTHv", "yHgmcNLX2-", "0LuOi2ObgZ", "IRz-3J3c4Xl"...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " We very appreciate this suggestion and will take your advice to discuss the limitations of this work in the revised paper. We also agree with you that when each task has its own dataset, the computational cost of N-task MTL will be significantly larger than that of two-task MTL.\n\nAs you have mentioned, this wor...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3, 5 ]
[ "jxVdHnzNfxW", "yHgmcNLX2-", "Q4KvrQYyeu5", "ZvCqYW2p9_9", "Rqdt7d8VRrS", "IRz-3J3c4Xl", "IRz-3J3c4Xl", "IRz-3J3c4Xl", "IRz-3J3c4Xl", "IRz-3J3c4Xl", "0LuOi2ObgZ", "yHgmcNLX2-", "yHgmcNLX2-", "HQ9wRwGJTHv", "HQ9wRwGJTHv", "nips_2022_Rqe-fJQtExY", "nips_2022_Rqe-fJQtExY", "nips_2022_...
nips_2022_nLKkHwYP4Au
CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D. Our proposed method first generates some high-quality 3D proposals by leveraging the class-aware local group strategy on the object surface voxels with the same semantic predictions, which considers semantic consistency and diverse locality abandoned in previous bottom-up approaches. Then, to recover the features of missed voxels due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module to directly aggregate fine-grained spatial information from backbone for further proposal refinement. It is memory-and-computation efficient and can better encode the geometry-specific features of each 3D proposal. Our model achieves state-of-the-art 3D detection performance with remarkable gains of +3.6% on ScanNet V2 and +2.6% on SUN RGB-D in term of mAP@0.25. Code will be available at https://github.com/Haiyang-W/CAGroup3D.
Accept
Four expert reviewers suggest acceptance, based mostly on a strong evaluation section that shows good improvements over previous methods. The novelty of the method is deemed sufficient and well ablated. Overall this seems like a good-quality paper, albeit a tiny bit on the incremental side, but enough to recommend acceptance.
train
[ "t_WzSYj9ExD", "cmtW5D0LR6i", "uUvEAPVBcda", "OG4W5D5f1jz", "OEu0QBYtZi", "Q2dNQ0cdLP", "sWhvCjtssLG", "qZ0arGpAcvO" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank the reviewer for providing thoughtful review and positive feedback. Below are our responses to the questions and suggestions raised by the reviewer.\n\n**R4-Q1: Update the title/introduction.** \n**R4-A1:** Thanks. We agree that the title and introduction are somewhat misleading. Our model tak...
[ -1, -1, -1, -1, 6, 5, 5, 7 ]
[ -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "qZ0arGpAcvO", "sWhvCjtssLG", "Q2dNQ0cdLP", "OEu0QBYtZi", "nips_2022_nLKkHwYP4Au", "nips_2022_nLKkHwYP4Au", "nips_2022_nLKkHwYP4Au", "nips_2022_nLKkHwYP4Au" ]
nips_2022_Fm7Dt3lC_s2
Adaptive Data Debiasing through Bounded Exploration
Biases in existing datasets used to train algorithmic decision rules can raise ethical and economic concerns due to the resulting disparate treatment of different groups. We propose an algorithm for sequentially debiasing such datasets through adaptive and bounded exploration in a classification problem with costly and censored feedback. Exploration in this context means that at times, and to a judiciously-chosen extent, the decision maker deviates from its (current) loss-minimizing rule, and instead accepts some individuals that would otherwise be rejected, so as to reduce statistical data biases. Our proposed algorithm includes parameters that can be used to balance between the ultimate goal of removing data biases -- which will in turn lead to more accurate and fair decisions, and the exploration risks incurred to achieve this goal. We analytically show that such exploration can help debias data in certain distributions. We further investigate how fairness criteria can work in conjunction with our data debiasing algorithm. We illustrate the performance of our algorithm using experiments on synthetic and real-world datasets.
Accept
This paper has seen a lot of discussion between reviewers and authors. Reviewers are fairly positive after the discussion/rebuttal phase and there have been significant score revisions upwards. A few concerns that were highlighted during the rebuttal/discussion phase are: 1) Multiple reviewers have pointed out that amongst two sources of bias - data bias and model bias - the authors focus on assembling a dataset to avoid the first type of bias. It has been pointed out that using terms like "social bias, unfairness" and "statistical bias" is very misleading. I strongly suggest the authors better revise the paper according to reviewer comments, using more precise terminology - data bias and/or model bias. Clarity has been a concern uniformly shared amongst all reviewers. 2) The authors principally reduce the data to a single dimension using dimension-reduction techniques and use a thresholded classifier. The authors responded to this concern saying that effective feature learning in general amounts to this and that there exist optimal data dimension-reduction techniques. Further, the authors also experimentally demonstrate that the losses in accuracy due to these techniques are not large. In summary, concerns 1 and 2 are not severe enough (as acknowledged by reviewers raising scores) but are important to keep in mind while preparing the camera-ready.
train
[ "XhqomLc0bp", "f14Zq5O3akf", "4T9QOtl4qz1", "LQka7RnKDN", "1T5MEJu3_Y", "Mp7ykH4dBa_", "e24BQGMkbW6", "Z-aeoxkAz0Z", "Vl7BPryxrym", "96Gf2T50Ac_", "b8dy-A8m_wl", "vNO0mhMNEoH", "zbJMctwAQD", "-CHKx5guvYk", "OEgGmfg5xOk", "mbnPRwN00w", "IG3Te1dLQkh" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I will update the numeric score to a 6, with possible further change after discussion with other reviewers.\nI am happy with the response given by the authors. I believe that the series of clarification given in the responses regarding Assumption 1 and its implications are definitely n...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "e24BQGMkbW6", "OEgGmfg5xOk", "IG3Te1dLQkh", "mbnPRwN00w", "OEgGmfg5xOk", "nips_2022_Fm7Dt3lC_s2", "IG3Te1dLQkh", "Vl7BPryxrym", "96Gf2T50Ac_", "mbnPRwN00w", "vNO0mhMNEoH", "zbJMctwAQD", "-CHKx5guvYk", "OEgGmfg5xOk", "nips_2022_Fm7Dt3lC_s2", "nips_2022_Fm7Dt3lC_s2", "nips_2022_Fm7Dt3...
nips_2022_bfz-jhJ8wn
Bridging the Gap Between Vision Transformers and Convolutional Neural Networks on Small Datasets
There still remains an extreme performance gap between Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) when training from scratch on small datasets, which is concluded to the lack of inductive bias. In this paper, we further consider this problem and point out two weaknesses of ViTs in inductive biases, that is, the spatial relevance and diverse channel representation. First, on spatial aspect, objects are locally compact and relevant, thus fine-grained feature needs to be extracted from a token and its neighbors. While the lack of data hinders ViTs to attend the spatial relevance. Second, on channel aspect, representation exhibits diversity on different channels. But the scarce data can not enable ViTs to learn strong enough representation for accurate recognition. To this end, we propose Dynamic Hybrid Vision Transformer (DHVT) as the solution to enhance the two inductive biases. On spatial aspect, we adopt a hybrid structure, in which convolution is integrated into patch embedding and multi-layer perceptron module, forcing the model to capture the token features as well as their neighboring features. On channel aspect, we introduce a dynamic feature aggregation module in MLP and a brand new "head token" design in multi-head self-attention module to help re-calibrate channel representation and make different channel group representation interacts with each other. The fusion of weak channel representation forms a strong enough representation for classification. With this design, we successfully eliminate the performance gap between CNNs and ViTs, and our DHVT achieves a series of state-of-the-art performance with a lightweight model, 85.68% on CIFAR-100 with 22.8M parameters, 82.3% on ImageNet-1K with 24.0M parameters. Code is available at https://github.com/ArieSeirack/DHVT.
Accept
Authors introduce 3 modifications to ViT architecture to introduce additional inductive biases to improve performance in low-data scenarios: - SOPE: Sequential Overlapping Patch Embedding -- essentially convolutions before partitioning the image into patches. - DAFF: Dynamic Aggregation Feed Forward -- a DWCONV operation is applied to tokens after a FC layer increases the channel dimension. The new tokens are average pooled, input to additional FC layers, and then are used to scale the CLS token. - HI-MHSA: Head-Interacted Multi-Head Self-Attention -- this approach's name is confusing. This does not change the heads in MHSA. Rather a new mechanism is introduced prior to MHSA to introduce new tokens where each new token is derived from a different partition of the original channel dimensions. AC recommends authors use a different name. For example, "Intra-Channel Modeling (ICM) MHSA" or something would be more clear. Performance is evaluated on CIFAR-100, DomainNet subsets, and ImageNet 1K Pros: - [R/AC] The topic is important to the community. - [R/AC] The paper is well written and clear. - [R/AC] The authors present improved performance versus other recent SOTA hybrid model designs (during rebuttal phase, though missing from original work -- should be added to paper). Cons: - [R/AC] The evaluation could be significantly improved. For example, more training experiments on undersampled version of more datasets, with comparisons to other SOTA methods. - [R/AC] The design is complicated and the motivation isn't always clear. - [R] Novelty of the components implemented is low. - [R] Concerns over use of BN as opposed to LN. Authors have provided ablation experiments to demonstrate that BN improves performance of their model over LN. These ablations should be included in the manuscript. - [R/AC] Concerns over lack of comparison to other SOTA methods that mix convolutions with transformers, such as CvT. 
Authors have provided additional experiment tables that compare against CvT. These tables should be included in the manuscript in a consistent manner (showing number of parameters and FLOPS). - [R/AC] Authors do not include FLOPS in their experiment tables. Please ensure all tables report number of parameters and FLOPS for all models explored. There are python packages to help with computing this, such as "flopth". - [AC] Some spelling and grammatical mistakes. Please spell check the manuscript. Overall Recommendation: Reviews lean toward acceptance, but marginally so. Given that the authors have provided more comparisons against recent relevant SOTA methods, and that the reviewers (including expert in the field) lean toward accept, the AC opinion is that this manuscript can be published and provides some valuable knowledge to the community. There are ways in which the paper can still be improved before publication, such as inclusion of additional evaluation datasets. AC Rating: Borderline Accept
train
[ "PTkmGGoa0u_", "7wQj1CTbS8", "YT6q5kbjSZ_", "I9Do6wg2Rmq", "r3rGXFyGIxN", "XS3fjlnIxiF", "8D2Y2FvynkA", "C46X7i_YBN-", "54Ja5l-bqVU", "GZ0PHiVwk6g", "nRfto7_786O", "Sm1qKnZoEV7", "H2wb8kssSvO", "wRajdpybMhy", "dmjP5qTAkyC", "bbto0VIQNS6", "QZn-rCzAHf", "2UdiSFcHU8t", "h0LGMFzb4Wd...
[ "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ " Thanks again for your time, your detailed and insightful comments and kindess! It is a good trip of us these days and your suggestions greatly improve our work, making it more solid. Best wishes.", " Thanks for your time and comments again. Your suggestions and insights help us rethink our work and make it more...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "C46X7i_YBN-", "8D2Y2FvynkA", "I9Do6wg2Rmq", "r3rGXFyGIxN", "dmjP5qTAkyC", "dmjP5qTAkyC", "bbto0VIQNS6", "nRfto7_786O", "GZ0PHiVwk6g", "H2wb8kssSvO", "Sm1qKnZoEV7", "2UdiSFcHU8t", "wRajdpybMhy", "7c0-so-dySO", "bY4zzDmf_WJ", "QZn-rCzAHf", "claapWcVb-e", "h0LGMFzb4Wd", "99wi6Lnfb6...
nips_2022_Bq2-WN5csW
Loss Landscape Dependent Self-Adjusting Learning Rates in Decentralized Stochastic Gradient Descent
Distributed Deep Learning (DDL) is essential for large-scale Deep Learning (DL) training. Synchronous Stochastic Gradient Descent (SSGD) 1 is the de facto DDL optimization method. Using a sufficiently large batch size is critical to achieving DDL runtime speedup. In a large batch setting, the learning rate must be increased to compensate for the reduced number of parameter updates. However, a large learning rate may harm convergence in SSGD and training could easily diverge. Recently, Decentralized Parallel SGD (DPSGD) has been proposed to improve distributed training speed. In this paper, we find that DPSGD not only has a system-wise runtime benefit but also a significant convergence benefit over SSGD in the large batch setting. Based on a detailed analysis of the DPSGD learning dynamics, we find that DPSGD introduces additional landscape-dependent noise that automatically adjusts the effective learning rate to improve convergence. In addition, we theoretically show that this noise smoothes the loss landscape, hence allowing a larger learning rate. This result also implies that DPSGD can make learning rate tuning much easier for tasks that require careful learning rate warmup (e.g, Attention-Based Language Modeling). We conduct extensive studies over 18 state-of-the-art DL models/tasks and demonstrate that DPSGD often converges in cases where SSGD diverges when training is sensitive to large learning rates. Our findings are consistent across three different application domains: Computer Vision (CIFAR10 and ImageNet-1K), Automatic Speech Recognition (SWB300 and SWB2000) and Natural Language Processing (Wikitext-103); three different types of neural network models: Convolutional Neural Networks, Long Short-Term Memory Recurrent Neural Networks and Attention-based Transformer Models; and two optimizers: SGD and Adam.
Reject
This paper compares all-reduce SGD (SSGD) with decentralized SGD (DPSGD) and argues that the latter can tolerate larger stepsizes due to a smoothing effect induced by noise in DPSGD. The reviewers found that the theoretical contribution is overclaimed. Because of the strong assumptions needed in the theory section (such as assuming Gaussian updates), the analysis becomes somewhat disconnected from the experiments, and, in addition, reviewers found several typos and issues in Section 2 of the original submission. Even though the numerical evaluation was judged more positively by all reviewers (and championed by one), we came to the consensus that the paper should be rejected in its current form. (Minor comments:) In the discussion, we also found that the term "self-adjusting" might be a bit misleading (as learning rates are kept fixed and are not self-adjusting), and that the paper would benefit from a brief discussion of related works that study the beneficial effect of smoothing in large-batch training (such as https://arxiv.org/abs/1805.07898 or https://arxiv.org/abs/1906.10822, etc.).
train
[ "X-h0on63s6M", "Y7CuitXO5VB", "hx92YrCFJ0T", "ZXHDKvJXjP", "L6IFOQRsDS", "MjRD7Lhb7Oz", "9Dqe8IK8oqU", "2niSD2KW9Y", "uTUwq7tt-tr", "yhRi2oXh7fo", "x29T33nOgcD", "IoBUT5n_qd6", "q5TwjNH5KR", "coP-CKeBNiq", "y4joeCH_EIL", "ODy9dpaI4_Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed responses. After reading the rebuttals, I tend to keep my score.", " I acknowledge the authors' response.\nI maintain my rating, since there is no convincing theoretical argument in favor of the proposed method. This is a major problem, since the authors claim to provide such argumen...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 5 ]
[ "2niSD2KW9Y", "IoBUT5n_qd6", "x29T33nOgcD", "q5TwjNH5KR", "q5TwjNH5KR", "ODy9dpaI4_Q", "y4joeCH_EIL", "y4joeCH_EIL", "coP-CKeBNiq", "coP-CKeBNiq", "coP-CKeBNiq", "q5TwjNH5KR", "nips_2022_Bq2-WN5csW", "nips_2022_Bq2-WN5csW", "nips_2022_Bq2-WN5csW", "nips_2022_Bq2-WN5csW" ]
nips_2022_hcVlMF3Nvxg
MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples
Multi-label classification, which predicts a set of labels for an input, has many applications. However, multiple recent studies showed that multi-label classification is vulnerable to adversarial examples. In particular, an attacker can manipulate the labels predicted by a multi-label classifier for an input via adding carefully crafted, human-imperceptible perturbation to it. Existing provable defenses for multi-class classification achieve sub-optimal provable robustness guarantees when generalized to multi-label classification. In this work, we propose MultiGuard, the first provably robust defense against adversarial examples to multi-label classification. Our MultiGuard leverages randomized smoothing, which is the state-of-the-art technique to build provably robust classifiers. Specifically, given an arbitrary multi-label classifier, our MultiGuard builds a smoothed multi-label classifier via adding random noise to the input. We consider isotropic Gaussian noise in this work. Our major theoretical contribution is that we show a certain number of ground truth labels of an input are provably in the set of labels predicted by our MultiGuard when the $\ell_2$-norm of the adversarial perturbation added to the input is bounded. Moreover, we design an algorithm to compute our provable robustness guarantees. Empirically, we evaluate our MultiGuard on VOC 2007, MS-COCO, and NUS-WIDE benchmark datasets. Our code is available at: https://github.com/quwenjie/MultiGuard
Accept
This paper studies adversarial examples for varieties of randomized smoothing, namely, ways to improve the robustness of a classifier by adding noise and averaging over inputs. The main contribution is MultiGuard, which is a provably robust defense for multi-label classification. Moreover, the method works for a variety of classifiers, and the authors also provide theoretical and empirical results to back up their method. The reviewers generally find the technical contribution to be significant (although perhaps elementary) as well as finding the problem domain to be important and interesting. The reviewers also found the mathematical tools to be intuitive and appropriate (e.g., a variant of the Neyman-Pearson lemma as well as the law of contraposition to extend the provable guarantees of randomized-smoothing multi-class classification to the multi-label setting). In addition to acknowledging the theoretical results, the reviewers also felt that the empirical studies were sufficient to verify the authors' main findings. On the negative side, there are some concerns about the clarity and rigor of the results. I would encourage the authors to improve the exposition and the preliminaries to increase the readability of the work. Similarly, there are some questions about comparisons to prior work and similar ideas that should be addressed. Overall, I recommend acceptance. The positives outweigh the negatives, and the author-reviewer discussion seemed to address many of the main questions.
train
[ "M6ivW3SAT8", "9c1ulXvveW", "YgiWL3qsncL", "-IJfBGgFrya", "EHuY3Xgaq6w", "GPQFsLtlrIe", "RM1cYOpbcX", "kHbMXT3UQEw", "op1YWPY6v9W", "HDC0AXEJY3", "9zmQlCb5_m", "LhZHkk5UJ0E", "E0g0EV0g6so", "0dDa639s_KP", "Q5T60HS986g", "TqhDKd5_5ik" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your time. We really appreciate the suggestion.", " Thanks the authors for providing the code. My concerns are addressed.", " Many thanks for the comment! We really appreciate the constructive feedback, which significantly improves the paper. We will definitively integrate our clarifications into t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 2, 3 ]
[ "9c1ulXvveW", "HDC0AXEJY3", "-IJfBGgFrya", "op1YWPY6v9W", "GPQFsLtlrIe", "RM1cYOpbcX", "kHbMXT3UQEw", "LhZHkk5UJ0E", "TqhDKd5_5ik", "Q5T60HS986g", "0dDa639s_KP", "E0g0EV0g6so", "nips_2022_hcVlMF3Nvxg", "nips_2022_hcVlMF3Nvxg", "nips_2022_hcVlMF3Nvxg", "nips_2022_hcVlMF3Nvxg" ]
nips_2022_Yopob26XjmL
Natural gradient enables fast sampling in spiking neural networks
For animals to navigate an uncertain world, their brains need to estimate uncertainty at the timescales of sensations and actions. Sampling-based algorithms afford a theoretically-grounded framework for probabilistic inference in neural circuits, but it remains unknown how one can implement fast sampling algorithms in biologically-plausible spiking networks. Here, we propose to leverage the population geometry, controlled by the neural code and the neural dynamics, to implement fast samplers in spiking neural networks. We first show that two classes of spiking samplers---efficient balanced spiking networks that simulate Langevin sampling, and networks with probabilistic spike rules that implement Metropolis-Hastings sampling---can be unified within a common framework. We then show that careful choice of population geometry, corresponding to the natural space of parameters, enables rapid inference of parameters drawn from strongly-correlated high-dimensional distributions in both networks. Our results suggest design principles for algorithms for sampling-based probabilistic inference in spiking neural networks, yielding potential inspiration for neuromorphic computing and testable predictions for neurobiology.
Accept
Although some reviewers have reservations about strong modelling assumptions, the main contribution of the paper is clearly presented and technically sound.
train
[ "sy3xiIaJ65n", "W9Z7EBfbPEQ", "GX8SaKE2dVE", "nCM8ioaqSAL", "u-dI4LRdw-s", "_0TFZ7A-dn3", "Wjop1PzPCBGa", "Jx0T--rWOm", "tsBLEEu5C80", "6YWnjVX4PCo", "_tl0S2QLnk4", "mPqaUuLB-Ym", "iTW4wG4x0PV", "BQgYadFC95", "S10Em2oXyz6", "07yj8ul6c10", "21-EllRMobg", "UX_KZjwRJgq", "RFMKJfIsiW...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " The authors' reply addressed some of my confusion, e.g., the influence of the D matrix in Eq. 14 on the sampling dynamics, the denominator in Eq. 4, and the natural geometry has a smaller discretization error than naive sampling.", " We thank the reviewer for helping us improve the clarity and presentation of ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 5, 4, 4 ]
[ "uP5wb03A8J", "GX8SaKE2dVE", "21-EllRMobg", "_0TFZ7A-dn3", "Wjop1PzPCBGa", "iTW4wG4x0PV", "Jx0T--rWOm", "7J0U3mg6Zbq", "6YWnjVX4PCo", "_tl0S2QLnk4", "mPqaUuLB-Ym", "uP5wb03A8J", "BQgYadFC95", "S10Em2oXyz6", "07yj8ul6c10", "wPyhVXai08s", "UX_KZjwRJgq", "BlvE5lszsL", "nips_2022_Yop...
nips_2022_3AbigH4s-ml
CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior
The increasing size and complexity of modern ML systems has improved their predictive capabilities but made their behavior harder to explain. Many techniques for model explanation have been developed in response, but we lack clear criteria for assessing these techniques. In this paper, we cast model explanation as the causal inference problem of estimating causal effects of real-world concepts on the output behavior of ML models given actual input data. We introduce CEBaB, a new benchmark dataset for assessing concept-based explanation methods in Natural Language Processing (NLP). CEBaB consists of short restaurant reviews with human-generated counterfactual reviews in which an aspect (food, noise, ambiance, service) of the dining experience was modified. Original and counterfactual reviews are annotated with multiply-validated sentiment ratings at the aspect-level and review-level. The rich structure of CEBaB allows us to go beyond input features to study the effects of abstract, real-world concepts on model behavior. We use CEBaB to compare the quality of a range of concept-based explanation methods covering different assumptions and conceptions of the problem, and we seek to establish natural metrics for comparative assessments of these methods.
Accept
The paper presents a new benchmark dataset for assessing explanation methods in NLP, on the sentiment analysis domain. The dataset is unique in that it focuses on the causal effects of modifying specific aspects, providing minimal pairs where only one of the aspects is different. After constructing the benchmark, the paper uses a causality-based metric (Section 2) to evaluate existing explanation methods. The reviewers (NcbX / dBGa) agree that the paper is well motivated and can provide useful resources for the community. The experimental results show a simple baseline they propose performs on par with existing explanation methods, which is already an interesting finding to the community. While the reviewers spotted some flaws (not providing important details, positioning of the work, weak discussion of related work, etc), most seem fixable by camera ready. I’d recommend acceptance.
train
[ "rdNHGumYev", "vnC6OtTvnuou", "eNgkajIs_vb", "bnb_MBxovbD", "jSSrg4xCQV", "LwF36bA9vUW", "N24LdB18BxS", "xNIDfohPyQj", "Bsm-RSRRNWE", "aoobWQdsZGoY", "xC9w5Yih2M2", "gw6Ktpkq9yR", "kt22EulxD1M", "upAhuVrwWFQ", "2yMaQsx_TZ", "g5AYqy3g8Sc" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Also an interesting conceptual question. The scenario we've thought about the most concerns controlling for confounds. Some methods are explicitly motivated by their ability to do this. The default exclusive train set for CEBaB might not have rich enough confounds to bring this out. So someone advocating for a co...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "vnC6OtTvnuou", "eNgkajIs_vb", "bnb_MBxovbD", "N24LdB18BxS", "kt22EulxD1M", "g5AYqy3g8Sc", "g5AYqy3g8Sc", "2yMaQsx_TZ", "upAhuVrwWFQ", "kt22EulxD1M", "kt22EulxD1M", "nips_2022_3AbigH4s-ml", "nips_2022_3AbigH4s-ml", "nips_2022_3AbigH4s-ml", "nips_2022_3AbigH4s-ml", "nips_2022_3AbigH4s-m...
nips_2022_aLNWp0pn1Ij
GAR: Generalized Autoregression for Multi-Fidelity Fusion
In many scientific research and engineering applications, where repeated simulations of complex systems are conducted, a surrogate is commonly adopted to quickly estimate the whole system. To reduce the expensive cost of generating training examples, it has become a promising approach to combine the results of low-fidelity (fast but inaccurate) and high-fidelity (slow but accurate) simulations. Despite the fast developments of multi-fidelity fusion techniques, most existing methods require particular data structures and do not scale well to high-dimensional output. To resolve these issues, we generalize the classic autoregression (AR), which is widely used due to its simplicity, robustness, accuracy, and tractability, and propose generalized autoregression (GAR) using tensor formulation and latent features. GAR can deal with arbitrary dimensional outputs and arbitrary multi-fidelity data structures to satisfy the demand of multi-fidelity fusion for complex problems; it admits a fully tractable likelihood and posterior requiring no approximate inference and scales well to high-dimensional problems. Furthermore, we prove the autokrigeability theorem based on GAR in the multi-fidelity case and develop CIGAR, a simplified GAR with the same predictive mean accuracy but requiring significantly less computation. In experiments on canonical PDEs and scientific computational examples, the proposed method consistently outperforms the SOTA methods by a large margin (up to 6x improvement in RMSE) with only a few high-fidelity training samples.
Accept
This paper considers the problem of multi-fidelity fusion using generalized autoregression. The authors especially take on problems such as high-dimensionality and non-subsetness with this approach. The reviewers agree that the paper is well written and makes a significant contribution to MF-fusion. I recommend acceptance and strongly encourage the authors to take the reviewer comments into account in preparing the final manuscript.
train
[ "8DV0f66ECnl", "2bs-6GXrmN9", "EU99BPbQU6R", "kcuQndRZaCI", "TnvA2uSe6Aq", "JcKSMnuDeLg", "xJ5gWEG1WFb" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for answering my questions. I've improved my score based on the responses.", " Thank you for your valuable suggestions for our work.\n\nC1: But there's existing non-AR work [1] that is able to handle both non-structured high-dimensional outputs and non-subset multi-fidelity data [1] besides MF-BNN.\n\nR1...
[ -1, -1, -1, -1, 5, 6, 8 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "2bs-6GXrmN9", "TnvA2uSe6Aq", "JcKSMnuDeLg", "xJ5gWEG1WFb", "nips_2022_aLNWp0pn1Ij", "nips_2022_aLNWp0pn1Ij", "nips_2022_aLNWp0pn1Ij" ]
nips_2022_nrksGSRT7kX
RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning
Offline reinforcement learning (RL) aims to find performant policies from logged data without further environment interaction. Model-based algorithms, which learn a model of the environment from the dataset and perform conservative policy optimisation within that model, have emerged as a promising approach to this problem. In this work, we present Robust Adversarial Model-Based Offline RL (RAMBO), a novel approach to model-based offline RL. We formulate the problem as a two-player zero sum game against an adversarial environment model. The model is trained to minimise the value function while still accurately predicting the transitions in the dataset, forcing the policy to act conservatively in areas not covered by the dataset. To approximately solve the two-player game, we alternate between optimising the policy and adversarially optimising the model. The problem formulation that we address is theoretically grounded, resulting in a probably approximately correct (PAC) performance guarantee and a pessimistic value function which lower bounds the value function in the true environment. We evaluate our approach on widely studied offline RL benchmarks, and demonstrate that it outperforms existing state-of-the-art baselines.
Accept
This paper introduces the idea of Robust Adversarial RL for offline model-based RL, which could have a high impact. It is well organized and the writing is very comprehensive; the authors manage to convey their idea in concise but informative language. The proposed RAMBO approach performs reasonably well in the presented experiments, although it was pointed out that the paper would benefit from more scenarios showing the necessity of RAMBO compared with the current baseline (COMBO). Questions and issues related to the theory that were raised during the reviewing process have been addressed in the rebuttal.
train
[ "Mp0hvD_P1_Q", "vMJDTvlI6s", "-cJADR6Ke11", "Syez9iD7B3O", "0BBR5HQd9CQ", "CWr7IrvC4R", "NRD-r7WzZj1", "bxl3ecwh3-T", "DbKEwR89opc", "bg0JiDq_fnV", "phP4TeUGEfE", "NRKRcGLDsix", "1URRL-_p9pG", "SPCsO5J0_cI", "W-MHJD7KT-7", "RmrGhq9kxC1", "bFDbYxLQNqP", "BahsH0d7H4D", "io2yKENun2P...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_rev...
[ " Thanks a lot for getting back to us despite being on holiday. \n\n \n\nWe have modified the additional experiment so that we now choose the value of the regularisation parameter for COMBO by sweeping over $\\beta \\in$ {$0.1, 0.25, 0.5, 5.0$}, and selecting the best performance. The best performance for COMBO is ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Syez9iD7B3O", "CWr7IrvC4R", "bg0JiDq_fnV", "NRKRcGLDsix", "io2yKENun2P", "SPCsO5J0_cI", "bxl3ecwh3-T", "DbKEwR89opc", "W-MHJD7KT-7", "phP4TeUGEfE", "1URRL-_p9pG", "io2yKENun2P", "BahsH0d7H4D", "bFDbYxLQNqP", "RmrGhq9kxC1", "nips_2022_nrksGSRT7kX", "nips_2022_nrksGSRT7kX", "nips_20...
nips_2022_k713e8vXzwR
Large-Scale Differentiable Causal Discovery of Factor Graphs
A common theme in causal inference is learning causal relationships between observed variables, also known as causal discovery. This is usually a daunting task, given the large number of candidate causal graphs and the combinatorial nature of the search space. Perhaps for this reason, most research has so far focused on relatively small causal graphs, with up to hundreds of nodes. However, recent advances in fields like biology enable generating experimental data sets with thousands of interventions followed by rich profiling of thousands of variables, raising the opportunity and urgent need for large causal graph models. Here, we introduce the notion of factor directed acyclic graphs ($f$-DAGs) as a way to restrict the search space to non-linear low-rank causal interaction models. Combining this novel structural assumption with recent advances that bridge the gap between causal discovery and continuous optimization, we achieve causal discovery on thousands of variables. Additionally, as a model for the impact of statistical noise on this estimation procedure, we study a model of edge perturbations of the $f$-DAG skeleton based on random graphs and quantify the effect of such perturbations on the $f$-DAG rank. This theoretical analysis suggests that the set of candidate $f$-DAGs is much smaller than the whole DAG space and thus may be more suitable as a search space in the high-dimensional regime where the underlying skeleton is hard to assess. We propose Differentiable Causal Discovery of Factor Graphs (DCD-FG), a scalable implementation of $f$-DAG constrained causal discovery for high-dimensional interventional data. DCD-FG uses a Gaussian non-linear low-rank structural equation model and shows significant improvements compared to state-of-the-art methods in both simulations as well as a recent large-scale single-cell RNA sequencing data set with hundreds of genetic interventions.
Accept
In this paper, the authors propose a new DAG constraint for low-rank adjacency matrices, which can scale to larger graphs. All the reviewers consider this paper sound and the experiments well designed. However, one question about the case of different graph spaces, raised by a reviewer, should be addressed in the final version.
train
[ "gnR_M-z-WJ3", "ZkffK-ETA67", "mD460VI6TB", "4AnXiiSKrJ3", "XSScXHJJhfE", "TaJM35EYnhP", "HlyZOfI-VkC", "sHsky6D8LVE", "HpKqNNESSU1", "KcnTKmHAzm0", "szuEDVlVVyL", "zj_swDhfZd2", "q_sN4pbIeW3", "4FaJo88pvT" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers and area chairs, \n\nWe noticed that the server of anonymous4openscience was down, so we have took the liberty to upload our files to a separate Google Drive so that the remaining reviewer(s) can access the results of our supplementary experiments. \n\nhttps://drive.google.com/file/d/1PlocBals72tAh...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "nips_2022_k713e8vXzwR", "mD460VI6TB", "4AnXiiSKrJ3", "sHsky6D8LVE", "TaJM35EYnhP", "4FaJo88pvT", "q_sN4pbIeW3", "HpKqNNESSU1", "KcnTKmHAzm0", "zj_swDhfZd2", "nips_2022_k713e8vXzwR", "nips_2022_k713e8vXzwR", "nips_2022_k713e8vXzwR", "nips_2022_k713e8vXzwR" ]
nips_2022_VnAwNNJiwDb
Generating Long Videos of Dynamic Scenes
We present a video generation model that accurately reproduces object motion, changes in camera viewpoint, and new content that arises over time. Existing video generation methods often fail to produce new content as a function of time while maintaining consistencies expected in real environments, such as plausible dynamics and object persistence. A common failure case is for content to never change due to over-reliance on inductive bias to provide temporal consistency, such as a single latent code that dictates content for the entire video. On the other extreme, without long-term consistency, generated videos may morph unrealistically between different scenes. To address these limitations, we prioritize the time axis by redesigning the temporal latent representation and learning long-term consistency from data by training on longer videos. We leverage a two-phase training strategy, where we separately train using longer videos at a low resolution and shorter videos at a high resolution. To evaluate the capabilities of our model, we introduce two new benchmark datasets with explicit focus on long-term temporal dynamics.
Accept
All four reviewers enjoyed this paper and were particularly impressed by the videos provided in the supplementary material. The results are very impressive indeed. The reviewers also agreed that using a multi-stage approach was interesting and effective. The two new datasets were deemed useful to the generation community, and the proposed metrics and human evaluations were appreciated by the reviewers. A few smaller concerns included a missing failure analysis and some clarification questions, which were addressed in the rebuttal. Given the above, I recommend acceptance.
train
[ "GYbF_2ZsuyZ", "XVfypRsO2m", "iThPDgezB7C", "o2VTGEGG8Qf", "IRgZ6I08Dg9", "lDFytyh7Du1", "5mskrduqx0D", "u0jiGSW8zJ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review and insightful feedback. We are encouraged that you agree long-term dynamics is understudied and that we selected the correct challenge in video generation to address.\n\n**“Since we are not showing long videos to the discriminator, the generated videos do not have enough dynamics or loo...
[ -1, -1, -1, -1, 5, 7, 7, 6 ]
[ -1, -1, -1, -1, 5, 4, 3, 5 ]
[ "u0jiGSW8zJ", "5mskrduqx0D", "lDFytyh7Du1", "IRgZ6I08Dg9", "nips_2022_VnAwNNJiwDb", "nips_2022_VnAwNNJiwDb", "nips_2022_VnAwNNJiwDb", "nips_2022_VnAwNNJiwDb" ]
nips_2022_-e2SBzFDE8x
Adaptively Exploiting d-Separators with Causal Bandits
Multi-armed bandit problems provide a framework to identify the optimal intervention over a sequence of repeated experiments. Without additional assumptions, minimax optimal performance (measured by cumulative regret) is well-understood. With access to additional observed variables that d-separate the intervention from the outcome (i.e., they are a d-separator), recent "causal bandit" algorithms provably incur less regret. However, in practice it is desirable to be agnostic to whether observed variables are a d-separator. Ideally, an algorithm should be adaptive; that is, perform nearly as well as an algorithm with oracle knowledge of the presence or absence of a d-separator. In this work, we formalize and study this notion of adaptivity, and provide a novel algorithm that simultaneously achieves (a) optimal regret when a d-separator is observed, improving on classical minimax algorithms, and (b) significantly smaller regret than recent causal bandit algorithms when the observed variables are not a d-separator. Crucially, our algorithm does not require any oracle knowledge of whether a d-separator is observed. We also generalize this adaptivity to other conditions, such as the front-door criterion.
Accept
This paper exploits the causal structure in the multi-armed bandits setting and gives a set of novel and strong results, including (1) the conditional benign property, a nice and simple generalization of prior assumptions; (2) an impossibility result for the previous algorithm C-UCB; and (3) a new algorithm that achieves sublinear regret in all cases and optimal regret when there actually is a d-separator. The paper is well-organized and nicely written. The reviewers are unanimously positive about this paper.
train
[ "Od5F0y1V32b", "qpTmNUFXYLP", "NjFwin3IrZM", "OiEkWpLHZw", "2vat4PAthL3", "VKSS6EZ_qxE", "U7jp9aTVRLG", "paygbYkJN9y", "GjpZpJCykqr", "mYum_crvzL" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the insightful example. I think including something like it in the paper will contribute greatly.", " I appreciate the authors' response, and some of my concerns have been addressed. This paper studies a novel problem that concerns the trade-off between exploiting (possibly misspecified) graphical st...
[ -1, -1, -1, -1, -1, -1, 7, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "VKSS6EZ_qxE", "OiEkWpLHZw", "U7jp9aTVRLG", "mYum_crvzL", "GjpZpJCykqr", "paygbYkJN9y", "nips_2022_-e2SBzFDE8x", "nips_2022_-e2SBzFDE8x", "nips_2022_-e2SBzFDE8x", "nips_2022_-e2SBzFDE8x" ]
nips_2022_dMK7EwoTYp
MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction
In recent years, neural implicit surface reconstruction methods have become popular for multi-view 3D reconstruction. In contrast to traditional multi-view stereo methods, these approaches tend to produce smoother and more complete reconstructions due to the inductive smoothness bias of neural networks. State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views. Yet, their performance drops significantly for larger and more complex scenes and scenes captured from sparse viewpoints. This is caused primarily by the inherent ambiguity in the RGB reconstruction loss that does not provide enough constraints, in particular in less-observed and textureless areas. Motivated by recent advances in the area of monocular geometry prediction, we systematically explore the utility these cues provide for improving neural implicit surface reconstruction. We demonstrate that depth and normal cues, predicted by general-purpose monocular estimators, significantly improve reconstruction quality and optimization time. Further, we analyse and investigate multiple design choices for representing neural implicit surfaces, ranging from monolithic MLP models over single-grid to multi-resolution grid representations. We observe that geometric monocular priors improve performance both for small-scale single-object as well as large-scale multi-object scenes, independent of the choice of representation.
Accept
There was a range of reactions to this paper from borderline reject to strong accept. Although several of the reviewers highlighted that the contribution could be viewed as incremental, it is clearly described, and robust across different types of scenes, and I concur with the three reviewers that give positive ratings. Therefore I am accepting this paper.
test
[ "fTk3TUMTV10", "5iPaUY8rnSl", "iDj-BDIe1ZJ", "KdAUG5_D8Ik", "Mj2ixa9Np1A", "IMSMcpsrjmP", "rimXmVYRTI3", "BzLmkdMxcM", "dPbQ-LMWlgr", "urLzkGW8FLy", "lW-dc6VCnbh", "VGBxfy7T4_", "zwxCSSbVh5p", "wbkQ-A2VCq0", "-pCaRQhf8Oq", "XHCz1fUCWuR", "Ew-2SFxbUOK", "FQ4os9jA0Xv", "DJ59g-xISgf...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official...
[ " Thank you very much for your reply and for increasing your score. We are happy to change the title if the reviewers and AC recommend this.", " Thanks to the authors for addressing my concerns. After reading the rebuttal, I still find the experiments on architectural choices distracting to the main contribution ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4 ]
[ "5iPaUY8rnSl", "FQ4os9jA0Xv", "KdAUG5_D8Ik", "Mj2ixa9Np1A", "IMSMcpsrjmP", "rimXmVYRTI3", "urLzkGW8FLy", "mE7zik7Q1K5", "13Sqj0UV-O1", "LrFE1FwM1Sn", "Ew-2SFxbUOK", "mE7zik7Q1K5", "96CJLZ9sfm", "13Sqj0UV-O1", "LrFE1FwM1Sn", "mE7zik7Q1K5", "96CJLZ9sfm", "13Sqj0UV-O1", "wKiXsoR-kvd...
nips_2022_Euv1nXN98P3
TarGF: Learning Target Gradient Field for Object Rearrangement
Object rearrangement aims to move objects from an initial state to a goal state. Here, we focus on a more practical setting in object rearrangement, i.e., rearranging objects from shuffled layouts to a normative target distribution without explicit goal specification. However, it remains challenging for AI agents, as it is hard to describe the target distribution (goal specification) for reward engineering or collect expert trajectories as demonstrations. Hence, it is infeasible to directly employ reinforcement learning or imitation learning algorithms to address the task. This paper aims to search for a policy only with a set of examples from a target distribution instead of a handcrafted reward function. We employ the score-matching objective to train a Target Gradient Field (TarGF), indicating a direction on each object to increase the likelihood of the target distribution. For object rearrangement, the TarGF can be used in two ways: 1) For model-based planning, we can cast the target gradient into a reference control and output actions with a distributed path planner; 2) For model-free reinforcement learning, the TarGF is not only used for estimating the likelihood-change as a reward but also provides suggested actions in residual policy learning. Experimental results in ball and room rearrangement demonstrate that our method significantly outperforms the state-of-the-art methods in the quality of the terminal state, the efficiency of the control process, and scalability.
Accept
After a strong rebuttal from the authors and an extensive discussion among the reviewers, I believe this work will be a valuable contribution to NeurIPS. I recommend it for acceptance and encourage the authors to address the reviewers' comments for the camera-ready version of the paper, especially the point about the simplistic evaluation of the method: please consider a more realistic evaluation scenario.
train
[ "YgtMCuHE1nh", "ei6ZeVvqp0H", "2bhf0XbgkY", "q0n1RDldKxx", "-JXAzTDMF1_", "7UozQ3OS0-j", "OAEbFhwbtA", "PU_oJZoC-Xx", "I2oFuW0KalU", "Ny-8tE-DTr", "oz-TjrksYtA", "iz9N3OK2xn", "34FhNqZRDGI", "Eb4SmRvIn8s", "6nl8YB8RXXy", "LXCjKWQiZaH", "IHl1-4GwYZQ", "jlgfVD7Hhq", "9hBcUj-YNF", ...
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for raising your rating to 7. We are so glad that our responses help address your concerns. Thanks again for all your valuable feedback!", " I thank the authors' detailed clarification. Most of my concerns are addressed for example the multi-model distribution and experiment task setups. Based on that, I...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "ei6ZeVvqp0H", "2bhf0XbgkY", "9hBcUj-YNF", "c7-lJcUukr3", "OAEbFhwbtA", "OAEbFhwbtA", "Eb4SmRvIn8s", "Ny-8tE-DTr", "nips_2022_Euv1nXN98P3", "3CZzxXMp7vG", "c7-lJcUukr3", "3CZzxXMp7vG", "3CZzxXMp7vG", "c7-lJcUukr3", "jlgfVD7Hhq", "9hBcUj-YNF", "9hBcUj-YNF", "nips_2022_Euv1nXN98P3", ...
nips_2022_rnJzy8JnaX
Rethinking Resolution in the Context of Efficient Video Recognition
In this paper, we empirically study how to make the most of low-resolution frames for efficient video recognition. Existing methods mainly focus on developing compact networks or alleviating temporal redundancy of video inputs to increase efficiency, whereas compressing frame resolution has rarely been considered a promising solution. A major concern is the poor recognition accuracy on low-resolution frames. We thus start by analyzing the underlying causes of performance degradation on low-resolution frames. Our key finding is that the major cause of degradation is not information loss in the down-sampling process, but rather the mismatch between network architecture and input scale. Motivated by the success of knowledge distillation (KD), we propose to bridge the gap between network and input size via cross-resolution KD (ResKD). Our work shows that ResKD is a simple but effective method to boost recognition accuracy on low-resolution frames. Without bells and whistles, ResKD considerably surpasses all competitive methods in terms of efficiency and accuracy on four large-scale benchmark datasets, i.e., ActivityNet, FCVID, Mini-Kinetics, Something-Something V2. In addition, we extensively demonstrate its effectiveness over state-of-the-art architectures, i.e., 3D-CNNs and Video Transformers, and scalability towards super low-resolution frames. The results suggest ResKD can serve as a general inference acceleration method for state-of-the-art video recognition. Our code will be available at https://github.com/CVMI-Lab/ResKD.
Accept
After the rebuttal and discussion, two reviewers recommend acceptance and one recommends borderline rejection. Most of the concerns raised in the borderline review were addressed in sufficient detail in the rebuttal. The AC sees no reason to reject this paper.
val
[ "PFYjHk21vWf", "DDDcCTV0AJ", "IKCRXHV61y0", "FOdFdNH8UF3", "737lMsA2xZs", "X_R_-THJlQZ", "f5TYKNVuDkH", "hmfqalCDc6-", "LTnpGFs5WDM", "Dft_Jwsxu_", "NAU4rqFVbBJ", "q2JKtNmCYgL" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer THu4,\n\nThank you for your feedback. As we are approaching the end of the discussion period, we would like to ask whether there are any remaining concerns regarding our paper or our response? We are happy to answer any further questions.\n\nWe sincerely thank you for your efforts in reviewing our p...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "q2JKtNmCYgL", "LTnpGFs5WDM", "FOdFdNH8UF3", "hmfqalCDc6-", "Dft_Jwsxu_", "q2JKtNmCYgL", "Dft_Jwsxu_", "Dft_Jwsxu_", "NAU4rqFVbBJ", "nips_2022_rnJzy8JnaX", "nips_2022_rnJzy8JnaX", "nips_2022_rnJzy8JnaX" ]
nips_2022_hBaI5MY0CBz
Feature-Proxy Transformer for Few-Shot Segmentation
Few-shot segmentation~(FSS) aims at performing semantic segmentation on novel classes given a few annotated support samples. Rethinking recent advances, we find that the current FSS framework has deviated far from the supervised segmentation framework: Given the deep features, FSS methods typically use an intricate decoder to perform sophisticated pixel-wise matching, while the supervised segmentation methods use a simple linear classification head. Due to the intricacy of the decoder and its matching pipeline, it is not easy to follow such an FSS framework. This paper revives the straightforward framework of ``feature extractor $+$ linear classification head'' and proposes a novel Feature-Proxy Transformer (FPTrans) method, in which the ``proxy'' is the vector representing a semantic class in the linear classification head. FPTrans has two key points for learning discriminative features and representative proxies: 1) To better utilize the limited support samples, the feature extractor makes the query interact with the support features from bottom to top layers using a novel prompting strategy. 2) FPTrans uses multiple local background proxies (instead of a single one) because the background is not homogeneous and may contain some novel foreground regions. These two key points are easily integrated into the vision transformer backbone with the prompting mechanism in the transformer. Given the learned features and proxies, FPTrans directly compares their cosine similarity for segmentation. Although the framework is straightforward, we show that FPTrans achieves competitive FSS accuracy on par with state-of-the-art decoder-based methods.
Accept
This paper studies the plain segmentation framework (feature extractor + linear classification) for few-shot segmentation. It introduces a prompt-based query and support interaction method to enable this framework to work well. All the reviewers recognize that the proposed method is novel and the performance is good. Though they have some concerns about the computational cost and the fairness of the experimental comparison (e.g., whether the same backbone is used), the authors address these concerns well in their response. All the reviewers agree with accepting this submission. Although their ratings are not strongly supportive, the AC agrees this submission brings value to the community. It inspires some new thinking about FSS framework design. The overall framework is still heavy; hopefully, in future follow-up works, it can be further simplified.
train
[ "72qhJ8g8wr", "K-X8sYKcYbs", "rPDUYifTyOO", "JM4mce0HAx", "b4OA8GPxjf", "S8VSMgIlvYb", "Bxb7krUJkcQ", "4evA427VNxj", "_KvzerFQR4A" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The rebuttal solves my concerns well, so I raise my rating to 6.", " Thanks for the further comments. \n\n---\n**Q4:** The CNN backbones are typically fixed to alleviate overfitting. In contrast, the ViT backbone is much larger and yet shows resistance against overfitting. Even the baseline with a plain vision ...
[ -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "S8VSMgIlvYb", "rPDUYifTyOO", "b4OA8GPxjf", "_KvzerFQR4A", "4evA427VNxj", "Bxb7krUJkcQ", "nips_2022_hBaI5MY0CBz", "nips_2022_hBaI5MY0CBz", "nips_2022_hBaI5MY0CBz" ]
nips_2022_5JdyRvTrK0q
Private Synthetic Data for Multitask Learning and Marginal Queries
We provide a differentially private algorithm for producing synthetic data simultaneously useful for multiple tasks: marginal queries and multitask machine learning (ML). A key innovation in our algorithm is the ability to directly handle numerical features, in contrast to a number of related prior approaches which require numerical features to be first converted into {high cardinality} categorical features via {a binning strategy}. Higher binning granularity is required for better accuracy, but this negatively impacts scalability. Eliminating the need for binning allows us to produce synthetic data preserving large numbers of statistical queries such as marginals on numerical features, and class conditional linear threshold queries. Preserving the latter means that the fraction of points of each class label above a particular half-space is roughly the same in both the real and synthetic data. This is the property that is needed to train a linear classifier in a multitask setting. Our algorithm also allows us to produce high quality synthetic data for mixed marginal queries, that combine both categorical and numerical features. Our method consistently runs 2-5x faster than the best comparable techniques, and provides significant accuracy improvements in both marginal queries and linear prediction tasks for mixed-type datasets.
Accept
This paper provides a method for generating synthetic differentially-private datasets for use in answering statistical queries, including Mixed Marginal Queries, Class Conditional Linear Threshold Queries, and "Querying the Error." This is an improvement over previous work. A solid paper that all reviewers are positive about.
train
[ "Mqky4FqVTf_", "hIYRWGBafUZ", "TqWzAPzSIYT", "PH2WhtoFni", "UJTex8YlgfM", "OEY6t6sQavt", "2fCBAKi5cdt", "X_kAfS-x3wY", "rAbqvCtEvCX", "uc7_CAFay4W" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Great --- and thank you again for the time you spent reviewing our paper, which has been valuable to us. ", " Terrific --- and thank you for the time you spent reviewing our paper! Your feedback has been valuable.", " Thanks you for the response!\nI am happy with the responses given. I think that the clarific...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "PH2WhtoFni", "TqWzAPzSIYT", "UJTex8YlgfM", "2fCBAKi5cdt", "uc7_CAFay4W", "rAbqvCtEvCX", "X_kAfS-x3wY", "nips_2022_5JdyRvTrK0q", "nips_2022_5JdyRvTrK0q", "nips_2022_5JdyRvTrK0q" ]
nips_2022_fXq93VpCIy
Sauron U-Net: Simple automated redundancy elimination in medical image segmentation via filter pruning
We present Sauron, a filter pruning method that eliminates redundant feature maps by discarding the corresponding filters with automatically-adjusted layer-specific thresholds. Furthermore, Sauron minimizes a regularization term that, as we show with various metrics, promotes the formation of feature maps clusters. In contrast to most filter pruning methods, Sauron is single-phase, similarly to typical neural network optimization, requiring fewer hyperparameters and design decisions. Additionally, unlike other cluster-based approaches, our method does not require pre-selecting the number of clusters, which is non-trivial to determine and varies across layers. We evaluated Sauron and three state-of-the-art filter pruning methods on three medical image segmentation tasks. This is an area where filter pruning has received little attention and where it can help building efficient models for medical grade computers that cannot use cloud services due to privacy considerations. Sauron achieved models with higher performance and pruning rate than the competing pruning methods. Additionally, since Sauron removes filters during training, its optimization accelerated over time. Finally, we show that the feature maps of a Sauron-pruned model were highly interpretable. The Sauron code is publicly available at https://github.com/blindedrepository.
Reject
The paper proposed a method for pruning filters in image segmentation networks by removing filters during training that are closely clustered. Unlike prior works, the approach is described as single-phase, meaning it prunes during normal training. To obtain smaller networks, a term which promotes feature map clustering is added to the loss. The experiments use nnUNet as a baseline network and 3 other recent network pruning methods as comparisons. The results show good performance (Dice/HD95) with more heavily pruned networks on three medical segmentation datasets (with 3D volumes), and largely reduced FLOPs. Deep neural network pruning for medical image segmentation is an important problem due to high dimensionality and long training times. The paper is well-written, and the method and experimental setup are clearly explained. The contribution was nevertheless found to be limited within a very fast-growing literature. The core contribution could be better explained. The authors mostly proposed a general filter pruning method without any specific optimization for the U-Net series. It is thus necessary to compare Sauron against more existing filter pruning methods. The contribution is fair, but it remains overall a bit limited for NeurIPS, in the sense that it does not offer strong insights. Besides, the effectiveness of Sauron is also questionable on 3D U-Net, where the pruned model fails to provide satisfactory performance.
train
[ "mGdA-hxhAy6", "ceI52SzKcaF", "1y6eRZUzBzv", "kdy4XjUuvdE", "QGib6ga6JoW", "H5GJ7JmK46A", "C1hkn_gwAG", "Y1YvqkiVFh0", "jeUl0hKoUm6", "k8Aim3YcdUs", "Xpj9J-RO5fJ", "mxgTz5X4rOO", "PhQCgu1Xj6I", "q9guf_9EAjG" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the time and feedback provided.\n\n> I am still not convinced about \"the proposed regularization term further promotes such cluster formation\" by enforcing the similarity of features against the first one/channel, though the final results improved.\n\nAs the reviewer indicated, we show...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "ceI52SzKcaF", "jeUl0hKoUm6", "QGib6ga6JoW", "H5GJ7JmK46A", "Y1YvqkiVFh0", "C1hkn_gwAG", "q9guf_9EAjG", "PhQCgu1Xj6I", "mxgTz5X4rOO", "Xpj9J-RO5fJ", "nips_2022_fXq93VpCIy", "nips_2022_fXq93VpCIy", "nips_2022_fXq93VpCIy", "nips_2022_fXq93VpCIy" ]
nips_2022_TrsAkAbC96
Implicit Warping for Animation with Image Sets
We present a new implicit warping framework for image animation using sets of source images through the transfer of motion of a driving video. A single cross-modal attention layer is used to find correspondences between the source images and the driving image, choose the most appropriate features from different source images, and warp the selected features. This is in contrast to the existing methods that use explicit flow-based warping, which is designed for animation using a single source and does not extend well to multiple sources. The pick-and-choose capability of our framework helps it achieve state-of-the-art results on multiple datasets for image animation using both single and multiple source images.
Accept
Consistent reviews, both in content and in score. The cross-identity motion transfer is a good test of the paper's capability -- it would improve the paper to provide more such examples, which are clearly more challenging than the same-identity case. The concerns about the limited diversity of example subjects mentioned by R1 are indeed relevant. The video examples are all male, with quite light skin tone. Please include examples with female subjects, darker (Fitzpatrick 6+) skin tone, and other ethnicities. To be clear: the rebuttal's current response "we will emphasize that training on a diverse dataset is a must" does not go far enough. It is very important that qualitative examples are shown, even more important than that the test datasets are diverse. If the results are less good, efforts should be made before NeurIPS to improve them (e.g. by retraining), and if improvement is not possible, this should be very clearly stated in the limitations of the final copy and NeurIPS presentation/poster.
test
[ "hdmeJpXgef6", "uHpFik0Fi4U", "JxHhZ-D2Vaw", "Xhu_ICy04aa", "5tchBBeI_JpU", "XY8bArHRtY_", "z7U1Bh286uI", "LCCFpbT_bTa" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed response and my main concerns are addressed. I would encourage the authors to make the updates they describe and am happy to upgrade my review to Accept. ", " **Extra key-and-values implementation:**\nIn our implementation, the extra keys and values are learned by the netw...
[ -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "JxHhZ-D2Vaw", "nips_2022_TrsAkAbC96", "XY8bArHRtY_", "z7U1Bh286uI", "LCCFpbT_bTa", "nips_2022_TrsAkAbC96", "nips_2022_TrsAkAbC96", "nips_2022_TrsAkAbC96" ]
nips_2022_cA8Zor8wFr5
AttCAT: Explaining Transformers via Attentive Class Activation Tokens
Transformers have improved the state-of-the-art in various natural language processing and computer vision tasks. However, the success of the Transformer model has not yet been duly explained. Current explanation techniques, which dissect either the self-attention mechanism or gradient-based attribution, do not necessarily provide a faithful explanation of the inner workings of Transformers due to the following reasons: first, attention weights alone without considering the magnitudes of feature values are not adequate to reveal the self-attention mechanism; second, whereas most Transformer explanation techniques utilize self-attention module, the skip-connection module, contributing a significant portion of information flows in Transformers, has not yet been sufficiently exploited in explanation; third, the gradient-based attribution of individual feature does not incorporate interaction among features in explaining the model's output. In order to tackle the above problems, we propose a novel Transformer explanation technique via attentive class activation tokens, aka, AttCAT, leveraging encoded features, their gradients, and their attention weights to generate a faithful and confident explanation for Transformer's output. Extensive experiments are conducted to demonstrate the superior performance of AttCAT, which generalizes well to different Transformer architectures, evaluation metrics, datasets, and tasks, to the baseline methods. Our code is available at: https://github.com/qiangyao1988/AttCAT.
Accept
This is an interesting paper with a good contribution to the field. Most reviews are positive.
val
[ "9B-iOsuV51i", "KS2lquXMQRz", "ALGE96Bj9n-", "lWrdau-eGFo", "iwontCcncYY", "lL7aBJlwhLR", "wGaHuNdRTaP", "JH-U7HbHo2T", "1k1wgU9-3bT", "9Y6ZagbQTq", "DOTQceLuB2", "VHTq9w1v9Gv" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors addressed my concerns, so I would like to keep my rating.", " Thank you for taking time to read our response! We appreciate your suggestion on articulating the possible extension of our AttCAT to other domains, and we will carefully incorporate our response in the final version of this manuscript. ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "lWrdau-eGFo", "ALGE96Bj9n-", "iwontCcncYY", "9Y6ZagbQTq", "VHTq9w1v9Gv", "DOTQceLuB2", "1k1wgU9-3bT", "nips_2022_cA8Zor8wFr5", "nips_2022_cA8Zor8wFr5", "nips_2022_cA8Zor8wFr5", "nips_2022_cA8Zor8wFr5", "nips_2022_cA8Zor8wFr5" ]
nips_2022_O5arhQvBdH
Trading off Utility, Informativeness, and Complexity in Emergent Communication
Emergent communication (EC) research often focuses on optimizing task-specific utility as a driver for communication. However, there is increasing evidence that human languages are shaped by task-general communicative constraints and evolve under pressure to optimize the Information Bottleneck (IB) tradeoff between the informativeness and complexity of the lexicon. Here, we integrate these two approaches by trading off utility, informativeness, and complexity in EC. To this end, we propose Vector-Quantized Variational Information Bottleneck (VQ-VIB), a method for training neural agents to encode inputs into discrete signals embedded in a continuous space. We evaluate our approach in multi-agent reinforcement learning settings and in color reference games and show that: (1) VQ-VIB agents can continuously adapt to changing communicative needs and, in the color domain, align with human languages; (2) the emergent VQ-VIB embedding spaces are semantically meaningful and perceptually grounded; and (3) encouraging informativeness leads to faster convergence rates and improved utility, both in VQ-VIB and in prior neural architectures for symbolic EC, with VQ-VIB achieving higher utility for any given complexity. This work offers a new framework for EC that is grounded in information-theoretic principles that are believed to characterize human language evolution and that may facilitate human-agent interaction.
Accept
From the ratings alone this paper appears borderline, leaning towards acceptance; however, I want to highlight to the authors that in discussion with reviewers and my own reading of the paper there are aspects that shifted this even closer to the decision boundary. In the end, my own conflicted views of the work and the lack of further discussion from the more negative reviewer led to a recommendation of accept. I'll briefly review some of the strengths and weaknesses in the latest revision as I see them. + The work truly studies the effect of controlling multiple objectives of communication in the emergent communication setting and how these affect trade-offs in complexity, informativeness*, and utility. This scientific approach to the experimental work is in my view a clear strength of the work. + There is a clear motivation for the choice of objectives based upon existing work in related fields. Although I have reservations about the realization, building on Zaslavsky et al.'s work within emergent communication is something I think will benefit the community. + The writing itself is clear and easy to read. Figures in the main text and appendix were informative and quite interesting. - Important details, especially around the math and experimental setup, are lacking. Some of the examples that bothered me were: ambiguity in the definition of terms in the objective (all three terms could be stated more precisely, but for U(X, Y) we are not told what Y is, and for I(X, C) it is not clear if this is coming from 3.2.2 or 3.2.4); lacking clarity around gradient flow (passing gradients back to the sender in this type of setup should be stated very clearly and upfront). - Connection between I(X, C) and complexity. As I was reading I interpreted this to be (as in 3.2.2) "the KL divergence of μ(x) and σ(x) from a unit Gaussian", and found this to be a very strange choice (not as an objective but as a measure of the complexity of the language). 
This is, I believe, a different concern than what was raised by one of the reviewers. The issue I saw there was that the codebook need not be uniformly distributed in the continuous space, and therefore the implications on message complexity of a particular variance in the continuous latent space may be inconsistent. For some regions of latent space a unit variance could imply a single message with high probability, while other areas of the latent space may be more closely clustered, causing the same unit variance to be nearly uniform over several different messages. Moving on to 3.2.4, we see (perhaps) evidence that the authors encountered the repercussions of this choice: "simply penalizing this term in training was insufficient to train VQ-VIB agents to use fewer unique discrete embedding". The motivation is great, but I strongly suspect that there are better ways of realizing it and that this particular choice is not capturing what the author(s) intended. - Limited novelty of the method. The combination of VIB and VQ-VAE seemed like something that would have already been published, since regularizing the prior of the latent distribution is such a well-used (maybe even well-understood) method, but I looked and must grant that this does seem to be a novel combination. That said, it is not so novel that it would be able to stand on its own without the specific setting itself and the connection with models of complexity in human languages to support it. - (Minor) The author(s) made several additional changes to address reviewer concerns around prior work and putting their design choices into context, but there remain some issues in this space. I found the discussion of related work to be fairly shallow and at times even dismissive. 
It reads as though the work was undertaken entirely devoid of consideration for related work except for that which directly motivated the approach, and the related work was then added defensively without making real connections between this work and others. I mark this as minor because it is unfortunately somewhat common and because it is a more subjective evaluation. I hope this and the other reviews will help the author(s) understand how the work may be experienced by readers and potentially make further refinements. Overall, despite limitations, I do believe this work will be of interest to researchers in emergent communication and potentially more broadly, due to common underlying questions around trading off complexity and other primary learning objectives.
test
[ "D1JUsiRfcf", "anbwjByLyWK", "9vNkbmedKO", "ym2vD8qkwiQ", "J-1L-6-Eean", "7Xja8kj6YKL", "oMZywra4oJQ", "nISzgvlHb7", "9KrVr5sTbbK", "5hzjLuaTtYu", "vczQ21aa36m", "_FPaSb-VyTn", "cNrEvu77PGx", "zsCYJ475JDc", "fQnyiTL0XmB", "VU4xEVWk_dC", "Qo2z7733w3H", "Vs7PdVXHiP", "Ji2xlhDOY8s",...
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", ...
[ " We thank the reviewer for the helpful follow-up comments. We’re happy that the reviewer found our work overall interesting, and we hope that our response below will address all of the reviewer’s concerns. Given that the main concerns appear related to situating our work within the emergent communication literatur...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "cNrEvu77PGx", "_FPaSb-VyTn", "oMZywra4oJQ", "nISzgvlHb7", "9KrVr5sTbbK", "5hzjLuaTtYu", "ukq0svW0Nd_", "bmTe0woNaQW", "j4BFA5u-o0b", "duHS_HFVeIR", "cNrEvu77PGx", "VU4xEVWk_dC", "zsCYJ475JDc", "Z8TxVmz4Ss9", "Vs7PdVXHiP", "Vs7PdVXHiP", "ukq0svW0Nd_", "ZwQWhZPbha1", "nips_2022_O5...
nips_2022_4pwCvvel8or
Online PAC-Bayes Learning
Most PAC-Bayesian bounds hold in the batch learning setting where data is collected at once, prior to inference or prediction. This somewhat departs from many contemporary learning problems where data streams are collected and the algorithms must dynamically adjust. We prove new PAC-Bayesian bounds in this online learning framework, leveraging an updated definition of regret, and we revisit classical PAC-Bayesian results with a batch-to-online conversion, extending their remit to the case of dependent data. Our results hold for bounded losses, potentially \emph{non-convex}, paving the way to promising developments in online learning.
Accept
PAC-Bayes theory provides upper bounds on the risk of aggregation of predictors in the batch setting. Many PAC-Bayes bounds are actually minimized by EWA (Exponentially Weighted Aggregation), but these bounds can also be applied to (slightly) sub-optimal aggregation procedures, and allow one to control their level of sub-optimality: classical examples include Gaussian aggregation / variational Bayes. There are also bounds on the regret of EWA in the online setting. However, while these bounds look quite similar to PAC-Bayes bounds, they usually do not allow working with alternative aggregation procedures such as Gaussian aggregation. Recently, some results allowed the study of other aggregation rules, as in van der Hoeven et al. [2018] and Chérief-Abdellatif et al. [2019], but these results still impose strong constraints and cannot be applied to arbitrary aggregation strategies. Here, the authors manage to fully extend PAC-Bayes bounds to the online setting. In other words, their Theorem 2.2 can be used to upper bound the regret of very general aggregation strategies, including of course EWA and Gaussian aggregation. There was initially a disagreement between reviewers, based on the following: 1) on the one hand, the reviewers agree that Theorem 2.2 is a nice extension of PAC-Bayes bounds, and provides a generalization of existing results on EWA and Gaussian aggregation [see Reviewers Kjzo, orn2 and also UW2Y]. 2) on the other hand, it is not clear whether a useful application of Theorem 2.2 can lead to new results beyond the "usual cases" EWA / Gaussian. Indeed, these are the two examples discussed by the authors [UW2Y]. After discussion, there was ultimately an agreement that even though some of the reviewers and myself are still not totally convinced about 2), the nice construction of 1) justifies publication of the paper. I will therefore recommend accepting it. 
Each of the reviewers raised many minor issues that the authors should take into account in the camera-ready version (writing [zWwp, UW2Y] / experiments [Kjzo, orn2] / ...). I will add the following points: - van der Hoeven et al. [2018] already contains a nice discussion on the extension of regret bounds beyond EWA, and shares many similarities with this work (even though it does NOT contain a result such as Theorem 2.2). This paper is currently not cited by the authors. The authors should cite it, and discuss it. - "The guarantees Chérief-Abdellatif et al. [2019] provided for SVB hold for Gaussian priors and posteriors and are valid for iid data, which is a particular case of our work." This is an incorrect and misleading statement: that paper is written in the same setting as the classical bounds on EWA. There is no stochastic assumption on the data in that paper (nor in van der Hoeven et al. [2018]).
train
[ "DgqgwDw0iF2", "cxgylGE4mhU", "j3DZmgLIcbO", "5yFslNfr2mL", "Z3_eBseWU16", "z43_12kkuDe", "k0GcjmFFWav", "E_BPeHAGGc", "bnJQF3Mg8cA", "V9YQMKY0R374", "mCGA5eD-WoX", "4uRET1Ug_v", "0QQv7-q_TFr", "zAFYOu1pJGi", "M2cH00CV1L", "bFvpv_vsF18x", "zFRX-jWPHWP", "jbHz_C163vK", "jj6rbmNtTM...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_re...
[ " We are happy to hear that the new version of the document has dissipated your concerns.\n\nWe thank you for your time.", " I thank the authors for their detailed response, which along with the responses to the other reviewers has shed additional light on the various points of inquiry I had. I will raise my scor...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 2, 3 ]
[ "cxgylGE4mhU", "4uRET1Ug_v", "4uRET1Ug_v", "Z3_eBseWU16", "z43_12kkuDe", "zAFYOu1pJGi", "E_BPeHAGGc", "V9YQMKY0R374", "nips_2022_4pwCvvel8or", "mCGA5eD-WoX", "0QQv7-q_TFr", "Aljnl6LUvf7", "jj6rbmNtTMO", "jbHz_C163vK", "zFRX-jWPHWP", "nips_2022_4pwCvvel8or", "nips_2022_4pwCvvel8or", ...
nips_2022_siG_S8mUWxf
Learning Physical Dynamics with Subequivariant Graph Neural Networks
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics. However, they still encounter several challenges: 1) Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization and should be incorporated into the model design. Existing simulators either consider insufficient symmetry, or enforce excessive equivariance in practice when symmetry is partially broken by gravity. 2) Objects in the physical world possess diverse shapes, sizes, and properties, which should be appropriately processed by the model. To tackle these difficulties, we propose a novel backbone, called Subequivariant Graph Neural Network, which 1) relaxes equivariance to subequivariance by considering external fields like gravity, where the universal approximation ability holds theoretically; 2) introduces a new subequivariant object-aware message passing for learning physical interactions between multiple objects of various shapes in particle-based representation; 3) operates in a hierarchical fashion, allowing for modeling long-range and complex interactions. Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2$\times$ lower rollout MSE on RigidFall compared with state-of-the-art GNN simulators, while exhibiting strong generalization and data efficiency.
Accept
Overall this is an interesting paper. It proposes a new formulation of the equivariant graph neural network, the subequivariant GNN. Reviewers agree that the proposed idea could be useful to the community, albeit with a perhaps narrow application scope. So on the novelty side, this paper is okay. The biggest concern among the reviewers is about the experiments, i.e., mostly the fairness of the comparison. I feel the authors did a reasonable job of explaining why the current baselines were chosen and provided additional experimental evidence. The authors could take into account the comments from the reviewers to improve the overall presentation of the paper.
val
[ "EObPDK0NUfw", "F9Bl-zVSAmb", "uLqjY8GvZJG", "TmVBxAWm8-9", "e96rKqnz0ck", "bu08CGkF2bs", "KhnzGe5ELT", "XMX2Cao43uE", "Qj8XDJijv6", "tXcXYpGCRhw", "nGtfWpuIKw", "Ry3dMCYsPaP", "J4d52QYJGaE", "AILcckXgk_C", "6K2GLNCdjVg", "zGjmHIyLi80", "TUsOPVtb59", "hVOV5r8CiJ-", "qYfXXzsu0ZG8"...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " Dear Reviewer bh6f,\n\nThank you very much! We really enjoy the discussion with you, during which your insightful comments have helped greatly improve the paper. Thanks again!\n\nBest, \\\nAuthors", " Thank you for the discussion. I will revise my score.", " \n> **I personally think this is quite an impactful...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "F9Bl-zVSAmb", "uLqjY8GvZJG", "e96rKqnz0ck", "bu08CGkF2bs", "Qj8XDJijv6", "nGtfWpuIKw", "XMX2Cao43uE", "Z2JLE1XZwEI", "zGjmHIyLi80", "TUsOPVtb59", "hVOV5r8CiJ-", "J4d52QYJGaE", "qYfXXzsu0ZG8", "xrkuvyEWdZK", "kYsa7ajBQeF", "XlJLoqYwea-", "mQmWUbjk3i6C", "vrawTSDm-Ku", "tKslL-EMs2...
nips_2022_XtyeppctGgc
Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning), which is not efficient, or only tune the last linear layer (linear probing), which suffers a significant accuracy drop compared to the full fine-tuning. In this paper, we propose a new parameter-efficient fine-tuning method termed as SSF, representing that researchers only need to Scale and Shift the deep Features extracted by a pre-trained model to catch up with the performance of full fine-tuning. In this way, SSF also surprisingly outperforms other parameter-efficient fine-tuning approaches even with a smaller number of tunable parameters. Furthermore, different from some existing parameter-efficient fine-tuning methods (e.g., Adapter or VPT) that introduce the extra parameters and computational cost in the training and inference stages, SSF only adds learnable parameters during the training stage, and these additional parameters can be merged into the original pre-trained model weights via re-parameterization in the inference phase. With the proposed SSF, our model obtains 2.46% (90.72% vs. 88.54%) and 11.48% (73.10% vs. 65.57%) performance improvement on FGVC and VTAB-1k in terms of Top-1 accuracy compared to the full fine-tuning but only fine-tuning about 0.3M parameters. We also conduct amounts of experiments in various model families (CNNs, Transformers, and MLPs) and datasets. Results on 26 image classification datasets in total and 3 robustness & out-of-distribution datasets show the effectiveness of SSF. Code is available at https://github.com/dongzelian/SSF.
Accept
This paper provides a simple method to avoid full fine-tuning of vision transformers, namely very simple linear adapters that can be trained and then subsumed into the existing linear layers during inference, which is an interesting characteristic as it prevents added computation during inference (unlike the use of regular adapters as used in NLP). Overall the reviewers appreciated the simplicity and intuition of the method, the improvement in performance over other competing methods such as VPT, the avoidance of computation overhead during inference, and the comprehensive experiments. There were some concerns, however, related to the writing of the paper and clarity, robustness on OOD data, complexity analysis/details of runtime, and lack of theoretical justification. Many of these were addressed by the authors, including nice robustness/OOD results which add to the experimental validation. After the rebuttal, the reviewers all agreed on acceptance. While the method is still empirical (the hypothesis related to distribution matching seems unsubstantiated, there are many other ways to achieve that, and so it should probably not be in the paper), the paper has a strong empirical execution that uses a simple method to deal with limitations that have significant societal/deployment consequences. As a result, I recommend accepting this paper.
train
[ "O2yOidq9Row", "i0pQxf6Kalt", "mDBgcrIPY39c", "fGKLTGg2tEZ", "o7WD1bcajsb", "l1aS2X7gSdQ", "w9mVC_TLHD1m", "DhiBFi2LhX", "ycZMATfCSHR", "vIKRmxes-WV", "USpmRis22fK", "Km9fnfbVGUl", "oPGM2NUJ9hf" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your valuable comments and suggestions! We sincerely appreciate your recognition and constructive comments to improve our work.", " Thank you authors for the detailed responses to my questions. I have reviewed them and they seem to answers my concerns. I have updated my rating accordingly.", " **Q4...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 5 ]
[ "i0pQxf6Kalt", "vIKRmxes-WV", "vIKRmxes-WV", "vIKRmxes-WV", "Km9fnfbVGUl", "oPGM2NUJ9hf", "oPGM2NUJ9hf", "USpmRis22fK", "nips_2022_XtyeppctGgc", "nips_2022_XtyeppctGgc", "nips_2022_XtyeppctGgc", "nips_2022_XtyeppctGgc", "nips_2022_XtyeppctGgc" ]
nips_2022_qm5LpHyyOUO
MCMAE: Masked Convolution Meets Masked Autoencoders
Vision Transformers (ViTs) have become widely adopted architectures for various vision tasks. Masked auto-encoding for feature pretraining and multi-scale hybrid convolution-transformer architectures can further unleash the potential of ViTs, leading to state-of-the-art performance on image classification, detection, and semantic segmentation. In this paper, our MCMAE framework demonstrates that multi-scale hybrid convolution-transformers can learn more discriminative representations via the masked auto-encoding scheme. However, directly using the original masking strategy leads to heavy computational cost and a pretraining-finetuning discrepancy. To tackle this issue, we adopt masked convolution to prevent information leakage in the convolution blocks. A simple block-wise masking strategy is proposed to ensure computational efficiency. We also propose to more directly supervise the multi-scale features of the encoder to boost multi-scale features. Based on our pretrained MCMAE models, MCMAE-Base improves ImageNet-1K finetuning accuracy by 1.4% compared with MAE-Base. On object detection, MCMAE-Base finetuned for only 25 epochs surpasses MAE-Base fine-tuned for 100 epochs by 2.9% box AP and 2.2% mask AP, respectively. Code and pretrained models are available at \url{https://github.com/Alpha-VL/ConvMAE}.
Accept
The reviewers were initially positive about this submission. After the authors' rebuttal, one reviewer pointed out that the name `ConvMAE' is not a proper description of the current work. The authors responded by proposing an alternative name, which was acknowledged by the reviewer. Overall, all the reviewers remain positive about this work, and the AC stands with the reviewers. The authors shall take the reviewers' suggestions to further polish the current work in the camera-ready submission.
train
[ "ZZ63RrAwYp", "NDxV1kaiQlw", "V8I7SsvVZ90", "ssuZAulGPCci", "sJqlrXfOh42", "BAtNiU6C_eY", "8otWqFlnXbm", "JhGaaXNu6gL2", "i0kZXF2AXc5", "2nxBGoV8-N8", "_ljYpO-7La0", "tpXLzQK9bca", "Mt_r_atVyQT", "SCw1JSaqqO" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We update the results of VideoConvMAE-multiscale pretrained for 1600 epochs on SSV2 in the table below :\n| ConvMAE-multiscale/Epochs | 800 | 1600 | \n|----------------|------|-----|\n| Kinetics-400 | 82.7 |N/A| \n| SSV2 | 70.7 | 71.2| \n", " Thanks! Given this change I have no other concerns about...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4, 3 ]
[ "sJqlrXfOh42", "V8I7SsvVZ90", "ssuZAulGPCci", "i0kZXF2AXc5", "SCw1JSaqqO", "Mt_r_atVyQT", "tpXLzQK9bca", "_ljYpO-7La0", "2nxBGoV8-N8", "nips_2022_qm5LpHyyOUO", "nips_2022_qm5LpHyyOUO", "nips_2022_qm5LpHyyOUO", "nips_2022_qm5LpHyyOUO", "nips_2022_qm5LpHyyOUO" ]
nips_2022_d4JmP1T45WE
Training Spiking Neural Networks with Event-driven Backpropagation
Spiking Neural Networks (SNNs) represent and transmit information by spatiotemporal spike patterns, which brings two major advantages: biological plausibility and suitability for ultra-low-power neuromorphic implementation. Despite this, the binary firing characteristic makes training SNNs more challenging. To learn the parameters of deep SNNs in an event-driven fashion, as in the inference of SNNs, backpropagation with respect to spike timing has been proposed. Although this event-driven learning has the advantages of lower computational cost and memory occupation, its accuracy is far below that of recurrent-neural-network-like learning approaches. In this paper, we first analyze the commonly used temporal backpropagation training approach and prove that the sum of gradients remains unchanged between fully-connected and convolutional layers. Secondly, we show that the max pooling layer meets the above invariance rule, while the average pooling layer does not; the latter will suffer from the gradient vanishing problem but can be revised to meet the requirement. Thirdly, we point out the reverse gradient problem for time-based gradients and propose a backward kernel that can solve this problem while keeping the property of the invariable sum of gradients. The experimental results show that the proposed approach achieves state-of-the-art performance on CIFAR10 among time-based training methods. Also, this is the first time that a time-based backpropagation approach has successfully trained an SNN on the CIFAR100 dataset. Our code is available at https://github.com/zhuyaoyu/SNN-event-driven-learning.
Accept
The authors propose a novel training algorithm to train spiking neural networks (SNNs) in an event-driven manner with backpropagation. They perform experiments on standard benchmarks such as CIFAR-10 and CIFAR-100 to verify the effectiveness of the method. The algorithm achieves SOTA performance on these data sets. Event-driven methods are interesting from a hardware perspective as gradients have to be propagated only at spike times. The manuscript received mixed ratings; a clear agreement could not be found. Pros: - The authors provide an analysis of event-driven backprop in SNNs which helps to adjust the usual learning procedure. - The authors performed experiments on several data sets and achieved SOTA performance w.r.t. other event-driven methods. - They also tackled CIFAR100, for which no results were previously shown with event-driven algorithms. - The paper is well-written, although the language could be improved in places. Cons: - Improvements over competing event-driven techniques are rather small. - Performance is still clearly below non-event-based surrogate gradient methods (but this is not surprising). - It is not clear how the method scales up beyond CIFAR100. Since the ratings were mixed, I read the paper and believe it is publishable at NeurIPS, although it is somewhat borderline.
test
[ "iy9DEnjSDi", "28mTmhyht9", "Y3jKxXWmhxs", "x1tVPZDgjzY", "u7w7T6Cwd3_", "FGgyxnU1SMd", "6ZQ4AlSBRqf", "Ssht7CSobZn", "KjvapaILMHo", "LWBHoY3zbwy", "Mmqw5LjB7G-", "AFsSs-sHe-", "tLWbrSLRy5D", "dEKk844QTj", "NwpburIDs8q" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 1S4Q,\n\nAs you suggested, we have checked and added recommended publications in the new version of our paper. We organize the other concerns as follows:\n1) Aiming at your question on the contribution of our paper (which is also asked by Reviewer tMmb), we have clarified the contribution of our pap...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 2, 3 ]
[ "tLWbrSLRy5D", "NwpburIDs8q", "NwpburIDs8q", "tLWbrSLRy5D", "FGgyxnU1SMd", "KjvapaILMHo", "Ssht7CSobZn", "tLWbrSLRy5D", "NwpburIDs8q", "dEKk844QTj", "AFsSs-sHe-", "nips_2022_d4JmP1T45WE", "nips_2022_d4JmP1T45WE", "nips_2022_d4JmP1T45WE", "nips_2022_d4JmP1T45WE" ]
nips_2022_mjUrg0uKpQ
I2DFormer: Learning Image to Document Attention for Zero-Shot Image Classification
Despite the tremendous progress in zero-shot learning (ZSL), the majority of existing methods still rely on human-annotated attributes, which are difficult to annotate and scale. An unsupervised alternative is to represent each class using the word embedding associated with its semantic class name. However, word embeddings extracted from pre-trained language models do not necessarily capture visual similarities, resulting in poor zero-shot performance. In this work, we argue that online textual documents e.g., Wikipedia, contain rich visual descriptions about object classes, therefore can be used as powerful unsupervised side information for ZSL. To this end, we propose I2DFormer, a novel transformer-based ZSL framework that jointly learns to encode images and documents by aligning both modalities in a shared embedding space. In order to distill discriminative visual words from noisy documents, we introduce a new cross-modal attention module that learns fine-grained interactions between image patches and document words. Consequently, our I2DFormer not only learns highly discriminative document embeddings that capture visual similarities but also gains the ability to localize visually relevant words in image regions. Quantitatively, we demonstrate that our I2DFormer significantly outperforms previous unsupervised semantic embeddings under both zero-shot and generalized zero-shot learning settings on three public datasets. Qualitatively, we show that our method leads to highly interpretable results where document words can be grounded in the image regions.
Accept
The authors propose a method to learn a joint representation of an image and a document describing the object present in the image. Experiments show that the proposed model outperforms state-of-the-art models. Although the reviewers' final ratings are not aligned, I think the authors resolved most of the raised questions.
train
[ "-b8wMaDFY7U", "0qJiMVkldEB", "4vbpCG9XzCC", "L_3sw7C6zO", "n01wp3AkMC", "OCZnqZMG3N_", "x0H85AeucAY", "HMlV18LBBo", "Sno3_HJ_Muq", "Q91Z1OAVj2Z", "iCnlwHEQHq", "takbNy0RvHI", "ziQSA4F-WO", "NsslNJcf7De", "LPAfbpNK5WW", "mJ6cMXtzHl", "qSA8bskeCY1", "jE2uEkq4F7M", "4FGfMgzAyFV" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer uvjr\n\nWe want to thank you once again for your helpful review. We have incorporated your feedback into the manuscript including additional discussion and experiments. We believe your suggestions further improved the clarity of the manuscript and opens it to a wider set of audience. We have also di...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 4, 4, 3 ]
[ "4FGfMgzAyFV", "4vbpCG9XzCC", "nips_2022_mjUrg0uKpQ", "n01wp3AkMC", "OCZnqZMG3N_", "x0H85AeucAY", "HMlV18LBBo", "4FGfMgzAyFV", "Q91Z1OAVj2Z", "jE2uEkq4F7M", "takbNy0RvHI", "ziQSA4F-WO", "NsslNJcf7De", "qSA8bskeCY1", "mJ6cMXtzHl", "nips_2022_mjUrg0uKpQ", "nips_2022_mjUrg0uKpQ", "nip...
nips_2022_fLIgyyQiJqz
Temporal Effective Batch Normalization in Spiking Neural Networks
Spiking Neural Networks (SNNs) are promising for neuromorphic hardware owing to their use of spatio-temporal information and sparse, event-driven signal processing. However, it is challenging to train SNNs due to the non-differentiable nature of the binary firing function. Surrogate gradients alleviate the training problem and let SNNs obtain performance comparable to Artificial Neural Networks (ANNs) with the same structure. Unfortunately, batch normalization, which contributes to the success of ANNs, does not play a prominent role in SNNs because of the additional temporal dimension. To this end, we propose an effective normalization method called temporal effective batch normalization (TEBN). By rescaling the presynaptic inputs with different weights at every time-step, temporal distributions become smoother and more uniform. Theoretical analysis shows that TEBN can be viewed as a smoother of the SNN's optimization landscape and could help stabilize the gradient norm. Experimental results on both static and neuromorphic datasets show that SNNs with TEBN outperform state-of-the-art accuracy with fewer time-steps, and achieve better robustness to hyper-parameters than other normalizations.
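The core operation this abstract describes — batch-normalizing the presynaptic input at each time-step and rescaling it with a per-time-step weight — can be sketched in a few lines. This is a hypothetical illustration with invented names (`batch_norm`, `tebn`), not the paper's implementation:

```python
def batch_norm(x, eps=1e-5):
    # Normalize a batch of values to zero mean and (near) unit variance.
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    return [(v - m) / (var + eps) ** 0.5 for v in x]

def tebn(inputs_per_step, step_weights):
    # inputs_per_step[t] is the batch of presynaptic inputs at time-step t.
    # Each time-step gets its own scale, which smooths the temporal
    # distribution of inputs across the simulation steps.
    return [[w * v for v in batch_norm(x_t)]
            for x_t, w in zip(inputs_per_step, step_weights)]
```

Each time-step's output is zero-mean, and doubling a step's weight doubles only that step's activations, which is the per-time-step rescaling described above.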
Accept
The paper proposes a method of batch normalization that takes into account the temporal dimension (TEBN) and empirically shows that TEBN can significantly improve the accuracy of spiking neural networks (SNNs). Theoretical analysis also provides new insights into how SNNs should be trained to improve accuracy (particularly in the face of temporal variation of internal covariate shift). This paper had received conflicting evaluations from Reject to Strong Accept, and the reviewers did not reach a consensus even after fairly intense discussion. The disagreement appears to come from what one expects from SNNs: accuracy, robustness, latency, sparsity, biological plausibility, etc. While it is well supported empirically (particularly with the additional experiments during rebuttal) that TEBN increases accuracy, TEBN certainly loses biological plausibility, and operations such as the variance computation needed in TEBN might not be desirable for some applications of SNNs. Also, since much of the experiments were added during rebuttal, there is criticism of the lack of consistency in the experimental design, which also leads to mixed evaluations regarding the benefit of the proposed approach. Overall, despite several weaknesses and uncertainties, the high accuracy certainly matters to some users and researchers of SNNs, and the paper clearly excels in this regard. Hence, I recommend acceptance.
train
[ "lC9dz2wssv6", "O_hUzdnlUI", "wn-28unYfP6", "IMPoNzsurw", "kupxNe3ClkA", "bgj7GHTOYt", "5AFiNvZ7hxM", "-q_oZimGm6E", "hBAdEHwvrOQ", "T3nG9oSo79m", "O-bB0tWR5ie", "48gMB-5xjg", "mFkc0giAPaD", "XquezqhLxcN", "lJ-DkxqRg8k", "3XtiiCLVLy99", "vuKdywLxM1Ex", "TVqzNYI16k", "PJ8TWtMqVUN5...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official...
[ " [1]https://github.com/fangwei123456/spikingjelly/blob/master/spikingjelly/activation_based/examples/speechcommands.py\n\n[2]Youngeun Kim and Priyadarshini Panda. Revisiting batch normalization for training low-latency deep spiking neural networks from scratch. Frontiers in Neuroscience, 15:773954–773954, 2021.\n\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 5 ]
[ "O_hUzdnlUI", "wn-28unYfP6", "IMPoNzsurw", "lJ-DkxqRg8k", "7B9c4CIozUl", "k_PfUt6I8gk", "O-bB0tWR5ie", "T3nG9oSo79m", "ky4EkCirrE", "mFkc0giAPaD", "TVqzNYI16k", "nips_2022_fLIgyyQiJqz", "XquezqhLxcN", "RNY5iYFOaL5", "3XtiiCLVLy99", "vuKdywLxM1Ex", "k_PfUt6I8gk", "PJ8TWtMqVUN5", "...
nips_2022_js2ssA77fX
Masked Generative Adversarial Networks are Data-Efficient Generation Learners
This paper shows that the masked generative adversarial network (MaskedGAN) is a robust image generation learner with limited training data. The idea of MaskedGAN is simple: it randomly masks out certain image information for effective GAN training with limited data. We develop two masking strategies that work along orthogonal dimensions of training images: a shifted spatial masking that masks the images in spatial dimensions with random shifts, and a balanced spectral masking that masks certain image spectral bands with self-adaptive probabilities. The two masking strategies complement each other and together encourage more challenging holistic learning from limited training data, ultimately suppressing trivial solutions and failures in GAN training. Albeit simple, extensive experiments show that MaskedGAN achieves superior performance consistently across different network architectures (e.g., CNNs including BigGAN and StyleGAN-v2 and Transformers including TransGAN and GANformer) and datasets (e.g., CIFAR-10, CIFAR-100, ImageNet, 100-shot, AFHQ, FFHQ, and Cityscapes).
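The shifted spatial masking strategy — a coarse block mask whose position is randomly shifted each iteration — can be sketched as follows. This is a hypothetical illustration (`shifted_spatial_mask` is an invented name, and the real method also includes the complementary spectral masking branch):

```python
import random

def shifted_spatial_mask(h, w, block=4, ratio=0.5, rng=None):
    # Build a coarse block-level mask (0 = masked, 1 = kept), then apply a
    # random sub-block shift so masked regions vary across training iterations
    # instead of always aligning to the same grid.
    rng = rng or random.Random(0)
    gh, gw = h // block, w // block
    coarse = [[0 if rng.random() < ratio else 1 for _ in range(gw)]
              for _ in range(gh)]
    dy, dx = rng.randrange(block), rng.randrange(block)
    return [[coarse[((i + dy) // block) % gh][((j + dx) // block) % gw]
             for j in range(w)] for i in range(h)]
```

The resulting binary mask would be multiplied elementwise with a training image before it is fed to the discriminator/generator pipeline.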
Accept
This paper proposes two masking strategies to improve GANs with limited data. The idea is novel, and the two strategies nicely complement each other. The experimental results are promising. The reviewers unanimously raised questions about missing comparisons, which seem to be well addressed after the author-reviewer discussion. Two reviewers ended up raising their scores. There are still some claims in the paper that may need better experimental support, and more visual cues would help the presentation. Also, all reviewers pointed out that the discussion of limitations and broader impact seems inadequate. I recommend weak acceptance, but strongly encourage the authors to address the above-mentioned concerns in the next version.
train
[ "0LtU-pnz1Mn", "ASirOq1Euc3", "pElzxhw8-6M", "kHnjGM8tPB-", "SVUkdcNyjA4", "fSd6tQUImAJ", "jm4tGxyjTqb", "as8kLazzsGn", "UG3U5GGuf1S", "J4Lh6pa7X7p", "_S2jnEcPKT6", "ePIjE72TG92", "H0aiXd7RtV4", "xeV6DWa8hwx", "yE9hR5bWMB", "EeUwAi4XSQ", "enUYom-mzlM" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the additional experiments and analysis provided by the authors. My concerns are well addressed. I will raise my score. ", " Thanks for the timely and detailed response from the authors. My concerns have been addressed, and I will raise the score.", " Dear Reviewer mM3P:\n\nWe thank you for the pre...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "SVUkdcNyjA4", "_S2jnEcPKT6", "EeUwAi4XSQ", "xeV6DWa8hwx", "fSd6tQUImAJ", "enUYom-mzlM", "as8kLazzsGn", "EeUwAi4XSQ", "J4Lh6pa7X7p", "yE9hR5bWMB", "ePIjE72TG92", "H0aiXd7RtV4", "xeV6DWa8hwx", "nips_2022_js2ssA77fX", "nips_2022_js2ssA77fX", "nips_2022_js2ssA77fX", "nips_2022_js2ssA77f...
nips_2022_w5DacXWzQ-Q
SAViT: Structure-Aware Vision Transformer Pruning via Collaborative Optimization
Vision Transformers (ViTs) yield impressive performance across various vision tasks. However, heavy computation and memory footprint make them inaccessible for edge devices. Previous works apply importance criteria determined independently by each individual component to prune ViTs. Considering that heterogeneous components in ViTs play distinct roles, these approaches lead to suboptimal performance. In this paper, we introduce joint importance, which integrates essential structural-aware interactions between components for the first time, to perform collaborative pruning. Based on the theoretical analysis, we construct a Taylor-based approximation to evaluate the joint importance. This guides pruning toward a more balanced reduction across all components. To further reduce the algorithm complexity, we incorporate the interactions into the optimization function under some mild assumptions. Moreover, the proposed method can be seamlessly applied to various tasks including object detection. Extensive experiments demonstrate the effectiveness of our method. Notably, the proposed approach outperforms the existing state-of-the-art approaches on ImageNet, increasing accuracy by 0.7% over the DeiT-Base baseline while saving 50% FLOPs. On COCO, we are the first to show that 70% FLOPs of FasterRCNN with ViT backbone can be removed with only 0.3% mAP drop. The code is available at https://github.com/hikvision-research/SAViT.
Accept
The paper received three positive reviews and one negative review. The raised issues concern technical correctness, ImageNet-22K pretraining, insufficient experiments and actual speedup on GPUs, computational cost, and clarity of the ablation studies. During the rebuttal and discussion phases, most of the issues were addressed and reviewers were willing to upgrade their ratings. After checking all the reviews, rebuttals, and discussions, the AC agrees with the reviewers that the raised issues are well addressed. The authors shall revise according to the suggestions to further improve the current manuscript in the camera-ready submission. Also, a comparison to token-selection-based ViT acceleration methods [a] shall be included in the experiments. [a]. Not All Patches Are What You Need: Expediting Vision Transformers via Token Reorganizations. Liang et al. ICLR 2022.
train
[ "l4y8MaAtMWX", "kYFAzEU2fjY", "OJgiqOMt8_A", "niVtS7BGCEh", "woqfnRadvY", "zCBRkhEp9b3", "o-Ug_TbM3Qy", "qKG7XZxwGwJ", "muDO9b_k6-n", "BtIUuKhbtf", "3qu4VOu8qCC", "UOwyAnChqsY", "tYKA4xEWww" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the detailed feedback and additional results. The response addressed my concerns about the insufficient experiments and actual speedup on GPUs. I am glad to see the search process of the method is much faster than the existing method. I raised the score to 5. \n\n", " Dear ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 3 ]
[ "3qu4VOu8qCC", "nips_2022_w5DacXWzQ-Q", "woqfnRadvY", "tYKA4xEWww", "UOwyAnChqsY", "3qu4VOu8qCC", "3qu4VOu8qCC", "3qu4VOu8qCC", "BtIUuKhbtf", "nips_2022_w5DacXWzQ-Q", "nips_2022_w5DacXWzQ-Q", "nips_2022_w5DacXWzQ-Q", "nips_2022_w5DacXWzQ-Q" ]
nips_2022_ZG5Bi1N4V0U
SeqPATE: Differentially Private Text Generation via Knowledge Distillation
Protecting the privacy of user data is crucial for text generation models, which can leak sensitive information during generation. Differentially private (DP) learning methods provide guarantees against identifying the existence of a training sample from model outputs. PATE is a recent DP learning algorithm that achieves high utility with strong privacy protection on training samples. However, text generation models output tokens sequentially in a large output space; the classic PATE algorithm is not customized for this setting. Furthermore, PATE works well to protect sample-level privacy, but is not designed to protect phrases in samples. In this paper, we propose SeqPATE, an extension of PATE to text generation that protects the privacy of individual training samples and sensitive phrases in training data. To adapt PATE to text generation, we generate pseudo-contexts and reduce the sequence generation problem to a next-word prediction problem. To handle the large output space, we propose a candidate filtering strategy to dynamically reduce the output space, and refine the teacher aggregation of PATE to avoid low agreement due to voting for a large number of candidates. To further reduce privacy losses, we use knowledge distillation to reduce the number of teacher queries. The experiments verify the effectiveness of SeqPATE in protecting both training samples and sensitive phrases.
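The refined teacher aggregation this abstract describes — restricting the vote to a small candidate set and adding noise for the DP guarantee — can be sketched as below. This is a hypothetical illustration with invented names; the paper's actual candidate filtering and noise mechanism may differ:

```python
import math
import random

def aggregate_teachers(teacher_dists, top_k=3, noise_scale=1.0, rng=None):
    # Candidate filtering: keep only the top-k tokens by average teacher
    # probability (shrinking the huge output space), then add Laplace noise
    # to the vote totals and pick the noisy argmax as the pseudo-label.
    rng = rng or random.Random(0)
    vocab = len(teacher_dists[0])
    avg = [sum(d[i] for d in teacher_dists) / len(teacher_dists)
           for i in range(vocab)]
    candidates = sorted(range(vocab), key=lambda i: -avg[i])[:top_k]

    def laplace():
        # Inverse-CDF sampling of a Laplace variate with scale noise_scale.
        u = max(rng.random() - 0.5, -0.499999)
        return -noise_scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

    noisy = {i: sum(d[i] for d in teacher_dists) + laplace() for i in candidates}
    return max(noisy, key=noisy.get)
```

When teachers strongly agree, the small noise cannot flip the winner, which is why aggregating over a filtered candidate set preserves agreement better than voting over the full vocabulary.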
Accept
The paper studies the PATE framework for text generation models and proposes an algorithm based on knowledge distillation (KD) to handle the large output space. Reviewers think that the proposed method should generate interest among the NeurIPS audience. We encourage the authors to incorporate the reviewers' comments to improve the paper.
train
[ "otVdz2d0okz", "_o0WxvOZ7Bj", "jhv6OF4ivS", "EI1DDh1s80w", "mRx4bkr35Cj", "1Gxr7odK9mz", "DkyUVK1z9e", "bRjvEI0vliL", "Qd_Cy69Bffzc", "7wwv6_8TOz_", "3tM7AWkC4JU", "5hCgf2eXllT", "_9QMAaHMEH", "c2KLppkGk7u", "QA9pCj9wV3t", "FEC0Z0yZdja", "AWn_3KJsDh", "PButlHJjzs7", "TUvidruCY_2"...
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your time and effort in reviewing our paper. We appreciate your encouragement and potential support in the following discussion phase. \n\nThank you for reading our response and the revised paper carefully. We will polish this paper according to your suggestions. Hope you all are doing wel...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 4 ]
[ "mRx4bkr35Cj", "1Gxr7odK9mz", "c2KLppkGk7u", "QA9pCj9wV3t", "DkyUVK1z9e", "Qd_Cy69Bffzc", "bRjvEI0vliL", "_9QMAaHMEH", "7wwv6_8TOz_", "3tM7AWkC4JU", "5hCgf2eXllT", "PButlHJjzs7", "TUvidruCY_2", "AWn_3KJsDh", "FEC0Z0yZdja", "nips_2022_ZG5Bi1N4V0U", "nips_2022_ZG5Bi1N4V0U", "nips_202...
nips_2022_wlEOsQ917F
A framework for bilevel optimization that enables stochastic and global variance reduction algorithms
Bilevel optimization, the problem of minimizing a value function which involves the arg-minimum of another function, appears in many areas of machine learning. In a large-scale empirical risk minimization setting where the number of samples is huge, it is crucial to develop stochastic methods, which only use a few samples at a time to progress. However, computing the gradient of the value function involves solving a linear system, which makes it difficult to derive unbiased stochastic estimates. To overcome this problem we introduce a novel framework, in which the solution of the inner problem, the solution of the linear system, and the main variable evolve at the same time. These directions are written as a sum, making it straightforward to derive unbiased estimates. The simplicity of our approach allows us to develop global variance reduction algorithms, where the dynamics of all variables are subject to variance reduction. We demonstrate that SABA, an adaptation of the celebrated SAGA algorithm in our framework, has an $O(\frac{1}{T})$ convergence rate, and that it achieves linear convergence under the Polyak-Łojasiewicz assumption. This is the first stochastic algorithm for bilevel optimization that verifies either of these properties. Numerical experiments validate the usefulness of our method.
Accept
The main topic of this work is stochastic bilevel optimization. It provides an efficient algorithm for this task, and provides theoretical results in this setting. The reviewers are unanimous that this is well-presented work of high quality and should be accepted, and so do I.
train
[ "EHufmexU4nj", "U-x-lGcNOb", "-39NDGecalW", "cKGXp0381Yt", "sYrbQhiY0V1", "FIu4aWzcClM", "vO8WSgdo0ep", "7pnZq1yuPCk", "6h1s93-EPg", "YCOZnHGI8sG", "7Tky25mBlIn", "jew388VycbJ", "PITZtkbRMtY", "1lz_0UFhUw4", "yhN9m0WX-4t" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the detailed response and improvements on the revision. My concerns are resolved and I increase my rating from 6 to 7.", " Thank you for updating your review and for your suggestion. We agree that it is worth mentioning what rate we can expect if we stick with the usual regularity assump...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "YCOZnHGI8sG", "FIu4aWzcClM", "6h1s93-EPg", "yhN9m0WX-4t", "1lz_0UFhUw4", "7Tky25mBlIn", "7pnZq1yuPCk", "nips_2022_wlEOsQ917F", "yhN9m0WX-4t", "1lz_0UFhUw4", "PITZtkbRMtY", "nips_2022_wlEOsQ917F", "nips_2022_wlEOsQ917F", "nips_2022_wlEOsQ917F", "nips_2022_wlEOsQ917F" ]
nips_2022_bMYU8_qD8PW
A Unified Model for Multi-class Anomaly Detection
Despite the rapid advance of unsupervised anomaly detection, existing methods require to train separate models for different objects. In this work, we present UniAD that accomplishes anomaly detection for multiple classes with a unified framework. Under such a challenging setting, popular reconstruction networks may fall into an "identical shortcut", where both normal and anomalous samples can be well recovered, and hence fail to spot outliers. To tackle this obstacle, we make three improvements. First, we revisit the formulations of fully-connected layer, convolutional layer, as well as attention layer, and confirm the important role of query embedding (i.e., within attention layer) in preventing the network from learning the shortcut. We therefore come up with a layer-wise query decoder to help model the multi-class distribution. Second, we employ a neighbor masked attention module to further avoid the information leak from the input feature to the reconstructed output feature. Third, we propose a feature jittering strategy that urges the model to recover the correct message even with noisy inputs. We evaluate our algorithm on MVTec-AD and CIFAR-10 datasets, where we surpass the state-of-the-art alternatives by a sufficiently large margin. For example, when learning a unified model for 15 categories in MVTec-AD, we surpass the second competitor on the tasks of both anomaly detection (from 88.1% to 96.5%) and anomaly localization (from 89.5% to 96.8%). Code is available at https://github.com/zhiyuanyou/UniAD.
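The neighbor masked attention module this abstract mentions can be illustrated by its attention mask: each feature-map token is blocked from attending to itself and its spatial neighbors, so the reconstruction cannot trivially copy the input (the "identical shortcut"). A hypothetical sketch with an invented name; the real module also involves the layer-wise query decoder and feature jittering:

```python
def neighbor_mask(h, w, k=1):
    # Build an (h*w) x (h*w) attention mask over feature-map tokens:
    # 1 = attention allowed, 0 = blocked. Blocked pairs are a token with
    # itself and with neighbors inside a (2k+1) x (2k+1) spatial window,
    # preventing information leak from input to reconstructed output.
    n = h * w
    mask = [[1] * n for _ in range(n)]
    for i in range(n):
        yi, xi = divmod(i, w)
        for j in range(n):
            yj, xj = divmod(j, w)
            if abs(yi - yj) <= k and abs(xi - xj) <= k:
                mask[i][j] = 0
    return mask
```

In an attention layer, blocked entries would be set to -inf before the softmax so each position must be reconstructed from non-neighboring context.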
Accept
This paper is on a highly important topic and makes solid contributions. Anomaly detection for multi-class datasets without class information is an underexplored area. Reviewers appreciated the strong experimental results (especially on the important MVTec benchmark), the high quality of the writing, and the explainability results beyond accuracy, via a novel attention mechanism. On the flip side, there were concerns about a lack of deep analyses of the constituents of the method and about novelty (given that there are some recent papers with similar ideas). The scores were borderline and the authors have put significant effort into addressing the concerns of the reviewers. Especially the extra ablation studies and comparisons with other relevant papers are quite helpful with regard to the convincingness of the ideas. Given all of this, I support the acceptance of the paper. Please update your paper with the additional content you have provided in the responses below.
train
[ "lsvGaO9YteS", "HfGzyo4M-uo", "IikAOq6YFK", "vHhfv8g1gSb", "JevmHaSsyg", "cq6crNIlv7O", "yns44QgO4gy", "kRezj1MzM36", "HHKTn2Bsphm", "X4Ul9a5HURX", "lWrJbMMFGGB", "YN4ujK8aDgm", "Cp5IcunPYL2", "KhhcvZxEFE", "UiTaKu_cyes" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your valuable suggestions that help us improve the manuscript. We are glad that you appreciate the \"identical short\" problem studied in this work, which is our major focus. In the meantime, we also agree that our current presentation (*i.e.*, abstract and introduction) may give too much space to t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "vHhfv8g1gSb", "IikAOq6YFK", "lWrJbMMFGGB", "X4Ul9a5HURX", "Cp5IcunPYL2", "KhhcvZxEFE", "UiTaKu_cyes", "HHKTn2Bsphm", "YN4ujK8aDgm", "UiTaKu_cyes", "KhhcvZxEFE", "Cp5IcunPYL2", "nips_2022_bMYU8_qD8PW", "nips_2022_bMYU8_qD8PW", "nips_2022_bMYU8_qD8PW" ]
nips_2022_0tG59j2efs
Learning from Future: A Novel Self-Training Framework for Semantic Segmentation
Self-training has shown great potential in semi-supervised learning. Its core idea is to use the model learned on labeled data to generate pseudo-labels for unlabeled samples, and in turn teach itself. To obtain valid supervision, active attempts typically employ a momentum teacher for pseudo-label prediction, yet observe the confirmation bias issue, where incorrect predictions may provide wrong supervision signals and get accumulated in the training process. The primary cause of this drawback is that the prevailing self-training framework guides the current state with only past knowledge, because the teacher is updated with the past student alone. To alleviate this problem, we propose a novel self-training strategy which allows the model to learn from the future. Concretely, at each training step, we first virtually optimize the student (i.e., caching the gradients without applying them to the model weights), then update the teacher with the virtual future student, and finally ask the teacher to produce pseudo-labels for the current student as the guidance. In this way, we manage to improve the quality of the pseudo-labels and thus boost the performance. We also develop two variants of our future-self-training (FST) framework that peep at the future both deeply (FST-D) and widely (FST-W). Taking unsupervised domain adaptive semantic segmentation and semi-supervised semantic segmentation as instances, we experimentally demonstrate the effectiveness and superiority of our approach under a wide range of settings. Code is available at https://github.com/usr922/FST.
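The three-step update this abstract describes (virtual student step, teacher update with the future student, pseudo-labeling for the current step) can be sketched with model weights as plain dictionaries. This is a hypothetical, simplified illustration — in the real method the cached gradients come from a loss that itself uses the teacher's pseudo-labels, which is elided here:

```python
def fst_step(student, teacher, grad_fn, pseudo_label_fn, lr=0.1, ema=0.9):
    # 1) Virtually optimize the student: cache gradients and compute the
    #    would-be future weights without committing them yet.
    grads = grad_fn(student)
    future_student = {k: student[k] - lr * grads[k] for k in student}
    # 2) Update the momentum teacher with the *future* student, not the past one.
    new_teacher = {k: ema * teacher[k] + (1.0 - ema) * future_student[k]
                   for k in teacher}
    # 3) The updated teacher produces pseudo-labels to guide the current
    #    student, whose cached update is now applied.
    pseudo_labels = pseudo_label_fn(new_teacher)
    new_student = future_student
    return new_student, new_teacher, pseudo_labels
```

Compared with standard mean-teacher self-training, the only change is the order: the EMA update happens against the virtually-advanced student, so the pseudo-labels reflect a slightly "future" model state.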
Accept
This paper introduces an approach for reducing confirmation bias during self-training for semantic segmentation, by “learning from the future”, i.e., updating the teacher at a given timestep in self-training with a virtually updated version of the student, without actually using the gradients to update the student yet. Overall, reviewers were enthusiastic about the paper, finding the proposed method to be simple but interesting and of broad utility, and the paper well-written. The rebuttal responses seemed to address most questions and concerns, though there are some remaining weaknesses, such as the fact that the approach adds additional time/computation cost while the performance advantage over standard self-training decreases with additional training iterations. However, on balance I agree with reviewers that the strengths of the paper outweigh the weaknesses and recommend acceptance.
train
[ "GFDOgDaY-OL", "FWkEujpdUbt", "7JnxcfAN9W", "p8Z_5mRVZnS", "1s3OqRWvXnm", "I2MzVapZqfm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Q1: A more convincing clarification on the motivation.**\n\nFirst, we observe that, although the pseudo-labels are noisy during training, the performance roughly gets better, which means *more accurate predictions*.\nMotivated by this, we wonder if it is possible to use the future state to provide more reliable...
[ -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "I2MzVapZqfm", "1s3OqRWvXnm", "p8Z_5mRVZnS", "nips_2022_0tG59j2efs", "nips_2022_0tG59j2efs", "nips_2022_0tG59j2efs" ]
nips_2022_gRK9SLQHTDV
Don't Roll the Dice, Ask Twice: The Two-Query Distortion of Matching Problems and Beyond
In most social choice settings, the participating agents express their preferences over the different alternatives in the form of linear orderings. While this clearly simplifies preference elicitation, it inevitably leads to poor performance with respect to optimizing a cardinal objective, such as the social welfare, since the values of the agents remain virtually unknown. This loss in performance because of lack of information is measured by distortion. A recent array of works put forward the agenda of designing mechanisms that learn the values of the agents for a small number of alternatives via queries, and use this limited extra information to make better-informed decisions, thus improving distortion. Following this agenda, in this work we focus on a class of combinatorial problems that includes most well-known matching problems and several of their generalizations, such as One-Sided Matching, Two-Sided Matching, General Graph Matching, and k-Constrained Resource Allocation. We design two-query mechanisms that achieve the best-possible worst-case distortion in terms of social welfare, and outperform the best-possible expected distortion achieved by randomized ordinal mechanisms.
Accept
This work studies a narrow but important problem: how much cardinal information is needed to achieve near-optimal matchings. The authors show that with just two queries (one query is required for any non-trivial result) they can achieve non-trivial results in a very general setting. Moreover, they show that their results are tight.
test
[ "YVKTVXVOjF5", "RRi3dDVd9S8", "Us5hxr5r3zNH", "JQCDJbSImDq", "88XW3SL5woQ", "cL6l7m78dLO", "NzCsW17ZHJm", "p019MdJakZV", "ABdX79-eCXk", "eT6KoIbpT0Q", "e2y7RfIamp6" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for all the responses, they have been quite insightful.", " The particular mechanisms we have designed in this paper are not strategyproof. Strategyproofness, as well as equilibrium efficiency (price of anarchy), has been considered in the context of distortion in previous works for matching (see refe...
[ -1, -1, -1, -1, -1, -1, -1, 7, 4, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 1, 3 ]
[ "RRi3dDVd9S8", "Us5hxr5r3zNH", "NzCsW17ZHJm", "e2y7RfIamp6", "eT6KoIbpT0Q", "ABdX79-eCXk", "p019MdJakZV", "nips_2022_gRK9SLQHTDV", "nips_2022_gRK9SLQHTDV", "nips_2022_gRK9SLQHTDV", "nips_2022_gRK9SLQHTDV" ]
nips_2022_1bE24ZURBqm
Biologically Inspired Dynamic Thresholds for Spiking Neural Networks
The dynamic membrane potential threshold, as one of the essential properties of a biological neuron, is a spontaneous regulation mechanism that maintains neuronal homeostasis, i.e., the constant overall spiking firing rate of a neuron. As such, the neuron firing rate is regulated by a dynamic spiking threshold, which has been extensively studied in biology. Existing work in the machine learning community does not employ bioinspired spiking threshold schemes. This work aims at bridging this gap by introducing a novel bioinspired dynamic energy-temporal threshold (BDETT) scheme for spiking neural networks (SNNs). The proposed BDETT scheme mirrors two bioplausible observations: a dynamic threshold has 1) a positive correlation with the average membrane potential and 2) a negative correlation with the preceding rate of depolarization. We validate the effectiveness of the proposed BDETT on robot obstacle avoidance and continuous control tasks under both normal conditions and various degraded conditions, including noisy observations, weights, and dynamic environments. We find that the BDETT outperforms existing static and heuristic threshold approaches by significant margins in all tested conditions, and we confirm that the proposed bioinspired dynamic threshold scheme offers homeostasis to SNNs in complex real-world tasks.
Accept
The paper proposes a biologically plausible dynamic thresholding mechanism. The use of dynamic thresholding in spiking neural networks appears to be novel. The paper does a good job of motivating the choice of the model and illustrating its benefits across a series of control tasks. All reviewers support the acceptance of the paper conditional on the following points being addressed in the revised manuscript: - The new experiments on image processing performed during the discussion phase have to be included in the revised version. - I agree with Reviewer r3Sy that the paper overly emphasizes the biological plausibility of the method as a point of strength. Please try to focus the paper more on the technical benefits and analysis of the proposed method, following the instructions of the reviewer. - Include the complexity analysis of the model in the revised version. - Please import some of the tables provided during the rebuttal into the revised manuscript. - Explicitly state the details of the statistics of your experimental results. It might be helpful to import some of the tables from the supplementary materials into the main text. I recommend the acceptance of this paper.
train
[ "1ZSNU9c3Q0l", "_rf0fzJkVu1", "RosBJT-XYq", "e3IJfSs3-nG", "ypk0rm6WC9D", "EyQA9P9blylI", "MsEiUHuUWV", "hjcIaVgMFs-", "W8ZFFMh_HrVT", "55209be0ALD", "a4OpK0VY8kfo", "aaI3Bpe6Bun", "Erux35IddkV", "rUU-BLQBRJn", "rYoxqz3OcOZ", "9yAmUPBtf72", "7zfws4EhJ27" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hello,\n\nI thank the authors for their responses. The results of their experiments outlined in the table are compelling. I would still suggest the authors have better statistical results, but I will increase my score to a 5.", " I appreciate the authors' effort on the response and additional experiments. I r...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "e3IJfSs3-nG", "RosBJT-XYq", "ypk0rm6WC9D", "a4OpK0VY8kfo", "EyQA9P9blylI", "MsEiUHuUWV", "7zfws4EhJ27", "W8ZFFMh_HrVT", "9yAmUPBtf72", "rYoxqz3OcOZ", "aaI3Bpe6Bun", "rUU-BLQBRJn", "nips_2022_1bE24ZURBqm", "nips_2022_1bE24ZURBqm", "nips_2022_1bE24ZURBqm", "nips_2022_1bE24ZURBqm", "ni...
nips_2022_-bLLVk-WRPy
Structural Kernel Search via Bayesian Optimization and Symbolical Optimal Transport
Despite recent advances in automated machine learning, model selection is still a complex and computationally intensive process. For Gaussian processes (GPs), selecting the kernel is a crucial task, often done manually by the expert. Additionally, evaluating the model selection criteria for Gaussian processes typically scales cubically in the sample size, rendering kernel search particularly computationally expensive. We propose a novel, efficient search method through a general, structured kernel space. Previous methods solved this task via Bayesian optimization and relied on measuring the distance between GPs directly in function space to construct a kernel-kernel. We present an alternative approach by defining a kernel-kernel over the symbolic representation of the statistical hypothesis that is associated with a kernel. We empirically show that this leads to a computationally more efficient way of searching through a discrete kernel space.
Accept
This is a strong submission that benefitted greatly from productive and clarifying discussion between the authors and reviewers, after which the reviewers reached a unanimous stance in favor of acceptance. I recommend the authors to revise the manuscript accordingly in light of these discussions.
train
[ "jDOEqp2Hed1", "og4RA_8m7gc", "2P_Gvk6OANj", "UTnrUPHgutH", "9vhHfAjXuq2", "Ttx26-maaR9", "iheDwxwgKn", "AYjfwETDETP", "tFipHSkQC-Q", "_B8mX1iUmD-", "UmmfCtnMBv9", "IqxXk-908uO" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your careful response to each question.\nI think it is great that positive results are obtained, especially for the validity of the choice of base distance and the hyperparameter optimization of the proposed method, which I wanted to know.\nI continue to recommend the acceptance.", " Thank you for...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "iheDwxwgKn", "9vhHfAjXuq2", "UTnrUPHgutH", "IqxXk-908uO", "UmmfCtnMBv9", "_B8mX1iUmD-", "tFipHSkQC-Q", "nips_2022_-bLLVk-WRPy", "nips_2022_-bLLVk-WRPy", "nips_2022_-bLLVk-WRPy", "nips_2022_-bLLVk-WRPy", "nips_2022_-bLLVk-WRPy" ]
nips_2022_19MmorTQhho
One Inlier is First: Towards Efficient Position Encoding for Point Cloud Registration
Transformer architecture has shown great potential for many visual tasks, including point cloud registration. As an order-aware module, position encoding plays an important role in Transformer architecture applied to the point cloud registration task. In this paper, we propose a one-inlier-based position encoding method for point cloud registration networks. Specifically, we first find one correspondence by a differentiable optimal transport layer, and use it to normalize each point for position encoding. It can eliminate the challenges brought by the different reference frames of two point clouds, and mitigate the feature ambiguity by learning the spatial consistency. Then, we propose a joint approach for establishing correspondence and position encoding, presenting an iterative optimization process. Finally, we design a progressive way for point cloud alignment and feature learning to gradually optimize the rigid transformation. The proposed position encoding is very efficient, requiring only a small amount of additional memory and computation. Extensive experiments demonstrate the proposed method can achieve competitive performance with the state-of-the-art methods in both indoor and outdoor scenes.
Accept
Thanks in large part to the rebuttal conversation, the reviewers converged to accept this paper. The reviewers recognize the interest and value of the approach and the careful empirical results, bolstered by additional results introduced during the discussion. In preparing the camera-ready, the authors of this paper are encouraged to revisit the comments from reviewer 2PZE suggesting to verify whether the two anchor points truly form an ‘inlier’ correspondence; this can be easily done by calculating the distance between the two anchor points under the ground-truth transformation. Also, please make the title change and any other edits promised in the rebuttal, especially discussion of drawbacks and avenues for future research.
train
[ "41WK1sVYUyT", "Rm3HrTQAIF", "xCNWQpWVkD2", "XlIlkqws79s", "3aS5o0RXAHA", "mvDDiJuqD4", "PzdUg0pMMZ2", "WcsVdfl72N", "pUkd9rX0ihB", "31SJOF-G23t", "8oEPpglFeng", "szX9PZcpCO", "s2pcVnRoimF" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your constructive suggestions and helps about improving this paper! We will add the suggested experiments and explanations in the revised version.", " Thanks for providing such a detailed answer!\n\nEspecially the additional experimental evidence on the positional encodings helps to overcome the d...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 5 ]
[ "Rm3HrTQAIF", "PzdUg0pMMZ2", "XlIlkqws79s", "mvDDiJuqD4", "nips_2022_19MmorTQhho", "s2pcVnRoimF", "szX9PZcpCO", "8oEPpglFeng", "31SJOF-G23t", "nips_2022_19MmorTQhho", "nips_2022_19MmorTQhho", "nips_2022_19MmorTQhho", "nips_2022_19MmorTQhho" ]