paper_id            stringlengths (19–21)
paper_title         stringlengths (8–170)
paper_abstract      stringlengths (8–5.01k)
paper_acceptance    stringclasses (18 values)
meta_review         stringlengths (29–10k)
label               stringclasses (3 values)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
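The schema above can be illustrated with a single record. This is a minimal sketch: the field names come from the header, but the short example values (`"r1"`, `"r2"`, …) are placeholders, not real OpenReview identifiers. The key structural fact it checks is that the six `review_*` columns are parallel lists, with index i in each list referring to the same forum post.

```python
# Hypothetical record following the schema above; review_* values are
# placeholders, not real OpenReview IDs.
record = {
    "paper_id": "nips_2022_NI7moUOKtc",
    "paper_title": "Debiased Self-Training for Semi-Supervised Learning",
    "paper_abstract": "...",
    "paper_acceptance": "Accept",
    "meta_review": "...",
    "label": "train",
    "review_ids": ["r1", "r2", "r3"],
    "review_writers": ["official_reviewer", "author", "official_reviewer"],
    "review_contents": ["...", "...", "..."],
    "review_ratings": [7, -1, 6],
    "review_confidences": [4, -1, 3],
    "review_reply_tos": ["nips_2022_NI7moUOKtc", "r1", "nips_2022_NI7moUOKtc"],
}

# All review_* columns are parallel lists: index i in each list describes
# the same forum post, so their lengths must agree.
list_fields = [k for k in record if k.startswith("review_")]
lengths = {len(record[k]) for k in list_fields}
assert len(lengths) == 1, "parallel review_* lists must share one length"
```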
nips_2022_NI7moUOKtc
Debiased Self-Training for Semi-Supervised Learning
Deep neural networks achieve remarkable performance on a wide range of tasks with the aid of large-scale labeled datasets. Yet these datasets are time-consuming and labor-intensive to obtain for realistic tasks. To mitigate the requirement for labeled data, self-training is widely used in semi-supervised learning by iteratively assigning pseudo labels to unlabeled samples. Despite its popularity, self-training is widely believed to be unreliable and often leads to training instability. Our experimental studies further reveal that the bias in semi-supervised learning arises both from the problem itself and from inappropriate training with potentially incorrect pseudo labels, which accumulates error in the iterative self-training process. To reduce this bias, we propose Debiased Self-Training (DST). First, the generation and utilization of pseudo labels are decoupled by two parameter-independent classifier heads to avoid direct error accumulation. Second, we estimate the worst case of self-training bias, where the pseudo-labeling function is accurate on labeled samples yet makes as many mistakes as possible on unlabeled samples. We then adversarially optimize the representations to improve the quality of pseudo labels by avoiding the worst case. Extensive experiments justify that DST achieves an average improvement of 6.3% against state-of-the-art methods on standard semi-supervised learning benchmark datasets and 18.9% against FixMatch on 13 diverse tasks. Furthermore, DST can be seamlessly adapted to other self-training methods, helping to stabilize their training and balance performance across classes both when training from scratch and when finetuning from pre-trained models.
Accept
This paper proposed a novel Debiased Self-Training (DST) approach to reduce both data bias and self-training bias during SSL. The proposed method is simple and empirically seems quite effective. Reviewers are generally positive about the novelty of the method and the significance of the results. While the authors have tried to address and resolve some of the reviewers' issues, the related-work section and the empirical comparison with recent state-of-the-art SSL methods could still be improved. For example, some recent related SSL works are still missing, including but not limited to DASO (Distribution-Aware Semantics-Oriented Pseudo-label for Imbalanced Semi-Supervised Learning, CVPR 2022) and CoMatch (Semi-supervised Learning with Contrastive Graph Regularization, ICCV 2021). Overall, the paper presents a novel framework for SSL and its empirical results are quite positive, so the paper can be accepted, but the authors are recommended to further improve the discussion and comparison of recent related work.
train
[ "MNldj4rUHSB", "h-glLyeY432", "hlP_XBCxNEU", "UL4gThotfQd", "p4II0lBrVaI", "XR0Lw7dYrRk", "YYLORyS8Jl0", "kf6TYThrO3P", "o7HgBwCoOL", "MJYPOh5P-7_", "jKq7_yJC5ov", "IV_xf4jVPC", "BnMIK-Elbt8" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We'd like to thank Reviewer G3sn again for providing an impressively insightful pre-rebuttal review, which has enabled us to make an effective response. We'd also thank you for carefully judging our feedback and acknowledging our work in the final review.", " Thanks for the enthusiastic reply from the authors. ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "h-glLyeY432", "BnMIK-Elbt8", "UL4gThotfQd", "p4II0lBrVaI", "BnMIK-Elbt8", "YYLORyS8Jl0", "IV_xf4jVPC", "o7HgBwCoOL", "jKq7_yJC5ov", "nips_2022_NI7moUOKtc", "nips_2022_NI7moUOKtc", "nips_2022_NI7moUOKtc", "nips_2022_NI7moUOKtc" ]
nips_2022_pd6ipu3jDw
Transformer-based Working Memory for Multiagent Reinforcement Learning with Action Parsing
Learning in real-world multiagent tasks is challenging due to the usual partial observability of each agent. Previous efforts alleviate the partial observability with historical hidden states in Recurrent Neural Networks; however, they do not consider two multiagent characteristics: the multiagent observation consists of a number of object entities, and the action space shows clear entity interactions. To tackle these issues, we propose the Agent Transformer Memory (ATM) network with a transformer-based memory. First, ATM utilizes the transformer to enable unified processing of the factored environmental entities and memory. Inspired by the human working memory process, where a limited capacity of information temporarily held in mind can effectively guide decision-making, ATM updates its fixed-capacity memory with the working memory updating schema. Second, as each of an agent's actions has its particular interaction entities in the environment, ATM parses the action space to introduce the action's semantic inductive bias by binding each action with its specified involved entity to predict the state-action value or logit. Extensive experiments on the challenging SMAC and Level-Based Foraging environments validate that ATM can boost existing multiagent RL algorithms with impressive learning acceleration and performance improvement.
Accept
All reviewers agree that this paper makes a good contribution in developing a novel transformer-based memory structure for MARL. The developed approach is evaluated through comprehensive and solid experiments. The authors have also clearly addressed the questions/concerns raised by the reviewers.
train
[ "glyeHm1r0uL", "p6a5mNg6q6t", "YKbITWdaknG", "gnsWFiQzFyF", "TnxWaDvOZhD", "d5qIbUcfnsQ", "GvfBGPSd7C", "oo_rfeSHwXP", "pTm1O8s_k4h", "P4Tr0D--pFK", "pi8bo1VM453" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I have no remaining concerns and still suggest to accept the paper.", " Hi authors,\n\nthank you for trying to cover my questions.\n\n[wall-clock time] yes, the comparison in wall-clock time is dependent on which hardware is used, but as you mentioned, if same hardware was used, then...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "oo_rfeSHwXP", "gnsWFiQzFyF", "GvfBGPSd7C", "TnxWaDvOZhD", "d5qIbUcfnsQ", "pi8bo1VM453", "P4Tr0D--pFK", "pTm1O8s_k4h", "nips_2022_pd6ipu3jDw", "nips_2022_pd6ipu3jDw", "nips_2022_pd6ipu3jDw" ]
nips_2022_kyY4w4IgtM8
Sharing Knowledge for Meta-learning with Feature Descriptions
Language is an important tool for humans to share knowledge. We propose a meta-learning method that shares knowledge across supervised learning tasks using feature descriptions written in natural language, which have not been used in existing meta-learning methods. The proposed method improves the predictive performance on unseen tasks with a limited number of labeled data by meta-learning from various tasks. With the feature descriptions, we can find relationships across tasks even when their feature spaces are different. The feature descriptions are encoded using a language model pretrained on a large corpus, which enables us to incorporate human knowledge stored in the corpus into meta-learning. In our experiments, we demonstrate that the proposed method achieves better predictive performance than existing meta-learning methods using a wide variety of real-world datasets provided by the statistical offices of the EU and Japan.
Accept
This paper presents a novel meta-learning approach based on learning a sentence encoder which maps feature descriptions to embeddings. The sentence encoder is shown to generalize to new tasks during the test phase, hence allowing few-shot learning. The main concern raised by the reviewers was about the use of only two datasets, which are non-standard for evaluating meta-learning. However, as the authors note, the proposed approach requires datasets where feature descriptions are available, and hence the choice of datasets seems reasonable. The authors are encouraged to revise the paper to discuss how the approach might be generalized to other setups in meta-learning.
train
[ "jSdJE1-nBa1", "zwe2qC4O-Qi", "Z2iNMkdAXuD", "mh7ojlizpSk", "NHc9-6Ss-zW", "nraSoK-9UW", "xxFXC9XmR5B", "LXGKjsbz1xV" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your constructive comments.\n\n> It looks like the only difference between the proposed method and a baseline (MDK + B) is the usage of the feature encoder (Fig 1), which is a 3 layer neural network. It looks like the authors agree with that as well (line 220). So the technical novelty (although gui...
[ -1, -1, -1, -1, 5, 6, 4, 7 ]
[ -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "LXGKjsbz1xV", "xxFXC9XmR5B", "nraSoK-9UW", "NHc9-6Ss-zW", "nips_2022_kyY4w4IgtM8", "nips_2022_kyY4w4IgtM8", "nips_2022_kyY4w4IgtM8", "nips_2022_kyY4w4IgtM8" ]
nips_2022_YxUdazpgweG
MultiScan: Scalable RGBD scanning for 3D environments with articulated objects
We introduce MultiScan, a scalable RGBD dataset construction pipeline leveraging commodity mobile devices to scan indoor scenes with articulated objects and web-based semantic annotation interfaces to efficiently annotate object and part semantics and part mobility parameters. We use this pipeline to collect 230 scans of 108 indoor scenes containing 9458 objects and 4331 parts. The resulting MultiScan dataset provides RGBD streams with per-frame camera poses, textured 3D surface meshes, richly annotated part-level and object-level semantic labels, and part mobility parameters. We validate our dataset on instance segmentation and part mobility estimation tasks and benchmark methods for these tasks from prior work. Our experiments show that part segmentation and mobility estimation in real 3D scenes remain challenging despite recent progress in 3D object segmentation.
Accept
The reviewers tend to agree on the value of this 3D dataset, but point to some questions about labelling and accuracy. The rebuttal very convincingly addresses these points, clarifying the novelty and value of this new dataset. I agree with the authors that datasets are clearly in scope for the main NeurIPS program and that the datasets track explicitly includes as a FAQ: "My work is in scope for this track but possibly also for the main conference. Where should I submit it?" with the answer "This is ultimately your choice".
test
[ "SnjXEVQ8Vm2", "hGjTYegkgu", "fS8cfhPUsxO", "g9U8dcH7hP", "W0JTrPUrMO8x", "mLzE2MzwYtc", "yLHZ0mfWAq", "K5ADA_8ixwL", "XKWPZvuyPI", "sG-YfIucLF", "VTSCtlgUGnr", "6E4lU7FKknX", "biZuN7V3kJj" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their effort in reviewing our paper and recognize the reviewer's opinion. However, we would like to point out that the NeurIPS call for paper explicitly lists \"Infrastructure (e.g., datasets, competitions, implementations, libraries)\" as one of the paper topics sought by the main tra...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "hGjTYegkgu", "XKWPZvuyPI", "yLHZ0mfWAq", "K5ADA_8ixwL", "XKWPZvuyPI", "sG-YfIucLF", "biZuN7V3kJj", "6E4lU7FKknX", "VTSCtlgUGnr", "nips_2022_YxUdazpgweG", "nips_2022_YxUdazpgweG", "nips_2022_YxUdazpgweG", "nips_2022_YxUdazpgweG" ]
nips_2022_qq84D17BPu
Toward Equation of Motion for Deep Neural Networks: Continuous-time Gradient Descent and Discretization Error Analysis
We derive and solve an ``Equation of Motion'' (EoM) for deep neural networks (DNNs), a differential equation that precisely describes the discrete learning dynamics of DNNs. Differential equations are continuous but have played a prominent role even in the study of discrete optimization (gradient descent (GD) algorithms). However, there still exist gaps between differential equations and the actual learning dynamics of DNNs due to discretization error. In this paper, we start from gradient flow (GF) and derive a counter term that cancels the discretization error between GF and GD. As a result, we obtain EoM, a continuous differential equation that precisely describes the discrete learning dynamics of GD. We also derive discretization error to show to what extent EoM is precise. In addition, we apply EoM to two specific cases: scale- and translation-invariant layers. EoM highlights differences between continuous and discrete GD, indicating the importance of the counter term for a better description of the discrete learning dynamics of GD. Our experimental results support our theoretical findings.
Accept
Reviewers were unanimous in recommending that the paper be accepted, and I accordingly recommend the same. I encourage the authors to take into account suggestions made by reviewers so as to further improve the text in the camera-ready version.
test
[ "AqAhq4j-fb", "KQgPqkd6lCc", "t0Cfu9_AZjx", "EXsuZlV5WIm", "OqeDadyeolJw", "tq2sm42Fpnm", "gZNtkH7td47i", "T6EyO-po4xw", "D20lsHVxobp", "y6_7_ZmMu3o", "wG_B2fsmud", "wXJelpaxe_-", "m2ublMtaNAc", "Iz50w-RRmB", "oy8p_VG_H5C", "w9yZ-gYqS5", "L-ZkcQZ90xd", "X_Fgcbk4OfT", "EJ1OTwhJ3xD...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " We really appreciate your suggestions!\nUsing a tiny synthetic dataset and an extremely small network would be a nice idea.\nWe will keep trying for possible future updates.\nWe agree it would make our paper much stronger.", " This reasoning is understandable and I accept it. Could it be possible on a network o...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 3 ]
[ "KQgPqkd6lCc", "gZNtkH7td47i", "EXsuZlV5WIm", "y6_7_ZmMu3o", "tq2sm42Fpnm", "m2ublMtaNAc", "T6EyO-po4xw", "D20lsHVxobp", "MzXKD3ywbEb", "wG_B2fsmud", "EJ1OTwhJ3xD", "m2ublMtaNAc", "X_Fgcbk4OfT", "oy8p_VG_H5C", "L-ZkcQZ90xd", "nips_2022_qq84D17BPu", "nips_2022_qq84D17BPu", "nips_202...
nips_2022_QFQoxCFYEkA
DENSE: Data-Free One-Shot Federated Learning
One-shot Federated Learning (FL) has recently emerged as a promising approach, which allows the central server to learn a model in a single communication round. Despite the low communication cost, existing one-shot FL methods are mostly impractical or face inherent limitations, \eg a public dataset is required, clients' models are homogeneous, and additional data/model information need to be uploaded. To overcome these issues, we propose a novel two-stage \textbf{D}ata-fre\textbf{E} o\textbf{N}e-\textbf{S}hot federated l\textbf{E}arning (DENSE) framework, which trains the global model by a data generation stage and a model distillation stage. DENSE is a practical one-shot FL method that can be applied in reality due to the following advantages: (1) DENSE requires no additional information compared with other methods (except the model parameters) to be transferred between clients and the server; (2) DENSE does not require any auxiliary dataset for training; (3) DENSE considers model heterogeneity in FL, \ie different clients can have different model architectures. Experiments on a variety of real-world datasets demonstrate the superiority of our method. For example, DENSE outperforms the best baseline method Fed-ADI by 5.08\% on CIFAR10 dataset.
Accept
This work proposes a new one-shot FL algorithm. It consists of two steps on the server: a data generation step that trains a GAN to synthesize data utilizing the local models, and a distillation step that distills the ensemble of local models using the generated data. The method has several advantages in comparison with other one-shot FL algorithms, and its performance is verified by experiments. One major concern in the reviews was regarding novelty; this has been addressed by the authors. Please clarify the following in the final version: (1) Teacher's model: What is the quality of the ensemble model (teacher) in the experiment? Does the distilled model improve over the teacher (similar to self-distillation)? Showing the distillation gap is important to understand how the method works. (2) Contribution of GAN in quality: From a pure quality point of view, what if the original data is used to train the ensemble and distilled models? Please also consider adding privacy-utility trade-offs in the future work. It is true that one-shot FL is in general more secure than multi-round methods, and some DP work can be applied here directly. But showing an on-par or better privacy-utility trade-off is an important justification for why it should be adopted.
train
[ "Lg1fP8eq_zo", "GfW2Dp7GWK", "zQoMREJaEQJ1", "3kaITUVELstv", "-hRE4n1ihee", "qkRAm9yoCH9", "xUsK_edVQ_", "9-lTTxgIgfg", "uw3aVFOFvwD", "4I0WqBZPz6W", "CiwZuDY7r9O", "GQ27_-i_f5X", "2y71PkQ1vb4", "ChTm8qxpsaM", "PRrMzeha2kj", "FmHfLYha6an" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " i have updated my score, thanks.", " Hi\n\nThank you for your detailed response and I have improved your score. ", " Dear Reviewer fNZT,\n\nThank you again for your support of our work and valuable feedback! We tried our best to address all mentioned concerns/problems. Are there unclear explanations? We coul...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ "xUsK_edVQ_", "zQoMREJaEQJ1", "PRrMzeha2kj", "2y71PkQ1vb4", "qkRAm9yoCH9", "9-lTTxgIgfg", "GQ27_-i_f5X", "PRrMzeha2kj", "PRrMzeha2kj", "FmHfLYha6an", "ChTm8qxpsaM", "2y71PkQ1vb4", "nips_2022_QFQoxCFYEkA", "nips_2022_QFQoxCFYEkA", "nips_2022_QFQoxCFYEkA", "nips_2022_QFQoxCFYEkA" ]
nips_2022_kImIIKGqDFA
Large-batch Optimization for Dense Visual Predictions
Training a large-scale deep neural network in a large-scale dataset is challenging and time-consuming. The recent breakthrough of large-batch optimization is a promising way to tackle this challenge. However, although the current advanced algorithms such as LARS and LAMB succeed in classification models, the complicated pipelines of dense visual predictions such as object detection and segmentation still suffer from the heavy performance drop in the large-batch training regime. To address this challenge, we propose a simple yet effective algorithm, named Adaptive Gradient Variance Modulator (AGVM), which can train dense visual predictors with very large batch size, enabling several benefits more appealing than prior arts. Firstly, AGVM can align the gradient variances between different modules in the dense visual predictors, such as backbone, feature pyramid network (FPN), detection, and segmentation heads. We show that training with a large batch size can fail with the gradient variances misaligned among them, which is a phenomenon primarily overlooked in previous work. Secondly, AGVM is a plug-and-play module that generalizes well to many different architectures (e.g., CNNs and Transformers) and different tasks (e.g., object detection, instance segmentation, semantic segmentation, and panoptic segmentation). It is also compatible with different optimizers (e.g., SGD and AdamW). Thirdly, a theoretical analysis of AGVM is provided. Extensive experiments on the COCO and ADE20K datasets demonstrate the superiority of AGVM. For example, AGVM demonstrates more stable generalization performance than prior arts under extremely large batch size (i.e., 10k). AGVM can train Faster R-CNN+ResNet50 in 4 minutes without losing performance. It enables training an object detector with one billion parameters in just 3.5 hours, reducing the training time by 20.9×, whilst achieving 62.2 mAP on COCO. The deliverables will be released at https://github.com/Sense-X/AGVM.
Accept
The authors describe a new method of large-batch optimisation for dense prediction computer vision tasks. The reviewers appreciate the simplicity of the method, convincing experiments and the potential practical importance. AC recommends acceptance.
train
[ "fuuyAj-Zwmx", "kyVABKuuTdV", "47H_oAl-m8P", "r7ga24fUZXZ", "6ZWFIaux64v", "M_4W8U5AYSM", "YmwI94VBbV6", "Q-sfGuAHgGT", "Si4jFbZ2JWj", "0TPnDMFk3wu", "11RhDmpM_Oa", "d2k7q0F7EVh", "T-RylbySvly", "ayqSR-1EV71", "VZeyRh3unE_", "vLePTGS36Gn", "WLLtGj3ZObn", "cUjnhTO9-fc", "vh9tDLrEx...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer KpQK,\n\nWe sincerely thank the reviewer for the constructive feedback and support!", " I would like to thank the authors for addressing my questions. Also, I appreciate my fellow reviewers' comments that lead to in-depth discussions with the authors.\n\nThe authors well addressed my concerns. Spe...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "kyVABKuuTdV", "cUjnhTO9-fc", "r7ga24fUZXZ", "ayqSR-1EV71", "M_4W8U5AYSM", "Q-sfGuAHgGT", "_cLFnqNu5vL", "vh9tDLrExpA", "nips_2022_kImIIKGqDFA", "11RhDmpM_Oa", "_cLFnqNu5vL", "T-RylbySvly", "cUjnhTO9-fc", "WLLtGj3ZObn", "vLePTGS36Gn", "vh9tDLrExpA", "nips_2022_kImIIKGqDFA", "nips_2...
nips_2022_35I4narr5A
Few-Shot Continual Active Learning by a Robot
In this paper, we consider a challenging but realistic continual learning problem, Few-Shot Continual Active Learning (FoCAL), where a CL agent is provided with unlabeled data for a new or a previously learned task in each increment and the agent only has limited labeling budget available. Towards this, we build on the continual learning and active learning literature and develop a framework that can allow a CL agent to continually learn new object classes from a few labeled training examples. Our framework represents each object class using a uniform Gaussian mixture model (GMM) and uses pseudo-rehearsal to mitigate catastrophic forgetting. The framework also uses uncertainty measures on the Gaussian representations of the previously learned classes to find the most informative samples to be labeled in an increment. We evaluate our approach on the CORe-50 dataset and on a real humanoid robot for the object classification task. The results show that our approach not only produces state-of-the-art results on the dataset but also allows a real robot to continually learn unseen objects in a real environment with limited labeling supervision provided by its user.
Accept
All reviewers appreciated the importance of the problem being tackled, and the effectiveness of the proposed method. There were a number of concerns about ablations and use of pre-trained feature extractors, but these have been sufficiently addressed in the authors' rebuttal. I agree with the reviewers in recommending acceptance.
train
[ "VnRpvUYZ2tI", "tzSh4pv1CMv", "gLiDdSXDWix", "SWqVULGF5U4", "OE-fiT_6A5m", "brcdAE7VBVd" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank you for your insightful comments and have used these comments to improve the paper.\n\nWeaknesses:\n\nMemory Usage: We have added a discussion about the memory usage of all the approaches in the paper (L 258-271). In particular, GBCL requires only 0.97 MB of space to store GMMs of the previous classes. I...
[ -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, 3, 5, 3 ]
[ "brcdAE7VBVd", "OE-fiT_6A5m", "SWqVULGF5U4", "nips_2022_35I4narr5A", "nips_2022_35I4narr5A", "nips_2022_35I4narr5A" ]
nips_2022_s7SukMH7ie9
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks
Adversarial training (AT) with imperfect supervision is significant but receives limited attention. To push AT towards more practical scenarios, we explore a brand new yet challenging setting, i.e., AT with complementary labels (CLs), which specify a class that a data sample does not belong to. However, the direct combination of AT with existing methods for CLs results in consistent failure, but not on a simple baseline of two-stage training. In this paper, we further explore the phenomenon and identify the underlying challenges of AT with CLs as intractable adversarial optimization and low-quality adversarial examples. To address the above problems, we propose a new learning strategy using gradually informative attacks, which consists of two critical components: 1) Warm-up Attack (Warm-up) gently raises the adversarial perturbation budgets to ease the adversarial optimization with CLs; 2) Pseudo-Label Attack (PLA) incorporates the progressively informative model predictions into a corrected complementary loss. Extensive experiments are conducted to demonstrate the effectiveness of our method on a range of benchmarked datasets. The code is publicly available at: https://github.com/RoyalSkye/ATCL.
Accept
This paper focuses on a significant and challenging problem: adversarial training (AT) with complementary labels. A naive combination of AT with existing complementary learning techniques fails to achieve good performance. The authors conduct both theoretical and empirical analyses of this phenomenon and identified two key challenges including intractable adversarial optimization and low-quality adversarial examples. Furthermore, two attack approaches are proposed accordingly: a warm-up attack to ease the adversarial optimization and a pseudo-label attack to improve the adversarial example quality. All reviewers recognize the effectiveness of the proposed method through experimental evaluations. During the discussion, the authors also successfully addressed the reviewers' questions on the problem settings, the novelty of the pseudo-label attack, warm-up strategies, etc. Based on the positive reviews and thorough discussions, we recommend the acceptance of the paper.
train
[ "tLaYQLJoIgX", "Yvkkotsj0dO", "91BuuBK9q_t", "BPTYboF200m", "-taFj7xoiKPp", "wIIszxf11eP", "T-K_K-JbzBZ", "rBUqOzjChV3", "vb5Bqa2ROSp", "MmF1rhW5Pl2", "9xvb8fTBBF6", "YiGAFtkXZz", "6e-wYzVGJsy", "GGVK5p28IC", "OOFm6Z2i248e", "IiWr7mal4jxs", "LTTDjAk26Zg", "SYs7QX0VnU-", "C8dUB90z...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_...
[ " Thanks the author for the through response and sorry for the late reply. The author has addressed most of my concerns, so I would raise my initial score and recommend this work.", " Dear Reviewer Jbnu,\n\nWould you mind acknowledging our rebuttal? As the discussion due is approaching, if you still have some que...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "q4E_h-xfUUc", "q4E_h-xfUUc", "q4E_h-xfUUc", "q4E_h-xfUUc", "OOFm6Z2i248e", "6e-wYzVGJsy", "nips_2022_s7SukMH7ie9", "C8dUB90z2or", "nips_2022_s7SukMH7ie9", "q4E_h-xfUUc", "q4E_h-xfUUc", "q4E_h-xfUUc", "D92bm06NeyQ", "C8dUB90z2or", "C8dUB90z2or", "SYs7QX0VnU-", "SYs7QX0VnU-", "nips_...
nips_2022_DGwX7wSoC-
Stationary Deep Reinforcement Learning with Quantum K-spin Hamiltonian Equation
A foundational issue in deep reinforcement learning (DRL) is that \textit{Bellman's optimality equation has multiple fixed points}---failing to return a consistent one. A direct evidence is the instability of existing DRL algorithms, namely, the high variance of cumulative rewards over multiple runs. As a fix of this problem, we propose a quantum K-spin Hamiltonian regularization term (H-term) to help a policy network stably find a \textit{stationary} policy, which represents the lowest energy configuration of a system. First, we make a novel analogy between a Markov Decision Process (MDP) and a \textit{quantum K-spin Ising model} and reformulate the objective function into a quantum K-spin Hamiltonian equation, a functional of policy that measures its energy. Then, we propose a generic actor-critic algorithm that utilizes the H-term to regularize the policy/actor network and provide Hamiltonian policy gradient calculations. Finally, on six challenging MuJoCo tasks over 20 runs, the proposed algorithm reduces the variance of cumulative rewards by $65.2\% \sim 85.6\%$ compared with those of existing algorithms.
Reject
The paper proposes to add a regularisation term H to RL algorithms in order to work around issues caused by the multiple fixed points of the Bellman’s optimality equation. The added H term is inspired by quantum field theory, specifically the K-spin Ising model. All reviewers thought this was an interesting idea, but by the end of the review period, there remained some problems with this paper. Indeed, this paper is not a theory paper, and there is no mathematical proof that the added H term does accomplish the stated goal of variance reduction. This leaves us with empirical evidence. Unfortunately, as was pointed out by reviewers, "Experiment is limited to the 6 MuJoCo tasks", which is not enough to convince that the algorithm should generally work. Finally, many reviewers were confused by the claim that PPO solves the Bellman Optimality Equation. By the end of the review, not all reviewers were convinced this problem had been resolved. This point should be clarified, and it would be better for the paper to go through a new round of reviews before being accepted for publication.
train
[ "u0GE2bFfrf", "3nHlaClIgP9", "YlGgTa6ideC", "Ib90PbzwOEx", "3mtHRjay2s3", "Q2BP7OYq4l", "PeIbuVsHfS", "Dr_eklqeIRL", "_Y8eI4Qyivp", "2G3QBX5tBZ4", "Tf6OPfb20QI", "gEuv_7cukjg", "E48uwlhVFLb", "fXOT2Umw8mw", "zgiUUrgpkmS", "km2lSDnl8a3", "kUSX5cE6uoF", "rHkhf77PymIS", "ih8Veh64rSC...
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer",...
[ " The authors sincerely thank all reviewers and area chair. The authors enjoy the discussions and are happy that some key points reached a consensus. \n\nTo recap, this work has made the following major contributions.\n1. Per Reviewer 65t2 and Reviewer Reviewer UzCH’s suggestion, the authors added Appx. E to includ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "nips_2022_DGwX7wSoC-", "3mtHRjay2s3", "Ib90PbzwOEx", "2G3QBX5tBZ4", "Q2BP7OYq4l", "PeIbuVsHfS", "_Y8eI4Qyivp", "Tf6OPfb20QI", "fXOT2Umw8mw", "rHkhf77PymIS", "gEuv_7cukjg", "E48uwlhVFLb", "ih8Veh64rSC", "zgiUUrgpkmS", "QMXeup90k36", "48QfxQ3EYmM", "rHkhf77PymIS", "df-3O7rWju", "B...
nips_2022_kHrE2vi5Rvs
Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization
Deep reinforcement learning (DRL)-based combinatorial optimization (CO) methods (i.e., DRL-NCO) have shown significant merit over the conventional CO solvers as DRL-NCO is capable of learning CO solvers less relying on problem-specific expert domain knowledge (heuristic method) and supervised labeled data (supervised learning method). This paper presents a novel training scheme, Sym-NCO, which is a regularizer-based training scheme that leverages universal symmetricities in various CO problems and solutions. Leveraging symmetricities such as rotational and reflectional invariance can greatly improve the generalization capability of DRL-NCO because it allows the learned solver to exploit the commonly shared symmetricities in the same CO problem class. Our experimental results verify that our Sym-NCO greatly improves the performance of DRL-NCO methods in four CO tasks, including the traveling salesman problem (TSP), capacitated vehicle routing problem (CVRP), prize collecting TSP (PCTSP), and orienteering problem (OP), without utilizing problem-specific expert domain knowledge. Remarkably, Sym-NCO outperformed not only the existing DRL-NCO methods but also a competitive conventional solver, the iterative local search (ILS), in PCTSP at 240$\times$ faster speed. Our source code is available at https://github.com/alstn12088/Sym-NCO.git.
Accept
All reviewers agree that the paper presents interesting results, hence I recommend acceptance. On the other hand, there are several issues which need to be addressed in the final version of the paper: 1. The authors should add the experimental results listed in the responses, as these demonstrate more convincingly the significance of the results. 2. The mathematical formulation of the problem and the description of the solution are of extremely low quality (this almost made me reject the paper). For example, nothing is defined in equation 1, neither the meaning nor the possible values of the different variables: What are the nodes? What values can features take? What is a solution? Going on to Sections 2.1 and 2.2, it is again unclear what a solution is (not to mention a solution sequence), hence why we care about the corresponding MDP, and what the motivations are in the definition of the MDP. What is a policy? What is a solution set? And so on. These must be written in a way which is understandable to a reader who is not already very familiar with the topic.
train
[ "wlnh4kdd-Ak", "W_MdHXC4Gc", "FmCL282S6Rz", "gpTtFwt34sb", "-GkwaHQvMN9", "vIGGjgve-4b", "tjfPV_fIkw", "DMDTe586xNL", "CgQtXscPaM", "kwo2irW1MD", "zYmVM69pJ11", "19v1ZJ1bPA", "MOhzeoPFroy", "D9w-HR-JryC", "018NFbibjUx", "BbpQrVofgkA", "XyXLAD3rdq6", "rDXFlM_zbcQ", "TbhJz1v8V_M", ...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " Thank you for addressing my concerns regarding the claims on expressive power and hard vs. soft invariant learning.\n\nI find the updated Figure 1 and accompany text more convincing. I acknowledge that the previously made claims regarding the expressive power of ENNs and the required expressive power for combinat...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "W_MdHXC4Gc", "FmCL282S6Rz", "gpTtFwt34sb", "tjfPV_fIkw", "czJ_Nqv19cM", "DMDTe586xNL", "j1rjzzhov7", "kwo2irW1MD", "018NFbibjUx", "zYmVM69pJ11", "19v1ZJ1bPA", "MOhzeoPFroy", "D9w-HR-JryC", "Xcz0e2MbDJT", "BbpQrVofgkA", "XyXLAD3rdq6", "rDXFlM_zbcQ", "j1rjzzhov7", "czJ_Nqv19cM", ...
nips_2022_PZtIiZ43E2R
List-Decodable Sparse Mean Estimation
Robust mean estimation is one of the most important problems in statistics: given a set of samples in $\mathbb{R}^d$ where an $\alpha$ fraction are drawn from some distribution $D$ and the rest are adversarially corrupted, we aim to estimate the mean of $D$. A surge of recent research interest has been focusing on the list-decodable setting where $\alpha \in (0, \frac12]$, and the goal is to output a finite number of estimates among which at least one approximates the target mean. In this paper, we consider that the underlying distribution $D$ is Gaussian with $k$-sparse mean. Our main contribution is the first polynomial-time algorithm that enjoys sample complexity $O\big(\mathrm{poly}(k, \log d)\big)$, i.e. poly-logarithmic in the dimension. One of our core algorithmic ingredients is using low-degree {\em sparse polynomials} to filter outliers, which may find more applications.
Accept
This paper studies the problem of list-decodable mean estimation under the assumption that the true mean is *sparse* and the clean distribution is Gaussian with identity covariance. In this setting, we are given n data points and a parameter $0<\alpha \leq 1/2$ such that: (1) an unknown $\alpha$-fraction of the dataset consists of iid samples from $N(\mu, I)$, where the target mean $\mu$ is $k$-sparse (i.e., supported on an unknown set of at most $k$ coordinates), and (2) no assumptions are made on the remaining points. The goal is to output a list of $O(1/\alpha)$ many vectors such that with high probability at least one of these vectors is close to $\mu$, in L2 distance. This list-decodable mean estimation problem has been well-studied in the dense case (i.e., when $k = d$ where $d$ is the dimension). The authors give an efficient algorithm for the sparse case achieving significantly better sample complexity than in the dense case. The submitted version of the paper achieves error $O(\alpha^{-1/2})$, relying on degree-$2$ polynomials. On August 8, the authors updated their manuscript, achieving improved error using higher-degree polynomials. The proposed algorithm (both the initial version and the updated version) uses the multi-filtering technique of Diakonikolas, Kane, Stewart from STOC'18 [DKS18b]. Their approach crucially builds on the multi-filtering technique of [DKS18b], to the extent that the pseudocode of the algorithm and the analysis itself are very similar. On the other hand, the work includes some non-trivial steps to adapt the multi-filtering technique to the sparse setting. The reviewers eventually agreed that the paper is above the acceptance threshold. The current scores represent the updated scores by the reviewers after the August update of the submission's results. 
One issue to note here is that the reviewers did not have time to verify (or even read in any detail) the updated version at a technical level; hence, I have low confidence in its correctness. Overall, the paper seems to be slightly above the acceptance threshold, assuming that the updated version of the paper is correct.
train
[ "2U5M9o7uA3u", "nctcBLhkWwq", "kI-b7SFliXw", "2Sxc3h-pO1v", "K-8gCOWBdlFr", "bdY3wuzpdLg", "jhCHB0SX87", "3jydUowCshb", "fyw3cJQeIAh", "Djyzc4Bizay", "ftDFbFtKV19" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. After reading the response and also other reviews, I would like to adjust my scores. However, I still have doubt on the technical novelty and the presentation for it in the manuscript, which in part also pointed out by Reviewer 7URr. Since I need to evaluate the paper as submitted, I t...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "3jydUowCshb", "bdY3wuzpdLg", "fyw3cJQeIAh", "K-8gCOWBdlFr", "jhCHB0SX87", "ftDFbFtKV19", "Djyzc4Bizay", "fyw3cJQeIAh", "nips_2022_PZtIiZ43E2R", "nips_2022_PZtIiZ43E2R", "nips_2022_PZtIiZ43E2R" ]
nips_2022_vK53GLZJes8
The Pitfalls of Regularization in Off-Policy TD Learning
Temporal Difference (TD) learning is ubiquitous in reinforcement learning, where it is often combined with off-policy sampling and function approximation. Unfortunately learning with this combination (known as the deadly triad), exhibits instability and unbounded error. To account for this, modern Reinforcement Learning methods often implicitly (or sometimes explicitly) assume that regularization is sufficient to mitigate the problem in practice; indeed, the standard deadly triad examples from the literature can be ``fixed'' via proper regularization. In this paper, we introduce a series of new counterexamples to show that the instability and unbounded error of TD methods is not solved by regularization. We demonstrate that, in the off-policy setting with linear function approximation, TD methods can fail to learn a non-trivial value function under any amount of regularization; we further show that regularization can induce divergence under common conditions; and we show that one of the most promising methods to mitigate this divergence (Emphatic TD algorithms) may also diverge under regularization. We further demonstrate such divergence when using neural networks as function approximators. Thus, we argue that the role of regularization in TD methods needs to be reconsidered, given that it is insufficient to prevent divergence and may itself introduce instability. There needs to be much more care in the practical and theoretical application of regularization to Reinforcement Learning methods.
Accept
This paper presents a counterexample-driven analysis of regularization in TD learning with function approximation. Despite the paper's simplicity, the reviewers unanimously thought there was a good contribution being made here, and I agree. Highlights include a clarity of presentation and new insights into what is known as the deadly triad. The reviewers generally agreed that these results are relevant to deep RL today, but would have appreciated more forward guidance.
val
[ "FKKbT7FLuK0", "w-WAZ0KhMd", "8SA5Wlco-VD7", "KTA6iAqWO8d", "Y_6Pb03oYWx", "yhO1N-HI9p", "hY47VQ99s2j", "balME3sPHm", "5dhVt5HWfFj", "RtUBEJYenPf", "wPo-yeBbV89", "RwZjvughZMA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I have edited my review/score upward after reading your clarifications. ", " Thank you for your answers! The authors have addressed my questions and I appreciate the additional experiments the authors provided.", " I thank the authors for their additional experiments and other upda...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "yhO1N-HI9p", "balME3sPHm", "Y_6Pb03oYWx", "hY47VQ99s2j", "RwZjvughZMA", "wPo-yeBbV89", "RtUBEJYenPf", "5dhVt5HWfFj", "nips_2022_vK53GLZJes8", "nips_2022_vK53GLZJes8", "nips_2022_vK53GLZJes8", "nips_2022_vK53GLZJes8" ]
nips_2022_mvbr8A_eY2n
Optimal Efficiency-Envy Trade-Off via Optimal Transport
We consider the problem of allocating a distribution of items to $n$ recipients where each recipient has to be allocated a fixed, pre-specified fraction of all items, while ensuring that each recipient does not experience too much envy. We show that this problem can be formulated as a variant of the semi-discrete optimal transport (OT) problem, whose solution structure in this case has a concise representation and a simple geometric interpretation. Unlike existing literature that treats envy-freeness as a hard constraint, our formulation allows us to \emph{optimally} trade off efficiency and envy continuously. Additionally, we study the statistical properties of the space of our OT based allocation policies by showing a polynomial bound on the number of samples needed to approximate the optimal solution from samples. Our approach is suitable for large-scale fair allocation problems such as the blood donation matching problem, and we show numerically that it performs well on a prior realistic data simulator.
Accept
Executive summary: The problem considered in this paper is as follows: There is a distribution over items X \subseteq [0,\bar{x}]^n where x_i denotes the value of the item to recipient i. There are also matching constraints {p_i}_{i \in N}, which require that each agent should be matched a p_i fraction of the time. The goal is to maximize the sum of recipient utilities subject to the matching probability constraints, while also ensuring that no recipient i envies another recipient by more than a factor \gamma_i. It is shown that this problem can be solved as a semi-discrete optimal transport problem. They also give a stochastic optimization algorithm which converges at rate O(1/sqrt(T)), and a PAC-style sample complexity result (showing that with O(n/eps^2) samples an eps-approximate solution can be found with high probability). Discussion and recommendation: This paper is a bit out of my comfort zone, so I am mostly relying on the reviews, which are rather positive and supportive of the paper. The connection to optimal transport is appreciated, and the approximation results (while rather standard) seem to find their audience as well. Weak accept.
train
[ "6e5ZbDh3kB", "D4sFqM63OCF", "DrqrED4_r6", "rUAZtQzAMS_", "FqokmmbpZfe", "DZ6ISIdBLFj", "Y8EC0xph9i7", "cOyDSjzknnw", "b3KCWxGn9Oc", "gjNubjL1wj4", "kC5nMJvTed2", "T4yQSXCVJ1F", "uqsFI5ejxS0", "1o13chfEk7X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for the response. I will keep my score unchanged. I encourage authors to do a more extensive literature review and compare the results and methods in the future version.", " Thank the authors for the comments, which confirm my original thoughts about this paper. For this reason, I will stick t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 3 ]
[ "gjNubjL1wj4", "b3KCWxGn9Oc", "FqokmmbpZfe", "DZ6ISIdBLFj", "Y8EC0xph9i7", "cOyDSjzknnw", "1o13chfEk7X", "uqsFI5ejxS0", "T4yQSXCVJ1F", "kC5nMJvTed2", "nips_2022_mvbr8A_eY2n", "nips_2022_mvbr8A_eY2n", "nips_2022_mvbr8A_eY2n", "nips_2022_mvbr8A_eY2n" ]
nips_2022_NN_TpS5dpo5
Physically-Based Face Rendering for NIR-VIS Face Recognition
Near infrared (NIR) to Visible (VIS) face matching is challenging due to the significant domain gaps as well as a lack of sufficient data for cross-modality model training. To overcome this problem, we propose a novel method for paired NIR-VIS facial image generation. Specifically, we reconstruct 3D face shape and reflectance from a large 2D facial dataset and introduce a novel method of transforming the VIS reflectance to NIR reflectance. We then use a physically-based renderer to generate a vast, high-resolution and photorealistic dataset consisting of various poses and identities in the NIR and VIS spectra. Moreover, to facilitate the identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss, which not only reduces the modality gap between NIR and VIS images at the domain level but encourages the network to focus on the identity features instead of facial details, such as poses and accessories. Extensive experiments conducted on four challenging NIR-VIS face recognition benchmarks demonstrate that the proposed method can achieve comparable performance with the state-of-the-art (SOTA) methods without requiring any existing NIR-VIS face recognition datasets. With slightly fine-tuning on the target NIR-VIS face recognition datasets, our method can significantly surpass the SOTA performance. Code and pretrained models are released under the insightface GitHub.
Accept
The paper received 3 positive reviews. The reviewers all lean towards acceptance after the rebuttal. Overall this work can be of great interest to the community working on NIR-VIS recognition. But I hope the authors will present additional visualized results, as suggested by the reviewers.
train
[ "LAfiyyEDufW", "HtoeE3PK9-", "xxISs1sQozF0", "yvctP-Hs8gE", "rYvySiUn702", "Sj7wt6KCf94", "qQeilzKJpJm", "Pn6IESwYeA9" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your answers. They addressed my concerns pre-rebuttal.\nI decide to keep my Accept score.", " We sincerely thank all reviewers for their valuable comments and insightful advice on our paper. We are pleased to see that all reviewers give highly positive ratings (one accept and two borderline accepts)....
[ -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "rYvySiUn702", "nips_2022_NN_TpS5dpo5", "Pn6IESwYeA9", "qQeilzKJpJm", "Sj7wt6KCf94", "nips_2022_NN_TpS5dpo5", "nips_2022_NN_TpS5dpo5", "nips_2022_NN_TpS5dpo5" ]
nips_2022_i7WqjtdD0u
Learning With an Evolving Class Ontology
Lifelong learners must recognize concept vocabularies that evolve over time. A common yet underexplored scenario is learning with class labels over time that refine/expand old classes. For example, humans learn to recognize ${\tt dog}$ before dog breeds. In practical settings, dataset $\textit{versioning}$ often introduces refinement to ontologies, such as autonomous vehicle benchmarks that refine a previous ${\tt vehicle}$ class into ${\tt school-bus}$ as autonomous operations expand to new cities. This paper formalizes a protocol for studying the problem of $\textit{Learning with Evolving Class Ontology}$ (LECO). LECO requires learning classifiers in distinct time periods (TPs); each TP introduces a new ontology of "fine" labels that refines old ontologies of "coarse" labels (e.g., dog breeds that refine the previous ${\tt dog}$). LECO explores such questions as whether to annotate new data or relabel the old, how to leverage coarse labels, and whether to finetune the previous TP's model or train from scratch. To answer these questions, we leverage insights from related problems such as class-incremental learning. We validate them under the LECO protocol through the lens of image classification (on CIFAR and iNaturalist) and semantic segmentation (on Mapillary). Extensive experiments lead to some surprising conclusions; while the current status quo in the field is to relabel existing datasets with new class ontologies (such as COCO-to-LVIS or Mapillary1.2-to-2.0), LECO demonstrates that a far better strategy is to annotate $\textit{new}$ data with the new ontology. However, this produces an aggregate dataset with inconsistent old-vs-new labels, complicating learning. To address this challenge, we adopt methods from semi-supervised and partial-label learning. We demonstrate that such strategies can surprisingly be made near-optimal, in the sense of approaching an "oracle" that learns on the aggregate dataset exhaustively labeled with the newest ontology.
Accept
The setting of evolving and refining classes over time is certainly a practical one in domains such as text classification. This paper offers some insights on questions like whether the entire dataset should be relabeled, or whether one can achieve near-optimal performance by labeling only the new chunk. The paper concludes that joint training on old and new data, even if inconsistently labeled, in conjunction with semi-supervised learning can be fairly effective.
train
[ "RnK-1QjrKX", "xXkA6HloE7w", "3IiD-LeZ6lX", "UWp470cV8G7", "FhzJ-CsUDSD", "ipYiMikSC52", "0JLuh8-PMc", "GPVb_QHC_FY", "PBl3sqOd5M2", "h3ETQ0iGko", "E_uwV6_trR", "nSo92ZH_wMJ", "I0D9lbZHJbo", "VXQtlKLsLF2", "h96p-BtWSuC" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you Reviewer YxYF for your interaction and upgraded rating!\n\nAgain, we appreciate your positive attitude with \"*no reservation about the quality of the submission*\" (e.g., clarity and well-organized paper structure, the sound experimental setup and proposed models, novel setting of LECO, novel approach...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "UWp470cV8G7", "3IiD-LeZ6lX", "E_uwV6_trR", "PBl3sqOd5M2", "ipYiMikSC52", "GPVb_QHC_FY", "GPVb_QHC_FY", "h96p-BtWSuC", "h3ETQ0iGko", "VXQtlKLsLF2", "nSo92ZH_wMJ", "I0D9lbZHJbo", "nips_2022_i7WqjtdD0u", "nips_2022_i7WqjtdD0u", "nips_2022_i7WqjtdD0u" ]
nips_2022_x2WTG5bV977
The Curse of Low Task Diversity: On the Failure of Transfer Learning to Outperform MAML and their Empirical Equivalence
Recently, it has been observed that a transfer learning solution might be all we need to solve many few-shot learning benchmarks -- thus raising important questions about when and how meta-learning algorithms should be deployed. In this paper, we seek to clarify these questions by 1. proposing a novel metric -- the {\it diversity coefficient} -- to measure the diversity of tasks in a few-shot learning benchmark and 2. by comparing MAML and transfer learning under fair conditions (same architecture, same optimizer and all models trained to convergence). Using the diversity coefficient, we show that the popular MiniImagenet and Cifar-fs few-shot learning benchmarks have low diversity. This novel insight contextualizes claims that transfer learning solutions are better than meta-learned solutions in the regime of low diversity under a fair comparison. Specifically, we empirically find that a low diversity coefficient correlates with a high similarity between transfer learning and Model-Agnostic Meta-Learning (MAML) learned solutions in terms of accuracy at meta-test time and classification layer similarity (using feature based distance metrics like SVCCA, PWCCA, CKA, and OPD). To further support our claim, we find this meta-test accuracy holds even as the model size changes. Therefore, we conclude that in the low diversity regime, MAML and transfer learning have equivalent meta-test performance when both are compared fairly. We also hope our work inspires more thoughtful constructions and quantitative evaluations of meta-learning benchmarks in the future.
Reject
The paper performs an empirical study of transfer learning and MAML (as a meta-learning method) through the lens of task diversity. When the task diversity is low, the authors claim that the performance of MAML and transfer learning methods is similar under a fair comparison (e.g. same architecture, optimizer, etc). All reviewers are on the negative side for this paper due to weak experimental support, poor write-up, weak novelty, etc., and the authors also failed to convince the reviewers through their rebuttal responses. Hence, the AC cannot recommend acceptance in its current form. In particular, the AC agrees with the points "the paper looks kind of an intermediate work on the way to its finalized version" and "I couldn't understand the clear takeaway or message from the paper except for certain empirical insights" raised by reviewers. The AC thinks that this paper would become much stronger if the authors proposed new, better meta-learning benchmarks using the insights obtained through their analysis.
train
[ "ZW9MPODnxe", "W8i2v2Qt1Gh", "OiezYFIryg", "GX75BQ5JVfW", "6AafxDJfqiL", "KAfrK-mJAgp", "i6zPBBaOkKh", "HVpRm3lwQip", "4-O5PLN94Fb", "uhVB3MINJj", "W96vJbpx-Si", "jema0OvfYf" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you the authros for addressing my concerns raised in the initial review. However, I am not satisfied with the reply from the authors. Please see my comments below.\n\n> Providing different results using different probe networks is more superior than an emsemble approach\n\nThis is quite arguable and I will ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "KAfrK-mJAgp", "KAfrK-mJAgp", "jema0OvfYf", "KAfrK-mJAgp", "KAfrK-mJAgp", "W96vJbpx-Si", "uhVB3MINJj", "4-O5PLN94Fb", "nips_2022_x2WTG5bV977", "nips_2022_x2WTG5bV977", "nips_2022_x2WTG5bV977", "nips_2022_x2WTG5bV977" ]
nips_2022_BWEGx_GFCbL
Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks still remains largely elusive. In this paper, we study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability. We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, for both of which we develop consistent excess risk bounds by balancing the optimization and generalization via early-stopping. As compared to existing analysis on GD, our new analysis requires a relaxed overparameterization assumption and also applies to SGD. The key for the improvement is a better estimation of the smallest eigenvalues of the Hessian matrices of the empirical risks and the loss function along the trajectories of GD and SGD by providing a refined estimation of their iterates.
Accept
The paper studies the generalization of a committee machine using algorithmic stability. Compared to previous works, the authors obtain similar generalization error for smaller width, for both GD and SGD. Reviewers had some conflicting opinions about this paper, with major concerns about the limited novelty compared to [46] and the limited interpretability of the generalization bound beyond NTK results. However, they valued the ability to control the bias term in a kernel-free manner, which was left open in [46], and found the stability analysis interesting and promising. I therefore recommend acceptance of the paper.
train
[ "9b6r2YXH-m", "p6O-DOk__zp", "tX9zIgPYAgA", "i8QMqeNKGzR", "YD4v72-bbLu", "fyo4MR6Xi6K", "dL2vXim_CVg", "bvwMr7tFc9", "09ff1HjLjC", "rqkQrjcnZiW", "pa2aTKfkNEJ", "nYV1gDB6ZRq", "8Zxp6IzO-T_", "PBxpm46PYkm" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the nice suggestion. We will follow your advice and will move the proof ideas of GD and SGD to the main text in the revised version.", " Thanks for the clarification of the role of Assumption 3 particularly in Theorem 6. The added sections in the appendix during the rebuttal on the proof ideas of GD ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3, 2 ]
[ "p6O-DOk__zp", "dL2vXim_CVg", "fyo4MR6Xi6K", "nips_2022_BWEGx_GFCbL", "PBxpm46PYkm", "8Zxp6IzO-T_", "nYV1gDB6ZRq", "pa2aTKfkNEJ", "rqkQrjcnZiW", "nips_2022_BWEGx_GFCbL", "nips_2022_BWEGx_GFCbL", "nips_2022_BWEGx_GFCbL", "nips_2022_BWEGx_GFCbL", "nips_2022_BWEGx_GFCbL" ]
nips_2022_V88BafmH9Pj
A Contrastive Framework for Neural Text Generation
Text generation is of great importance to many natural language processing applications. However, maximization-based decoding methods (e.g., beam search) of neural language models often lead to degenerate solutions---the generated text is unnatural and contains undesirable repetitions. Existing approaches introduce stochasticity via sampling or modify training objectives to decrease the probabilities of certain tokens (e.g., unlikelihood training). However, they often lead to solutions that lack coherence. In this work, we show that an underlying reason for model degeneration is the anisotropic distribution of token representations. We present a contrastive solution: (i) SimCTG, a contrastive training objective to calibrate the model's representation space, and (ii) a decoding method---contrastive search---to encourage diversity while maintaining coherence in the generated text. Extensive experiments and analyses on three benchmarks from two languages demonstrate that our proposed approach outperforms state-of-the-art text generation methods as evaluated by both human and automatic metrics.
Accept
All four reviewers favored accepting the paper, as the proposed contrastive search approach to mitigating the text degeneration problem is simple and effective and has applications to a variety of NLG tasks. Its evaluation is quite comprehensive and includes competitive baselines, human evaluation, and evaluation of both LM/generation quality on Wikitext-103 and the effect on a downstream task (dialog). Two of the reviewers were more hesitant (borderline accept), but one of them was quite satisfied with the author response and the other reviewer didn't raise any major issue. The one remaining concern is that experiments with GPT-2 were based on the "small" model, but the rebuttal shows that the findings of the paper mostly hold with bigger language models (medium and large), though the gains become relatively small with XL. We suggest including these additional experiments in the next version of the paper, along with further discussion of these smaller differences.
val
[ "vhGXz0_eBAx", "1brnWHQRnZ3", "geUDHasYjwb", "vpNH_k-4He", "yVgpdBO1sT4X", "59k677nSXUq", "_a58NwIHW4xV", "bPwRE2FlxNd", "TEl5cd0GTRt", "8o_drRyrpT", "zVIOpvI5KN6", "3YR6VMoUPq5", "2mHZdd0oQ3Z", "T56hkp-qWo", "kdhFQZtIHzs" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for reading our response!", " The response has addressed my major concerns. However, contrastive learning and its findings for NLP are not new. \n\nI decide to raise my rating to 5 -- borderline accept.\n\n", " Thank you for reading our response!", " Thank you for your comprehensive reply and add...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "1brnWHQRnZ3", "yVgpdBO1sT4X", "vpNH_k-4He", "_a58NwIHW4xV", "T56hkp-qWo", "kdhFQZtIHzs", "kdhFQZtIHzs", "2mHZdd0oQ3Z", "2mHZdd0oQ3Z", "3YR6VMoUPq5", "nips_2022_V88BafmH9Pj", "nips_2022_V88BafmH9Pj", "nips_2022_V88BafmH9Pj", "nips_2022_V88BafmH9Pj", "nips_2022_V88BafmH9Pj" ]
nips_2022_Gsbnnc--bnw
Generative Visual Prompt: Unifying Distributional Control of Pre-Trained Generative Models
Generative models (e.g., GANs, diffusion models) learn the underlying data distribution in an unsupervised manner. However, many applications of interest require sampling from a particular region of the output space or sampling evenly over a range of characteristics. For efficient sampling in these scenarios, we propose Generative Visual Prompt (PromptGen), a framework for distributional control over pre-trained generative models by incorporating knowledge of other off-the-shelf models. PromptGen defines control as energy-based models (EBMs) and samples images in a feed-forward manner by approximating the EBM with invertible neural networks, avoiding optimization at inference. Our experiments demonstrate how PromptGen can efficiently sample from several unconditional generative models (e.g., StyleGAN2, StyleNeRF, diffusion autoencoder, NVAE) in a controlled or/and de-biased manner using various off-the-shelf models: (1) with the CLIP model as control, PromptGen can sample images guided by text, (2) with image classifiers as control, PromptGen can de-bias generative models across a set of attributes or attribute combinations, and (3) with inverse graphics models as control, PromptGen can sample images of the same identity in different poses. (4) Finally, PromptGen reveals that the CLIP model shows a "reporting bias" when used as control, and PromptGen can further de-bias this controlled distribution in an iterative manner. The code is available at https://github.com/ChenWu98/Generative-Visual-Prompt.
Accept
This work concerns a unifying method for repurposing "off the shelf" conditional models in order to define an energy-based model of vectors in the latent space of a pre-trained generative model, for the purpose of controlling synthesis, and a feed-forward approximation using invertible neural networks. The authors present several use cases and experiments on each across a range of different model types. Reviewers were positive on the presentation, originality and usefulness, and generally felt the experiments were well chosen. There were some concerns regarding discussion of societal impact (gbfq), the fact that most results involved faces and those that didn't were less compelling (5eQN), and clarity around the derived energy function and positioning relative to prior work (byc9). Most concerns were addressed in rebuttal, however QXTM felt quantitative results evaluating controllability, specifically, left much to be desired, and lowered their score following a rebuttal that they felt failed to address this issue. Based upon the discussion and my own reading of the paper, the AC views this work in an overall positive light, the valid concerns of QXTM notwithstanding. With some reservations, I recommend acceptance.
train
[ "hJymahhlX6v", "IwrGC_DBXUt", "rPzdRASZ-dh", "mOb2uXRwbIb", "TTePoXoVJRi", "MAqlyZDBMJg", "RyNqXOwil61", "HL6kcFW84G-", "BR0cJsdNuH8", "F3zpUR7kGCw", "_bhYvOU8pN8", "q0cBDN592SC", "X_hfVOZcQe", "_GN8gxqmk-" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Approximation error**\n\nThanks for pointing this out! The approximate error can be measured by $D_{\\text{KL}}(p_{\\theta}(\\boldsymbol{z}) || p(\\boldsymbol{z} | \\mathcal{C}))$. This KL divergence is defined in Eq. (9). However, it is worth noting that $\\log Z$ is expensive to estimate in practice (recall t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 4 ]
[ "rPzdRASZ-dh", "mOb2uXRwbIb", "RyNqXOwil61", "MAqlyZDBMJg", "nips_2022_Gsbnnc--bnw", "_GN8gxqmk-", "X_hfVOZcQe", "q0cBDN592SC", "_bhYvOU8pN8", "nips_2022_Gsbnnc--bnw", "nips_2022_Gsbnnc--bnw", "nips_2022_Gsbnnc--bnw", "nips_2022_Gsbnnc--bnw", "nips_2022_Gsbnnc--bnw" ]
nips_2022_LCWQ8OYsf-O
Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks
Adapting large-scale pretrained models to various downstream tasks via fine-tuning is a standard method in machine learning. Recently, parameter-efficient fine-tuning methods have shown promise in adapting a pretrained model to different tasks while training only a few parameters. Despite their success, most existing methods are proposed in Natural Language Processing tasks with language Transformers, and adaptation to Computer Vision tasks with Vision Transformers remains under-explored, especially for dense vision tasks. Further, in multi-task settings, individually fine-tuning and storing separate models for different tasks is inefficient. In this work, we provide an extensive single- and multi-task parameter-efficient benchmark and examine existing parameter-efficient fine-tuning NLP methods for vision tasks. Our results on four different dense vision tasks showed that existing methods cannot be efficiently integrated due to the hierarchical nature of the Hierarchical Vision Transformers. To overcome this issue, we propose Polyhistor and Polyhistor-Lite, consisting of Decomposed HyperNetworks and Layer-wise Scaling Kernels, to share information across different tasks with a few trainable parameters. This leads to favorable performance improvements against existing parameter-efficient methods while using fewer trainable parameters. Specifically, Polyhistor achieves competitive accuracy compared to the state-of-the-art while only using less than 10% of their trainable parameters. Furthermore, our methods show larger performance gains when large networks and more pretraining data are used.
Accept
The proposed Polyhistor and Polyhistor-Lite for parameter-efficient multi-task adaptation achieve competitive performance gains on dense vision datasets. All reviewers gave consistently positive scores. The requested experiments on more backbones, self-supervised backbones, and additional analyses were added accordingly during the discussion phase. Reviewer Gyt3 was concerned about the unclear explanation of the framework and why the HyperNetwork and scalable kernels help. The authors addressed these issues and revised the paper. The meta-reviewers thus recommend accepting this paper, and encourage the authors to add all new experiments and make the presentation clearer in the camera-ready.
train
[ "4q58QxRTGT", "0Zc2CCLW99fO", "VqN8Z0RMQ6a", "pFuS339hGX6", "Mv5uZT4Esjf", "_rAU6gSeBy3", "xMsjF5Jfm7v", "h2uf4ti5i6v", "TKMQ4dd9mtk", "hAYmh8ei0x0", "uNbSqOdBwcY", "l2YQMJZbt90", "xyjO6EvinD7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I will keep my score unchanged.", " Thank the authors for the response. \nI have no further questions. After reading the rebuttal and other reviewers' comments, I would like to keep my score at 7. ", " We thank all reviewers for providing constructive thoughtful feedback!\n\nWe a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "pFuS339hGX6", "_rAU6gSeBy3", "nips_2022_LCWQ8OYsf-O", "Mv5uZT4Esjf", "xyjO6EvinD7", "l2YQMJZbt90", "uNbSqOdBwcY", "TKMQ4dd9mtk", "hAYmh8ei0x0", "nips_2022_LCWQ8OYsf-O", "nips_2022_LCWQ8OYsf-O", "nips_2022_LCWQ8OYsf-O", "nips_2022_LCWQ8OYsf-O" ]
nips_2022_hYa_lseXK8
Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm
During initial iterations of training in most Reinforcement Learning (RL) algorithms, agents perform a significant number of random exploratory steps. In the real world, this can limit the practicality of these algorithms as it can lead to potentially dangerous behavior. Hence safe exploration is a critical issue in applying RL algorithms in the real world. This problem has been recently well studied under the Constrained Markov Decision Process (CMDP) Framework, where in addition to single-stage rewards, an agent receives single-stage costs or penalties as well depending on the state transitions. The prescribed cost functions are responsible for mapping undesirable behavior at any given time-step to a scalar value. The goal then is to find a feasible policy that maximizes reward returns while constraining the cost returns to be below a prescribed threshold during training as well as deployment. We propose an On-policy Model-based Safe Deep RL algorithm in which we learn the transition dynamics of the environment in an online manner as well as find a feasible optimal policy using the Lagrangian Relaxation-based Proximal Policy Optimization. We use an ensemble of neural networks with different initializations to tackle epistemic and aleatoric uncertainty issues faced during environment model learning. We compare our approach with relevant model-free and model-based approaches in Constrained RL using the challenging Safe Reinforcement Learning benchmark - the Open AI Safety Gym. We demonstrate that our algorithm is more sample efficient and results in lower cumulative hazard violations as compared to constrained model-free approaches. Further, our approach shows better reward performance than other constrained model-based approaches in the literature.
Accept
This paper presents the Model-based PPO-Lagrangian (MBPPO-Lagrangian) algorithm for safe RL, which reduces epistemic and aleatoric uncertainty with an ensemble of neural networks. The authors evaluated the proposed algorithm on safety benchmarks such as Safety Gym: PointGoal1 and CarGoal, on which MBPPO-Lagrangian showed better performance and safety guarantees than other model-free and model-based safe RL baseline algorithms. This paper presents a model-based safe RL algorithm that has immense applications in RL for safety-critical problems. The paper is generally well-written and intuitive, with most concepts clearly explained. The safety results demonstrated in the experiments are convincing, and the algorithms are easy enough to implement for most practical applications. Therefore, the review committee reached a consensus to recommend acceptance of this work to NeurIPS 2022.
train
[ "uezg7zjUD5", "OPLPi24xKoP", "7lIs5rejBL", "qNsyWuKTHO6", "VsTvJSQChey", "tZ0PHYBZs6h", "H7O4upmToN4", "X827usIJD8X", "RnWxJbUHe65", "Uhqu0DGsDrn", "D0Jy53AC0Uf", "MIaULM1ywB-", "5URMZ7Chmqa" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank you for your comments. Your suggested changes will be incorporated.", " Thanks for addressing all my concerns regarding the paper! \n\nI have two more recommendations to improve the paper. I wonder whether showing the standard deviations of graphs in Figure 1 and in Appendix D is possible. Also, it wou...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "OPLPi24xKoP", "RnWxJbUHe65", "tZ0PHYBZs6h", "X827usIJD8X", "nips_2022_hYa_lseXK8", "5URMZ7Chmqa", "MIaULM1ywB-", "D0Jy53AC0Uf", "Uhqu0DGsDrn", "nips_2022_hYa_lseXK8", "nips_2022_hYa_lseXK8", "nips_2022_hYa_lseXK8", "nips_2022_hYa_lseXK8" ]
nips_2022_wO53HILzu65
On the Generalizability and Predictability of Recommender Systems
While other areas of machine learning have seen more and more automation, designing a high-performing recommender system still requires a high level of human effort. Furthermore, recent work has shown that modern recommender system algorithms do not always improve over well-tuned baselines. A natural follow-up question is, "how do we choose the right algorithm for a new dataset and performance metric?" In this work, we start by giving the first large-scale study of recommender system approaches by comparing 24 algorithms and 100 sets of hyperparameters across 85 datasets and 315 metrics. We find that the best algorithms and hyperparameters are highly dependent on the dataset and performance metric. However, there is also a strong correlation between the performance of each algorithm and various meta-features of the datasets. Motivated by these findings, we create RecZilla, a meta-learning approach to recommender systems that uses a model to predict the best algorithm and hyperparameters for new, unseen datasets. By using far more meta-training data than prior work, RecZilla is able to substantially reduce the level of human involvement when faced with a new recommender system application. We not only release our code and pretrained RecZilla models, but also all of our raw experimental results, so that practitioners can train a RecZilla model for their desired performance metric: https://github.com/naszilla/reczilla.
Accept
The core idea is to specialize meta-learning approaches to recommender systems. The specialization is done using features of the datasets themselves, so it differs from the usual AutoML approaches. Code is provided, allowing easy comparison against many well-tuned baselines in the domain. Besides being easily reusable, the work also demonstrates that several papers accepted in the past few years were overclaiming because of lazy comparisons. It also formalizes the experience many practitioners have about which algorithms are "good" depending on the metric and data for recommender systems. Reviewers significantly updated their scores during the discussion phase as the authors ran a new set of comparisons and clarified some sections. I feel the work can be reused, so I recommend acceptance.
train
[ "nwh4otVU_Tb", "3Rb0NxMwFbp", "KVBFGo1Oo9", "e3DXg3wzJ_L", "it-WS0JUk5e", "UYXDGaVQ6YJ", "LGR-xo2P8Uv", "6KZnF42ZDYI", "Bxmi2cLJW72", "JRpUdnQCZKc", "V0dDaPfvgbt", "ppycnMAdaBp", "uS_9dY4zB4a" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your additional feedback. We agree with your minor comment about giving more details for practitioners leveraging Section 2. We have now updated Section C.2 with concrete examples and more details.\n\nNote that the particular use cases and goals of a practitioner may be very specific (they may be co...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "KVBFGo1Oo9", "6KZnF42ZDYI", "it-WS0JUk5e", "Bxmi2cLJW72", "uS_9dY4zB4a", "ppycnMAdaBp", "V0dDaPfvgbt", "JRpUdnQCZKc", "nips_2022_wO53HILzu65", "nips_2022_wO53HILzu65", "nips_2022_wO53HILzu65", "nips_2022_wO53HILzu65", "nips_2022_wO53HILzu65" ]
nips_2022_mMT8bhVBoUa
Generalized Variational Inference in Function Spaces: Gaussian Measures meet Bayesian Deep Learning
We develop a framework for generalized variational inference in infinite-dimensional function spaces and use it to construct a method termed Gaussian Wasserstein inference (GWI). GWI leverages the Wasserstein distance between Gaussian measures on the Hilbert space of square-integrable functions in order to determine a variational posterior using a tractable optimization criterion. It avoids pathologies arising in standard variational function space inference. An exciting application of GWI is the ability to use deep neural networks in the variational parametrization of GWI, combining their superior predictive performance with the principled uncertainty quantification analogous to that of Gaussian processes. The proposed method obtains state-of-the-art performance on several benchmark datasets.
Accept
Technically solid paper that introduces and benchmarks a novel inference framework, with application to inference in GPs. All reviewers recommend to accept, after a decent amount of discussion in which reviewers raised their scores in response to a fairly significant round of updates to the manuscript itself. Recommend to accept, despite some questions regarding overall impact.
test
[ "Nm8NRsL2A9v", "1O8Nr77q73-", "N5NVHOc0eyJ", "NhBLJ7z0tg7", "-1kh2llWtte", "flhtJXI4NEN", "_14PmfOKPYE", "d5sPuG_X0k", "59QwKQ5fne", "b74DgukvHAA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the rebuttal, for answering my questions and for the additional figure. I stand with my previous score, that is I would like to see this paper accepted. ", " I thank the authors for their detailed response. Changes made to address the motivation and model comparisons will substantially improve the ma...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "-1kh2llWtte", "_14PmfOKPYE", "flhtJXI4NEN", "nips_2022_mMT8bhVBoUa", "b74DgukvHAA", "59QwKQ5fne", "d5sPuG_X0k", "nips_2022_mMT8bhVBoUa", "nips_2022_mMT8bhVBoUa", "nips_2022_mMT8bhVBoUa" ]
nips_2022_zuL5OYIBgcV
Non-deep Networks
Latency is of utmost importance in safety-critical systems. In neural networks, lowest theoretical latency is dependent on the depth of the network. This begs the question -- is it possible to build high-performing ``non-deep" neural networks? We show that it is. To do so, we use parallel subnetworks instead of stacking one layer after another. This helps effectively reduce depth while maintaining high performance. By utilizing parallel substructures, we show, for the first time, that a network with a depth of just 12 can achieve top-1 accuracy over 80% on ImageNet, 96% on CIFAR10, and 81% on CIFAR100. We also show that a network with a low-depth (12) backbone can achieve an AP of 48% on MS-COCO. We analyze the scaling rules for our design and show how to increase performance without changing the network's depth. Finally, we provide a proof of concept for how non-deep networks could be used to build low-latency recognition systems. Code is available at https://github.com/imankgoyal/NonDeepNetworks.
Accept
This work considers the task of training state-of-the-art CNNs with limited depth. The benefits considered in this work are related to the potential parallelization induced by depth reduction. This paper generated a fair bit of discussion with the reviewers about the motivations and the basic thesis. The authors do a good job of representing their viewpoint, and adding a version of this discussion to the final manuscript will undoubtedly be needed. The empirical results look quite promising, but the authors are also encouraged to further discuss an additional motivation for reducing depth (e.g., a theoretical reason, as proposed by reviewer zBDf, which is currently only one paragraph long in the related work section) and/or to perform a deeper study of the hyper-parameters affecting accuracy/latency. With proper framing of the question studied here, the scope of evaluation, and the assumptions on the hardware, this will be an interesting contribution to NeurIPS.
val
[ "q7W43nrhK0t", "OBBlOppRHD8", "5VLOyDT8buX", "w9GscWn15mJ", "Uky-tUI00K", "IK_rou7anBQ", "NYDQaSIHDd9", "AiZrWvEpgrOT", "2AuPYWuWOEM", "eaJMbkiEalA", "D3qntmYPHplx", "a61N5woR6NZ", "BQxZA8Ul4b1", "Z8TYkIwQoB2", "uZ7Bkbtdf_7", "P3d_IJLpbLD", "43AI4ktSBQU", "MWlH6RGrDKf", "p6oduJ1L...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " We believe that the reviewer is missing the point that as we have hardware with more cores, depth will increasingly become an important and limiting factor as there is no way to circumvent depth (or number of sequential steps). \n\nAlso, we pointed out that the differences caused in latency by large and small dep...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4, 5 ]
[ "5VLOyDT8buX", "w9GscWn15mJ", "Uky-tUI00K", "NYDQaSIHDd9", "AiZrWvEpgrOT", "2JNlxK-qRYw", "a61N5woR6NZ", "BQxZA8Ul4b1", "2JNlxK-qRYw", "p6oduJ1LbbH", "MWlH6RGrDKf", "BQxZA8Ul4b1", "43AI4ktSBQU", "P3d_IJLpbLD", "nips_2022_zuL5OYIBgcV", "nips_2022_zuL5OYIBgcV", "nips_2022_zuL5OYIBgcV",...
nips_2022_YpyGV_i8Z_J
Private Estimation with Public Data
We initiate the study of differentially private (DP) estimation with access to a small amount of public data. For private estimation of $d$-dimensional Gaussians, we assume that the public data comes from a Gaussian that may have vanishing similarity in total variation distance with the underlying Gaussian of the private data. We show that under the constraints of pure or concentrated DP, $d+1$ public data samples are sufficient to remove any dependence on the range parameters of the private data distribution from the private sample complexity, which is known to be otherwise necessary without public data. For separated Gaussian mixtures, we assume that the underlying public and private distributions are the same, and we consider two settings: (1) when given a dimension-independent amount of public data, the private sample complexity can be improved polynomially in terms of the number of mixture components, and any dependence on the range parameters of the distribution can be removed in the approximate DP case; (2) when given an amount of public data linear in the dimension, the private sample complexity can be made independent of range parameters even under concentrated DP, and additional improvements can be made to the overall sample complexity.
Accept
This paper studies private estimation with a small amount of public data. The idea is that the small public dataset may allow for significantly stronger positive results (e.g., in terms of sample complexity of private data). The authors study two fundamental settings in this direction -- estimating a Gaussian and a Gaussian mixture -- and provide interesting and technically non-trivial positive results. The consensus from the reviews and subsequent discussion is that this work is both conceptually and technically interesting.
train
[ "zkhfDLCiy9N", "Rx_OO_Vxe28P", "XhNmPwNF4Nb", "jJsn9o6-OMo", "cY4G7gWM2Qk", "h8QHIRStJ4j", "anp0uJjZZ2O", "whVRWHRWn06", "_JQKTqJsO6U", "WTalcKPar82", "XSKys1JaYGQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for the thoughtful response. The example you gave helps to clarify my question about the relation to $(\\epsilon, \\delta)$-DP, which is my main concern. Therefore, I'm willing to raise my score.", " **A proof-of-concept numerical result:** \nAlthough we position our work as theoretical, sinc...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "cY4G7gWM2Qk", "h8QHIRStJ4j", "XSKys1JaYGQ", "cY4G7gWM2Qk", "WTalcKPar82", "anp0uJjZZ2O", "whVRWHRWn06", "_JQKTqJsO6U", "nips_2022_YpyGV_i8Z_J", "nips_2022_YpyGV_i8Z_J", "nips_2022_YpyGV_i8Z_J" ]
nips_2022_ptUZl8xDMMN
Graph Scattering beyond Wavelet Shackles
This work develops a flexible and mathematically sound framework for the design and analysis of graph scattering networks with variable branching ratios and generic functional calculus filters. Spectrally-agnostic stability guarantees for node- and graph-level perturbations are derived; the vertex-set non-preserving case is treated by utilizing recently developed mathematical-physics based tools. Energy propagation through the network layers is investigated and related to truncation stability. New methods of graph-level feature aggregation are introduced and stability of the resulting composite scattering architectures is established. Finally, scattering transforms are extended to edge- and higher order tensorial input. Theoretical results are complemented by numerical investigations: Suitably chosen scattering networks conforming to the developed theory perform better than traditional graph-wavelet based scattering approaches in social network graph classification tasks and significantly outperform other graph-based learning approaches to regression of quantum-chemical energies on QM$7$.
Accept
In the discussion, we reached a clear consensus that this paper is interesting for the NeurIPS community and should be accepted. The author's rebuttal and subsequent discussion were very useful and we are looking forward to the final version of the paper with the promised improvements implemented.
train
[ "H5sJBYUSuTq", "xlMie1xPK3H", "kQMVxsctndp2", "my1BXEbD3Dl", "Q0E4qKk6V6d", "1ruWgbUT4SU", "QcRPprNnWKK", "sDyns08qrUA", "KVs3xoipRh9", "k6Mqa_nwFS-K", "xplFjj40VcD", "m0PxVrT73Ok", "OM4UnIjUQIM", "7AAdlmclehi" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " It was a pleasure implementing suggestions and providing answers and explanations for questions!", " I thank the authors for their very detailed rebuttal.\n\nI am not going to go over each bullet point again, but I am quite satisfied with the changes provided; especially in section 3 and after each theorem, the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "xlMie1xPK3H", "sDyns08qrUA", "my1BXEbD3Dl", "Q0E4qKk6V6d", "1ruWgbUT4SU", "QcRPprNnWKK", "OM4UnIjUQIM", "KVs3xoipRh9", "k6Mqa_nwFS-K", "7AAdlmclehi", "m0PxVrT73Ok", "nips_2022_ptUZl8xDMMN", "nips_2022_ptUZl8xDMMN", "nips_2022_ptUZl8xDMMN" ]
nips_2022__r8pCrHwq39
PointTAD: Multi-Label Temporal Action Detection with Learnable Query Points
Traditional temporal action detection (TAD) usually handles untrimmed videos with a small number of action instances from a single label (e.g., ActivityNet, THUMOS). However, this setting might be unrealistic as different classes of actions often co-occur in practice. In this paper, we focus on the task of multi-label temporal action detection that aims to localize all action instances from a multi-label untrimmed video. Multi-label TAD is more challenging as it requires fine-grained class discrimination within a single video and precise localization of the co-occurring instances. To mitigate this issue, we extend the sparse query-based detection paradigm from the traditional TAD and propose the multi-label TAD framework of PointTAD. Specifically, our PointTAD introduces a small set of learnable query points to represent the important frames of each action instance. This point-based representation provides a flexible mechanism to localize the discriminative frames at boundaries as well as the important frames inside the action. Moreover, we perform the action decoding process with the Multi-level Interactive Module to capture both point-level and instance-level action semantics. Finally, our PointTAD employs an end-to-end trainable framework simply based on RGB input for easy deployment. We evaluate our proposed method on two popular benchmarks and introduce the new metric of detection-mAP for multi-label TAD. Our model outperforms all previous methods by a large margin under the detection-mAP metric, and also achieves promising results under the segmentation-mAP metric.
Accept
This paper considers the problem of detecting temporal activities in videos which contain multiple co-occurring activities of different labels. It is an important problem that arises in many computer vision tasks. The paper is generally well written. Specifically, using learnable query points to select representative frames for segment-level video representation seems to be a novel idea. The experiment results also show promises of the proposed method. Nevertheless, a number of comments and questions were raised by the reviewers. We thank the authors for responding to them in detail and even revising their paper accordingly, which includes providing more experiment results to support their claims. The authors are recommended to further revise their paper by addressing the remaining comments raised.
train
[ "GwFlpRTtCAa", "sLxxsjWiggZ", "95r86oB2TS", "BDByOkCCLZq", "qSRhg8oT7Wj", "g9IpSWVoX7", "SNZ34LpXgU8", "e-pmQrvBxaJ", "NjDMGGS43Kx", "sT52aQ_t4QM", "ki1tlv5P7l", "BzOSwbiteML", "qwDbSbC0buz", "Zf9ih2zCrer", "o8vOZSqT5D" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > What is the difference between the method proposed in this paper and [2,4], which the author does not seem to mention?\n\nAs we stated in Line 93 of the revised paper and in the first response, [2] and [4] all use points to represent object tracks or spatiotemporal action tracks, with a focus on representing t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "sLxxsjWiggZ", "SNZ34LpXgU8", "qwDbSbC0buz", "BzOSwbiteML", "nips_2022__r8pCrHwq39", "nips_2022__r8pCrHwq39", "BzOSwbiteML", "qwDbSbC0buz", "qwDbSbC0buz", "Zf9ih2zCrer", "o8vOZSqT5D", "nips_2022__r8pCrHwq39", "nips_2022__r8pCrHwq39", "nips_2022__r8pCrHwq39", "nips_2022__r8pCrHwq39" ]
nips_2022_c63eTNYh9Y
New Lower Bounds for Private Estimation and a Generalized Fingerprinting Lemma
We prove new lower bounds for statistical estimation tasks under the constraint of $(\varepsilon,\delta)$-differential privacy. First, we provide tight lower bounds for private covariance estimation of Gaussian distributions. We show that estimating the covariance matrix in Frobenius norm requires $\Omega(d^2)$ samples, and in spectral norm requires $\Omega(d^{3/2})$ samples, both matching upper bounds up to logarithmic factors. We prove these bounds via our main technical contribution, a broad generalization of the fingerprinting method to exponential families. Additionally, using the private Assouad method of Acharya, Sun, and Zhang, we show a tight $\Omega(d/(\alpha^2 \varepsilon))$ lower bound for estimating the mean of a distribution with bounded covariance to $\alpha$-error in $\ell_2$-distance. Prior known lower bounds for all these problems were either polynomially weaker or held under the stricter condition of $(\varepsilon,0)$-differential privacy.
Accept
This paper establishes improved and near-optimal lower bounds for private statistical estimation, specifically for private covariance estimation of a Gaussian and heavy-tailed mean estimation. The first result leverages a novel technical result, proved in this paper: a generalization of the fingerprinting lemma (Bun, Steinke, Ullman '17) to exponential families. The second result relies on a private version of Assouad's lemma (developed in recent work). The reviewers agreed that this is a technically novel and interesting work that clearly merits acceptance.
train
[ "sGjjfm3_Ioi", "UFrIN6aTcVNy", "95BLqa0FgLe", "Sy3jcu-g_h1a", "hHDlRD3wACM", "WDKyCJoDL-", "xHVThIbryo4", "uPt88MTYOrZ", "fVJQQYxkV2N1", "WtwOtppAaJT", "w7nsNcnudux", "rDomioTuPax" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for addressing my questions. I have no other questions or concerns for now.", " >I understand the challenges you mentioned, and I have no doubt that your extension of the FP lemma to exponential families is completely non-trivial. Yet, you cannot ignore the fact that the structure of the p...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "fVJQQYxkV2N1", "Sy3jcu-g_h1a", "xHVThIbryo4", "uPt88MTYOrZ", "WDKyCJoDL-", "WtwOtppAaJT", "uPt88MTYOrZ", "w7nsNcnudux", "rDomioTuPax", "nips_2022_c63eTNYh9Y", "nips_2022_c63eTNYh9Y", "nips_2022_c63eTNYh9Y" ]
nips_2022_AezHeiz7eF5
Theory and Approximate Solvers for Branched Optimal Transport with Multiple Sources
Branched Optimal Transport (BOT) is a generalization of optimal transport in which transportation costs along an edge are subadditive. This subadditivity models an increase in transport efficiency when shipping mass along the same route, favoring branched transportation networks. We here study the NP-hard optimization of BOT networks connecting a finite number of sources and sinks in $\mathbb{R}^2$. First, we show how to efficiently find the best geometry of a BOT network for many sources and sinks, given a topology. Second, we argue that a topology with more than three edges meeting at a branching point is never optimal. Third, we show that the results obtained for the Euclidean plane generalize directly to optimal transportation networks on two-dimensional Riemannian manifolds. Finally, we present a simple but effective approximate BOT solver combining geometric optimization with a combinatorial optimization of the network topology.
Accept
The paper presents novel structural and algorithmic results for solving the branched optimal transport problem. In the problem, flow is to be routed from sources to sinks (terminals) in the plane with the possibility of adding non-terminal intermediate nodes. The flow cost on each edge is proportional to the distance between endpoints and subadditive in flow amount; this encourages solutions with “branching”, where flow is routed along common paths. The problem is to select the topology of the graph, location of branching points, and flow amounts. The paper presents structural results about the optimal solution: it is always a tree, which also determines the optimal flow amounts. Results are also given about the branching factor and angles in an optimal solution, which are used in a heuristic algorithm for placing the branching points. Reviewers unanimously found the results to be novel and interesting, and the paper of high quality. They appreciated the theoretical and algorithmic work. Reviewers questioned what applications this work might have (especially to ML) — no concrete applications were given in the paper, but the authors speculated on some in the rebuttal. Given this, two reviewers questioned whether the scope of the paper was a good match for NeurIPS. The meta-reviewer finds the match appropriate, given the interest in optimization and OT within the NeurIPS community, but these reviewer comments indicate that the audience is likely narrow, and the paper could be strengthened by connecting it to concrete applications.
train
[ "3pMPlvkRFwe", "ylPh2gi6xn", "KC-TzCy628c", "GZ5pJnsRepz", "nKXWP8jVdojf", "enb4P-hXN5k6", "rFa9Tc-3Nl3", "2tb0IH8aOqT", "5rUZtcxpnD", "s4mT_ASaojb", "SG5LvrbfGKi", "ONC5PZ5g9L", "XMmavrpH0Hu", "JmRxS9U7AcL", "eGSHJG5gccD", "mpUUZQmbDx", "jQqo_E7N6go", "pLi6wTF0b0S" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_rev...
[ " I am satisfied with the response to my review. So I can keep the score as it is.", " I see, thank you for the quick reply.", " Thank you for acknowledging our rebuttal. \nWLOG, the OT solution can be assumed to be acyclic [2]. As such, it provides a valid input topology for our greedy optimization (Alg. 3), ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "SG5LvrbfGKi", "KC-TzCy628c", "enb4P-hXN5k6", "s4mT_ASaojb", "XMmavrpH0Hu", "ONC5PZ5g9L", "2tb0IH8aOqT", "pLi6wTF0b0S", "jQqo_E7N6go", "mpUUZQmbDx", "eGSHJG5gccD", "JmRxS9U7AcL", "JmRxS9U7AcL", "nips_2022_AezHeiz7eF5", "nips_2022_AezHeiz7eF5", "nips_2022_AezHeiz7eF5", "nips_2022_AezH...
nips_2022_cy1TKLRAEML
Is $L^2$ Physics Informed Loss Always Suitable for Training Physics Informed Neural Network?
The Physics-Informed Neural Network (PINN) approach is a new and promising way to solve partial differential equations using deep learning. The $L^2$ Physics-Informed Loss is the de-facto standard in training Physics-Informed Neural Networks. In this paper, we challenge this common practice by investigating the relationship between the loss function and the approximation quality of the learned solution. In particular, we leverage the concept of stability from the partial differential equation literature to study the asymptotic behavior of the learned solution as the loss approaches zero. With this concept, we study an important class of high-dimensional non-linear PDEs in optimal control, the Hamilton-Jacobi-Bellman (HJB) Equation, and prove that for general $L^p$ Physics-Informed Loss, a wide class of HJB equations is stable only if $p$ is sufficiently large. Therefore, the commonly used $L^2$ loss is not suitable for training PINN on those equations, while $L^{\infty}$ loss is a better choice. Based on the theoretical insight, we develop a novel PINN training algorithm to minimize the $L^{\infty}$ loss for HJB equations, which is in a similar spirit to adversarial training. The effectiveness of the proposed algorithm is empirically demonstrated through experiments. Our code is released at https://github.com/LithiumDA/L_inf-PINN.
Accept
The reviewers reached a consensus that this paper meets the bar for being accepted at NeurIPS, and therefore the AC recommends acceptance. Please refer to the reviews and authors' responses for the reviewers' opinions on the strengths and weaknesses of the paper.
train
[ "lzKQ7ALCOtT", "qklb0Gkp8xD", "xfRTtnQ-uCp", "Nm1GEpESFoF", "kcZC1KDo9yI", "ekVW-eJqusS", "wvT6LIwTqTT8", "Zkqe2E3wDw_", "4kfqLYrhs6-", "5rkZpSq4tFA", "gDdT72sqm7p", "-v0u8h9LLck", "CAEN1eG69eu", "yIgmk8k0DJ", "1WLC2q3EYV_", "w5Wcc7psQBQ", "DTrMs5iM7Mq" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your re-evaluation and rating update. We will include the relevant discussions on Sobolev norms and $L^p$ norms in the next version of our paper.", " Thanks for the explanation, I suggest making some of these more concrete in the paper. Meanwhile, I have updated my score.", " Thanks fo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "qklb0Gkp8xD", "xfRTtnQ-uCp", "Nm1GEpESFoF", "4kfqLYrhs6-", "nips_2022_cy1TKLRAEML", "wvT6LIwTqTT8", "gDdT72sqm7p", "nips_2022_cy1TKLRAEML", "DTrMs5iM7Mq", "w5Wcc7psQBQ", "1WLC2q3EYV_", "yIgmk8k0DJ", "yIgmk8k0DJ", "nips_2022_cy1TKLRAEML", "nips_2022_cy1TKLRAEML", "nips_2022_cy1TKLRAEML...
nips_2022_3v44ls_4dbg
Learning Infinite-Horizon Average-Reward Restless Multi-Action Bandits via Index Awareness
We consider the online restless bandits with average-reward and multiple actions, where the state of each arm evolves according to a Markov decision process (MDP), and the reward of pulling an arm depends on both the current state of the corresponding MDP and the action taken. Since finding the optimal control is typically intractable for restless bandits, existing learning algorithms are often computationally expensive or come with a regret bound that is exponential in the number of arms and states. In this paper, we advocate \textit{index-aware reinforcement learning} (RL) solutions to design RL algorithms operating on a much smaller dimensional subspace by exploiting the inherent structure in restless bandits. Specifically, we first propose novel index policies to address dimensionality concerns, which are provably optimal. We then leverage the indices to develop two low-complexity index-aware RL algorithms, namely, (i) GM-R2MAB, which has access to a generative model; and (ii) UC-R2MAB, which learns the model using an upper confidence style online exploitation method. We prove that both algorithms achieve a sub-linear regret that is only polynomial in the number of arms and states. A key differentiator between our algorithms and existing ones stems from the fact that our RL algorithms contain a novel exploitation scheme that leverages our proposed provably optimal index policies for decision-making.
Accept
The paper tackles the challenging problem of online learning of restless multi-armed bandit (RMAB) policies. Among its contributions are the introduction of a new tractable class of RMAB policies to learn over, and tractable learning algorithms, with regret guarantees, along the lines of statistical upper confidence bounds. These could serve as useful building blocks for theoreticians and practitioners in the area alike. The contributions of the paper are unanimously acknowledged to be positive by the reviewers, after their initial reviews were responded to in detail by the paper's author(s), leading to helpful clarifications. In view of this, I recommend acceptance of the paper.
train
[ "fLVM0zkGHnM", "3Sx6kpsHG3u", "qZf1A87mjqZ", "RX3NqraIWzl", "DA7-oQ-LdFB", "nZIMtYPfoxY", "nS2k9E43qMN", "XtPI5mRP-_", "TZZMxJpIM1r", "6WXLvMqQJuI", "zxIbltrT2SD", "qZD7GWv8EYb", "QsKfAKUhsap", "JWbjjLO-z3C", "s1ptbfK0Jig", "Ap2gFcISm9c" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer again for clarifying this problem. In the following, we further discuss the reward function in the context of restless multi-armed bandits (RMAB). Note that we consider RMAB, more precisely, R2MAB in this paper, rather than the classical MAB (which is stateless in general), while each arm i...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "3Sx6kpsHG3u", "qZD7GWv8EYb", "6WXLvMqQJuI", "qZD7GWv8EYb", "nZIMtYPfoxY", "nS2k9E43qMN", "XtPI5mRP-_", "Ap2gFcISm9c", "s1ptbfK0Jig", "zxIbltrT2SD", "JWbjjLO-z3C", "QsKfAKUhsap", "nips_2022_3v44ls_4dbg", "nips_2022_3v44ls_4dbg", "nips_2022_3v44ls_4dbg", "nips_2022_3v44ls_4dbg" ]
nips_2022_SbAaNa97bzp
Understanding Robust Learning through the Lens of Representation Similarities
Representation learning, \textit{i.e.} the generation of representations useful for downstream applications, is a task of fundamental importance that underlies much of the success of deep neural networks (DNNs). Recently, \emph{robustness to adversarial examples} has emerged as a desirable property for DNNs, spurring the development of robust training methods that account for adversarial examples. In this paper, we aim to understand how the properties of representations learned by robust training differ from those obtained from standard, non-robust training. This is critical to diagnosing numerous salient pitfalls in robust networks, such as degradation of performance on benign inputs, poor generalization of robustness, and increase in over-fitting. We utilize a powerful set of tools known as representation similarity metrics, across 3 vision datasets, to obtain layer-wise comparisons between robust and non-robust DNNs with different architectures, training procedures and adversarial constraints. Our experiments highlight hitherto unseen properties of robust representations that we posit underlie the behavioral differences of robust networks. We discover a lack of specialization in robust networks' representations along with a disappearance of `block structure'. We also find overfitting during robust training largely impacts deeper layers. These, along with other findings, suggest ways forward for the design and training of better robust networks.
Accept
The authors study representations obtained from image classifiers and contrast classic training with adversarial training, yielding so-called non-robust and robust networks, respectively. The authors primarily use the CKA metric on CIFAR10 and subsets of ImageNet2012 to provide several novel insights into "salient pitfalls" in robust networks, which suggest that robust representations are less specialized with a weaker block structure, that early layers in robust networks are largely unaffected by adversarial examples as the representations seem similar for benign vs. perturbed inputs, that deeper layers overfit during robust learning, and that models trained to be robust to different threat models have similar representations. The reviewers agreed that these contributions are interesting to the larger community and that the presentation of the results is clear and straightforward. The main issues raised by the reviewers were carefully addressed in the rebuttal. Please update the manuscript as discussed.
train
[ "7HpjJ94RyAW", "s1oLM5BKF3", "tRrEM0kYLbr", "_tjIwXoMgsB", "Kh6CK-frVT-", "ZA4rYsKE9Pe", "gGvNlbV4AVR", "CMF8HzGYsUtX", "2BBuLo8GFI3", "nF_rZCebApx", "vklEEhwbbZPp", "lckD6fDNP9F", "PBnLSj1IjTm", "DO13pufvfsy", "3jVYrkflzXU", "zHE4_I-J1kO", "YMUytX7qYxe", "zr6vVvkyLP", "YriM7CZhn...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " In light of the authors' willingness to sufficiently guard the reach of the claims made and clarify the wording, I've updated my score.", " We thank the reviewer for engaging with our rebuttal. \n\nIn the literature, the notion of adversarial examples is commonly associated with pixel-wise perturbation-based ad...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4, 4 ]
[ "s1oLM5BKF3", "_tjIwXoMgsB", "YMUytX7qYxe", "vklEEhwbbZPp", "2BBuLo8GFI3", "3jVYrkflzXU", "CMF8HzGYsUtX", "zr6vVvkyLP", "nF_rZCebApx", "YriM7CZhnoC", "TOno0nETe5x", "nips_2022_SbAaNa97bzp", "YMUytX7qYxe", "YMUytX7qYxe", "zHE4_I-J1kO", "nips_2022_SbAaNa97bzp", "nips_2022_SbAaNa97bzp",...
nips_2022_W-Z8n9HrWn0
Why Do Artificially Generated Data Help Adversarial Robustness
In the adversarial training framework of \cite{carmon2019unlabeled,gowal2021improving}, people use generated/real unlabeled data with pseudolabels to improve adversarial robustness. We provide statistical insights to explain why the artificially generated data improve adversarial training. In particular, we study how the attack strength and the quality of the unlabeled data affect adversarial robustness in this framework. Our results show that with a high-quality unlabeled data generator, adversarial training can benefit greatly from this framework under large attack strength, while a poor generator can still help to some extent. To make adaptations concerning the quality of generated data, we propose an algorithm that performs online adjustment to the weight between the labeled real data and the generated data, aiming to optimize the adversarial risk. Numerical studies are conducted to verify our theories and show the effectiveness of the proposed algorithm.
Accept
The recommendation is based on the reviewers' comments, the area chair's personal evaluation, and the post-rebuttal discussion. This paper studies how synthetic data can be useful for improving adversarial robustness. All reviewers find the results convincing and valuable. The authors' rebuttal has successfully addressed the reviewers' concerns. Given the unanimous agreement, I am recommending acceptance.
train
[ "BS-0NgHtECu", "4HYQyArU4S5", "sECGNQXlE5L", "ECE-pvfKVMb", "H1h5v2CZyUO", "kEqC9TzkKL", "6BiWr1bWh5o", "Ps8T0bJkom1", "2aqzdEwE73R", "DtOwPKU_nlC", "Ms1YIziHEQF", "keBElZRFlzg", "oMS-Gnnv8n" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank you again for providing us with such a constructive and encouraging review! We will try to polish our paper to fully emphasize the motivation and make the mathematical formulas easier to understand in the camera-ready version.", " I thank the authors for answering the questions. One of my...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 4 ]
[ "4HYQyArU4S5", "6BiWr1bWh5o", "ECE-pvfKVMb", "Ps8T0bJkom1", "oMS-Gnnv8n", "keBElZRFlzg", "Ms1YIziHEQF", "DtOwPKU_nlC", "nips_2022_W-Z8n9HrWn0", "nips_2022_W-Z8n9HrWn0", "nips_2022_W-Z8n9HrWn0", "nips_2022_W-Z8n9HrWn0", "nips_2022_W-Z8n9HrWn0" ]
nips_2022_t4vTbQnhM8
A Kernelised Stein Statistic for Assessing Implicit Generative Models
Synthetic data generation has become a key ingredient for training machine learning procedures, addressing tasks such as data augmentation, analysing privacy-sensitive data, or visualising representative samples. The quality of such synthetic data generators hence has to be assessed. As (deep) generative models for synthetic data often do not admit explicit probability distributions, classical statistical procedures for assessing model goodness-of-fit may not be applicable. In this paper, we propose a principled procedure to assess the quality of a synthetic data generator. The procedure is a Kernelised Stein Discrepancy-type test which is based on a non-parametric Stein operator for the synthetic data generator of interest. This operator is estimated from samples which are obtained from the synthetic data generator and hence can be applied even when the model is only implicit. In contrast to classical testing, the sample size from the synthetic data generator can be as large as desired, while the size of the observed data that the generator aims to emulate is fixed. Experimental results on synthetic distributions and trained generative models on synthetic and real datasets illustrate that the method shows improved power performance compared to existing approaches.
Accept
Decision: Accept. This paper introduces a non-parametric (NP) Stein operator that allows implicit models to be used in KSD. This enables the use of KSD for evaluating the performance of implicit models, and the new test statistic shows better test power compared to the MMD test. Reviewers commended that the paper's writing is clear, and that the contribution is solid and novel. There were a few technical concerns regarding the proposed KSD as well as comparisons to MMD, which were mostly addressed in author-reviewer discussions. In the revision for camera ready, I'd encourage the authors to include the additional experiments and discussions provided in the author feedback. Perhaps adding more MMD-based test baselines would strengthen the paper even further.
train
[ "4L5ab32O07v", "uaOTOZ-YAUo", "NzqLVYEGtjd", "UGBL24xUGvK", "gF_o3sNb5ko", "_bkAVmya47X", "74weE6g9PDN", "GExauiBuoKb", "3Mypwq9zmz9U", "ZlNmJ1zqs5", "GQGhqWhRfc_", "eRdRs3IXR5A", "rCZy6idlYox", "82E1jiFV4b" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the update. We are very pleased that we have addressed most of your concerns and you now support accepting it!\n", " I appreciate the detailed response from the author. It addresses most of my concerns. I will raise my rating.", " Many thanks for your suggestions. We have amended the t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "uaOTOZ-YAUo", "gF_o3sNb5ko", "UGBL24xUGvK", "GQGhqWhRfc_", "_bkAVmya47X", "ZlNmJ1zqs5", "GExauiBuoKb", "3Mypwq9zmz9U", "rCZy6idlYox", "82E1jiFV4b", "eRdRs3IXR5A", "nips_2022_t4vTbQnhM8", "nips_2022_t4vTbQnhM8", "nips_2022_t4vTbQnhM8" ]
nips_2022_KqI-bX-TfT
Learning Consistency-Aware Unsigned Distance Functions Progressively from Raw Point Clouds
Surface reconstruction for point clouds is an important task in 3D computer vision. Most of the latest methods resolve this problem by learning signed distance functions (SDF) from point clouds, which are limited to reconstructing shapes or scenes with closed surfaces. Some other methods tried to represent shapes or scenes with open surfaces using unsigned distance functions (UDF) which are learned from large scale ground truth unsigned distances. However, it is hard for the learned UDF to provide smooth distance fields near the surface due to the discontinuous character of point clouds. In this paper, we propose a novel method to learn consistency-aware unsigned distance functions directly from raw point clouds. We achieve this by learning to move 3D queries to reach the surface with a field consistency constraint, which also enables progressively estimating a more accurate surface. Specifically, we train a neural network to gradually infer the relationship between 3D queries and the approximated surface by searching for the moving target of queries in a dynamic way, which results in a consistent field around the surface. Meanwhile, we introduce a polygonization algorithm to extract surfaces directly from the gradient field of the learned UDF. The experimental results in surface reconstruction for synthetic and real scan data show significant improvements over the state-of-the-art on the widely used benchmarks.
Accept
All reviewers were clearly in favor of accepting the paper pre-rebuttal. There was limited discussion post-rebuttal. The AC examined the paper, the reviews, and the authors' response and is inclined to accept the paper. The AC encourages the authors to use their extra page to incorporate their responses to the reviewers into the final version of the paper. In particular, the AC would encourage carefully considering the feedback on presentation from 1bdf.
train
[ "HZaG6h1VuPD", "KDWg4f28MgV", "uFSjNbGUTDw", "-aQDpKoJpKY", "id-iRuH1Xx", "YaaYuYVw7XZ", "BU2uxgnOUkc", "HUNByEGv-q", "jWjC69phTF9", "M1MBLwT1_Q0", "AH6hmdOPeRA", "IiSUPkOhvYU", "RNPJu-qVdjz" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer Aixv,\n\nFollowing your questions, we will expand figure captions with detailed descriptions in revision. We would like to know whether you believe we have addressed your concerns, and please let us know if you have any other questions.\n\nThanks for your time,\n\nThe Authors\n\n", " Dear Reviewer...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "jWjC69phTF9", "YaaYuYVw7XZ", "BU2uxgnOUkc", "HUNByEGv-q", "nips_2022_KqI-bX-TfT", "IiSUPkOhvYU", "AH6hmdOPeRA", "M1MBLwT1_Q0", "RNPJu-qVdjz", "nips_2022_KqI-bX-TfT", "nips_2022_KqI-bX-TfT", "nips_2022_KqI-bX-TfT", "nips_2022_KqI-bX-TfT" ]
nips_2022_LT6-Mxgb3QB
Bilinear Exponential Family of MDPs: Frequentist Regret Bound with Tractable Exploration $\&$ Planning
We study the problem of episodic reinforcement learning in continuous state-action spaces with unknown rewards and transitions. Specifically, we consider the setting where the rewards and transitions are modeled using parametric bilinear exponential families. We propose an algorithm, $\texttt{BEF-RLSVI}$, that a) uses penalized maximum likelihood estimators to learn the unknown parameters, b) injects a calibrated Gaussian noise in the parameter of rewards to ensure exploration, and c) leverages linearity of the exponential family with respect to an underlying RKHS to perform tractable planning. We further provide a frequentist regret analysis of $\texttt{BEF-RLSVI}$ that yields an upper bound of $\tilde{\mathcal{O}}(\sqrt{d^3H^3K})$, where $d$ is the dimension of the parameters, $H$ is the episode length, and $K$ is the number of episodes. Our analysis improves the existing bounds for the bilinear exponential family of MDPs by $\sqrt{H}$ and removes the handcrafted clipping deployed in existing $\texttt{RLSVI}$-type algorithms. Our regret bound is order-optimal with respect to $H$ and $K$.
Reject
The paper presents a tractable algorithm for bilinear exponential MDPs with a regret bound that improves on the best known result and achieves \sqrt{d^3 HK} regret. The result appears to be correct with strong technical analysis. Reviewers and ACs appreciate the merits of the analysis for this specific problem class. However, both the reviewer team and the AC found that the authors fail to discuss several important and closely related works, such as Zanette et al., '19; Yang and Wang, '19; and a line of works on kernel RL and model-based RL with Eluder dimension analysis. In particular, Table 1 only compares the new result with several recent results on specific MDP models published after 2021, which is far from comprehensive. During the rebuttal, the authors acknowledged that they were not aware of these related works. However, they did not revise the submission to include the missing discussions pointed out by the reviewer. It remains unclear how the submission's analysis relates to the aforementioned results that were not discussed in the paper. The authors provided some high-level discussion after rebuttal, but they would need a lot more technical detail to be convincing. For example, regret analysis using the Eluder dimension for a general function class is often a go-to benchmark for non-linear models. The proposed model appears to be a generalized linear model, which is a standard special case of the Eluder dimension analysis. One would then expect such an analysis to lead to an O(d poly(H)\sqrt{T}) regret (with \sqrt{d} coming from the Eluder dimension and \sqrt{d} coming from the metric dimension), better than the result of this paper. Note that this is just a conjecture, and rigorously working out this analysis would likely need extra work (nontrivial, as the authors pointed out). However, it is still not appropriate to overlook the possibility of using a more general analysis and just focus on a specific parametric model. A careful and honest discussion is necessary.
Beyond the Eluder dimension, there are actually a handful of RL theory papers on general function approximation and general model classes. We strongly recommend the authors redo their literature survey and properly place their contribution in the context of state-of-the-art RL theory. We have reviewed a very competitive batch of RL papers this year. This submission has strengths but falls on the borderline. After consulting with the senior AC member, who is also an expert in RL theory, we regretfully recommend that the authors further revise the paper and submit it to the next venue.
train
[ "gSU_whVX3IP", "05mCyko5nt", "W6NMI07SZqA", "N4f5exetHrt", "Wa2tf89kZB0", "kDRkCg5obHm", "DLFltiRkjs6", "BwOluoXS-ZE0", "vCZz2abGu9I", "_96fsWUtt4S", "NikBNkZ4js1", "3a-r5Bkyvd-", "VMP-NKkz3GL", "bEH77796apX", "4TNrI0w4eoM", "8ZqaaRfW-ny", "TTUxCLQMBjD" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarifications! I encourage the authors to include these details in the paper. \n\nI don't have any further questions, and I will adjust my rating to 6 accordingly. I hope our conversation can help you revise the paper.", " Thank you for your helpful input and for engaging with our rebuttal.\n\nR...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "W6NMI07SZqA", "N4f5exetHrt", "kDRkCg5obHm", "DLFltiRkjs6", "NikBNkZ4js1", "BwOluoXS-ZE0", "_96fsWUtt4S", "vCZz2abGu9I", "3a-r5Bkyvd-", "bEH77796apX", "TTUxCLQMBjD", "8ZqaaRfW-ny", "4TNrI0w4eoM", "nips_2022_LT6-Mxgb3QB", "nips_2022_LT6-Mxgb3QB", "nips_2022_LT6-Mxgb3QB", "nips_2022_LT...
nips_2022_3I8VTXMhuPx
Hiding Images in Deep Probabilistic Models
Data hiding with deep neural networks (DNNs) has experienced impressive successes in recent years. A prevailing scheme is to train an autoencoder, consisting of an encoding network to embed (or transform) secret messages in (or into) a carrier, and a decoding network to extract the hidden messages. This scheme may suffer from several limitations regarding practicability, security, and embedding capacity. In this work, we describe a different computational framework to hide images in deep probabilistic models. Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution. As an instantiation, we adopt a SinGAN, a pyramid of generative adversarial networks (GANs), to learn the patch distribution of one cover image. We hide the secret image by fitting a deterministic mapping from a fixed set of noise maps (generated by an embedding key) to the secret image during patch distribution learning. The stego SinGAN, behaving as the original SinGAN, is publicly communicated; only the receiver with the embedding key is able to extract the secret image. We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security. Moreover, we show the flexibility of the proposed method in terms of hiding multiple images for different receivers and obfuscating the secret image.
Accept
This paper studies a novel variation of image steganography. The proposed approach is different from prior work (mostly building on autoencoders): it uses a GAN and hides a secret image at one particular location of the learned distribution. The central idea of the paper seems novel and interesting. The reviewers raised several concerns about the limited evaluation and the complexities of comparing to other methods that generate images directly. Overall, this paper has novel and interesting ideas, and based on the rebuttal and discussions, the benefits seem to outweigh the limitations.
test
[ "5R1L5wgYI8", "NbbXjgChTuU", "8yszKUkpN6g", "lZQE79MVV4J", "7DRMwIJzsBM", "PsMmUYcWP5", "heUQ5AdGxY" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for appreciating our work. The summary of the current paper is indeed thorough and accurate. \n\n1. We thank the reviewer to recognize the ability of the proposed method to hide multiple images for different receivers as a significant advantage over previous methods, despite that we choose to down-weight t...
[ -1, -1, -1, -1, 3, 5, 7 ]
[ -1, -1, -1, -1, 4, 2, 3 ]
[ "heUQ5AdGxY", "PsMmUYcWP5", "lZQE79MVV4J", "7DRMwIJzsBM", "nips_2022_3I8VTXMhuPx", "nips_2022_3I8VTXMhuPx", "nips_2022_3I8VTXMhuPx" ]
nips_2022_iMK2LP0AogI
CUP: Critic-Guided Policy Reuse
The ability to reuse previous policies is an important aspect of human intelligence. To achieve efficient policy reuse, a Deep Reinforcement Learning (DRL) agent needs to decide when to reuse and which source policies to reuse. Previous methods solve this problem by introducing extra components to the underlying algorithm, such as hierarchical high-level policies over source policies, or estimations of source policies' value functions on the target task. However, training these components induces either optimization non-stationarity or heavy sampling cost, significantly impairing the effectiveness of transfer. To tackle this problem, we propose a novel policy reuse algorithm called Critic-gUided Policy reuse (CUP), which avoids training any extra components and efficiently reuses source policies. CUP utilizes the critic, a common component in actor-critic methods, to evaluate and choose source policies. At each state, CUP chooses the source policy that has the largest one-step improvement over the current target policy, and forms a guidance policy. The guidance policy is theoretically guaranteed to be a monotonic improvement over the current target policy. Then the target policy is regularized to imitate the guidance policy to perform efficient policy search. Empirical results demonstrate that CUP achieves efficient transfer and significantly outperforms baseline algorithms.
Accept
The paper proposes a method for leveraging a list of pretrained policies when learning a new task, by picking the guidance policy through the maximal one-step policy improvement evaluated with the learned critic. The contribution is simple, but the writing, theory, and experiments/ablation studies are clean and easy to follow. There is a consensus among the reviewers for the acceptance of the paper. Minor comments: - adding a mechanism for automatically growing and pruning source policies could be a nice extension, especially in a life-long continual learning environment, where, once the agent has learned a novel-enough high-reward policy, it may be added to the source set so that when the environment changes and changes back, the agent can reuse that learned optimal behavior. [1] - a fun experiment to include is to ignore the reward and only do imitation during policy improvement (just the KL term), while still using the reward critic for policy selection. If we know the source policies sufficiently cover the full optimal policy, then this could be a good debugging test. [1] Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., ... & Hadsell, R. (2016). Progressive neural networks. arXiv preprint arXiv:1606.04671.
train
[ "EKEW60cOsiC", "E29WE-DjOTn", "TMhaXisxxt", "9dVEW3J2d9G", "-2KfdDbm45", "fVvEsE9PXBI", "IZNfFjr3Ld", "ehkzH_bgZt", "8AfYOG1X6mO", "TldoHwZ1Q_F", "N0fNiYKluSt", "Yfi2-v1shYi", "C7tW11TiH-8", "ys65rbSDyUe", "G4UZx3ZSwqE", "TXPjdnK2YlP", "gfo-IuBEJzb", "EUDOBQDIv-f", "aLhzSxG0PV9",...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " Thank you for the detailed response! The additional clarification and experiments address my problems. ", " Thank you for the encouraging response! We are glad that our response addresses your concerns. We are grateful for your valuable questions and suggestions, which help improve the paper.", " Hi Authors,\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "TXPjdnK2YlP", "TMhaXisxxt", "9dVEW3J2d9G", "aLhzSxG0PV9", "EUDOBQDIv-f", "gfo-IuBEJzb", "ehkzH_bgZt", "8AfYOG1X6mO", "TldoHwZ1Q_F", "Yfi2-v1shYi", "Yfi2-v1shYi", "NVTR0evMDc", "ys65rbSDyUe", "aLhzSxG0PV9", "EUDOBQDIv-f", "gfo-IuBEJzb", "nips_2022_iMK2LP0AogI", "nips_2022_iMK2LP0Ao...
nips_2022_NJr8GBsyTF0
Non-Markovian Reward Modelling from Trajectory Labels via Interpretable Multiple Instance Learning
We generalise the problem of reward modelling (RM) for reinforcement learning (RL) to handle non-Markovian rewards. Existing work assumes that human evaluators observe each step in a trajectory independently when providing feedback on agent behaviour. In this work, we remove this assumption, extending RM to capture temporal dependencies in human assessment of trajectories. We show how RM can be approached as a multiple instance learning (MIL) problem, where trajectories are treated as bags with return labels, and steps within the trajectories are instances with unseen reward labels. We go on to develop new MIL models that are able to capture the time dependencies in labelled trajectories. We demonstrate on a range of RL tasks that our novel MIL models can reconstruct reward functions to a high level of accuracy, and can be used to train high-performing agent policies.
Accept
The reviewers have agreed on many points (at least after some help from the authors' explanations and changes in the rebuttal): the problem formulation is interesting (in particular as it relates to evolving human preferences, but also in the practical experimental cases), the writing is clear, and the technical solutions are interesting. While there is also a general consensus that more, larger experiments would be desirable, I note this is much more difficult to achieve in the paper's setup than in most "vanilla" (Markov) RL, as significant modifications are needed to any standard environment to fit this paradigm. Lunar Lander was well appreciated during the rebuttal, and I believe the paper will now have a strong impact as-is (although if the authors can find the time for another similarly sized env prior to the final version, it will be welcome).
train
[ "Zi7qVm-ykC", "IWpq9O-mJ1Z", "PbJ_wbnbyq", "Tqp0RaGVCIv", "yNg18aPob7i", "6jiubyZfqiE", "jjiZ_Gnt75_7", "MflSMAJMBSne", "aCvRYQHqBBZ", "L1Oyq79128F", "Ai5hwij5M92", "77Z6KqURxA6", "YUGrHi0Adpr", "K3dZT8Ce91d", "sOYehp1fl7N", "swivzwOOSo4" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your comments. Please see our recently submitted final revision, which completes both sets of changes that we laid out in our General Rebuttal. ", " We have now submitted our final revision of the paper. Below we detail the overall changes between this final version and the **original** vers...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "jjiZ_Gnt75_7", "nips_2022_NJr8GBsyTF0", "nips_2022_NJr8GBsyTF0", "6jiubyZfqiE", "nips_2022_NJr8GBsyTF0", "Ai5hwij5M92", "77Z6KqURxA6", "swivzwOOSo4", "L1Oyq79128F", "sOYehp1fl7N", "K3dZT8Ce91d", "YUGrHi0Adpr", "nips_2022_NJr8GBsyTF0", "nips_2022_NJr8GBsyTF0", "nips_2022_NJr8GBsyTF0", ...
nips_2022_S0TR0W63NKl
Generalization Bounds for Estimating Causal Effects of Continuous Treatments
We focus on estimating causal effects of continuous treatments (e.g., dosage in medicine), also known as dose-response function. Existing methods in causal inference for continuous treatments using neural networks are effective and to some extent reduce selection bias, which is introduced by non-randomized treatments among individuals and might lead to covariate imbalance and thus unreliable inference. To theoretically support the alleviation of selection bias in the setting of continuous treatments, we exploit the re-weighting schema and the Integral Probability Metric (IPM) distance to derive an upper bound on the counterfactual loss of estimating the average dose-response function (ADRF), and herein the IPM distance builds a bridge from a source (factual) domain to an infinite number of target (counterfactual) domains. We provide a discretized approximation of the IPM distance with a theoretical guarantee in the practical implementation. Based on the theoretical analyses, we also propose a novel algorithm, called Average Dose-response estiMatIon via re-weighTing schema (ADMIT). ADMIT simultaneously learns a re-weighting network, which aims to alleviate the selection bias, and an inference network, which makes factual and counterfactual estimations. In addition, the effectiveness of ADMIT is empirically demonstrated in both synthetic and semi-synthetic experiments by outperforming the existing benchmarks.
Accept
The authors propose theory and an algorithm for estimating average dose-response functions (ADRF) from observational data under assumptions of unconfoundedness and overlap. The approach extends theory and methodology primarily from the work in [13], where neural networks and integral probability metrics are used to learn outcome regressions and re-weighting functions to minimise a bound on the expected loss. The approach was evaluated on semisynthetic datasets and compared favourably to baselines. Reviewers found the setting novel and interesting but were concerned that the analysis was very close to previous works, requiring only a small modification to allow for continuous (rather than binary) treatments. The empirical evaluation was also rather limited, restricted to comparing mean squared errors on benchmark datasets. One of the reviewers asked why we should expect the method to perform so well when the learning objective represents a fairly loose bound on the expected error. The empirical results offer little to answer this question. The authors' rebuttal suggests that this is due to the re-weighting function, but there is no empirical or theoretical evidence that this is the deciding factor. For example, how does the ADMIT model perform without re-weighting? In Figure 3, the authors claim to show that baselines perform worse when selection bias increases, but this trend is noisy at best. If anything, I would argue that it suggests that ADMIT does better no matter the selection bias, which begs the question: where is the advantage coming from? Overall, reviewers thought the paper appears sound and offered a few clarifying comments and questions which were mostly answered by the authors. The technical novelty is rather low, but appropriately applied. A revised version of the manuscript should address the presentation issues raised by reviewers as well as the attribution question asked above.
train
[ "aorA3_fWDdq", "WqZdKQ8XfJ0", "3cr2E7kBRd8", "SpelEwEVbvI", "pZUrD31s2Qn", "YFyqweQuN6Q", "FjmX8MpRG4", "r80uf5A-GUx", "BzPmM7M-B_9", "J9XIgKd3t3e", "1KZWUEoak6w", "diL2WbwHVVw", "ypUAHmAHziN", "_Hb8Sp_0g1A" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for clarifying my concerns. I will be maintaining my score.", " I thank the authors for the detailed answers. I re-read the paper and, while the author responses make the context and technical contribution more clear, I believe it should not take a 2 page response to get the main points across. All of...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ "YFyqweQuN6Q", "3cr2E7kBRd8", "ypUAHmAHziN", "pZUrD31s2Qn", "J9XIgKd3t3e", "_Hb8Sp_0g1A", "r80uf5A-GUx", "ypUAHmAHziN", "diL2WbwHVVw", "1KZWUEoak6w", "nips_2022_S0TR0W63NKl", "nips_2022_S0TR0W63NKl", "nips_2022_S0TR0W63NKl", "nips_2022_S0TR0W63NKl" ]
nips_2022_bGo0A4bJBc
Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics
Self-training based semi-supervised learning algorithms have enabled the learning of highly accurate deep neural networks, using only a fraction of labeled data. However, the majority of work on self-training has focused on the objective of improving accuracy, whereas practical machine learning systems can have complex goals (e.g. maximizing the minimum of recall across classes, etc.) that are non-decomposable in nature. In this work, we introduce the Cost-Sensitive Self-Training (CSST) framework which generalizes the self-training-based methods for optimizing non-decomposable metrics. We prove that our framework can better optimize the desired non-decomposable metric utilizing unlabeled data, under similar data distribution assumptions made for the analysis of self-training. Using the proposed CSST framework, we obtain practical self-training methods (for both vision and NLP tasks) for optimizing different non-decomposable metrics using deep neural networks. Our results demonstrate that CSST achieves an improvement over the state-of-the-art in the majority of cases across datasets and objectives.
Accept
The paper received two negative scores of 3 and 4 (the other is 8 with high confidence 5), and the main criticism is that the writing is vague, especially since the topic of the paper may not be very familiar to the community. The authors have made good efforts in improving their presentation, and they also provide additional clarifications as well as new experimental results point by point. Hence, in my opinion, the new PDF version is more readable. As for its significance and novelty, the proposed new loss with regularization is theoretically sound and empirically effective. It also addresses self-training for non-decomposable metrics, which to our knowledge is the first time in the literature. There are many applications for this method and there is little related work in the community, which highlights its potential impact. I suggest accepting this paper for its significance, quality, and strong results. The writing was also improved during the rebuttal.
train
[ "m58lNK34L8", "Agx6GkyR87", "YqnBokNTY20", "Kqw2AoQawWT", "tmDlfdNTkhdQ", "yXj-01U1UD4m", "H1IyC_9k6Rfl", "xXOK7Ha_3d", "eTZUKEhd835", "TWq2WAOc8g4", "Y8VcHju7DXP", "xV5DLIaey2w" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the answers. I think the paper should have been submitted in a readable form in the first submission. \nI want to wish success in the next submission with a better version of the paper that should be further improved both with respect to the writing clarity and the quality of the experiments. Specifica...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 5 ]
[ "Kqw2AoQawWT", "nips_2022_bGo0A4bJBc", "Y8VcHju7DXP", "tmDlfdNTkhdQ", "yXj-01U1UD4m", "Y8VcHju7DXP", "nips_2022_bGo0A4bJBc", "TWq2WAOc8g4", "xV5DLIaey2w", "nips_2022_bGo0A4bJBc", "nips_2022_bGo0A4bJBc", "nips_2022_bGo0A4bJBc" ]
nips_2022_jJwy2kcBYv
SPD: Synergy Pattern Diversifying Oriented Unsupervised Multi-agent Reinforcement Learning
Reinforcement learning typically relies heavily on a well-designed reward signal, which gets more challenging in cooperative multi-agent reinforcement learning. Alternatively, unsupervised reinforcement learning (URL) has delivered on its promise in the recent past to learn useful skills and explore the environment without external supervised signals. These approaches mainly aim for a single agent to reach distinguishable states, which is insufficient for multi-agent systems because each agent interacts not only with the environment but also with the other agents. We propose Synergy Pattern Diversifying Oriented Unsupervised Multi-agent Reinforcement Learning (SPD) to learn generic coordination policies for agents with no extrinsic reward. Specifically, we devise the Synergy Pattern Graph (SPG), a graph depicting the relationships of agents at each time step. Furthermore, we propose an episode-wise divergence measurement to approximate the discrepancy of synergy patterns. To overcome the challenge of sparse return, we decompose the discrepancy of synergy patterns into per-time-step pseudo-rewards. Empirically, we show the capacity of SPD to acquire meaningful coordination policies, such as maintaining specific formations in the Multi-Agent Particle Environment and pass-and-shoot in Google Research Football. Furthermore, we demonstrate that the same instructive pretrained policy's parameters can serve as a good initialization for a series of downstream tasks' policies, achieving higher data efficiency and outperforming state-of-the-art approaches in Google Research Football.
Accept
Reviewers appreciated the paper's contribution of a novel method for unsupervised skill learning in MARL. While the scores were borderline, reviewers are mostly in favor of acceptance, therefore I recommend acceptance as well. Additional baselines and environments added during the rebuttal phase were important considerations in this decision.
val
[ "gwAfZooa16Q", "Su1Lsi1b30H", "X9cREdxmceH", "KZZGdHVRcU-", "L4xfaa5fP_4", "6pqtNe-txPAQ", "nOk7X9HSMHc", "Cn5heZ-JTC", "ecy9dEexydm", "SFarmKvz90", "Zp_EbVpYHIV", "u28Klj0ZQE", "41ubI2mpq4w", "fSIreI3VbMa" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the rebuttal. \nMy unclear points are clarified. \nAlthough I have less confidence in my understanding, I raised my score. \n", " We sincerely appreciate all reviewers for their time and efforts in evaluating our paper, as well as their detailed comments and suggestions.\n\nWe hope that our respon...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "ecy9dEexydm", "nips_2022_jJwy2kcBYv", "KZZGdHVRcU-", "6pqtNe-txPAQ", "u28Klj0ZQE", "u28Klj0ZQE", "u28Klj0ZQE", "fSIreI3VbMa", "41ubI2mpq4w", "Zp_EbVpYHIV", "nips_2022_jJwy2kcBYv", "nips_2022_jJwy2kcBYv", "nips_2022_jJwy2kcBYv", "nips_2022_jJwy2kcBYv" ]
nips_2022_TTM7iEFOTzJ
EpiGRAF: Rethinking training of 3D GANs
A recent trend in generative modeling is building 3D-aware generators from 2D image collections. To induce the 3D bias, such models typically rely on volumetric rendering, which is expensive to employ at high resolutions. Over the past months, more than ten works have addressed this scaling issue by training a separate 2D decoder to upsample a low-resolution image (or a feature tensor) produced from a pure 3D generator. But this solution comes at a cost: not only does it break multi-view consistency (i.e., shape and texture change when the camera moves), but it also learns geometry in low fidelity. In this work, we show that obtaining a high-resolution 3D generator with SotA image quality is possible by following a completely different route of simply training the model patch-wise. We revisit and improve this optimization scheme in two ways. First, we design a location- and scale-aware discriminator to work on patches of different proportions and spatial positions. Second, we modify the patch sampling strategy based on an annealed beta distribution to stabilize training and accelerate the convergence. The resulting model, named EpiGRAF, is an efficient, high-resolution, pure 3D generator, and we test it on four datasets (two introduced in this work) at \(256^2\) and \(512^2\) resolutions. It obtains state-of-the-art image quality, high-fidelity geometry and trains \({\approx}2.5\times\) faster than the upsampler-based counterparts. Code/data/visualizations: https://universome.github.io/epigraf.
Accept
The reviewers found the method simple and effective and considered it a contribution of interest to the community. Claims are well supported by experiments and design choices have been validated. The paper is well written. Furthermore, the authors provided highly detailed responses to all questions by reviewers, which creates confidence that reviewers' remarks will be addressed in the final paper.
test
[ "BKk-tLzFYjW", "V_SKhiggND7", "23Mw4KbVBb", "m7GqTCwVsLX", "VtedKmDiaHS", "pZ3kd-9Def0", "oIErTxvlPPx", "pjn9FRqUFs", "Nxw2zXZne5p", "XI6If5Nn5Dj", "OgMjOlPbp1Q", "Zw6wdfSO3dZ", "b8aDZVqbH9g", "O02Rrg9lCPS", "_JwSNEfncHZ", "HoykHFPUhYJ", "y2zuhLFAsQb" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the reply. I would encourage the authors to add discussion on sampling strategy in the paper. The rebuttal answers my questions and I would like to keep the original rating.", " Dear Reviewer, we are very thankful for your feedback which helped us to improve several important parts of our work. And ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "Nxw2zXZne5p", "_JwSNEfncHZ", "Zw6wdfSO3dZ", "oIErTxvlPPx", "nips_2022_TTM7iEFOTzJ", "y2zuhLFAsQb", "y2zuhLFAsQb", "HoykHFPUhYJ", "HoykHFPUhYJ", "_JwSNEfncHZ", "_JwSNEfncHZ", "O02Rrg9lCPS", "nips_2022_TTM7iEFOTzJ", "nips_2022_TTM7iEFOTzJ", "nips_2022_TTM7iEFOTzJ", "nips_2022_TTM7iEFOTz...
nips_2022_oprTuM8F3dt
Coordinates Are NOT Lonely - Codebook Prior Helps Implicit Neural 3D representations
Implicit neural 3D representation has achieved impressive results in surface or scene reconstruction and novel view synthesis, which typically uses coordinate-based multi-layer perceptrons (MLPs) to learn a continuous scene representation. However, existing approaches, such as Neural Radiance Field (NeRF) and its variants, usually require dense input views (i.e. 50-150) to obtain decent results. To relieve the over-dependence on massive calibrated images and enrich the coordinate-based feature representation, we explore injecting prior information into the coordinate-based network and introduce a novel coordinate-based model, CoCo-INR, for implicit neural 3D representation. At the core of our method are two attention modules: codebook attention and coordinate attention. The former extracts useful prototypes containing rich geometry and appearance information from the prior codebook, and the latter propagates such prior information into each coordinate and enriches its feature representation for a scene or object surface. With the help of the prior information, our method can render 3D views with a more photo-realistic appearance and geometry than current methods, using fewer calibrated images. Experiments on various scene reconstruction datasets, including DTU and BlendedMVS, and the full 3D head reconstruction dataset, H3DS, demonstrate the robustness of our proposed method under fewer input views and its fine detail-preserving capability.
Accept
This paper focuses on improving the training efficiency of coordinate-based representations by reducing the number of camera views needed during training. To accomplish this, the authors proposed a codebook attention module and a coordinate attention module to inject prior knowledge into implicit representations. The intuition is that doing so encourages the network to learn the semantic correlation between the input point and the scene, enabling "extrapolation" to far-away views. The reviewers appreciated the idea of the paper and how it improved image reconstruction quality across various numbers of views. They raised concerns regarding the lack of experiments with models that condition on pixel features, e.g., PixelNerf, to generalize across scenes, as well as the lack of ablative quantitative experiments to evaluate the architectural contributions and the lack of a limitations section. The rebuttal submitted by the authors includes experiments with PixelNerf but does not mention the number of views used, and does not describe how the method “ours (with scan 51 priors)” is obtained. The second experiment, where PixelNerf is trained on DTU and tested on BlendedMVS, is not a fair experiment, and we do expect PixelNerf to fail there given that there is no test-time adaptation through gradient descent on the test scene, as is the case with the present method. The authors are encouraged to move all rebuttal experiments to the main paper, and to thoroughly explain the experimental setup they used. Overall, the paper is not very clearly written. Specifically, the reader learns only at the end of the implementation details section that a separate network is trained per scene.
Reviewer q1CY mentions: “Although prior knowledge are used in this work, *it seems like* the model is still trained specifically to one scene and is hard to generalize to novel scenes.” By not comparing or contrasting the proposed approach to cross-scene generalization works, the reader is left to wonder what the generalization capabilities of the proposed model are with a varying number of input views. The authors are encouraged to clarify these points in their final version.
train
[ "Aiv3ZATEBCQ", "p0hiBHjQKxn", "DieeZHR0UCi", "UcLwAW8NYI", "MsJ9KCF5rKt", "mGXGF9ItiLb", "EFj9gyTTp4C", "5fW-EVnC9cB", "ttr_7Z9pJUS", "jXaBqHcJxwT", "1wzO1UnyoE", "Nz449rURjg", "_gYTfxpMaiS", "QP90EEcZ3Gs", "F28JDvevpA", "-Wn3zThW2Q5", "QkfhKWvtpU3", "Yjz6V0xYRjL" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your valuable and insightful comments. We feel glad about your generally favorable assessment of our methodology. Additional evaluation/ablation and corresponding explanations will be included in the final version.", " We appreciate your valuable and insightful comments. We feel glad about your ge...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "DieeZHR0UCi", "UcLwAW8NYI", "EFj9gyTTp4C", "Nz449rURjg", "Yjz6V0xYRjL", "Yjz6V0xYRjL", "Yjz6V0xYRjL", "QkfhKWvtpU3", "QkfhKWvtpU3", "F28JDvevpA", "F28JDvevpA", "F28JDvevpA", "-Wn3zThW2Q5", "-Wn3zThW2Q5", "nips_2022_oprTuM8F3dt", "nips_2022_oprTuM8F3dt", "nips_2022_oprTuM8F3dt", "n...
nips_2022_VQ9fogN1q6e
Factored Adaptation for Non-Stationary Reinforcement Learning
Dealing with non-stationarity in environments (e.g., in the transition dynamics) and objectives (e.g., in the reward functions) is a challenging problem that is crucial in real-world applications of reinforcement learning (RL). While most current approaches model the changes as a single shared embedding vector, we leverage insights from the recent causality literature to model non-stationarity in terms of individual latent change factors, and causal graphs across different environments. In particular, we propose Factored Adaptation for Non-Stationary RL (FANS-RL), a factored adaptation approach that jointly learns both the causal structure in terms of a factored MDP, and a factored representation of the individual time-varying change factors. We prove that under standard assumptions, we can completely recover the causal graph representing the factored transition and reward function, as well as a partial structure between the individual change factors and the state components. Through our general framework, we can consider general non-stationary scenarios with different function types and changing frequencies, including changes across episodes and within episodes. Experimental results demonstrate that FANS-RL outperforms existing approaches in terms of return, compactness of the latent state representation, and robustness to varying degrees of non-stationarity.
Accept
The paper proposes a factored reinforcement-learning method to deal with non-stationary environments. After reading the authors' rebuttals, the reviewers agree that this paper provides an original and sound contribution that deserves publication. We recommend that the authors modify their paper as reported in their answers to the reviewers' comments.
train
[ "xcSg23kr6NM", "0mKRC1hQsl0", "Ehcp96rw1ze", "EzDD6oQ-XH", "U3oBjH2XblO", "MwgBkh8DbM8", "lFAqUL4udY-", "wuuf0HpmWbS", "6bkaqZcQN1f", "Y8XcoGT81cN", "-FJEsgRPstM", "DpPtziADEBOn", "-F9P9AubNfb", "ciBhEBdS7rJ", "M2ZDbOf1ig", "_Fh1vvXmKHj", "8MHz7rP4k5O", "ld7Pp-S8dXv", "KqoKI-tsuW...
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your thoughtful review. We will be happy to discuss if you have any other concerns. \n", " We would like to express our sincere thanks for your positive feedback and valuable suggestions. \n- As you suggested in Q4, we have updated a new revision, which includes the ablation studies on the disenta...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3, 4 ]
[ "ciBhEBdS7rJ", "lFAqUL4udY-", "U3oBjH2XblO", "MwgBkh8DbM8", "wuuf0HpmWbS", "M2ZDbOf1ig", "-FJEsgRPstM", "6bkaqZcQN1f", "Y8XcoGT81cN", "KqoKI-tsuWC", "DpPtziADEBOn", "-F9P9AubNfb", "qKFJ7CFiX7v", "ld7Pp-S8dXv", "_Fh1vvXmKHj", "8MHz7rP4k5O", "nips_2022_VQ9fogN1q6e", "nips_2022_VQ9fog...
nips_2022_0JV4VVBsK6a
Bringing Image Scene Structure to Video via Frame-Clip Consistency of Object Tokens
Recent action recognition models have achieved impressive results by integrating objects, their locations and interactions. However, obtaining dense structured annotations for each frame is tedious and time-consuming, making these methods expensive to train and less scalable. At the same time, if a small set of annotated images is available, either within or outside the domain of interest, how could we leverage these for a video downstream task? We propose a learning framework StructureViT (SViT for short), which demonstrates how utilizing the structure of a small number of images only available during training can improve a video model. SViT relies on two key insights. First, as both images and videos contain structured information, we enrich a transformer model with a set of object tokens that can be used across images and videos. Second, the scene representations of individual frames in video should ``align'' with those of still images. This is achieved via a Frame-Clip Consistency loss, which ensures the flow of structured information between images and videos. We explore a particular instantiation of scene structure, namely a Hand-Object Graph, consisting of hands and objects with their locations as nodes, and physical relations of contact/no-contact as edges. SViT shows strong performance improvements on multiple video understanding tasks and datasets, including the first place in the Ego4D CVPR'22 Point of No Return Temporal Localization Challenge. For code and pretrained models, visit the project page at https://eladb3.github.io/SViT/.
Accept
This paper proposes StructureViT (SViT), a network architecture to incorporate structured information from images to aid in video tasks. All four reviewers found several aspects of the paper interesting including the ability to use information from just a few images and be beneficial to video tasks. They noted the thorough experimentation on multiple datasets and also found the paper easy to follow. One of the reviewers had concerns about the positioning of the paper. The authors had multiple discussions with this reviewer and were able to comprehensively update their paper and address most concerns, which was commended by the reviewer. Another reviewer had concerns about comparisons and discussions with regards to previous work. The authors did a good job of addressing most of their concerns. One common concern that emerged from the reviews and discussions was the existence of prior work that incorporates structured information into video tasks, thus reducing the novel contributions of this paper. Having read the paper, reviews and discussions carefully, I think the paper improves upon past work and has sufficient novel contributions that are valuable to readers. I recommend acceptance.
train
[ "A73QzbQ0R1s", "muKvD3wjTLA", "CvmHrLhmQY_", "q8_4mD9M4n_", "QX68AJfzMe", "3n4MQLEXpE", "LO0oYv4ZazO", "68RI0YBIKsg", "kApS2-Dt_W-", "RuQZ5MhQPeY", "PedhLFk7Vi6", "QMQ4o7ylOhB", "5N3K6ns-6BW", "RSDYUPuHjdA", "3mJPCPwS_rD", "mzVhFG5mr4I", "0VuCszxw0WQ", "5npGaw841Yb", "WZ-NqvgM93o...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ " I thank the authors for thoroughly answering all my concerns. \n\nI am reasonably convinced about the general applicability of their proposed approach to several tasks after their provided additional results.\n", " Thank you for your insightful comments. \n\nIn our method, we model objects and hands with object...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "5N3K6ns-6BW", "q8_4mD9M4n_", "QX68AJfzMe", "mzVhFG5mr4I", "3n4MQLEXpE", "68RI0YBIKsg", "kApS2-Dt_W-", "PedhLFk7Vi6", "WZ-NqvgM93o", "5npGaw841Yb", "QMQ4o7ylOhB", "3CRrZRToGo5", "MdEFKx14pec", "3mJPCPwS_rD", "mzVhFG5mr4I", "0VuCszxw0WQ", "yhZ04PMJsLp", "D5FGdoyTwz", "nips_2022_0J...
nips_2022_NkK4i91VWp
Increasing Confidence in Adversarial Robustness Evaluations
Hundreds of defenses have been proposed to make deep neural networks robust against minimal (adversarial) input perturbations. However, only a handful of these defenses held up their claims because correctly evaluating robustness is extremely challenging: Weak attacks often fail to find adversarial examples even if they unknowingly exist, thereby making a vulnerable network look robust. In this paper, we propose a test to identify weak attacks and, thus, weak defense evaluations. Our test slightly modifies a neural network to guarantee the existence of an adversarial example for every sample. Consequently, any correct attack must succeed in breaking this modified network. For eleven out of thirteen previously-published defenses, the original evaluation of the defense fails our test, while stronger attacks that break these defenses pass it. We hope that attack unit tests - such as ours - will be a major component in future robustness evaluations and increase confidence in an empirical field that is currently riddled with skepticism.
Accept
This paper proposes a simple yet effective test to identify weak adversarial attacks, and thus weak defense evaluations. Empirical results have revealed insufficiently strong evaluations in 11/13 previously published defenses. To me, the paper studies an important problem and makes a valuable contribution to the active research field of adversarial defense and robustness evaluation. I recommend acceptance, and encourage the authors to incorporate the reviewers' comments and suggestions when working on the final version.
train
[ "i045wtKBTGh", "zZWFWtkquEN", "6-gZlgmLqJ", "rfOaa5-QmAG", "nyfZ5oofUPF", "gRPlxAhjsjC", "ZsAssj5VDPH", "FZ8dNUoAVJh", "hhY4iwoVki", "iaVefuOZdiV", "lbnOfOCSLx9", "F6Af4Ae9kcu", "8aMU-2FqA-M", "kSmArjRvszv" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nThank you for your response and your overall positive assessment of our work! We would be grateful if you could let us know what aspects of the paper would need to be improved so you would consider a higher overall assessment. We will do what we can to address any remaining concerns and thank yo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 5 ]
[ "6-gZlgmLqJ", "rfOaa5-QmAG", "FZ8dNUoAVJh", "gRPlxAhjsjC", "nips_2022_NkK4i91VWp", "kSmArjRvszv", "kSmArjRvszv", "8aMU-2FqA-M", "F6Af4Ae9kcu", "lbnOfOCSLx9", "nips_2022_NkK4i91VWp", "nips_2022_NkK4i91VWp", "nips_2022_NkK4i91VWp", "nips_2022_NkK4i91VWp" ]
nips_2022_zbuq101sCNV
TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition
Creation of 3D content by stylization is a promising yet challenging problem in computer vision and graphics research. In this work, we focus on stylizing photorealistic appearance renderings of a given surface mesh of arbitrary topology. Motivated by the recent surge of cross-modal supervision of the Contrastive Language-Image Pre-training (CLIP) model, we propose TANGO, which transfers the appearance style of a given 3D shape according to a text prompt in a photorealistic manner. Technically, we propose to disentangle the appearance style as the spatially varying bidirectional reflectance distribution function, the local geometric variation, and the lighting condition, which are jointly optimized, via supervision of the CLIP loss, by a spherical Gaussians based differentiable renderer. As such, TANGO enables photorealistic 3D style transfer by automatically predicting reflectance effects even for bare, low-quality meshes, without training on a task-specific dataset. Extensive experiments show that TANGO outperforms existing methods of text-driven 3D style transfer in terms of photorealistic quality, consistency of 3D geometry, and robustness when stylizing low-quality meshes. Our codes and results are available at our project webpage https://cyw-3d.github.io/tango/.
Accept
This paper presents a new CLIP-driven stylization method given an input mesh and text description. Compared to previous work Text2Mesh, the paper introduces a more expressive rendering model based on learnable SVBRDF and normal maps. Many reviewers found the paper easy to follow, the idea promising, and the results visually appealing. They also expressed their concerns regarding the similarity to Text2Mesh, the limitations of the normal maps approach (compared to changing geometry explicitly), and the relighting and material editing of the stylized object. The rebuttal has addressed most of the concerns. The AC agreed with most of the reviewers and recommended accepting the paper. Please revise the paper according to the reviewers' comments: (1) change the title according to Reviewer kpuS, (2) add relighting/material editing/view synthesis results, and (3) highlight the pros and cons of the proposed method w.r.t. Text2Mesh.
train
[ "10muIgpP4mE", "BxNNR6XVdWB", "lIWctFja3-h", "G1UgoRGyigm", "ly1zSatz50P", "5cK29RWgqG4", "vh276Ag4Lki", "hcOMqCZEgMc", "vYAkU0FSxw-", "Q1194HZhOBL", "nxkKXHxut2m", "ZgOJ3FDGlJu", "UHul6mOdKy" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the comments and additional experiments. The authors' response has resolved most of my concerns, especially the explanation of the disentanglement of light and reflectance. On the other hand, I agree with the comments from Review kpuS that the limitations of this method and Text2mesh should be discusse...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "5cK29RWgqG4", "nips_2022_zbuq101sCNV", "G1UgoRGyigm", "vh276Ag4Lki", "UHul6mOdKy", "ZgOJ3FDGlJu", "nxkKXHxut2m", "Q1194HZhOBL", "nips_2022_zbuq101sCNV", "nips_2022_zbuq101sCNV", "nips_2022_zbuq101sCNV", "nips_2022_zbuq101sCNV", "nips_2022_zbuq101sCNV" ]
nips_2022_HIslGib8XD
AutoMS: Automatic Model Selection for Novelty Detection with Error Rate Control
Given an unsupervised novelty detection task on a new dataset, how can we automatically select a ''best'' detection model while simultaneously controlling the error rate of the best model? For novelty detection analysis, numerous detectors have been proposed to detect outliers on a new unseen dataset based on a score function trained on available clean data. However, due to the absence of labeled data for model evaluation and comparison, there is a lack of systematic approaches that are able to select a ''best'' model/detector (i.e., the algorithm as well as its hyperparameters) and achieve certain error rate control simultaneously. In this paper, we introduce a unified data-driven procedure to address this issue. The key idea is to maximize the number of detected outliers while controlling the false discovery rate (FDR) with the help of Jackknife prediction. We establish non-asymptotic bounds for the false discovery proportions and show that the proposed procedure yields valid FDR control under some mild conditions. Numerical experiments on both synthetic and real data validate the theoretical results and demonstrate the effectiveness of our proposed AutoMS method. The code is available at https://github.com/ZhangYifan1996/AutoMS.
Accept
The paper proposes a method for finding the best anomaly detector among a set of candidate methods that are all based on constructing a score function. The selection method is based on a leave-one-out estimate. Some theoretical results are presented and proven in the appendix, and in addition, some experiments are reported. Overall, this paper presents a novel and interesting method for an important problem, and the theoretical considerations are certainly a plus. The only major issue of the paper is that only 4 real-world data sets were considered, and despite the fact that this problem was raised by the reviewers, the authors did not include more during the rebuttal phase. From my perspective, a strongly theoretical paper does not require extensive experiments, but the paper under review does not fall into this category. And for this reason, more experiments, on, say, another 15 data sets, would have been really helpful. In summary, this is an interesting paper with a sufficiently good theoretical part and some promising experiments. The latter could have been more, but overall this paper should be accepted.
val
[ "MAedsfmaLr0", "66yvH5jbONY", "wSS2B1Io_7Y", "DP33cR5VIt", "qRfNfQ7bMWJ", "mZT4ryOD63w", "DbpdJYK1h_", "luQw6NfemKR", "gvWF0H3OhX3", "v8HMNqIH8-e", "LVyq5tbqOk3", "w4bHLYK4ZA9" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nDear Reviewer MLPQ,\n\nThank you for providing the insightful comments on the **Experiment scale of our AutoMS method and METAOD**.\nWe have tried our best to answer your questions piece by piece, to make it clear that why there is no need for our AutoMS method to go through hundreds of datasets. As MetaOD uses...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "LVyq5tbqOk3", "DP33cR5VIt", "qRfNfQ7bMWJ", "mZT4ryOD63w", "DbpdJYK1h_", "w4bHLYK4ZA9", "LVyq5tbqOk3", "v8HMNqIH8-e", "nips_2022_HIslGib8XD", "nips_2022_HIslGib8XD", "nips_2022_HIslGib8XD", "nips_2022_HIslGib8XD" ]
nips_2022_omI5hgwgrsa
Optimal Algorithms for Decentralized Stochastic Variational Inequalities
Variational inequalities are a formalism that includes games, minimization, saddle point, and equilibrium problems as special cases. Methods for variational inequalities are therefore universal approaches for many applied tasks, including machine learning problems. This work concentrates on the decentralized setting, which is increasingly important but not well understood. In particular, we consider decentralized stochastic (sum-type) variational inequalities over fixed and time-varying networks. We present lower complexity bounds for both communication and local iterations and construct optimal algorithms that match these lower bounds. Our algorithms are the best among the available literature not only in the decentralized stochastic case, but also in the decentralized deterministic and non-distributed stochastic cases. Experimental results confirm the effectiveness of the presented algorithms.
Accept
The paper makes a significant contribution to the literature on distributed SVIs. The results provided are fairly comprehensive -- both lower bounds and algorithms achieving the lower bounds are provided. Hence, the paper is recommended for acceptance.
train
[ "JwOS4uonNR", "5P2CB4zIaUW", "Jxkj6ypJld", "DG57C4ZO-hh", "DokUYpN5e6_", "ZbiyTcwT3aQ", "wguDRuYabqP", "-eFFJZAqKzP", "l8BNoGdn4ju", "zNh2uhLQHoT", "Za3yPrLtt-3", "U3p48Hsw8ef", "IobNwRXCz4b" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We're glad to hear it! Thanks again for the review!", " Thanks for the response. As in my previous review, I think this paper has value, and I still think this paper can be accepted. Variational inequality has more applications than optimization.", " With this message, we would just like to kindly remind Revi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "5P2CB4zIaUW", "-eFFJZAqKzP", "nips_2022_omI5hgwgrsa", "DokUYpN5e6_", "IobNwRXCz4b", "IobNwRXCz4b", "IobNwRXCz4b", "U3p48Hsw8ef", "Za3yPrLtt-3", "nips_2022_omI5hgwgrsa", "nips_2022_omI5hgwgrsa", "nips_2022_omI5hgwgrsa", "nips_2022_omI5hgwgrsa" ]
nips_2022_9U4gLR_lRP
Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration
Previous works have extensively studied the transferability of adversarial samples in untargeted black-box scenarios. However, it still remains challenging to craft targeted adversarial examples with higher transferability than non-targeted ones. Recent studies reveal that the traditional Cross-Entropy (CE) loss function is insufficient to learn transferable targeted perturbations due to the issue of vanishing gradient. In this work, we provide a comprehensive investigation of the CE function and find that the logit margin between the targeted and non-targeted classes will quickly become saturated in CE, which largely limits the transferability. Therefore, in this paper, we pursue the goal of enlarging logit margins and propose two simple and effective logit calibration methods, which are achieved by downscaling the logits with a temperature factor and an adaptive margin, respectively. Both of them can effectively encourage the optimization to produce larger logit margins and lead to higher transferability. Besides, we show that minimizing the cosine distance between the adversarial examples and the targeted classifier can further improve the transferability, which benefits from downscaling logits via L2-normalization. Experiments conducted on the ImageNet dataset validate the effectiveness of the proposed methods, which outperform the state-of-the-art methods in black-box targeted attacks. The source code for our method is available at https://anonymous.4open.science/r/Target-Attack-72EB/README.md.
Reject
In this paper, the authors propose a novel method to improve the transferability of targeted adversarial attacks by enlarging the margin between the targeted logit and non-target logits. Experiments on ImageNet with different methods demonstrated the effectiveness of the method. However, as the reviewers pointed out, there is high overlap between the paper and existing works, which significantly undermines the novelty of the paper. The authors are expected to clarify the novelty and provide more comprehensive evaluations.
train
[ "04OYiCm6jg", "VacpnSA35Nc", "sT14tZIeoddW", "jNXvtEGBFMIS", "sIAr738LGW9", "OyP0Pw8xB5", "IfQv4z1bM6K", "gLRl60iGOpe", "7eZgZ-rivOg", "eqDmaQ47r91", "IDrIEbjfT7Y", "bNMs61pCbvC", "JcXe9C2n82K" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their response and have checked the revised version. I agree with Reviewer WQZv that the change of the current version is falling into a major revision. In particular, I would like to highlight the high overlap between the previous submitted manuscript and the existing work. ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ "7eZgZ-rivOg", "jNXvtEGBFMIS", "nips_2022_9U4gLR_lRP", "sIAr738LGW9", "OyP0Pw8xB5", "JcXe9C2n82K", "bNMs61pCbvC", "IDrIEbjfT7Y", "eqDmaQ47r91", "nips_2022_9U4gLR_lRP", "nips_2022_9U4gLR_lRP", "nips_2022_9U4gLR_lRP", "nips_2022_9U4gLR_lRP" ]
nips_2022_2EQzEE5seF
Adversarially Perturbed Batch Normalization: A Simple Way to Improve Image Recognition
Recently, it has been shown that adversarial training (AT) by injecting adversarial samples can improve the quality of recognition. However, the existing AT methods suffer from performance degradation on benign samples, leading to a gap between robustness and generalization. We argue that this gap is caused by the inaccurate estimation of the Batch Normalization (BN) layer, due to the distributional discrepancy between the training and test set. To bridge this gap, this paper identifies the adversarial robustness against the indispensable noise in BN statistics. In particular, we propose a novel strategy that adversarially perturbs the BN layer, termed ARAPT. The ARAPT leverages the gradients to shift BN statistics and helps models resist the shifted statistics to enhance robustness to noise. Then, we introduce ARAPT into a new paradigm of AT called model-based AT, which strengthens models' tolerance to noise in BN. Experiments indicate that the APART can improve model generalization, leading to significant improvements in accuracy on benchmarks like CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet.
Reject
The paper presents a new way of bridging the gap between models’ generalization and robustness, by combining gradients computed on unperturbed BN statistics with gradients computed on perturbed statistics. The main goal is to improve the standard generalization, but the authors should clarify their definition of "robustness" as it seems to confuse all reviewers (e.g., questioning adversarial attacks). Moreover, the method itself is very simple, and the idea of using adversarial perturbation to stabilize model training isn't new (AdvProp, etc.). Reviewers are further concerned about the lack of large-scale experiments or on state-of-the-art architectures. Besides, there are no comparisons with some of the competing methods such as AdvProp. Therefore, I find no sufficient ground to recommend acceptance in this paper's current shape.
train
[ "RjLNTOrnS7", "kUAUmsa75zZ", "D16-Vn2X-qK", "yI3DIcDWGgb", "XUlT8cWDlgO", "_iUK-AEkbF", "mfNvDcYwESe", "z5zom5iC5d1", "GQufvbUajNF", "BLP-ko1mdm3", "cQ48IUAjIlj", "FsXHRovW6BS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the rebuttal, however, the responses only address parts of my concerns. I feel grateful that the authors have added the robustness experiments on ImageNet-C, while many experiments (e.g., Mixup on ImageNet, more backbones on ImageNet) are still missing for now. In my opinion, experiments for this paper...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "kUAUmsa75zZ", "FsXHRovW6BS", "cQ48IUAjIlj", "cQ48IUAjIlj", "BLP-ko1mdm3", "GQufvbUajNF", "GQufvbUajNF", "nips_2022_2EQzEE5seF", "nips_2022_2EQzEE5seF", "nips_2022_2EQzEE5seF", "nips_2022_2EQzEE5seF", "nips_2022_2EQzEE5seF" ]
nips_2022_LEqYZz7cZOI
Singular Value Fine-tuning: Few-shot Segmentation requires Few-parameters Fine-tuning
Freezing the pre-trained backbone has become a standard paradigm to avoid overfitting in few-shot segmentation. In this paper, we rethink the paradigm and explore a new regime: {\em fine-tuning a small part of parameters in the backbone}. We present a solution to overcome the overfitting problem, leading to better model generalization on learning novel classes. Our method decomposes backbone parameters into three successive matrices via the Singular Value Decomposition (SVD), then {\em only fine-tunes the singular values} and keeps others frozen. The above design allows the model to adjust feature representations on novel classes while maintaining semantic clues within the pre-trained backbone. We evaluate our {\em Singular Value Fine-tuning (SVF)} approach on various few-shot segmentation methods with different backbones. We achieve state-of-the-art results on both Pascal-5$^i$ and COCO-20$^i$ across 1-shot and 5-shot settings. Hopefully, this simple baseline will encourage researchers to rethink the role of backbone fine-tuning in few-shot settings.
Accept
This paper presents a solution to overcome the overfitting problem in few-shot segmentation. Specifically, the proposed method decomposes the backbone parameters into three matrices via singular value decomposition (SVD) and fine-tunes only the singular values, while leaving the others frozen. This allows the model to adjust the feature representation in a new class while maintaining the semantic cues in the pre-trained backbone. All reviewers admit that this paper is well written, and the proposed method is applicable and novel. Furthermore, the authors provide great additional experiments and answers to the reviewers’ concerns. These made all reviewers positive for this paper. The AC agreed with the reviewers that the proposed method would make waves in the few-shot learning paradigm where the parameters of the pre-train model should be frozen. The AC recommends including the results described in the rebuttal for the final camera-ready version.
train
[ "V-Ybl7lwE3U", "hynhqWz7QE", "ysHodz71zOR", "fvmbJiLXmq", "rPuWKhqM9c", "bMreHwSKVDP", "aiBEZFA-9al", "9I8ALjDxLQHj", "DLMKVx2Rch6", "8Ni53g-pq-", "yo4vr2ibeUnV", "__bmyGgw4MaL", "cM6Bn11jmJ", "eRAkE_rPPB2", "5JVWvrZiquY", "XzNUL7mU5JS", "OeVQOCQPl2e", "HjLr_IpBVSz", "dXGQryd7iy1...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_r...
[ " We agree with the reviewer that good performance should be achieved when R=U. The above analysis of 2 and 3 is for random rotation matrix, and does not include the special case of R=U. According to the above results and analysis, we conclude that the choice of R is very important. \n\nFollowing the reviewer's sug...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "fvmbJiLXmq", "cM6Bn11jmJ", "OeVQOCQPl2e", "9I8ALjDxLQHj", "8E_wAQvQjNO", "dXGQryd7iy1", "DLMKVx2Rch6", "8Ni53g-pq-", "yo4vr2ibeUnV", "5JVWvrZiquY", "__bmyGgw4MaL", "eRAkE_rPPB2", "8E_wAQvQjNO", "vCQAVh-_rml", "XzNUL7mU5JS", "XxpRT9q6noL", "HjLr_IpBVSz", "dXGQryd7iy1", "nips_2022...
nips_2022_KCXQ5HoM-fy
Supported Policy Optimization for Offline Reinforcement Learning
Policy constraint methods to offline reinforcement learning (RL) typically utilize parameterization or regularization that constrains the policy to perform actions within the support set of the behavior policy. The elaborative designs of parameterization methods usually intrude into the policy networks, which may bring extra inference cost and cannot take full advantage of well-established online methods. Regularization methods reduce the divergence between the learned policy and the behavior policy, which may mismatch the inherent density-based definition of support set thereby failing to avoid the out-of-distribution actions effectively. This paper presents Supported Policy OpTimization (SPOT), which is directly derived from the theoretical formalization of the density-based support constraint. SPOT adopts a VAE-based density estimator to explicitly model the support set of behavior policy and presents a simple but effective density-based regularization term, which can be plugged non-intrusively into off-the-shelf off-policy RL algorithms. SPOT achieves the state-of-the-art performance on standard benchmarks for offline RL. Benefiting from the pluggable design, offline pretrained models from SPOT can also be applied to perform online fine-tuning seamlessly.
Accept
This work presents an interesting idea of constraining the policy in offline reinforcement learning (RL) to stay within the support set of the behavior policy while effectively avoiding out-of-distribution actions, unlike standard behavior regularization. The proposed Supported Policy OpTimization (SPOT) method leverages the theoretical framework of the density-based support constraint and adopts a VAE-based density estimator to model the support of behavioral actions. Such a simple method indeed allows effective density-based regularization and can be flexibly combined with most standard off-policy RL algorithms. Experiments also show that the proposed algorithm achieves better performance than SOTA offline RL methods. All the reviewers think that the paper is written carefully, with the ideas explained intuitively, and the algorithms tested extensively to showcase the effectiveness of SPOT. Therefore the consensus is to accept this paper for publication at NeurIPS22.
train
[ "iNsk8RUtzV", "63IidCfA4L9", "PpinE77wq0F", "WyvhAnLE_cN", "Z4w8lQIyc2w", "b4M_YuTp0G8", "8Z3fr7-pgvG", "qnYX29-RsnJ", "zYUZN57ZOev", "J9YqTTx39u", "1SZNVnn9tMjx", "4QhzeeRzkMI", "8Xi9NjFlG7d", "Kic2JGjLD98", "9RFg_XsEz2F", "3Wy8C7COq9", "YSlemDntD7o" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We'd like to thank you again for your time and efforts in providing a valuable review and carefully judging our feedback. We really enjoy the communication, and it helps us make our paper better.", " Thanks for the detailed response. Most of my concerns are solved. I will increase my score.", " Dear Reviewer,...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 2, 4 ]
[ "63IidCfA4L9", "PpinE77wq0F", "Kic2JGjLD98", "b4M_YuTp0G8", "J9YqTTx39u", "8Xi9NjFlG7d", "nips_2022_KCXQ5HoM-fy", "YSlemDntD7o", "3Wy8C7COq9", "9RFg_XsEz2F", "4QhzeeRzkMI", "8Xi9NjFlG7d", "Kic2JGjLD98", "nips_2022_KCXQ5HoM-fy", "nips_2022_KCXQ5HoM-fy", "nips_2022_KCXQ5HoM-fy", "nips_...
nips_2022_DhmYYrH_M3m
Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline
Current end-to-end autonomous driving methods either run a controller based on a planned trajectory or perform control prediction directly, which have spanned two separately studied lines of research. Seeing their potential mutual benefits to each other, this paper takes the initiative to explore the combination of these two well-developed worlds. Specifically, our integrated approach has two branches for trajectory planning and direct control, respectively. The trajectory branch predicts the future trajectory, while the control branch involves a novel multi-step prediction scheme such that the relationship between current actions and future states can be reasoned. The two branches are connected so that the control branch receives corresponding guidance from the trajectory branch at each time step. The outputs from two branches are then fused to achieve complementary advantages. Our results are evaluated in the closed-loop urban driving setting with challenging scenarios using the CARLA simulator. Even with a monocular camera input, the proposed approach ranks first on the official CARLA Leaderboard, outperforming other complex candidates with multiple sensors or fusion mechanisms by a large margin. The source code is publicly available at https://github.com/OpenPerceptionX/TCP
Accept
The paper got split reviews: 1x reject, 1x borderline reject, 1x weak accept, 1x accept. All reviewers found the impressive performance on the challenging CARLA leaderboard to be a major strength of the paper. Reviewer concerns stem from two factors: a) not enough technical contribution to warrant publication at NeurIPS (but results are still publication worthy at more domain-specific conferences eg ICRA, IROS), and b) bulk of the impressive performance (19 points) coming from the ensembling heuristic and only 6 points coming from proposed architectural modifications (shared backbone, multi-step control, temporal module and trajectory guided attention). The meta-reviewer read through the paper, the reviews, the author response, and reviewer discussion. For the meta-reviewer, the impressiveness of the empirical results on a well-studied and important benchmark dominates the above reviewer concerns. As long as there is clear attribution and some understanding as to where this impressive performance improvement is coming from, the community will benefit from being aware of the results even though the proposed method may not be as technically deep as typical NeurIPS papers. The authors are encouraged to include the additional experiments conducted during the rebuttal phase into the final version of the paper, in particular the ones that help distill out the contribution of the different parts of the proposed system.
test
[ "VLQgyJR6lxx", "rRmrTi7eQp", "i5jmgA1FPZ", "2H8wWONoaf-", "pFz88kim1DH", "RMceg6fcoUj", "iBcyEmrT68A", "JdsCqKxsVJR", "F5cIuqKUX_X", "fRueL2xtGWn", "5soHQOeZmw", "sIIzUHVrgSC", "amjsb3Bjx9O", "vDuRZrui4R", "yAE_ePKKe6X", "2uoCsQDqU-v", "Dpg9GwhGgXy", "eS4cC7EOJh8", "rTkwPauIJq" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the follow-up discussion.\n\n> Q3: Yes, I have seen the experiments on the fusion weight, but the unexplored part (and probably more critical, given the experiments with alpha values) is the \"situation\" detector. Seems like a very hard-coded rule in an otherwise learned approach.\n\nAgreed. Developin...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 4 ]
[ "rRmrTi7eQp", "vDuRZrui4R", "2H8wWONoaf-", "yAE_ePKKe6X", "JdsCqKxsVJR", "iBcyEmrT68A", "fRueL2xtGWn", "5soHQOeZmw", "nips_2022_DhmYYrH_M3m", "eS4cC7EOJh8", "eS4cC7EOJh8", "Dpg9GwhGgXy", "Dpg9GwhGgXy", "2uoCsQDqU-v", "rTkwPauIJq", "nips_2022_DhmYYrH_M3m", "nips_2022_DhmYYrH_M3m", "...
nips_2022_jAL8Rt7HqB
Adaptive Attention Link-based Regularization for Vision Transformers
Although transformer networks have recently been employed in various vision tasks with outperforming performance, large training data and a lengthy training time are required to train a model to disregard an inductive bias. Using trainable links between the channel-wise spatial attention of a pre-trained Convolutional Neural Network (CNN) and the attention heads of Vision Transformers (ViT), we present a regularization technique to improve the training efficiency of ViT. The trainable links are referred to as the attention augmentation module, which is trained simultaneously with ViT, boosting the training of ViT and allowing it to avoid the overfitting issue caused by a lack of data. From the trained attention augmentation module, we can extract the relevant relationship between each CNN activation map and each ViT attention head, and based on this, we also propose an advanced attention augmentation module. Consequently, even with a small amount of data, the suggested method considerably improves the performance of ViT while achieving faster convergence during training.
Reject
Four reviewers provided detailed feedback on this paper. The authors responded to the reviews and I appreciate the authors' comments and clarifications, specifically that each question/comment is addressed in detail. Additional experiments were also performed. The authors also uploaded a revised version of the paper. After the two discussion periods, one of the four reviewers suggests to reject the paper while three reviewers rate the paper as "weak accept", so no reviewer strongly advocates for acceptance. I considered the reviewers' and authors' comments and also tried to assess the paper directly. I believe that the paper should not be accepted to NeurIPS in its current form. Weaknesses include: * Readability: While at least one reviewer describes part of the paper as "clear and easy to follow", one other reviewer mentions clarity as the main weakness and another reviewer also comments in this direction. I personally found the paper hard to read as well (even after the improvements made in the revision), and I found some of the claims to be fairly generic and partially not well supported. E.g. "resolve the issues of overfitting and lengthy training time of ViT", or "The proposed scheme preserves the original architecture of ViT, which results in its general employment regardless of the architecture of ViT." * Experimental Results: Several questions have been raised regarding the experimental results (e.g. influence of the attention link, choice of hyperparameters). These have been addressed in the discussion, but it seems to me that they were at best partially resolved. * Relation to distillation: The results in the low-data regime rely on learning from a teacher model. This relation to distillation is recognized but somewhat under-explored. This could be a confounding factor in the analysis of the approach. For example, in one response, the authors argue that "However, we believe our source of performance gain is due to transferring CNN's inductive bias with attention". It remained somewhat unclear whether this transfer would also hold when the CNN is not a more powerful teacher model. Strengths include: * The idea of regularizing the global token's attention maps with the CNN activation maps is novel and interesting. * The reported experimental improvements in the low-data regime are interesting. Despite recommending the paper for rejection in its current form, I would like to encourage the authors to continue this line of work and present it again to the community with more focused discussions, insights (and possibly experiments). This is an interesting paper and it was evaluated to be close to (but below) the acceptance threshold.
test
[ "GK2am7ybwFK", "qlNe7hkPzf", "lt76YID-1Ps", "yBuKzhMW_VM", "cW8NYNyP02d", "Hq4ONy50sJs", "B_-4XuUVp8N", "UHK6pmc-Nr", "Jb_YkkGcN7e", "tlfAB9moQzf", "Xg7j2IIn4y", "4iP7wmbEuuK", "cCfTKzuVwO", "HDsJW7_bmq", "zZScJCw4V2y", "ClaTtsIXhAq", "i-O8m0K5Z9AL", "fQMhc4cObUA", "5yQVNosj4Rc",...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official...
[ " The authors appreciate the reviewer for their detailed review of our manuscript and positive feedback. We are happy that our response has addressed your concerns.\n\nBest regards,\n\nAuthors", " Dear Authors,\n\nAfter having read the rebuttal in detail, my concerns have been addressed and I recommend the accept...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "qlNe7hkPzf", "i-O8m0K5Z9AL", "cW8NYNyP02d", "pTJg_iFaJTM", "bhz8LzRV8q0", "Xg7j2IIn4y", "Xg7j2IIn4y", "4iP7wmbEuuK", "pTJg_iFaJTM", "bhz8LzRV8q0", "56HfaXRneGe", "Swgcg0vQvD0", "Swgcg0vQvD0", "Swgcg0vQvD0", "56HfaXRneGe", "56HfaXRneGe", "fQMhc4cObUA", "pTJg_iFaJTM", "bhz8LzRV8q0...
nips_2022_cj6K4IWVomU
Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior
Inverse rendering is an ill-posed problem. Previous work has sought to resolve this by focussing on priors for object or scene shape or appearance. In this work, we instead focus on a prior for natural illuminations. Current methods rely on spherical harmonic lighting or other generic representations and, at best, a simplistic prior on the parameters. We propose a conditional neural field representation based on a variational auto-decoder with a SIREN network and, extending Vector Neurons, build equivariance directly into the network. Using this, we develop a rotation-equivariant, high dynamic range (HDR) neural illumination model that is compact and able to express complex, high-frequency features of natural environment maps. Training our model on a curated dataset of 1.6K HDR environment maps of natural scenes, we compare it against traditional representations, demonstrate its applicability for an inverse rendering task and show environment map completion from partial observations.
Accept
The paper introduces a rotation-equivariant conditional spherical neural field for illumination priors. Reviewers mostly like the novelty of the proposed approach, its fit for the considered task of illumination priors, its technical soundness, and the experimental evaluation, which is thorough and shows the merits of the approach. The rebuttals to the reviewers were also thorough and addressed the reviewers' concerns well. All in all, this is a conceptually and experimentally solid and interesting paper that merits publication at NeurIPS.
train
[ "5Nh617Vxq6F", "BLyyRGmBm6X", "m91U7bas5X", "QPRoW-lEu4w", "QCN8QTSpJRD", "TJ2XkGmm22", "RkkH4MQVTUP", "ofI1LGV2GQc", "KK88f4KUEtB", "XwS44e70ky", "gijX96DtHC2", "0vnYhBVD64m" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and the updated manuscript. The rebuttal clearly addresses my concerns, including ablation without equivariance (at different latent code dimensions as well), and implementation details about the choice of latent code dimensions and resolution of environment maps that can help reproduce...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 2, 4 ]
[ "QPRoW-lEu4w", "m91U7bas5X", "TJ2XkGmm22", "0vnYhBVD64m", "gijX96DtHC2", "RkkH4MQVTUP", "XwS44e70ky", "KK88f4KUEtB", "nips_2022_cj6K4IWVomU", "nips_2022_cj6K4IWVomU", "nips_2022_cj6K4IWVomU", "nips_2022_cj6K4IWVomU" ]
nips_2022_8LE06pFhqsW
E-MAPP: Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance
A critical challenge in multi-agent reinforcement learning (MARL) is for multiple agents to efficiently accomplish complex, long-horizon tasks. The agents often have difficulties in cooperating on common goals, dividing complex tasks, and planning through several stages to make progress. We propose to address these challenges by guiding agents with programs designed for parallelization, since programs as a representation contain rich structural and semantic information, and are widely used as abstractions for long-horizon tasks. Specifically, we introduce Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance (E-MAPP), a novel framework that leverages parallel programs to guide multiple agents to efficiently accomplish goals that require planning over $10+$ stages. E-MAPP integrates the structural information from a parallel program, promotes the cooperative behaviors grounded in program semantics, and improves the time efficiency via a task allocator. We conduct extensive experiments on a series of challenging, long-horizon cooperative tasks in the Overcooked environment. Results show that E-MAPP outperforms strong baselines in terms of the completion rate, time efficiency, and zero-shot generalization ability by a large margin.
Accept
This paper deals with complex long-horizon tasks with multi-agent RL. The authors propose E-MAPP, a method that leverages parallel programs to guide multiple agents with goals to accomplish the task jointly. Generally, this paper presents an interesting idea and has sound technical contributions. The presentation is a bonus point of this paper. The rebuttal mostly eases the concerns of the reviewers. As a result, all the reviewers vote for acceptance of this paper. The major weakness of the proposed method lies in the inconvenience of applying E-MAPP to a new environment or task, since this requires a huge amount of work. Perhaps for this reason, the experiments are conducted on the Overcooked v2 environment only. In sum, I think this is an interesting paper tackling a type of challenging task, and I thus recommend acceptance of this paper.
train
[ "ABHKPGESPQ", "RfikRszirVV", "N_hNxASIfKS", "aBEEmOx2aZy", "hPjFyilPyv2", "S1fHcsbsbg", "dlNHsV0sXK", "C_iJXUI9WuU", "JEtDjQWkHy2", "4d39UPGM5rZ", "JmjhIdXayWV", "hJBx41_JTt9", "MGMt1ghKVa" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your extensive rebuttal and additional experimentations; I especially appreciate the results showing performance in the partially-observed setting. As several of my proposed weaknesses have been addressed I will increase my score to a 6. I struggle to go above this score for many of the same reasons...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "RfikRszirVV", "MGMt1ghKVa", "C_iJXUI9WuU", "MGMt1ghKVa", "MGMt1ghKVa", "hJBx41_JTt9", "hJBx41_JTt9", "hJBx41_JTt9", "JmjhIdXayWV", "nips_2022_8LE06pFhqsW", "nips_2022_8LE06pFhqsW", "nips_2022_8LE06pFhqsW", "nips_2022_8LE06pFhqsW" ]
nips_2022_htM1WJZVB2I
Vision GNN: An Image is Worth Graph of Nodes
Network architecture plays a key role in the deep learning-based computer vision system. The widely-used convolutional neural network and transformer treat the image as a grid or sequence structure, which is not flexible enough to capture irregular and complex objects. In this paper, we propose to represent the image as a graph structure and introduce a new \emph{Vision GNN} (ViG) architecture to extract graph-level features for visual tasks. We first split the image into a number of patches which are viewed as nodes, and construct a graph by connecting the nearest neighbors. Based on the graph representation of images, we build our ViG model to transform and exchange information among all the nodes. ViG consists of two basic modules: the Grapher module with graph convolution for aggregating and updating graph information, and the FFN module with two linear layers for node feature transformation. Both isotropic and pyramid architectures of ViG are built with different model sizes. Extensive experiments on image recognition and object detection tasks demonstrate the superiority of our ViG architecture. We hope this pioneering study of GNN on general visual tasks will provide useful inspiration and experience for future research. The PyTorch code is available at \url{https://github.com/huawei-noah/Efficient-AI-Backbones} and the MindSpore code is available at \url{https://gitee.com/mindspore/models}.
Accept
This paper proposes to explore the graph structure of images by considering patches as nodes, where the graph is constructed by connecting nearest neighbors. Extensive experiments on various visual tasks, i.e., image recognition and object detection have demonstrated the effectiveness of the proposed ViG. All the reviewers agree on the inspiring and promising exploration. The paper is also well-written and the experimental results are impressive.
train
[ "DhbzYk698Y", "fuc26t8aOb", "oeiiz8mSuXR", "0Y-VognFRHL", "T3VlqR7vIJX", "AVpHDIZwe61", "2zpOJaJjYLU", "i6uOZ2fKwUH", "_tIwhPuP2_o" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After seeing other reviewers' comments and the received freedback from the authors, I keep my score as is. The paper is clear and has its clear novelty beyond vision transformer.", " Thanks for the valuable comments. We respond to weaknesses and questions in the following.\n\n> **Q1:**\nBased on my experience, ...
[ -1, -1, -1, -1, -1, 4, 7, 8, 8 ]
[ -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "fuc26t8aOb", "_tIwhPuP2_o", "i6uOZ2fKwUH", "AVpHDIZwe61", "2zpOJaJjYLU", "nips_2022_htM1WJZVB2I", "nips_2022_htM1WJZVB2I", "nips_2022_htM1WJZVB2I", "nips_2022_htM1WJZVB2I" ]
nips_2022_7a2IgJ7V4W
Semi-supervised Vision Transformers at Scale
We study semi-supervised learning (SSL) for vision transformers (ViT), an under-explored topic despite the wide adoption of ViT architectures for different tasks. To tackle this problem, we use an SSL pipeline, consisting of first un/self-supervised pre-training, followed by supervised fine-tuning, and finally semi-supervised fine-tuning. At the semi-supervised fine-tuning stage, we adopt an exponential moving average (EMA)-Teacher framework instead of the popular FixMatch, since the former is more stable and delivers higher accuracy for semi-supervised vision transformers. In addition, we propose a probabilistic pseudo mixup mechanism to interpolate unlabeled samples and their pseudo labels for improved regularization, which is important for training ViTs with weak inductive bias. Our proposed method, dubbed Semi-ViT, achieves performance comparable to or better than the CNN counterparts in the semi-supervised classification setting. Semi-ViT also enjoys the scalability benefits of ViTs and can be readily scaled up to large-size models with increasing accuracies. For example, Semi-ViT-Huge achieves an impressive 80\% top-1 accuracy on ImageNet using only 1\% labels, which is comparable with Inception-v4 using 100\% ImageNet labels. The code is available at https://github.com/amazon-research/semi-vit.
Accept
This paper explores Semi-ViT, a semi-supervised learning approach for vision transformers. Semi-ViT builds on a three-stage pipeline such as SimCLRv2. The authors introduce a probabilistic mixup for the semi-supervised fine-tuning stage, which gives consistent experimental improvements. Semi-ViT shows strong empirical results as it achieves 80% top-1 accuracy on ImageNet using only 1% labels, which is comparable with Inception-v4 using 100% ImageNet labels. Demonstrating that ViT + semi-supervised training can reach 80% top-1 accuracy with 1% of ImageNet labels is novel and of potential interest to the SSL community. I therefore recommend acceptance. However, I would encourage the authors to clarify that the three-stage pipeline is not a contribution of the paper and to focus the novelty on the probabilistic mixup and the experimental study.
test
[ "3pxHGebqe3V", "6gUqPKzFzMM", "8syA9Negenm", "xZo1QIywRB5", "porVH-KIsTa", "VN2h-VVB27ZA", "u7LZRmqPEg9", "qsCJJ1nS7X4", "vRNddoCQ22W", "sKpQmAYiYK1", "J8xafR-ISSs", "Vt1HdGSp6u1" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, thanks for taking your time to read our responses. We have tried our best to answer your questions and address your concerns. Is there still any further confusion or concern we can help you to address? If it is still about the technical novelty, we appreciate if the reviewer could also read the oth...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "8syA9Negenm", "xZo1QIywRB5", "qsCJJ1nS7X4", "u7LZRmqPEg9", "Vt1HdGSp6u1", "J8xafR-ISSs", "sKpQmAYiYK1", "vRNddoCQ22W", "nips_2022_7a2IgJ7V4W", "nips_2022_7a2IgJ7V4W", "nips_2022_7a2IgJ7V4W", "nips_2022_7a2IgJ7V4W" ]
nips_2022_gtCPWaY5bNh
Deep Model Reassembly
In this paper, we explore a novel knowledge-transfer task, termed Deep Model Reassembly (DeRy), for general-purpose model reuse. Given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both hardware resource and performance constraints. The ambitious nature of DeRy inevitably imposes significant challenges, including, in the first place, the feasibility of its solution. We strive to showcase that, through the dedicated paradigm proposed in this paper, DeRy can be made not only possible but practically efficient. Specifically, we conduct the partitions of all pre-trained networks jointly via a cover set optimization, and derive a number of equivalence sets, within each of which the network blocks are treated as functionally equivalent and hence interchangeable. The equivalence sets learned in this way, in turn, enable picking and assembling blocks to customize networks subject to certain constraints, which is achieved via solving an integer program backed up with a training-free proxy to estimate the task performance. The reassembled models give rise to gratifying performances with the user-specified constraints satisfied. We demonstrate that on ImageNet, the best reassembled model achieves 78.6% top-1 accuracy without fine-tuning, which could be further elevated to 83.2% with end-to-end fine-tuning. Our code is available at https://github.com/Adamdad/DeRy.
Accept
This paper proposes an interesting new way to think about how to use a model zoo of pre-trained models: extract swappable modular building blocks from the networks and then stitch them together. To do the former, a cover set optimization method is proposed, and the blocks can then be combined in a way that respects various resource and performance constraints. The idea is both interesting and ambitious, and has the potential to open up various avenues of research if done well. The paper lives up to the task: it is well-written (k5P1, 3qgQ, q2yK), conducts experiments to validate whether such stitched networks can do well, and proposes an intuitive, principled method to extract the blocks (3qgQ, k5P1). The reviewers did express some concerns about scalability/generalizability to other tasks (k5P1, 3qgQ), larger zoos (k5P1, 3qgQ), other architectures (all reviewers), and computation (q2yK), as well as several other potential issues such as limited performance improvements. The authors provided strong rebuttals to these, including some new experiments. At the end of the process, the reviewers were all satisfied that most of the concerns were addressed, and the overall consensus on the paper comes with high scores. Given the potentially high-impact, novel perspective as well as the solid execution, I highly recommend this paper for acceptance.
test
[ "bI9ScNdvKn", "E4_iG2sao3v", "mxPVFQ22pM", "VdiJ8ooV10j", "foaAVaWbzdG", "EYsFvLP2T6Y", "szRgeO3AtEF", "r1mhNcHzmJv", "3jWusBXyE_I", "kZjksFZKLP2", "vD--DKTNL-L", "1o4P2MmYS8a", "6Uw9xMIQA-g", "dwlyT1q7-9k", "XjfgzI9s2s8", "CwuGsPiWqjA" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors and appreciate their effort in improving the manuscript. To summarize, all of my concerns are now well addressed by the authors and hence I am increasing my initial score to Strong Accept.", " Thank you for the detailed response that resolves most of my concerns. As the first effort toward r...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "mxPVFQ22pM", "szRgeO3AtEF", "VdiJ8ooV10j", "EYsFvLP2T6Y", "6Uw9xMIQA-g", "1o4P2MmYS8a", "kZjksFZKLP2", "CwuGsPiWqjA", "CwuGsPiWqjA", "CwuGsPiWqjA", "XjfgzI9s2s8", "XjfgzI9s2s8", "dwlyT1q7-9k", "nips_2022_gtCPWaY5bNh", "nips_2022_gtCPWaY5bNh", "nips_2022_gtCPWaY5bNh" ]
nips_2022_ebuR5LWzkk0
Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks
An off-the-shelf model offered as a commercial service can be stolen by model stealing attacks, posing great threats to the rights of the model owner. Model fingerprinting aims to verify whether a suspect model is stolen from the victim model, and has gained more and more attention nowadays. Previous methods always leverage transferable adversarial examples as the model fingerprint, which is sensitive to adversarial defense or transfer learning scenarios. To address this issue, we instead consider the pairwise relationship between samples and propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC). Specifically, we present SAC-w, which selects wrongly classified normal samples as model inputs and calculates the mean correlation among their model outputs. To reduce the training time, we further develop SAC-m, which selects CutMix-augmented samples as model inputs, without the need for training surrogate models or generating adversarial examples. Extensive results validate that SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning, and detects the stolen models with the best performance in terms of AUC across different datasets and model architectures. The code is available at https://github.com/guanjiyang/SAC.
Accept
The reviewers agreed that the proposed method and validation overall are a good contribution. We urge the authors to update their paper to reflect the discussed clarifications, e.g., regarding the threat models in use.
train
[ "SemNRKkXLA-", "MqcMfWeGMx2", "nZXw-maTc0E", "vdr-fHSs4S5", "o8qWSaS0Tym", "925tQrtiS2", "DZq7fLYPBn", "wnByfUKTWT4", "qPmjf9ft3EP", "QPq-D_G_fT", "_eCKDlIUmTn", "zrEL33sttL", "sXRdtMW-5-Y", "kGN31Bvybdd", "NqpsASREIvR" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your support, the detailed reviews, and the suggestions for improvement!", " Thank you very much for your efforts in addressing these concerns. I maintain my rating and lean to accept this paper.", " Dear reviewer, \n\nThanks again for your thoughtful review. Does our response address your qu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "MqcMfWeGMx2", "_eCKDlIUmTn", "sXRdtMW-5-Y", "o8qWSaS0Tym", "925tQrtiS2", "NqpsASREIvR", "wnByfUKTWT4", "qPmjf9ft3EP", "NqpsASREIvR", "kGN31Bvybdd", "zrEL33sttL", "sXRdtMW-5-Y", "nips_2022_ebuR5LWzkk0", "nips_2022_ebuR5LWzkk0", "nips_2022_ebuR5LWzkk0" ]
nips_2022_xL8sFkkAkw
Towards Theoretically Inspired Neural Initialization Optimization
Automated machine learning has been widely explored to reduce human efforts in designing neural architectures and looking for proper hyperparameters. In the domain of neural initialization, however, similar automated techniques have rarely been studied. Most existing initialization methods are handcrafted and highly dependent on specific architectures. In this paper, we propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network. Specifically, GradCosine is the cosine similarity of sample-wise gradients with respect to the initialized parameters. By analyzing the sample-wise optimization landscape, we show that both the training and test performance of a network can be improved by maximizing GradCosine under a gradient norm constraint. Based on this observation, we further propose the neural initialization optimization (NIO) algorithm. Generalized from the sample-wise analysis into the real batch setting, NIO is able to automatically look for a better initialization with negligible cost compared with the training time. With NIO, we improve the classification performance of a variety of neural architectures on CIFAR-10, CIFAR-100, and ImageNet. Moreover, we find that our method can even help to train the large vision Transformer architecture without warmup.
Accept
The paper introduces a new procedure to initialize the optimization in the training process of DNN models, including the recent ViT architecture. All the reviewers recommend acceptance and appreciate the promising empirical results backed by the strong theoretical foundations. The AC recommends acceptance as well.
test
[ "ZGxmr3h_hns", "6rfdopQ8um4", "Lk1BmkA5QMc", "mIrezaM4l_I", "KsM-p-pn3_t", "5sWj80CTSh", "9mHbJHbrK-y", "2gd-f-2IYgh", "AgKuBcv1BFf", "fMDsKy4Jfc", "Ns4bRDT3ySS", "PSlLuTudKeb", "Om0dU8txmM-", "hHD-oBKDM5_", "a8HrRb-465T", "Txm4DVqI58A", "I_Zls_qdBc6", "URG-X4GXfRX" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for appreciating our work. We are sorry that there are only a few hours left before the discussion deadline. We will be in a hurry for the revision version because we also need to consider how to fit in 9 pages after the revision and adding more results and discussions. But we would like to summarize the r...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "6rfdopQ8um4", "AgKuBcv1BFf", "2gd-f-2IYgh", "KsM-p-pn3_t", "5sWj80CTSh", "9mHbJHbrK-y", "PSlLuTudKeb", "fMDsKy4Jfc", "a8HrRb-465T", "Ns4bRDT3ySS", "URG-X4GXfRX", "I_Zls_qdBc6", "hHD-oBKDM5_", "Txm4DVqI58A", "nips_2022_xL8sFkkAkw", "nips_2022_xL8sFkkAkw", "nips_2022_xL8sFkkAkw", "n...
nips_2022_QrK0WDLVHZt
Optimal Gradient Sliding and its Application to Optimal Distributed Optimization Under Similarity
We study structured convex optimization problems, with additive objective $r:=p + q$, where $r$ is ($\mu$-strongly) convex, $q$ is $L_q$-smooth and convex, and $p$ is $L_p$-smooth, possibly nonconvex. For such a class of problems, we propose an inexact accelerated gradient sliding method that can skip the gradient computation for one of these components while still achieving optimal complexity of gradient calls of $p$ and $q$, that is, $\mathcal{O}(\sqrt{L_p/\mu})$ and $\mathcal{O}(\sqrt{L_q/\mu})$, respectively. This result is much sharper than the classic black-box complexity $\mathcal{O}(\sqrt{(L_p+L_q)/\mu})$, especially when the difference between $L_p$ and $L_q$ is large. We then apply the proposed method to solve distributed optimization problems over master-worker architectures, under agents' function similarity, due to statistical data similarity or otherwise. The distributed algorithm achieves for the first time lower complexity bounds on both communication and local gradient calls, with the former having been a long-standing open problem. Finally, the method is extended to distributed saddle-point problems (under function similarity) by means of solving a class of variational inequalities, achieving lower communication and computation complexity bounds.
Accept
The paper extends gradient sliding to the situation where both functions are smooth and the sum is strongly convex. The resulting algorithm is then applied to distributed optimization settings under similarity assumptions, where it jointly achieves optimal gradient-evaluation and communication complexities, improving on prior complexity bounds by logarithmic factors. Initially, the reviewers were unclear about the motivation and construction of the algorithm, as well as the significance of the theoretical results. However, through extensive discussion, most of the issues were clarified to the satisfaction of the reviewers. Consequently, I recommend acceptance of the paper and urge the authors to carefully incorporate all the clarifications from their rebuttal into the camera-ready paper. In addition, please provide an accurate answer (either yes or no) to question 3a in the reproducibility checklist.
train
[ "jbp5JCocAB", "YAw2pncJ2k0", "PWS13jW8hrW", "doy3gPHkLK", "eOKzjRUkgzO", "IC_zi4a9IAK", "S6uwaEK-To", "mz4HIW79FZ6", "34z5ZUuKTpS", "LtobKKzTxEy", "5Fhzr9oYh87T", "-9eSCIFk4_z", "oE2AxN_wQD", "MqKLv3GFmCo", "XFPWSS9RP9", "e2eWwStdvc7", "Wl-m5TaTRup", "n_zdf4pi6t-", "3252QHkc7NS",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", ...
[ " We greatly thank Reviewer **21CM** for the response, important comments, and positive final feedback!", " Thanks for the detailed reply. I do not have any more questions. I would like to raise my score. ", " Thank you for the response!\n\nAt the moment we are discussing this with Reviewer **disq**.\nPlease, r...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 2 ]
[ "YAw2pncJ2k0", "rh9efOzqPbr", "doy3gPHkLK", "y-8DB4NSnRl", "mz4HIW79FZ6", "34z5ZUuKTpS", "LtobKKzTxEy", "KrL9Kg2Glzd", "aqFyn9WBywqT", "qNHVaZv3kNi", "nips_2022_QrK0WDLVHZt", "MqKLv3GFmCo", "e2eWwStdvc7", "b_mc53LVoAk", "Wl-m5TaTRup", "3252QHkc7NS", "n_zdf4pi6t-", "F7iPpsR-9-", "...
nips_2022_Y4vT7m4e3d
Decentralized Local Stochastic Extra-Gradient for Variational Inequalities
We consider distributed stochastic variational inequalities (VIs) on unbounded domains with the problem data that is heterogeneous (non-IID) and distributed across many devices. We make a very general assumption on the computational network that, in particular, covers the settings of fully decentralized calculations with time-varying networks and centralized topologies commonly used in Federated Learning. Moreover, multiple local updates on the workers can be made for reducing the communication frequency between the workers. We extend the stochastic extragradient method to this very general setting and theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone (when a Minty solution exists) settings. The provided rates explicitly exhibit the dependence on network characteristics (e.g., mixing time), iteration counter, data heterogeneity, variance, number of devices, and other standard parameters. As a special case, our method and analysis apply to distributed stochastic saddle-point problems (SPP), e.g., to the training of Deep Generative Adversarial Networks (GANs) for which decentralized training has been reported to be extremely challenging. In experiments for the decentralized training of GANs we demonstrate the effectiveness of our proposed approach.
Accept
The paper studies decentralized local stochastic extra-gradient for variational inequalities. An extra-gradient method is developed for this problem. Theoretical results are established and complemented by simulations. While there were some concerns about the novelty of the work in the initial review, the authors adequately addressed these comments in their response. While a number of typos were present in the paper, I believe that these can be addressed as a minor revision in the final version. I do encourage the authors to carefully proofread their camera ready submission. The work is of interest to a part of the conference audience and should be accepted.
val
[ "ge54pU1J13", "Qa17JRmBc4J", "qdRDmLAGMY", "uObtIVRaBoT", "XJ7DSdAmDfbT", "S8tFexfaKXyw", "7eZ1zkwJlg", "eqAhRRstvfL", "S5Z8vNsCscv", "TdI3wL1pUfe", "WOl2gleeOGh", "w2ckSbpKWc", "hQtcddfGG8", "zl6qj2PXtaT", "UMI4QsA_nSE" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are very grateful to Reviewer **dmBR** for the response! We are especially grateful for the careful handling of our text! Using Reviewer's response, we tried to make our paper better.\n\n> **The current algorithm includes diffusion strategies on the clients. It has been known diffusion strategies work in distr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 3 ]
[ "uObtIVRaBoT", "qdRDmLAGMY", "TdI3wL1pUfe", "S5Z8vNsCscv", "nips_2022_Y4vT7m4e3d", "7eZ1zkwJlg", "eqAhRRstvfL", "UMI4QsA_nSE", "zl6qj2PXtaT", "hQtcddfGG8", "w2ckSbpKWc", "nips_2022_Y4vT7m4e3d", "nips_2022_Y4vT7m4e3d", "nips_2022_Y4vT7m4e3d", "nips_2022_Y4vT7m4e3d" ]
nips_2022_CZNFw38dDDS
P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting
Nowadays, pre-training big models on large-scale datasets has become a crucial topic in deep learning. The pre-trained models with high representation ability and transferability achieve great success and dominate many downstream tasks in natural language processing and 2D vision. However, it is non-trivial to promote such a pretraining-tuning paradigm to 3D vision, given the limited training data that are relatively inconvenient to collect. In this paper, we provide a new perspective on leveraging pre-trained 2D knowledge in the 3D domain to tackle this problem, tuning pre-trained image models with the novel Point-to-Pixel prompting for point cloud analysis at a minor parameter cost. Following the principle of prompt engineering, we transform point clouds into colorful images with geometry-preserved projection and geometry-aware coloring to adapt to pre-trained image models, whose weights are kept frozen during the end-to-end optimization of point cloud analysis tasks. We conduct extensive experiments to demonstrate that, cooperating with our proposed Point-to-Pixel Prompting, a better pre-trained image model will lead to consistently better performance in 3D vision. Enjoying the prosperous development of the image pre-training field, our method attains 89.3% accuracy on the hardest setting of ScanObjectNN, surpassing conventional point cloud models with much fewer trainable parameters. Our framework also exhibits very competitive performance on ModelNet classification and ShapeNet Part Segmentation. Code is available at https://github.com/wangzy22/P2P.
Accept
The paper presents a method of prompt tuning to transfer 2D pre-trained weights to 3D understanding problems. All reviewers are positive about the novelty of the method. Reviewer xwSJ still expects higher performance when large 2D pre-trained models are used, which is also a reasonable comment. Evaluation on other 3D understanding tasks, such as segmentation and detection of outdoor scenes, is strongly encouraged, as these reflect the true needs of industry.
train
[ "wfj_ioPAs5Gl", "sqxz9DtUOVt", "5otA25gZuV0", "wXY8HOp-0c4", "eO6Fqr4bZFp", "kqe3QaivO0Y", "6q_BZXt02KxE", "YZmrhk1BSG", "ubT1Uqey4z", "3IImbi3g0GB", "s2-jiHxiMt3", "w3Lv1dxR95Z", "GoAQDbVo5CE", "wggKVfpXQ5I", "GvGKoJFSsb0", "Nj0LjORO4JM", "YqQZsJe3YaT", "Gt9U0hrzKne", "Dyyhuw-Vb...
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for upgrading your score and providing valuable feedback. We will update our revised paper according to our discussions. Thank you again for your insightful and constructive suggestions that improve paper quality!", " Thanks for your responses, which I believe are reasonable. Considering that these impor...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 5, 3 ]
[ "sqxz9DtUOVt", "kqe3QaivO0Y", "YZmrhk1BSG", "ubT1Uqey4z", "GoAQDbVo5CE", "6q_BZXt02KxE", "s2-jiHxiMt3", "Dyyhuw-Vbo", "3IImbi3g0GB", "Gt9U0hrzKne", "w3Lv1dxR95Z", "YqQZsJe3YaT", "Nj0LjORO4JM", "GvGKoJFSsb0", "nips_2022_CZNFw38dDDS", "nips_2022_CZNFw38dDDS", "nips_2022_CZNFw38dDDS", ...
nips_2022_NQFFNdsOGD
Your Transformer May Not be as Powerful as You Expect
Relative Positional Encoding (RPE), which encodes the relative distance between any pair of tokens, is one of the most successful modifications to the original Transformer. As far as we know, theoretical understanding of the RPE-based Transformers is largely unexplored. In this work, we mathematically analyze the power of RPE-based Transformers regarding whether the model is capable of approximating any continuous sequence-to-sequence functions. One may naturally assume the answer is in the affirmative---RPE-based Transformers are universal function approximators. However, we present a negative result by showing there exist continuous sequence-to-sequence functions that RPE-based Transformers cannot approximate no matter how deep and wide the neural network is. One key reason lies in that most RPEs are placed in the softmax attention that always generates a right stochastic matrix. This restricts the network from capturing positional information in the RPEs and limits its capacity. To overcome the problem and make the model more powerful, we first present sufficient conditions for RPE-based Transformers to achieve universal function approximation. With the theoretical guidance, we develop a novel attention module, called Universal RPE-based (URPE) Attention, which satisfies the conditions. Therefore, the corresponding URPE-based Transformers become universal function approximators. Extensive experiments covering typical architectures and tasks demonstrate that our model is parameter-efficient and can achieve superior performance to strong baselines in a wide range of applications. The code will be made publicly available at https://github.com/lsj2408/URPE.
Accept
This paper studies relative positional encoding (RPE) based Transformers. The authors present a negative result that there exist continuous sequence-to-sequence functions that RPE-based Transformers cannot approximate (irrespective of the depth and width of the network). The authors then propose a novel attention module, called Universal RPE-based (URPE) Attention, which resolves this problem and shows superior performance on a wide range of applications. There is a strong consensus among the reviewers that the paper is technically solid, novel, well-motivated and has good practical applications. I agree with the reviewers and recommend acceptance.
train
[ "kmIhJGXk3B", "YIlkYrGKH4z", "VBORkb5LxR7", "1aY_Bc4QiP1", "KfWrSZHaD-", "gJ0GvGNF17dh", "GQcMMKhC0q3", "JI7fG4WY77L", "5yosMZcm_W", "mUZQkCHm4YI", "Dxl0lYEnx_z", "lCVBF1aW-_Y", "pdA4B0WIkU", "MDPWUjwa6DE", "UDqgijb7kqQ", "KXCso_8veZB" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your appreciation of our work! Your feedback is insightful to help us improve our paper. Thanks!", " Thanks for the authors' responses. Overall, I think this is a practical method with good theoretical proof. The paper writing, mathematical analysis, and experiments on kinds of modali...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3, 1 ]
[ "YIlkYrGKH4z", "gJ0GvGNF17dh", "KfWrSZHaD-", "nips_2022_NQFFNdsOGD", "mUZQkCHm4YI", "lCVBF1aW-_Y", "pdA4B0WIkU", "pdA4B0WIkU", "MDPWUjwa6DE", "UDqgijb7kqQ", "KXCso_8veZB", "nips_2022_NQFFNdsOGD", "nips_2022_NQFFNdsOGD", "nips_2022_NQFFNdsOGD", "nips_2022_NQFFNdsOGD", "nips_2022_NQFFNds...
nips_2022_2ktj0977QGO
Multi-Instance Causal Representation Learning for Instance Label Prediction and Out-of-Distribution Generalization
Multi-instance learning (MIL) deals with objects represented as bags of instances and can predict instance labels from bag-level supervision. However, significant performance gaps exist between instance-level MIL algorithms and supervised learners since the instance labels are unavailable in MIL. Most existing MIL algorithms tackle the problem by treating multi-instance bags as harmful ambiguities and predicting instance labels by reducing the supervision inexactness. This work studies MIL from a new perspective by considering bags as auxiliary information, and utilizes it to identify instance-level causal representations from bag-level weak supervision. We propose the CausalMIL algorithm, which not only excels at instance label prediction but also provides robustness to distribution change by synergistically integrating MIL with an identifiable variational autoencoder. Our approach is based on a practical and general assumption: the prior distribution over the instance latent representations belongs to the non-factorized exponential family conditioning on the multi-instance bags. Experiments on synthetic and real-world datasets demonstrate that our approach significantly outperforms various baselines on instance label prediction and out-of-distribution generalization tasks.
Accept
The paper studies multiple instance learning (MIL) by treating bags as auxiliary information, aiming to identify invariant causal representations using only the bag labels available in the MIL setting. To achieve identifiability, it is assumed that the prior distribution over the instance latent variables belongs to the non-factorized exponential family conditioning on the bags. This allows the disentanglement between the causal and non-causal factors, and only the causal ones are supposed to contribute to the instance labels (while the bag-level labels are used in the proposed objective function in Eq. 8 to accommodate the MIL setting). Experiments are conducted on multiple datasets to demonstrate the instance prediction and out-of-distribution generalization performance of the proposed TargetedMIL algorithm. The perspective of learning invariant causal representations is new in the context of multiple instance learning. Reviewers have acknowledged this interesting aspect of the proposed work. Authors and reviewers engaged in a detailed discussion, and the authors' rebuttal helped to address some major confusions, which further improved the quality of the paper. The authors are encouraged to more clearly highlight the key differences from two important references in the final version of the paper, including identifiable VAEs and multiple instance VAE, which are relevant to the proposed work. The causal-inference-related assumption could also be further clarified as suggested by one reviewer.
val
[ "JuI44NupTW4", "jjanCzuIkU", "0CU0jl98tcR", "BeYw_NUCHdF", "Bci25bSSIe2", "XLyWgb7mYlX", "gHC_YfPibF", "FavpOhwP8xm", "d0DKQ9Bb7F8", "-9_g5jGXKNJ", "MeuO5FtuWtI", "jsvfpG9LnTq", "N5hQmfQi3AZ", "bvTLqovApN_", "v36CVg-7opp", "md4bbzox3t8", "yTq-NdOq5CM", "ZpFNe_I7OY" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for reading our responses and raising the score! \n\nWe will certainly incorporate the discussions into the manuscript. And also, thanks very much for the dataset recommendations; we will run experiments with the suggested datasets in the future.\n\n**Q: Did you tune the hyperparameters for th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 4 ]
[ "jjanCzuIkU", "-9_g5jGXKNJ", "XLyWgb7mYlX", "Bci25bSSIe2", "jsvfpG9LnTq", "bvTLqovApN_", "FavpOhwP8xm", "N5hQmfQi3AZ", "nips_2022_2ktj0977QGO", "v36CVg-7opp", "v36CVg-7opp", "ZpFNe_I7OY", "md4bbzox3t8", "yTq-NdOq5CM", "nips_2022_2ktj0977QGO", "nips_2022_2ktj0977QGO", "nips_2022_2ktj0...
nips_2022_Siv3nHYHheI
Online Training Through Time for Spiking Neural Networks
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models. Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency. Particularly, backpropagation through time (BPTT) with surrogate gradients (SG) is popularly used to enable models to achieve high performance in a very small number of time steps. However, this comes at the cost of large memory consumption for training, a lack of theoretical clarity for optimization, and inconsistency with the online property of biological learning rules and the rules on neuromorphic hardware. Other works connect the spike representations of SNNs with equivalent artificial neural network formulations and train SNNs by gradients from equivalent mappings to ensure descent directions, but they fail to achieve low latency and are also not online. In this work, we propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning by tracking presynaptic activities and leveraging instantaneous loss and gradients. Meanwhile, we theoretically analyze and prove that the gradients of OTTT can provide a similar descent direction for optimization as gradients from the equivalent mapping between spike representations under both feedforward and recurrent conditions. OTTT only requires constant training memory costs agnostic to the number of time steps, avoiding the significant memory costs of BPTT for GPU training. Furthermore, the update rule of OTTT is in the form of three-factor Hebbian learning, which could pave a path for online on-chip learning. With OTTT, the two mainstream supervised SNN training methods, BPTT with SG and spike representation-based training, are connected for the first time, and in a biologically plausible form. Experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS demonstrate the superior performance of our method on large-scale static and neuromorphic datasets in a small number of time steps. Our code is available at https://github.com/pkuxmq/OTTT-SNN.
Accept
The authors propose an online training algorithm (OTTT) for spiking neural networks (SNNs) using eligibility traces and instantaneous loss values. They show empirically that this method performs better than previous ones in feed-forward spiking neural networks. All reviewers agree that the empirical results are impressive and that the method is interesting for neuromorphic hardware. The authors also provide a mathematical analysis of the learning method.
Weaknesses:
- Networks are mostly applied to static tasks, while more temporal tasks are potentially more interesting for SNNs
- Comparison to previously proposed methods is missing
In general, a very interesting and strong paper. I propose acceptance.
test
[ "oLKVV80Nozw", "thmBtJmnP2n", "x1cpu2CSdHK", "wA3T4kFjAuV", "VitYfTW-Vxe", "-w21uEDkDN7", "B_LH-4NCNx", "DgINWVYhXZ", "rT7sttFiw-", "tv0b3LzKyUq", "U3BosS7lzm2", "2DoPlOk4XNe", "SzWqUlzjrF", "xLDk7a-OJDu", "PQNAUilIClb", "EvOlY615l4W", "WgB_bPmVz-a", "JejHTho56Vh", "evzCJ1zUn5W" ...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the valuable suggestion and we will clarify this in the following revision. Yes, for each input sample, the network is reset at time step 0, and at each discrete time step $t$ the input at time step $t$ is passed to the network, with total $T$ time steps. For static images, the input at all time ste...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "x1cpu2CSdHK", "wA3T4kFjAuV", "xLDk7a-OJDu", "SzWqUlzjrF", "-w21uEDkDN7", "evzCJ1zUn5W", "nips_2022_Siv3nHYHheI", "evzCJ1zUn5W", "evzCJ1zUn5W", "evzCJ1zUn5W", "evzCJ1zUn5W", "JejHTho56Vh", "JejHTho56Vh", "JejHTho56Vh", "WgB_bPmVz-a", "WgB_bPmVz-a", "nips_2022_Siv3nHYHheI", "nips_20...
nips_2022_y5ziOXtKybL
Asymptotic Properties for Bayesian Neural Network in Besov Space
Neural networks have shown great predictive power when dealing with various unstructured data such as images and natural languages. The Bayesian neural network captures the uncertainty of prediction by putting a prior distribution on the parameters of the model and computing the posterior distribution. In this paper, we show that the Bayesian neural network using a spike-and-slab prior has consistency with a nearly minimax convergence rate when the true regression function is in the Besov space. Even when the smoothness of the regression function is unknown, the same posterior convergence rate holds, and thus the spike-and-slab prior is adaptive to the smoothness of the regression function. We also consider the shrinkage prior, which is more feasible than other priors, and show that it has the same convergence rate. In other words, we propose a practical Bayesian neural network with guaranteed asymptotic properties.
Accept
This work conducts a novel study and extends the results on asymptotic convergence of Bayesian ReLU networks from the Hölder space to the more general Besov space. The reviewers consider it "a strong theoretical result closing a gap for posterior contraction of BNN in Besov spaces". The authors' feedback addressed a few concerns in the initial reviews, including the lack of clarity, the question on the technical challenges of extending to a Besov space, and an error in the proof. During the author-reviewer discussion period, the authors also corrected a constant which determines the complexity of the neural network model. As a result, the condition of the theory became "harsh" (see authors' response to reviewer x58D), and the numerical results no longer satisfied the theory's requirement. The authors provided an updated version of the paper to include the change and moved the experiments to the appendix. Nonetheless, the reviewers did not think that change decreased its theoretical value and still considered it above the threshold for acceptance due to its novelty. The remaining concerns from the reviewers are the lack of evaluations for the non-smoothness assumption and its usage in some real applications, as well as the possible difference between a purposely designed fixed prior and a learnable prior.
train
[ "MU0W4dh2stK", "X1xWcqGvks", "v43t0jJ07vJ", "gusvobF1w4a", "C9EXjBLuqSj", "AIMZ-WrVGHS", "i3wTSVlBbWN", "75nQhJ6gQL8", "S8Ougl3jOfM", "hdXKYNRtVYh", "UU1Amylx77", "Y5z3ztdZWOM" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for this clarification. It is clear now. ", " Thanks for your constructive questions.\n\n## Q. \n\n> The question about the learnable hyperparameters is about the prior derived in the paper, which seems to be a fixed prior. However, we normally set a learnable prior to practice. Will this le...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "gusvobF1w4a", "v43t0jJ07vJ", "i3wTSVlBbWN", "C9EXjBLuqSj", "S8Ougl3jOfM", "nips_2022_y5ziOXtKybL", "hdXKYNRtVYh", "UU1Amylx77", "Y5z3ztdZWOM", "nips_2022_y5ziOXtKybL", "nips_2022_y5ziOXtKybL", "nips_2022_y5ziOXtKybL" ]
nips_2022_iKKfdIm81Jt
Planning for Sample Efficient Imitation Learning
Imitation learning is a class of promising policy learning algorithms that is free from many practical issues of reinforcement learning, such as the reward design issue and the exploration hardness. However, current imitation algorithms struggle to achieve both high performance and high in-environment sample efficiency simultaneously. Behavioral Cloning (BC) does not need in-environment interactions, but it suffers from the covariate shift problem, which harms its performance. Adversarial Imitation Learning (AIL) turns imitation learning into a distribution matching problem. It can achieve better performance on some tasks, but it requires a large number of in-environment interactions. Inspired by the recent success of EfficientZero in RL, we propose EfficientImitate (EI), a planning-based imitation learning method that can achieve high in-environment sample efficiency and performance simultaneously. Our algorithmic contribution in this paper is two-fold. First, we extend AIL into MCTS-based RL. Second, we show that the two seemingly incompatible classes of imitation algorithms (BC and AIL) can be naturally unified under our framework, enjoying the benefits of both. We benchmark our method not only on the state-based DeepMind Control Suite, but also on the image version, which many previous works find highly challenging. Experimental results show that EI achieves state-of-the-art results in performance and sample efficiency. EI shows over 4x gain in performance in the limited-sample setting on state-based and image-based tasks and can solve challenging problems like Humanoid, where previous methods fail with a small amount of interactions.
Accept
This paper introduces a simple approach that improves the sample efficiency of model-based RL for continuous control tasks. The proposed approach, EfficientImitate, builds on EfficientZero and uses a hybrid BC-AIL training scheme. The contribution is relatively simple and is shown through satisfactory experiments to give a substantial sample efficiency boost. The paper is clear and appropriately contextualizes its contribution. All reviewers found the paper to be clear, novel, technically sound, and empirically well validated. The results show that the innovations constitute a meaningful contribution to a fairly general problem class. In initial reviews, two of the three reviewers mentioned that the method is not demonstrated on discrete-action problems. Personally, I wouldn't have seen this as a major concern, since continuous control problems are a large problem class. However, the authors replied with a comment indicating that the method works on LunarLander (a discrete-action Gym environment). It is unclear if this addition was cherry-picked among discrete environments, and it is also unclear to me if the authors will add this to the paper. Nevertheless, as noted, I don't find this to be a major gap in the paper as it currently stands. Given the sufficiently positive reviews, level of review agreement, and my own reading, I endorse this paper for acceptance.
train
[ "0svlHjnEb3b", "BL9UN01uWR", "5jQIJlNAe_k", "VZbL-4wEEf", "0NQ-VCh5Bex", "kOqy2Fzo9Ds", "Mj13WOQOia", "VbPH5LO8aVC" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response.\n\nYour explanations make sense. I would encourage you to propagate them to the paper in order to help the readers also to build up the intuitions that you have.\n\nThank you for conducting additional experiments.", " Thanks for the response. The additional discussions and visualizat...
[ -1, -1, -1, -1, -1, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "0NQ-VCh5Bex", "5jQIJlNAe_k", "VbPH5LO8aVC", "Mj13WOQOia", "kOqy2Fzo9Ds", "nips_2022_iKKfdIm81Jt", "nips_2022_iKKfdIm81Jt", "nips_2022_iKKfdIm81Jt" ]
nips_2022_nE8IJLT7nW-
Peripheral Vision Transformer
Human vision possesses a special type of visual processing system called peripheral vision. By partitioning the entire visual field into multiple contour regions based on the distance to the center of our gaze, peripheral vision provides us the ability to perceive various visual features in different regions. In this work, we take a biologically inspired approach and explore modeling peripheral vision in deep neural networks for visual recognition. We propose to incorporate peripheral position encoding into the multi-head self-attention layers to let the network learn to partition the visual field into diverse peripheral regions given training data. We evaluate the proposed network, dubbed PerViT, on ImageNet-1K and systematically investigate the inner workings of the model for machine perception, showing that the network learns to perceive visual data similarly to the way that human vision does. The performance improvements in image classification over the baselines across different model sizes demonstrate the efficacy of the proposed method.
Accept
The paper proposes a transformer architecture that models human-like peripheral vision. Experiment results show it achieves good performance. All the reviewers consider the paper above the bar. They like the novelty and the strong empirical performance. The AC finds no reason to object.
train
[ "gRhmhLejDe", "VAzHPQyi5kA", "FCn4erTKtVh", "qmWH2AiC4jA", "uhQLGTn-nMA", "8-2Gr-7BCBJ", "tirRpvy69aM", "zjit52bYJpz", "HNLttS6A7GO", "P6VF4FyZ0fv", "LPSF9yFx0To", "oUWfi5FLbL0", "MvNbBKrlnw0u", "NVwVNq3niaB", "tFa302Plyji", "fVUXM17N8A6", "lRp_nQ9Ggtc", "5dV3nUbi2-G", "GEALQhg2A...
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We truly appreciate the positive evaluations and will do our best to reflect the comments as much as possible.", " Thank you authors for your message and flagging the missing score change. I just upgraded my score to reflect the changes.\n\nBest wishes.", " We again thank the reviewer for the professional, in...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "VAzHPQyi5kA", "qmWH2AiC4jA", "HNLttS6A7GO", "8-2Gr-7BCBJ", "nips_2022_nE8IJLT7nW-", "MvNbBKrlnw0u", "P6VF4FyZ0fv", "MvNbBKrlnw0u", "LPSF9yFx0To", "fVUXM17N8A6", "NVwVNq3niaB", "nips_2022_nE8IJLT7nW-", "B13llK6vyqX", "5dV3nUbi2-G", "lRp_nQ9Ggtc", "GEALQhg2Ay", "nips_2022_nE8IJLT7nW-"...
nips_2022_QzFJmwwBMd
ZARTS: On Zero-order Optimization for Neural Architecture Search
Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS due to its high efficiency. It introduces trainable architecture parameters to represent the importance of candidate operations and proposes first/second-order approximation to estimate their gradients, making it possible to solve NAS by gradient descent algorithm. However, our in-depth empirical results show that the approximation often distorts the loss landscape, leading to the biased objective to optimize and, in turn, inaccurate gradient estimation for architecture parameters. This work turns to zero-order optimization and proposes a novel NAS scheme, called ZARTS, to search without enforcing the above approximation. Specifically, three representative zero-order optimization methods are introduced: RS, MGS, and GLD, among which MGS performs best by balancing the accuracy and speed. Moreover, we explore the connections between RS/MGS and gradient descent algorithm and show that our ZARTS can be seen as a robust gradient-free counterpart to DARTS. Extensive experiments on multiple datasets and search spaces show the remarkable performance of our method. In particular, results on 12 benchmarks verify the outstanding robustness of ZARTS, where the performance of DARTS collapses due to its known instability issue. Also, we search on the search space of DARTS to compare with peer methods, and our discovered architecture achieves 97.54% accuracy on CIFAR-10 and 75.7% top-1 accuracy on ImageNet. Finally, we combine our ZARTS with three orthogonal variants of DARTS for faster search speed and better performance. Source code will be made publicly available at: https://github.com/vicFigure/ZARTS.
Accept
This paper aims to solve the instability issues of differentiable architecture search (DARTS) using zero-order optimization. Three different optimization techniques are proposed and their efficacy is demonstrated successfully on several benchmark datasets and different variants of DARTS. Although there are some concerns regarding the computational complexity of zero-order optimization, the reviewers have found the contribution of this submission significant for acceptance at NeurIPS. Given this, we are happy to recommend acceptance.
train
[ "FCfSK2BccN9", "Z3ijPUw5-y3b", "lJ2fbTsOUL", "JHcxZ1CrWCW", "_0kY_GlMofa", "OoCAViMMr9d", "CSQ8Vju7qbE", "J0RqUeAMpG" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed response, most of my concerns are addressed. I have raised my rating to borderline accept.", " Thank you for your thorough and valuable comments. We answer your questions as follows in the hope of resolving your concerns.\n\n**Q1: ZARTS is more time-consuming than other DARTS-based meth...
[ -1, -1, -1, -1, -1, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "JHcxZ1CrWCW", "J0RqUeAMpG", "CSQ8Vju7qbE", "_0kY_GlMofa", "OoCAViMMr9d", "nips_2022_QzFJmwwBMd", "nips_2022_QzFJmwwBMd", "nips_2022_QzFJmwwBMd" ]
nips_2022_IvnoGKQuXi
Class-Dependent Label-Noise Learning with Cycle-Consistency Regularization
In label-noise learning, estimating the transition matrix plays an important role in building a statistically consistent classifier. The current state-of-the-art consistent estimator for the transition matrix has been developed under the newly proposed sufficiently scattered assumption, through incorporating the minimum volume constraint of the transition matrix T into label-noise learning. To compute the volume of T, it heavily relies on the estimated noisy class posterior. However, the estimation error of the noisy class posterior can often be large, as deep learning methods tend to easily overfit the noisy labels. Then, directly minimizing the volume of such an obtained T could lead the transition matrix to be poorly estimated. Therefore, how to reduce the side effects of the inaccurate noisy class posterior has become the bottleneck of such methods. In this paper, we propose to estimate the transition matrix under a forward-backward cycle-consistency regularization, which greatly reduces the dependency of estimating the transition matrix T on the noisy class posterior. We show that the cycle-consistency regularization helps to minimize the volume of the transition matrix T indirectly without exploiting the estimated noisy class posterior, which could further encourage the estimated transition matrix T to converge to its optimal solution. Extensive experimental results consistently justify the effectiveness of the proposed method in reducing the estimation error of the transition matrix and greatly boosting the classification performance.
Accept
This work addresses the problem of estimating the transition matrix by using forward-backward cycle-consistency, with class-dependent noisy labels. There is merit in this work, as the proposed method might encourage the estimated transition matrix to converge to its optimal solution without explicitly estimating the noisy class posterior probability. Therefore, it could help to build better statistically consistent classifiers. It is shown theoretically that the proposed method is superior to the compared methods, and its effectiveness is demonstrated on several different datasets. There was a lively discussion between the reviewers and the authors. Although some open questions remain about how the hyperparameter values are chosen, I think this paper should be accepted.
train
[ "q-jl8H2tSdq", "5PfNLmDU34", "EHO_NmhT1sw", "uQWd1q8gVR", "KlZL5zUjWWb", "GuAwOXlqVcu", "xUf9OzGCPZ1", "OPQyVvltZ9T", "V6U8iKy2Bj2", "TvMG3-bAo1", "MIOejKv3wp", "cjdR3bgSbpN", "2fxE9Lc2-JJ", "_Qyy2D1kmnW", "fu_ImERHzeN", "mBOiYivWveG", "c4jY7sh1pan", "yrYAgR20Wj5", "PuN27q2z9E" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The response well addresses my concern and I would keep my score.", " Dear reviewer 3zVp,\n\nIt seems we have addressed all your major concerns. Can you kindly reconsider the recommendation? Thanks very much.\n\nBest", " Thank you very much for your quick responses. Your comments have greatly helped improve t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 5 ]
[ "cjdR3bgSbpN", "EHO_NmhT1sw", "KlZL5zUjWWb", "2fxE9Lc2-JJ", "GuAwOXlqVcu", "xUf9OzGCPZ1", "fu_ImERHzeN", "mBOiYivWveG", "nips_2022_IvnoGKQuXi", "mBOiYivWveG", "mBOiYivWveG", "yrYAgR20Wj5", "PuN27q2z9E", "c4jY7sh1pan", "mBOiYivWveG", "nips_2022_IvnoGKQuXi", "nips_2022_IvnoGKQuXi", "...
nips_2022_V3kqJWsKRu4
InsPro: Propagating Instance Query and Proposal for Online Video Instance Segmentation
Video instance segmentation (VIS) aims at segmenting and tracking objects in videos. Prior methods typically generate frame-level or clip-level object instances first and then associate them by either additional tracking heads or complex instance matching algorithms. This explicit instance association approach increases system complexity and fails to fully exploit temporal cues in videos. In this paper, we design a simple, fast and yet effective query-based framework for online VIS. Relying on an instance query and proposal propagation mechanism with several specially developed components, this framework can perform accurate instance association implicitly. Specifically, we generate frame-level object instances based on a set of instance query-proposal pairs propagated from previous frames. This instance query-proposal pair is learned to bind with one specific object across frames through carefully developed strategies. When using such a pair to predict an object instance on the current frame, not only is the generated instance automatically associated with its precursors on previous frames, but the model also gets a good prior for predicting the same object. In this way, we naturally achieve implicit instance association in parallel with segmentation and elegantly take advantage of temporal clues in videos. To show the effectiveness of our method, InsPro, we evaluate it on two popular VIS benchmarks, i.e., YouTube-VIS 2019 and YouTube-VIS 2021. Without bells and whistles, our InsPro with a ResNet-50 backbone achieves 43.2 AP and 37.6 AP on these two benchmarks respectively, outperforming all other online VIS methods. Code is available at https://github.com/hf1995/InsPro.
Accept
The paper discusses a method for online video instance segmentation. Reviewers appreciated the proposed method but raised concerns regarding differences between the reported results and those in other papers, similarity of the method to prior work, and limited novelty. The rebuttal addressed most of the concerns, prompting reviewers to increase their ratings to an accept recommendation. The AC doesn't see reasons to overturn a unanimous reviewer recommendation.
val
[ "xaXCO9ap49N", "ZoyjWbtIIJB", "1sghoR7OYE", "Dn6jfb4sIeW", "_RYCw6tA4DD", "GrzJW7Opha", "4iLy3clB0Q3", "6HqW7pJpR-", "6HxE3UIM0gn", "w4T1rKUlF3n", "YIPUF-AjFZ", "fN1LZg_q6gK", "KKJPQeY5aE", "en4ag8a2oYD", "_dfeZn1T65", "e-w-M5WT3Hf", "B1zWW1R5IvR", "3L6ZJA3vGKb", "NFJ7wLIF63t", ...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ " Wow! It is very glad to hear that we address most of your concerns. Thank you very much for this kind rating upgrade. We are really delighted to hear this good news. Have a nice one!", " Thanks for your response and I'm feeling sorry for the delayed reply. The revised version of InsPro covers most of my concern...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 5 ]
[ "ZoyjWbtIIJB", "4iLy3clB0Q3", "Dn6jfb4sIeW", "6HqW7pJpR-", "KKJPQeY5aE", "6HxE3UIM0gn", "_dfeZn1T65", "3L6ZJA3vGKb", "YIPUF-AjFZ", "nips_2022_V3kqJWsKRu4", "fN1LZg_q6gK", "en4ag8a2oYD", "iBpk8uQqq8E", "bkTkc0HIqOk", "i3g4aiT5etZ", "B1zWW1R5IvR", "nips_2022_V3kqJWsKRu4", "NFJ7wLIF63...
nips_2022_aGFQDrNb-KO
Multi-dataset Training of Transformers for Robust Action Recognition
We study the task of robust feature representations, aiming to generalize well on multiple datasets for action recognition. We build our method on Transformers for their efficacy. Although we have witnessed great progress in video action recognition over the past decade, it remains challenging yet valuable to train a single model that can perform well across multiple datasets. Here, we propose a novel multi-dataset training paradigm, MultiTrain, with the design of two new loss terms, namely informative loss and projection loss, aiming to learn robust representations for action recognition. In particular, the informative loss maximizes the expressiveness of the feature embedding while the projection loss for each dataset mines the intrinsic relations between classes across datasets. We verify the effectiveness of our method on five challenging datasets: Kinetics-400, Kinetics-700, Moments-in-Time, ActivityNet and Something-Something-v2. Extensive experimental results show that our method can consistently improve state-of-the-art performance. Code and models are released.
Accept
The paper proposes a co-training method for video representation learning, by training video transformers on multiple video datasets. The paper proposes two novel loss terms: informative loss and projection loss. The informative loss encourages the variance of each dimension in the embedding to be large. The projection loss maps predictions from other datasets to the current dataset, to learn the label relation across datasets by using ground-truth action labels to compute a standard cross-entropy loss. Based on the feedback provided by the reviewers, we recommend this paper for publication at NeurIPS 2022. The reviewers had some concerns about the paper. Reviewer YQNQ had concerns that the design of the projection loss and the informative loss did not consider the temporal dynamics, and that it does not compare with multi-domain methods. Reviewer iVPd recommended considering tasks like detection, segmentation, etc., and discussing these methods in the related work for broader scope. Reviewer U8Wa mentioned that the experimental findings in this paper are quite different from findings in CoVER, but no explanation is provided. We thank the authors for addressing the reviewers' comments during the author feedback period. The authors seem to have addressed some of the concerns/feedback from the reviewers with detailed discussions -- it would be good to include these discussions, as much as possible, in the updated paper or supplemental materials.
train
[ "u8znSkxkW3M", "K5XPJiaOGGQ", "iOuevRk4Ecq", "GfuPRDEfzCr", "xSBxokVttq6", "fBFi2XNKUa_", "jOkuFm850-z", "OFMh8_NpsF_", "a7SyyN7UZKN" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\n Thank you for the clarification. My concerns have been addressed and I don't have further questions.", " Dear Reviewer iVPd, \nThank you very much again for the time and effort put into reviewing our paper. We believe that we have addressed all your concerns in our response. We have also follo...
[ -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "K5XPJiaOGGQ", "iOuevRk4Ecq", "xSBxokVttq6", "a7SyyN7UZKN", "OFMh8_NpsF_", "jOkuFm850-z", "nips_2022_aGFQDrNb-KO", "nips_2022_aGFQDrNb-KO", "nips_2022_aGFQDrNb-KO" ]
nips_2022_a8qX5RG36jd
LasUIE: Unifying Information Extraction with Latent Adaptive Structure-aware Generative Language Model
Universally modeling all typical information extraction tasks (UIE) with one generative language model (GLM) has revealed great potential in the latest study, where various IE predictions are unified into a linearized hierarchical expression under a GLM. Syntactic structure information, a type of effective feature which has been extensively utilized in the IE community, should also be beneficial to UIE. In this work, we propose a novel structure-aware GLM, fully unleashing the power of syntactic knowledge for UIE. A heterogeneous structure inductor is explored to unsupervisedly induce rich heterogeneous structural representations by post-training an existing GLM. In particular, a structural broadcaster is devised to compact various latent trees into explicit high-order forests, helping to guide better generation during decoding. We finally introduce a task-oriented structure fine-tuning mechanism, further adjusting the learned structures to best coincide with the end task's needs. Over 12 IE benchmarks across 7 tasks, our system shows significant improvements over the baseline UIE system. Further in-depth analyses show that our GLM learns rich task-adaptive structural bias that greatly resolves the UIE crux: the long-range dependence issue and boundary identification.
Accept
This paper proposes a latent adaptive structure-aware generative language model (GLM) to leverage syntactic knowledge for information extraction tasks. The proposed model incorporates a latent structure induction module that automatically induces tree-like structures akin to dependency and constituency trees. Experiments in 12 IE benchmarks across 7 tasks showed significant improvements over the baseline. Overall, all reviewers feel positively about this paper, even though they mention some aspects which can be improved in the final version. The conversion of information extraction tasks into a problem solvable by a GLM problem with three different prediction modules is original and valuable, and the experiments are well designed and generally convincing (although additional experiments in more recent and larger scale datasets would make the paper stronger). The author response addressed well all the concerns of the reviewers, including the addition of several missing references. I urge the authors to incorporate these in their paper and to report the runtime of their method, to better understand the tradeoff between performance and speed, as well as examples of induced tree structures produced by their method, as suggested by one of the reviewers.
val
[ "uVMGhKHU04q", "eaTT5AoHY81", "HkMUAIRIkgH", "ogNoDsgUuvN", "K96IrfIsejW", "7ZnHCqstb3y", "l7TdtIvr2uU", "8rrYTAX4nV4", "BX_yt1PFy60", "mCjvPcStWp3" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your acknowledgment. We representatively present the parsing results of the constituency syntax. Following are the experimental results of the grammar induction w.r.t. each tag, as you indicated. The results are the recall rates of the labels that were identified by the model (label recall). \...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "HkMUAIRIkgH", "K96IrfIsejW", "7ZnHCqstb3y", "mCjvPcStWp3", "BX_yt1PFy60", "8rrYTAX4nV4", "nips_2022_a8qX5RG36jd", "nips_2022_a8qX5RG36jd", "nips_2022_a8qX5RG36jd", "nips_2022_a8qX5RG36jd" ]
nips_2022_Ojakr9ofova
Scalable Infomin Learning
The task of infomin learning aims to learn a representation with high utility while being uninformative about a specified target, with the latter achieved by minimising the mutual information between the representation and the target. It has broad applications, ranging from training fair prediction models against protected attributes, to unsupervised learning with disentangled representations. Recent works on infomin learning mainly use adversarial training, which involves training a neural network to estimate mutual information or its proxy and thus is slow and difficult to optimise. Drawing on recent advances in slicing techniques, we propose a new infomin learning approach, which uses a novel proxy metric for mutual information. We further derive an accurate and analytically computable approximation to this proxy metric, thereby removing the need to construct neural network-based mutual information estimators. Compared to baselines, experiments on algorithmic fairness, disentangled representation learning and domain adaptation verify that our method can more effectively remove unwanted information with a limited time budget.
Accept
**Summary**: This paper develops an infomin-based representation method based on the recently-proposed sliced mutual information estimator. Unlike other methods, the proposed approach does not rely on an adversarial objective and provides a tractable proxy metric that eliminates the need for neural estimators of the mutual information. Experiments on independence tests, disentangled representation learning and algorithmic fairness aim to illustrate both improved utility and higher scalability. **Strengths**: Reviewers were overall positively predisposed towards this paper. They noted that this is a well-written paper, with sound and well-motivated theoretical analysis [d477, hqSG]. The proposed method, which derives from canonical correlation analysis, is novel and computationally efficient [d477]. Experiments are sound and satisfactory, with benchmarks that include a good range of datasets and baselines. Reviewer *dNaL* notes good results in terms of both expectation and variance, in addition to improved computational efficiency relative to adversarial methods. **Weaknesses**: Reviewers also noted limitations. Reviewer *d477* noted a missing reference to CLUB (Chen et al., ICML, 2020), which would be a strong neural baseline. Several reviewers found that scalability claims are not strongly supported and that larger-scale experiments might strengthen the paper in this context [d477, hqSG]. More generally, reviewers were concerned that the submission lacks certain important implementation details [d477, dNaL]. In terms of the experiments, reviewers had a number of suggestions, including a comparison between sliced MI and the analytically calculated MI for some toy example [d477], a comparison to simpler adversarial cross-entropy-based methods for the fairness experiments (e.g. DANN/LAFTR) [dNaL], a comparison to LieGroupVAE [hqSG], and reporting of disentanglement metrics such as MIG and the FactorVAE score [hqSG].
**Reviewer Author Discussion**: While the authors were not able to carry out larger-scale experiments, they provided an ablation study to further support claims of scalability. They also added a discussion of CLUB, clarified that DANN/LAFTR are similar to the Neural TC baselines, and clarified that disentanglement scores cannot be computed for the vector-valued quantities under consideration in this paper. Reviewer *d477* raised their score 5->6, reviewer *hqSG* raised their score 6->7. **Reviewer AC Discussion**: Reviewers unfortunately did not respond to the AC during the discussion phase. The AC takes this as a signal that reviewers do not object to acceptance, but also do not champion it. **Overall Recommendation**: This submission is just about above the bar for an accept, though the lack of a clear champion among reviewers somewhat limits confidence.
train
[ "cn_PKrwR5xw", "hvoVdfYiwbY", "GFWg_R7mW0Z", "2bwJkktn7nN", "dPFdeme7-AF", "Isg1gxFA7H8", "l4HbSze2QC-", "2ibZBv8ZAGc", "KBp8iWkiAAJ", "0yCXVwNjE9Y", "3dWgIzjhqtE", "MzPMjm83zUp" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors' response. \n\nI am glad to see that the presentation of the paper significantly improves, and the authors add a pseudo algo comparison with adversarial learning based approaches as well as the CLUB baseline. I am mostly satisfied with the answers. Thus I update the score from 5 to 6.\n", ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "dPFdeme7-AF", "GFWg_R7mW0Z", "2ibZBv8ZAGc", "l4HbSze2QC-", "KBp8iWkiAAJ", "nips_2022_Ojakr9ofova", "3dWgIzjhqtE", "MzPMjm83zUp", "0yCXVwNjE9Y", "nips_2022_Ojakr9ofova", "nips_2022_Ojakr9ofova", "nips_2022_Ojakr9ofova" ]
nips_2022_OzbkiUo24g
Linear tree shap
Decision trees are well-known for their ease of interpretability. To improve accuracy, we need to grow deep trees or ensembles of trees. These are hard to interpret, offsetting their original benefits. Shapley values have recently become a popular way to explain the predictions of tree-based machine learning models. They provide a linear weighting of features independent of the tree structure. The rise in popularity is mainly due to TreeShap, which solves a general exponential complexity problem in polynomial time. Following extensive adoption in industry, more efficient algorithms are required. This paper presents a more efficient and straightforward algorithm: Linear TreeShap. Like TreeShap, Linear TreeShap is exact and requires the same amount of memory.
Accept
Shapley values are a common tool for evaluating feature importance. In this work the authors present a way to accelerate the computation of these values when the model used is a tree or an ensemble of trees. The algorithm presented has linear computational complexity with respect to the maximal depth of the tree $D$, while previous algorithms had a computational complexity proportional to $D^2$ or even worse. The results are theoretically well grounded, and a small empirical study shows the merits do translate from theory to practice. There was a consensus among reviewers that this work presents a strong scientific contribution that is relevant to NeurIPS. Some comments were made about the presentation of this work, but all agreed that the merits outweigh these limitations, and therefore we recommend accepting this work to NeurIPS. Nevertheless, we encourage the authors to take a close look at the comments made by the reviewers and try to improve the presentation for the camera-ready version of this work. We think that improving the presentation will improve the potential impact of this work.
train
[ "HY6bStVUcF", "kRR8JpTRaQ", "CoFjWgrjo7S3", "FiQ3H_HFDBY", "2QhrYA20dRP", "yG61pcTtXhu", "AyIf5rs6eK", "oO3zrH5x17", "hbVQ7FQcVaH", "QHW_bhgkEQ9" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for their response. A couple additional thoughts:\n\n**About equation 11.** I get that the two steps are replacing $M$ with $F(R)$ and partitioning subsets based on their size. The part that would be nice to reproduce is how you derive the new weights for each $S \\subseteq F(R) \\setminus i...
[ -1, -1, -1, -1, -1, 7, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 4, 3, 3, 4, 3 ]
[ "FiQ3H_HFDBY", "AyIf5rs6eK", "yG61pcTtXhu", "hbVQ7FQcVaH", "oO3zrH5x17", "nips_2022_OzbkiUo24g", "nips_2022_OzbkiUo24g", "nips_2022_OzbkiUo24g", "nips_2022_OzbkiUo24g", "nips_2022_OzbkiUo24g" ]
nips_2022_L7P3IvsoUXY
CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks
Previous works have validated that text generation APIs can be stolen through imitation attacks, causing IP violations. In order to protect the IP of text generation APIs, recent work has introduced a watermarking algorithm and utilized the null-hypothesis test as a post-hoc ownership verification on the imitation models. However, we find that it is possible to detect those watermarks via sufficient statistics of the frequencies of candidate watermarking words. To address this drawback, in this paper, we propose a novel Conditional wATERmarking framework (CATER) for protecting the IP of text generation APIs. An optimization method is proposed to decide the watermarking rules that can minimize the distortion of overall word distributions while maximizing the change of conditional word selections. Theoretically, we prove that it is infeasible for even the savviest attacker (they know how CATER works) to reveal the used watermarks from a large pool of potential word pairs based on statistical inspection. Empirically, we observe that high-order conditions lead to an exponential growth of suspicious (unused) watermarks, making our crafted watermarks more stealthy. In addition, CATER can effectively identify IP infringement under architectural mismatch and cross-domain imitation attacks, with negligible impairments on the generation quality of victim APIs. We envision our work as a milestone for stealthily protecting the IP of text generation APIs.
Accept
The authors propose a watermarking technique (CATER) to claim ownership of text generation APIs in the presence of imitation attacks. Their main idea is based on the observation that in the state of the art, by analyzing the word frequency in API responses as well as publicly available data, an adversary's odds of learning the watermark increase. To remedy this, CATER conditionally watermarks the response to prevent the adversary from deciphering the watermarking keys. Reviewers found the topic of the paper timely, its writing clear, and the overall contribution sound and of interest to the community.
train
[ "2oFl72mOWjr", "szkVDD0Xa7E", "v7V0fWYCio8", "F4CqGaYKtE", "F_3t2JOyHUN", "WFsXxdRMQ8X", "HRantg3xX_J", "NIOM37eSuRr", "4s38ngKXvD9", "1aiX3vg3uqK", "Nz4g6yupWxg", "oCljCNcEf_a", "Z8_LU853CFs", "vP-tUydhfu", "9qTbOLGogi8", "2gy2pY_7QCr" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to appreciate the reviewer’s encouraging comments and positive feedback, which has helped us polish our submission.", " We would like to appreciate the reviewer’s invaluable feedback, which has helped us improve our submission.", " Review EoSA here. Thanks a lot for clarifying my questions and c...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 3 ]
[ "F4CqGaYKtE", "v7V0fWYCio8", "Z8_LU853CFs", "1aiX3vg3uqK", "oCljCNcEf_a", "NIOM37eSuRr", "Nz4g6yupWxg", "2gy2pY_7QCr", "9qTbOLGogi8", "vP-tUydhfu", "oCljCNcEf_a", "Z8_LU853CFs", "nips_2022_L7P3IvsoUXY", "nips_2022_L7P3IvsoUXY", "nips_2022_L7P3IvsoUXY", "nips_2022_L7P3IvsoUXY" ]
nips_2022_r__gfIasEdN
GAPX: Generalized Autoregressive Paraphrase-Identification X
Paraphrase identification is a fundamental task in Natural Language Processing. While much progress has been made in the field, the performance of many state-of-the-art models often suffers from distribution shift during inference time. We verify that a major source of this performance drop comes from biases introduced by negative examples. To overcome these biases, we propose in this paper to train two separate models, one that only utilizes the positive pairs and the other the negative pairs. This gives us the option of deciding how much to utilize the negative model, for which we introduce a perplexity-based out-of-distribution metric that we show can effectively and automatically determine how much weight it should be given during inference. We support our findings with strong empirical results.
Accept
This paper tackles a discriminative problem by a generative model, where the generation probabilities can be twisted to adjust negative samples’ weights. Reviewers generally found the paper interesting. However, one concern is that the paper only considers the paraphrase-identification problem, which sounds narrow. It is expected that the approach may be generalized to different tasks.
train
[ "7iMFE7dX_0", "2a4GzyL9RnW", "965R0S5YURH", "xIRccp5hEBE", "2EUGPtA2G8v", "3_Mtp1MB9SY", "eMulUvWjlDR", "PxIQ7_LwsQA", "RfvPNef3xoo", "Yr4fWThj8z", "cp_PStZDSoz", "I4fa0I0t62F", "mP82RPxlAYw" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors' response.\n\nThe clarification about OODP and GAPX is helpful for the readers to navigate the results. I now realize that you've talked about this during Section 4.5, but again within that paragraph you are jumping back and forth between several different points and it's a bit hard to foll...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "PxIQ7_LwsQA", "xIRccp5hEBE", "2EUGPtA2G8v", "3_Mtp1MB9SY", "RfvPNef3xoo", "I4fa0I0t62F", "mP82RPxlAYw", "cp_PStZDSoz", "Yr4fWThj8z", "nips_2022_r__gfIasEdN", "nips_2022_r__gfIasEdN", "nips_2022_r__gfIasEdN", "nips_2022_r__gfIasEdN" ]
nips_2022_-me36V0os8P
Explaining Preferences with Shapley Values
While preference modelling is becoming one of the pillars of machine learning, the problem of preference explanation remains challenging and underexplored. In this paper, we propose \textsc{Pref-SHAP}, a Shapley value-based model explanation framework for pairwise comparison data. We derive the appropriate value functions for preference models and further extend the framework to model and explain \emph{context specific} information, such as the surface type in a tennis game. To demonstrate the utility of \textsc{Pref-SHAP}, we apply our method to a variety of synthetic and real-world datasets and show that richer and more insightful explanations can be obtained over the baseline.
Accept
Overall, the opinion about this paper is quite positive, especially because of its novelty: It establishes the first connection between preference learning and explainability/Shapley. In terms of presentation and technical soundness, the paper seems to be convincing, too. A few critical points (e.g., regarding the evaluation) have been raised in the reviews, but they could essentially be resolved in the discussion. Another critical issue that came up in the final discussion is the following one: The authors learn a binary preference predicate g(X,Y) predicting the degree of preference of X over Y, though without any constraints. In particular, such a model may induce violations of transitivity in the sense that X>Y and Y>Z and Z>X. Such inconsistencies are debatable from a (normative) preference modeling point of view, although it's true that they can be observed in practice. In any case, they appear to be important from an EXPLAINABILITY point of view, as they might be confusing to the user. This point isn't addressed in the paper.
train
[ "hDEw369FlfG", "yk8XHcrp5nx", "SkVXgd4tMY", "dreOIp1tAEy", "g0VpxAlJgZN", "OaHeVajcbOEC", "4oMIseHy4J", "AQXcfpcezLDt", "aPztgJE0FxC", "OQra7A20cQV", "FoTx82LVwh-", "nz6jH_PYjU", "aj96iePqNlt", "459MSw9TPpS" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your question!\n\nThe specific comparison done in appendix B is precisely meant to illustrate the importance of redefining the value function in order to make it suited for preferential data, i.e. to remove the features in conjunction with each other as the reviewer suggests -- and to assign Shapley...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "yk8XHcrp5nx", "OaHeVajcbOEC", "dreOIp1tAEy", "aPztgJE0FxC", "4oMIseHy4J", "459MSw9TPpS", "aj96iePqNlt", "nz6jH_PYjU", "FoTx82LVwh-", "nips_2022_-me36V0os8P", "nips_2022_-me36V0os8P", "nips_2022_-me36V0os8P", "nips_2022_-me36V0os8P", "nips_2022_-me36V0os8P" ]
nips_2022_BgMz5LHc07R
C-Mixup: Improving Generalization in Regression
Improving the generalization of deep networks is an important open challenge, particularly in domains without plentiful data. The mixup algorithm improves generalization by linearly interpolating a pair of examples and their corresponding labels. These interpolated examples augment the original training set. Mixup has shown promising results in various classification tasks, but systematic analysis of mixup in regression remains underexplored. Using mixup directly on regression labels can result in arbitrarily incorrect labels. In this paper, we propose a simple yet powerful algorithm, C-Mixup, to improve generalization on regression tasks. In contrast with vanilla mixup, which picks training examples for mixing with uniform probability, C-Mixup adjusts the sampling probability based on the similarity of the labels. Our theoretical analysis confirms that C-Mixup with label similarity obtains a smaller mean square error in supervised regression and meta-regression than vanilla mixup and using feature similarity. Another benefit of C-Mixup is that it can improve out-of-distribution robustness, where the test distribution is different from the training distribution. By selectively interpolating examples with similar labels, it mitigates the effects of domain-associated information and yields domain-invariant representations. We evaluate C-Mixup on eleven datasets, ranging from tabular to video data. Compared to the best prior approach, C-Mixup achieves 6.56%, 4.76%, 5.82% improvements in in-distribution generalization, task generalization, and out-of-distribution robustness, respectively. Code is released at https://github.com/huaxiuyao/C-Mixup.
Accept
This is an interesting and technically solid paper. The reviews are very consistent as well.
train
[ "ouRjGvMuAW", "SNqVKhQsEzC", "50SxbIVYjWO", "9YZGZLb9R9S", "W0ftybwZaWHR", "yB5PQCEWNZ9", "QKossfDUXj", "BxrqyNi1Sg6", "-HOglvVeGWV", "flhqX516Dcv", "5fMQvwqh8oB", "nuoFcIFFhfa", "ydhRzfzH47U", "FdhMe_kM15Q", "b7QcuAwTILhJ", "mz-7NozIMjF", "FU2cMGbgxjg", "BZfmljPbEvc", "aCGl7ONPf...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ " Hi Reviewer 13Sq,\n\nThanks for pointing out this issue. We are sorry about the confusion. We indeed compared with AutoMix [2] (ECCV'2022) in our additional experiments. We made a mistake when adding the citation and have fixed this issue in the updated version. Many thanks!", " Thanks for your quick response. ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "SNqVKhQsEzC", "50SxbIVYjWO", "9YZGZLb9R9S", "W0ftybwZaWHR", "QKossfDUXj", "BxrqyNi1Sg6", "FU2cMGbgxjg", "FdhMe_kM15Q", "FU2cMGbgxjg", "FdhMe_kM15Q", "nuoFcIFFhfa", "BZfmljPbEvc", "nips_2022_BgMz5LHc07R", "b7QcuAwTILhJ", "mz-7NozIMjF", "TX27y5JcMb2", "fszTYQufrlX", "aCGl7ONPfG", ...
nips_2022_dUYLikScE-
Infinite-Fidelity Coregionalization for Physical Simulation
Multi-fidelity modeling and learning is important in physical simulation related applications. It can leverage both low-fidelity and high-fidelity examples for training, reducing the cost of data generation while still achieving good performance. While existing approaches only model finite, discrete fidelities, in practice, the feasible fidelity choice is often infinite, which can correspond to a continuous mesh spacing or finite element length. In this paper, we propose Infinite Fidelity Coregionalization (IFC). Given the data, our method can extract and exploit rich information within infinite, continuous fidelities to bolster the prediction accuracy. Our model can interpolate and/or extrapolate the predictions to novel fidelities that are not covered by the training data. Specifically, we introduce a low-dimensional latent output as a continuous function of the fidelity and input, and multiply it by a basis matrix to predict high-dimensional solution outputs. We model the latent output as a neural Ordinary Differential Equation (ODE) to capture the complex relationships within and integrate information throughout the continuous fidelities. We then use Gaussian processes or another ODE to estimate the fidelity-varying bases. For efficient inference, we reorganize the bases as a tensor, and use a tensor-Gaussian variational posterior approximation to develop a scalable inference algorithm for massive outputs. We show the advantage of our method in several benchmark tasks in computational physics.
Accept
The paper tackles the multi-fidelity simulation problem by modeling the grid variation with NODE, coupled with a GP. Experiments on multiple physical simulators show better performance compared to baselines. Please also report computational efficiency and sample complexity in the final version.
train
[ "Sv3hSba0yMM", "_pAdCb7qtfhE", "h7tigo9ElCrf", "vguisfp_yB", "gpmjtArrWw", "857-h_swPh", "3oLfFVWj9ja", "xhi45D5WSH", "C4vXmUGM1Ex", "Ow8frKfLkyK" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for answering my questions. I've raised my score based on the response.\nGood luck", " C6: Does the proposed model support varying input and output dimensions at different fidelity levels?\n\nR6: Great question. Since the input to our model is the identify information of the problem, such as PDE paramete...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "_pAdCb7qtfhE", "h7tigo9ElCrf", "Ow8frKfLkyK", "gpmjtArrWw", "857-h_swPh", "C4vXmUGM1Ex", "xhi45D5WSH", "nips_2022_dUYLikScE-", "nips_2022_dUYLikScE-", "nips_2022_dUYLikScE-" ]
nips_2022_fzvDZ0mraPP
Giga-scale Kernel Matrix-Vector Multiplication on GPU
Kernel matrix-vector multiplication (KMVM) is a foundational operation in machine learning and scientific computing. However, as KMVM tends to scale quadratically in both memory and time, applications are often limited by these computational constraints. In this paper, we propose a novel approximation procedure coined \textit{Faster-Fast and Free Memory Method} ($\text{F}^3$M) to address these scaling issues of KMVM for tall~($10^8\sim 10^9$) and skinny~($D\leq7$) data. Extensive experiments demonstrate that $\text{F}^3$M has empirical \emph{linear time and memory} complexity with a relative error of order $10^{-3}$ and can compute a full KMVM for a billion points \emph{in under a minute} on a high-end GPU, leading to a significant speed-up in comparison to existing CPU methods. We demonstrate the utility of our procedure by applying it as a drop-in for the state-of-the-art GPU-based linear solver FALKON, \emph{improving speed 1.5-5.5 times} at the cost of $<1\%$ drop in accuracy. We further demonstrate competitive results on \emph{Gaussian Process regression} coupled with significant speedups on a variety of real-world datasets.
Accept
The authors propose a new approximation procedure for kernel matrix-vector multiplication targeting tall and skinny kernel matrices. The proposed method achieves significant speedups over the state-of-the-art GPU-based linear solver FALKON while sacrificing only small drops in accuracy due to approximation. The paper discusses a specific use case (low dimensional data) but it is very clear about the scope. The reviewers agree that the problem still has high significance, is well motivated and the reported performance gains are convincing. The experiments also provide interesting insights into the inner workings of the method and the trade-offs between accuracy and efficiency. For a potential camera-ready version the authors should carefully work in the reviewers' comments on the presentation to improve the accessibility of their work for a general NeurIPS audience. Also, additional details about the low-level GPU optimizations would be good to add in section 4.1, and some comments on how to extend the method to other kernel functions would strengthen the paper.
train
[ "K-nQyc8hMc", "q9Z3krHetpP", "epvIapVG1MTB", "zLJGuTysyP", "5aJa4Gbs9FpY", "2fmBTU988lV", "ztBDNqKmeE3", "5gIBTWBF9Ks", "cdPF9frpE5P", "KEWdZIQ51f" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'm grateful to the authors for their response to my questions. I will keep my score as accept.", " Thank you for your time and effort in reviewing the paper! We respond to your comments and questions below:\n\n*Q1*: I would like to to see some ablation studies and experiments on F^{2.5}M. \n*A1*: We have pro...
[ -1, -1, -1, -1, -1, -1, 7, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 2, 2 ]
[ "5aJa4Gbs9FpY", "KEWdZIQ51f", "cdPF9frpE5P", "5gIBTWBF9Ks", "ztBDNqKmeE3", "nips_2022_fzvDZ0mraPP", "nips_2022_fzvDZ0mraPP", "nips_2022_fzvDZ0mraPP", "nips_2022_fzvDZ0mraPP", "nips_2022_fzvDZ0mraPP" ]
nips_2022_nJt27NQffr
Self-Supervised Learning via Maximum Entropy Coding
A mainstream type of current self-supervised learning methods pursues a general-purpose representation that can be well transferred to downstream tasks, typically by optimizing on a given pretext task such as instance discrimination. In this work, we argue that existing pretext tasks inevitably introduce biases into the learned representation, which in turn leads to biased transfer performance on various downstream tasks. To cope with this issue, we propose Maximum Entropy Coding (MEC), a more principled objective that explicitly optimizes on the structure of the representation, so that the learned representation is less biased and thus generalizes better to unseen downstream tasks. Inspired by the principle of maximum entropy in information theory, we hypothesize that a generalizable representation should be the one that admits the maximum entropy among all plausible representations. To make the objective end-to-end trainable, we propose to leverage the minimal coding length in lossy data coding as a computationally tractable surrogate for the entropy, and further derive a scalable reformulation of the objective that allows fast computation. Extensive experiments demonstrate that MEC learns a more generalizable representation than previous methods based on specific pretext tasks. It achieves state-of-the-art performance consistently on various downstream tasks, including not only ImageNet linear probe, but also semi-supervised classification, object detection, instance segmentation, and object tracking. Interestingly, we show that existing batch-wise and feature-wise self-supervised objectives could be seen equivalent to low-order approximations of MEC. Code and pre-trained models are available at https://github.com/xinliu20/MEC.
Accept
The paper in general received three positive reviews and ratings. The three reviewers all recognize the theoretical soundness of the paper, which is also clearly presented with informative and strong experimental results. There are a few places that leave one reviewer less comfortable about the exact effectiveness of the proposed theory, but overall the experimental results are comprehensive and largely support the claims made by the paper. The authors may further clarify these points based on the comments.
train
[ "1D5QseuDfao", "jhNmg6p4W44", "_CRwj_41D_3", "LOZdyTPyjUyY", "7Rshwe65owV", "JseAaEPPCRG", "kBn4idsJKm", "HNkE8jJcGhP", "TodRS3dY18", "W95okT9-Gr", "b6QZR_vTPX", "WYBS_UVgqaJ" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank the reviewer for providing the additional feedback. We address your concerns below.\n\n>**\"Actually, in response point 1, Barlow Twins are supposed to reach 73.5\\%. So here, the extra two orders approximately indeed only provide a 0.1\\% improvement.\"**\n\n**First, it should be clarified tha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "_CRwj_41D_3", "LOZdyTPyjUyY", "7Rshwe65owV", "W95okT9-Gr", "JseAaEPPCRG", "b6QZR_vTPX", "WYBS_UVgqaJ", "TodRS3dY18", "W95okT9-Gr", "nips_2022_nJt27NQffr", "nips_2022_nJt27NQffr", "nips_2022_nJt27NQffr" ]
nips_2022_KglFYlTiASW
Neural Transmitted Radiance Fields
Neural radiance fields (NeRF) have brought tremendous progress to novel view synthesis. Though NeRF enables the rendering of subtle details in a scene by learning from a dense set of images, it also reconstructs the undesired reflections when we capture images through glass. As a commonly observed interference, the reflection would undermine the visibility of the desired transmitted scene behind glass by occluding the transmitted light rays. In this paper, we aim at addressing the problem of rendering novel transmitted views given a set of reflection-corrupted images. By introducing the transmission encoder and recurring edge constraints as guidance, our neural transmitted radiance fields can resist such reflection interference during rendering and reconstruct high-fidelity results even under sparse views. The proposed method achieves superior performance from the experiments on a newly collected dataset compared with state-of-the-art methods.
Accept
This paper proposes a novel neural radiance field rendering method that deals with specular reflection on the object's surface. The authors present a novel method to overcome the limitation of existing NeRF-based methods for scenes behind transparent surfaces with specular reflection. The review results are two A(7) and two BA(5). After carefully checking the rebuttals and discussions, I recommend that the paper be accepted to NeurIPS.
train
[ "cK17dISDzl7", "1-uMB9cROD", "yxqwaJ88gtE", "MJS4EiHsPeo", "GeVmhlOwaz", "EPlijxvg5fo", "oz2LoycuaC_", "hnW7O_Pk0y", "vzSCNbbiLkB", "3hM9QqO5dLS", "Grf62Qo59r0", "KPpd4YA8U_", "wmvYGXr6tLb", "AicpL3uDBb", "6k3_tBmsFWU", "aZk-k_OIXhG", "UxXchnQsb83", "QLkPGH-LYkF", "6IjNF0Eyqf3", ...
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer K1pa, thanks for you kind reply very much. We are glad to have this opportunity to address your concerns. We will continue improving our paper to make it better.\n", " Dear authors,\n\nThank you for uploading an updated version of the manuscript and an HTML file. The HTML file was really helpful i...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "QLkPGH-LYkF", "QLkPGH-LYkF", "6hKE4Voh6mo", "6IjNF0Eyqf3", "QLkPGH-LYkF", "UxXchnQsb83", "nips_2022_KglFYlTiASW", "nips_2022_KglFYlTiASW", "6hKE4Voh6mo", "6hKE4Voh6mo", "6IjNF0Eyqf3", "QLkPGH-LYkF", "QLkPGH-LYkF", "QLkPGH-LYkF", "UxXchnQsb83", "nips_2022_KglFYlTiASW", "nips_2022_Kgl...
nips_2022_wlrYnGZ37Wv
Sequencer: Deep LSTM for Image Classification
In recent computer vision research, the advent of the Vision Transformer (ViT) has rapidly revolutionized various architectural design efforts: ViT achieved state-of-the-art image classification performance using self-attention found in natural language processing, and MLP-Mixer achieved competitive performance using simple multi-layer perceptrons. In contrast, several studies have also suggested that carefully redesigned convolutional neural networks (CNNs) can achieve advanced performance comparable to ViT without resorting to these new ideas. Against this background, there is growing interest in what inductive bias is suitable for computer vision. Here we propose Sequencer, a novel and competitive architecture alternative to ViT that provides a new perspective on these issues. Unlike ViTs, Sequencer models long-range dependencies using LSTMs rather than self-attention layers. We also propose a two-dimensional version of the Sequencer module, where an LSTM is decomposed into vertical and horizontal LSTMs to enhance performance. Despite its simplicity, several experiments demonstrate that Sequencer performs impressively well: Sequencer2D-L, with 54M parameters, realizes 84.6% top-1 accuracy on only ImageNet-1K. Not only that, we show that it has good transferability and robust resolution adaptability on double resolution-band. Our source code is available at https://github.com/okojoalg/sequencer.
Accept
Four reviewers provided detailed feedback on this paper. The authors responded to the reviews and I appreciate the authors' comments and clarifications, specifically that each question/comment is addressed in detail. The authors also uploaded a revised version of the paper. After the two discussion periods, all four reviewers suggest to accept the paper (although the scores do not exceed a "weak accept"). After considering the reviewers' and authors' comments, I believe that the paper should be accepted to NeurIPS. Weaknesses include: * Some concerns about experimental results, e.g. highlighting accuracy vs. number of parameters but not also highlighting limitations when looking throughput (comparing only parameters (or FLOPS) can sometimes be misleading, see also [The efficiency misnomer, ICLR22](https://arxiv.org/abs/2110.12894)). But it's good that throughput numbers are presented in the paper and the paper acknowledges this limitation. Related: concerns about computational cost. * Some concerns regarding relevant related literature (addressed in comments and revision) and novelty of the approach. * Limitation to image classification only in the experiments (partially addressed in comments and revision). * More interpretation of the effect of using LSTMs could be helpful to the reader (partially addressed in comments). Strengths include: * Interesting, conceptually simple approach that revisits LSTMs for images, which could be specifically useful for high resolution images. * Reviewers agree that the paper is well-written. * Experimental results and ablations are strong with respect to the claims made. Minor points (not affecting this decision, but potentially useful to authors when preparing the final revision): * MLP-based methods "cannot cope with flexible input sizes during inference" - I think this is only partially true, even the original MLP-Mixer paper shows how this can be solved e.g. 
in fine-tuning by "modifying the shape of Mixer’s token-mixing MLP blocks" * minor typo I randomly encountered: Table 3, row 3, column "Flowers" 89.5 -> 98.5 * "It is demonstrated that modeling long-range dependencies by self-attention is not necessarily essential in computer vision" - To some degree similar "demonstrations" are visible in CNNs and MLP-Mixers, so this claim seems a bit strong, maybe?
train
[ "JzsvoxPspUu", "PtNi56Ce1S1", "vcytZlC49Z1", "5SOiseeBCR8", "Jmr6X-hmg1k", "uE4MKBEvyBr", "HzVRTA9px-B", "v02LgXSwmKI", "9E2JNrrxZkX", "3Y_jBaO7gsr", "BR7MmAgezxJ", "EsmSLbo41JW", "BCUgzjlVL7", "h_kpfnBLW_X", "c6ypb13dnO4" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response from the authors. My concerns are mostly addressed. Although I am still worried about the throughput issue in standard ImageNet resolution, I lean toward acceptance as the successful trial of replacing self-attention with LSTM in ViT deserves credit.", " Thanks for your positive comments...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "v02LgXSwmKI", "vcytZlC49Z1", "Jmr6X-hmg1k", "3Y_jBaO7gsr", "EsmSLbo41JW", "EsmSLbo41JW", "BCUgzjlVL7", "h_kpfnBLW_X", "h_kpfnBLW_X", "c6ypb13dnO4", "nips_2022_wlrYnGZ37Wv", "nips_2022_wlrYnGZ37Wv", "nips_2022_wlrYnGZ37Wv", "nips_2022_wlrYnGZ37Wv", "nips_2022_wlrYnGZ37Wv" ]
nips_2022_osPA8Bs4MJB
Delving into Sequential Patches for Deepfake Detection
Recent advances in face forgery techniques produce nearly visually untraceable deepfake videos, which could be leveraged with malicious intentions. As a result, researchers have been devoted to deepfake detection. Previous studies have identified the importance of local low-level cues and temporal information in pursuit of generalizing well across deepfake methods; however, they still suffer from a robustness problem against post-processing. In this work, we propose the Local- & Temporal-aware Transformer-based Deepfake Detection (LTTD) framework, which adopts a local-to-global learning protocol with a particular focus on the valuable temporal information within local sequences. Specifically, we propose a Local Sequence Transformer (LST), which models the temporal consistency on sequences of restricted spatial regions, where low-level information is hierarchically enhanced with shallow layers of learned 3D filters. Based on the local temporal embeddings, we then achieve the final classification in a global contrastive way. Extensive experiments on popular datasets validate that our approach effectively spots local forgery cues and achieves state-of-the-art performance.
Accept
All reviewers are positive about this paper. Generally speaking, the proposed method is novel and the paper is easy to follow thanks to the clear writing. Also, the experiments are comprehensive. In the rebuttal, the authors also provide some qualitative results that clearly respond to the concerns of the reviewers. So, I suggest accepting this paper.
train
[ "NNvno0qQ25B", "j3TOg77SWjl", "NVe6S2N1xI", "BDEdZIairjt", "4OW5cyxmLwU", "MTiipQMwz4", "Mtt0wLxbTsn", "V2VKodwIWTM", "0QRUdzP29dP", "p3teEjcTMZ9", "zbbz1ew0FuN", "5Oo04rruNC8-", "HHN9S5nxl1ON", "G9wLvB4cNBW", "_BqXop1Au-y", "jo2urd0juA_", "JUqIE2pcn8Q", "KgHskORqAw8", "ul9Sk2FIv...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " Hi reviewer,\n\nThe discussion period is closing soon. Please take a look at our responses to your pre-rebuttal concerns. 1) Regarding novelty, we clarify the differences between this paper and related arts, where the key dilemma between **robustness** and **generalization** is resolved by the introduced low-leve...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4, 4 ]
[ "JUqIE2pcn8Q", "p3teEjcTMZ9", "zbbz1ew0FuN", "_BqXop1Au-y", "nips_2022_osPA8Bs4MJB", "nips_2022_osPA8Bs4MJB", "zAVQlCO2DrC", "zAVQlCO2DrC", "zAVQlCO2DrC", "ul9Sk2FIvVa", "KgHskORqAw8", "JUqIE2pcn8Q", "JUqIE2pcn8Q", "jo2urd0juA_", "jo2urd0juA_", "nips_2022_osPA8Bs4MJB", "nips_2022_osP...
nips_2022_MbVS6BuJ3ql
Maximum Class Separation as Inductive Bias in One Matrix
Maximizing the separation between classes constitutes a well-known inductive bias in machine learning and a pillar of many traditional algorithms. By default, deep networks are not equipped with this inductive bias and therefore many alternative solutions have been proposed through differential optimization. Current approaches tend to optimize classification and separation jointly: aligning inputs with class vectors and separating class vectors angularly. This paper proposes a simple alternative: encoding maximum separation as an inductive bias in the network by adding one fixed matrix multiplication before computing the softmax activations. The main observation behind our approach is that separation does not require optimization but can be solved in closed-form prior to training and plugged into a network. We outline a recursive approach to obtain the matrix consisting of maximally separable vectors for any number of classes, which can be added with negligible engineering effort and computational overhead. Despite its simple nature, this one matrix multiplication provides real impact. We show that our proposal directly boosts classification, long-tailed recognition, out-of-distribution detection, and open-set recognition, from CIFAR to ImageNet. We find empirically that maximum separation works best as a fixed bias; making the matrix learnable adds nothing to the performance. The closed-form implementation and code to reproduce the experiments are available on github.
Accept
This paper aims at introducing a criterion for class separation. The paper demonstrates high performance by proposing an affine transformation of the canonical embedding of labels, which leads to a maximal separation between those new vectors. Given the simplicity and good numerical results, I recommend accepting this paper; however, the minor revisions suggested by reviewer BhPp need to be addressed in the camera-ready version.
train
[ "mNjl8vJ2iqA", "gJmnMgirm4", "O7-jqKUpfcH", "IH_MvrJaxay", "nXyDOznw3Qh", "BcwjerF_FpR", "82MnAD7OhJb", "V1lT5wXHERc", "VRePyw72Ud9u", "Og87ZAxhqc", "0D1_NLqYv2Z", "yPm3kip21hv", "cZ1jwnDsIJK" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. I have no more questions, and I hope these suggestions can be useful when preparing a revised version.", " We thank the reviewer for their response.\n\nRegarding feature dimensionality, you can indeed set feature embedding to eg. $512$ dimensions with a standard softmax cross-entropy fo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "gJmnMgirm4", "IH_MvrJaxay", "nXyDOznw3Qh", "VRePyw72Ud9u", "82MnAD7OhJb", "0D1_NLqYv2Z", "cZ1jwnDsIJK", "yPm3kip21hv", "Og87ZAxhqc", "nips_2022_MbVS6BuJ3ql", "nips_2022_MbVS6BuJ3ql", "nips_2022_MbVS6BuJ3ql", "nips_2022_MbVS6BuJ3ql" ]
nips_2022_kcQiIrvA_nz
Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection
Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the rapid development of DNNs has largely benefited from high-quality (open-sourced) datasets, based on which researchers and developers can easily evaluate and improve their learning methods. Since data collection is usually time-consuming or even expensive, how to protect their copyrights is of great significance and worth further exploration. In this paper, we revisit dataset ownership verification. We find that existing verification methods introduce new security risks in DNNs trained on the protected dataset, due to the targeted nature of poison-only backdoor watermarks. To alleviate this problem, in this work, we explore the untargeted backdoor watermarking scheme, where the abnormal model behaviors are not deterministic. Specifically, we introduce two dispersibilities and prove their correlation, based on which we design the untargeted backdoor watermark under both poisoned-label and clean-label settings. We also discuss how to use the proposed untargeted backdoor watermark for dataset ownership verification. Experiments on benchmark datasets verify the effectiveness of our methods and their resistance to existing backdoor defenses.
Accept
This paper proposes a method to verify unauthorized use of open-sourced datasets. The idea is to inject verifiable backdoor watermarks. The authors first show that existing backdoor watermarks can be exploited by adversaries for attacks. They then propose novel untargeted backdoor watermarking techniques that are both effective and harmless in poisoned-label (UBW-P) and clean-label (UBW-C) settings. A malicious network trained using the watermarked dataset may predict randomly for watermarked test data and correctly for clean test data, so it is possible to verify unauthorized use based on the difference between the predictions for watermarked and clean test data. The reviewers agree that the proposed untargeted watermarks are useful and the problem being studied is interesting. The authors are suggested to address the remaining concerns of the reviewers, such as whether random classification is better than the previous guided misclassification for verifying malicious users.
train
[ "22JMt-5jpDV", "DYRzi32DHwT", "Mc13g0wBN0J", "YgPwpzn_Hna", "dogSr_xa9wX", "4R-oMjbbsAp", "fywNmFsKS3wq", "fTdLNGcv2R", "crzzszRL1p", "hHfuFYGTqmn5", "N9GwcFBj9Ao", "qPv0iys0-g", "J_Iv2zvxl8qt", "9vicxjS6K46", "i95KYOnJosS", "h6oQ97N-nOb", "UFbJ3K9pyXs", "k4paZC8xhMo", "D-iNlUn2-...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", ...
[ " There are no ethical issues in my opinion. There are no ethical issues in my opinion. There are no ethical issues in my opinion.", " Thank you for your recognition of our discussions and kind explanations. We do respect your decision and are willing to wait for your final score after the Reviewer-Metareviewer d...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "nips_2022_kcQiIrvA_nz", "Mc13g0wBN0J", "YgPwpzn_Hna", "fTdLNGcv2R", "fTdLNGcv2R", "fTdLNGcv2R", "fTdLNGcv2R", "qPv0iys0-g", "hHfuFYGTqmn5", "J_Iv2zvxl8qt", "i95KYOnJosS", "i95KYOnJosS", "9vicxjS6K46", "fR3qiUq7ISY", "D-iNlUn2-Y2", "fR3qiUq7ISY", "k4paZC8xhMo", "V1sGQYEAHvi", "9s...
nips_2022_F7NQzsl334D
ClimbQ: Class Imbalanced Quantization Enabling Robustness on Efficient Inferences
Quantization compresses models to low bits for efficient inference, which has received increasing attention. However, existing approaches have focused on balanced datasets, while imbalanced data is pervasive in the real world. Therefore, in this study, we investigate the realistic problem of quantization on class-imbalanced data. We observe from the analytical results that quantizing imbalanced data tends to obtain a large error due to the differences between separate class distributions, which leads to a significant accuracy loss. To address this issue, we propose a novel quantization framework, Class Imbalanced Quantization (ClimbQ), that focuses on diminishing the inter-class heterogeneity for quantization error reduction. ClimbQ first scales the variance of each class distribution and then projects data through the new distributions to the same space for quantization. To guarantee the homogeneity of class variances after the ClimbQ process, we examine the quantized features and derive that homogeneity is satisfied when the data size for each class is restricted (bounded). Accordingly, we design a Homogeneous Variance Loss (HomoVar Loss) which reweights the data losses of each class based on the bounded data sizes to satisfy the homogeneity of class variances. Extensive experiments on class-imbalanced and benchmark balanced datasets reveal that ClimbQ outperforms the state-of-the-art quantization techniques, especially on highly imbalanced data.
Accept
After rebuttal, the reviewers unanimously agree that the submission should be accepted for publication at NeurIPS.
train
[ "QzOhALEf2wd", "PUvmtZO27gv", "J1L2F6N7W0f", "b-eMVSr-dkY", "y4rSUKj6Xgf", "T2zIWXr6gEO", "v6QhNSmN9nl", "wl0CDEcig3", "AxsK32u0SV1", "zgttOU_QLLN", "gWIFA1tKMh", "IKCeUCjrrdE", "uZKcq8FWI89", "18oY5L2-8gI", "PP3ESY3FlcW", "j-PDzOjdQZ2", "W7jtW_X8zpp" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed answer and the new quantization error results. I have updated the rating accordingly.", " Thanks for your detailed elaboration. I recommend the authors to combine the above content into the paper, since it can strengthen the contributions of your work. I do appreciate the efforts and the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "v6QhNSmN9nl", "J1L2F6N7W0f", "y4rSUKj6Xgf", "T2zIWXr6gEO", "IKCeUCjrrdE", "zgttOU_QLLN", "wl0CDEcig3", "gWIFA1tKMh", "gWIFA1tKMh", "W7jtW_X8zpp", "j-PDzOjdQZ2", "PP3ESY3FlcW", "18oY5L2-8gI", "nips_2022_F7NQzsl334D", "nips_2022_F7NQzsl334D", "nips_2022_F7NQzsl334D", "nips_2022_F7NQzs...
nips_2022_4F0Pd2Wjl0
Error Correction Code Transformer
Error correction code is a major part of the physical communication layer, ensuring the reliable transfer of data over noisy channels. Recently, neural decoders were shown to outperform classical decoding techniques. However, the existing neural approaches present strong overfitting, due to the exponential training complexity, or a restrictive inductive bias, due to reliance on Belief Propagation. Recently, Transformers have become methods of choice in many applications, thanks to their ability to represent complex interactions between elements. In this work, we propose to extend for the first time the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths. We encode each channel's output dimension to a high dimension for better representation of the bits' information to be processed separately. The element-wise processing allows the analysis of channel output reliability, while the algebraic code and the interaction between the bits are inserted into the model via an adapted masked self-attention module. The proposed approach demonstrates the power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins, at a fraction of their time complexity.
Accept
This paper is part of a popular line of research aiming to apply neural network concepts to the decoding of error-correcting codes. The main novelty consists in the introduction of an architecture based on transformers. The authors provide convincing and thorough numerical results comparing the BER and the complexity of the proposed approach with various baselines. Such results apply to codes in the short to medium block-length range (from 32 to 128 bits). The reviewers have expressed a number of concerns in their initial reports. After the rebuttal stage, most of these concerns have been resolved. The reviewers Nt2o and MQDw have particularly appreciated the additional numerical results provided by the authors (BP baselines, non-Gaussian channels, other modulations and SCL decoder for polar codes). This is also explicitly pointed out in the updated reviews. In summary, there is clear consensus towards accepting the paper. After my own reading of the manuscript, I agree with this assessment and I am happy to recommend acceptance. As a final note, I would like to encourage the authors to include in the camera ready the additional experiments and discussions mentioned in the rebuttal.
train
[ "Km4WD5jxiXn", "BpfyE6OBR7Y", "V0E26lz5cBX", "E4njdxncOR7", "MsE6WVZOiIv", "q1vpEDDT5kI", "qsyu5hU9Xa", "zMMaItNd-X1", "h7TXKmjTNll", "i2n0exlmbhT", "YecrtciLAeq", "yL2lOIGjFh" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for the valuable ideas, which have no doubt helped improve our manuscript.\nWe would be happy to know if you are satisfied with our answers, or if there is anything else we can address.", " Thank you for the reply and the revised manuscript. I have read them and adjusted the score accordingly.",...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "h7TXKmjTNll", "E4njdxncOR7", "nips_2022_4F0Pd2Wjl0", "yL2lOIGjFh", "YecrtciLAeq", "i2n0exlmbhT", "h7TXKmjTNll", "nips_2022_4F0Pd2Wjl0", "nips_2022_4F0Pd2Wjl0", "nips_2022_4F0Pd2Wjl0", "nips_2022_4F0Pd2Wjl0", "nips_2022_4F0Pd2Wjl0" ]
nips_2022_vgIz0emVTAd
DISCO: Adversarial Defense with Local Implicit Functions
The problem of adversarial defenses for image classification, where the goal is to robustify a classifier against adversarial examples, is considered. Inspired by the hypothesis that these examples lie beyond the natural image manifold, a novel aDversarIal defenSe with local impliCit functiOns (DISCO) is proposed to remove adversarial perturbations by localized manifold projections. DISCO consumes an adversarial image and a query pixel location and outputs a clean RGB value at the location. It is implemented with an encoder and a local implicit module, where the former produces per-pixel deep features and the latter uses the features in the neighborhood of the query pixel for predicting the clean RGB value. Extensive experiments demonstrate that both DISCO and its cascade version outperform prior defenses, regardless of whether the defense is known to the attacker. DISCO is also shown to be data and parameter efficient and to mount defenses that transfer across datasets, classifiers and attacks.
Accept
In this paper, DISCO, a test-time defense against adversarial attacks, is proposed based on prior concepts of adversarial denoising, manifold modeling, and implicit functions. The authors show promising efficiency and experimental results for DISCO. However, a large concern raised by some reviewers is the limited novelty, but the authors claim that the perspective of modeling local statistics and the introduction of the local implicit function for adversarial defense are important contributions. Another limitation is that some reviewers are concerned about robustness evaluation on norm-bounded attacks only, but the authors claim that many baselines in RobustBench [25,26,32,33,81,87,99,110,116,119,116] are also evaluated only on norm-bounded attacks. Since most reviewers are satisfied with the authors' responses, this work is suggested to be accepted, but the AC hopes the authors will continue to clarify the limitations and take recent publications into consideration to further revise the paper.
train
[ "f2ab6Yu6Ya4", "xTJx05aD7x", "Qx-RMNJ-rx9", "OCVuUVQN7v", "1GmX18t_r8O", "KHSgHBBtPgN", "Ud85DTzmTy4", "ak2l36xKZhb", "_PAtjxpA1IT", "6dv9nkAMWze", "WgsjHr4wL7B", "m642s8fS00g", "yzw041YJo-" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\nWe appreciate your efforts in reviewing our paper. We have addressed your questions in detail. As the deadline is approaching, would you please check our response and acknowledge our rebuttal?\nThank you so much.\nBest regards,\nAuthors", " Thank you for the thorough response. It has adequately...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4, 3 ]
[ "ak2l36xKZhb", "KHSgHBBtPgN", "yzw041YJo-", "m642s8fS00g", "WgsjHr4wL7B", "6dv9nkAMWze", "_PAtjxpA1IT", "nips_2022_vgIz0emVTAd", "nips_2022_vgIz0emVTAd", "nips_2022_vgIz0emVTAd", "nips_2022_vgIz0emVTAd", "nips_2022_vgIz0emVTAd", "nips_2022_vgIz0emVTAd" ]
nips_2022_6rhl2k1SUGs
Watermarking for Out-of-distribution Detection
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models. However, existing methods largely ignore the reprogramming property of deep models and thus may not fully unleash their intrinsic strength: without modifying parameters of a well-trained deep model, we can reprogram this model for a new purpose via data-level manipulation (e.g., adding a specific feature perturbation). This property motivates us to reprogram a classification model to excel at OOD detection (a new task), and thus we propose a general methodology named watermarking in this paper. Specifically, we learn a unified pattern that is superimposed onto features of original data, and the model's detection capability is largely boosted after watermarking. Extensive experiments verify the effectiveness of watermarking, demonstrating the significance of the reprogramming property of deep models in OOD detection.
Accept
The reviewers agree that the proposed method is interesting and yields good performance. A number of concerns were raised during the initial round of reviews concerning the rigorousness and completeness of experiments, but these were addressed during extensive back-and-forth between authors and reviewers.
train
[ "MFKTygbr2kF", "YOvmwWDqPe6", "8oJGA4TT3u8", "S_P8xvp0qd8", "1mvxr7LLXCe", "L3r7CEiNgw", "IYAd9rZzf_y", "GwShy9bg7YC", "jDM4sgM4xo", "iMQSsvedng_", "xm3bOit9_5s", "FUxiyzCb8Iz", "FO-kWtmDjbV", "qFl9vurc9Z_", "Zv-v1dOkmFn", "vThqHlEQTR_", "0Vqvr1OWuwcc", "au3MH8XUIUy", "BdegYlMim1...
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer",...
[ " Dear Reviewer WTn8,\n\nGlad to hear that your concerns are addressed well. Thanks for supporting our paper to be accepted.\n\nBest regards,\n\nAuthors of #1621", " Sincerely thanks for the constructive suggestions/comments of all the reviewers. We have correspondingly revised the current submission and marked t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "jDM4sgM4xo", "nips_2022_6rhl2k1SUGs", "BPD9S8oB4bZ", "iMQSsvedng_", "vThqHlEQTR_", "xm3bOit9_5s", "xm3bOit9_5s", "FUxiyzCb8Iz", "07-0TcR_qU", "vThqHlEQTR_", "bA_UWXls75g", "0BXWdZ-jPkk", "au3MH8XUIUy", "QRP8ENGCcmI", "nips_2022_6rhl2k1SUGs", "zGX08tJxYNB", "BPD9S8oB4bZ", "OujBX2wy...
nips_2022_bIlUqzwObX
Reinforcement Learning with a Terminator
We present the problem of reinforcement learning with exogenous termination. We define the Termination Markov Decision Process (TerMDP), an extension of the MDP framework, in which episodes may be interrupted by an external non-Markovian observer. This formulation accounts for numerous real-world situations, such as a human interrupting an autonomous driving agent for reasons of discomfort. We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds. We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret. Motivated by our theoretical analysis, we design and implement a scalable approach, which combines optimism (w.r.t. termination) and a dynamic discount factor, incorporating the termination probability. We deploy our method on high-dimensional driving and MinAtar benchmarks. Additionally, we test our approach on human data in a driving setting. Our results demonstrate fast convergence and significant improvement over various baseline approaches.
Accept
All reviewers are in agreement that this paper should be accepted. It combines clear writing, a well-motivated setting (external termination due to unobserved accumulation of costs), and sound theoretical analysis with a novel algorithmic contribution (TermPG) that performs well on an interesting domain that aligns well with the stated setting. Furthermore, the additional leveraging of the cost estimation for dynamic discounting may itself be of fairly broad interest to research in RL. Clear Accept, really solid paper.
train
[ "ghIE9_xi6Fi", "O9wi3iwn1LV", "O1jyfgSLxF5", "XOlz6thldNB", "aFLtkU-mNMX", "BkDB8f7xiE", "bYboBHxnT6v", "iAsE8EDZaoz", "oHyn8BqxDbe", "_MfTwYZxSUh", "d1D-syiasit", "3FGwXNnlrv5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed response. I encourage you to make the changes & clarifications discussed. I recommend acceptance of the paper.", " Thanks to the authors for their response and clarifications. I've read through all the other reviews and responses, and am satisfied to recommend acceptance...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "d1D-syiasit", "XOlz6thldNB", "iAsE8EDZaoz", "aFLtkU-mNMX", "3FGwXNnlrv5", "bYboBHxnT6v", "d1D-syiasit", "_MfTwYZxSUh", "nips_2022_bIlUqzwObX", "nips_2022_bIlUqzwObX", "nips_2022_bIlUqzwObX", "nips_2022_bIlUqzwObX" ]
nips_2022_p_g2nHlMus
Rethinking Generalization in Few-Shot Classification
Single image-level annotations only correctly describe an often small subset of an image’s content, particularly when complex real-world scenes are depicted. While this might be acceptable in many classification scenarios, it poses a significant challenge for applications where the set of classes differs significantly between training and test time. In this paper, we take a closer look at the implications in the context of few-shot learning. Splitting the input samples into patches and encoding these via the help of Vision Transformers allows us to establish semantic correspondences between local regions across images and independent of their respective class. The most informative patch embeddings for the task at hand are then determined as a function of the support set via online optimization at inference time, additionally providing visual interpretability of ‘what matters most’ in the image. We build on recent advances in unsupervised training of networks via masked image modelling to overcome the lack of fine-grained labels and learn the more general statistical structure of the data while avoiding negative image-level annotation influence, aka supervision collapse. Experimental results show the competitiveness of our approach, achieving new state-of-the-art results on four popular few-shot classification benchmarks for 5-shot and 1-shot scenarios.
Accept
This paper tackles few-shot learning with a transformer architecture and, inspired by the intuition that fine-grained information is ignored in existing methods, uses an inner-loop token re-weighting method to improve results. Overall the reviewers appreciated the use of modern architectures (Vision Transformers), the reasonableness of the re-weighting intuition, and experimental results. Concerns were raised about comparison to existing methods with similar intuitions (e.g. [A] mentioned by eF5W), fairness of the comparison with respect to model capacity and in general ablations demonstrating that it's the method (not transformers by themselves) leading to improved results, and lack of principled explanations for the design choices, and computational complexity. The authors provided strong rebuttals, including new experiments using linear classifiers and prototypical approaches, use of smaller models, and a demonstration of potential pruning methods to address computational complexity. The reviewers were overall receptive to the rebuttal, and all recommended acceptance of this paper after some back-and-forth. The paper provides both a nice benchmark applying Vision Transformers to few-shot learning as well as a method that is demonstrably better through ablation studies. Therefore, this paper provides several nice contributions to the community, and I recommend acceptance.
val
[ "FoJbLjv1q4p", "xed_i8L6fo", "yfGwk4qBB-H", "Ihh3_GbsB2S", "WiNslw0VrUP", "jSGLAMzN_A0", "8iDmcnOT_Vj", "-cn33EhWxD", "fZkjysigkyP", "6R2dlUKR48i", "awuPyh8j9s-", "dDhOVIcpwNF", "A9HFNo0-M3c", "v6SSYysrYfX", "KCdhzm_i_c" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your continued feedback!\n\n> _[...] for the supervised pre-training for the same FSL task in Fig. 4, what exactly is done?_\n\nFor adequate comparison to related work in FSL, we follow the widely adopted pretraining scheme used in FEAT [52] and other works (e.g. DeepEMD [53]) for our supervised pre...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 5 ]
[ "xed_i8L6fo", "8iDmcnOT_Vj", "awuPyh8j9s-", "-cn33EhWxD", "fZkjysigkyP", "8iDmcnOT_Vj", "KCdhzm_i_c", "v6SSYysrYfX", "6R2dlUKR48i", "A9HFNo0-M3c", "dDhOVIcpwNF", "nips_2022_p_g2nHlMus", "nips_2022_p_g2nHlMus", "nips_2022_p_g2nHlMus", "nips_2022_p_g2nHlMus" ]
nips_2022_--aQNMdJc9x
Misspecified Phase Retrieval with Generative Priors
In this paper, we study phase retrieval under model misspecification and generative priors. In particular, we aim to estimate an $n$-dimensional signal $\mathbf{x}$ from $m$ i.i.d.~realizations of the single index model $y = f(\mathbf{a}^T\mathbf{x})$, where $f$ is an unknown and possibly random nonlinear link function and $\mathbf{a} \in \mathbb{R}^n$ is a standard Gaussian vector. We make the assumption $\mathrm{Cov}[y,(\mathbf{a}^T\mathbf{x})^2] \ne 0$, which corresponds to the misspecified phase retrieval problem. In addition, the underlying signal $\mathbf{x}$ is assumed to lie in the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs. We propose a two-step approach, for which the first step plays the role of spectral initialization and the second step refines the estimated vector produced by the first step iteratively. We show that both steps enjoy a statistical rate of order $\sqrt{(k\log L)\cdot (\log m)/m}$ under suitable conditions. Experiments on image datasets are performed to demonstrate that our approach performs on par with or even significantly outperforms several competing methods.
Accept
In this paper, the authors study the standard phase retrieval problem in the case where the signal is assumed to come from a generative model prior. In particular, they propose an algorithm that starts with a spectral method and follows it with an iterative refinement step. The authors provide two theorems giving guarantees on the performance of each step of the algorithm and illustrate how their procedure performs relative to several previous algorithms. All reviewers judged the work positively, finding the paper clear and well organized, and noting that it discusses honestly both the advantages and the limitations of its methods and theorems. The reviewers also found the authors' answers to their questions during the rebuttal phase satisfactory.
train
[ "9EhSYHamwA6", "2YnOsXRc8mH", "K_GYClagkMr", "xVGHx7K9REz", "4PfuxtXc2Sa", "E4FeSgD-53y", "NAdEREDnZxZ", "7-74ZXBWMrc", "jgyU9ZueFcS", "qYOA7I0tli8", "wUN_QqPkOrt", "miOX_i6vhL", "7aHXDd-oStf" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are pleased that the reviewer found our answers globally satisfactory, and we thank the reviewer again for the comments. Our responses to the two points are as follows:\n\n(**Comparison with the Bayes-optimal performance**) This is a helpful suggestion. We will compare the performances of our algorithm and the...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3, 3 ]
[ "2YnOsXRc8mH", "K_GYClagkMr", "7aHXDd-oStf", "miOX_i6vhL", "wUN_QqPkOrt", "qYOA7I0tli8", "jgyU9ZueFcS", "nips_2022_--aQNMdJc9x", "nips_2022_--aQNMdJc9x", "nips_2022_--aQNMdJc9x", "nips_2022_--aQNMdJc9x", "nips_2022_--aQNMdJc9x", "nips_2022_--aQNMdJc9x" ]
nips_2022_V_4BQGbcwFB
Positively Weighted Kernel Quadrature via Subsampling
We study kernel quadrature rules with convex weights. Our approach combines the spectral properties of the kernel with recombination results about point measures. This results in effective algorithms that construct convex quadrature rules using only access to i.i.d. samples from the underlying measure and evaluation of the kernel and that result in a small worst-case error. In addition to our theoretical results and the benefits resulting from convex weights, our experiments indicate that this construction can compete with the optimal bounds in well-known examples.
Accept
We thank the authors and reviewers for their work throughout the reviewing process. The paper generated detailed and interesting discussions. While minor concerns remain, we are confident that the paper brings new elements and will generate exciting discussions in the kernel quadrature community, and we are happy to recommend acceptance. We trust the authors to use all the information in the discussion threads to polish the camera-ready version of the paper.
train
[ "YubQrGn7tqp", "nIYRJNJ6eR4", "9PXbD-OCEpg", "_0xSQ3bowK", "z-m1FoRRkqT", "vz6fs5Sq24c", "AUd1FDFU8n", "nqQZ6Hk57OV", "Uct1aPwY-Hx", "Q1zT_ndtZy5", "CKquFpF2P-O", "lXO6x6hAijt", "uHM8oC9Eb_m", "Eaxwvp3MNXx" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I will keep my original score.", " Thank you to all the reviewers for constructive comments and suggestions. Although we have already replied to each reviewer, we here summarize our primary updates of the revised manuscript in two parts:\n\n- *Contribution and Limitation*: We have ad...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 3 ]
[ "Q1zT_ndtZy5", "nips_2022_V_4BQGbcwFB", "_0xSQ3bowK", "z-m1FoRRkqT", "nqQZ6Hk57OV", "Eaxwvp3MNXx", "Eaxwvp3MNXx", "uHM8oC9Eb_m", "lXO6x6hAijt", "CKquFpF2P-O", "nips_2022_V_4BQGbcwFB", "nips_2022_V_4BQGbcwFB", "nips_2022_V_4BQGbcwFB", "nips_2022_V_4BQGbcwFB" ]