| paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29-10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_X8mmH03wFlD | Understanding the Failure of Batch Normalization for Transformers in NLP | Batch Normalization (BN) is a core and prevalent technique in accelerating the training of deep neural networks and improving the generalization on Computer Vision (CV) tasks. However, it fails to defend its position in Natural Language Processing (NLP), which is dominated by Layer Normalization (LN). In this paper, we are trying to answer why BN usually performs worse than LN in NLP tasks with Transformer models. We find that the inconsistency between training and inference of BN is the leading cause that results in the failure of BN in NLP. We define Training Inference Discrepancy (TID) to quantitatively measure this inconsistency and reveal that TID can indicate BN's performance, supported by extensive experiments, including image classification, neural machine translation, language modeling, sequence labeling, and text classification tasks. We find that BN can obtain much better test performance than LN when TID keeps small through training. To suppress the explosion of TID, we propose Regularized BN (RBN) that adds a simple regularization term to narrow the gap between batch statistics and population statistics of BN. RBN improves the performance of BN consistently and outperforms or is on par with LN on 17 out of 20 settings, including ten datasets and two common variants of Transformer. | Accept | The paper studies the reason why Batch Normalization is not effective in NLP tasks. The authors find that the inconsistency between training and inference leads to the failure. They define Training Inference Discrepancy (TID) to measure the inconsistency and show that BN can obtain better performance when TID is small. The authors propose Regularized BN with an additional regularization term. Experiments show RBN is better than plain BN and comparable to Layer Normalization.
Authors may want to incorporate the additional analysis from the feedback. | train | [
"LY8v6fUCnUi",
"QKnk09ZZnv",
"2L3dNc_8gD",
"1GGRNny_7lx",
"Y0jiNoVtkgu",
"HrwmgEeXjEz",
"9SdFLPP-PGE",
"-XkpWWrYByv"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Sure, we will add the additional results to the revised version. Thanks!",
" The additional experimental results should be included in the revised version to make this work more convincing.",
" We thank the reviewer for the encouraging and insightful comments. ",
" We thank the reviewer for the encouraging ... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"QKnk09ZZnv",
"1GGRNny_7lx",
"-XkpWWrYByv",
"9SdFLPP-PGE",
"HrwmgEeXjEz",
"nips_2022_X8mmH03wFlD",
"nips_2022_X8mmH03wFlD",
"nips_2022_X8mmH03wFlD"
] |
nips_2022_W-xJXrDB8ik | Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment | This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization. We formulate online pseudo-labeling as an optimization problem to find pseudo-labels that maximize the mutual information between the label and data while being close to a given model probability. We derive a fixed-point iteration method and prove its convergence to the optimal solution. In contrast to baselines, MIRA combined with pseudo-label prediction enables a simple yet effective clustering-based representation learning without incorporating extra training techniques or artificial constraints such as sampling strategy, equipartition constraints, etc. With relatively small training epochs, representation learned by MIRA achieves state-of-the-art performance on various downstream tasks, including the linear/${\it k}$-NN evaluation and transfer learning. Especially, with only 400 epochs, our method applied to ImageNet dataset with ResNet-50 architecture achieves 75.6% linear evaluation accuracy. | Accept | This paper proposes a pseudo-labelling algorithm for unsupervised representation learning inspired by information maximization.
The reviewers found that the proposed method is theoretically well grounded and that the authors provide extensive experiments to demonstrate the validity of their approach. I agree with those conclusions after reading the paper. I therefore support acceptance. | train | [
"GS-J0q8RzV4",
"DI35LWbYtY",
"s-IBBepzZME",
"oAtkE--MEw9",
"oGdYxeQnfvJV",
"Zft6wEhMaC",
"ji8_QAG4Iz3",
"8OgWIwgoxhK",
"mdBrnSMOit",
"Qhp_i9xk_6U",
"zX87lv2njFZ",
"oMOKezYed-Z",
"IrnaP0iZdVb",
"Xfsy5Hmk6C6",
"yyOp9cT-ES",
"MibC0T8e3I_",
"O2icJxbjfBs",
"aavw41RbZTD"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are happy to hear that our responses have been helpful for the understanding of our work. We will incorporate the additional results and feedback, including the limitations, into the next version. For the recommending papers [a, b], we will add discussion on the papers in the revised version.\n\nThank you!\n\n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"s-IBBepzZME",
"Qhp_i9xk_6U",
"zX87lv2njFZ",
"ji8_QAG4Iz3",
"mdBrnSMOit",
"IrnaP0iZdVb",
"nips_2022_W-xJXrDB8ik",
"mdBrnSMOit",
"Qhp_i9xk_6U",
"aavw41RbZTD",
"oMOKezYed-Z",
"yyOp9cT-ES",
"MibC0T8e3I_",
"O2icJxbjfBs",
"nips_2022_W-xJXrDB8ik",
"nips_2022_W-xJXrDB8ik",
"nips_2022_W-xJXr... |
nips_2022_um2BxfgkT2_ | Pure Transformers are Powerful Graph Learners | We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice. Given a graph, we simply treat all nodes and edges as independent tokens, augment them with token embeddings, and feed them to a Transformer. With an appropriate choice of token embeddings, we prove that this approach is theoretically at least as expressive as an invariant graph network (2-IGN) composed of equivariant linear layers, which is already more expressive than all message-passing Graph Neural Networks (GNN). When trained on a large-scale graph dataset (PCQM4Mv2), our method coined Tokenized Graph Transformer (TokenGT) achieves significantly better results compared to GNN baselines and competitive results compared to Transformer variants with sophisticated graph-specific inductive bias. Our implementation is available at https://github.com/jw9730/tokengt. | Accept | The paper applies transformers directly to a graph by treating all nodes and edges in the graph as tokens. The authors prove this approach is theoretically at least as expressive as an invariant graph network, which is already more expressive than all message-passing graph neural networks. The approach is simple and interesting. Reviewers had concerns about the empirical studies as only one dataset was used. In the response, the authors have partially addressed the concerns by offering more results. More discussion/analysis of the empirical results is expected. Considering that the paper is mainly on the theoretical side and the paper does provide interesting new insights, I'd recommend acceptance. | test | [
"_wSUrLD4RcM",
"cTlAYsgfQVQ",
"7XIA1MTSbHG",
"kriAHTonRH",
"Ny2RLkE3yAd",
"QXPw4PGudKI",
"AaeKY3Pnu2",
"_nj11zepgp1",
"2mhIykArnz1",
"aZzieG9hMql",
"oFnD20SZxT8",
"jJq7UJDrv0R",
"L5P1o82MDDY",
"lylut5jBVny",
"L5-55B_Xzrh",
"_QNSiesND3_5",
"2SUPNaEpJoH",
"7RgfdJEhsTG",
"rvf1wltR6l... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author... | [
" Thanks to the author for the detailed reply. Most of my concerns have been addressed. I keep my original rating.\n",
" Thank you for the detailed response. Most of my concerns are addressed. Hence I raised my score to 6 (I am actually a 5.5 now). I strongly suggest the authors add all the discussions to the fin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
3
] | [
"QXPw4PGudKI",
"Sz-hv6eoMtl",
"bC8D4WS0PBHO",
"5eDd0-MG5X",
"SLEUsI1pVNq",
"2SUPNaEpJoH",
"aZzieG9hMql",
"vNi7da5Aa1EF",
"rvf1wltR6lZ",
"5eDd0-MG5X",
"5eDd0-MG5X",
"5eDd0-MG5X",
"5eDd0-MG5X",
"5eDd0-MG5X",
"nips_2022_um2BxfgkT2_",
"EByE1osL5uL",
"EByE1osL5uL",
"lVmW25iPI5Y",
"lVm... |
nips_2022_Lr2Z85cdvB | Differentiable hierarchical and surrogate gradient search for spiking neural networks | Spiking neural network (SNN) has been viewed as a potential candidate for the next generation of artificial intelligence with appealing characteristics such as sparse computation and inherent temporal dynamics. By adopting architectures of deep artificial neural networks (ANNs), SNNs are achieving competitive performances in benchmark tasks such as image classification. However, successful architectures of ANNs are not necessarily ideal for SNNs, and when tasks become more diverse, effective architectural variations could be critical. To this end, we develop a spike-based differentiable hierarchical search (SpikeDHS) framework, where spike-based computation is realized on both the cell and the layer level search space. Based on this framework, we find effective SNN architectures under limited computation cost. During the training of SNN, a suboptimal surrogate gradient function could lead to poor approximations of true gradients, making the network enter certain local minima. To address this problem, we extend the differential approach to surrogate gradient search where the SG function is efficiently optimized locally. Our models achieve state-of-the-art performances on classification of CIFAR10/100 and ImageNet with accuracy of 95.50%, 76.25% and 68.64%. On event-based deep stereo, our method finds optimal layer variation and surpasses the accuracy of specially designed ANNs while using 26$\times$ lower energy cost ($6.7\mathrm{mJ}$), demonstrating the advantage of SNN in processing highly sparse and dynamic signals. Codes are available at \url{https://github.com/Huawei-BIC/SpikeDHS}. | Accept | This paper proposes a new architecture search algorithm for spiking neural networks (SNNs). The key insight is to optimize both the cell and the architecture level of the SNN. Convincing numerical results are provided on image classification tasks (CIFAR10, CIFAR100) and on an event-based stereo task.
One concern raised by the reviewers regards the comparison to existing work (some of which appears to be very recent). This point was raised by all four reviewers (although it has led to a rather large variance in their initial assessments). After an in-depth discussion between authors and reviewers and a discussion between AC and reviewers as well, it appears that this concern has been addressed in a satisfactory way. Other concerns (e.g., training pipeline and versatility by reviewer cjsQ) have also been resolved, and the remaining ones (measuring energy accurately as mentioned by reviewer LhUf, and computational overhead on neuromorphic hardware as mentioned by reviewer hUzC) have been regarded as out of scope.
In summary, the reviewers have found the authors’ response convincing and have reached a consensus towards accepting the paper. After my own reading of the manuscript, I agree with this assessment and I am happy to recommend acceptance. As a final note, I would like to encourage the authors to include in the camera ready the discussions related to the feedback from the reviewers.
| train | [
"8A5Zx1db9M-",
"ZRpPkr5bn5o",
"HJ6tN9KC5q8",
"XBHjb5P3Bty",
"RfBM5ydvewX",
"u4AwF9ufDyl",
"xLaCRXI0KA",
"AvK3nlWU6G4",
"EQZxGej2SuS",
"6jRJ90u9RYi",
"6m0SU06XPa",
"zdLed46sH1",
"wkJs38wHYJW",
"dx7KkE_yqXQ",
"zGLcJgDMgfV",
"0RzQrCiHb1i",
"WW5HtbQ0i4",
"CVvSInpg_hh",
"GxGNgybmjru",... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thank you for your reply. We understand your concerns for search time cost and power consumption measurement problems in the field of SNN. However, we believe these issues should not influence the main contribution of this work stated in the general response, the reasons are following:\n\n\n**For the issue of sea... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
5
] | [
"ZRpPkr5bn5o",
"0RzQrCiHb1i",
"XBHjb5P3Bty",
"RfBM5ydvewX",
"u4AwF9ufDyl",
"zGLcJgDMgfV",
"AvK3nlWU6G4",
"dx7KkE_yqXQ",
"6jRJ90u9RYi",
"6m0SU06XPa",
"zdLed46sH1",
"wkJs38wHYJW",
"xINKCDvGgDo",
"YxxibRhimpE",
"GxGNgybmjru",
"CVvSInpg_hh",
"nips_2022_Lr2Z85cdvB",
"nips_2022_Lr2Z85cdv... |
nips_2022_FRDiimH26Tr | TA-MoE: Topology-Aware Large Scale Mixture-of-Expert Training | Sparsely gated Mixture-of-Expert (MoE) has demonstrated its effectiveness in scaling up deep neural networks to an extreme scale. Although numerous efforts have been made to improve the performance of MoE from the model design or system optimization perspective, existing MoE dispatch patterns are still not able to fully exploit the underlying heterogeneous network environments. In this paper, we propose TA-MoE, a topology-aware routing strategy for large-scale MoE training, from a model-system co-design perspective, which can dynamically adjust the MoE dispatch pattern according to the network topology. Based on communication modeling, we abstract the dispatch problem into an optimization objective and obtain the approximate dispatch pattern under different topologies. On top of that, we design a topology-aware auxiliary loss, which can adaptively route the data to fit in the underlying topology without sacrificing the model accuracy. Experiments show that TA-MoE can substantially outperform its counterparts on various hardware and model configurations, with roughly 1.01x-1.61x, 1.01x-4.77x, 1.25x-1.54x improvements over the popular DeepSpeed-MoE, FastMoE and FasterMoE systems. | Accept | Mixture-of-Expert (MoE) models have demonstrated a lot of success recently. To further improve upon the existing literature, this paper studies MoE routing for different network topologies. This is essentially to deal with the communication overhead of MoE training. The strategy is to add another layer on top for the topology along with a corresponding objective to optimize. The authors also provide experiments demonstrating improved speed of convergence. The reviewers were in general positive and liked the idea of the paper. The reviewers did however raise issues about the lack of a clear demonstration that accuracy is not compromised, the lack of large data, and a few other more technical concerns. The reviewers' concerns seem to be more or less addressed by the authors. My overall assessment of the paper is positive. I think the general premise of the paper is interesting and the paper has interesting ideas. I do agree however that the experiments need to be more thorough. I am recommending acceptance but request that the authors follow the reviewers' comments to improve their experimental results. | val | [
"CJjEFXOZzNK",
"F6xWNVk_HYP",
"L7fg_BjF_lU",
"hmVpzhdb8p",
"hQsA_W62jWp",
"sG7x1zlbhQ1",
"E0b_tRA2Ny5",
"DnQMqmcbZA",
"Ebhq9jKib8R",
"OZBQk6it-V7",
"xGdY48er99r"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I agree with your point. I will keep my current rating 5 considering all of those. Thanks!",
" We appreciate the constructive feedback and valuable suggestions again.\n\nWe apologize that we are currently not able to have experiments with more experts or more updates due to the limitations of the computation re... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
2
] | [
"F6xWNVk_HYP",
"hmVpzhdb8p",
"hQsA_W62jWp",
"sG7x1zlbhQ1",
"E0b_tRA2Ny5",
"OZBQk6it-V7",
"Ebhq9jKib8R",
"xGdY48er99r",
"nips_2022_FRDiimH26Tr",
"nips_2022_FRDiimH26Tr",
"nips_2022_FRDiimH26Tr"
] |
nips_2022_4rTN0MmOvi7 | DetCLIP: Dictionary-Enriched Visual-Concept Paralleled Pre-training for Open-world Detection | Open-world object detection, as a more general and challenging goal, aims to recognize and localize objects described by arbitrary category names. The recent work GLIP formulates this problem as a grounding problem by concatenating all category names of detection datasets into sentences, which leads to inefficient interaction between category names. This paper presents DetCLIP, a paralleled visual-concept pre-training method for open-world detection by resorting to knowledge enrichment from a designed concept dictionary. To achieve better learning efficiency, we propose a novel paralleled concept formulation that extracts concepts separately to better utilize heterogeneous datasets (i.e., detection, grounding, and image-text pairs) for training. We further design a concept dictionary (with descriptions) from various online sources and detection datasets to provide prior knowledge for each concept. By enriching the concepts with their descriptions,
we explicitly build the relationships among various concepts to facilitate the open-domain learning. The proposed concept dictionary is further used to provide sufficient negative concepts for the construction of the word-region alignment loss, and to complete labels for objects with missing descriptions in captions of image-text pair data. The proposed framework demonstrates strong zero-shot detection performances, e.g., on the LVIS dataset, our DetCLIP-T outperforms GLIP-T by 9.9% mAP and obtains a 13.5% improvement on rare categories compared to the fully-supervised model with the same backbone as ours. | Accept | The paper receives overall positive reviews and the rebuttal has resolved the reviewers' concerns. Reviewers agree that the paper proposes a simple yet effective approach to enrich language concepts to learn better region-concept alignment for object detection. The approach is supported by solid empirical evidence on the LVIS dataset and 13 datasets. AC agrees the methodology of processing data is worth sharing with a broader audience and recommends accepting the paper. | train | [
"bisKCjF7Kg",
"9HsBY-fFyYT",
"0eT8pCePoSo",
"d0Fes_rWLIW",
"BR-YqPZcJEK",
"9jxWHTM-hxH",
"P2V-NZewsdSk",
"oDAYdq6SD6",
"HZVTm7KSOY",
"ke5CqQNASSq",
"yZZAUT13S4I",
"lSwlXGuae3v"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors' effort to address my comments, especially evaluating their method using the VILD protocol. This result should be included in the final version (at least in the supplementary). I still believe I was right in my original assessment about the advantages and disadvantages of the proposed met... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
5
] | [
"9HsBY-fFyYT",
"lSwlXGuae3v",
"yZZAUT13S4I",
"BR-YqPZcJEK",
"ke5CqQNASSq",
"P2V-NZewsdSk",
"HZVTm7KSOY",
"nips_2022_4rTN0MmOvi7",
"nips_2022_4rTN0MmOvi7",
"nips_2022_4rTN0MmOvi7",
"nips_2022_4rTN0MmOvi7",
"nips_2022_4rTN0MmOvi7"
] |
nips_2022_Wo1HF2wWNZb | On the Identifiability of Nonlinear ICA: Sparsity and Beyond | Nonlinear independent component analysis (ICA) aims to recover the underlying independent latent sources from their observable nonlinear mixtures. How to make the nonlinear ICA model identifiable up to certain trivial indeterminacies is a long-standing problem in unsupervised learning. Recent breakthroughs reformulate the standard independence assumption of sources as conditional independence given some auxiliary variables (e.g., class labels and/or domain/time indexes) as weak supervision or inductive bias. However, nonlinear ICA with unconditional priors cannot benefit from such developments. We explore an alternative path and consider only assumptions on the mixing process, such as Structural Sparsity. We show that under specific instantiations of such constraints, the independent latent sources can be identified from their nonlinear mixtures up to a permutation and a component-wise transformation, thus achieving nontrivial identifiability of nonlinear ICA without auxiliary variables. We provide estimation methods and validate the theoretical results experimentally. The results on image data suggest that our conditions may hold in a number of practical data generating processes. | Accept | Strong paper with all reviewers arguing for acceptance.
The only minor concerns from the reviewers were whether the preconditions required for the theorems in the paper were likely to hold in practice. These were discussed thoughtfully in the author response, including a new appendix section. | train | [
"2-FK49RYgwHH",
"yhBR3ZY6yt",
"RhNmxm9pCVn",
"YyRX0HGx4L",
"CQSSofLkVV9",
"C1_sZVQAnIso",
"PClA-HpXzo4",
"skLa7x0sa0V",
"MERRuAam3oB"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We greatly appreciate the reviewer’s time, encouraging comments, and constructive suggestions. Our responses are provided below.\n\n**Q1**: What would happen if the ground truth data were to violate the assumptions but the method was run anyway?\n\n**A1**: Thanks for the great question. In the ablation study, ass... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"MERRuAam3oB",
"RhNmxm9pCVn",
"YyRX0HGx4L",
"skLa7x0sa0V",
"C1_sZVQAnIso",
"PClA-HpXzo4",
"nips_2022_Wo1HF2wWNZb",
"nips_2022_Wo1HF2wWNZb",
"nips_2022_Wo1HF2wWNZb"
] |
nips_2022_nRcyGtY2kBC | Transfer Learning on Heterogeneous Feature Spaces for Treatment Effects Estimation | Consider the problem of improving the estimation of conditional average treatment effects (CATE) for a target domain of interest by leveraging related information from a source domain with a different feature space. This heterogeneous transfer learning problem for CATE estimation is ubiquitous in areas such as healthcare where we may wish to evaluate the effectiveness of a treatment for a new patient population for which different clinical covariates and limited data are available. In this paper, we address this problem by introducing several building blocks that use representation learning to handle the heterogeneous feature spaces and a flexible multi-task architecture with shared and private layers to transfer information between potential outcome functions across domains. Then, we show how these building blocks can be used to recover transfer learning equivalents of the standard CATE learners. On a new semi-synthetic data simulation benchmark for heterogeneous transfer learning, we not only demonstrate performance improvements of our heterogeneous transfer causal effect learners across datasets, but also provide insights into the differences between these learners from a transfer perspective. | Accept | The paper studies methods for estimating conditional average treatment effects (CATE) under a shift in domain where source and target feature spaces are heterogeneous. It is assumed that the (respective) CATEs in both source and target domains are identifiable through ignorability and overlap. No formal assumptions are made regarding the similarity of potential outcome distributions across domains, but it is implicit that there exists a shared structure in the outcome functions. A number of heuristics are proposed to modify popular neural network CATE estimators to this setting, including a wide array of meta-learners such as propensity weighting, doubly robust estimators and TARNet.
Reviewers appreciated the setting of heterogeneous feature domain adaptation, which is understudied in the literature and representative of many transfer tasks of interest, such as transfer from a clinical trial to an observational cohort. Typically, the feature set collected in trials is smaller than in, say, a registry. However, as pointed out by one reviewer, the empirical evaluation does not consider such applications. In addition, no details are given in the main paper for how the heterogeneous feature spaces are constructed for experiments (this is only given in the Appendix). The uniform sampling is quite unrealistic and most likely less challenging than real-world cases.
The authors make assumptions of ignorability and overlap, referring to previous work showing that this renders the causal effect identifiable. While this is true, the interesting complication in this work is that no assumptions are made regarding similarities of feature sets or outcome functions; these are left implicit. As a result, no claims can be made about the usefulness of source data for this task; see, e.g., [1] for a discussion of the hardness of transfer. In other words, the authors rely on empirical evidence to demonstrate this usefulness. In semi-synthetic experiments, the authors find that their proposed approach improves significantly over using only shared features, even when the number of target samples is minimal.
Reviewers were concerned about the contextualisation of the work in the literature, given previous work on transportability of causal effects and on domain adaptation. Adding to this list, I would suggest that the authors refer to previous work on heterogeneous-feature transfer learning. Under ignorability and overlap, the settings are not much different from each other, not least demonstrated by the fact that the T-learner solution performs well.
The authors propose several "building blocks" but don't evaluate the importance of these in isolation, using, for example, an ablation study. This makes it difficult to assess which components are necessary and which are not.
In summary, the considered setting is interesting and the algorithmic contributions appear useful empirically. The theoretical and methodological contributions are rather small, and the work should be better contextualised in the related topics of domain adaptation and transportability.
[1] Ben-David, Shai, and Ruth Urner. "On the hardness of domain adaptation and the utility of unlabeled target samples." International Conference on Algorithmic Learning Theory. Springer, Berlin, Heidelberg, 2012. | train | [
"yn0PVHkgsy0",
"h9bRlCbPoy0",
"O-0n4XN8rPZ",
"nmd8lSbCW1f",
"XQCjMkoH9j5",
"F3AUXTpebPx",
"zJq-nSyvU2q",
"l8x35inBJxO",
"ZvZOMrUFLx",
"CTevtWAcbFw",
"793DtsPMODP",
"fjE682UB8iA",
"xiBcPK4JR7n",
"CEZuhRLf1OL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the thoughtful responses. After reading this response, along with the other reviews and author responses, I retain my recommendation of acceptance.\n\nThank you for your detailed thoughts on the specific topics and extensions which I mentioned in the original review. While I find these specific topi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
5
] | [
"793DtsPMODP",
"ZvZOMrUFLx",
"F3AUXTpebPx",
"CEZuhRLf1OL",
"CEZuhRLf1OL",
"CEZuhRLf1OL",
"xiBcPK4JR7n",
"xiBcPK4JR7n",
"xiBcPK4JR7n",
"fjE682UB8iA",
"fjE682UB8iA",
"nips_2022_nRcyGtY2kBC",
"nips_2022_nRcyGtY2kBC",
"nips_2022_nRcyGtY2kBC"
] |
nips_2022_HH_jBD2ObPq | BR-SNIS: Bias Reduced Self-Normalized Importance Sampling | Importance Sampling (IS) is a method for approximating expectations with respect to a target distribution using independent samples from a proposal distribution and the associated importance weights. In many cases, the target distribution is known up to a normalization constant and self-normalized IS (SNIS) is then used. While the use of self-normalization can have a positive effect on the dispersion of the estimator, it introduces bias. In this work, we propose a new method, BR-SNIS, whose complexity is essentially the same as SNIS and which significantly reduces bias. This method is a wrapper, in the sense that it uses the same proposal samples and importance weights but makes a clever use of iterated sampling-importance-resampling (i-SIR) to form a bias-reduced version of the estimator. We derive the proposed algorithm with rigorous theoretical results, including novel bias, variance, and high-probability bounds. We illustrate our findings with numerical examples. | Accept | Importance sampling requires the knowledge of the normalization constant of the distribution to be sampled from. SNIS (Self-Normalized Importance Sampling) does not, but is biased. The authors study methods to reduce the bias in SNIS: BR-SNIS. They prove error bounds for this method (Theorems 3 and 4).
The reviewers agreed that BR-SNIS does a nice job in practice, and that the theoretical analysis is novel and sound [especially Reviewers 9JEd, 1FgZ].
However, they also all pointed out that the paper is overstating the novelty of BR-SNIS. Some identified BR-SNIS with the existing i-SIR/multiple proposal MCMC [9JEc]. Other reviewers wrote that they are not exactly the same but that BR-SNIS is strongly based on i-SIR and that this is not emphasized enough by the authors [nbwq, 1FgZ]. They all required the authors to make it clear that i-SIR is an existing method, that BR-SNIS is strongly based on it, to discuss the connection to multiple proposal MCMC thoroughly and to add many references. Overall, "the novelty in methodology seem to be not big" [hWyK], so the authors should make it clear that their main contribution is the analysis of the method (Theorems 3 and 4). One of them also asked the authors to clarify the novelty of the Rao-Blackwellised estimator [9JEd]. Finally, while they agree that Theorems 1 and 2 are necessary to understand BR-SNIS, it should also be stressed that these results are "classical results", in contrast to Theorems 3 and 4 [9JEd].
The authors acknowledged this, and promised to fix the paper accordingly. During the discussion, the reviewers agreed that the theoretical analysis in the paper is interesting enough to justify its acceptance. I will therefore recommend accepting the paper. I will ask the authors to carefully include in the paper all the discussion on i-SIR and multiple proposal MCMC and on the novelty of BR-SNIS in general (and of course the Rao-Blackwellised estimator // Theorems 1 and 2). There was also a potential computational problem pointed out by [hWyK], which was finally addressed by the authors' reply: please implement the corresponding corrections in the paper if necessary. | train | [
"wavCzxsfvl4",
"EQwzAiPQAAN",
"DuvDaNSZ3xN",
"61QO8X4_THbK",
"CfUmSbW2Wlo",
"dd8uMhGIBnR",
"X1D3bKfnrd",
"m-qK1gHzti",
"pj9ziYysbcd",
"MMWA4ILzbjr",
"bzo3Fc9xll_",
"IKWrFgrbYyd",
"GOaeoeTV_r",
"aQ5tpiNkMZ",
"CtIPnFED4nh",
"2mnaykexEas",
"37wQOx_nvV9",
"CuKt6-Er8AtW",
"AhxI7Sm8-eX... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for your feedback. Following your suggestion, we have now added—a short version of—this discussion to Section 2.3. ",
" Thanks a lot for this remark. We have added a reference to the work you mention at the point where we introduce the induced Markov kernel in question. ",
" Thank you for the detailed ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"CfUmSbW2Wlo",
"DuvDaNSZ3xN",
"CuKt6-Er8AtW",
"AhxI7Sm8-eX",
"GOaeoeTV_r",
"nips_2022_HH_jBD2ObPq",
"bzo3Fc9xll_",
"nips_2022_HH_jBD2ObPq",
"0rtg_rI1XL8",
"0rtg_rI1XL8",
"0rtg_rI1XL8",
"-5y87WueO2I",
"-5y87WueO2I",
"e5urouJ0bep",
"xhjej3dZayO",
"xhjej3dZayO",
"xhjej3dZayO",
"xhjej3... |
nips_2022_xWvI9z37Xd | Where to Pay Attention in Sparse Training for Feature Selection? | A new line of research for feature selection based on neural networks has recently emerged. Despite its superiority to classical methods, it requires many training iterations to converge and detect the informative features. For datasets with a large number of samples or a very high dimensional feature space, the computational time becomes prohibitively long. In this paper, we present a new efficient unsupervised method for feature selection based on sparse autoencoders. In particular, we propose a new sparse training algorithm that optimizes a model's sparse topology during training to quickly pay attention to informative features. The attention-based adaptation of the sparse topology enables fast detection of informative features after a few training iterations. We performed extensive experiments on 10 datasets of different types, including image, speech, text, artificial, and biological. They cover a wide range of characteristics, such as low and high-dimensional feature spaces, as well as few and large training samples. Our proposed approach outperforms the state-of-the-art methods in terms of the selection of informative features while reducing training iterations and computational costs substantially. Moreover, the experiments show the robustness of our method in extremely noisy environments. | Accept | After the rebuttal and discussion, all reviewers recommend acceptance of this paper to some degree. The paper has benefited from a careful review by reviewer U347 and additional experiments and clarifications performed by the authors in response to the reviewer's concerns. All reviewers noted that the paper is clearly written, the proposed method is simple and easily implemented, and that it appears to perform quite well.
| train | [
"YeDk9PYcxBe",
"XLVDXqVMhrM",
"xo6xn505gK",
"T67jspSf5xQ",
"DgxsZR_4Go",
"LafUzOFqcn0",
"745wHHn4fFx",
"XyxzqA7JvrO",
"uzkDpDSpCCx",
"KtXBKaPDhg",
"wws3d2ibO6U",
"B_byu8dozje",
"zHQvPF8B0Kw",
"xXIfanKOdea",
"PyKvQ0rdeQf",
"mRuf-f_3FGa"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you. We appreciate the reviewer’s time in checking our response and the constructive suggestions. We will illustrate the overlapping features on Madelon in the final version of the paper.",
" Thanks for your reply,\n\n> We did follow QS in adding noise to the input features. Kindly find this information i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"XLVDXqVMhrM",
"LafUzOFqcn0",
"uzkDpDSpCCx",
"nips_2022_xWvI9z37Xd",
"nips_2022_xWvI9z37Xd",
"mRuf-f_3FGa",
"PyKvQ0rdeQf",
"xXIfanKOdea",
"KtXBKaPDhg",
"wws3d2ibO6U",
"B_byu8dozje",
"zHQvPF8B0Kw",
"nips_2022_xWvI9z37Xd",
"nips_2022_xWvI9z37Xd",
"nips_2022_xWvI9z37Xd",
"nips_2022_xWvI9z... |
nips_2022_DmT862YAieY | A Continuous Time Framework for Discrete Denoising Models | We provide the first complete continuous time framework for denoising diffusion models of discrete data. This is achieved by formulating the forward noising process and corresponding reverse time generative process as Continuous Time Markov Chains (CTMCs). The model can be efficiently trained using a continuous time version of the ELBO. We simulate the high dimensional CTMC using techniques developed in chemical physics and exploit our continuous time framework to derive high performance samplers that we show can outperform discrete time methods for discrete data. The continuous time treatment also enables us to derive a novel theoretical result bounding the error between the generated sample distribution and the true data distribution. | Accept | The work proposes a continuous-time generalization of diffusion models on a discrete space. The description uses a continuous-time Markov chain (CTMC), in parallel to the existing stochastic differential equation description for continuous space. The reverse CTMC, its modeling, and an ELBO objective are described. Some practical considerations and inspirations are also discussed, including avoiding exponentially large models in high dimensions, efficient reverse (generation) process simulation, and a corrector technique that further exploits the model to improve simulation (generation) quality. An error bound on the learned data distribution is also presented that shows a mild dependency on data dimensionality.
All the reviewers agree that this work presents the right way to describe the continuous-time version of diffusion models on discrete spaces, and that the techniques it inspires make a desirable contribution to the community. Some concerns are raised, including performance that is still inferior to the continuous counterpart, and the independence among dimensions. The authors provide reasonable remarks on them. Hence, I recommend accepting this paper.
One minor point: In Sec. 4.2, it would be clearer if the independence is specified both among the random variables $x^{1:D}$ in “output” and between each $x^d$ in “output” and $x^{1:D\backslash d}$ in “input”. Conventionally independence refers to the former, in which case the size is only reduced to $S^D \times D S^2$.
| train | [
"uvKKjIqa1WR",
"O_XreQ-OaS",
"YW9tb7hXj52",
"ksywP7FVRsJ",
"5NiFJvebzl",
"l2gRVMcCFlu",
"luethdqMXyA",
"7wtqXZh4lFd",
"ZFrmIaGePVo",
"AhqZsm6VcVG",
"ndADo_yUjpu",
"-bjFx7gK9Cn",
"25eVeSCfLWV",
"nPAWYINL9vL",
"uuWHL-pBNi",
"zPSY2_P87e9",
"6Dqa_9V8Dr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the answer. I will keep my recommendation for acceptance. ",
" I thank the authors for their detailed response. All of my questions have been answered and I raised my score accordingly.",
" Thanks for the clear response. I have raised my score.",
" I would like to thank the authors for their d... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"25eVeSCfLWV",
"5NiFJvebzl",
"7wtqXZh4lFd",
"AhqZsm6VcVG",
"l2gRVMcCFlu",
"luethdqMXyA",
"6Dqa_9V8Dr",
"ZFrmIaGePVo",
"zPSY2_P87e9",
"ndADo_yUjpu",
"-bjFx7gK9Cn",
"nPAWYINL9vL",
"uuWHL-pBNi",
"nips_2022_DmT862YAieY",
"nips_2022_DmT862YAieY",
"nips_2022_DmT862YAieY",
"nips_2022_DmT862... |
nips_2022_XYDXL9_2P4 | AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning | Fine-tuning large pre-trained language models on downstream tasks is apt to suffer from overfitting when limited training data is available. While dropout proves to be an effective antidote by randomly dropping a proportion of units, existing research has not examined its effect on the self-attention mechanism. In this paper, we investigate this problem through self-attention attribution and find that dropping attention positions with low attribution scores can accelerate training and increase the risk of overfitting. Motivated by this observation, we propose Attribution-Driven Dropout (AD-DROP), which randomly discards some high-attribution positions to encourage the model to make predictions by relying more on low-attribution positions to reduce overfitting. We also develop a cross-tuning strategy to alternate fine-tuning and AD-DROP to avoid dropping high-attribution positions excessively. Extensive experiments on various benchmarks show that AD-DROP yields consistent improvements over baselines. Analysis further confirms that AD-DROP serves as a strategic regularizer to prevent overfitting during fine-tuning. | Accept | The paper proposes a method AD-DROP to drop attention weights in a network to alleviate overfitting.
It randomly samples a set of token positions with respect to attribution scores calculated in a first pass.
The authors provide a variety of experiments on multiple tasks (SNLI, NER, MT, etc.) showing effectiveness compared to other methods. The method is slower since it needs a separate pass to calculate attention attribution.
| test | [
"xwyLjAWMaa",
"okScoyGHsQW",
"zEJHTnRVSdD",
"XrG9IXWr-2",
"gDRcp9i4NrB",
"I0k1uxLJVub",
"OPQvox0oKzf",
"YxpMx8-ni4_",
"b8OPYeIVE_",
"NHWgoimpv_",
"9Idjo23DzV",
"pfJXd9jP-GU",
"4EEOma_0WoM",
"J4mKzzAs1M",
"Z6bUS5ZU5LC"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the question. We conduct the MT experiments following the official colab (https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) with the pretrained OPUS-MT model, and report the BLEU results on the **test set** after five epochs. We found that the BLEU sco... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"okScoyGHsQW",
"YxpMx8-ni4_",
"J4mKzzAs1M",
"b8OPYeIVE_",
"OPQvox0oKzf",
"NHWgoimpv_",
"Z6bUS5ZU5LC",
"J4mKzzAs1M",
"4EEOma_0WoM",
"pfJXd9jP-GU",
"nips_2022_XYDXL9_2P4",
"nips_2022_XYDXL9_2P4",
"nips_2022_XYDXL9_2P4",
"nips_2022_XYDXL9_2P4",
"nips_2022_XYDXL9_2P4"
] |
nips_2022_4F7vp67j79I | Verification and search algorithms for causal DAGs | We study two problems related to recovering causal graphs from interventional data: (i) $\textit{verification}$, where the task is to check if a purported causal graph is correct, and (ii) $\textit{search}$, where the task is to recover the correct causal graph. For both, we wish to minimize the number of interventions performed. For the first problem, we give a characterization of a minimal sized set of atomic interventions that is necessary and sufficient to check the correctness of a claimed causal graph. Our characterization uses the notion of $\textit{covered edges}$, which enables us to obtain simple proofs and also easily reason about earlier known results. We also generalize our results to the settings of bounded size interventions and node-dependent interventional costs. For all the above settings, we provide the first known provable algorithms for efficiently computing (near)-optimal verifying sets on general graphs. For the second problem, we give a simple adaptive algorithm based on graph separators that produces an atomic intervention set which fully orients any essential graph while using $\mathcal{O}(\log n)$ times the optimal number of interventions needed to $\textit{verify}$ (verifying size) the underlying DAG on $n$ vertices. This approximation is tight as $\textit{any}$ search algorithm on an essential line graph has worst case approximation ratio of $\Omega(\log n)$ with respect to the verifying size. With bounded size interventions, each of size $\leq k$, our algorithm gives an $\mathcal{O}(\log n \cdot \log k)$ factor approximation. Our result is the first known algorithm that gives a non-trivial approximation guarantee to the verifying size on general unweighted graphs and with bounded size interventions. | Accept | This paper's reviews as they stand are divergent. The scores are 7, 5 and 4. The paper has seen discussion between the reviewers with negative opinions and the authors. One reviewer who engaged in discussion revised the score up by 1.
The most unfavorable reviewer's main issue was the lack of empirical evaluations comparing the authors' algorithms to close competitors [SMG 20, PSS 22]. The authors responded by turning in a quick implementation with plots comparing the performance of their algorithm with competitors. They attached plots and a readme in the form of an anonymous Drive folder (I looked at it briefly). It seems like their algorithm is very competitive with the state of the art and in fact, in terms of runtime, is faster than even random in some cases. Experiments seem reasonably comprehensive. I would have ideally liked the authors to include the contents of the Drive folder in the main paper and upload a revision (I am not sure if the authors are aware that one can update the paper during the rebuttal).
I would consider this issue sort of taken care of. Empirical simulations do clearly show that the proposed algorithms are effective for random graphs of different sizes.
Other concerns (even after a long discussion with reviewers) are: how significant are the theoretical results in comparison to [SMG 20, PSS 22]?
1) Does it follow from classically known theorems about covered edges [Chi 95] and from other theorems in these two recent references?
The authors responded saying that they are the first to give an *exact* algorithm to perform adaptive interventions to verify if a given graph is indeed the true one, exactly characterizing the instance-optimal number of interventions. I agree with the authors that this is not known and that the relation to covered edges does not directly follow from existing classical results (as the authors have explained, and I did see the proofs in the supplement).
So the results for exact instance-optimal verification are certainly new and novel; previous works only provided bounds on the verification number.
2) How novel are the search results? Here, it is true that for proving the approximation guarantee they do rely on a slight modification of a lower bound, i.e. Lemma 21 in the paper, as observed by one reviewer in the discussion. However, the authors also point out that theirs is the first algorithm with an instance-wise O(log n) approximation to the best adaptive rate for arbitrary graphs.
I believe this was an open problem. Previous works like [SMG+20] could not make a general argument due to their reliance on directed clique trees and some of their orientation properties. The current work takes a different approach using clique separators, and the authors easily extend the results to interventions of bounded size (which was also not known in general).
3) Experiments were added in an anonymous Drive folder (I would strongly suggest the authors add a few to the main camera-ready, put the rest in the supplement, and discuss the runtime benefits in detail; currently the only discussion is in the readme file).
For all three concerns, I feel the authors have responded adequately. This paper simplifies adaptive interventional design with many interesting observations and generalizations, in addition to particularly novel contributions to the verification problem.
Hence, I am positive about this paper.
To the authors: Please do include the figures and discuss the experiments in the camera-ready. Your anonymous folder contents must go into the paper (split between the main paper and the supplement) at the very least. Authors may think their theoretical contribution is the main point of the paper. However, experimentally seeing competitiveness with the baselines AND runtime benefits for various graph sizes is an important contribution. Unlike many other theory results, interventional complexity is not like sample or computational complexity. Therefore, actual gains do matter (even multiplicative constants), and I do appreciate the authors putting in the effort during the rebuttal. It has definitely helped with one of the chief reviewer concerns.
| train | [
"SxO5-bM7tQ",
"Qr96OFz9Ang",
"TiqNDYnG7i3",
"qgyOBjThGu",
"-fcHONNif-a",
"hfUNvb8ioE-",
"51ogZ87ya44",
"N6-h0Qo7PAO",
"8bTvJAoapPo",
"hdhr_Tz6IC_",
"o9fGIVb96de",
"iG6l5Xx8cYO",
"tP_2hC-qPwE",
"eliRm7ZFsWf"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for your time. *We really appreciate your responses and are glad that we could have this discussion!* Please refer to the following for our responses.\n\n### Comment 1: Theorem 9 via Lemma 7\n\nOn necessity: While your statement is true, it is insufficient as a proof for the necessity. For examp... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
4
] | [
"Qr96OFz9Ang",
"TiqNDYnG7i3",
"qgyOBjThGu",
"eliRm7ZFsWf",
"eliRm7ZFsWf",
"eliRm7ZFsWf",
"eliRm7ZFsWf",
"iG6l5Xx8cYO",
"iG6l5Xx8cYO",
"tP_2hC-qPwE",
"tP_2hC-qPwE",
"nips_2022_4F7vp67j79I",
"nips_2022_4F7vp67j79I",
"nips_2022_4F7vp67j79I"
] |
nips_2022_yoLGaLPEPo_ | Hyperbolic Feature Augmentation via Distribution Estimation and Infinite Sampling on Manifolds | Learning in hyperbolic spaces has attracted growing attention recently, owing to their capabilities in capturing hierarchical structures of data. However, existing learning algorithms in the hyperbolic space tend to overfit when limited data is given. In this paper, we propose a hyperbolic feature augmentation method that generates diverse and discriminative features in the hyperbolic space to combat overfitting. We employ a wrapped hyperbolic normal distribution to model augmented features, and use a neural ordinary differential equation module that benefits from meta-learning to estimate the distribution. This is to reduce the bias of estimation caused by the scarcity of data. We also derive an upper bound of the augmentation loss, which enables us to train a hyperbolic model by using an infinite number of augmentations. Experiments on few-shot learning and continual learning tasks show that our method significantly improves the performance of hyperbolic algorithms in scarce data regimes. | Accept | This paper attempts to improve learning in hyperbolic space under limited data (few-shot setting). In this regard, the authors propose a hyperbolic feature augmentation method to circumvent overfitting. Furthermore, as optimizing using a large number of sampled data can be expensive, the paper proposes to upper bound the classification loss and optimize this tractable upper bound in the tangent space, which is Euclidean, making the approach much more practical. There was a wide variance among reviewer scores. We thank the authors and reviewers for actively engaging in discussion and taking steps towards improving the paper, including providing additional experiments. Finally, it would be appropriate to tone down the claim that this is the first paper to perform feature augmentation in hyperbolic space, as it might be unsubstantiated; cf. Weber et al., "Robust large-margin learning in hyperbolic space", NeurIPS 2020, which also augments by solving a certification problem. | val | [
"xdE9kQqaHJ",
"KGDQ82VVKeO",
"k3cCMmMqm8u",
"W5Zcp3Umey8",
"FgjmSzaxsOY",
"oJCr6pvKUO2",
"auBSRmeYId4",
"h0YznPY1Ti7",
"V5wUN21Wr_",
"ooRuOV31f2x",
"C-ICPik5ySd",
"AC6wB_38xPf"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer KDoT,\n\nWe thank you for the review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. If yes, we would appreciate it if you could improv... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
4,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
4
] | [
"ooRuOV31f2x",
"C-ICPik5ySd",
"AC6wB_38xPf",
"C-ICPik5ySd",
"ooRuOV31f2x",
"ooRuOV31f2x",
"V5wUN21Wr_",
"V5wUN21Wr_",
"nips_2022_yoLGaLPEPo_",
"nips_2022_yoLGaLPEPo_",
"nips_2022_yoLGaLPEPo_",
"nips_2022_yoLGaLPEPo_"
] |
nips_2022_c4o5oHg32CY | TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers | Mixup is a commonly adopted data augmentation technique for image classification. Recent advances in mixup methods primarily focus on mixing based on saliency. However, many saliency detectors require intense computation and are especially burdensome for parameter-heavy transformer models. To this end, we propose TokenMixup, an efficient attention-guided token-level data augmentation method that aims to maximize the saliency of a mixed set of tokens. TokenMixup provides ×15 faster saliency-aware data augmentation compared to gradient-based methods. Moreover, we introduce a variant of TokenMixup which mixes tokens within a single instance, thereby enabling multi-scale feature augmentation. Experiments show that our methods significantly improve the baseline models’ performance on CIFAR and ImageNet-1K, while being more efficient than previous methods. We also reach state-of-the-art performance on CIFAR-100 among from-scratch transformer models. Code is available at https://github.com/mlvlab/TokenMixup. | Accept | The paper is about speeding up the saliency computation used in gradient based mixup algorithms(Puzzlemix, Co-mixup, etc). The authors propose employing the attention layer output of the transformer to replace the expensive saliency computation.
Since the focus of the paper is on improving the speed and accuracy of gradient-based mixup algorithms, Tables 1 and 2 should be removed.
I strongly suggest the authors remove Tables 1 and 2, as they only show irrelevant accuracy results across different network architectures. Since the main focus of the paper is on improving over the previous mixup algorithms, I suggest the authors run the following experiment and insert it as Table 1 in the main paper. The current Table 5 does not show a fair comparison across different mixup methods.
1) Fix the network architecture. Denote it as A (e.g., CCT-7/3x1).
2) Report the following accuracy results:
A;
A + input mixup;
A + manifold mixup;
A + cutmix;
A + puzzlemix;
A + co-mixup;
A + horizontal TM;
A + vertical TM;
A + horizontal TM + vertical TM
Here, you would need to make sure to backpropagate all the way through to the individual pixels, as opposed to tokens, to properly run the gradient-based mixup algorithms.
Also, I suggest the authors add the end-to-end forward-backward computation time per image, in addition to the avg. latency, in Table 3.
Overall, I like the proposed method of speeding up the saliency computation via attention maps. However, the experiment protocol needs a lot of improvement. | test | [
"bcd4-8sNmzk",
"Ez3QBLlS5GG",
"DWt6W54RUcb",
"0Ek_5Umyic7",
"8nDPPRXyLwn",
"Lvhc6dZrY_8",
"Btscd3gRP7G",
"sHLKpL7c7V9",
"dkX4KprRGN7",
"kk9v7etWaum",
"aeaxyCT2hk"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed clarifications and additional experiments.\n\nThe response addresses most of my concerns. Especially, the response to Q2 seems to show an interesting difference between gradient-based methods and attention-based methods. Also, the response to Q3 seems fair enough.\n\nBased on the response,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
5
] | [
"0Ek_5Umyic7",
"nips_2022_c4o5oHg32CY",
"aeaxyCT2hk",
"kk9v7etWaum",
"nips_2022_c4o5oHg32CY",
"dkX4KprRGN7",
"sHLKpL7c7V9",
"nips_2022_c4o5oHg32CY",
"nips_2022_c4o5oHg32CY",
"nips_2022_c4o5oHg32CY",
"nips_2022_c4o5oHg32CY"
] |
nips_2022_Jw34v_84m2b | IM-Loss: Information Maximization Loss for Spiking Neural Networks | Spiking Neural Network (SNN), recognized as a type of biologically plausible architecture, has recently drawn much research attention. It transmits information by $0/1$ spikes. This bio-mimetic mechanism of SNN demonstrates extreme energy efficiency since it avoids any multiplications on neuromorphic hardware. However, the forward-passing $0/1$ spike quantization will cause information loss and accuracy degradation. To deal with this problem, the Information maximization loss (IM-Loss) that aims at maximizing the information flow in the SNN is proposed in the paper. The IM-Loss not only enhances the information expressiveness of an SNN directly but also plays a part of the role of normalization without introducing any additional operations (\textit{e.g.}, bias and scaling) in the inference phase. Additionally, we introduce a novel differentiable spike activity estimation, Evolutionary Surrogate Gradients (ESG) in SNNs. By appointing automatic evolvable surrogate gradients for spike activity function, ESG can ensure sufficient model updates at the beginning and accurate gradients at the end of the training, resulting in both easy convergence and high task performance. Experimental results on both popular non-spiking static and neuromorphic datasets show that the SNN models trained by our method outperform the current state-of-the-art algorithms. | Accept | This paper proposes a novel loss for training a spiking neural network that mitigates errors due to quantization. All reviewers agreed that the contributions of this paper were above the acceptance threshold. | train | [
"KfWMpN7C2oi",
"3-I7LZjAjik",
"PdSoinzYLbG",
"TE8RbsU9iJS",
"U9cb7cCZH-Y",
"SF8BTqRXGSa",
"20G-bbdeD9y",
"sps0obxIVRK",
"KN1Zj48w4AK",
"vGWUB9a7mhX",
"bT8n5SzM9Xr",
"-qiLJrJmmUk",
"-7Wn4C4Qlw",
"xQDloNpEWq5",
"CWllSUj-z6n",
"VooeZWLQioH",
"Ri6eIzCfVlq",
"0DIxdIhAvIM",
"5q0wTMn4OO... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" Thanks for your timely reply. We are very grateful to meet such a responsible reviewer. We have taken your suggestion seriously and added an extra section to discuss the sparsity and efficiency cost in the revised version (see line 300-308 in revision). We'd like to know if you have any other questions. We would ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"3-I7LZjAjik",
"PdSoinzYLbG",
"TE8RbsU9iJS",
"Ri6eIzCfVlq",
"SF8BTqRXGSa",
"20G-bbdeD9y",
"vGWUB9a7mhX",
"KN1Zj48w4AK",
"VooeZWLQioH",
"-7Wn4C4Qlw",
"nips_2022_Jw34v_84m2b",
"u72L0FBrqi3",
"u72L0FBrqi3",
"u72L0FBrqi3",
"u72L0FBrqi3",
"u72L0FBrqi3",
"nIKOaMSCJX8",
"nIKOaMSCJX8",
"... |
nips_2022_70bBDacSpNn | Operator-Discretized Representation for Temporal Neural Networks | This paper proposes a new representation of artificial neural networks to efficiently track their temporal dynamics as sequences of operator-discretized events. Our approach takes advantage of diagrammatic notions in category theory and operator algebra, which are known mathematical frameworks to abstract and discretize high-dimensional quantum systems, and adjusts the state space for classical signal activation in neural systems. The states for nonstationary neural signals are prepared at presynaptic systems with ingress creation operators and are transformed via synaptic weights to attenuated superpositions. The outcomes at postsynaptic systems are observed as the effects with egress annihilation operators (each adjoint to the corresponding creation operator) for efficient coarse-grained detection. The follow-on signals are generated at neurons via individual activation functions for amplitude and timing. The proposed representation attributes the different generations of neural networks, such as analog neural networks (ANNs) and spiking neural networks (SNNs), to the different choices of operators and signal encoding. As a result, temporally-coded SNNs can be emulated at competitive accuracy and throughput by exploiting proven models and toolchains for ANNs. | Reject | Reviewers agree that the manuscript presents a fresh attempt, but also that it is lacking in several aspects. The writing has a lot of room for improvement and is not suitable for the NeurIPS community. Its neuroscientific claims are controversial, or rely on non-mainstream arguments without appropriate justification. The theoretical and experimental results are limited. | train | [
"3ajulTFQFJ6",
"mYYXLBUGwB",
"rEhu_OYZXW",
"Ykgulll0q-x",
"pObWdspEpz1",
"s3JZv3E8hvS",
"ySFuI50QNk5",
"KZKwlttBnh9",
"D62fmi35e2Q",
"OemuTBnfJt",
"poGT58ZLp0V",
"xXLca0vOeuA",
"0PA5i0usmj"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your clarification. I see your point better. We will update it with this focus. If you have other points for us to clarify, please let us know ASAP. ",
" First of all, as the title says, our paper is on a new representation (or language in your wording) for temporal neural networks. We do not have... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
1,
2
] | [
"Ykgulll0q-x",
"rEhu_OYZXW",
"D62fmi35e2Q",
"s3JZv3E8hvS",
"poGT58ZLp0V",
"0PA5i0usmj",
"xXLca0vOeuA",
"poGT58ZLp0V",
"OemuTBnfJt",
"nips_2022_70bBDacSpNn",
"nips_2022_70bBDacSpNn",
"nips_2022_70bBDacSpNn",
"nips_2022_70bBDacSpNn"
] |
nips_2022__iXQPM6AsQD | Could Giant Pre-trained Image Models Extract Universal Representations? | Frozen pretrained models have become a viable alternative to the pretraining-then-finetuning paradigm for transfer learning. However, with frozen models there are relatively few parameters available for adapting to downstream tasks, which is problematic in computer vision where tasks vary significantly in input/output format and the type of information that is of value. In this paper, we present a study of frozen pretrained models when applied to diverse and representative computer vision tasks, including object detection, semantic segmentation and video action recognition. From this empirical analysis, our work answers the questions of what pretraining task fits best with this frozen setting, how to make the frozen setting more flexible to various downstream tasks, and the effect of larger model sizes. We additionally examine the upper bound of performance using a giant frozen pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches competitive performance on a varied set of major benchmarks with only one shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7 top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to bring greater attention to this promising path of freezing pretrained image models. | Accept | This paper presents a study of how well pre-trained and frozen large models work across several downstream computer vision tasks. The paper initially received mixed reviews with two of them being borderline accept and one borderline reject. The reviewers shared their concerns about the novelty of the investigation and its impact, with some additional questions about the setup. The authors provided a rebuttal that addressed some of the reviewers' concerns. Two out of three reviewers updated their reviews in the post-rebuttal phase. Reviewers generally agree that the paper should be accepted but still have concerns regarding the novelty. Due to the comprehensive empirical analysis, the AC recommends acceptance, but urges the authors to look at the reviewers' feedback and incorporate their comments into the camera-ready. | train | [
"2_Ywy2vvMA",
"fuola9hp4KV",
"u70Efa5UZS",
"V5TtjoDMxex",
"rlHsA_KHNza",
"3yWjJC6xgSw",
"6xX6p-lHsyUP",
"rppWfLBpEQQ",
"oA2ypPb1gg",
"Zn5LitvYKpc",
"A1sBSDPtWVw",
"5pJM3jGEC_b"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for your valuable and thoughtful comments. Do you have more comments or questions about this submission and our response? We can prepare answers or more results for them.",
" We sincerely thank the reviewer for the constructive comments, which help us improve the paper. We would add the results ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"6xX6p-lHsyUP",
"rlHsA_KHNza",
"V5TtjoDMxex",
"3yWjJC6xgSw",
"rppWfLBpEQQ",
"5pJM3jGEC_b",
"A1sBSDPtWVw",
"Zn5LitvYKpc",
"nips_2022__iXQPM6AsQD",
"nips_2022__iXQPM6AsQD",
"nips_2022__iXQPM6AsQD",
"nips_2022__iXQPM6AsQD"
] |
nips_2022_IlYS1pLa9y | Searching for Better Spatio-temporal Alignment in Few-Shot Action Recognition | Spatio-Temporal feature matching and alignment are essential for few-shot action recognition as they determine the coherence and effectiveness of the temporal patterns. Nevertheless, this process could be not reliable, especially when dealing with complex video scenarios. In this paper, we propose to improve the performance of matching and alignment from the end-to-end design of models. Our solution comes at two-folds. First, we encourage to enhance the extracted Spatio-Temporal representations from few-shot videos in the perspective of architectures. With this aim, we propose a specialized transformer search method for videos, thus the spatial and temporal attention can be well-organized and optimized for stronger feature representations. Second, we also design an efficient non-parametric spatio-temporal prototype alignment strategy to better handle the high variability of motion. In particular, a query-specific class prototype will be generated for each query sample and category, which can better match query sequences against all support sequences. By doing so, our method SST enjoys significant superiority over the benchmark UCF101 and HMDB51 datasets. For example, with no pretraining, our method achieves 17.1\% Top-1 accuracy improvement than the baseline TRX on UCF101 5-way 1-shot setting but with only 3x fewer FLOPs. | Accept | All three reviewers lean towards the acceptance of the paper. The reviewers believe the rebuttal has addressed their concerns. The AC recommends acceptance of the paper, and suggests that the authors include the materials and the discussion they promised in the rebuttal in the final version of the paper. | test | [
"KdE3VJQ7jly",
"xGWamRe-1XK",
"CkXbZpyPmnN",
"3JX4sDyDwElX",
"pYZuMd8u8ys4",
"6bG4bp3mTyof",
"cQAH6Ym3vwo",
"ocAphxfGLlz",
"ARTI-w73XDm",
"h1tgp9aGXs",
"NFKorCJFun4",
"dgc0Adr96_K",
"zP1n3dVwbo",
"KkAFflc2KBx",
"uUw3NH4O7Yk",
"GXq9UzQR2Ix",
"gIVItTuSEb1",
"XGAPSWZyRnC",
"3JumZEX0... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" I appreciate the effort made by the authors to provide additional experimental results. The response addressed all of my concerns. Therefore, I raise my rating to weak accept.",
" Dear reviewer 4Tcp:\n\nWe sincerely thank you for your careful reviews and insightful comments. We have tried our best to respond to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"GXq9UzQR2Ix",
"2l4RgRUa9e5",
"3JX4sDyDwElX",
"pYZuMd8u8ys4",
"ocAphxfGLlz",
"ARTI-w73XDm",
"h1tgp9aGXs",
"KkAFflc2KBx",
"uUw3NH4O7Yk",
"XGAPSWZyRnC",
"2l4RgRUa9e5",
"3JumZEX0Pk",
"vdveXOdNY5",
"3JumZEX0Pk",
"3JumZEX0Pk",
"2l4RgRUa9e5",
"vdveXOdNY5",
"3JumZEX0Pk",
"nips_2022_IlYS... |
nips_2022_dIUQ5haSOI | Relation-Constrained Decoding for Text Generation | The dominant paradigm for neural text generation nowadays is seq2seq learning with large-scale pretrained language models. However, it is usually difficult to manually constrain the generation process of these models. Prior studies have introduced Lexically Constrained Decoding (LCD) to ensure the presence of pre-specified words or phrases in the output. However, simply applying lexical constraints has no guarantee of the grammatical or semantic relations between words. Thus, more elaborate constraints are needed. To this end, we first propose a new constrained decoding scenario named Relation-Constrained Decoding (RCD), which requires the model's output to contain several given word pairs with respect to the given relations between them. For this scenario, we present a novel plug-and-play decoding algorithm named RElation-guided probability Surgery and bEam ALlocation (RESEAL), which can handle different categories of relations, e.g., syntactical relations or factual relations. Moreover, RESEAL can adaptively "reseal" the relations to form a high-quality sentence, which can be applied to the inference stage of any autoregressive text generation model. To evaluate our method, we first construct an RCD benchmark based on dependency relations from treebanks with annotated dependencies. Experimental results demonstrate that our approach can achieve better preservation of the input dependency relations compared to previous methods. To further illustrate the effectiveness of RESEAL, we apply our method to three downstream tasks: sentence summarization, fact-based text editing, and data-to-text generation. We observe an improvement in generation quality. The source code is available at https://github.com/CasparSwift/RESEAL. | Accept | The paper describes a model for text generation, based on target dependency relations that should be in the output. The word-level output probabilities are modified to increase the likelihood of generating words that match the target relation. Evaluation is performed on several datasets, formulating the task as text generation based on dependency relations.
The empirical gains are OK but not particularly large. What I find more compelling is the ability to control the output of the model, which is currently lacking in most approaches.
The reviewer scores straddle the decision boundary, and it was unfortunately not possible to get the reviewers to engage in a discussion, but the authors did a good job of addressing all initial comments/questions. | train | [
"-ZKlSYokhP",
"GdqK_1kdGG2",
"Yalb0vZsy7A",
"hSgXY5q8pPja",
"6LcqpiyIJ0x",
"Cy_8tZ2E44",
"Z2ocI_woalM",
"a9Ymn4yK3Rf",
"GZho3ImOGUh",
"lYqR0LrN364",
"1gE7hJE7VJg",
"IZYnIEVJgfI",
"mofdM99Lvk_",
"2kmjN6Byaiz",
"szJ7d0e99Sm",
"Rgv-zMfw3nP",
"OjxYfxOZGao"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We really appreciate all the reviewers for their efforts in reviewing our paper. We believe that we have responded to most of the concerns below. Since the discussion deadline is approaching, here we'd like to briefly summarize the updates we have made to the revised version of the paper:\n\n- We have clarified t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"nips_2022_dIUQ5haSOI",
"OjxYfxOZGao",
"nips_2022_dIUQ5haSOI",
"OjxYfxOZGao",
"Rgv-zMfw3nP",
"nips_2022_dIUQ5haSOI",
"Rgv-zMfw3nP",
"Rgv-zMfw3nP",
"Rgv-zMfw3nP",
"2kmjN6Byaiz",
"OjxYfxOZGao",
"OjxYfxOZGao",
"szJ7d0e99Sm",
"nips_2022_dIUQ5haSOI",
"nips_2022_dIUQ5haSOI",
"nips_2022_dIUQ5... |
nips_2022_hyc27bDixNR | Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation | Few-shot class-incremental learning (FSCIL) is designed to incrementally recognize novel classes with only few training samples after the (pre-)training on base classes with sufficient samples, which focuses on both base-class performance and novel-class generalization. A well known modification to the base-class training is to apply a margin to the base-class classification. However, a dilemma exists that we can hardly achieve both good base-class performance and novel-class generalization simultaneously by applying the margin during the base-class training, which is still under explored. In this paper, we study the cause of such dilemma for FSCIL. We first interpret this dilemma as a class-level overfitting (CO) problem from the aspect of pattern learning, and then find its cause lies in the easily-satisfied constraint of learning margin-based patterns. Based on the analysis, we propose a novel margin-based FSCIL method to mitigate the CO problem by providing the pattern learning process with extra constraint from the margin-based patterns themselves. Extensive experiments on CIFAR100, Caltech-UCSD Birds-200-2011 (CUB200), and miniImageNet demonstrate that the proposed method effectively mitigates the CO problem and achieves state-of-the-art performance. | Accept | This work studied few-shot class-incremental learning in margin-based classification. It presented a deeper analysis of the dilemma between base-class and novel-class performance, from the perspective of positive and negative patterns corresponding to positive and negative margins. Although this dilemma had been observed and analyzed in some previous works, the analysis in this work is deeper and novel. The provided method is inspired by the analysis. Although it is simple, it is reasonable and effective, as verified in the experiments. The authors also provide convincing responses to most concerns.
In summary, I think this work is well motivated and well written. It is a professional piece of work and could be accepted to NeurIPS. | train | [
"Fmxt1UeJXVT",
"GR31jK81Ulv",
"CL7YUAV-Jak",
"e2_a-WLTRDx",
"7YgcTK6hBeY",
"pG6U43rH0rb",
"pa9SPLtWJx",
"6yRPDVcBmyt",
"C1ewEXcbgRg",
"Eln98Q3D1oM",
"y_gtP-LhC3L",
"x__z3CGl4xd",
"WD35hHxuo3J",
"xcXJKxRrtce"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for reading our response! May I know if our response have addressed your questions? Please feel free to let us know if you have any question. We are very much looking forward to having the opportunity to discuss with you.",
" Thank you very much for reading our response! May I know if our re... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
"x__z3CGl4xd",
"xcXJKxRrtce",
"pG6U43rH0rb",
"7YgcTK6hBeY",
"Eln98Q3D1oM",
"pa9SPLtWJx",
"xcXJKxRrtce",
"C1ewEXcbgRg",
"Eln98Q3D1oM",
"WD35hHxuo3J",
"x__z3CGl4xd",
"nips_2022_hyc27bDixNR",
"nips_2022_hyc27bDixNR",
"nips_2022_hyc27bDixNR"
] |
nips_2022_Setj8nJ-YB8 | Zeroth-Order Negative Curvature Finding: Escaping Saddle Points without Gradients | We consider escaping saddle points of nonconvex problems where only the function evaluations can be accessed. Although a variety of works have been proposed, the majority of them require either second or first-order information, and only a few of them have exploited zeroth-order methods, particularly the technique of negative curvature finding with zeroth-order methods which has been proven to be the most efficient method for escaping saddle points. To fill this gap, in this paper, we propose two zeroth-order negative curvature finding frameworks that can replace Hessian-vector product computations without increasing the iteration complexity. We apply the proposed frameworks to ZO-GD, ZO-SGD, ZO-SCSG, ZO-SPIDER and prove that these ZO algorithms can converge to $(\epsilon,\delta)$-approximate second-order stationary points with less query complexity compared with prior zeroth-order works for finding local minima. | Accept | This paper designs new algorithms for finding second order stationary points using only function value queries (0th order information). The main novelty is in designing two approaches for negative curvature finding. The new subroutines can be used in a wide range of algorithms for finding second order stationary points (most using first order information) and result in new 0th order algorithms with reasonable guarantees. The reviewers had some concerns but most are addressed in the response. In general the reviewers agree that this is a solid contribution to nonconvex optimization. | val | [
"6OUK9F20sbP",
"SQOcKIb-xzi",
"RxeUxU3kCVk",
"C7DYHQnGL0W",
"OG1B0EdZoAn",
"DCoNl-jm3gA9",
"DKWLtMkkfhm",
"8f5c7AtqXGf",
"Ejb36Zj9hO0",
"_kbIiP8150",
"4L3td_u4RPN",
"aeWa9D645tN",
"jT8eA9ylkr8",
"LjReTs9H3M5",
"f8_yqDtieT_",
"fdoOzmEenrB",
"UvLwbREd76n",
"cQ1Hudikrp"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for reading our response and thanks for raising the score.",
" Thanks again for your further feedback and thanks for raising the score!",
" Thank you for the detailed replies to my questions and comments. I especially appreciate the detailed explanation on iteration complexity and the addition of empir... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"C7DYHQnGL0W",
"RxeUxU3kCVk",
"_kbIiP8150",
"jT8eA9ylkr8",
"DCoNl-jm3gA9",
"DKWLtMkkfhm",
"4L3td_u4RPN",
"nips_2022_Setj8nJ-YB8",
"_kbIiP8150",
"f8_yqDtieT_",
"aeWa9D645tN",
"cQ1Hudikrp",
"UvLwbREd76n",
"fdoOzmEenrB",
"nips_2022_Setj8nJ-YB8",
"nips_2022_Setj8nJ-YB8",
"nips_2022_Setj8... |
nips_2022_JRAlT8ZstmH | Latency-aware Spatial-wise Dynamic Networks | Spatial-wise dynamic convolution has become a promising approach to improving the inference efficiency of deep networks. By allocating more computation to the most informative pixels, such an adaptive inference paradigm reduces the spatial redundancy in image features and saves a considerable amount of unnecessary computation. However, the theoretical efficiency achieved by previous methods can hardly translate into a realistic speedup, especially on the multi-core processors (e.g. GPUs). The key challenge is that the existing literature has only focused on designing algorithms with minimal computation, ignoring the fact that the practical latency can also be influenced by scheduling strategies and hardware properties. To bridge the gap between theoretical computation and practical efficiency, we propose a latency-aware spatial-wise dynamic network (LASNet), which performs coarse-grained spatially adaptive inference under the guidance of a novel latency prediction model. The latency prediction model can efficiently estimate the inference latency of dynamic networks by simultaneously considering algorithms, scheduling strategies, and hardware properties. We use the latency predictor to guide both the algorithm design and the scheduling optimization on various hardware platforms. Experiments on image classification, object detection and instance segmentation demonstrate that the proposed framework significantly improves the practical inference efficiency of deep networks. For example, the average latency of a ResNet-101 on the ImageNet validation set could be reduced by 36% and 46% on a server GPU (Nvidia Tesla-V100) and an edge device (Nvidia Jetson TX2 GPU) respectively without sacrificing the accuracy. Code is available at https://github.com/LeapLabTHU/LASNet. | Accept | The paper proposes latency-aware spatial-wise dynamic neural networks under the guidance of a latency prediction model. Reviewers arrived at a consensus to accept the paper. | train | [
"gR79srqyTk",
"mkiq_dx3bGh",
"EZfOuGqbanv",
"iNvtjdMj8ra",
"Xl3cgaDHkHY",
"_znD4W2JmHq",
"mA_3B-eQ1oo",
"ZeRzlERlxl4",
"iA9EoCGz0K",
"pGhm1vB_uEY",
"7fWrBv6caS",
"TtnqGqrV1nv",
"oq7oqhMKml",
"MgtixVVmfpp",
"SqzOlA5e9g",
"L1DNLhxnDdK",
"yGC04AvrBF4",
"E5r8zz_pc-y",
"OG1T3DO0pWf"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nWe wanted to reach out to see if our most recent reply and paper revisions have addressed the concerns. With only ~17 hours left in the rebuttal period, we were hoping that the reviewer could confirm that the revisions have addressed the concerns about the problem statement, further clarify the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"OG1T3DO0pWf",
"EZfOuGqbanv",
"L1DNLhxnDdK",
"ZeRzlERlxl4",
"_znD4W2JmHq",
"oq7oqhMKml",
"ZeRzlERlxl4",
"TtnqGqrV1nv",
"nips_2022_JRAlT8ZstmH",
"OG1T3DO0pWf",
"OG1T3DO0pWf",
"OG1T3DO0pWf",
"E5r8zz_pc-y",
"yGC04AvrBF4",
"yGC04AvrBF4",
"yGC04AvrBF4",
"nips_2022_JRAlT8ZstmH",
"nips_20... |
nips_2022_sexfswCc7B | Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation | While large-scale neural language models, such as GPT2 and BART,
have achieved impressive results on various text generation tasks, they tend to get stuck in undesirable sentence-level loops with maximization-based decoding algorithms (\textit{e.g.}, greedy search). This phenomenon is counter-intuitive since there are few consecutive sentence-level repetitions in the human corpus (e.g., 0.02\% in Wikitext-103). To investigate the underlying reasons for generating consecutive sentence-level repetitions, we study the relationship between the probability of repetitive tokens and
their previous repetitions in context. Through our quantitative experiments, we find that 1) Models have a preference to repeat the previous sentence; 2) The sentence-level repetitions have a \textit{self-reinforcement effect}: the more times a sentence is repeated in the context, the higher the probability of continuing to generate that sentence; 3) The sentences with higher initial probabilities usually have a stronger self-reinforcement effect. Motivated by our findings, we propose a simple and effective training method \textbf{DITTO} (Pseu\underline{D}o-Repet\underline{IT}ion Penaliza\underline{T}i\underline{O}n), where the model learns to penalize probabilities of sentence-level repetitions from synthetic repetitive data. Although our method is motivated by mitigating repetitions, our experiments show that DITTO not only mitigates the repetition issue without sacrificing perplexity, but also achieves better generation quality. Extensive experiments on open-ended text generation (Wikitext-103) and text summarization (CNN/DailyMail) demonstrate the generality and effectiveness of our method. | Accept | This paper investigates the source of repetition in text generation from a language model and presents a training method to mitigate this problem. Their experiments show the proposed method not only reduces repetition, but also improves generation quality. I think this is a good paper. All reviewers agreed with me so I recommend acceptance to NeurIPS. | train | [
"nIGNs_iSwAd",
"hXN3tX2Ynat",
"ItGMbRDqQUL",
"CAGIFK8EsKj",
"xLIJRg4uXfc",
"ZjFAghnwJvH",
"szhBPP31hZ",
"2kas10y2IOs",
"IAquLGyuuF3"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Reviewers have addressed all questions I had. I have increased the rating.",
" We thank the reviewers for their insightful and constructive reviews. We first provide responses to several points raised by multiple reviewers. Responses to individual reviewers are provided below. **The Appendix that contains addit... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"CAGIFK8EsKj",
"nips_2022_sexfswCc7B",
"szhBPP31hZ",
"xLIJRg4uXfc",
"2kas10y2IOs",
"IAquLGyuuF3",
"nips_2022_sexfswCc7B",
"nips_2022_sexfswCc7B",
"nips_2022_sexfswCc7B"
] |
nips_2022_Y6A4-R_Hgsw | Toward a realistic model of speech processing in the brain with self-supervised learning | Several deep neural networks have recently been shown to generate activations similar to those of the brain in response to the same input. These algorithms, however, remain largely implausible: they require (1) extraordinarily large amounts of data, (2) unobtainable supervised labels, (3) textual rather than raw sensory input, and / or (4) implausibly large memory (e.g. thousands of contextual words). These elements highlight the need to identify algorithms that, under these limitations, would suffice to account for both behavioral and brain responses. Focusing on speech processing, we here hypothesize that self-supervised algorithms trained on the raw waveform constitute a promising candidate. Specifically, we compare a recent self-supervised model, wav2vec 2.0, to the brain activity of 412 English, French, and Mandarin individuals recorded with functional Magnetic Resonance Imaging (fMRI), while they listened to approximately one hour of audio books. First, we show that this algorithm learns brain-like representations with as little as 600 hours of unlabelled speech -- a quantity comparable to what infants can be exposed to during language acquisition. Second, its functional hierarchy aligns with the cortical hierarchy of speech processing. Third, different training regimes reveal a functional specialization akin to the cortex: wav2vec 2.0 learns sound-generic, speech-specific and language-specific representations similar to those of the prefrontal and temporal cortices. Fourth, we confirm the similarity of this specialization with the behavior of 386 additional participants. These elements, resulting from the largest neuroimaging benchmark to date, show how self-supervised learning can account for a rich organization of speech processing in the brain, and thus delineate a path to identify the laws of language acquisition which shape the human brain. | Accept | This paper compares learned self-supervised speech representations to brain fMRI representations for more than 400 subjects speaking English, French, and Mandarin. Through the rebuttal period, the authors and reviewers interacted extensively to discuss the contribution, results, and analysis provided in the paper. Most of the reviewers' concerns have been addressed by improvements to the analysis and presentation of the paper.
One main concern was a concurrent research work that appeared on arxiv about one week after the NeurIPS submission deadline. The novelty of this paper should not be impacted by that other work, given the timing of both papers. | train | [
"J3Zq1tZktp",
"Vy8rBhMEgVf",
"i_Bce2-0BMy",
"eu-4WyxfUuS",
"Ky7cHltP8dG",
"4Xmysa_vpG",
"fX52mL4Rqcw",
"Nf_1GjZCFmG",
"pYX72_a132T",
"ayqpTPN2Wy4",
"nGGiIj1CSz",
"s8Emrgt3HY",
"sbwaY_CFz5",
"7C2odCAKXdn",
"y9mJfbXz522",
"vMPCQiMFbiT",
"j_CycJwcqXK",
"Vi_oPIs20Si",
"0bmDGO6cagc",
... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Thank you for your response. I am impressed by the new analyses and thorough responses made by the authors and would like to increase my score by 1 more point. However, I urge the authors to include some of the discussion we had here in the paper especially regarding the perils of over-interpretation like in my s... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
2
] | [
"Vy8rBhMEgVf",
"i_Bce2-0BMy",
"xZaau6dfjNH",
"Ky7cHltP8dG",
"pYX72_a132T",
"pYX72_a132T",
"Nf_1GjZCFmG",
"pYX72_a132T",
"ayqpTPN2Wy4",
"xZaau6dfjNH",
"xZaau6dfjNH",
"xZaau6dfjNH",
"IYqZ2Q_T__w",
"HAI7Bi4lwH",
"7f3bddenKKb",
"j_CycJwcqXK",
"Vi_oPIs20Si",
"0bmDGO6cagc",
"xZaau6dfjN... |
nips_2022_DTD9BRDWtkn | Multi-layer State Evolution Under Random Convolutional Design | Signal recovery under generative neural network priors has emerged as a promising direction in statistical inference and computational imaging. Theoretical analysis of reconstruction algorithms under generative priors is, however, challenging. For generative priors with fully connected layers and Gaussian i.i.d. weights, this was achieved by the multi-layer approximate message passing (ML-AMP) algorithm via a rigorous state evolution. However, practical generative priors are typically convolutional, allowing for computational benefits and inductive biases, and so the Gaussian i.i.d. weight assumption is very limiting. In this paper, we overcome this limitation and establish the state evolution of ML-AMP for random convolutional layers. We prove in particular that random convolutional layers belong to the same universality class as Gaussian matrices. Our proof technique is of an independent interest as it establishes a mapping between convolutional matrices and spatially coupled sensing matrices used in coding theory. | Accept | This paper considers finding an input vector from multi-layer noisy measurements. This can alternatively be thought of as finding the latent code of generative models. The authors analyze the state evolution of a multi-layer approximate message passing algorithm. The main technical idea is relating random convolutional layers to Gaussian ones using permutation matrices and then utilizing a connection with spatially coupled matrices in coding. The authors also provide numerical evidence. Overall all the reviewers were positive but did raise some concerns about the model not being realistic since the convolutional layers are not trained. I agree with the assessment of the reviewers. I think the paper is interesting and the connections and theoretical results are nice. Therefore I recommend acceptance. However, I do have some concerns about the model studied. I also have some concerns about the theoretical analysis as it is sometimes difficult to differentiate what is fully rigorous and what is based on statistical physics conjectures. In your final manuscript please clarify which parts are fully rigorous (perhaps all) and which parts rely on conjectures that have not been fully proved.
| train | [
"don2tz17IAI",
"VCe62k-m-K",
"L1f-gF-5U3L",
"ojiiK5o5Hms",
"_ztjKd1Fl1O",
"AznkeTgold3",
"j3lMC3IWERC",
"2_rVZPIxhP-",
"dl3eETxOugb",
"DXkRxlQbhM",
"aA3pvkp3dmL",
"Uoa6C9u-XaI",
"AA-0lmAlSY9",
"2zPFigHuL_v"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have increased my score in terms of the fairness between the other papers I reviewed. ",
" Concerning the insights our work can bring for trained networks: we will add our thoughts in the discussion of the paper. There are several trained settings for which the same state evolution could apply and that we are... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
1,
4
] | [
"_ztjKd1Fl1O",
"ojiiK5o5Hms",
"dl3eETxOugb",
"DXkRxlQbhM",
"j3lMC3IWERC",
"nips_2022_DTD9BRDWtkn",
"2zPFigHuL_v",
"AA-0lmAlSY9",
"Uoa6C9u-XaI",
"aA3pvkp3dmL",
"nips_2022_DTD9BRDWtkn",
"nips_2022_DTD9BRDWtkn",
"nips_2022_DTD9BRDWtkn",
"nips_2022_DTD9BRDWtkn"
] |
nips_2022_UwzrP-B38jK | Learning Robust Rule Representations for Abstract Reasoning via Internal Inferences | Abstract reasoning, as one of the hallmarks of human intelligence, involves collecting information, identifying abstract rules, and applying the rules to solve new problems. Although neural networks have achieved human-level performances in several tasks, the abstract reasoning techniques still far lag behind due to the complexity of learning and applying the logic rules, especially in an unsupervised manner. In this work, we propose a novel framework, ARII, that learns rule representations for Abstract Reasoning via Internal Inferences. The key idea is to repeatedly apply a rule to different instances in hope of having a comprehensive understanding (i.e., representations) of the rule. Specifically, ARII consists of a rule encoder, a reasoner, and an internal referrer. Based on the representations produced by the rule encoder, the reasoner draws the conclusion while the referrer performs internal inferences to regularize rule representations to be robust and generalizable. We evaluate ARII on two benchmark datasets, including PGM and I-RAVEN. We observe that ARII achieves new state-of-the-art records on the majority of the reasoning tasks, including most of the generalization tests in PGM. Our codes are available at https://github.com/Zhangwenbo0324/ARII. | Accept | I thank the authors for their submission and active participation in the discussions. The paper presents a method for rule representation learning that can be transferred across tasks. All reviewers unanimously agree that this paper's strengths outweigh its weaknesses. In particular, reviewers found the method to be well motivated [fEdf], general [fEdf], novel [ky2S], tackling an interesting task [bvvT], achieving strong empirical results [4Dou] and the paper to be well written [bvvT,ky2S,xDma]. Thus, I am recommending acceptance of the paper and encourage the authors to further improve their paper based on the reviewer feedback. | test | [
"sqNW9tB8nZP",
"K6JRYNO3Ufa",
"zJLHmA8Li7H",
"0wRG-ss8IV_",
"MXqeButhyJN",
"SGr1fD18KUd",
"EWsMJ9Z25AC",
"BHHSEshkXlD",
"Fjxk4XF5Yp3",
"pDPJxw5633d",
"NOBjU2ooHIe",
"OqtB3WfHHAo",
"UNs35GXuSuq",
"MTBUAjzQmZJ",
"RTYPNrJAed",
"nlgju2MKWd",
"m6PLFniDhEh",
"5_7g21t617c",
"2ZHUe_4Zczi... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thanks again for your insightful reviews. We are grateful to see that our response addresses your concerns, and appreciate your positive feedback. \n\nWe are glad to provide additional responses if you have any further questions.",
" Thanks for your insightful and positive feedback.\n\nSince relations are much ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
5,
5
] | [
"zJLHmA8Li7H",
"SGr1fD18KUd",
"Fjxk4XF5Yp3",
"MXqeButhyJN",
"EWsMJ9Z25AC",
"NOBjU2ooHIe",
"BHHSEshkXlD",
"pDPJxw5633d",
"2ZHUe_4Zczi",
"5_7g21t617c",
"m6PLFniDhEh",
"UNs35GXuSuq",
"nlgju2MKWd",
"RTYPNrJAed",
"nips_2022_UwzrP-B38jK",
"nips_2022_UwzrP-B38jK",
"nips_2022_UwzrP-B38jK",
... |
nips_2022_8N1NDRGQSQ | CalFAT: Calibrated Federated Adversarial Training with Label Skewness | Recent studies have shown that, like traditional machine learning, federated learning (FL) is also vulnerable to adversarial attacks.
To improve the adversarial robustness of FL, federated adversarial training (FAT) methods have been proposed to apply adversarial training locally before global aggregation. Although these methods demonstrate promising results on independent identically distributed (IID) data, they suffer from training instability on non-IID data with label skewness, resulting in degraded natural accuracy. This tends to hinder the application of FAT in real-world applications where the label distribution across the clients is often skewed. In this paper, we study the problem of FAT under label skewness, and reveal one root cause of the training instability and natural accuracy degradation issues: skewed labels lead to non-identical class probabilities and heterogeneous local models. We then propose a Calibrated FAT (CalFAT) approach to tackle the instability issue by calibrating the logits adaptively to balance the classes. We show both theoretically and empirically that the optimization of CalFAT leads to homogeneous local models across the clients and better convergence points. | Accept | This submission aims to ensure adversarial robustness in federated learning when label skewness exists among different local agents. The main idea of the proposed solution is to calibrate the logits to balance the predicted marginal label probabilities. Most of the reviewers found the topic studied in this work relevant, important and timely. The authors have also successfully addressed the concerns from the reviewers during the rebuttal. Hence I recommend acceptance. | test | [
"f6HVuX9kha",
"plby1aZTqjb",
"kiRImXYHlIs",
"96Ze1xoFfo",
"QjZkfP2JfnC",
"wZw4gN6WuFg",
"bkevMw1ygTT",
"Rlziwi0m_EC",
"fN26kx5K2W0",
"gjQ1wHuufWG",
"_W1PL_kr5j7",
"AdNLAT3aiTT",
"n71A4y5mrO",
"uRaNWE01BPU",
"28fbP8GRvF6",
"c4OK30GjTr2",
"n-z2bWUE19I"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for further clarification and additional experiments, which solved most of my concerns. I would like to improve my score.",
" We would like to gently remind the reviewer of any follow-up clarifications or questions that we can do our best to address in the remaining limited time. We hope our... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"n-z2bWUE19I",
"n-z2bWUE19I",
"c4OK30GjTr2",
"QjZkfP2JfnC",
"28fbP8GRvF6",
"c4OK30GjTr2",
"n-z2bWUE19I",
"nips_2022_8N1NDRGQSQ",
"n-z2bWUE19I",
"n-z2bWUE19I",
"c4OK30GjTr2",
"c4OK30GjTr2",
"c4OK30GjTr2",
"28fbP8GRvF6",
"nips_2022_8N1NDRGQSQ",
"nips_2022_8N1NDRGQSQ",
"nips_2022_8N1NDR... |
nips_2022_02YXg0OZdG | Eliciting Thinking Hierarchy without a Prior | When we use the wisdom of the crowds, we usually rank the answers according to their popularity, especially when we cannot verify the answers. However, this can be very dangerous when the majority make systematic mistakes. A fundamental question arises: can we build a hierarchy among the answers without any prior where the higher-ranking answers, which may not be supported by the majority, are from more sophisticated people? To address the question, we propose 1) a novel model to describe people's thinking hierarchy; 2) two algorithms to learn the thinking hierarchy without any prior; 3) a novel open-response based crowdsourcing approach based on the above theoretic framework. In addition to theoretic justifications, we conduct four empirical crowdsourcing studies and show that a) the accuracy of the top-ranking answers learned by our approach is much higher than that of plurality voting (In one question, the plurality answer is supported by 74 respondents but the correct answer is only supported by 3 respondents. Our approach ranks the correct answer the highest without any prior); b) our model has a high goodness-of-fit, especially for the questions where our top-ranking answer is correct. To the best of our knowledge, we are the first to propose a thinking hierarchy model with empirical validations in the general problem-solving scenarios; and the first to propose a practical open-response-based crowdsourcing approach that beats plurality voting without any prior. | Accept | This work proposes the framework to elicit people's "thinking hierarchy" that helps improve the wisdom of the crowd even if the majority is wrong. The reviewers overall appreciate the main idea of the work and believe it makes a nice contribution to the literature. There have been some questions/concerns raised about the efficiency of the algorithm and the model limitations, to which the authors have provided reasonable responses. We encourage the authors to incorporate those responses (and other reviewer comments) into the final version of the paper. | train | [
"-5lhHqjCO6",
"StCCchwU2JB",
"a2V2g6WJoK4",
"EYwFGcYPErc",
"h5jqDva0-2O",
"wOhodG-OBa",
"XPWZX3ai2J",
"9I30YdnJ7_O",
"0y7DF4ymDUt",
"YEGu13yzJBx",
"mQjlDje-Ip1",
"KcUdT6ehdSj",
"HPh0leCp0oD",
"gID0Vxif1FL"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I suggest the authors put the running time analysis in the main body of the paper, probably even the dynamic programming-based algorithm.\n\nI think the exponential running time in the number of possible answers is not very practical and limits the application of the model proposed in the paper with the algorithm... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3,
4
] | [
"StCCchwU2JB",
"a2V2g6WJoK4",
"EYwFGcYPErc",
"9I30YdnJ7_O",
"wOhodG-OBa",
"XPWZX3ai2J",
"gID0Vxif1FL",
"HPh0leCp0oD",
"KcUdT6ehdSj",
"mQjlDje-Ip1",
"nips_2022_02YXg0OZdG",
"nips_2022_02YXg0OZdG",
"nips_2022_02YXg0OZdG",
"nips_2022_02YXg0OZdG"
] |
nips_2022_vphSm8QmLFm | GBA: A Tuning-free Approach to Switch between Synchronous and Asynchronous Training for Recommendation Models | High-concurrency asynchronous training upon parameter server (PS) architecture and high-performance synchronous training upon all-reduce (AR) architecture are the most commonly deployed distributed training modes for recommendation models. Although synchronous AR training is designed to have higher training efficiency, asynchronous PS training would be a better choice for training speed when there are stragglers (slow workers) in the shared cluster, especially under limited computing resources. An ideal way to take full advantage of these two training modes is to switch between them upon the cluster status. However, switching training modes often requires tuning hyper-parameters, which is extremely time- and resource-consuming. We find two obstacles to a tuning-free approach: the different distribution of the gradient values and the stale gradients from the stragglers. This paper proposes Global Batch gradients Aggregation (GBA) over PS, which aggregates and applies gradients with the same global batch size as the synchronous training. A token-control process is implemented to assemble the gradients and decay the gradients with severe staleness. We provide the convergence analysis to reveal that GBA has comparable convergence properties with the synchronous training, and demonstrate the robustness of GBA the recommendation models against the gradient staleness. Experiments on three industrial-scale recommendation tasks show that GBA is an effective tuning-free approach for switching. Compared to the state-of-the-art derived asynchronous training, GBA achieves up to 0.2% improvement on the AUC metric, which is significant for the recommendation models. Meanwhile, under the strained hardware resource, GBA speeds up at least 2.4x compared to synchronous training. | Accept | The paper identifies and illustrates a practically relevant challenge for training of deep learning-based recommender systems on distributed architectures: switching between synchronous and asynchronous training modes. The proposed mechanism called global batch gradient aggregation (GBA) is simple but mitigates the need to do hyper-parameter tuning when switching which was identified as the critical performance bottleneck. Experiments were conducted on industry-scale recommendation tasks and show that the proposed method is effective and improves in accuracy over fully asynchronous training methods and in speed over synchronous training schemes.
The prevailing opinion among the reviewers was that the paper is well written and well executed, and that it addresses a practically relevant, not previously studied problem. The identified challenge is clearly outlined and the proposed solution is well motivated, analyzed with ablation studies and shown to be effective. I advocate acceptance because it is a well-executed practical piece of work.
For a potential camera-ready version the authors should anticipate the questions of the reviewers and clarify them in the manuscript. They should also be more explicit about the manual steps and the parameter choices required when implementing the method so the limitations are clearer. Further, it would be appreciated if the authors would make an effort beyond acceptance of the paper to push the (currently proprietary) optimizer into an open-source framework to make it available to the broader community, as promised in the checklist.
| train | [
"7nxfwetkVWE",
"dysZe_zxyMT",
"cFQbpBkWe4n",
"AP738OwxNqE",
"NZ-mj_M92v",
"Zd4JDuyzfWO",
"j4kLHL7su2p",
"PW-EByxcA13",
"6xJHTFesvss"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We have done many experiments and studies **to understand this phenomenon** in terms of different hyper-parameters. As we said in our last response, it is a common phenomenon encountered by our researchers / developers, which remains an open question in the community. And a comprehensive theoretical analysis of t... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"dysZe_zxyMT",
"AP738OwxNqE",
"PW-EByxcA13",
"j4kLHL7su2p",
"nips_2022_vphSm8QmLFm",
"6xJHTFesvss",
"nips_2022_vphSm8QmLFm",
"nips_2022_vphSm8QmLFm",
"nips_2022_vphSm8QmLFm"
] |
nips_2022_L0OKHqYe_FU | Online Neural Sequence Detection with Hierarchical Dirichlet Point Process | Neural sequence detection plays a vital role in neuroscience research. Recent impressive works utilize convolutive nonnegative matrix factorization and Neyman-Scott process to solve this problem. However, they still face two limitations. Firstly, they accommodate the entire dataset into memory and perform iterative updates of multiple passes, which can be inefficient when the dataset is large or grows frequently. Secondly, they rely on the prior knowledge of the number of sequence types, which can be impractical with data when the future situation is unknown. To tackle these limitations, we propose a hierarchical Dirichlet point process model for efficient neural sequence detection. Instead of computing the entire data, our model can sequentially detect sequences in an online unsupervised manner with Particle filters. Besides, the Dirichlet prior enables our model to automatically introduce new sequence types on the fly as needed, thus avoiding specifying the number of types in advance. We manifest these advantages on synthetic data and neural recordings from songbird higher vocal center and rodent hippocampus. | Accept | This paper describes a hierarchical Bayesian latent model to identify neural sequences from spike data. Especially in neuroscience, detection of patterns in neural sequences is an important computational problem, as the inferred patterns are useful for characterizing brain activity. The key problem is reminiscent of clustering, where individual spikes are associated with sequences.
The proposed model -- the Hierarchical Dirichlet Point Process (HDPP) -- consists of a Dirichlet nonhomogeneous Poisson process (DPP) prior for observed spikes and a Dirichlet Hawkes process (DHP) prior for the neural sequences generating those observed spikes.
Inference is done with sequential Monte Carlo, including a proposal mechanism for merging and pruning neural sequence categories/types that may have been incorrectly generated early during inference. A comparison of the method to two other top-down unsupervised methods (ConvNMF and PP-Seq) on synthetic data is provided.
While the description of the hierarchical model seems to be complete, the reviewers asked for clarifications about the motivations. During the rebuttal, the authors were able to address various questions about the experimental section and the inference procedure.
They were also able to include results of further experiments. As a result, the reviewers decided to raise their grades for the paper.
In light of the importance of the problem and the soundness of the methodology, I am inclined to suggest acceptance for this work.
| val | [
"bcmb-I61QEI",
"qyHRt_TiFaG",
"6a5QAsIWdG3",
"iaSRF5mrOa",
"8Lz7h8tzIrD",
"h2LJx6pJ9OI",
"XuYRv3yyAT",
"DkYrlQYnxz",
"lrBaq9V7FIP",
"4W95EG0R52",
"fEu130Htzv_",
"SV2HmJ1T74",
"fm-gIlBZ8TQ",
"NfAQtdf4O8"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for considering our responses and for revising the score! We will be happy to address any further questions or concerns about the work.",
" I thank the authors for conducting additional experiments and explanation. I think the paper has overall improved as a result of authors efforts to addr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"qyHRt_TiFaG",
"XuYRv3yyAT",
"iaSRF5mrOa",
"DkYrlQYnxz",
"h2LJx6pJ9OI",
"4W95EG0R52",
"fEu130Htzv_",
"SV2HmJ1T74",
"fm-gIlBZ8TQ",
"NfAQtdf4O8",
"nips_2022_L0OKHqYe_FU",
"nips_2022_L0OKHqYe_FU",
"nips_2022_L0OKHqYe_FU",
"nips_2022_L0OKHqYe_FU"
] |
nips_2022_i0FnLiIRj6U | Iterative Scene Graph Generation | The task of scene graph generation entails identifying object entities and their corresponding interaction predicates in a given image (or video). Due to the combinatorially large solution space, existing approaches to scene graph generation assume certain factorization of the joint distribution to make the estimation feasible (e.g., assuming that objects are conditionally independent of predicate predictions). However, this fixed factorization is not ideal under all scenarios (e.g., for images where an object entailed in interaction is small and not discernible on its own). In this work, we propose a novel framework for scene graph generation that addresses this limitation, as well as introduces dynamic conditioning on the image, using message passing in a Markov Random Field. This is implemented as an iterative refinement procedure wherein each modification is conditioned on the graph generated in the previous iteration. This conditioning across refinement steps allows joint reasoning over entities and relations. This framework is realized via a novel and end-to-end trainable transformer-based architecture. In addition, the proposed framework can improve existing approach performance. Through extensive experiments on Visual Genome and Action Genome benchmark datasets we show improved performance on the scene graph generation. | Accept | The authors propose a new approach for end-to-end training of predicting scene graphs from images (different from the traditional two-stage approach.) The key observation that the fixed factorization approach can be suboptimal due to error compounding is reasonable and is supported by the experiment results. The proposed solution with iterative refinement is reasonable and the design choices of the method are sound. Evaluation is comprehensive overall and the result is convincing.
Most of the key concerns raised by reviewers Vh2W and 5s37, who gave low scores (3 & 4), seem to be well addressed by the authors, but the reviewers were not responsive or engaged in the follow-up discussion, so there was no score update either. However, the other reviewer, h8U6, expressed that the authors addressed the concerns well, and after reading the paper I agree with this based on my personal assessment. Therefore, even though two reviewers gave rather low scores of 3 and 4, I cannot weigh those review scores much and rather rely more on the other reviewer's score of 6 and my own assessment. So, I recommend accepting this paper even though the average score is rather lower than usual. | train | [
"6PM6tMEeVQ3",
"3WOzgdhgl_",
"7NniQZoTMHz",
"Z3u49Ju2iD",
"Gzl4gPop8rS",
"Zhh3HvpfK-S",
"5-7_XAscQ3n",
"4t-bMd3Hlan",
"GX2JqQy0los",
"cixcvBtLei",
"ITGhrDfGp27",
"MhCB75CxxnW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for reading our rebuttal.\n\nOverall it is difficult to objectively argue about novelty. However, we would like to highlight that, to the best of our knowledge, the high-level idea of iterative scene graph generation is novel and has not appeared elsewhere. The unique benefit of such formulation is its ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"7NniQZoTMHz",
"7NniQZoTMHz",
"5-7_XAscQ3n",
"Gzl4gPop8rS",
"Zhh3HvpfK-S",
"ITGhrDfGp27",
"MhCB75CxxnW",
"GX2JqQy0los",
"cixcvBtLei",
"nips_2022_i0FnLiIRj6U",
"nips_2022_i0FnLiIRj6U",
"nips_2022_i0FnLiIRj6U"
] |
nips_2022_4btNeXKFAQ | Low-rank Optimal Transport: Approximation, Statistics and Debiasing | The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend which can be observed when OT is used to disambiguate datasets in applications (e.g. single-cell genomics) or used to improve more complex methods (e.g. balanced attention in transformers or self-supervised learning). To scale to more challenging problems, there is a growing consensus that OT requires solvers that can operate on millions, not thousands, of points. The low-rank optimal transport (LOT) approach advocated in \cite{scetbon2021lowrank} holds several promises in that regard, and was shown to complement more established entropic regularization approaches, being able to insert itself in more complex pipelines, such as quadratic OT. LOT restricts the search for low-cost couplings to those that have a low-nonnegative rank, yielding linear time algorithms in cases of interest. However, these promises can only be fulfilled if the LOT approach is seen as a legitimate contender to entropic regularization when compared on properties of interest, where the scorecard typically includes theoretical properties (statistical complexity and relation to other methods) or practical aspects (debiasing, hyperparameter tuning, initialization). We target each of these areas in this paper in order to cement the impact of low-rank approaches in computational OT. | Accept | Overall: The paper focuses on advancing our knowledge, understanding and practical ability to leverage low-rank factorizations in optimal transport.
Reviews: The paper received four reviews: 4 accepts (all confident). It seems that there are several reviewers who will champion the paper for publication. The reviewers found the paper clear, with a clean presentation. The findings are interesting. The authors have provided extensive answers to reviewers' comments, answering most of them successfully.
After rebuttal: A subset of the reviewers reached a consensus that the paper should be accepted.
Confidence of reviews: Overall, the reviewers are confident. We will put more weight on the reviews of those who engaged in the rebuttal discussion period. | train | [
"D63rmfOOMT",
"L6raeKsMQSw",
"ZzjlhR8TM4g",
"CoRsCEOvwXY",
"sNTMWKWaeqI",
"_madUWpFsik",
"1lX4wL7BZVf",
"Q4QfSYcgBwU",
"bylzDOqdX8P",
"-dmg1DeKZYv",
"CSGcaIa5UAQ",
"5NeA860eDt",
"Jxho_pci88",
"NNhoScgngon",
"iy1NtmBpI-"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer:\n\nMany thanks for your appreciation and supporting comments. Thank you very much for your suggestions. What follows is a simple and fairly open-ended response on the items you have raised:\n\nOn item (1): this is indeed an important subject. One might hope to start from the simplest possible cases... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"ZzjlhR8TM4g",
"CoRsCEOvwXY",
"_madUWpFsik",
"-dmg1DeKZYv",
"iy1NtmBpI-",
"1lX4wL7BZVf",
"NNhoScgngon",
"bylzDOqdX8P",
"Jxho_pci88",
"5NeA860eDt",
"nips_2022_4btNeXKFAQ",
"nips_2022_4btNeXKFAQ",
"nips_2022_4btNeXKFAQ",
"nips_2022_4btNeXKFAQ",
"nips_2022_4btNeXKFAQ"
] |
nips_2022_ZCGDqdK0zG | Fast Distance Oracles for Any Symmetric Norm | In the \emph{Distance Oracle} problem, the goal is to preprocess $n$ vectors $x_1, x_2, \cdots, x_n$ in a $d$-dimensional normed space $(\mathbb{X}^d, \| \cdot \|_l)$ into a cheap data structure, so that given a query vector $q \in \mathbb{X}^d$, all distances $\| q - x_i \|_l$ to the data points $\{x_i\}_{i\in [n]}$ can be quickly approximated (faster than the trivial $\sim nd$ query time). This primitive is a basic subroutine in machine learning, data mining and similarity search applications. In the case of $\ell_p$ norms, the problem is well understood, and optimal data structures are known for most values of $p$.
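For orientation, here is the trivial ~n·d-time query that a distance oracle of this kind is meant to beat, together with the top-k norm (one member of the symmetric-norm family discussed below). This is an illustrative numpy sketch with invented names, not the paper's data structure.

```python
import numpy as np

def topk_norm(v, k):
    # Top-k norm: sum of the k largest absolute coordinates,
    # a standard example of a symmetric norm.
    return np.sort(np.abs(v))[-k:].sum()

def naive_query(q, X, norm):
    # The trivial baseline: evaluate every distance exactly in ~n*d time.
    # A distance oracle preprocesses X so that these values can be
    # (1 +/- eps)-approximated faster than this loop.
    return np.array([norm(q - x) for x in X])

X = np.random.randn(1000, 64)   # n = 1000 points in d = 64 dimensions
q = np.random.randn(64)
d_l2 = naive_query(q, X, np.linalg.norm)             # l2 distances
d_tk = naive_query(q, X, lambda v: topk_norm(v, 5))  # top-5-norm distances
```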
Our main contribution is a fast $(1\pm \varepsilon)$ distance oracle for \emph{any symmetric} norm $\|\cdot\|_l$. This class includes $\ell_p$ norms and Orlicz norms as special cases, as well as other norms used in practice, e.g. top-$k$ norms, max-mixture and sum-mixture of $\ell_p$ norms, small-support norms and the box-norm. We propose a novel data structure with $\tilde{O}(n (d + \mathrm{mmc}(l)^2 ) )$ preprocessing time and space, and $t_q = \tilde{O}(d + n \cdot \mathrm{mmc}(l)^2)$ query time, where $\mathrm{mmc}(l)$ is a complexity-measure (modulus) of the symmetric norm under consideration. When $l = \ell_{p}$, this runtime matches the aforementioned state-of-the-art oracles. | Accept | Reviewers found the problem, the results and the techniques (very) interesting. The main concerns were about the practicality of the results (esp. lack of experiments) and presentation (notably various typos). The presentation issues appear to be easily fixable with a careful pass over the paper. Ultimately, the positives significantly outweighed the negatives. | train | [
"GtBozmUobaK",
"lOX45ej4g4Z",
"8eCxxWO9jtL",
"s2MZT6UvM6N",
"JXbQAzPsz9i",
"JDjim616G8e"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors try to explain why their contributions are meaningful and I can appreciate their results better now. I won't change my score due to the presentation issues I mentioned. I noticed that the other reviewers gave it a score of 2 regarding presentation, so overall this makes for a hard paper to read and th... | [
-1,
-1,
4,
7,
6,
6
] | [
-1,
-1,
3,
4,
3,
4
] | [
"lOX45ej4g4Z",
"nips_2022_ZCGDqdK0zG",
"nips_2022_ZCGDqdK0zG",
"nips_2022_ZCGDqdK0zG",
"nips_2022_ZCGDqdK0zG",
"nips_2022_ZCGDqdK0zG"
] |
nips_2022_5Fg3XoHjQ4r | Towards Hard-pose Virtual Try-on via 3D-aware Global Correspondence Learning | In this paper, we target image-based person-to-person virtual try-on in the presence of diverse poses and large viewpoint variations. Existing methods are restricted in this setting as they estimate garment warping flows mainly based on 2D poses and appearance, which omits the geometric prior of the 3D human body shape.
Moreover, current garment warping methods are confined to localized regions, which makes them ineffective in capturing long-range dependencies and results in inferior flows with artifacts.
To tackle these issues, we present 3D-aware global correspondences, which are reliable flows that jointly encode global semantic correlations, local deformations, and geometric priors of 3D human bodies. Particularly, given an image pair depicting the source and target person, (a) we first obtain their pose-aware and high-level representations via two encoders, and introduce a coarse-to-fine decoder with multiple refinement modules to predict the pixel-wise global correspondence. (b) 3D parametric human models inferred from images are incorporated as priors to regularize the correspondence refinement process so that our flows can be 3D-aware and better handle variations of pose and viewpoint. (c) Finally, an adversarial generator takes the garment warped by the 3D-aware flow, and the image of the target person as inputs, to synthesize the photo-realistic try-on result. Extensive experiments on public benchmarks and our selected HardPose test set demonstrate the superiority of our method against state-of-the-art try-on approaches. | Accept | This paper received 4 positive reviews: 2xBA + WA + A. All reviewers acknowledged that the proposed approach is simple and effective, it is well presented, and the claims are supported by strong empirical performance and extensive evaluation on several datasets. The remaining questions and concerns were addressed in the authors' responses, which seemed convincing to the reviewers. The final recommendation is therefore to accept. | train | [
"1-9KJCdlcpH",
"QeblDBVF9p2",
"pBRiFtjMmXh",
"w5NtzzSwH2e",
"ToWEVmBAYK8",
"0Z40M1rNi2c",
"KFIpJn4fwyJ",
"KuCUwYRRjN",
"6P_6lNUw5tYb",
"lx3z3Ba-hox",
"yJWzMNQ8rFb",
"nh6sd9Bn5qH",
"YtBLrnUN1rV",
"QXhiYjxyfm_",
"81jZfgvdPYV"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Authors addressed some of my concerns, especially w.r.t. the issues in the experimental setup. I am increasing my rating to borderline accept.",
" Thanks authors for the clarifications. I will update my rating after cross-checking prior art on datasets and metrics.",
" Dear Reviewer bUCs,\nWe have tried to ad... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
3
] | [
"QeblDBVF9p2",
"pBRiFtjMmXh",
"81jZfgvdPYV",
"KuCUwYRRjN",
"81jZfgvdPYV",
"81jZfgvdPYV",
"YtBLrnUN1rV",
"YtBLrnUN1rV",
"QXhiYjxyfm_",
"nh6sd9Bn5qH",
"nips_2022_5Fg3XoHjQ4r",
"nips_2022_5Fg3XoHjQ4r",
"nips_2022_5Fg3XoHjQ4r",
"nips_2022_5Fg3XoHjQ4r",
"nips_2022_5Fg3XoHjQ4r"
] |
nips_2022_A6AFK_JwrIW | Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs | Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited. Different from images, the complex nature of graphs poses unique challenges to adopting the invariance principle. In particular, distribution shifts on graphs can appear in a variety of forms such as attributes and structures, making it difficult to identify the invariance. Moreover, domain or environment partitions, which are often required by OOD methods on Euclidean data, could be highly expensive to obtain for graphs. To bridge this gap, we propose a new framework, called Causality Inspired Invariant Graph LeArning (CIGA), to capture the invariance of graphs for guaranteed OOD generalization under various distribution shifts. Specifically, we characterize potential distribution shifts on graphs with causal models, concluding that OOD generalization on graphs is achievable when models focus only on subgraphs containing the most information about the causes of labels. Accordingly, we propose an information-theoretic objective to extract the desired subgraphs that maximally preserve the invariant intra-class information. Learning with these subgraphs is immune to distribution shifts. Extensive experiments on 16 synthetic or real-world datasets, including a challenging setting -- DrugOOD, from AI-aided drug discovery, validate the superior OOD performance of CIGA. | Accept |
Graph NNs have proven to work considerably well in the in-distribution setting. However, they fail when test data come from a different distribution than the training data, as shown by previous work. This paper aligns with recent works and aims to study how to obtain invariance to shifts described by the assumed causal model. The assumed causal model is reasonable, and the solution is novel.
There is consensus among the referees, as evidenced by the score of 6 from each of them, that these results could be of interest to NeurIPS. | train | [
"r2RIXtKNbla",
"S1bqdlssm5",
"sljHPndSz10",
"jEpHzrG_t6Q",
"4QxPTlCckHq",
"m5cqnTem94U",
"TKdSok44XPx",
"vcBTqL_hQCE",
"d0RtVHMK78F",
"FXxheM6AMS",
"dFr_5epjaPU",
"2FKaOONi37x",
"DmiNtKjTt5y",
"LYbuh2ZkEeW",
"jn1spJuh4d",
"F8ZSKXbHpev",
"-L3EP8eMBwDr",
"KiTJrAO0O7y",
"naQjdPVFgi"... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"... | [
" Thank you again for your time and efforts in reviewing our paper, and for your valuable comments that helped us to strengthen the paper!",
" I have raised my rating from 5 to 6.",
" Dear Reviewer QGqs,\n\nAs the window for discussion is closing, we’d be grateful if you can confirm whether our follow-up respon... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"S1bqdlssm5",
"DmiNtKjTt5y",
"d0RtVHMK78F",
"4QxPTlCckHq",
"2A0iG8sKLK0",
"_3N7KSxXut",
"d0RtVHMK78F",
"d0RtVHMK78F",
"Z2qV6MFcwex",
"dFr_5epjaPU",
"2FKaOONi37x",
"nLDNj69gWII",
"HpFhRSN-bHX",
"_3N7KSxXut",
"nLDNj69gWII",
"HpFhRSN-bHX",
"_3N7KSxXut",
"nips_2022_A6AFK_JwrIW",
"HpF... |
nips_2022_8ow4YReXH9j | Ultra-marginal Feature Importance | Scientists frequently prioritize learning from data rather than training the best possible model; however, research in machine learning often prioritizes the latter. Marginal contribution feature importance (MCI) was developed to break this trend by providing a useful framework for quantifying the relationships in data in an interpretable fashion. In this work, we aim to improve upon the theoretical properties, performance, and runtime of MCI by introducing ultra-marginal feature importance (UMFI), which uses preprocessing methods from the AI fairness literature to remove dependencies in the feature set prior to measuring predictive power. We show on real and simulated data that UMFI performs better than MCI, especially in the presence of correlated interactions and unrelated features, while partially learning the structure of the causal graph and reducing the exponential runtime of MCI to super-linear. | Reject | This work makes a significant contribution to establishing the theoretical foundations for feature importance. The authors suggest a set of axioms that a feature importance score should have and introduce an algorithm that computes a feature importance score that has these required properties. In addition to the theoretical work, a compelling empirical evaluation is conducted showing significant improvement over previous results.
After a good discussion between the reviewers and the authors and improvements to the paper introduced due to this discussion, the result is a good paper that is of great interest to the NeurIPS community. However, the added content also raised some concern about the accuracy of some statements, especially with respect to the blood relation. The main concern is that it is not clear that the algorithm provided has the blood relation property. Moreover, it is not clear that it is possible to fulfil this relation. Here are two scenarios that may be problematic:
1. In the fully observed setting, if X is a confounder of Y and Z while Z is identical to X, then X blocks the backdoor from Y to Z and therefore, according to the blood relation axiom, the importance of Z should be zero while the importance of X should be positive, since it has a direct causal relation with Y. However, the roles of X and Z are indistinguishable, so it might as well be that Z is the confounder and should therefore have a non-zero importance (a small simulation sketch of this indistinguishability follows the list).
2. In the partially observed setting, if S is an unobserved uniformly distributed integer in the range 1..8, Y is the sign of S, and X is an indicator of S being greater than 4, then according to the blood relation axiom, since there is a backdoor between Y and X when S is unobserved, the importance of X should be non-zero. However, this setting is indistinguishable from the setting in which X and Y are uncorrelated Bernoulli variables, in which case the importance of X should be zero.
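A minimal simulation of scenario 1, illustrating the indistinguishability: with Z an exact copy of X, the observed joint distribution is symmetric in the two variables, so no data-driven importance score can separate them. Variable names and the setup below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)            # candidate confounder
z = x.copy()                            # Z is an exact copy of X
y = 2 * x + rng.normal(size=100_000)    # outcome driven by the confounder

# The joint distribution of (X, Z, Y) is invariant to swapping the labels
# X and Z, so no importance score computed from observational data alone
# can tell "X is the confounder" apart from "Z is the confounder".
print(np.corrcoef(x, y)[0, 1], np.corrcoef(z, y)[0, 1])  # identical values
```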
Hence, it looks as if the blood relation requirement might be too strong. One reviewer suggested that this problem might be eliminated by saying that the importance of a feature is 0 if there exists a graphical model in which the feature is not in the same connected component as the target (note that this is a graphical model and not a causal model). However, this corollary should be theoretically analyzed.
Some additional comments that emerge in the revised paper:
1. The axioms do not define a unique solution. Indeed, if a function Imp has all three required properties (axioms), then multiplying it by any positive constant would generate another valid feature importance score. It would be nice to add another requirement that forces a unique solution, as the Shapley value or MCI do.
2. The proof in the appendix shows that UMFI has the three required properties only when certain assumptions hold on the distribution. However, in the body of the paper, these limitations are not mentioned.
3. In line 211 it is stated that the proofs are presented in Section 3; however, the proofs are presented only in the appendix.
| test | [
"3ot3m0hNLrn",
"ewha3HILv8",
"ntW74zE5Qe",
"XK_W3Rlwho5",
"VtIYzDrjPs",
"WaZ9SL-ykTD",
"e38xCaIre9F",
"d7R91_ghF9j",
"9RQz7cHKntc",
"_GnTpbI5Aw8",
"NwuyOR4Mu_y",
"9wN6OclgVuZ",
"GWYg-6tCCK6",
"DVgoiRwG8kW",
"2V13JwMbB7",
"rNHhmDsyDQ",
"U4Rf1jgFoRA",
"_qWyE-gOGk",
"BGO8ffog3jG",
... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" Sounds good, I think this resolves all my concerns. And yes I think discussing these points and making the changes from the previous posts will improve the paper. Best of luck.",
" **Reply #6 and #13.** \n\nAh, we see what you are saying now. Thank you for catching this. We will ensure that this wording is chan... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"ewha3HILv8",
"ntW74zE5Qe",
"XK_W3Rlwho5",
"VtIYzDrjPs",
"_GnTpbI5Aw8",
"d7R91_ghF9j",
"9RQz7cHKntc",
"nips_2022_8ow4YReXH9j",
"GWYg-6tCCK6",
"NwuyOR4Mu_y",
"DVgoiRwG8kW",
"BGO8ffog3jG",
"OoWGp8aDyUP",
"qQqno9Q_V28",
"qQqno9Q_V28",
"ffXFHK96AyV",
"nips_2022_8ow4YReXH9j",
"nips_2022... |
nips_2022_OcNoF7qA4t | Non-Linear Coordination Graphs | Value decomposition multi-agent reinforcement learning methods learn the global value function as a mixing of each agent's individual utility functions. Coordination graphs (CGs) represent a higher-order decomposition by incorporating pairwise payoff functions and thus are supposed to have a more powerful representational capacity. However, CGs decompose the global value function linearly over local value functions, severely limiting the complexity of the value function class that can be represented. In this paper, we propose the first non-linear coordination graph by extending CG value decomposition beyond the linear case. One major challenge is to conduct greedy action selections in this new function class, to which commonly adopted DCOP algorithms are no longer applicable. We study how to solve this problem when mixing networks with LeakyReLU activation are used. An enumeration method with a global optimality guarantee is proposed and motivates an efficient iterative optimization method with a local optimality guarantee. We find that our method can achieve superior performance on challenging multi-agent coordination tasks like MACO. | Accept | This paper is a very clear accept. The reviews had only minor quibbles, which I trust the authors will address in their final version. | train | [
"21tERp8EpxK",
"wmCSWCNUu9",
"SvcOYpYq0ec",
"b1G5FVsTqK5",
"6sAii9JjuZH",
"ZvPvEEuQm_P",
"5jPzI5gwRpW",
"ltWhrJF0i8c",
"xZ8avovdPOp",
"HENoABMXRy",
"O1atqOkoHr4_",
"w36tsjttoP6",
"WmjhtPuMGPtG",
"uvcjTJ87OWd",
"B9Ta4WYArJ",
"L7-ipKKD1SWx",
"qG4ndEzbKgG",
"01OSMHkARYl",
"FF0UzYZx5... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Thanks for the reviewer's work!",
" Thanks for the reviewer's work!",
" Thanks for the reviewer's work.",
" Thanks for the reviewer's work!",
" I appreciate your explanations, the point is clearer now. I would like to thanks the authors, I do not have any further concern or doubt.",
" Dear authors,\n\nT... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
5
] | [
"6sAii9JjuZH",
"HENoABMXRy",
"xZ8avovdPOp",
"ZvPvEEuQm_P",
"5jPzI5gwRpW",
"01OSMHkARYl",
"O1atqOkoHr4_",
"HENoABMXRy",
"B9Ta4WYArJ",
"WmjhtPuMGPtG",
"w36tsjttoP6",
"KO6QiPjGG_",
"axYDqevqpc5",
"FF0UzYZx5H",
"FF0UzYZx5H",
"01OSMHkARYl",
"01OSMHkARYl",
"nips_2022_OcNoF7qA4t",
"nips... |
nips_2022_0xbhGxgzd1t | ComGAN: Unsupervised Disentanglement and Segmentation via Image Composition | We propose ComGAN, a simple unsupervised generative model, which simultaneously generates realistic images and high semantic masks under an adversarial loss and a binary regularization. In this paper, we first investigate two kinds of trivial solutions in the compositional generation process, and demonstrate that their source is vanishing gradients on the mask. Then, we resolve these trivial solutions from an architectural perspective. Furthermore, we redesign two fully unsupervised modules based on ComGAN (DS-ComGAN), where the disentanglement module associates the foreground, background and mask with three independent variables, and the segmentation module learns object segmentation. Experimental results show that (i) ComGAN's network architecture effectively avoids trivial solutions without any supervised information and regularization; (ii) DS-ComGAN achieves remarkable results and outperforms existing semi-supervised and weakly supervised methods by a large margin in both the image disentanglement and unsupervised segmentation tasks. This implies that the redesign of ComGAN is a possible direction for future unsupervised work. | Accept | The paper proposes a compositional GAN model with a novel network architecture that solves the vanishing gradient problem underlying trivial solutions. The proposed model achieves strong results on image disentanglement and unsupervised segmentation tasks.
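A quick illustration of the vanishing-gradient mechanism named in the abstract above, assuming the usual composition I = m * fg + (1 - m) * bg with a sigmoid mask m; this is our own derivation for intuition, not code or analysis from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Composition: I = m * fg + (1 - m) * bg with m = sigmoid(logit).
# Then dI/dlogit = (fg - bg) * m * (1 - m), which vanishes as the mask
# saturates to 0 or 1 -- so an all-foreground or all-background mask
# (a trivial solution) receives almost no corrective gradient.
for logit in [0.0, 4.0, 8.0]:
    m = sigmoid(logit)
    print(f"m = {m:.4f}, gradient factor m*(1-m) = {m * (1 - m):.2e}")
```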
The rebuttals by the authors have successfully addressed most of the concerns of the reviewers. All the reviewers are positive about this paper. Reviewer Tkuw's main concerns regarding the evaluation and the clarity of the method were addressed. The reviewer raised the rating. Reviewer 19KW felt positive about Section 3.1 in the revised version and the additional empirical results regarding the gradient values observed during the training stage, as given in Figure 5. The reviewer also updated the initial rating. Reviewer rWMN's concerns have also been addressed. The reviewer appreciates the additional detailed theoretical analysis of the problem. | train | [
"Q5FzOXK9bxU",
"ckyEektziO8",
"ftycyuBW4FE",
"7-sKLOpgu4S",
"xtsai1wwrkL",
"1EizKaEfZRmX",
"Jf2S7GrKlw",
"THgGlMRDfV9",
"7XiG-cjyLN",
"_FjnonuxRS",
"HNcjZEda-a",
"2rT2J8hnJ9C",
"gOR6FOs7E-s",
"O0FJYU9yOI8",
"MT0a2UoUuy9"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response and appreciation. We are glad that our feedback addressed your concerns. We will be encouraged if you would like to raise your rating accordingly. Please let us know if there is anything else we can do to make the paper better.",
" Thank you for your comments. My main concerns regard... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"ftycyuBW4FE",
"7XiG-cjyLN",
"THgGlMRDfV9",
"HNcjZEda-a",
"MT0a2UoUuy9",
"O0FJYU9yOI8",
"gOR6FOs7E-s",
"MT0a2UoUuy9",
"_FjnonuxRS",
"gOR6FOs7E-s",
"2rT2J8hnJ9C",
"O0FJYU9yOI8",
"nips_2022_0xbhGxgzd1t",
"nips_2022_0xbhGxgzd1t",
"nips_2022_0xbhGxgzd1t"
] |
nips_2022_9TsP2Gg0CM | Homomorphic Matrix Completion | In recommendation systems, global positioning, system identification and mobile social networks, it is a fundamental routine that a server completes a low-rank matrix from an observed subset of its entries. However, sending data to a cloud server raises data privacy concerns due to eavesdropping attacks and the single-point-of-failure problem, e.g., the Netflix prize contest was canceled after a privacy lawsuit. In this paper, we propose a homomorphic matrix completion algorithm for privacy-preserving data completion. First, we formulate a \textit{homomorphic matrix completion} problem where a server performs matrix completion on cyphertexts, and propose an encryption scheme that is fast and easy to implement. Secondly, we prove that the proposed scheme satisfies the \textit{homomorphism property} that decrypting the recovered matrix on cyphertexts will obtain the target complete matrix in plaintext. Thirdly, we prove that the proposed scheme satisfies an $(\epsilon, \delta)$-differential privacy property. With a similar level of privacy guarantee, we improve the best-known error bound $O(\sqrt[10]{n_1^3n_2})$ to EXACT recovery at the price of more samples. Finally, on numerical data and real-world data, we show that both homomorphic nuclear-norm minimization and alternating minimization algorithms achieve accurate recoveries on cyphertexts, verifying the homomorphism property. | Accept |
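For orientation, a plain (non-private, plaintext) alternating-least-squares sketch of low-rank matrix completion -- the kind of routine the paper runs on cyphertexts. It assumes every row and column has at least r observed entries; names are invented and this is not the paper's algorithm.

```python
import numpy as np

def altmin_complete(M_obs, mask, r, iters=50):
    # M_obs: observed matrix (unobserved entries arbitrary),
    # mask: boolean matrix of observed positions, r: target rank.
    n1, n2 = M_obs.shape
    rng = np.random.default_rng(0)
    U, V = rng.normal(size=(n1, r)), rng.normal(size=(n2, r))
    for _ in range(iters):
        for i in range(n1):   # least-squares update of each row of U
            idx = mask[i]
            U[i] = np.linalg.lstsq(V[idx], M_obs[i, idx], rcond=None)[0]
        for j in range(n2):   # least-squares update of each row of V
            idx = mask[:, j]
            V[j] = np.linalg.lstsq(U[idx], M_obs[idx, j], rcond=None)[0]
    return U @ V.T            # completed low-rank estimate
```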
This paper concerns privacy-preserving matrix completion in a distributed manner. The communication security is based on homomorphic encryption, while the notion of privacy is defined as subspace-aware joint differential privacy.
The paper received a mixed evaluation from the reviewers, ranging from accept (7) to reject (3), and the reviewers that gave these scores decided to keep them after the rebuttal and the following discussion.
The strengths of the paper mentioned by the reviewers were:
- Focusing on an interesting and important problem
- Providing an algorithm that guarantees the exact recovery, as opposed to sacrificing accuracy in the prior work
- Solid experimental results
On the other hand, the identified weaknesses were:
- The DP side does not seem very interesting, just involving the Gaussian mechanism
- Necessity to know at least an estimate of the rank of M before the homomorphic encryption (a 2-phase solution to that is sketched in the rebuttal)
- The paper not being self-contained
- Some technical issues, which (I believe) were clarified in the feedback
Despite the weaknesses mentioned above, I lean toward acceptance in my recommendation, although with limited confidence. | train | [
"qeuIRXvvywy",
"YCPArOklwc8",
"sbcS4S0de8G",
"RXAdDqTY7VY",
"L1woWzBytdQ",
"lF5WNN1THmu",
"71-6MD5EA1",
"GXvsEnkc-A",
"DjkNb73cJ4",
"IRj2ANJxif",
"yjvzL6L9Tv",
"b7y5nQsj1_"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The reviewer (Reviewer UWAB) raised an interesting question about the “block distributed” setting, which is a special case covered by the proposed homomorphic encryption framework, and the provided theoretical results naturally apply.\n\nAbout parameter k and the true rank r of data matrix M, the authors restat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"lF5WNN1THmu",
"RXAdDqTY7VY",
"nips_2022_9TsP2Gg0CM",
"L1woWzBytdQ",
"b7y5nQsj1_",
"71-6MD5EA1",
"yjvzL6L9Tv",
"DjkNb73cJ4",
"IRj2ANJxif",
"nips_2022_9TsP2Gg0CM",
"nips_2022_9TsP2Gg0CM",
"nips_2022_9TsP2Gg0CM"
] |
nips_2022_hqtSdpAK39W | Cluster Randomized Designs for One-Sided Bipartite Experiments | The conclusions of randomized controlled trials may be biased when the outcome of one unit depends on the treatment status of other units, a problem known as \textit{interference}. In this work, we study interference in the setting of one-sided bipartite experiments in which the experimental units---where treatments are randomized and outcomes are measured---do not interact directly. Instead, their interactions are mediated through their connections to \textit{interference units} on the other side of the graph. Examples of this type of interference are common in marketplaces and two-sided platforms. The \textit{cluster-randomized design} is a popular method to mitigate interference when the graph is known, but it has not been well-studied in the one-sided bipartite experiment setting. In this work, we formalize a natural model for interference in one-sided bipartite experiments using the exposure mapping framework. We first exhibit settings under which existing cluster-randomized designs fail to properly mitigate interference under this model. We then show that minimizing the bias of the difference-in-means estimator under our model results in a balanced partitioning clustering objective with a natural interpretation. We further prove that our design is minimax optimal over the class of linear potential outcomes models with bounded interference. We conclude by providing theoretical and experimental evidence of the robustness of our design to a variety of interference graphs and potential outcomes models. | Accept | This well-written paper proposes a possibly-new experiment-design problem where there is interference. This interference is modeled by a bipartite graph where one side has the "experimental" units and the other has "interference" units. The purpose of the interference units is to facilitate interactions between the experimental units. The goal is to assign the experimental units to "treatment" or "control" in order to estimate the total treatment effect on the experimental units. Specifically, for each experimental unit i, we can either assign "control" (Z_i = -1) or "treat" (Z_i = 1). The outcome for this unit is the value Y_i(Z) = alpha_i + beta_i Z_i + gamma_i e_i(Z)---where e_i(Z) is called the "exposure mapping" that captures the interference of the (Z_j: j distinct from i) values with the outcome for i. This setting is motivated by marketplace experiments where buyers interact with sellers.
This primarily-theoretical paper studies the performance of difference-in-means estimators for cluster-randomized balanced designs, under a "linear exposure" model. One key result is a min-max optimal equivalence between treatment effect estimation and identifying a good clustering: the clustering that minimizes maximum bias will basically yield a partitioning of a weighted graph, with weights representing the strength of interference between the units. Simulation results are also given.
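A small simulation of the linear exposure model described above, showing how the naive difference-in-means estimator under independent unit-level randomization misses the interference term gamma_i e_i(Z). The graph and parameter choices are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Random sparse interference weights, row-normalized so that
# e_i(Z) = (W @ Z)_i is a weighted average of the neighbors' treatments.
W = (rng.random((n, n)) < 5 / n).astype(float)
np.fill_diagonal(W, 0.0)
W /= np.maximum(W.sum(1, keepdims=True), 1e-12)  # isolated rows keep e_i = 0 (rare here)

alpha, beta, gamma = rng.normal(size=(3, n))
Z = rng.choice([-1, 1], size=n)        # independent unit-level randomization
e = W @ Z                              # exposure mapping e_i(Z)
Y = alpha + beta * Z + gamma * e       # linear exposure outcome model

dim = Y[Z == 1].mean() - Y[Z == -1].mean()   # difference in means
tte = 2 * (beta + gamma).mean()              # Y(all +1) - Y(all -1), averaged
print(dim, tte)  # dim concentrates near 2*mean(beta): it misses 2*mean(gamma)
```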
There are concerns about the novelty of the clustering objective, and the supplementary material has poor formatting, but the paper's contributions are appreciated. The authors are asked to carefully incorporate the referee comments. | train | [
"lpflBoVxhEi",
"Ds8456iKEn",
"Bqgya1mITDe",
"oGrinrWPGBG",
"Mtt4B5m5_W3",
"Cv5RlMzJArJ",
"gVibI99Gia2",
"TGeWQVdjtW1",
"XWmONu0QnN",
"5B91TnILfca",
"5PG7XLEYZwu",
"w1jd2o4Nw0a",
"2K3lZFbswDy",
"YJqrSamKLIL",
"BwJHhGdpo9C",
"RmV-sbJ-x-P"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their comments. It would be great if you can comment on our response addressing the reviewer's major concerns. Thanks.",
" Thank you for the response. We will include the power-law graph experiments and the discussion of the choice of balanced design in our revision.\n\nThank you also ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
3
] | [
"TGeWQVdjtW1",
"Bqgya1mITDe",
"Cv5RlMzJArJ",
"RmV-sbJ-x-P",
"BwJHhGdpo9C",
"BwJHhGdpo9C",
"YJqrSamKLIL",
"2K3lZFbswDy",
"w1jd2o4Nw0a",
"w1jd2o4Nw0a",
"w1jd2o4Nw0a",
"nips_2022_hqtSdpAK39W",
"nips_2022_hqtSdpAK39W",
"nips_2022_hqtSdpAK39W",
"nips_2022_hqtSdpAK39W",
"nips_2022_hqtSdpAK39... |
nips_2022_Qy1D9JyMBg0 | Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts | Large sparsely-activated models have obtained excellent performance in multiple domains.
However, such models are typically trained on a single modality at a time.
We present the Language-Image MoE, LIMoE, a sparse mixture of experts model capable of multimodal learning.
LIMoE accepts both images and text simultaneously, while being trained using a contrastive loss.
MoEs are a natural fit for a multimodal backbone, since expert layers can learn an appropriate partitioning of modalities.
However, new challenges arise; in particular, training stability and balanced expert utilization, for which we propose an entropy-based regularization scheme.
Across multiple scales, we demonstrate performance improvement over dense models of equivalent computational cost.
LIMoE-L/16 trained comparably to CLIP-L/14 achieves 77.9% zero-shot ImageNet accuracy (vs. 76.2%), and when further scaled to H/14 (with additional data) it achieves 83.8%, approaching state-of-the-art methods which use custom per-modality backbones and pre-training schemes.
We analyse the quantitative and qualitative behavior of LIMoE, and demonstrate phenomena such as differing treatment of the modalities and the emergence of modality-specific experts. | Accept | The authors use a mixture-of-experts model in a multimodal setting. The reviewers consider the work technically strong and interesting; the AC concurs. | val | [
"B6ZCZYC1T7",
"PmbzIETIF1b",
"RvOiFmC6rI",
"ef5EsWLCWx2",
"JRS04qAEjY2",
"ArHoIMwFjFG",
"sIMwbn_zz63",
"TyKsUg5SFtl"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the reviews and responses from all reviewers and personally thank the authors for the clarifications and answers to my individual questions. I find them satisfactory and increase 1 point to support this paper for that reason. ",
" Many thanks for the time spent reviewing and your feedback, and for n... | [
-1,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"RvOiFmC6rI",
"TyKsUg5SFtl",
"ef5EsWLCWx2",
"sIMwbn_zz63",
"ArHoIMwFjFG",
"nips_2022_Qy1D9JyMBg0",
"nips_2022_Qy1D9JyMBg0",
"nips_2022_Qy1D9JyMBg0"
] |
nips_2022_Rym8_qTIB7o | Node-oriented Spectral Filtering for Graph Neural Networks | Graph neural networks (GNNs) have shown remarkable performance on homophilic graph data while being far less impressive when handling non-homophilic graph data due to the inherent low-pass filtering property of GNNs. In general, since real-world graphs are often a complex mixture of diverse subgraph patterns, a universal spectral filter learned on the graph from a global perspective, as in most current works, may still have difficulty adapting to the variation of local patterns. On the basis of the theoretical analysis of local patterns, we rethink the existing spectral filtering methods and propose the \underline{N}ode-oriented spectral \underline{F}iltering for Graph Neural Network (namely NFGNN). By estimating the node-oriented spectral filter for each node, NFGNN is provided with the capability of precise local node positioning via the generalized translated operator, thus adaptively discriminating the variations of local homophily patterns. Furthermore, the utilization of re-parameterization brings a trade-off between global consistency and local sensibility for learning the node-oriented spectral filters. Meanwhile, we theoretically analyze the localization property of NFGNN, demonstrating that the signal after adaptive filtering is still positioned around the corresponding node. Extensive experimental results demonstrate that the proposed NFGNN achieves more favorable performance. | Reject | The paper has mixed reviews. While some reviewers feel that the paper is novel and interesting, other reviewers think that additional experiments are needed to justify the proposed method and that the proposed methods are somewhat incremental.
The paper will benefit from another revision that will address the raised concerns. | train | [
"FS9foMR8Wz",
"QXTP63OZxC",
"ks5-7wuJ8B",
"ixB1MKd8-GR",
"_L8QqRMfafo",
"LZJoSAwS9xI",
"rHtGfij-K4g",
"_rZL1bqc-BE",
"LXaEolx23ON",
"EiKcx175i_F",
"OT1d5ifU-i",
"ccbc0uLqqW",
"UzrZf0pywO4",
"HqPnXEnOh3",
"vq7ip3M49W"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \n We sincerely thank the reviewers for their comments and regret not having completely eliminated your concerns about the ability of our NFGNN model in handling heterogeneous graphs. In the new comment, the reviewer thinks that, for a challenging complex graph mixing of homophily and heterophily, the new method... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"QXTP63OZxC",
"ks5-7wuJ8B",
"_L8QqRMfafo",
"HqPnXEnOh3",
"rHtGfij-K4g",
"vq7ip3M49W",
"vq7ip3M49W",
"HqPnXEnOh3",
"UzrZf0pywO4",
"ccbc0uLqqW",
"ccbc0uLqqW",
"nips_2022_Rym8_qTIB7o",
"nips_2022_Rym8_qTIB7o",
"nips_2022_Rym8_qTIB7o",
"nips_2022_Rym8_qTIB7o"
] |
nips_2022_vaxPmiHE3S | EGRU: Event-based GRU for activity-sparse inference and learning | The scalability of recurrent neural networks (RNNs) is hindered by the sequential dependence of each time step's computation on the previous time step's output. Therefore, one way to speed up and scale RNNs is to reduce the computation required at each time step independent of model size and task. In this paper, we propose a model that reformulates Gated Recurrent Units (GRU) as an event-based activity-sparse model that we call the Event-based GRU (EGRU), where units compute updates only on receipt of input events (event-based) from other units. When combined with having only a small fraction of the units active at a time (activity-sparse), this model has the potential to be vastly more compute efficient than current RNNs. Notably, activity-sparsity in our model also translates into sparse parameter updates during gradient descent, extending this compute efficiency to the training phase. We show that the EGRU demonstrates competitive performance compared to state-of-the-art recurrent network models in real-world tasks, including language modeling while maintaining high activity sparsity naturally during inference and training. This sets the stage for the next generation of recurrent networks that are scalable and more suitable for novel neuromorphic hardware. | Reject | This paper introduces an event-based GRU to obtain an efficient continuous-time RNN. Although the method is sound and can work on a series of small sequence modeling tasks, there are multiple issues with the significance of the results of the paper which I point out in the following:
1) There have been significant advances in recurrent neural networks designed to efficiently model sequences, and in their scaling properties, both of which are overlooked in this paper. In particular, structured state-space models [1], diagonal state-space models [2], LSSL [3], Closed-form continuous-time networks [4], efficient memorization via polynomial projection [5], and Neural Rough DEs [6] are currently shaping the state-of-the-art sequence modeling frameworks that efficiently model tasks with long-range dependencies while significantly outperforming Transformers and their variants. Therefore, from the perspective of representation learning capabilities, there is a significant gap between what EGRU (proposed in this paper) could achieve and the state-of-the-art sequence modeling tools powered by recurrent networks. Before getting published, it is essential to compare performance and speed to these models on proper benchmarks such as Long Range Arena [7].
[1] Gu, A., Goel, K., & Ré, C. (2021). Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396
[2] Gupta, A. (2022). Diagonal State Spaces are as Effective as Structured State Spaces. arXiv preprint arXiv:2203.14343.
[3] Gu, A., Johnson, I., Goel, K., Saab, K., Dao, T., Rudra, A., & Ré, C. (2021). Combining recurrent, convolutional, and continuous-time models with linear state space layers. Advances in Neural Information Processing Systems, 34, 572-585.
[4] Hasani, R., Lechner, M., Amini, A., Liebenwein, L., Tschaikowski, M., Teschl, G., & Rus, D. (2021). Closed-form continuous-depth models. arXiv preprint arXiv:2106.13898.
[5] Gu, A., Dao, T., Ermon, S., Rudra, A., & Ré, C. (2020). Hippo: Recurrent memory with optimal polynomial projections. Advances in Neural Information Processing Systems, 33, 1474-1487.
[6] Morrill, J., Salvi, C., Kidger, P., & Foster, J. (2021, July). Neural rough differential equations for long time series. In International Conference on Machine Learning (pp. 7829-7838). PMLR.
[7] Tay, Y., Dehghani, M., Abnar, S., Shen, Y., Bahri, D., Pham, P., ... & Metzler, D. (2020, September). Long Range Arena: A Benchmark for Efficient Transformers. In International Conference on Learning Representations.
2) When it comes to the efficiency of computations on spatiotemporal tasks, especially when using a benchmark such as DVS Gesture detection, efficient models such as spiking networks must be accounted for. For instance, in [8], the authors outperform EGRU on the DVS task with over 10x fewer parameters. In this case, EGRU+DA achieves 97% accuracy with 15.75M (10.77M MAC) parameters, while the method proposed in [8] achieves 98% accuracy with 1.1M parameters. I believe it is essential to compare results with appropriate efficient models in terms of both computational efficiency and performance.
[8] She, X., Dash, S., Mukhopadhyay, S.: Sequence approximation using feedforward spiking neural network for spatiotemporal learning: Theory and optimization methods. In: International Conference on Learning Representations (2022), https://openreview.net/forum?id=bp-LJ4y_XC
3) The benchmarks selected for testing EGRU are not appropriate. For instance, sMNIST is already a solved problem, with CoRNN reaching 99% accuracy and even outperforming EGRU (we need more clarification on this as well). Even DVS is almost solved (98% from [8]). Instead, I suggest the authors try more challenging and up-to-date benchmarks such as Long Range Arena [7], audio datasets, and larger language benchmarks.
For the above fundamental reasons, I vote for the rejection of this paper and encourage the authors to incorporate these critical points in their next submission.
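For readers unfamiliar with the model under discussion, a conceptual numpy sketch of a GRU step whose units communicate only above a threshold -- the event-based, activity-sparse idea from the abstract. The paper's exact EGRU dynamics and training (e.g., surrogate gradients, state handling after events) differ; all names here are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    # One standard GRU update (biases omitted for brevity).
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

def event_step(x, h, params, theta=1.0):
    # Conceptual event-based variant: a unit emits an "event" (a nonzero
    # output visible to other units) only when its state exceeds the
    # threshold theta; sub-threshold units stay silent, which is the
    # source of the activity sparsity discussed above.
    h_new = gru_step(x, h, *params)
    events = np.where(h_new > theta, h_new, 0.0)
    return h_new, events
```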
| train | [
"2qR1aNY8z5C",
"m7IegRZyBOO",
"Qkb8-2reMPX",
"Ny3-JOMj2gA",
"2WoExm0S3l",
"4mcsrXG5v7P",
"YdPqvrVoZkz",
"GB1jLG9W1WU",
"4MS-onvnKnQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have raised my score and the authors have done good work in the rebuttal.",
" We would like to thank all their reviewers for their constructive comments and questions. Please note that we have uploaded an updated version of our main text as well as supplement. We have indicated all major changes using blue co... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"2WoExm0S3l",
"nips_2022_vaxPmiHE3S",
"4MS-onvnKnQ",
"GB1jLG9W1WU",
"4mcsrXG5v7P",
"YdPqvrVoZkz",
"nips_2022_vaxPmiHE3S",
"nips_2022_vaxPmiHE3S",
"nips_2022_vaxPmiHE3S"
] |
nips_2022_mkEPog9HiV | Structure-Preserving 3D Garment Modeling with Neural Sewing Machines | 3D garment modeling is a critical and challenging topic in the area of computer vision and graphics, with increasing attention focused on garment representation learning, garment reconstruction, and controllable garment manipulation. However, existing methods are constrained to modeling garments of specific categories or with simple topology, and fail to learn reconstructable and manipulable representations. In this paper, we propose a novel Neural Sewing Machine (NSM), a learning-based framework for structure-preserving 3D garment modeling, which is capable of modeling and learning representations for garments with diverse shapes and topologies and is successfully applied to 3D garment reconstruction and controllable manipulation. To model generic garments, we first obtain a sewing pattern embedding via a unified sewing pattern encoding module, as the sewing pattern can accurately describe the intrinsic structure and the topology of the 3D garment. Then we use a 3D garment decoder to decode the sewing pattern embedding into a 3D garment using the UV-position maps with masks. To preserve the intrinsic structure of the predicted 3D garment, we introduce an inner-panel structure-preserving loss, an inter-panel structure-preserving loss, and a surface-normal loss in the learning process of our framework. We evaluate NSM on the public 3D garment dataset with sewing patterns with diverse garment shapes and categories. Extensive experiments demonstrate that the proposed NSM is capable of representing 3D garments under diverse garment shapes and topologies, realistically reconstructing 3D garments from 2D images with the preserved structure, and accurately manipulating the 3D garment categories, shapes, and topologies, outperforming the state-of-the-art methods by a clear margin. | Accept | This paper was reviewed by four experts in the field. Based on the reviewers' feedback, the decision is to recommend the paper for acceptance to NeurIPS 2022.
The reviewers did raise some valuable concerns that should be addressed in the final camera-ready version of the paper. For example, 1) the evaluation on real-world datasets can be incorporated, 2) more discussion can be added on the reconstruction of garments with non-canonical poses. The authors are encouraged to make the necessary changes to the best of their ability. We congratulate the authors on the acceptance of their paper! | train | [
"2jQovDItzOC",
"ysq4Uf1ZsQC",
"VWN7tqlNT7m",
"q7_JWf_J18J",
"jO4MgvZOeU",
"LUKbgSFm0j",
"9KPvE188mEt",
"oELCji1O1n",
"SC4ePiMzEMv",
"h--w5bBkMuU",
"s32zVx52JP7",
"z2YxgGZpZT8",
"Ku8DI5Frk6",
"DmpNUG2NkR"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors have addressed my concerns thoroughly. After reading other reviews, I decided to change my score to 6. If accepted, please consider adding the details above into the appendix so that the paper is self-contained.",
" The authors have tried to address most of my concerns and I appreciate the hard work... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"oELCji1O1n",
"z2YxgGZpZT8",
"DmpNUG2NkR",
"Ku8DI5Frk6",
"z2YxgGZpZT8",
"z2YxgGZpZT8",
"DmpNUG2NkR",
"Ku8DI5Frk6",
"z2YxgGZpZT8",
"s32zVx52JP7",
"nips_2022_mkEPog9HiV",
"nips_2022_mkEPog9HiV",
"nips_2022_mkEPog9HiV",
"nips_2022_mkEPog9HiV"
] |
nips_2022_sQiEJLPt1Qh | Improved Bounds on Neural Complexity for Representing Piecewise Linear Functions | A deep neural network using rectified linear units represents a continuous piecewise linear (CPWL) function and vice versa. Recent results in the literature estimated that the number of neurons needed to exactly represent any CPWL function grows exponentially with the number of pieces or exponentially in terms of the factorial of the number of distinct linear components. Moreover, such growth is amplified linearly with the input dimension. These existing results seem to indicate that the cost of representing a CPWL function is high. In this paper, we propose much tighter bounds and establish a polynomial time algorithm to find a network satisfying these bounds for any given CPWL function. We prove that the number of hidden neurons required to exactly represent any CPWL function is at most a quadratic function of the number of pieces. In contrast to all previous results, this upper bound is invariant to the input dimension. Besides the number of pieces, we also study the number of distinct linear components in CPWL functions. When such a number is also given, we prove that the quadratic complexity turns into bilinear, which implies a lower neural complexity because the number of distinct linear components is never greater than the minimum number of pieces in a CPWL function. When the number of pieces is unknown, we prove that, in terms of the number of distinct linear components, the neural complexity of any CPWL function exhibits at most polynomial growth for low-dimensional inputs and factorial growth in the worst-case scenario, which is significantly better than existing results in the literature. | Accept | Three reviewers agree that this work meets the bar for acceptance, rating it weak accept, weak accept, and accept. The work provides bounds for approximating continuous piecewise linear functions by ReLU networks and an algorithm. Reviewers praised the novelty and significance, and were positive about clarifications offered during the discussion period, particularly about the time complexity of the algorithm. Hence I am recommending accept. I encourage the authors to still work on the items of the discussion and the promised additions, such as the open-source implementation of their algorithm, for the final version of the manuscript. | val | [
"FuTgJ7JeAVL",
"EmqG-Eodao",
"t_4bcZQ1Udg",
"omm9eSnT0_",
"hrTaPpmcJnf",
"6AUFJIbQ_m2",
"3bocoUo6va8",
"TcqHa7IFev",
"t3PzKX8OvJY",
"ts616n8krl",
"ZcatGefNO0x",
"Ae1S7CQgPdW",
"f_wDlbG3OC7",
"4bMck8ZCeIT"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for letting us know! Again, we thank the reviewer for taking the time and effort to review the paper and provide thoughtful comments.",
" If you see it fit, please consider giving the paper a higher rating to ensure acceptance. Thank you!",
" The authors have addressed all my concerns and I also app... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"t_4bcZQ1Udg",
"hrTaPpmcJnf",
"ts616n8krl",
"hrTaPpmcJnf",
"ZcatGefNO0x",
"4bMck8ZCeIT",
"f_wDlbG3OC7",
"Ae1S7CQgPdW",
"4bMck8ZCeIT",
"f_wDlbG3OC7",
"Ae1S7CQgPdW",
"nips_2022_sQiEJLPt1Qh",
"nips_2022_sQiEJLPt1Qh",
"nips_2022_sQiEJLPt1Qh"
] |
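The CPWL-to-ReLU direction asserted in the abstract of the record above is easy to see for two pieces via the identity max(f, g) = f + relu(g - f); below is a quick numerical check of that identity (our own illustration; the paper's bounds concern general numbers of pieces).

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

# Two affine pieces f(x) = a1 @ x + b1 and g(x) = a2 @ x + b2; their max is
# a 2-piece CPWL function realized exactly by one hidden ReLU unit plus a
# skip connection, via max(f, g) = f + relu(g - f).
a1, b1 = np.array([1.0, -2.0]), 0.5
a2, b2 = np.array([-0.5, 1.0]), -1.0

def cpwl(x):       # direct evaluation of the piecewise maximum
    return max(a1 @ x + b1, a2 @ x + b2)

def relu_net(x):   # the equivalent one-hidden-unit ReLU network
    return (a1 @ x + b1) + relu((a2 - a1) @ x + (b2 - b1))

for _ in range(5):
    x = np.random.randn(2)
    assert np.isclose(cpwl(x), relu_net(x))
```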
nips_2022_hTxYJAKY85 | Learning Graph-embedded Key-event Back-tracing for Object Tracking in Event Clouds | Event-data-based object tracking is attracting increasing attention. Unfortunately, the unusual data structure caused by the unique sensing mechanism poses great challenges in designing downstream algorithms. To tackle such challenges, existing methods usually re-organize raw event data (or event clouds) with the event frame/image representation to adapt to mature RGB data-based tracking paradigms, which compromises the high temporal resolution and sparse characteristics. By contrast, we advocate developing new designs/techniques tailored to the special data structure to realize object tracking. To this end, we make the first attempt to construct a new end-to-end learning-based paradigm that directly consumes event clouds. Specifically, to process a non-uniformly distributed large-scale event cloud efficiently, we propose a simple yet effective density-insensitive downsampling strategy to sample a subset called key-events. Then, we employ a graph-based network to embed the irregular spatio-temporal information of key-events into a high-dimensional feature space, and the resulting embeddings are utilized to predict their target likelihoods via semantic-driven Siamese-matching. Besides, we also propose motion-aware target likelihood prediction, which learns the motion flow to back-trace the potential initial positions of key-events and measures them with the previous proposal. Finally, we obtain the bounding box by adaptively fusing the two intermediate ones separately regressed from the weighted embeddings of key-events by the two types of predicted target likelihoods. Extensive experiments on both synthetic and real event datasets demonstrate the superiority of the proposed framework over state-of-the-art methods in terms of both tracking accuracy and speed. The code is publicly available at https://github.com/ZHU-Zhiyu/Event-tracking. | Accept | The paper received overall positive reviews, and the rebuttal has resolved the reviewers' concerns. The paper proposes a new framework that directly takes raw event clouds as inputs for object tracking. Reviewers agree that this innovation is inspiring. The AC agrees and recommends accepting the paper. | train | [
"ZTzl3iiP2IM",
"141dAUsLyD",
"ff4rgapF5Wt",
"8m5L1iAkaer",
"bzpXEMFL8aB",
"bLsXVZ7bRf",
"MOQWgaR8kjG",
"8IMpOjhuT7O",
"Y94VUVsi-Bn",
"tGVOPVxnuFP",
"Ly-sNealNJ",
"1iYFgRJ8ADE",
"xNVqYUty0g",
"pRkDff6m_1",
"TI5VfXtChsZ",
"puXdzygeIhK",
"Rd863boTA9",
"3-P75O81wmb",
"I4ycaaUPqfN",
... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear **Reviewer kchd**\n\nThanks for your time and efforts in reviewing our submission **3911**, as well as the recognition of our work. We think we have answered your questions clearly and directly. We are also glad to answer them if you have any more questions. Thanks.\n\nThe authors",
" Dear **Reviewer Vqd3*... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"YNkQdJ4Y9Is",
"XZnEKfIhyIP",
"I4ycaaUPqfN",
"bzpXEMFL8aB",
"bLsXVZ7bRf",
"MOQWgaR8kjG",
"Rd863boTA9",
"nips_2022_hTxYJAKY85",
"YNkQdJ4Y9Is",
"YNkQdJ4Y9Is",
"XZnEKfIhyIP",
"XZnEKfIhyIP",
"XZnEKfIhyIP",
"I4ycaaUPqfN",
"I4ycaaUPqfN",
"3-P75O81wmb",
"3-P75O81wmb",
"nips_2022_hTxYJAKY8... |
nips_2022_JpxsSAecqq | OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression | This paper presents a language-powered paradigm for ordinal regression. Existing methods usually treat each rank as a category and employ a set of weights to learn these concepts. These methods are prone to overfitting and usually attain unsatisfactory performance as the learned concepts are mainly derived from the training set. Recent large pre-trained vision-language models like CLIP have shown impressive performance on various visual tasks. In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space. Specifically, we reformulate this task as an image-language matching problem with a contrastive objective, which regards labels as text and obtains a language prototype from a text encoder for each rank. Since prompt engineering for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable prompting method for adapting CLIP for ordinal regression. OrdinalCLIP consists of learnable context tokens and learnable rank embeddings. The learnable rank embeddings are constructed by explicitly modeling numerical continuity, resulting in well-ordered, compact language prototypes in the CLIP space. Once learned, we can save only the language prototypes and discard the huge language model, resulting in zero additional computational overhead compared with the linear head counterpart. Experimental results show that our paradigm achieves competitive performance in general ordinal regression tasks, and gains improvements in few-shot and distribution shift settings for age estimation. The code is available at https://github.com/xk-huang/OrdinalCLIP.
| Accept | The paper proposes a language-powered model for ordinal regression tasks, based on CLIP. Language prototypes are constructed from sentences with rank categories via the CLIP text encoder, and the CLIP model is then optimized by matching language prototypes and image features. To further boost the ordinality, this paper introduces learnable rank prompts obtained by interpolation from the base rank embeddings.
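A sketch of the interpolation idea just described: intermediate rank embeddings are built by linearly interpolating a few base embeddings, so nearby ranks receive nearby prototypes. Shapes, names, and the placement of the base ranks below are our assumptions, not the paper's exact parameterization.

```python
import numpy as np

n_ranks, n_base, dim = 100, 5, 512
base = np.random.randn(n_base, dim)              # learnable base embeddings
base_pos = np.linspace(0, n_ranks - 1, n_base)   # ranks the bases sit at

def rank_embedding(r):
    # Linear interpolation between the two base embeddings bracketing
    # rank r, enforcing numerical continuity along the rank axis.
    j = np.clip(np.searchsorted(base_pos, r, side="right") - 1, 0, n_base - 2)
    w = (r - base_pos[j]) / (base_pos[j + 1] - base_pos[j])
    return (1 - w) * base[j] + w * base[j + 1]

E = np.stack([rank_embedding(r) for r in range(n_ranks)])  # all prototypes
```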
While the proposed approach builds on CoOp, reviewers agree the contribution is significant and original enough for NeurIPS. Regarding the experimental section, on three regression tasks (age estimation, historical image dating, and image aesthetics assessment), the results show good performance compared to baseline models.
Concerns regarding the writing of the manuscript have been raised [PfAX, RX3e], but seem to have been addressed during the rebuttal phase. | train | [
"CInkYtdhlTU",
"Sk7N0LpmQNG",
"R6T-oxvBoH",
"y7rOjysufUg",
"lg9mLdJNRZ",
"DfG-v5xK-02",
"WrvWzrrSj27",
"BSxZLrYS5yX",
"7eK6HUt4TmJ",
"LyRY83YjaE3",
"nCKqkKU7M-w",
"qGm3KgJzWgY",
"lH2ESQDEaI8",
"G6tcqzkAfh6"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer RX3e,\n\nThanks again for your valuable advice and supportive comments! We have responded to your initial comments. We are looking forward to your feedback and will be happy to answer any further questions you may have.",
" Dear Reviewer PfAX,\n\nThanks again for your valuable advice and supportiv... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"G6tcqzkAfh6",
"lH2ESQDEaI8",
"y7rOjysufUg",
"lg9mLdJNRZ",
"qGm3KgJzWgY",
"qGm3KgJzWgY",
"qGm3KgJzWgY",
"qGm3KgJzWgY",
"G6tcqzkAfh6",
"G6tcqzkAfh6",
"lH2ESQDEaI8",
"nips_2022_JpxsSAecqq",
"nips_2022_JpxsSAecqq",
"nips_2022_JpxsSAecqq"
] |
nips_2022_c7sI8S-YIS_ | Unsupervised learning of features and object boundaries from local prediction | A visual system has to learn both which features to extract from images and how to group locations into (proto-)objects. Those two aspects are usually dealt with separately, although predictability is discussed as a cue for both. To incorporate features and boundaries into the same model, we model a layer of feature maps with a pairwise Markov random field model in which each factor is paired with an additional binary variable, which switches the factor on or off. Using one of two contrastive learning objectives, we can learn both the features and the parameters of the Markov random field factors from images without further supervision signals. The features learned by shallow neural networks based on this loss are local averages, opponent colors, and Gabor-like stripe patterns. Furthermore, we can infer connectivity between locations by inferring the switch variables. Contours inferred from this connectivity perform quite well on the Berkeley segmentation database (BSDS500) without any training on contours. Thus, computing predictions across space aids both segmentation and feature learning, and models trained to optimize these predictions show similarities to the human visual system. We speculate that retinotopic visual cortex might implement such predictions over space through lateral connections. | Reject | There was some disagreement on the value of this work. The paper received 1 strong accept, 1 accept and 1 reject. The positive reviewers recommended that the paper be accepted because it proposes a novel unsupervised approach to semantic segmentation and contour detection and because of connections to neuroscience. Some of the main criticisms from the more negative reviewer included a lack of discussion of related work and a lack of sufficient experimental evaluation (in particular comparisons to related work).
The AC found the response of the authors in the rebuttal unconvincing. There is quite a bit of prior work using MRFs for segmentation (yes, prior supervised segmentation work needs to be properly cited, and not just in passing in the results section (lines 216-217)). Regarding the lack of baseline comparisons, I also agree with the reviewer, and in that respect the statement in the paper (lines 236-237) is not convincing ("There are also a few deep neural network models that attempt unsupervised segmentation [e.g. 10, 47, 82], but we were unable to find any that were evaluated on the contour task of BSD500"). It seems that the authors should at least consider running these models on BSD500 or run their models on other datasets. And, as acknowledged by one of the more positive reviewers, the results are not SOTA. As stated in the discussion, the authors plan to continue tweaking the architecture to improve results. The AC thinks this is needed for this work to make a sufficient contribution to the conference, since neither the use of MRFs nor the contrastive loss is novel on its own -- the burden is on the authors to demonstrate that they can engineer a system from these two ideas with at least competitive results.
As for the neuroscience contribution, I am quoting the discussion ("the features learned by our models resemble receptive fields in the retina and primary visual cortex and the contours we extract from connectivity information match contours drawn by human subject fairly well, both without any training towards making them more human-like."). This seems like an unsubstantiated statement since no quantitative analysis is provided. Almost any CNN trained for any reasonable task will return some sort of center-surround, orientation-selective and color-opponency filters (take AlexNet trained on ImageNet, for instance) -- so it is unclear what is new here. For this statement to be meaningful, the authors should formulate null models and demonstrate empirically that their proposed models are more cortex-like than other reasonable alternative models.
Overall because of a lack of sufficient technical or neuroscience contribution, the AC recommends this paper to be rejected. | train | [
"6tVsqHnNz_C",
"_Ffma96J2C",
"ds73YZwr42d",
"_6yMEaMQXAS",
"BsEWyeyawX",
"jczdLNkZp-C",
"1ly81u-Z5U",
"qsG84cF9yTi",
"R6M35nvMgte",
"7XDycZlHogd",
"XJPkU7xsNZ2",
"tb63_P9OTlL",
"x2E-Q2XCQlc",
"dE2aQadmvJ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you! We are glad that we could clarify things and hope to see this paper accepted, too, of course.\n\nAlso, If our submission is accepted, we will be allowed an additional content page for the camera-ready version. Thus, we will most certainly have space to add a paragraph on texture vs. contour grouping.",... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"_Ffma96J2C",
"_6yMEaMQXAS",
"R6M35nvMgte",
"BsEWyeyawX",
"jczdLNkZp-C",
"dE2aQadmvJ",
"dE2aQadmvJ",
"x2E-Q2XCQlc",
"x2E-Q2XCQlc",
"tb63_P9OTlL",
"nips_2022_c7sI8S-YIS_",
"nips_2022_c7sI8S-YIS_",
"nips_2022_c7sI8S-YIS_",
"nips_2022_c7sI8S-YIS_"
] |
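The switch-gated pairwise MRF described in the abstract of the record above can be sketched at a toy scale. The factor form, the switch prior, and the shuffle-based contrastive surrogate below are all assumptions chosen for illustration; the paper's actual factors, objectives, and inference differ in detail.

```python
import torch
import torch.nn.functional as F

def pair_log_factor(f_i, f_j, log_sigma):
    """Log of a Gaussian-like pairwise factor between neighbouring
    feature vectors (an assumed functional form)."""
    return -0.5 * ((f_i - f_j) ** 2).sum(-1) / log_sigma.exp() ** 2

def switch_posterior(log_factor, log_prior_on=0.0):
    """Posterior that the 'switch' is on: the factor competes with a
    constant off state; a high factor value means likely connected."""
    return torch.sigmoid(log_factor + log_prior_on)

def contrastive_local_prediction_loss(feats, log_sigma):
    """Features at true neighbours should score higher than features at
    mismatched (shuffled) locations -- a noise-contrastive surrogate."""
    left, right = feats[:, :-1], feats[:, 1:]  # horizontal neighbours
    pos = pair_log_factor(left, right, log_sigma)
    neg = pair_log_factor(left, right[torch.randperm(right.shape[0])], log_sigma)
    return F.softplus(neg - pos).mean()

feats = torch.randn(8, 16, 32)  # (batch, locations, channels)
log_sigma = torch.zeros(())
loss = contrastive_local_prediction_loss(feats, log_sigma)
# Inferred switch probabilities play the role of connectivity/boundaries.
conn = switch_posterior(pair_log_factor(feats[:, :-1], feats[:, 1:], log_sigma))
```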
nips_2022_2dxsDFaESK | Amortized Projection Optimization for Sliced Wasserstein Generative Models | Seeking informative projecting directions has been an important task in utilizing sliced Wasserstein distance in applications. However, finding these directions usually requires an iterative optimization procedure over the space of projecting directions, which is computationally expensive. Moreover, the computational issue is even more severe in deep learning applications, where computing the distance between two mini-batch probability measures is repeated several times. This nested-loop has been one of the main challenges that prevent the usage of sliced Wasserstein distances based on good projections in practice. To address this challenge, we propose to utilize the \textit{learning-to-optimize} technique or \textit{amortized optimization} to predict the informative direction of any given two mini-batch probability measures. To the best of our knowledge, this is the first work that bridges amortized optimization and sliced Wasserstein generative models. In particular, we derive linear amortized models, generalized linear amortized models, and non-linear amortized models which are corresponding to three types of novel mini-batch losses, named \emph{amortized sliced Wasserstein}. We demonstrate the favorable performance of the proposed sliced losses in deep generative modeling on standard benchmark datasets. | Accept | During the author-reviewer discussions, the authors have addressed most of the concerns raised by the reviewers, leading to original scores being raised. During the reviewer discussions, the disagreement among reviewers about the demonstration of computational benefits was discussed. At this point, the merits of the paper, including the originality of its contribution and the sufficient experimental validation, outweigh the doubts remaining with one of the reviewers. Therefore, the recommendation is to accept this submission.
I would like to thank the authors and reviewers for engaging in discussions.
| train | [
"U6-y_e6ztzC",
"dn4tUqgzYOB",
"Nmn9jKpRtf",
"3kFABmEVUE",
"O1wG5C7FaZz",
"BDisx06EQoQY",
"N4GMkZVkoBR",
"1wH3JElO2-X",
"6uaI_p3qP10",
"ZZlqA_HR0NZ",
"aVawyKWwJZN",
"5k8BKoH0RIN",
"u0XY8zBl-pm",
"HOdqtxHKrWc",
"oYxj3bWpmHC",
"sZ187ct_GZ_",
"fcOM4NJ9mQY",
"GGLMUIQ9N96",
"ynS7fFW-l1... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer hTkP,\n\nWe have addressed your concerns in our responses. Given that the discussion deadline is only a few hours from now and you are the only one that gives a negative score on our paper, we would like to hear your feedback. Please feel free to raise questions if you have other concerns.\n\nBest r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
5
] | [
"cspeNTOajs",
"fcOM4NJ9mQY",
"3kFABmEVUE",
"N4GMkZVkoBR",
"oYxj3bWpmHC",
"1wH3JElO2-X",
"GGLMUIQ9N96",
"u0XY8zBl-pm",
"nips_2022_2dxsDFaESK",
"cspeNTOajs",
"cspeNTOajs",
"cspeNTOajs",
"ynS7fFW-l1s",
"GGLMUIQ9N96",
"fcOM4NJ9mQY",
"fcOM4NJ9mQY",
"nips_2022_2dxsDFaESK",
"nips_2022_2dx... |
nips_2022_O4Q39aQFz0Y | Revisiting Sliced Wasserstein on Images: From Vectorization to Convolution | The conventional sliced Wasserstein is defined between two probability measures that have realizations as \textit{vectors}. When comparing two probability measures over images, practitioners first need to vectorize images and then project them to one-dimensional space by using matrix multiplication between the sample matrix and the projection matrix. After that, the sliced Wasserstein is evaluated by averaging the two corresponding one-dimensional projected probability measures. However, this approach has two limitations. The first limitation is that the spatial structure of images is not captured efficiently by the vectorization step; therefore, the later slicing process becomes harder to gather the discrepancy information. The second limitation is memory inefficiency since each slicing direction is a vector that has the same dimension as the images. To address these limitations, we propose novel slicing methods for sliced Wasserstein between probability measures over images that are based on the convolution operators. We derive \emph{convolution sliced Wasserstein} (CSW) and its variants via incorporating stride, dilation, and non-linear activation function into the convolution operators. We investigate the metricity of CSW as well as its sample complexity, its computational complexity, and its connection to conventional sliced Wasserstein distances. Finally, we demonstrate the favorable performance of CSW over the conventional sliced Wasserstein in comparing probability measures over images and in training deep generative modeling on images. | Accept | The paper presents new slicing methods for the Wasserstein distance between probability measures over images based on convolution operators. This way, memory requirements can be reduced and locality can be better preserved. Experiments are conducted on generative modeling problems.
Reviewers noted that the idea of convolution operators on probability measures over images is natural and simple, yet novel, and they acknowledged the theoretical and practical results. The rebuttals were in-depth and provided additional clarifications.
On the other hand, reviewers note that CSW only defines a pseudo-metric.
Overall this paper is an interesting contribution to the NeurIPS community and should be accepted. | test | [
"M4J8IJ-1Vxe",
"emLl_afR8N3",
"8JQjK40-YY0",
"YJNnI1PF33J",
"tgAbAGLdxN",
"E2biCTnkNhm",
"Tpc0jojym58",
"0RDgSuVH0f6",
"BLUJJs_eIr",
"IKa2WLX5Zf",
"QnHMW9YcBiF",
"qGFDo9XS-5",
"rev8ApFa_bx",
"PHcVTqX0f3B",
"AEVbxvl_-tR",
"36RtXFup99I",
"IbjkUWK8HCU",
"TOr2f2X6pa0",
"4iXAPbjJZ65",... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" Dear Reviewer GwEg,\n\nGiven that we already addressed your concerns, the discussion deadline is only a few hours from now, and you give a negative score on the paper, we would like to hear your feedback on whether our response is sufficient to change your opinion about the paper. Please feel free to raise quest... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
4
] | [
"at4FObFbYmS",
"pyB9oNb056G",
"YJNnI1PF33J",
"E2biCTnkNhm",
"Tpc0jojym58",
"Tpc0jojym58",
"0RDgSuVH0f6",
"DKLz2q7Txwt",
"nips_2022_O4Q39aQFz0Y",
"AEVbxvl_-tR",
"qGFDo9XS-5",
"AEVbxvl_-tR",
"nips_2022_O4Q39aQFz0Y",
"36RtXFup99I",
"36RtXFup99I",
"4iXAPbjJZ65",
"OpFvwfaPC3",
"w1F8zItX... |
nips_2022_r-6Z1SJbCpv | Towards Learning Universal Hyperparameter Optimizers with Transformers | Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution. However, existing methods are restricted to learning from experiments sharing the same set of hyperparameters. In this paper, we introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction when trained on vast tuning data from the wild, such as Google’s Vizier database, one of the world’s largest HPO datasets. Our extensive experiments demonstrate that the OptFormer can simultaneously imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates. Compared to a Gaussian Process, the OptFormer also learns a robust prior distribution for hyperparameter response functions, and can thereby provide more accurate and better calibrated predictions. This work paves the path to future extensions for training a Transformer-based model as a general HPO optimizer. | Accept | In this work, the authors investigate whether Transformers can be used for hyperparameter optimization. The work is interesting, and the authors outline how they frame the problem and solve practical difficulties. The resulting method is shown to be able to learn HPO from historical HPO runs and text-based metadata. Some implementation details are missing, and I would encourage the authors to add details when possible (taking into account the reviewers' feedback). The empirical evaluation of the paper contains a sensible set of baselines and ablation studies. It is also appreciated that the authors put extra effort into open sourcing parts of the code. | train | [
"arGI5wuwzdv",
"hBFl6_-vJ6",
"vuj4zRulnLkz",
"mllX7rCmqTT",
"7NXnu3TTJI1",
"ey56R17-vCN",
"wdSze6tiSim",
"1L3PCyqqjv9",
"c-9IWZ1700p",
"AqzpBG-8MR",
"wQVTpqnC1Rr",
"YvpCk0M4lId",
"sB1LLeGtRb"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nOnce again, thank you very much for your valuable time spent on our submission and your thoughtful reviews!\nAs the Author-Reviewer discussion period is coming to an end soon, we wanted to check in with you if we have addressed your questions and concerns, and if we provided all information requ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"1L3PCyqqjv9",
"vuj4zRulnLkz",
"mllX7rCmqTT",
"7NXnu3TTJI1",
"sB1LLeGtRb",
"YvpCk0M4lId",
"wQVTpqnC1Rr",
"AqzpBG-8MR",
"nips_2022_r-6Z1SJbCpv",
"nips_2022_r-6Z1SJbCpv",
"nips_2022_r-6Z1SJbCpv",
"nips_2022_r-6Z1SJbCpv",
"nips_2022_r-6Z1SJbCpv"
] |
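The text-based interface described in the record above rests on serializing studies into token sequences. The sketch below shows one invented serialization of metadata and trial history; the actual OptFormer vocabulary and format are not reproduced here.

```python
def serialize_study(metadata, trials):
    """Flatten HPO metadata and trial history into one token-friendly
    string, in the spirit of a text-based HPO interface (the exact
    serialization format here is invented for illustration)."""
    header = ", ".join(f"{k}:{v}" for k, v in metadata.items())
    body = " | ".join(
        " ".join(f"{name}={value:.3g}" for name, value in trial["params"].items())
        + f" -> y={trial['objective']:.3g}"
        for trial in trials
    )
    return f"{header} & {body}"

study = serialize_study(
    metadata={"name": "cnn-tuning", "algorithm": "random-search", "goal": "maximize"},
    trials=[
        {"params": {"lr": 1e-3, "dropout": 0.1}, "objective": 0.91},
        {"params": {"lr": 3e-4, "dropout": 0.3}, "objective": 0.87},
    ],
)
print(study)
# A policy head would consume such a string and decode the next
# hyperparameter tokens; a function-prediction head would decode y.
```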
nips_2022_H3o9a6l0wz | Optimal Transport-based Identity Matching for Identity-invariant Facial Expression Recognition | Identity-invariant facial expression recognition (FER) has been one of the challenging computer vision tasks. Since conventional FER schemes do not explicitly address the inter-identity variation of facial expressions, their neural network models still operate depending on facial identity. This paper proposes to quantify the inter-identity variation by utilizing pairs of similar expressions explored through a specific matching process. We formulate the identity matching process as an Optimal Transport (OT) problem. Specifically, to find pairs of similar expressions from different identities, we define the inter-feature similarity as a transportation cost. Then, optimal identity matching to find the optimal flow with minimum transportation cost is performed by Sinkhorn-Knopp iteration. The proposed matching method is not only easy to plug in to other models, but also requires only acceptable computational overhead. Extensive simulations prove that the proposed FER method improves the PCC/CCC performance by up to 10% or more compared to the runner-up on wild datasets. The source code and software demo are available at https://github.com/kdhht2334/ELIM_FER. | Accept | Authors propose a new strategy for a hard problem that reviewers found compelling and novel. The experimental details are complex and we encourage the authors to address the many issues the reviewers raise. | train | [
"rwZNEzH8b-L",
"Rd25pMj1E3e",
"lqE7BRkdnSC",
"dbYGO683V9g",
"7ArDRopBFUE",
"lHckhRYyew",
"9UfQWDxQ_v",
"MbAXqAfQpnS",
"A2itZp8qOaM",
"6XSC0PiI8Jv",
"8Cb8xqZjxXG",
"jt1H1KhpCkl"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for the good comments from the reviewer. Although feature normalization is not applied in the inference phase, we can expect positive performance. The reason is as follows:\n- The mechanism of allowing the model to learn domain-invariant features through training domains (i.e., training IDs in our cas... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"dbYGO683V9g",
"lqE7BRkdnSC",
"9UfQWDxQ_v",
"A2itZp8qOaM",
"lHckhRYyew",
"MbAXqAfQpnS",
"8Cb8xqZjxXG",
"jt1H1KhpCkl",
"6XSC0PiI8Jv",
"nips_2022_H3o9a6l0wz",
"nips_2022_H3o9a6l0wz",
"nips_2022_H3o9a6l0wz"
] |
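The identity-matching step in the record above is plain entropy-regularized optimal transport solved by Sinkhorn-Knopp iteration, which admits a compact sketch. The cost (one minus cosine similarity), the uniform marginals, and the stand-in features are assumptions for illustration.

```python
import torch

def sinkhorn(cost, eps=0.05, n_iters=50):
    """Entropy-regularized OT via Sinkhorn-Knopp: alternately rescale
    rows and columns of the Gibbs kernel until both marginals are
    (approximately) uniform. Returns the transport plan."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)
    nu = torch.full((m,), 1.0 / m)
    K = torch.exp(-cost / eps)
    u = torch.ones(n)
    for _ in range(n_iters):
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    return u.unsqueeze(1) * K * v.unsqueeze(0)  # diag(u) K diag(v)

# Match expression features across two identities; cosine distance
# between random stand-in features plays the role of the transport cost.
a, b = torch.randn(6, 128), torch.randn(6, 128)
cost = 1 - torch.nn.functional.cosine_similarity(a.unsqueeze(1), b.unsqueeze(0), dim=-1)
plan = sinkhorn(cost)
pairs = plan.argmax(dim=1)  # most strongly coupled partner per sample
```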
nips_2022_xdZs1kf-va | I2Q: A Fully Decentralized Q-Learning Algorithm | Fully decentralized multi-agent reinforcement learning has shown great potentials for many real-world cooperative tasks, where the global information, \textit{e.g.}, the actions of other agents, is not accessible. Although independent Q-learning is widely used for decentralized training, the transition probabilities are non-stationary since other agents are updating policies simultaneously, which leads to non-guaranteed convergence of independent Q-learning. To deal with non-stationarity, we first introduce stationary ideal transition probabilities, on which independent Q-learning could converge to the global optimum. Further, we propose a fully decentralized method, I2Q, which performs independent Q-learning on the modeled ideal transition function to reach the global optimum. The modeling of ideal transition function in I2Q is fully decentralized and independent from the learned policies of other agents, helping I2Q be free from non-stationarity and learn the optimal policy. Empirically, we show that I2Q can achieve remarkable improvement in a variety of cooperative multi-agent tasks. | Accept | The paper presents a novel method for dealing with nonstationarity in decentralized multi-agent reinforcement learning (MARL). While there are some concerns about the level of novelty, the approach is interesting and presented well. There are also concerns about the discussion and comparison with the state-of-the-art in decentralized MARL methods. We suggest the authors include comparisons to other decentralized MARL methods (such as the ones below) or state why such comparisons are not reasonable.
Omidshafiei, Shayegan, et al. "Deep decentralized multi-task multi-agent reinforcement learning under partial observability." International Conference on Machine Learning. PMLR, 2017.
Palmer, Gregory, et al. "Lenient Multi-Agent Deep Reinforcement Learning." Proceedings of the International Conference on Autonomous Agents and MultiAgent Systems. 2018.
Lyu, Xueguang, and Christopher Amato. "Likelihood Quantile Networks for Coordinating Multi-Agent Reinforcement Learning." Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems. 2020. | train | [
"yIBJZ3pRWer",
"aAaGEOVeuLZ",
"q6ZLhxQBjH",
"k9F9eRE5G2L",
"8lb_AwMVDv",
"4Y-HNlGbBVq",
"1K2t9p-ytC",
"Mrh9EHqMDu3",
"jmJUJkgbSxX",
"TsixHy2Vct9",
"zdaFcnUy6Fc"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their detailed rebuttal. While the authors responded to all my points, they remain points of weakness in the paper. Particularly as learning a stochastic model is hard and it is unclear how learning a stochastic model would perform in practice in this case. Also, I still find the theoretic... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"8lb_AwMVDv",
"1K2t9p-ytC",
"zdaFcnUy6Fc",
"zdaFcnUy6Fc",
"zdaFcnUy6Fc",
"TsixHy2Vct9",
"jmJUJkgbSxX",
"nips_2022_xdZs1kf-va",
"nips_2022_xdZs1kf-va",
"nips_2022_xdZs1kf-va",
"nips_2022_xdZs1kf-va"
] |
nips_2022_awdyRVnfQKX | HierSpeech: Bridging the Gap between Text and Speech by Hierarchical Variational Inference using Self-supervised Representations for Speech Synthesis | This paper presents HierSpeech, a high-quality end-to-end text-to-speech (TTS) system based on a hierarchical conditional variational autoencoder (VAE) utilizing self-supervised speech representations. Recently, single-stage TTS systems, which directly generate raw speech waveform from text, have been getting interest thanks to their ability in generating high-quality audio within a fully end-to-end training pipeline. However, there is still a room for improvement in the conventional TTS systems. Since it is challenging to infer both the linguistic and acoustic attributes from the text directly, missing the details of attributes, specifically linguistic information, is inevitable, which results in mispronunciation and over-smoothing problem in their synthetic speech. To address the aforementioned problem, we leverage self-supervised speech representations as additional linguistic representations to bridge an information gap between text and speech. Then, the hierarchical conditional VAE is adopted to connect these representations and to learn each attribute hierarchically by improving the linguistic capability in latent representations. Compared with the state-of-the-art TTS system, HierSpeech achieves +0.303 comparative mean opinion score, and reduces the phoneme error rate of synthesized speech from 9.16% to 5.78% on the VCTK dataset. Furthermore, we extend our model to HierSpeech-U, an untranscribed text-to-speech system. Specifically, HierSpeech-U can adapt to a novel speaker by utilizing self-supervised speech representations without text transcripts. The experimental results reveal that our method outperforms publicly available TTS models, and show the effectiveness of speaker adaptation with untranscribed speech. | Accept | all reviewers agree
* the paper is interesting and novel
* the proposed method has solid experiments and good results
* paper is well written
This paper should be accepted to the conference.
"pJ_VBqH82j",
"TTapwyv7Jw8",
"Z4mYN8VBs_6",
"4kMMs09D2vd",
"NeCDaJDx83n",
"NIFbXTtAwKz",
"qbD_Sqw1y8h"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the authors' responses which addressed my questions. I will keep the original rating.",
" We appreciate for your helpful comments and suggestions. We have provided responses to your questions below to address your concerns.\n\n>Q1. Adding PP to the VITS posterior encoder is helpful. The posterior en... | [
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"TTapwyv7Jw8",
"qbD_Sqw1y8h",
"NIFbXTtAwKz",
"NeCDaJDx83n",
"nips_2022_awdyRVnfQKX",
"nips_2022_awdyRVnfQKX",
"nips_2022_awdyRVnfQKX"
] |
nips_2022_XA4ru9mfxTP | Unifying Voxel-based Representation with Transformer for 3D Object Detection | In this work, we present a unified framework for multi-modality 3D object detection, named UVTR. The proposed method aims to unify multi-modality representations in the voxel space for accurate and robust single- or cross-modality 3D detection. To this end, the modality-specific space is first designed to represent different inputs in the voxel feature space. Different from previous work, our approach preserves the voxel space without height compression to alleviate semantic ambiguity and enable spatial connections. To make full use of the inputs from different sensors, the cross-modality interaction is then proposed, including knowledge transfer and modality fusion. In this way, geometry-aware expressions in point clouds and context-rich features in images are well utilized for better performance and robustness. The transformer decoder is applied to efficiently sample features from the unified space with learnable positions, which facilitates object-level interactions. In general, UVTR presents an early attempt to represent different modalities in a unified framework. It surpasses previous work in single- or multi-modality entries. The proposed method achieves leading performance in the nuScenes test set for both object detection and the following object tracking task. Code is made publicly available at https://github.com/dvlab-research/UVTR. | Accept | The paper proposes a multimodal system for 3D object detection, and 3 expert reviewers vote for its acceptance, after rebuttal, based on their appreciation of the good improvements brought by multimodality and due to various interesting details of the system.
I agree with reviewer Bb8v that the writing should be polished, starting from the abstract, e.g. "Benefit from the unified manner, cross-modality interaction is then proposed to make full use of inherent properties from different sensors" -- this sentence reads very poorly. | train | [
"EBOzXUvZtw7",
"QtD_CnMjGLD",
"sDIp2eYg7a",
"hA1nrdY3rbl",
"4i8199g5Oi",
"Ie7f6IIvjg2",
"5TbNmbWhlD",
"q-mbK1IORWd",
"3XfmiG9QTh",
"Mu0NI3SoEQI",
"14UrmhefQzI",
"M0fBxfEC_lE",
"yiQBcypy7wO",
"T_1IOSu7Qy3",
"HbpSkyoYtaQ",
"PapyNMDTWHx"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer Bb8v,\n\nWe follow your suggestions and keep polishing the paper. Because the revision cannot be uploaded now, we attach the revision of Section 3.2 below. Hope it can address your remaining concern.\n\n**3.2 Cross-modality Interaction**\n\nWith the unified representation in space $\\mathbf{V}_I$ an... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"hA1nrdY3rbl",
"14UrmhefQzI",
"hA1nrdY3rbl",
"5TbNmbWhlD",
"Ie7f6IIvjg2",
"HbpSkyoYtaQ",
"PapyNMDTWHx",
"PapyNMDTWHx",
"HbpSkyoYtaQ",
"HbpSkyoYtaQ",
"T_1IOSu7Qy3",
"nips_2022_XA4ru9mfxTP",
"nips_2022_XA4ru9mfxTP",
"nips_2022_XA4ru9mfxTP",
"nips_2022_XA4ru9mfxTP",
"nips_2022_XA4ru9mfxTP... |
nips_2022_FzdmrTUyZ4g | Monte Carlo Tree Descent for Black-Box Optimization | The key to Black-Box Optimization is to efficiently search through input regions with potentially widely-varying numerical properties, to achieve low-regret descent and fast progress toward the optima. Monte Carlo Tree Search (MCTS) methods have recently been introduced to improve Bayesian optimization by computing better partitioning of the search space that balances exploration and exploitation. Extending this promising framework, we study how to further integrate sample-based descent for faster optimization. We design novel ways of expanding Monte Carlo search trees, with new descent methods at vertices that incorporate stochastic search and Gaussian Processes. We propose the corresponding rules for balancing progress and uncertainty, branch selection, tree expansion, and backpropagation. The designed search process puts more emphasis on sampling for faster descent and uses localized Gaussian Processes as auxiliary metrics for both exploitation and exploration. We show empirically that the proposed algorithms can outperform state-of-the-art methods on many challenging benchmark problems. | Accept | This paper proposes a novel combination of Bayesian optimization and Monte Carlo Tree Search for more sample-efficient black-box optimization. The method is adding complexity, but the empirical results are thorough and show a clear benefit.
The reviewers on this paper did not come to a unanimous decision, but the clear majority advocated for acceptance, and I concur, especially because reviewer 6Fro did not spell out their concerns with sufficient concreteness.
"yIflUXxssCW",
"ZczyHKspmTU",
"PkZP1TxeQq",
"ol-8hjV4MX8",
"KcKoOpHhgK2",
"BuCCRbEp9c",
"bEjGmK4LDlp",
"t4f9SIZLbXR",
"laAYALK9Hiw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After reading the responses by the authors, several issues have been addressed.\nHowever, I still think the motivation of this paper is not convincing enough for me. \nDue to the sufficient experimental results, I'm increasing my score to weak accept. ",
" Thank you for your feedback. We address the specific qu... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
2,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
2,
5,
4
] | [
"ZczyHKspmTU",
"laAYALK9Hiw",
"t4f9SIZLbXR",
"bEjGmK4LDlp",
"BuCCRbEp9c",
"nips_2022_FzdmrTUyZ4g",
"nips_2022_FzdmrTUyZ4g",
"nips_2022_FzdmrTUyZ4g",
"nips_2022_FzdmrTUyZ4g"
] |
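The branch-selection logic alluded to in the record above can be illustrated in a few lines: descend the tree by an optimism-adjusted score that trades off a node's best observed value against its visit count, then backpropagate the outcome of a local descent step. The score form and constants below are generic UCB-style assumptions, not the paper's specific rules.

```python
import math

class Node:
    def __init__(self, best=float("inf")):
        self.best, self.visits, self.children = best, 1, []

def select_leaf(root, c=1.0):
    """Walk down the tree picking the child with the best
    lower-confidence score (minimization): the child's best observed
    value, discounted by how rarely the child has been visited."""
    node = root
    while node.children:
        node = min(
            node.children,
            key=lambda ch: ch.best - c * math.sqrt(math.log(node.visits) / ch.visits),
        )
    return node

def backpropagate(path, value):
    for node in path:
        node.visits += 1
        node.best = min(node.best, value)

# Tiny illustration: a root with two children, one clearly better.
root = Node()
root.children = [Node(best=1.0), Node(best=2.0)]
leaf = select_leaf(root)
backpropagate([root, leaf], value=0.5)  # value from a local descent step
```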
nips_2022_E3LgJdPEkP | A Mean-Field Game Approach to Cloud Resource Management with Function Approximation | Reinforcement learning (RL) has gained increasing popularity for resource management in cloud services such as serverless computing. As self-interested users compete for shared resources in a cluster, the multi-tenancy nature of serverless platforms necessitates multi-agent reinforcement learning (MARL) solutions, which often suffer from severe scalability issues. In this paper, we propose a mean-field game (MFG) approach to cloud resource management that is scalable to a large number of users and applications and incorporates function approximation to deal with the large state-action spaces in real-world serverless platforms. Specifically, we present an online natural actor-critic algorithm for learning in MFGs compatible with various forms of function approximation. We theoretically establish its finite-time convergence to the regularized Nash equilibrium under linear function approximation and softmax parameterization. We further implement our algorithm using both linear and neural-network function approximations, and evaluate our solution on an open-source serverless platform, OpenWhisk, with real-world workloads from production traces. Experimental results demonstrate that our approach is scalable to a large number of users and significantly outperforms various baselines in terms of function latency and resource utilization efficiency. | Accept | I agree with the reviewers that this is a well-written paper on an interesting application of mean-field games. The paper is a nice blend of theoretical developments and experimental evaluations. I believe that it will be well-received by the NeurIPS community and recommend acceptance. | test | [
"aTsMvAVdXqo",
"AKd5hb8bPg4",
"Srpze0q2Fmm",
"n5ThODp-KED",
"uP1olfpnIbL",
"tl512tHeJCH",
"RXmjQBk6v9f",
"nrJ8aC7-Hv",
"yu1jcSF6t6s",
"w0DRwt-4Kp",
"rAoAVpOCKT_",
"dgUbSxAFuKG",
"9aQuAoHcu9R",
"4cqXLWViweC",
"wrxrVmFBlL"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their detailed reply. I expected the paper to discuss cloud resource management in more detail since the title is \"A Mean-Field Game Approach to Cloud Resource Management with ***\". However, putting a lot of scenario descriptions in the appendix does not help the reader to understand h... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
2
] | [
"RXmjQBk6v9f",
"Srpze0q2Fmm",
"n5ThODp-KED",
"uP1olfpnIbL",
"tl512tHeJCH",
"wrxrVmFBlL",
"nrJ8aC7-Hv",
"9aQuAoHcu9R",
"w0DRwt-4Kp",
"dgUbSxAFuKG",
"4cqXLWViweC",
"nips_2022_E3LgJdPEkP",
"nips_2022_E3LgJdPEkP",
"nips_2022_E3LgJdPEkP",
"nips_2022_E3LgJdPEkP"
] |
nips_2022__vfyuJaXFug | Translation-equivariant Representation in Recurrent Networks with a Continuous Manifold of Attractors | Equivariant representation is necessary for the brain and artificial perceptual systems to faithfully represent the stimulus under some (Lie) group transformations. However, it remains unknown how recurrent neural circuits in the brain represent the stimulus equivariantly, nor the neural representation of abstract group operators. The present study uses a one-dimensional (1D) translation group as an example to explore the general recurrent neural circuit mechanism of the equivariant stimulus representation. We found that a continuous attractor network (CAN), a canonical neural circuit model, self-consistently generates a continuous family of stationary population responses (attractors) that represents the stimulus equivariantly. Inspired by the Drosophila's compass circuit, we found that the 1D translation operators can be represented by extra speed neurons besides the CAN, where speed neurons' responses represent the moving speed (1D translation group parameter), and their feedback connections to the CAN represent the translation generator (Lie algebra). We demonstrated that the network responses are consistent with experimental data. Our model for the first time demonstrates how recurrent neural circuitry in the brain achieves equivariant stimulus representation. | Accept | The paper constructs recurrent neural circuits that represent stimuli equivariantly with respect to a given symmetry, taking the example of the 1D translation group. Most Reviewers were positively impressed by the general framing of the problem in terms of group theory and the elucidation of a connection between Continuous Attractor Networks and Lie groups.
The main weakness pointed out by the Reviewers was that the high-level exposition of the paper tended to conceal the distinction between novel original contributions and the previous literature. This concern was however resolved in the rebuttals in a way that seems to satisfy all Reviewers.
Further concerns about the biological plausibility of the proposed general construction and the potential difficulty of extending it beyond the restricted 1D case analyzed in the paper were also raised by several comments in the reviews, but also in this case the clarifications in the rebuttals seemed to convince Reviewers, who by and large expressed optimistic views about the significance of the work, given its potential for future developments and its applicability to modeling other neural systems.
"Ls4JqrnHKlB",
"akCK7uZpfe",
"spPuxrMYP2",
"yPoM_UHVRt4",
"LTHpjhS3SZ7",
"EYWPN5QzOB0",
"LideWf1GSS0L",
"q3tERdXMRaL",
"Do5YKCcthOK",
"pSqGTJ4eHjR",
"-hBUcuS8RZH",
"B_XkWhXjuzF",
"BhssC8rjE_",
"wpRyVOiyDSi"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your responses. I appreciate the additional discussions that the authors have promised. It is clear to me that this submission is an important contribution.",
" Thanks for the reviewer's suggestion. We will definitely comment on the first paper in our revised manuscript.\nAlso, the temporal scalin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
2
] | [
"EYWPN5QzOB0",
"spPuxrMYP2",
"LTHpjhS3SZ7",
"pSqGTJ4eHjR",
"BhssC8rjE_",
"B_XkWhXjuzF",
"-hBUcuS8RZH",
"-hBUcuS8RZH",
"wpRyVOiyDSi",
"wpRyVOiyDSi",
"nips_2022__vfyuJaXFug",
"nips_2022__vfyuJaXFug",
"nips_2022__vfyuJaXFug",
"nips_2022__vfyuJaXFug"
] |
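The continuous attractor network at the core of the record above follows textbook ring-network dynamics, which a short simulation makes concrete. All parameter values and the divisive-normalization transfer function below are illustrative choices, not the paper's fitted model.

```python
import numpy as np

def can_step(u, dt=0.1, tau=1.0, k=0.05, J0=1.0, a=0.5):
    """One Euler step of a 1D ring continuous attractor network:
    tau du/dt = -u + W * r, with divisive-normalization firing rates.
    Parameter values are illustrative, not fitted."""
    n = u.size
    x = np.linspace(-np.pi, np.pi, n, endpoint=False)
    dx = np.angle(np.exp(1j * (x[:, None] - x[None, :])))  # wrap to (-pi, pi]
    W = J0 * np.exp(-dx**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)
    r = np.maximum(u, 0) ** 2 / (1 + k * np.sum(np.maximum(u, 0) ** 2))
    return u + dt / tau * (-u + (2 * np.pi / n) * W @ r)

# A small bump of activity relaxes toward a stationary profile; adding
# asymmetric input from "speed" neurons would translate the bump along
# the ring without changing its shape, i.e., act as the 1D translation
# operator discussed in the abstract.
u = np.exp(-np.linspace(-np.pi, np.pi, 128, endpoint=False) ** 2)
for _ in range(200):
    u = can_step(u)
```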
nips_2022_RO0wSr3R7y- | 3DILG: Irregular Latent Grids for 3D Generative Modeling | We propose a new representation for encoding 3D shapes as neural fields. The representation is designed to be compatible with the transformer architecture and to benefit both shape reconstruction and shape generation. Existing works on neural fields are grid-based representations with latents being defined on a regular grid. In contrast, we define latents on irregular grids which facilitates our representation to be sparse and adaptive. In the context of shape reconstruction from point clouds, our shape representation built on irregular grids improves upon grid-based methods in terms of reconstruction accuracy. For shape generation, our representation promotes high-quality shape generation using auto-regressive probabilistic models. We show different applications that improve over the current state of the art. First, we show results of probabilistic shape reconstruction from a single higher resolution image. Second, we train a probabilistic model conditioned on very low resolution images. Third, we apply our model to category-conditioned generation. All probabilistic experiments confirm that we are able to generate detailed and high quality shapes to yield the new state of the art in generative 3D shape modeling. | Accept | All reviewers agree to accept this work, which presents a creative new shape representation for 3D generative modeling. The negative aspects raised by the reviewers are fairly minor, and most were addressed during the rebuttal phase (please be sure to incorporate all comments/additional results into the final camera-ready version). During the post-rebuttal discussion, reviewers suggested nominating for a spotlight given the new results on realistic data. | train | [
"77169KRl2XV",
"8cmN-0rM6bN",
"fqF8YzxvzX0",
"7Uw9QhaY9Ay",
"kCgZYCvsK5P",
"0bvJKyXZcD",
"kB8PK959tdT8",
"EXOojKbopJ5",
"QXa1WljqFPg",
"YzLqcqNaKn2",
"xMJOIOzK9uU",
"rY6lB1VEEQe",
"HMyyucKlkJk",
"4Tqf7Ah7MRr"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nThanks again for the review. Since the author/reviewer discussion phase is ending tomorrow, we would like to ask if our comments helped clarify your concerns or if there are additional questions we can help with.",
" We thank the reviewers for their comments. We are happy to see that reviewer... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"nips_2022_RO0wSr3R7y-",
"nips_2022_RO0wSr3R7y-",
"7Uw9QhaY9Ay",
"4Tqf7Ah7MRr",
"0bvJKyXZcD",
"HMyyucKlkJk",
"EXOojKbopJ5",
"rY6lB1VEEQe",
"YzLqcqNaKn2",
"xMJOIOzK9uU",
"nips_2022_RO0wSr3R7y-",
"nips_2022_RO0wSr3R7y-",
"nips_2022_RO0wSr3R7y-",
"nips_2022_RO0wSr3R7y-"
] |
nips_2022_N0tKCpMhA2 | Coresets for Vertical Federated Learning: Regularized Linear Regression and $K$-Means Clustering | Vertical federated learning (VFL), where data features are stored in multiple parties distributively, is an important area in machine learning. However, the communication complexity for VFL is typically very high. In this paper, we propose a unified framework by constructing \emph{coresets} in a distributed fashion for communication-efficient VFL. We study two important learning tasks in the VFL setting: regularized linear regression and $k$-means clustering, and apply our coreset framework to both problems. We theoretically show that using coresets can drastically alleviate the communication complexity, while nearly maintain the solution quality. Numerical experiments are conducted to corroborate our theoretical findings. | Accept | The reviewers have converged around the idea that the paper proposes an interesting approach to vertical federated learning; they also conclude that the authors have provided replies to reviews that answered questions and provided useful clarifications, which encourages the acceptance of the paper.
I will stress the need for the authors to properly include further updates in the camera-ready version of the paper; in their reply, they indeed make several promises to reviewers, and it is important that such updates be properly included (last comment to NZai, intermediary comment to XK2T).
"mr7ZzPRVDv",
"q08fGS38zS",
"biHk5a9hUv2",
"UyOeA-P_A2k",
"R6Tx3FCcP_t",
"-o33SwYDY8",
"aft5knh6wgd",
"vTRGBXodyS0",
"peLxfWfXQAi",
"RjJkufbFxlR"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot for your support! We will address your comments in the updated version.",
" Thanks a lot for your support!",
" I have read the author's rebuttal and I keep my score. \n\n",
" Thanks for the response, which addresses my concerns and questions. I will maintain my scores. \n\nBTW, it would be bett... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"UyOeA-P_A2k",
"biHk5a9hUv2",
"R6Tx3FCcP_t",
"aft5knh6wgd",
"RjJkufbFxlR",
"peLxfWfXQAi",
"vTRGBXodyS0",
"nips_2022_N0tKCpMhA2",
"nips_2022_N0tKCpMhA2",
"nips_2022_N0tKCpMhA2"
] |
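The coreset machinery in the record above builds on sensitivity-based importance sampling, which is easy to sketch in the centralized case. The sensitivity proxy and rough initial clustering below are standard simplifying assumptions; the paper's contribution is constructing such coresets distributively when features are split across parties.

```python
import numpy as np

def kmeans_coreset(X, centers, m, rng=np.random.default_rng(0)):
    """Importance-sample a weighted coreset for k-means using a crude
    sensitivity upper bound from a rough initial clustering (a generic
    single-machine sketch, not the paper's VFL construction)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, k)
    closest = d2.min(axis=1)
    sens = closest / closest.sum() + 1.0 / len(X)  # crude sensitivity proxy
    prob = sens / sens.sum()
    idx = rng.choice(len(X), size=m, p=prob)
    weights = 1.0 / (m * prob[idx])                # unbiased reweighting
    return X[idx], weights

X = np.random.randn(10000, 5)
rough = X[np.random.default_rng(1).choice(len(X), size=8)]  # rough centers
C, w = kmeans_coreset(X, rough, m=200)
# Any weighted k-means solver run on (C, w) now approximates the cost
# on the full X, at a fraction of the communication/computation.
```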
nips_2022_5pvB6IH_9UZ | CHIMLE: Conditional Hierarchical IMLE for Multimodal Conditional Image Synthesis | A persistent challenge in conditional image synthesis has been to generate diverse output images from the same input image despite only one output image being observed per input image. GAN-based methods are prone to mode collapse, which leads to low diversity. To get around this, we leverage Implicit Maximum Likelihood Estimation (IMLE) which can overcome mode collapse fundamentally. IMLE uses the same generator as GANs but trains it with a different, non-adversarial objective which ensures each observed image has a generated sample nearby. Unfortunately, to generate high-fidelity images, prior IMLE-based methods require a large number of samples, which is expensive. In this paper, we propose a new method to get around this limitation, which we dub Conditional Hierarchical IMLE (CHIMLE), which can generate high-fidelity images without requiring many samples. We show CHIMLE significantly outperforms the prior best IMLE, GAN and diffusion-based methods in terms of image fidelity and mode coverage across four tasks, namely night-to-day, 16x single image super-resolution, image colourization and image decompression. Quantitatively, our method improves Fréchet Inception Distance (FID) by 36.9% on average compared to the prior best IMLE-based method, and by 27.5% on average compared to the best non-IMLE-based general-purpose methods. More results and code are available on the project website at https://niopeng.github.io/CHIMLE/. | Accept | This paper introduces a conditional image synthesis method based on Implicit Maximum Likelihood Estimation (IMLE). Compared to previous work CIMLE, the paper has introduced a divide-and-conquer method to accurately estimate latent code without evaluating many samples. The paper has received consistently positive reviews. Reviewers found the idea intuitive and interesting, and the method effective (especially compared to CIMLE). The rebuttal further addressed the concerns and included comparisons with the same backbone architecture and additional baselines and evaluations. The AC agreed with the reviewers’ consensus and recommended accepting the paper.
| train | [
"AXC-myKlLiT",
"rmRMyYCXHw",
"YZuYGmxH4B-",
"15jk5Et9mY",
"WpYb705hDdW",
"CYs5k9tfwTs",
"NlhJehHcrVZ",
"P-hRK30QSiV",
"jeny33SzRsa",
"v9PQrIvts9KW",
"9zDuvpVFTwr",
"0gUioh31eDT",
"CFbRucya0p",
"baCBlUW6kv",
"3YnjSVpya-",
"YRp69Ep-Qn-",
"ND3Oh4C5ssH"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **For Q1**: We want to remind the reviewer that while the generation procedure is hierarchical, the number of such levels of hierarchy is a lot smaller than the number of parallel generated samples, so it does not compromise generation speed. On the contrary, the hierarchical generation procedure greatly improves... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"rmRMyYCXHw",
"v9PQrIvts9KW",
"WpYb705hDdW",
"NlhJehHcrVZ",
"jeny33SzRsa",
"v9PQrIvts9KW",
"9zDuvpVFTwr",
"0gUioh31eDT",
"ND3Oh4C5ssH",
"YRp69Ep-Qn-",
"3YnjSVpya-",
"baCBlUW6kv",
"nips_2022_5pvB6IH_9UZ",
"nips_2022_5pvB6IH_9UZ",
"nips_2022_5pvB6IH_9UZ",
"nips_2022_5pvB6IH_9UZ",
"nips... |
nips_2022_xK6wRfL2mv7 | Sharpness-Aware Training for Free | Modern deep neural networks (DNNs) have achieved state-of-the-art performances but are typically over-parameterized. The over-parameterization may result in undesirably large generalization error in the absence of other customized training strategies. Recently, a line of research under the name of Sharpness-Aware Minimization (SAM) has shown that minimizing a sharpness measure, which reflects the geometry of the loss landscape, can significantly reduce the generalization error. However, SAM-like methods incur a two-fold computational overhead of the given base optimizer (e.g. SGD) for approximating the sharpness measure. In this paper, we propose Sharpness-Aware Training for Free, or SAF, which mitigates the sharp landscape at almost zero additional computational cost over the base optimizer. Intuitively, SAF achieves this by avoiding sudden drops in the loss in the sharp local minima throughout the trajectory of the updates of the weights. Specifically, we suggest a novel trajectory loss, based on the KL-divergence between the outputs of DNNs with the current weights and past weights, as a replacement of the SAM's sharpness measure. This loss captures the rate of change of the training loss along the model's update trajectory. By minimizing it, SAF ensures the convergence to a flat minimum with improved generalization capabilities. Extensive empirical results show that SAF minimizes the sharpness in the same way that SAM does, yielding better results on the ImageNet dataset with essentially the same computational cost as the base optimizer. | Accept | This paper proposes a novel optimization method, called SAF, for reaching flat minima. The main claim is that the proposed method does not suffer from the computational overhead of SAM-like methods, which is typically 2x that of SGD. The proposed method is based on a novel loss that minimizes the KL-divergence between the outputs of the network with the previous weights and the current weights. This allows avoiding the extra computational overhead of SAM. The authors show that their method can achieve better empirical results in a compute-constrained regime.
Reviewers are in agreement about the novelty of the proposed method and that the paper is well-written and easy to follow. The empirical results also show a clear advantage over other methods in a compute-constrained regime. The main concern about accepting the paper is due to some mismatch between reported numbers in the paper and that of original SAM paper as well as lack of clarity on some experimental details. I am leaning towards acceptance given the advantages mentioned above but I strongly recommend authors (and want to see this implemented for camera-ready) to at the very least make the following changes (as well as the ones proposed by reviewers) to adhere to clarity and reproducibility standards of publications in computer science and increase the impact of their paper:
1- In Table 1 (and perhaps Figure 1?), add ALL reported results for SAM, including but not limited to: a) the results reported in the original SAM paper, b) the results reported by other papers running SAM themselves, and c) the results of running SAM by the authors.
2- In Table 1 (and perhaps everywhere), always make it extra easy for the reader to know whether the number is taken from another paper or is a result you reproduced yourself.
3- Add all experimental details (including the augmentation techniques used and which hyper-parameters are tuned) for all experiments. When there are mismatches that make the comparison more difficult, this becomes even more important and allows the reader to make up their mind about where the improvement might be coming from.
4- If you have concerns about the original SAM paper's numbers not being reproduced by other papers, you can add them as a footnote or discussion, but it is still important to report them.
"2zAxlUm3Kti",
"L3jgQMXH15g",
"vIKGV4z6l5x",
"mEX6EDV6Cx",
"TkvVkaa4Yqw",
"ty2xiB6ifDX",
"HJMF1c05cW",
"m87V0qVB34h",
"ZSyF0kYHZL",
"YB8WeDS1UhL",
"SQoPLdPvB86",
"TJnObd90-Lt",
"s8jyIpPVK5j",
"t0AEd_w-uRz",
"RJaATRurbu0",
"5xZbbcF1Ams",
"3qx2-s-ERgq",
"ahZOHFzJn98",
"EBEODdjDYMU"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for the response and the additional experiments.",
" Continued of A11 \n\nFor the gap between the reported results of [2,37] and [21], the reported results of ViT-S/32-SAM are precisely listed as\n\n|Vit-S/32-SAM Reported in [2] | Vit-S/32-SAM Reported in [37] |Vit-S/32-SAM Reported in [21] |Vit-S/32-SAM... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
3,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
2
] | [
"5xZbbcF1Ams",
"vIKGV4z6l5x",
"HJMF1c05cW",
"TkvVkaa4Yqw",
"ty2xiB6ifDX",
"TJnObd90-Lt",
"TJnObd90-Lt",
"ZSyF0kYHZL",
"SQoPLdPvB86",
"RJaATRurbu0",
"tXCVWY5tWaH",
"s8jyIpPVK5j",
"t0AEd_w-uRz",
"ahZOHFzJn98",
"EBEODdjDYMU",
"3qx2-s-ERgq",
"nips_2022_xK6wRfL2mv7",
"nips_2022_xK6wRfL2... |
nips_2022_m8YYs8nJF3T | Distributional Convergence of the Sliced Wasserstein Process | Motivated by the statistical and computational challenges of computing Wasserstein distances in high-dimensional contexts, machine learning researchers have defined modified Wasserstein distances based on computing distances between one-dimensional projections of the measures. Different choices of how to aggregate these projected distances (averaging, random sampling, maximizing) give rise to different distances, requiring different statistical analyses. We define the \emph{Sliced Wasserstein Process}, a stochastic process defined by the empirical Wasserstein distance between projections of empirical probability measures to all one-dimensional subspaces, and prove a uniform distributional limit theorem for this process. As a result, we obtain a unified framework in which to prove sample complexity and distributional limit results for all Wasserstein distances based on one-dimensional projections. We illustrate these results on a number of examples where no distributional limits were previously known. | Accept | After the rebuttal period, the reviewers have come to an agreement that the paper is novel and interesting and that the contributions are significant. The rebuttal also addressed most of the concerns, though I agree with reviewer 84Jv on the comment that experiments on non-compact settings would be a plus to see the limits of the theory. Overall, I believe this is a nice continuation of the prior art, and I recommend acceptance of the paper. | train | [
"cwgP2zx2Vii",
"k9xW54zb86z",
"o4YFl0mqXrkK",
"F6xLuGl4f0",
"4mlQmbiPOug",
"19M2B1exfom",
"JiMF7Fl-YVz",
"3GlabsjbJwT",
"9xtqoZl2AD1",
"RXqFTdE_Yl",
"C9TBUJ-yeW4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your detailed response and the changes made to the document (especially the clarification for the example with $p=1$). Also, I agree with you on the particular issue raised in 2, thanks for pointing out my misunderstanding. \n\nAfter reading the other reviews and responses, the authors' go... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"4mlQmbiPOug",
"F6xLuGl4f0",
"JiMF7Fl-YVz",
"C9TBUJ-yeW4",
"RXqFTdE_Yl",
"9xtqoZl2AD1",
"3GlabsjbJwT",
"nips_2022_m8YYs8nJF3T",
"nips_2022_m8YYs8nJF3T",
"nips_2022_m8YYs8nJF3T",
"nips_2022_m8YYs8nJF3T"
] |
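The object studied in the record above, the empirical Wasserstein distance between one-dimensional projections viewed as a process indexed by the direction, is simple to simulate. The sketch below evaluates it on a finite set of directions; averaging or maximizing over directions recovers the sliced and max-sliced distances the limit theorem covers.

```python
import numpy as np

def sw_process(x, y, directions, p=1):
    """Empirical p-Wasserstein distance between 1D projections of two
    samples, evaluated for every direction: one draw of the (finite-
    dimensional marginals of the) Sliced Wasserstein Process."""
    vals = []
    for theta in directions:
        xp, yp = np.sort(x @ theta), np.sort(y @ theta)
        vals.append(np.mean(np.abs(xp - yp) ** p) ** (1 / p))
    return np.array(vals)

rng = np.random.default_rng(0)
d, n = 5, 2000
dirs = rng.normal(size=(64, d))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
W = sw_process(rng.normal(size=(n, d)), rng.normal(size=(n, d)) + 0.2, dirs)
# Averaging W gives sliced Wasserstein; max(W) gives max-sliced; the
# paper's uniform limit theorem covers all such functionals of theta.
print(W.mean(), W.max())
```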
nips_2022_Iksst2czYoB | Stochastic Multiple Target Sampling Gradient Descent | Sampling from an unnormalized target distribution is an essential problem with many applications in probabilistic inference. Stein Variational Gradient Descent (SVGD) has been shown to be a powerful method that iteratively updates a set of particles to approximate the distribution of interest. Furthermore, when analysing its asymptotic properties, SVGD reduces exactly to a single-objective optimization problem and can be viewed as a probabilistic version of this single-objective optimization problem. A natural question then arises: ``Can we derive a probabilistic version of the multi-objective optimization?''. To answer this question, we propose Stochastic Multiple Target Sampling Gradient Descent (MT-SGD), enabling us to sample from multiple unnormalized target distributions. Specifically, our MT-SGD conducts a flow of intermediate distributions gradually orienting to multiple target distributions, which allows the sampled particles to move to the joint high-likelihood region of the target distributions. Interestingly, the asymptotic analysis shows that our approach reduces exactly to the multiple-gradient descent algorithm for multi-objective optimization, as expected. Finally, we conduct comprehensive experiments to demonstrate the merit of our approach to multi-task learning. | Accept | The paper presents a particle-based method to approximate multiple target distributions simultaneously. The proposed particle-updating dynamics is shown to decrease the KL to every target (makes a Pareto improvement), and the resulting particles prefer the intersection of all targets (Pareto common), which distinguishes it from a related method (which prefers the Pareto front). Although the technical framework is not completely novel (follows MGDA (MOO)), reviewers agree that the proposed method for multi-distribution approximation is inspiring and that the paper implements the idea well.
Nevertheless, there still remain a few imprecise statements that the authors need to address upon acceptance.
1. Precise meaning of Eqs. (1) and (2). In general, it is impossible for a single $q$ to simultaneously minimize each individual KL. Reviewer yFUe mentioned this, but the reply did not clearly resolve it. The notation/formulation needs clarification even if it follows previous work.
2. The equation below Line 114 is only true if $\phi$ is in an RKHS.
3. Some statements on MOO-SVGD might be improper. It seems contradictory that MOO-SVGD "updates the particles individually and independently" while it also "employs a repulsive term", which is an interaction among particles. A more precise description is expected for the claim that MOO-SVGD "encourages the particle diversity without any theoretical-guaranteed principle to control the repulsive term": MOO-SVGD is not originally intended for multi-distribution approximation, and it also provides a stationary distribution characterization.
| train | [
"cupTJmaE6q4",
"04yPUOg10rQ",
"PsAWGzbNwdt",
"poWZ6ak3Ou",
"Cyqoskf1jqP",
"i5XNcvNIhB4",
"koGlyUM6G-X",
"1Bfufgou0Nx",
"0lgSxt6XD0l",
"y8Gdyrr4TB9",
"XhH5KTWEHfj",
"h8q2OUzC4GI",
"x2IxHQwXoKQ",
"1Z-7GKS_AIC",
"ViKY-WVuwGb",
"yn0fQ1UXHHh",
"ZpUQy03IVbA"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for spending their time evaluating our paper and providing detailed feedbacks. We will include the intuition behind our objective function in the next version of our manuscript, as you suggested.\n\nBest,\n\nAuthors.\n",
" Thank you for recognizing our efforts and providing c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"i5XNcvNIhB4",
"PsAWGzbNwdt",
"0lgSxt6XD0l",
"Cyqoskf1jqP",
"koGlyUM6G-X",
"1Bfufgou0Nx",
"y8Gdyrr4TB9",
"x2IxHQwXoKQ",
"1Z-7GKS_AIC",
"XhH5KTWEHfj",
"h8q2OUzC4GI",
"ZpUQy03IVbA",
"yn0fQ1UXHHh",
"ViKY-WVuwGb",
"nips_2022_Iksst2czYoB",
"nips_2022_Iksst2czYoB",
"nips_2022_Iksst2czYoB"
... |
nips_2022_wjSHd5nDeo | Multi-Sample Training for Neural Image Compression | This paper considers the problem of lossy neural image compression (NIC). Current state-of-the-art (SOTA) methods adopt uniform posterior to approximate quantization noise, and single-sample pathwise estimator to approximate the gradient of evidence lower bound (ELBO). In this paper, we propose to train NIC with multiple-sample importance weighted autoencoder (IWAE) target, which is tighter than ELBO and converges to log likelihood as sample size increases. First, we identify that the uniform posterior of NIC has special properties, which affect the variance and bias of pathwise and score function estimators of the IWAE target. Moreover, we provide insights on a commonly adopted trick in NIC from gradient variance perspective. Based on those analysis, we further propose multiple-sample NIC (MS-NIC), an enhanced IWAE target for NIC. Experimental results demonstrate that it improves SOTA NIC methods. Our MS-NIC is plug-and-play, and can be easily extended to neural video compression.
| Accept | This paper studies the problem of neural image compression (NIC). Standard methods for NIC use a "single-sample pathwise estimator" to estimate the ELBO gradients to optimize the rate-distortion loss function. This paper improves the estimation by using multiple samples, leading to better compression results. Experimental results show that multi-sample methods improve compression performance in many cases.
The reviewers' comments are appropriately addressed, and all reviewers appreciate the contribution of this paper on neural image compression, so I recommend acceptance. | train | [
"56y96darhdk",
"PQoHuGSmBi",
"Ny2RouvxdTy",
"v4aJmL50Ttl",
"8CNsGglSXvi",
"lgSh_2GBplB",
"HRQ06AI7Qtq",
"TleKjGrsrq",
"cRRrwb8PzY",
"1GM-KYtKbMi",
"J9yMpqSeTu",
"4Uj0Cz5AjFGV",
"LgKnwGh4iYFM",
"3snQs7Rmrlj",
"rZtzMELI-_7",
"LacQHX8kFGh",
"Zi7t_8odtLC"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the detailed response, which has addressed most of my concerns and provided interesting new insights to the problem. \n\nI think the paper made a good contribution that connects and fills in the blank of existing literature, and I've raised my score. ",
" Hi, thanks for your feedback.\n\... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"rZtzMELI-_7",
"Ny2RouvxdTy",
"Zi7t_8odtLC",
"HRQ06AI7Qtq",
"nips_2022_wjSHd5nDeo",
"nips_2022_wjSHd5nDeo",
"LacQHX8kFGh",
"LacQHX8kFGh",
"Zi7t_8odtLC",
"Zi7t_8odtLC",
"Zi7t_8odtLC",
"rZtzMELI-_7",
"rZtzMELI-_7",
"rZtzMELI-_7",
"nips_2022_wjSHd5nDeo",
"nips_2022_wjSHd5nDeo",
"nips_20... |
nips_2022_duBoAyn9aI | Controllable and Lossless Non-Autoregressive End-to-End Text-to-Speech | Some recent studies have demonstrated the feasibility of single-stage neural text-to-speech, which does not need to generate mel-spectrograms but generates the raw waveforms directly from the text. Single-stage text-to-speech often faces two problems: a) the one-to-many mapping problem due to multiple speech variations and b) insufficiency of high frequency reconstruction due to the lack of supervision of ground-truth acoustic features during training. To solve the a) problem and generate more expressive speech, we propose a novel phoneme-level prosody modeling method based on a variational autoencoder with normalizing flows to model underlying prosodic information in speech. We also use the prosody predictor to support end-to-end expressive speech synthesis. Furthermore, we propose the dual parallel autoencoder to introduce supervision of the ground-truth acoustic features during training to solve the b) problem enabling our model to generate high-quality speech. We compare the synthesis quality with state-of-the-art text-to-speech systems on an internal expressive English dataset. Both qualitative and quantitative evaluations demonstrate the superiority and robustness of our method for lossless speech generation while also showing a strong capability in prosody modeling. | Reject | I am in agreement with the last 2 reviewers.
1) There are many concerns about the technical correctness of the paper that should be addressed.
2) More thorough evaluations and experiments are needed.
I am marking this as Reject, and I encourage the authors to address the reviewer comments and resubmit. | train | [
"WV0tv4qeW4",
"yFt4cs2CFWh",
"iBiKeYD3qeZ",
"DIxy90qzpB7",
"Gw5EC0UvIW",
"9MapiaTetKZ",
"p0QN4RQprK-",
"NdwOQdChi4F",
"GNQWvVoFPsK",
"jOS9v0Q5Pyk",
"956tD4Lppgm",
"IA7XbuUFB_H",
"bfpfXvp0_aq"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer xNqf:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"bfpfXvp0_aq",
"IA7XbuUFB_H",
"bfpfXvp0_aq",
"bfpfXvp0_aq",
"IA7XbuUFB_H",
"IA7XbuUFB_H",
"IA7XbuUFB_H",
"IA7XbuUFB_H",
"956tD4Lppgm",
"956tD4Lppgm",
"nips_2022_duBoAyn9aI",
"nips_2022_duBoAyn9aI",
"nips_2022_duBoAyn9aI"
] |
nips_2022_pZsAwqUgnAs | Implicit Regularization or Implicit Conditioning? Exact Risk Trajectories of SGD in High Dimensions | Stochastic gradient descent (SGD) is a pillar of modern machine learning, serving as the go-to optimization algorithm for a diverse array of problems. While the empirical success of SGD is often attributed to its computational efficiency and favorable generalization behavior, neither effect is well understood and disentangling them remains an open problem. Even in the simple setting of convex quadratic problems, worst-case analyses give an asymptotic convergence rate for SGD that is no better than full-batch gradient descent (GD), and the purported implicit regularization effects of SGD lack a precise explanation. In this work, we study the dynamics of multi-pass SGD on high-dimensional convex quadratics and establish an asymptotic equivalence to a stochastic differential equation, which we call homogenized stochastic gradient descent (HSGD), whose solutions we characterize explicitly in terms of a Volterra integral equation. These results yield precise formulas for the learning and risk trajectories, which reveal a mechanism of implicit conditioning that explains the efficiency of SGD relative to GD. We also prove that the noise from SGD negatively impacts generalization performance, ruling out the possibility of any type of implicit regularization in this context. Finally, we show how to adapt the HSGD formalism to include streaming SGD, which allows us to produce an exact prediction for the excess risk of multi-pass SGD relative to that of streaming SGD (bootstrap risk). | Accept | The paper addresses an important question regarding the trajectories of SGD in high-dimensional settings. The theoretical derivations of the paper build on top of prior works but are nonetheless sound. Most reviewers agree that the paper advances the knowledge in this area. | train | [
"0MS6RX4G3PJ",
"vrNdYZgPcI6",
"p-pZC3fgnvB",
"0hM9xrKDuo",
"GoH2AGk-S5t",
"m99_GzrFNgV",
"VZFxqPhjXHn",
"DT95QKDYHj7Q",
"t6iXHfSOZXL8",
"26dzc896CsR9",
"XKDJvTnLpq",
"8aeIVgnkdpI",
"zzUb5uZUKMf",
"Qz48jbosjRh",
"jnzDMhrNDdj",
"UEkYuZs1qfg"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We agree with the reviewer that it is very interesting to examine the evolution of the ICR during training. We have added an experiment that shows the ICR is roughly constant over training **[see Supplementary Materials zip file]**, yielding at least some preliminary empirical support for the practical utility of... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"p-pZC3fgnvB",
"8aeIVgnkdpI",
"GoH2AGk-S5t",
"m99_GzrFNgV",
"UEkYuZs1qfg",
"VZFxqPhjXHn",
"jnzDMhrNDdj",
"t6iXHfSOZXL8",
"Qz48jbosjRh",
"XKDJvTnLpq",
"zzUb5uZUKMf",
"nips_2022_pZsAwqUgnAs",
"nips_2022_pZsAwqUgnAs",
"nips_2022_pZsAwqUgnAs",
"nips_2022_pZsAwqUgnAs",
"nips_2022_pZsAwqUgnA... |
nips_2022_OtxyysUdBE | FedRolex: Model-Heterogeneous Federated Learning with Rolling Sub-Model Extraction | Most cross-device federated learning (FL) studies focus on the model-homogeneous setting where the global server model and local client models are identical. However, such a constraint not only excludes low-end clients who would otherwise make unique contributions to model training but also restrains clients from training large models due to on-device resource bottlenecks. In this work, we propose FedRolex, a partial training (PT)-based approach that enables model-heterogeneous FL and can train a global server model larger than the largest client model. At its core, FedRolex employs a rolling sub-model extraction scheme that allows different parts of the global server model to be evenly trained, which mitigates the client drift induced by the inconsistency between individual client models and server model architectures. Empirically, we show that FedRolex outperforms state-of-the-art PT-based model-heterogeneous FL methods (e.g. Federated Dropout) and reduces the gap between model-heterogeneous and model-homogeneous FL, especially under the large-model large-dataset regime. In addition, we provide theoretical statistical analysis on its advantage over Federated Dropout. Lastly, we evaluate FedRolex on an emulated real-world device distribution to show that FedRolex can enhance the inclusiveness of FL and boost the performance of low-end devices that would otherwise not benefit from FL. Our code is available at: \href{https://github.com/MSU-MLSys-Lab/FedRolex}{https://github.com/MSU-MLSys-Lab/FedRolex}. | Accept | To better handle the case where clients have heterogeneous device resources, this paper presents a model-heterogeneous federated learning algorithm, FedRolex. FedRolex rolls the submodel in each federated iteration, in order to train the parameters of the global model on the global data distribution. Experimental results show that FedRolex outperforms other model-heterogeneous baselines. Ablation studies on submodel rolling show it is an effective technique.
However, this paper suffers from several limitations. Firstly, it remains unclear why FedRolex can significantly outperform Federated Dropout and HeteroFL, since they differ only in their sampling methods. Secondly, after federated learning, the low-end devices can still only use a sub-model. Will they benefit?
"8ntybHCno2Q",
"w5tyCt5O-MZ",
"FLVZlP3uMAY",
"49l0mlTPQED",
"7_K38_z_eer",
"dnz8l6B8f_t",
"w4MBf6NWmlI",
"gSpOtYKurW5",
"1jlXHIUVjro",
"DbY_u0LWAzl",
"E9XjD4CZ-Xe",
"J2O0MWJUlHP",
"MI1_Ziwe0gk",
"gHtDiePuuys",
"azZoU7_Lz70",
"NYwVcjRkepO",
"8uLTmCciM5g",
"fjMYyofwZC",
"bDnZPef0fJ... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"officia... | [
" Dear Reviewer 5GU1,\n\nWe have included the rebuttal results and discussion into the revision, and have uploaded the latest version. Thanks again for the time and valuable comments. We really appreciate it.",
" Thank the authors for your response, which addressed my major concerns. I have improved my ratting.",... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"FLVZlP3uMAY",
"8YlqFSRnPnX",
"E9XjD4CZ-Xe",
"MH5WBbl2PEc",
"lHzfU7_OEtN",
"8YlqFSRnPnX",
"CznujbdzLMH",
"lHzfU7_OEtN",
"lHzfU7_OEtN",
"CznujbdzLMH",
"CznujbdzLMH",
"MH5WBbl2PEc",
"MH5WBbl2PEc",
"MH5WBbl2PEc",
"MH5WBbl2PEc",
"MH5WBbl2PEc",
"MH5WBbl2PEc",
"8YlqFSRnPnX",
"8YlqFSRnP... |
nips_2022_ZVuzllOOHS | Differentially Private Covariance Revisited | In this paper, we present two new algorithms for covariance estimation under concentrated differential privacy (zCDP). The first algorithm achieves a Frobenius error of $\tilde{O}(d^{1/4}\sqrt{\mathrm{tr}}/\sqrt{n} + \sqrt{d}/n)$, where $\mathrm{tr}$ is the trace of the covariance matrix. By taking $\mathrm{tr}=1$, this also implies a worst-case error bound of $\tilde{O}(d^{1/4}/\sqrt{n})$, which improves the standard Gaussian mechanism's $\tilde{O}(d/n)$ for the regime $d>\widetilde{\Omega}(n^{2/3})$. Our second algorithm offers a tail-sensitive bound that could be much better on skewed data. The corresponding algorithms are also simple and efficient. Experimental results show that they offer significant improvements over prior work. | Accept | The reviewers all concurred that the main result of this paper is quite interesting. It privately estimates the covariance better than established methods in particular parameter regimes. Given the clear accept sentiments towards this paper, there was little additional discussion. | train | [
"Eo1OYFaFH4",
"t-WwuMZbvD",
"4BtpOocJBxg",
"lMKnkDLSJfX",
"y14RQa8c6Fu",
"Xk_LWa8zMbS",
"2ThMVST8Wq3",
"abDYpy77f1M",
"NE3Ccu3NM_",
"fe0mhJOY9JZ",
"HR9NEXNKvIu",
"mC6-yuQ0c7-"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your helpful comments!\n\nQuestion 2: This is an interesting observation! Yes, we can use simhash to transform a vector from the unit sphere to one in $\\\\{-1/\\sqrt{d},1/\\sqrt{d}\\\\}^d$ and then apply the mechanism for the 2-way marginal problem. That does work but it is unclear what the error i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"4BtpOocJBxg",
"lMKnkDLSJfX",
"Xk_LWa8zMbS",
"2ThMVST8Wq3",
"mC6-yuQ0c7-",
"HR9NEXNKvIu",
"fe0mhJOY9JZ",
"NE3Ccu3NM_",
"nips_2022_ZVuzllOOHS",
"nips_2022_ZVuzllOOHS",
"nips_2022_ZVuzllOOHS",
"nips_2022_ZVuzllOOHS"
] |
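As context for the $\tilde{O}(d/n)$ baseline mentioned in the abstract above, the standard Gaussian mechanism calculation under zCDP goes roughly as follows (constants and log factors omitted; this is a textbook sketch, not one of the paper's new algorithms):

$$\hat{\Sigma} = \frac{1}{n}\sum_{i=1}^{n} x_i x_i^{\top} + N, \qquad N_{jk} \sim \mathcal{N}\!\left(0, \tfrac{\Delta^2}{2\rho}\right) \text{ i.i.d.},$$

where $\Delta = O(1/n)$ is the Frobenius sensitivity when $\|x_i\|_2 \le 1$, so $\mathbb{E}\|N\|_F = O(\Delta d/\sqrt{\rho}) = O\!\big(d/(n\sqrt{\rho})\big)$, i.e. the $\tilde{O}(d/n)$ error that the paper's first algorithm improves to $\tilde{O}(d^{1/4}/\sqrt{n})$ in the regime $d > \widetilde{\Omega}(n^{2/3})$.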
nips_2022_tHK5ntjp-5K | LION: Latent Point Diffusion Models for 3D Shape Generation | Denoising diffusion models (DDMs) have shown promising results in 3D point cloud synthesis. To advance 3D DDMs and make them useful for digital artists, we require (i) high generation quality, (ii) flexibility for manipulation and applications such as conditional synthesis and shape interpolation, and (iii) the ability to output smooth surfaces or meshes. To this end, we introduce the hierarchical Latent Point Diffusion Model (LION) for 3D shape generation. LION is set up as a variational autoencoder (VAE) with a hierarchical latent space that combines a global shape latent representation with a point-structured latent space. For generation, we train two hierarchical DDMs in these latent spaces. The hierarchical VAE approach boosts performance compared to DDMs that operate on point clouds directly, while the point-structured latents are still ideally suited for DDM-based modeling. Experimentally, LION achieves state-of-the-art generation performance on multiple ShapeNet benchmarks. Furthermore, our VAE framework allows us to easily use LION for different relevant tasks: LION excels at multimodal shape denoising and voxel-conditioned synthesis, and it can be adapted for text- and image-driven 3D generation. We also demonstrate shape autoencoding and latent shape interpolation, and we augment LION with modern surface reconstruction techniques to generate smooth 3D meshes. We hope that LION provides a powerful tool for artists working with 3D shapes due to its high-quality generation, flexibility, and surface reconstruction. Project page and code: https://nv-tlabs.github.io/LION. | Accept | This paper proposes a latent point diffusion model, LION, for 3D shape generation. The model builds two denoising diffusion models in the latent spaces of a variational autoencoder. The latent spaces combine a global shape latent representation with a point-structured latent space. Comprehensive experiments are conducted to evaluate the performance of the proposed method. The authors address the major concerns of the reviewers and strengthen the paper by providing additional empirical results. After the rebuttal, all four reviewers reach an agreement on accepting the paper because of the novelty and the state-of-the-art performance. The AC agrees with the reviewers and recommends accepting the paper. | train | [
"AGFCoCOUXbF",
"Z-c0HikpD3",
"1SnmvizqK5X",
"fr8AbMwPJ_T",
"vaoqfL-FGi",
"42-dopnqLI",
"efSATmyxu-m",
"KVvxm0gGp7H",
"dDQRjW9rV2N",
"mqKwAsApmjT",
"Ogc4trMz9lC",
"zWJJutKKt4Z",
"TsorGO7i9t",
"YW6ROs7TV4",
"eF1sj5tD47e",
"FmxngAJUJTA",
"UbrkT1OPUT",
"Ev7OK1CGc8",
"S_HLpYr_mOu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The rebuttal alleviates my concerns and also answers other reviewers' questions. So I keep my attitude as borderline accept. \n",
" After reading the authors' clarifications, I think their comments have resolved most of my concerns. I'm inclined to accept this paper and thus keep my original rating.",
" Thank... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
4
] | [
"zWJJutKKt4Z",
"Ogc4trMz9lC",
"vaoqfL-FGi",
"nips_2022_tHK5ntjp-5K",
"42-dopnqLI",
"efSATmyxu-m",
"S_HLpYr_mOu",
"dDQRjW9rV2N",
"Ev7OK1CGc8",
"Ogc4trMz9lC",
"UbrkT1OPUT",
"TsorGO7i9t",
"YW6ROs7TV4",
"FmxngAJUJTA",
"nips_2022_tHK5ntjp-5K",
"nips_2022_tHK5ntjp-5K",
"nips_2022_tHK5ntjp-... |
nips_2022_crFMP5irwzn | Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation | In the past few years, transformers have achieved promising performance on various computer vision tasks. Unfortunately, the immense inference overhead of most existing vision transformers prevents them from being deployed on edge devices such as cell phones and smart watches. Knowledge distillation is a widely used paradigm for compressing cumbersome architectures into compact students via transferring information. However, most such methods are designed for convolutional neural networks (CNNs) and do not fully investigate the characteristics of vision transformers. In this paper, we fully utilize the patch-level information and propose a fine-grained manifold distillation method for transformer-based networks. Specifically, we train a tiny student model to match a pre-trained teacher model in the patch-level manifold space. Then, we decouple the manifold matching loss into three terms with careful design to further reduce the computational costs for the patch relationship. Equipped with the proposed method, a DeiT-Tiny model containing 5M parameters achieves 76.5\% top-1 accuracy on ImageNet-1k, which is +2.0\% higher than previous distillation approaches. Transfer learning results on other classification benchmarks and downstream vision tasks also demonstrate the superiority of our method over the state-of-the-art algorithms. | Accept | Four experts in the field reviewed the paper and recommended Borderline Accept, Weak Accept, Accept, and Borderline Accept. The reviewers generally liked the approach, though some commented that it is straightforward. The reviewers' questions about experiments and clarifications were well addressed by the rebuttal. Hence, the decision is to recommend the paper for acceptance. We encourage the authors to consider the reviewers' comments and make the necessary changes to the best of their ability. We congratulate the authors on the acceptance of their paper! | train | [
"7XoVK1fCr9C",
"mkAA-JgBjbg",
"TOEUKWhFKoq",
"gpXagSTCwA",
"k3dV-Qq6nwk",
"mZay8T8hH81",
"d-3-M9YMkdu",
"xKDm2uFmb1",
"SeRKYY242C",
"AajF035iLPq",
"XdOoL1Gvvqh",
"yIH5fmXSF2K",
"6iVBqGI4WD",
"yhKmzpoc1pY",
"1A_wLFZ8axy",
"MfL3CtHRz5t",
"ZIWxACZ1yGs"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer ipbi:\n\n\nThanks for your feedback!\n\n\nBest,\n\nPaper3797 Authors",
" Thanks for the reviewers' rebuttal, which has solved most of my concerns.\n\nSo I would like to raise my rating from Borderline reject (4) to Borderline accept (6) \n\nBest,",
" Dear Reviewer eVwU,\n\n\nThanks again for you... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"mkAA-JgBjbg",
"yhKmzpoc1pY",
"gpXagSTCwA",
"xKDm2uFmb1",
"yhKmzpoc1pY",
"d-3-M9YMkdu",
"XdOoL1Gvvqh",
"SeRKYY242C",
"ZIWxACZ1yGs",
"1A_wLFZ8axy",
"MfL3CtHRz5t",
"6iVBqGI4WD",
"yhKmzpoc1pY",
"nips_2022_crFMP5irwzn",
"nips_2022_crFMP5irwzn",
"nips_2022_crFMP5irwzn",
"nips_2022_crFMP5i... |
nips_2022_P_eBjUlzlV | On the Limitations of Stochastic Pre-processing Defenses | Defending against adversarial examples remains an open problem. A common belief is that randomness at inference increases the cost of finding adversarial inputs. An example of such a defense is to apply a random transformation to inputs prior to feeding them to the model. In this paper, we empirically and theoretically investigate such stochastic pre-processing defenses and demonstrate that they are flawed. First, we show that most stochastic defenses are weaker than previously thought; they lack sufficient randomness to withstand even standard attacks like projected gradient descent. This casts doubt on a long-held assumption that stochastic defenses invalidate attacks designed to evade deterministic defenses and force attackers to integrate the Expectation over Transformation (EOT) concept. Second, we show that stochastic defenses confront a trade-off between adversarial robustness and model invariance; they become less effective as the defended model acquires more invariance to their randomization. Future work will need to decouple these two effects. We also discuss implications and guidance for future research. | Accept | This paper considers the effectiveness of stochastic preprocessing methods at achieving adversarial robustness. It shows empirically that the common Expectation of Transformations attack is not necessary to break many such defenses, as these defenses are vulnerable to standard PGD attacks when the amount of randomization is small. In a specific setup, the authors prove a trade-off between the utility and robustness of randomization defenses, and demonstrate on real data sets that such a trade-off exists for two randomized defenses (Barrage of Random Transforms and randomized smoothing). Although there is concern about the lack of clear impact on the development of future defense schemes, the reviewers found the message and empirical results of the paper to be illuminating. | train | [
"-H0w6Wzd9pc",
"O1YQn4q5sW",
"tZrpTLf0rm6",
"8Yj2uIrRjd0",
"v68GFJVNQaB",
"yXy_mfn7BnB",
"VMMb05-yXFl",
"8wpq_06uv_S",
"k2srZV3h2EH",
"0nagCvU9QZT",
"p9jslyzMVGK",
"63Cry6fZp8o",
"tr7DOUtqQgX",
"i1vRufZ61gt",
"ubn_atE0RGr",
"IuOL07UHv1R",
"gUx6_iCNaeV",
"nKRRBEEzeem"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the reviewer's detailed review and positive comments. We will incorporate these discussions and additional experimental results into the main paper and provide more details in the appendix.",
" We appreciate the reviewer's detailed review and positive comments. We will incorporate these discussion... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"tZrpTLf0rm6",
"8Yj2uIrRjd0",
"63Cry6fZp8o",
"IuOL07UHv1R",
"nKRRBEEzeem",
"IuOL07UHv1R",
"k2srZV3h2EH",
"nips_2022_P_eBjUlzlV",
"0nagCvU9QZT",
"tr7DOUtqQgX",
"nKRRBEEzeem",
"nKRRBEEzeem",
"gUx6_iCNaeV",
"IuOL07UHv1R",
"IuOL07UHv1R",
"nips_2022_P_eBjUlzlV",
"nips_2022_P_eBjUlzlV",
... |
nips_2022_caH1x1ZBLDR | Distributionally Robust Optimization with Data Geometry | Distributionally Robust Optimization (DRO) serves as a robust alternative to empirical risk minimization (ERM), which optimizes the worst-case distribution in an uncertainty set typically specified by distance metrics including $f$-divergence and the Wasserstein distance. The metrics defined in the ostensible high dimensional space lead to exceedingly large uncertainty sets, resulting in the underperformance of most existing DRO methods. It has been well documented that high dimensional data approximately resides on low dimensional manifolds. In this work, to further constrain the uncertainty set, we incorporate data geometric properties into the design of distance metrics, obtaining our novel Geometric Wasserstein DRO (GDRO). Empowered by Gradient Flow, we derive a generically applicable approximate algorithm for the optimization of GDRO, and provide the bounded error rate of the approximation as well as the convergence rate of our algorithm. We also theoretically characterize the edge cases where certain existing DRO methods are the degeneracy of GDRO. Extensive experiments justify the superiority of our GDRO to existing DRO methods in multiple settings with strong distributional shifts, and confirm that the uncertainty set of GDRO adapts to data geometry. | Accept | The paper proposes a novel distributionally robust optimization formulation leveraging data geometry to construct the uncertainty set. After a lengthy discussion and revision process, the reviewers have reached a consensus acceptance recommendation, which I support.
Currently, the reproducibility checklist (part 3a) states that the authors submitted code to reproduce their results along with the paper, but I do not see it as part of the supplementary material or as a link. Please provide code with the camera-ready submission, or correct the checklist.
"FgSmng8wexw",
"tIp2EvGvYrU",
"2LZNJECPt-",
"-6QNXzgSTQR",
"FXP0PnY_HAj",
"1fzyh4tziQB",
"ad-x1n85qcr",
"mEiDfafqZas",
"cG4DwX-5rW4I",
"KOmpJ99kv2VW",
"VopZSZL8CUS",
"5bkYZVz4OAQV",
"Tl61RZpCF7_",
"tC6kwUS7blv",
"PC-pJQudCXr",
"uWnaXbdNT87",
"3aYAIlBue7c",
"vtV6uKnSq8z",
"2JH1tYR... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" I would like to raise my score to 7.",
" Thank you for your support! \nWe appreciate your efforts in all the constructive suggestions and discussions that help to improve this paper.",
" Thank you for your support! \nThanks to your suggestions, several parts have been improved in the rebuttal revision:\n* We ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"2LZNJECPt-",
"-6QNXzgSTQR",
"FXP0PnY_HAj",
"ad-x1n85qcr",
"2JH1tYRRgVt",
"mEiDfafqZas",
"cG4DwX-5rW4I",
"vtV6uKnSq8z",
"a9oyQCNiri",
"VopZSZL8CUS",
"5bkYZVz4OAQV",
"Tl61RZpCF7_",
"tC6kwUS7blv",
"PC-pJQudCXr",
"uWnaXbdNT87",
"3aYAIlBue7c",
"a9oyQCNiri",
"eSyFkO2mEPI",
"DPIGA9XSRl... |
nips_2022_AIqC7F7xV-d | Learning Unified Representations for Multi-Resolution Face Recognition | In this work, we propose Branch-to-Trunk network (BTNet), a novel representation learning method for multi-resolution face recognition. It consists of a trunk network (TNet), namely a unified encoder, and multiple branch networks (BNets), namely resolution adapters. As per the input, a resolution-specific BNet is used and the output are implanted as feature maps in the feature pyramid of TNet, at a layer with the same resolution. The discriminability of tiny faces is significantly improved, as the interpolation error introduced by rescaling, especially up-sampling, is mitigated on the inputs. With branch distillation and backward-compatible training, BTNet transfers discriminative high-resolution information to multiple branches while guaranteeing representation compatibility. Our experiments demonstrate strong performance on face recognition benchmarks, both for multi-resolution identity matching and feature aggregation, with much less computation amount and parameter storage. We establish new state-of-the-art on the challenging QMUL-SurvFace 1: N face identification task. | Reject | This paper proposes a Branch-to-Trunk network with multiple independent branch networks and a shared trunk network for multi-resolution face recognition. This paper received three detailed reviews. While there are some merits in this work, the reviewers raised many concerns, including 1) inadequate experiments to demonstrate the superiority of the proposed method, 2) missing ablations, 3) improvement achieved by BTNet is unclear. After reading the reviews, rebuttals, and the paper, the AC concurs with the reviewers’ comments, and feels that the concerns outweigh the strength. Therefore, a rejection is recommended. | train | [
"Qk5G9ocMjJG",
"cw6QAf2dLG",
"bSuN2m4dPS_",
"gFd1m3SQTQ",
"IXcuCGpkQod",
"3MU94jlfhe",
"Vk9dTtzXeHp",
"lclDQG6XxEC"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors for the responses to my concerns. I still have some further concerns regarding to the response.\n\n**Q3.** It is unclear about the setup of $\\varphi_{mm}$. \nAs $\\varphi_{r}$ has the issue of representation compatibility, it is not suitable for cross-resolution recognition (i.e. see Lin... | [
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
"cw6QAf2dLG",
"Vk9dTtzXeHp",
"lclDQG6XxEC",
"3MU94jlfhe",
"nips_2022_AIqC7F7xV-d",
"nips_2022_AIqC7F7xV-d",
"nips_2022_AIqC7F7xV-d",
"nips_2022_AIqC7F7xV-d"
] |
nips_2022_OmLNqwnZwmY | Falsification before Extrapolation in Causal Effect Estimation | Randomized Controlled Trials (RCTs) represent a gold standard when developing policy guidelines. However, RCTs are often narrow, and lack data on broader populations of interest. Causal effects in these populations are often estimated using observational datasets, which may suffer from unobserved confounding and selection bias. Given a set of observational estimates (e.g., from multiple studies), we propose a meta-algorithm that attempts to reject observational estimates that are biased. We do so using validation effects, causal effects that can be inferred from both RCT and observational data. After rejecting estimators that do not pass this test, we generate conservative confidence intervals on the extrapolated causal effects for subgroups not observed in the RCT. Under the assumption that at least one observational estimator is asymptotically normal and consistent for both the validation and extrapolated effects, we provide guarantees on the coverage probability of the intervals output by our algorithm. To facilitate hypothesis testing in settings where causal effect transportation across datasets is necessary, we give conditions under which a doubly-robust estimator of group average treatment effects is asymptotically normal, even when flexible machine learning methods are used for estimation of nuisance parameters. We illustrate the properties of our approach on semi-synthetic experiments based on the IHDP dataset, and show that it compares favorably to standard meta-analysis techniques. | Accept | The authors propose an approach for estimating causal effects when both observational and limited experimental data exists. The authors propose falsifying effect estimates from observational data before using the effect estimate on other populations. This is an important idea that may improve reliability of causal inference. The authors provide confidence intervals for the proposed procedure. The considered problem is of clear importance; and the simplicity of its approach is appealing (cPQd). There have been some concerns about the limited empirical evaluation (icYd). The authors provide additional numerical evidence during the rebuttal period. This evidence should be added to the appendix for the camera-ready version.
Note: The reviewer most critical of the paper (rating 4, icYd) does not seem to have updated their score post-rebuttal. | train | [
"7U099W4cs28",
"gqjCwBWwCX2",
"cFX4Q5vxHEF",
"2svv-nQqFB9",
"yUKXXe4iSi4A",
"7x_oORmX3VA",
"R-UtuE6ltxa",
"uEwhJBvrtJm",
"CH8opepDqJP",
"1Rr0rxBMIdG",
"Opof4xl0Bi",
"jLTqN9O9lGLk",
"cL3_3qwEs64",
"H11Et8n7KxM",
"kBrayJWsRg0",
"8CQqW8IxvKU"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response to address my concerns. I adjusted the score accordingly. ",
" Thank you again for your review! One of your major concerns appeared to be the lack of a \"real data\" experiment, which we have now provided.\n\nCould you please let us know if this experiment (and our other clarificatio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"7x_oORmX3VA",
"H11Et8n7KxM",
"cL3_3qwEs64",
"yUKXXe4iSi4A",
"8CQqW8IxvKU",
"kBrayJWsRg0",
"H11Et8n7KxM",
"cL3_3qwEs64",
"1Rr0rxBMIdG",
"Opof4xl0Bi",
"jLTqN9O9lGLk",
"nips_2022_OmLNqwnZwmY",
"nips_2022_OmLNqwnZwmY",
"nips_2022_OmLNqwnZwmY",
"nips_2022_OmLNqwnZwmY",
"nips_2022_OmLNqwnZw... |
nips_2022_Yay6tHq1Nw | Improving Policy Learning via Language Dynamics Distillation | Recent work has shown that augmenting environments with language descriptions improves policy learning. However, for environments with complex language abstractions, learning how to ground language to observations is difficult due to sparse, delayed rewards. We propose Language Dynamics Distillation (LDD), which pretrains a model to predict environment dynamics given demonstrations with language descriptions, and then fine-tunes these language-aware pretrained representations via reinforcement learning (RL). In this way, the model is trained to both maximize expected reward and retain knowledge about how language relates to environment dynamics. On SILG, a benchmark of five tasks with language descriptions that evaluate distinct generalization challenges on unseen environments (NetHack, ALFWorld, RTFM, Messenger, and Touchdown), LDD outperforms tabula-rasa RL, VAE pretraining, and methods that learn from unlabeled demonstrations in inverse RL and reward shaping with pretrained experts. In our analyses, we show that language descriptions in demonstrations improve sample-efficiency and generalization across environments, and that dynamics modeling with expert demonstrations is more effective than with non-experts. | Accept | This work proposes to learn better representations for language description-based tasks like navigation, via first pretraining a dynamics model on sequences of observations without action labels and using this model to aid RL-based policy learning. Good empirical results are presented and the work has been well-received. The learnt dynamics model helps policy learning, especially over longer horizons. The authors are encouraged to take the reviewer feedback into account, especially the VAE experiments, and to add discussion. | val | [
"mERFQJcXVbT",
"EbjB3ZJQOXI",
"mRtx-RQFC46",
"qil9cB4a-1B",
"qiKVHTuPjI",
"3wp-sh0P6J6",
"YWW3pjDZcy8",
"mQ67FiBRN3C",
"nZ0e8CYTJkV",
"IgzpDcb5aYR",
"81utqpezyf3",
"fiQYxzkEttj",
"u5Ww-BUGjn_"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As suggested by ZHyY, we implemented a VAE pretraining baseline, a standard representation learning method for RL, and added its results for Messenger to Figure 6 and RTFM to Figure 5. The intermediate variable for VAE here being the representation just before the policy head. We found that VAE pretraining underp... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"IgzpDcb5aYR",
"qiKVHTuPjI",
"qiKVHTuPjI",
"nips_2022_Yay6tHq1Nw",
"3wp-sh0P6J6",
"u5Ww-BUGjn_",
"fiQYxzkEttj",
"81utqpezyf3",
"IgzpDcb5aYR",
"nips_2022_Yay6tHq1Nw",
"nips_2022_Yay6tHq1Nw",
"nips_2022_Yay6tHq1Nw",
"nips_2022_Yay6tHq1Nw"
] |
nips_2022_8UUtKmSRkXE | On Gap-dependent Bounds for Offline Reinforcement Learning | This paper presents a systematic study on gap-dependent sample complexity in offline reinforcement learning. Prior works showed when the density ratio between an optimal policy and the behavior policy is upper bounded (single policy coverage), then the agent can achieve an $O\left(\frac{1}{\epsilon^2}\right)$ rate, which is also minimax optimal. We show under the same single policy coverage assumption, the rate can be improved to $O\left(\frac{1}{\epsilon}\right)$ when there is a gap in the optimal $Q$-function. Furthermore, we show under a stronger uniform single policy coverage assumption, the sample complexity can be further improved to $O(1)$. Lastly, we also present nearly-matching lower bounds to complement our gap-dependent upper bounds. | Accept | This paper studies gap-dependent sample complexity in offline tabular RL. The authors show that when there is a gap in the optimal Q-function (and the density ratio between optimal and behavior policies is upper-bounded), the sample complexity can be improved from $O(1/\epsilon^2)$ to $O(1/\epsilon)$ using a pessimistic algorithm. The authors also provide gap-dependent lower bounds.
The work seems correct and well-executed. It is of somewhat limited impact given the strong assumptions (tabular MDP, coverage) but fills a gap (pun intended) in the literature and the results are interesting enough to warrant acceptance.
| train | [
"QSFW8RJ1Hk",
"_N7SEGEXUN2",
"i5LCVbNggMP",
"-CGVS00YFg5",
"BYXG-NPaSSL",
"idZ45IIjh7",
"96dA-OGy95",
"xVQ8uPQ7yA-",
"MRsv7RzUXN8",
"5BB7NRm5kPC",
"e5_vU4JJaen",
"dbbMVyVLn_g",
"ddUlDxa_Y9e"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply and for adjusting your score!\n\n+ **Setting $\\epsilon = \\mathrm{gap}_\\min$ implies finding the optimal policy:** Sorry about the confusion. We will emphasize the difference between identifying an optimal policy and a near-optimal policy in the final version.\n+ **Significance of Upper Bo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
3
] | [
"_N7SEGEXUN2",
"-CGVS00YFg5",
"ddUlDxa_Y9e",
"e5_vU4JJaen",
"5BB7NRm5kPC",
"ddUlDxa_Y9e",
"dbbMVyVLn_g",
"e5_vU4JJaen",
"5BB7NRm5kPC",
"nips_2022_8UUtKmSRkXE",
"nips_2022_8UUtKmSRkXE",
"nips_2022_8UUtKmSRkXE",
"nips_2022_8UUtKmSRkXE"
] |
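The suboptimality gap underlying the gap-dependent bounds in the abstract above is the standard one (notation assumed, not copied from the paper):

$$\mathrm{gap}(s,a) := V^{*}(s) - Q^{*}(s,a), \qquad \mathrm{gap}_{\min} := \min_{(s,a):\,\mathrm{gap}(s,a) > 0} \mathrm{gap}(s,a),$$

so once $\epsilon \le \mathrm{gap}_{\min}$, returning an $\epsilon$-optimal policy amounts to identifying an optimal one, which is what makes the faster $O(1/\epsilon)$ and $O(1)$ rates possible.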
nips_2022_bdnZ_1qHLCW | ResQ: A Residual Q Function-based Approach for Multi-Agent Reinforcement Learning Value Factorization | The factorization of state-action value functions for Multi-Agent Reinforcement Learning (MARL) is important. Existing studies are limited by their representation capability, sample efficiency, and approximation error. To address these challenges, we propose, ResQ, a MARL value function factorization method, which can find the optimal joint policy for any state-action value function through residual functions. ResQ masks some state-action value pairs from a joint state-action value function, which is transformed as the sum of a main function and a residual function. ResQ can be used with mean-value and stochastic-value RL. We theoretically show that ResQ can satisfy both the individual global max (IGM) and the distributional IGM principle without representation limitations. Through experiments on matrix games, the predator-prey, and StarCraft benchmarks, we show that ResQ can obtain better results than multiple expected/stochastic value factorization methods. | Accept | The paper is for the most part well written and contains both theoretical analyses and a comprehensive empirical study. One of the main initial concerns brought up by various reviewers is that the relation between the proposed method, resQ, and the closely related existing methods Qtran and Qplex is not 100% clear. The authors addressed this point extensively in the rebuttal. However, the theoretical advantages of resQ over Qtran remains unclear even after the rebuttal phase for one of the reviewers (despite promising empirical results)
Overall, I believe the paper's strengths make up for this potential weakness and recommend acceptance. I do want to recommend that the authors take a careful look at reviewer dP9n's comments and clarify any points of confusion in the final version of this paper.
| train | [
"J2Hjnr9KT8g",
"te3HHzAEBvpL",
"wIva2OZRj5",
"k5W0XDKU-j",
"sBpzRPg1c1o",
"apRy92Ww3qa",
"RqpRI3Z83L",
"VSWMsUFYuL1",
"WBhl6i7hR8C",
"-ono5mY3eRI",
"7hgCd9_QgaL",
"8b1jKJkLpUS",
"UyH7MSB-QY",
"E_mg3UDz7uP",
"12bUQHN_nmJ",
"i4Rba3HUE5w",
"K4HgraAXO6P"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to express our gratitute to the reviewer for this comments which help us improve the quality of this work. \n\nWe have changed the sentence *\"Achieving the IGM and the DIGM principles with low approximation errors and high sample efficiency remains an open challenge\"* to be *\"Achieving the IGM an... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
4
] | [
"sBpzRPg1c1o",
"sBpzRPg1c1o",
"k5W0XDKU-j",
"apRy92Ww3qa",
"RqpRI3Z83L",
"WBhl6i7hR8C",
"VSWMsUFYuL1",
"K4HgraAXO6P",
"nips_2022_bdnZ_1qHLCW",
"7hgCd9_QgaL",
"E_mg3UDz7uP",
"i4Rba3HUE5w",
"12bUQHN_nmJ",
"nips_2022_bdnZ_1qHLCW",
"nips_2022_bdnZ_1qHLCW",
"nips_2022_bdnZ_1qHLCW",
"nips_... |
nips_2022_u4KagP_FjB | Spartan: Differentiable Sparsity via Regularized Transportation | We present Spartan, a method for training sparse neural network models with a predetermined level of sparsity. Spartan is based on a combination of two techniques: (1) soft top-k masking of low-magnitude parameters via a regularized optimal transportation problem and (2) dual averaging-based parameter updates with hard sparsification in the forward pass. This scheme realizes an exploration-exploitation tradeoff: early in training, the learner is able to explore various sparsity patterns, and as the soft top-k approximation is gradually sharpened over the course of training, the balance shifts towards parameter optimization with respect to a fixed sparsity mask. Spartan is sufficiently flexible to accommodate a variety of sparsity allocation policies, including both unstructured and block-structured sparsity, global and per-layer sparsity budgets, as well as general cost-sensitive sparsity allocation mediated by linear models of per-parameter costs. On ImageNet-1K classification, we demonstrate that training with Spartan yields 95% sparse ResNet-50 models and 90% block sparse ViT-B/16 models while incurring absolute top-1 accuracy losses of less than 1% compared to fully dense training. | Accept | All reviewers agree that the paper is clearly written and proposes an algorithm which is both novel and efficient.
The rebuttal has clarified a number of points, and has thereby addressed most of the concerns of the reviewers. The authors are thus strongly encouraged to take the reviewers' comments into account and to add some of the clarifications they provided in this discussion to the paper and supplementary materials.
| train | [
"iUrKZjrcdRh",
"Eqz86P6xjH",
"zutGpJVkTK",
"SX9vDO5EGaD",
"n34LPZ4HZo",
"NvwEkfz3V7F",
"3EAoE7X3Z2Nk",
"CHQhN9GdzAag",
"5tw4Zgoe69w",
"MpniuY1LsAb",
"0z-SzKhcPQF",
"AKg-6wulFw",
"cxZrhQIVPMB",
"sfGhMusYpZT",
"mU3xynnpOKx",
"yhLedqzE9YA"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I keep my score and vote for weak acceptance.",
" We greatly appreciate this and would like to gently remind the reviewer to actually raise the rating to 6 as it still seems to be set at 3.",
" I thank the authors for their response.\n\nIt is all clear to me now. The authors have a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"0z-SzKhcPQF",
"zutGpJVkTK",
"SX9vDO5EGaD",
"n34LPZ4HZo",
"AKg-6wulFw",
"5tw4Zgoe69w",
"nips_2022_u4KagP_FjB",
"MpniuY1LsAb",
"cxZrhQIVPMB",
"sfGhMusYpZT",
"mU3xynnpOKx",
"yhLedqzE9YA",
"nips_2022_u4KagP_FjB",
"nips_2022_u4KagP_FjB",
"nips_2022_u4KagP_FjB",
"nips_2022_u4KagP_FjB"
] |
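The "soft top-k masking of low-magnitude parameters via a regularized optimal transportation problem" in the Spartan abstract above can be sketched with the standard entropy-regularized OT (Sinkhorn) formulation of soft top-k. The two-bin construction, temperature `eps`, and iteration count below are illustrative assumptions, not Spartan's exact solver or parameterization:

```python
import numpy as np
from scipy.special import logsumexp

def soft_topk_mask(scores, k, eps=0.1, n_iter=200):
    """Soft top-k as entropy-regularized OT: each score carries unit mass,
    transported to a 'keep' bin (capacity k) or a 'drop' bin (capacity
    n - k). Keeping a high score is cheap, dropping it is costly; as
    eps -> 0 the plan hardens to the exact top-k indicator. Log-domain
    Sinkhorn updates are used for numerical stability."""
    n = len(scores)
    cost = np.stack([-scores, scores], axis=1)         # (n, 2): keep / drop
    logK = -cost / eps
    log_b = np.log(np.array([k, n - k], dtype=float))  # bin capacities
    f = np.zeros(n)   # row (item) potentials; each row mass is 1 (log 0)
    g = np.zeros(2)   # column (bin) potentials
    for _ in range(n_iter):
        g = log_b - logsumexp(logK + f[:, None], axis=0)
        f = -logsumexp(logK + g[None, :], axis=1)
    log_plan = f[:, None] + logK + g[None, :]
    return np.exp(log_plan[:, 0])  # soft membership in the top-k, sums to k

# Example: the mask concentrates on the 3 largest magnitudes as eps shrinks.
w = np.array([0.9, -0.1, 0.5, 0.05, -0.8, 0.3])
print(np.round(soft_topk_mask(np.abs(w), k=3, eps=0.05), 3))
```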
nips_2022_GzESlaXaN04 | Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks | We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model. No general SQ lower bounds were known for learning ReLU networks of any depth in this setting: previous SQ lower bounds held only for adversarial noise models (agnostic learning) (Kothari and Klivans 2014, Goel et al. 2020a, Diakonikolas et al. 2020a) or restricted models such as correlational SQ (Goel et al. 2020b, Diakonikolas et al. 2020b). Prior work hinted at the impossibility of our result: Vempala and Wilmes (2019) showed that general SQ lower bounds cannot apply to any real-valued family of functions that satisfies a simple non-degeneracy condition. To circumvent their result, we refine a lifting procedure due to Daniely and Vardi (2021) that reduces Boolean PAC learning problems to Gaussian ones. We show how to extend their technique to other learning models and, in many well-studied cases, obtain a more efficient reduction. As such, we also prove new cryptographic hardness results for PAC learning two-hidden-layer ReLU networks, as well as new lower bounds for learning constant-depth ReLU networks from membership queries. | Accept | This work provides lower bounds in the noise free setting for learning two hidden layer networks in the Gaussian space. Overall it is a fundamental result well within the scope of Neurips, continuing a solid line of work and I cannot see any reason for rejection.
The authors have engaged with the reviewers, and have committed to make minor revisions and clarifications which I am sure they will do. | val | [
"DrxgDx3lnP1",
"z2NOtlvNWlK",
"1fvL7JxoP88",
"Fw6vLGOtT2l",
"_kDzubyhJm3",
"lb-HvYr3wdC",
"imJuS5SAwXs"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I am satisfied with the author's response.",
" Thanks for the positive feedback! There is indeed a distinction between SQ and SGD, though note that by recent results of Abbe, Sandon, and coauthors, SGD with a suitable architecture + exotic initialization is essentially “P-complete,” so the only way to rule out ... | [
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"z2NOtlvNWlK",
"imJuS5SAwXs",
"lb-HvYr3wdC",
"_kDzubyhJm3",
"nips_2022_GzESlaXaN04",
"nips_2022_GzESlaXaN04",
"nips_2022_GzESlaXaN04"
] |
nips_2022_Q9lm8w6JpXi | BILCO: An Efficient Algorithm for Joint Alignment of Time Series | Multiple time series data occur in many real applications and the alignment among them is usually a fundamental step of data analysis. Frequently, these multiple time series are inter-dependent, which provides extra information for the alignment task and this information cannot be well utilized in the conventional pairwise alignment methods. Recently, the joint alignment was modeled as a max-flow problem, in which both the profile similarity between the aligned time series and the distance between adjacent warping functions are jointly optimized. However, despite the new model having elegant mathematical formulation and superior alignment accuracy, the long computation time and large memory usage, due to the use of the existing general-purpose max-flow algorithms, limit significantly its well-deserved wide use. In this report, we present BIdirectional pushing with Linear Component Operations (BILCO), a novel algorithm that solves the joint alignment max-flow problems efficiently and exactly. We develop the strategy of linear component operations that integrates dynamic programming technique and the push-relabel approach. This strategy is motivated by the fact that the joint alignment max-flow problem is a generalization of dynamic time warping (DTW) and numerous individual DTW problems are embedded. Further, a bidirectional-pushing strategy is proposed to introduce prior knowledge and reduce unnecessary computation, by leveraging another fact that good initialization can be easily computed for the joint alignment max-flow problem. We demonstrate the efficiency of BILCO using both synthetic and real experiments. Tested on thousands of datasets under various simulated scenarios and in three distinct application categories, BILCO consistently achieves at least 10 and averagely 20-folds increase in speed, and uses at most 1/8 and averagely 1/10 memory compared with the best existing max-flow method. Our source code can be found at https://github.com/yu-lab-vt/BILCO. | Accept | In this paper, the authors propose an algorithm BILCO for solving graphical time warping, an alignment method for multiple time series data. Overall, the proposed approach is interesting, and all reviewers are positive. Thus, I also vote for acceptance. | val | [
"6I1rQFfaWRP",
"embrHrgeRF",
"y3SXFnImFTN",
"1X9c0rw8vGD",
"VVLk4pQHDq8",
"OiGhEFCpvnh",
"l54AnfsoI_g"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response and hope to see the illustrative example in the final version.",
" Thank you for your valuable feedback. The major concern was **whether the high efficiency of our method is at the expense of alignment accuracy.** We would like to take this opportunity to relieve your concern.\n\nFirst, ... | [
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
4,
1,
4
] | [
"1X9c0rw8vGD",
"l54AnfsoI_g",
"OiGhEFCpvnh",
"VVLk4pQHDq8",
"nips_2022_Q9lm8w6JpXi",
"nips_2022_Q9lm8w6JpXi",
"nips_2022_Q9lm8w6JpXi"
] |
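For context on the claim above that the joint-alignment max-flow problem generalizes DTW, the classical DTW dynamic program between two sequences with local cost $c(i,j)$ is (standard recurrence, independent of BILCO's specifics):

$$D(i,j) \;=\; c(i,j) + \min\big\{\,D(i-1,j),\; D(i,j-1),\; D(i-1,j-1)\,\big\},$$

so a single pairwise alignment costs $O(T^2)$ time; graphical time warping couples many such DTW subproblems through penalties between adjacent warping functions, which is the structure BILCO's linear component operations exploit.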
nips_2022_YZ-N-sejjwO | Models Out of Line: A Fourier Lens on Distribution Shift Robustness | Improving the accuracy of deep neural networks on out-of-distribution (OOD) data is critical to an acceptance of deep learning in real world applications. It has been observed that accuracies on in-distribution (ID) versus OOD data follow a linear trend and models that outperform this baseline are exceptionally rare (and referred to as ``effectively robust”). Recently, some promising approaches have been developed to improve OOD robustness: model pruning, data augmentation, and ensembling or zero-shot evaluating large pretrained models. However, there still is no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness. We approach this issue by conducting a comprehensive empirical study of diverse approaches that are known to impact OOD robustness on a broad range of natural and synthetic distribution shifts of CIFAR-10 and ImageNet. In particular, we view the "effective robustness puzzle" through a Fourier lens and ask how spectral properties of both models and OOD data correlate with OOD robustness. We find this Fourier lens offers some insight into why certain robust models, particularly those from the CLIP family, achieve OOD robustness. However, our analysis also makes clear that no known metric is consistently the best explanation of OOD robustness. Thus, to aid future research into the OOD puzzle, we address the gap in publicly-available models with effective robustness by introducing a set of pretrained CIFAR-10 models---$RobustNets$---with varying levels of OOD robustness. | Accept | All reviewers noted the relevance of the proposed study for the NeurIPS community. They all agreed that the paper is well-motivated, sound, and that the proposed Fourier interpolation is novel. While some reviewers had initial concerns regarding the experimental evaluation, the authors did a great job of improving their experiments and replying to the reviewers' concerns. Therefore, we recommend acceptance. | test | [
"Tjsd7L30OlF",
"P1MZNr8HDcD",
"o5ZrVJDFTU3",
"y9rCBB_cn1o",
"_NE05ZaKOjv",
"CjEOjxgio42",
"5KhtknN2gRD",
"iA-gNq85eFc",
"Oqi-yfhfSBm",
"LqwgGnxHGGZQ",
"Fa21d5ve9sj",
"mW-8Q0AiPk",
"2EsEX4qUJtp"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot for the authors' responses. After seeing the rebuttal and other reviewers' comments and checking the revised submission, I think my main concerns are well-addressed and I will raise the recommendation to 'weak accept'.",
" Thank you all for your time and thoughtful reviews! As the discussion period... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"_NE05ZaKOjv",
"nips_2022_YZ-N-sejjwO",
"CjEOjxgio42",
"mW-8Q0AiPk",
"mW-8Q0AiPk",
"2EsEX4qUJtp",
"Fa21d5ve9sj",
"Fa21d5ve9sj",
"nips_2022_YZ-N-sejjwO",
"nips_2022_YZ-N-sejjwO",
"nips_2022_YZ-N-sejjwO",
"nips_2022_YZ-N-sejjwO",
"nips_2022_YZ-N-sejjwO"
] |
nips_2022_A0ejsEHQu9w | Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization | Nonsmooth nonconvex optimization problems broadly emerge in machine learning and business decision making, whereas two core challenges impede the development of efficient solution methods with finite-time convergence guarantee: the lack of computationally tractable optimality criterion and the lack of computationally powerful oracles. The contributions of this paper are two-fold. First, we establish the relationship between the celebrated Goldstein subdifferential~\citep{Goldstein-1977-Optimization} and uniform smoothing, thereby providing the basis and intuition for the design of gradient-free methods that guarantee the finite-time convergence to a set of Goldstein stationary points. Second, we propose the gradient-free method (GFM) and stochastic GFM for solving a class of nonsmooth nonconvex optimization problems and prove that both of them can return a $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz function $f$ at an expected convergence rate at $O(d^{3/2}\delta^{-1}\epsilon^{-4})$ where $d$ is the problem dimension. Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results. Finally, we demonstrate the effectiveness of 2-SGFM on training ReLU neural networks with the \textsc{Mnist} dataset. | Accept | The authors introduce two derivative-free algorithms for computing the Goldstein stationary points in the context of nonconvex nonsmooth optimization, and show that they enjoy polynomial complexity (in expectation), while the dimension dependence (which is unavoidable in the derivative-free setting) is worse by only a sqrt(d) factor compared to the convex/smooth case. A high-probability bound with a two-phase scheme was established as well.
The reviewers described the strengths of the paper in the following way:
- The paper is well-written and easy to follow.
- The theoretical result in this paper is interesting.
- The authors established a novel optimality criterion for non-smooth non-convex Lipschitz functions, called the (δ,ε)-Goldstein stationary point, and proposed a gradient-free algorithm, together with its stochastic version, by relating the Goldstein subdifferential to the uniform smoothing technique.
- Overall, it provides some solid theoretical results on gradient-free methods for nonsmooth nonconvex optimization.
Some criticism was raised, but the authors managed to address it in their rebuttal. I agree with the collective judgment of the reviewers that this paper clearly passes the bar of acceptance. Please make sure that all criticism is properly addressed in the camera-ready version of the paper as well.
Congratulations on a nice paper!
AC | train | [
"RsYMIbxhlx1",
"MyrGIhpzvBK",
"ZfVVhAiuwKL",
"bUILwlcefgi",
"pIe_eYRUC_B",
"2aZjFJc-7v2",
"ksJioR-u1p",
"dguYxaTiyYu"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors' responses.\n\nI have read all responses and reviews. The authors have solved my concerns. So I keep my score.",
" Thank you for your encouraging comments and positive evaluation! We reply to your questions point-by-point below, and will color all relevant revisions in our paper in ${\\co... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"MyrGIhpzvBK",
"dguYxaTiyYu",
"ksJioR-u1p",
"ksJioR-u1p",
"2aZjFJc-7v2",
"nips_2022_A0ejsEHQu9w",
"nips_2022_A0ejsEHQu9w",
"nips_2022_A0ejsEHQu9w"
] |
nips_2022_ZqgFbZEb8bW | Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning | People say, "A picture is worth a thousand words". Then how can we get the rich information out of the image? We argue that by using visual clues to bridge large pretrained vision foundation models and language models, we can do so without any extra cross-modal training. Thanks to the strong zero-shot capability of foundation models, we start by constructing a rich semantic representation of the image (e.g., image tags, object attributes / locations, captions) as a structured textual prompt, called visual clues, using a vision foundation model. Based on visual clues, we use large language model to produce a series of comprehensive descriptions for the visual content, which is then verified by the vision model again to select the candidate that aligns best with the image. We evaluate the quality of generated descriptions by quantitative and qualitative measurement. The results demonstrate the effectiveness of such a structured semantic representation. | Accept | All three reviewers have voted weak accept to this paper; the authors have engaged well with the reviewers and have improved their paper. I also recommend acceptance. | train | [
"l6e4KnEuh7Z",
"NzbZRsFAJG",
"mcmOrXFlWWp",
"9Ew05muOW1j",
"tyipraZCevA",
"IRF9JybSAzQ",
"emicOQp4QEC",
"zR8wsN-4j1oS",
"Zyj1579MCBF",
"igmv8-tqZ1",
"3b83J1IspH",
"842cSKG2xv",
"86L8AjVgMyC",
"W8dGNQqiAnK",
"raVOIUU3f6i",
"AnC9BY2BOO_",
"9PDdsUkyp9A",
"5TYyT_N3awm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for the rebuttal. About the baseline captioners, you can maybe take baselines in original BLIP paper such as LEMON or even some older models such as AoANet, but given the time frame, it is not needed. Most of my concerns are resolved and I still recommend acceptance for this pape... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
3
] | [
"842cSKG2xv",
"9PDdsUkyp9A",
"AnC9BY2BOO_",
"raVOIUU3f6i",
"IRF9JybSAzQ",
"5TYyT_N3awm",
"5TYyT_N3awm",
"Zyj1579MCBF",
"igmv8-tqZ1",
"9PDdsUkyp9A",
"AnC9BY2BOO_",
"raVOIUU3f6i",
"nips_2022_ZqgFbZEb8bW",
"nips_2022_ZqgFbZEb8bW",
"nips_2022_ZqgFbZEb8bW",
"nips_2022_ZqgFbZEb8bW",
"nips_... |
nips_2022_vQzDYi4dPwM | Size and depth of monotone neural networks: interpolation and approximation | Monotone functions and data sets arise in a variety of applications. We study the interpolation problem for monotone data sets: The input is a monotone data set with $n$ points, and the goal is to find a size and depth efficient monotone neural network with \emph{non negative parameters} and threshold units that interpolates the data set. We show that there are monotone data sets that cannot be interpolated by a monotone network of depth $2$. On the other hand, we prove that for every monotone data set with $n$ points in $\mathbb{R}^d$, there exists an interpolating monotone network of depth $4$ and size $O(nd)$. Our interpolation result implies that every monotone function over $[0,1]^d$ can be approximated arbitrarily well by a depth-4 monotone network, improving the previous best-known construction of depth $d+1$. Finally, building on results from Boolean circuit complexity, we show that the inductive bias of having positive parameters can lead to a super-polynomial blow-up in the number of neurons when approximating monotone functions. | Accept | Surprisingly strong result about the expressive power of monotone networks | train | [
"QVrjiiO2XXH",
"8kdNcm8VD7l",
"D248y5_AZLV",
"0QLCnqjAE91",
"2kvDTAJPaA",
"RmE8svGi4XX"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your feedback and for raising several important and interesting questions. \n\n- “These are interesting theoretical findings and help us understand the role of depth in deep learning.”\n\n**Comment**: Thank you! Indeed, this is a theoretical paper focused on mathematical questions.\n\n- “... | [
-1,
-1,
-1,
7,
7,
4
] | [
-1,
-1,
-1,
1,
3,
4
] | [
"RmE8svGi4XX",
"2kvDTAJPaA",
"0QLCnqjAE91",
"nips_2022_vQzDYi4dPwM",
"nips_2022_vQzDYi4dPwM",
"nips_2022_vQzDYi4dPwM"
] |
nips_2022_SHMi1b7sjXk | Test-Time Training with Masked Autoencoders | Test-time training adapts to a new test distribution on the fly by optimizing a model for each test input using self-supervision.
In this paper, we use masked autoencoders for this one-sample learning problem.
Empirically, our simple method improves generalization on many visual benchmarks for distribution shifts.
Theoretically, we characterize this improvement in terms of the bias-variance trade-off. | Accept | This paper performs test-time (unsupervised) adaptation to improve generalization performance (e.g., under distribution shift). The reviewer's concerns were mostly about clarification, both in experimentation as well as overall contribution (e.g., concerns about novelty). The discussion was concise and easy to follow, and it seems that the authors addressed most of the outstanding concerns of the reviewers to the point that there's a clear consensus.
I therefore recommend acceptance of the paper.
As far as the reviews go, the discussion was overall light, so there isn't a great deal of signal. ZRRX had the most substantial review in terms of initial content, but did not participate in further discussion.
"d8CKqe6n2Yq",
"lr3orbN3a-mA",
"Rl8f8-Grvms",
"cYKBLYAX_ONG",
"48DJtCVAW6yb",
"tAEpn3KLb0F",
"KM2jZyvp158",
"66yvO1dwbE",
"Hlnpo31Iwqw",
"WWKikAS5Nzm",
"ULNCiLay5Ft"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for providing all your comments and updating the paper!\n\nBased on the response to my and other reviewers, I am overall happy with the quality of the work, and increasing my scores to lean towards acceptance.",
" Dear ACs and reviewers, \n\nThank you again for the detailed feedback on our work. We hope ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
4
] | [
"48DJtCVAW6yb",
"nips_2022_SHMi1b7sjXk",
"ULNCiLay5Ft",
"WWKikAS5Nzm",
"Hlnpo31Iwqw",
"66yvO1dwbE",
"nips_2022_SHMi1b7sjXk",
"nips_2022_SHMi1b7sjXk",
"nips_2022_SHMi1b7sjXk",
"nips_2022_SHMi1b7sjXk",
"nips_2022_SHMi1b7sjXk"
] |
nips_2022_o4uFFg9_TpV | Visual Prompting via Image Inpainting | How does one adapt a pre-trained visual model to novel downstream tasks without task-specific finetuning or any model modification? Inspired by prompting in NLP, this paper investigates visual prompting: given input-output image example(s) of a new task at test time and a new input image, the goal is to automatically produce the output image, consistent with the given examples. We show that posing this problem as simple image inpainting -- literally just filling in a hole in a concatenated visual prompt image -- turns out to be surprisingly effective, provided that the inpainting algorithm has been trained on the right data. We train masked auto-encoders on a new dataset that we curated -- 88k unlabeled figures from academic papers sourced from Arxiv. We apply visual prompting to these pretrained models and demonstrate results on various downstream image-to-image tasks, including foreground segmentation, single object detection, colorization, edge detection, etc. Project page: https://yossigandelsman.github.io/visual_prompt | Accept | The paper discusses a way to use pre-trained models for downstream tasks. Reviewers generally appreciated the paper but had questions regarding baselines, details, dataset, etc. The rebuttal addressed most of these concerns, prompting the reviewers to raise their recommendation. However, some questions remained (e.g., https://openreview.net/forum?id=o4uFFg9_TpV&noteId=wPCrVV96hE). AC concurs with the unanimous reviewer recommendation. | test | [
"0Li1merIM5",
"xosS56mQJ7o",
"zX1tE3bIsMx",
"wMnbkVwkKcS",
"nOYiCMM0v6",
"wPCrVV96hE",
"n3t6ZElPx79",
"4JrW35Vd14S",
"zoOAXGL-roY",
"C_HOQ1Tx4JW",
"loUu1RsaBx",
"reiRXFhTdXT",
"drBS-fm3r-",
"mK7IwkyFCfG",
"fjf4cYrRQKH",
"OKdVWUlnpkD",
"PFLRw_rmwCZ",
"p54Qo0FPIyf",
"lS_JXTtMECsQ"
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_rev... | [
" Thank you for the questions and suggestions. As the discussion phase ends today, we will revise the paper to include the missing technical details and the training/inference figure suggested after the discussion period. Additionally, the code will be made publicly available.\n\nQ: **During testing, did you test t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"nOYiCMM0v6",
"zX1tE3bIsMx",
"wMnbkVwkKcS",
"drBS-fm3r-",
"C_HOQ1Tx4JW",
"mK7IwkyFCfG",
"mK7IwkyFCfG",
"loUu1RsaBx",
"nips_2022_o4uFFg9_TpV",
"lS_JXTtMECsQ",
"reiRXFhTdXT",
"p54Qo0FPIyf",
"PFLRw_rmwCZ",
"OKdVWUlnpkD",
"nips_2022_o4uFFg9_TpV",
"nips_2022_o4uFFg9_TpV",
"nips_2022_o4uFF... |
nips_2022_sBrS3M5lT2w | Global Convergence and Stability of Stochastic Gradient Descent | In machine learning, stochastic gradient descent (SGD) is widely deployed to train models using highly non-convex objectives with equally complex noise models. Unfortunately, SGD theory often makes restrictive assumptions that fail to capture the non-convexity of real problems, and almost entirely ignores the complex noise models that exist in practice. In this work, we demonstrate the restrictiveness of these assumptions using three canonical models in machine learning, then we develop novel theoretical tools to address this shortcoming in two ways. First, we establish that SGD's iterates will either globally converge to a stationary point or diverge under nearly arbitrary nonconvexity and noise models. Under a slightly more restrictive assumption on the joint behavior of the non-convexity and noise model that generalizes current assumptions in the literature, we show that the objective function cannot diverge, even if the iterates diverge. As a consequence of our results, SGD can be applied to a greater range of stochastic optimization problems with confidence about its global convergence behavior and stability. | Accept | This paper analyzes the asymptotic convergence behavior of SGD on the class of locally Hölder continuous functions, by generalizing the technique and results of Patel [2021]. The paper extends and generalizes prior SGD analyses that were conducted under the assumption that certain conditions (e.g. smoothness or continuity) hold globally.
The reviewers found that the results are well presented, correct, and of interest to the community. The results could stimulate further research on SGD analyses on function classes more relevant to neural network training or deep learning, and also on non-asymptotic analyses of SGD on the function class studied in this paper.
The internal discussion brought up a few concerns that should be carefully addressed when preparing the final version:
- some reviewers found the examples a bit overclaimed and not clearly showing the necessity of considering locally Hölder continuous functions. For instance, analyses that do not require a bounded variance assumption have become standard in recent years (see for instance the textbook by Bottou, Curtis, & Nocedal that is cited in the paper) and thus Example 1 (Linear Regression) could be a bit misleading as it is bringing up an already solved issue. Please relate the examples carefully to the (novel) contributions in this paper,
- and please mention and explain the relation to the arXiv preprint https://arxiv.org/abs/2104.00423. | train | [
"O2OicW0ZLFE",
"ukv3Y5rqi6",
"0FReLw9wwmR",
"HTMCPe2LI-jV",
"M67HB-LgoJT",
"RmUXd1teTLm",
"11DG6LkUGBY",
"XwP89UB_z0",
"XNZE8oqObG1",
"XgwwNMa23GC",
"I_B_k9JNgaZ",
"GevNcbgzdkv",
"b3LV9KWZxMs",
"byr-1ZcQa9"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nthank you for the detailed response and the clarification. I may have underestimated the difficulty of some technical details. I updated my score. This may change again during the discussion phase.\n\nBest regards,\n\nReviewer JXLB",
" thank you for your answer which mostly clears up the questi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
2,
3
] | [
"M67HB-LgoJT",
"11DG6LkUGBY",
"HTMCPe2LI-jV",
"RmUXd1teTLm",
"byr-1ZcQa9",
"b3LV9KWZxMs",
"GevNcbgzdkv",
"I_B_k9JNgaZ",
"XgwwNMa23GC",
"nips_2022_sBrS3M5lT2w",
"nips_2022_sBrS3M5lT2w",
"nips_2022_sBrS3M5lT2w",
"nips_2022_sBrS3M5lT2w",
"nips_2022_sBrS3M5lT2w"
] |
nips_2022_qj-_HnxQxB | Functional Indirection Neural Estimator for Better Out-of-distribution Generalization | The capacity to achieve out-of-distribution (OOD) generalization is a hallmark of human intelligence and yet remains out of reach for machines. This remarkable capability has been attributed to our abilities to make conceptual abstraction and analogy, and to a mechanism known as indirection, which binds two representations and uses one representation to refer to the other. Inspired by these mechanisms, we hypothesize that OOD generalization may be achieved by performing analogy-making and indirection in the functional space instead of the data space as in current methods. To realize this, we design FINE (Functional Indirection Neural Estimator), a neural framework that learns to compose functions that map data input to output on-the-fly. FINE consists of a backbone network and a trainable semantic memory of basis weight matrices. Upon seeing a new input-output data pair, FINE dynamically constructs the backbone weights by mixing the basis weights. The mixing coefficients are indirectly computed through querying a separate corresponding semantic memory using the data pair. We demonstrate empirically that FINE can strongly improve out-of-distribution generalization on IQ tasks that involve geometric transformations. In particular, we train FINE and competing models on IQ tasks using images from the MNIST, Omniglot and CIFAR100 datasets and test on tasks with unseen image classes from one or different datasets and unseen transformation rules. FINE not only achieves the best performance on all tasks but also is able to adapt to small-scale data scenarios. | Accept | This paper tackles OOD generalisation through a mechanism for analogy-making in functional spaces rather than the data space. It involves construction of a functional framework that maps inputs to outputs---by abstracting the transformation between inputs and outputs through a separate (hyper)network which provides the weights of the mappings. It further contributes a benchmark for evaluating OOD on IQ tasks.
The reviewers agree that the paper tackles an interesting and relevant problem with the perspective of functional indirection, and the IQ task does appear challenging.
The primary outstanding issue with the work appears to be with what class of models they compare against---there exists work in meta-learning (e.g. CAVIA, MAML) that ought to be discussed. If they are not appropriate, it is crucial that this is explained, because from a functional perspective, they are very similar. Indeed, as Reviewer kR5Y points out, there are clearly relevant pieces of work that ought to figure as comparisons here.
Simply speculating that 'it is not clear...can deal with unseen transformations' will not do; this ought to be established in order for the paper to stand strongly on its own.
And while I'm somewhat inclined to buy the proposition that PGM etc. can have 'shortcuts', I would perhaps have still expected evaluations on these in addition to the IQ task setup to mitigate claimed deficiencies in benchmarks like RAVEN or PGM.
The authors also provided additional experiments over more extreme OOD settings as well as tempered some imprecise statements to address reviewer concerns, which was good.
On balance, though, it appears as if the paper has more merits than issues, and most of the issues raised could be addressed with some work. I would strongly urge the authors to actually make the edits for comparison and incorporate the additional experiments over existing benchmarks from the rebuttal into the manuscript, as requested by the reviewers.
"mVN4bC131bK",
"H-pFzss_ahx",
"Yt2ZOKcurdf",
"4ED3Na8nC6",
"1nwv-2EIflv",
"RhsZuSkEZv3",
"JqP5oIrk1Q2",
"LQTFhq0IHCt",
"s6DLck9nQ_2",
"KEcjMZU5PJ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your detailed response. The authors have addressed most of my concerns, so I would like to raise my score from 5 to 6.\n\n",
" We thank the reviewer for your detailed and insightful comments. We have updated our manuscript and would like to address the reviewer's concerns as follows: \n\n• “Evaluatio... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"Yt2ZOKcurdf",
"KEcjMZU5PJ",
"s6DLck9nQ_2",
"1nwv-2EIflv",
"LQTFhq0IHCt",
"JqP5oIrk1Q2",
"nips_2022_qj-_HnxQxB",
"nips_2022_qj-_HnxQxB",
"nips_2022_qj-_HnxQxB",
"nips_2022_qj-_HnxQxB"
] |
nips_2022_n0dD3d54Wgf | SparCL: Sparse Continual Learning on the Edge | Existing work in continual learning (CL) focuses on mitigating catastrophic forgetting, i.e., model performance deterioration on past tasks when learning a new task. However, the training efficiency of a CL system is under-investigated, which limits the real-world application of CL systems under resource-limited scenarios. In this work, we propose a novel framework called Sparse Continual Learning (SparCL), which is the first study that leverages sparsity to enable cost-effective continual learning on edge devices. SparCL achieves both training acceleration and accuracy preservation through the synergy of three aspects: weight sparsity, data efficiency, and gradient sparsity. Specifically, we propose task-aware dynamic masking (TDM) to learn a sparse network throughout the entire CL process, dynamic data removal (DDR) to remove less informative training data, and dynamic gradient masking (DGM) to sparsify the gradient updates. Each of them not only improves efficiency, but also further mitigates catastrophic forgetting. SparCL consistently improves the training efficiency of existing state-of-the-art (SOTA) CL methods by at most 23X less training FLOPs, and, surprisingly, further improves the SOTA accuracy by at most 1.7%. SparCL also outperforms competitive baselines obtained from adapting SOTA sparse training methods to the CL setting in both efficiency and accuracy. We also evaluate the effectiveness of SparCL on a real mobile phone, further indicating the practical potential of our method. | Accept | This paper introduces a new continual learning scheme whose efficiency and effectiveness are achieved through three key components that encourage sparse network weight connection, replay buffer selection, and sparse gradient truncation. After the author-review discussion phase, a majority of reviewers suggest acceptance. Only one negative reviewer did not respond to the authors' rebuttal, but the AC finds the rebuttal convincing enough to resolve their concerns. The AC thinks that investigating sparse networks for continual learning is novel, and demonstrating it at the edge-device level is a big plus. Overall, the AC is happy to recommend acceptance. The AC strongly recommends that the authors incorporate all additional results and discussion with reviewers into the final draft. | train | [
"K-rt-3DDzox",
"HLJf7OdNN8Y",
"xoM9xsONIWK",
"fRHsz3wj5y",
"-YNdTehNVN",
"6962aArzV6C",
"LyZZPlw_vIT",
"Aa2o4N8oDjF",
"ZF8o0E6jlKG",
"QN-C6-H98K1",
"Sf0jIoZ0XoX",
"vMHik1qGe-A",
"7q-fILC-eIa",
"fUSnrOeeo7L",
"pLcSYRuaDsG",
"0pQOI7O9EpI2",
"2BM3ldFnnb-",
"8T1I-3103Lz",
"hI5fvrY_8c... | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer g1tN,\n\nThank you very much for spending time reviewing our paper. Since the discussion will end very soon, we sincerely hope that you have found time to check our detailed response to your previous questions/comments. If you have any further questions, please feel free to let us know. We will try ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"8T1I-3103Lz",
"2BM3ldFnnb-",
"gkGZ8Lfgd9-",
"8T1I-3103Lz",
"2BM3ldFnnb-",
"LyZZPlw_vIT",
"pLcSYRuaDsG",
"8T1I-3103Lz",
"8T1I-3103Lz",
"8T1I-3103Lz",
"8T1I-3103Lz",
"2BM3ldFnnb-",
"2BM3ldFnnb-",
"hI5fvrY_8cm",
"hI5fvrY_8cm",
"gkGZ8Lfgd9-",
"nips_2022_n0dD3d54Wgf",
"nips_2022_n0dD3d... |
nips_2022_u3vEuRr08MT | Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models | Despite their wide adoption, the underlying training and memorization dynamics of very large language models is not well understood. We empirically study exact memorization in causal and masked language modeling, across model sizes and throughout the training process. We measure the effects of dataset size, learning rate, and model size on memorization, finding that larger language models memorize training data faster across all settings. Surprisingly, we show that larger models can memorize a larger portion of the data before over-fitting and tend to forget less throughout the training process. We also analyze the memorization dynamics of different parts of speech and find that models memorize nouns and numbers first; we hypothesize and provide empirical evidence that nouns and numbers act as a unique identifier for memorizing individual training examples. Together, these findings present another piece of the broader puzzle of trying to understand what actually improves as models get bigger. | Accept | This paper studies the underlying training and memorization dynamics of very large language models. The main take aways are that larger-sized language models memorize training data faster, and that this memorization happens before the overfitting of language modeling. Tokens with certain part-of-speech tags (nouns, numerals) seem to be memorized faster during training.
Overall, most reviewers feel positively about this paper, agreeing that it tackles an important problem and that it provides a solid contribution. The experimental results are detailed and use reasonable metrics for data memorization, including the forgetting identifier experiments. Some of the weaknesses that have been pointed out (e.g. regarding the significance of the part-of-speech tags experiment, clarifying the criteria for memorization, etc.) seem to have been well addressed during the author response. Therefore, I recommend acceptance.
| train | [
"3EPQHFVUWy",
"O3PHX3RTT0c",
"OWqzCZKAdDM",
"0N8mf0lZhfV",
"otgTOcycDSz",
"dpsFVNDmzZ",
"HIj7jGKI3N",
"jmta3fzebhN",
"WYSU_XYzkSD",
"RvyGE5LIJxo",
"7IKWxwp0J4",
"JEhui3saOMx",
"eMdL3Kgp2F6"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for providing a detailed response to my questions. I'm glad that most of my questions are answered, and I'm happy to improve my original evaluation.",
" \n*W7: I'm not convinced by the experiments that consider memorization relationship with overfitting as though these are two separate phenomena. What... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"jmta3fzebhN",
"OWqzCZKAdDM",
"0N8mf0lZhfV",
"otgTOcycDSz",
"dpsFVNDmzZ",
"RvyGE5LIJxo",
"7IKWxwp0J4",
"JEhui3saOMx",
"eMdL3Kgp2F6",
"nips_2022_u3vEuRr08MT",
"nips_2022_u3vEuRr08MT",
"nips_2022_u3vEuRr08MT",
"nips_2022_u3vEuRr08MT"
] |
nips_2022_Q38D6xxrKHe | High-dimensional limit theorems for SGD: Effective dynamics and critical scaling | We study the scaling limits of stochastic gradient descent (SGD) with constant step-size in the high-dimensional regime. We prove limit theorems for the trajectories of summary statistics (i.e., finite-dimensional functions) of SGD as the dimension goes to infinity. Our approach allows one to choose the summary statistics that are tracked, the initialization, and the step-size. It yields both ballistic (ODE) and diffusive (SDE) limits, with the limit depending dramatically on the former choices. We find a critical scaling regime for the step-size below which this ``effective dynamics" matches gradient flow for the population loss, but at which, a new correction term appears which changes the phase diagram. About the fixed points of this effective dynamics, the corresponding diffusive limits can be quite complex and even degenerate.
We demonstrate our approach on popular examples including estimation for spiked matrix and tensor models and classification via two-layer networks for binary and XOR-type Gaussian mixture models. These examples exhibit surprising phenomena including multimodal timescales to convergence as well as convergence to sub-optimal solutions with probability bounded away from zero from random (e.g., Gaussian) initializations.
| Accept | The paper is quite interesting and rigorous, with intriguing conclusions. The rebuttal also addressed all the major concerns -- mostly technical clarity. I congratulate the authors for the nice work and recommend an acceptance for the paper. | val | [
"v-UEO1fdgdk",
"KNAAMa6mEOG",
"W6_OImRsT0",
"EQPOx0cWBHGd",
"KNphMyiGCrD",
"PjxyOZAkOb4",
"J5Jy9BRyBe4",
"5z0nsfuxNdKD",
"-X6ujiQW4Nh",
"kMYMxgSPtl8",
"c8Wb6XIMxsO",
"vVVxM2OR6HZ",
"gn5Ue-aCCKv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nI really appreciate the time you took to answer my questions and clarify certain points. I was also confused by your answer to my question about the gradient-like form of the drift and the bias term as reviewer hQ5R pointed out. But the authors clarified this issue quite well for me. I would also... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"kMYMxgSPtl8",
"EQPOx0cWBHGd",
"KNphMyiGCrD",
"PjxyOZAkOb4",
"J5Jy9BRyBe4",
"-X6ujiQW4Nh",
"kMYMxgSPtl8",
"gn5Ue-aCCKv",
"vVVxM2OR6HZ",
"c8Wb6XIMxsO",
"nips_2022_Q38D6xxrKHe",
"nips_2022_Q38D6xxrKHe",
"nips_2022_Q38D6xxrKHe"
] |
nips_2022_HQDvPsdXS-F | Neur2SP: Neural Two-Stage Stochastic Programming | Stochastic Programming is a powerful modeling framework for decision-making under uncertainty. In this work, we tackle two-stage stochastic programs (2SPs), the most widely used class of stochastic programming models. Solving 2SPs exactly requires optimizing over an expected value function that is computationally intractable. Having a mixed-integer linear program (MIP) or a nonlinear program (NLP) in the second stage further aggravates the intractability, even when specialized algorithms that exploit problem structure are employed.
Finding high-quality (first-stage) solutions -- without leveraging problem structure -- can be crucial in such settings. We develop Neur2SP, a new method that approximates the expected value function via a neural network to obtain a surrogate model that can be solved more efficiently than the traditional extensive formulation approach. Neur2SP makes no assumptions about the problem structure, in particular about the second-stage problem, and can be implemented using an off-the-shelf MIP solver. Our extensive computational experiments on four benchmark 2SP problem classes with different structures (containing MIP and NLP second-stage problems) demonstrate the efficiency (time) and efficacy (solution quality) of Neur2SP. In under 1.66 seconds, Neur2SP finds high-quality solutions across all problems even as the number of scenarios increases, an ideal property that is difficult to have for traditional 2SP solution techniques. Namely, the most generic baseline method typically requires minutes to hours to find solutions of comparable quality.
| Accept | In this paper, the authors proposed to use learning with neural networks to amortize the cost of solving two-stage optimization problems. The authors tested the algorithm on several problems, demonstrating its advantages empirically.
Most of the reviewers think this work is interesting, although there is already plenty of existing work considering similar methods; in particular, a similar work that uses learning to amortize multi-stage stochastic programming has already been published.
Another concern raised by a reviewer is that the empirical comparison is not comprehensive. Decomposition methods, e.g., Benders-based methods, which are major algorithms for two-stage stochastic optimization problems, are not included. Therefore, the advantages of the proposed method are not clear.
Please take the reviewers' points into account to improve the paper. | train | [
"0nLus52PdJ8",
"p47c0zXtrOG",
"jAzd8xWPhW",
"MmEjmhLXXA",
"ReMDdGe2oUT",
"46mGr8ukJIm",
"Ue-Z7sePz9M",
"u-1Fnmrgo7S",
"Bgl7ngSI19b",
"d2zcM9eCdKo",
"ps6pGcxv5hD",
"rYtXjbEct_",
"bF9D59g7QKs",
"h-AwFrtU9s"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I appreciate that the camera-ready version would be more upfront about the training and data collection time, given that this is not the typical case where you would generalize for a wide set of instances.\n\nI understand the reasons not to include stronger baselines. This is however i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"Bgl7ngSI19b",
"jAzd8xWPhW",
"Ue-Z7sePz9M",
"ReMDdGe2oUT",
"d2zcM9eCdKo",
"nips_2022_HQDvPsdXS-F",
"h-AwFrtU9s",
"bF9D59g7QKs",
"rYtXjbEct_",
"ps6pGcxv5hD",
"nips_2022_HQDvPsdXS-F",
"nips_2022_HQDvPsdXS-F",
"nips_2022_HQDvPsdXS-F",
"nips_2022_HQDvPsdXS-F"
] |
nips_2022_S2Awu3Zn04v | Approximate Value Equivalence | Model-based reinforcement learning agents must make compromises about which aspects of the environment their models should capture.
The value equivalence (VE) principle posits that these compromises should be made considering the model's eventual use in value-based planning. Given sets of functions and policies, a model is said to be order-$k$ VE to the environment if $k$ applications of the Bellman operators induced by the policies produce the correct result when applied to the functions. Prior work investigated the classes of models induced by VE when we vary $k$ and the sets of policies and functions. This gives rise to a rich collection of topological relationships and conditions under which VE models are optimal for planning. Despite this effort, relatively little is known about the planning performance of models that fail to satisfy these conditions. This is due to the rigidity of the VE formalism, as classes of VE models are defined with respect to \textit{exact} constraints on their Bellman operators. This limitation gets amplified by the fact that such constraints themselves may depend on functions that can only be approximated in practice. To address these problems we propose approximate value equivalence (AVE), which extends the VE formalism by replacing equalities with error tolerances. This extension allows us to show that AVE models with respect to one set of functions are also AVE with respect to any other set of functions if we tolerate a high enough error. We can then derive bounds on the performance of VE models with respect to \textit{arbitrary sets of functions}. Moreover, AVE models more accurately reflect what can be learned by our agents in practice, allowing us to investigate previously unexplored tensions between model capacity and the choice of VE model class. In contrast to previous works, we show empirically that there are situations where agents with limited capacity should prefer to learn more accurate models with respect to smaller sets of functions over less accurate models with respect to larger sets of functions. | Accept | A key discussion point in the rebuttal phase was the practical use of the proposed bounds, which two of the three reviewers brought up. The authors in response added an additional section (Section 6) and experiment to address this concern. While some concerns regarding the practical use of these bounds remain, the authors have made a sufficiently convincing case in my view. Hence, I recommend acceptance. | train | [
"lAIEovDK53n",
"oOydCCKHbT",
"Pwd1KwRwlqUO",
"5Lr23hsBak",
"zGv2iJxJ8J",
"I75FB7UgET5",
"h10qktEe2SH",
"Img1fRIDdC",
"zl23edjuRe6",
"R_4G76nCgv"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the response to the issues I raised. Some additional notes:\n\n* Regarding providing some intuition on how the different propositions are established, I think that a 1-liner hint on how the result is established would be helpful in all, but - personally - I found that Propositions 2 and 3 ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"I75FB7UgET5",
"zGv2iJxJ8J",
"nips_2022_S2Awu3Zn04v",
"R_4G76nCgv",
"zl23edjuRe6",
"Img1fRIDdC",
"nips_2022_S2Awu3Zn04v",
"nips_2022_S2Awu3Zn04v",
"nips_2022_S2Awu3Zn04v",
"nips_2022_S2Awu3Zn04v"
] |
nips_2022_gQBetxnU4Lk | Learning-based Motion Planning in Dynamic Environments Using GNNs and Temporal Encoding | Learning-based methods have shown promising performance for accelerating motion planning, but mostly in the setting of static environments. For the more challenging problem of planning in dynamic environments, such as multi-arm assembly tasks and human-robot interaction, motion planners need to consider the trajectories of the dynamic obstacles and reason about temporal-spatial interactions in very large state spaces. We propose a GNN-based approach that uses temporal encoding and imitation learning with data aggregation for learning both the embeddings and the edge prioritization policies. Experiments show that the proposed methods can significantly accelerate online planning over state-of-the-art complete dynamic planning algorithms. The learned models can often reduce costly collision checking operations by more than 1000x, thus accelerating planning by up to 95%, while achieving high success rates on hard instances as well. | Accept | Robot motion planning in dynamic environments remains a significant problem. All reviewers consistently agree that the suggested GNN approach in this paper has useful merits, is of general interest, and that the paper is above the publication threshold. Detailed comments of the reviewers provide a good source for some fine-tuning improvements of the paper. | val | [
"sB58zPeVdsg",
"QEQIkC70Pxy5",
"m6xrguRAP5S",
"ZwJ2-642jz",
"OMgDUy4krZ",
"ec_Jfr12KLG",
"im6mbLdU66A",
"YUkI6rNpXjF",
"Se2eLywhK4I",
"eCKkci9wMmI"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the crucial questions and carefully reading our response! As also discussed in Appendix D, we think the topics on planning in more challenging environments and problems will be a great direction for our future work.",
" Thanks for the response! My questions are addressed.\n\nIt's inter... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"QEQIkC70Pxy5",
"im6mbLdU66A",
"ZwJ2-642jz",
"ec_Jfr12KLG",
"eCKkci9wMmI",
"Se2eLywhK4I",
"YUkI6rNpXjF",
"nips_2022_gQBetxnU4Lk",
"nips_2022_gQBetxnU4Lk",
"nips_2022_gQBetxnU4Lk"
] |
nips_2022_IJNDyqdRF0m | Decomposing NeRF for Editing via Feature Field Distillation | Emerging neural radiance fields (NeRF) are a promising scene representation for computer graphics, enabling high-quality 3D reconstruction and novel view synthesis from image observations.
However, editing a scene represented by a NeRF is challenging, as the underlying connectionist representations such as MLPs or voxel grids are not object-centric or compositional.
In particular, it has been difficult to selectively edit specific regions or objects.
In this work, we tackle the problem of semantic scene decomposition of NeRFs to enable query-based local editing of the represented 3D scenes.
We propose to distill the knowledge of off-the-shelf, self-supervised 2D image feature extractors such as CLIP-LSeg or DINO into a 3D feature field optimized in parallel to the radiance field.
Given a user-specified query of various modalities such as text, an image patch, or a point-and-click selection, 3D feature fields semantically decompose 3D space without the need for re-training, and enables us to semantically select and edit regions in the radiance field.
Our experiments validate that the distilled feature fields can transfer recent progress in 2D vision and language foundation models to 3D scene representations, enabling convincing 3D segmentation and selective editing of emerging neural graphics representations. | Accept | The paper proposes an approach for manipulating 3d scenes represented with implicit neural representations (NeRF-like), via distilling 2D feature extractors into a 3D feature field. The method shows convincing qualitative results on scene editing and promising quantitative results on semantic segmentation.
All reviewers are (to a different degree) positive about the paper, noting good presentation, interesting and fairly novel approach, fairly thorough and convincing results (it would be nice to have quantitative results on scene editing, but that's quite non-trivial).
Overall, this is an interesting and well-executed paper and I am happy to recommend acceptance. | train | [
"QJ_9bJe_Jzj",
"pHy6RHJgm9r",
"WaXoIDcNdB-",
"b_5taPMlLeHB",
"JybWGWzUz6A",
"yQMEyvQj45T",
"U1I2Gh2brQ5",
"bsTf83a8zk",
"100f6-fmQ4_",
"D96WSsH2zgR",
"WixSHEi4EsX",
"rVs6u0h7if-",
"80yT57QNMbf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I am happy to keep my original score and suggest acceptance.\n\n",
" Thanks for the Authors' response, which I believe clarifies many concerns of mine and other reviewers. Without additional concerns from other reviewers, I would keep my original score and suggest acceptance.",
" D... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
3
] | [
"yQMEyvQj45T",
"WaXoIDcNdB-",
"80yT57QNMbf",
"rVs6u0h7if-",
"WixSHEi4EsX",
"D96WSsH2zgR",
"bsTf83a8zk",
"100f6-fmQ4_",
"nips_2022_IJNDyqdRF0m",
"nips_2022_IJNDyqdRF0m",
"nips_2022_IJNDyqdRF0m",
"nips_2022_IJNDyqdRF0m",
"nips_2022_IJNDyqdRF0m"
] |
nips_2022_AXDNM76T1nc | Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos | Pretraining on noisy, internet-scale datasets has been heavily studied as a technique for training models with broad, general capabilities for text, images, and other modalities. However, for many sequential decision domains such as robotics, video games, and computer use, publicly available data does not contain the labels required to train behavioral priors in the same way. We extend the internet-scale pretraining paradigm to sequential decision domains through semi-supervised imitation learning wherein agents learn to act by watching online unlabeled videos. Specifically, we show that with a small amount of labeled data we can train an inverse dynamics model accurate enough to label a huge unlabeled source of online data -- here, online videos of people playing Minecraft -- from which we can then train a general behavioral prior. Despite using the native human interface (mouse and keyboard at 20Hz), we show that this behavioral prior has nontrivial zero-shot capabilities and that it can be fine-tuned, with both imitation learning and reinforcement learning, to hard-exploration tasks that are impossible to learn from scratch via reinforcement learning. For many tasks our models exhibit human-level performance, and we are the first to report computer agents that can craft diamond tools, which can take proficient humans upwards of 20 minutes (24,000 environment actions) of gameplay to accomplish. | Accept | The authors have introduced Video Pre-Training (VPT), a semi-supervised learning approach that allows relatively small volumes of labeled data to train an inverse-dynamics model that is subsequently applied to predict the action labels associated with a far larger, unlabeled dataset. They then train an agent in a supervised regime with respect to these labels to achieve strong performance in Minecraft, which requires reasoning over very long time horizons.
Overall there is clear consensus among the reviewers that this paper is novel, technically sound and of broad interest to the NeurIPS community. The authors have also proactively engaged with reviewer feedback to improve manuscript clarity. I am confident in recommending this paper for acceptance. | val | [
"O3fH2iikdrq",
"ugmzTcQH_P",
"1Igt61qqri9",
"5yP34P1JEx",
"t18CZqD-6qf",
"sTrUMktaBh9",
"155ufQFqd5L",
"eEBH3m1uqgGH",
"6t0iuUM7bQW",
"1NpQoYJQ5TD",
"EkPy2nZxVB0",
"1xlO8Wd1BVe",
"iB75rCF-ZsQ",
"wSuC-co5iWu0"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the answers to my questions, they are all satisfactory! \n\nOne minor suggestion: could you please discuss relations to a recent concurrent work, MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge (https://arxiv.org/abs/2206.08853)? It seems very complementary, and readers from... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
9,
9,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"1NpQoYJQ5TD",
"1Igt61qqri9",
"5yP34P1JEx",
"sTrUMktaBh9",
"6t0iuUM7bQW",
"155ufQFqd5L",
"wSuC-co5iWu0",
"iB75rCF-ZsQ",
"1xlO8Wd1BVe",
"EkPy2nZxVB0",
"nips_2022_AXDNM76T1nc",
"nips_2022_AXDNM76T1nc",
"nips_2022_AXDNM76T1nc",
"nips_2022_AXDNM76T1nc"
] |
nips_2022_se2oxj-6Nz | Rethinking Image Restoration for Object Detection | Although image restoration has achieved significant progress, its potential to assist object detectors in adverse imaging conditions lacks enough attention. It is reported that the existing image restoration methods cannot improve the object detector performance and sometimes even reduce the detection performance. To address the issue, we propose a targeted adversarial attack in the restoration procedure to boost object detection performance after restoration. Specifically, we present an ADAM-like adversarial attack to generate pseudo ground truth for restoration training. Resultant restored images are close to original sharp images, and at the same time, lead to better results of object detection. We conduct extensive experiments in image dehazing and low light enhancement and show the superiority of our method over conventional training and other domain adaptation and multi-task methods. The proposed pipeline can be applied to all restoration methods and detectors in both one- and two-stage. | Accept | In this paper, the authors provide an interesting formulation of an adversarial attack that can directly help object detector training in the presence of various degradations. This is a departure from the usual formulation of restoration, followed by detector training. I liked the initial derivation which is elegant and logical, and their experimental results show that mAP is clearly improved over baseline training. 2 of the 3 reviewers supported acceptance (with one being a strong accept). The third reviewer felt that the experimental improvements were insufficient. While I agree that the mAP improvements are not in the "wow" category, I still think the method is solid (as also accepted by third reviewer) and worthy of acceptance. | train | [
"fuwsUiydg5-",
"kpzunnZWqbj",
"vdpV8Aq1Kws",
"6wqFuXzs6PP",
"LZti962MB2V",
"3Yc2UowHJ-W",
"jG30-yMoGOB",
"QgGGjBgKvC",
"ASxWbk6izDi",
"QH5P1a8fRw1",
"4uGrwtiFq-w"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your issue. But we still want to emphasize the universality of our method. Our algorithm can extend to most of the restoration networks and detection networks. For most restoration tasks, e.g., haze removal (Table 2), low-light enhancement (Table 3), and the case with multiple degradations (the tabl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"kpzunnZWqbj",
"jG30-yMoGOB",
"nips_2022_se2oxj-6Nz",
"ASxWbk6izDi",
"4uGrwtiFq-w",
"QH5P1a8fRw1",
"QH5P1a8fRw1",
"ASxWbk6izDi",
"nips_2022_se2oxj-6Nz",
"nips_2022_se2oxj-6Nz",
"nips_2022_se2oxj-6Nz"
] |
nips_2022_Oq2bdIQQOIZ | On Privacy and Personalization in Cross-Silo Federated Learning | While the application of differential privacy (DP) has been well-studied in cross-device federated learning (FL), there is a lack of work considering DP and its implications for cross-silo FL, a setting characterized by a limited number of clients each containing many data subjects. In cross-silo FL, usual notions of client-level DP are less suitable as real-world privacy regulations typically concern the in-silo data subjects rather than the silos themselves. In this work, we instead consider an alternative notion of silo-specific sample-level DP, where silos set their own privacy targets for their local examples. Under this setting, we reconsider the roles of personalization in federated learning. In particular, we show that mean-regularized multi-task learning (MR-MTL), a simple personalization framework, is a strong baseline for cross-silo FL: under stronger privacy requirements, silos are incentivized to federate more with each other to mitigate DP noise, resulting in consistent improvements relative to standard baseline methods. We provide an empirical study of competing methods as well as a theoretical characterization of MR-MTL for mean estimation, highlighting the interplay between privacy and cross-silo data heterogeneity. Our work serves to establish baselines for private cross-silo FL as well as identify key directions of future work in this area. | Accept | The paper presents an analysis of item-level or sample-level DP with personalization in cross-silo federated learning.
The reviews are strongly divided with two recommending acceptance and two recommending rejection.
After reading all the reviews and the paper, I find the argument of the "accept" side stronger.
While the paper does not propose new algorithms, it presents a systematic analysis of existing methods that helps explain their properties. I believe this can be a more valuable contribution to the community than yet-another-new-algorithm.
That said, there are also important weaknesses, as noted by the reviewers. It is not clear how to select the optimal $\lambda$, especially in the non-convex setting with no theory as a guide. It is therefore not clear how useful the method would be in actual application where suitable $\lambda$ would have to be found somehow. This seems like an obvious topic of future work. A broader assessment with more data sets and some actual algorithm to select $\lambda$ would clearly strengthen the paper.
Some reviewers were confused by the proposed definition for sample-level cross-silo DP. I would strongly encourage the authors to use a part of the extra space for camera ready for writing this definition explicitly to avoid such confusion. | train | [
"LwzUOEemOtZ",
"cVsqGO-RX0",
"UhgUTt0t1c0",
"uGnVZ4MNHl6",
"ykcCtAZZID",
"n7RKftXKEQ8",
"20rH0GEI63z",
"QkslmvkMirY",
"rMtUPaS_fOR",
"cOZla78PwcO",
"U2m767jL7Ku",
"G_dq5I2DaEX",
"mPKqadLVyHC",
"6D3pMlYY4H0",
"r0B9lsjVU_k"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I tend to agree with the authors that the main weakness I mentioned (discussion and claims about DP) is due to writing clarity, and trust that the authors will fix these issues for the next version of the paper. I will therefore raise my score to recommend accepting the paper: I think the main point about the nee... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"uGnVZ4MNHl6",
"nips_2022_Oq2bdIQQOIZ",
"nips_2022_Oq2bdIQQOIZ",
"r0B9lsjVU_k",
"r0B9lsjVU_k",
"6D3pMlYY4H0",
"6D3pMlYY4H0",
"G_dq5I2DaEX",
"nips_2022_Oq2bdIQQOIZ",
"mPKqadLVyHC",
"mPKqadLVyHC",
"nips_2022_Oq2bdIQQOIZ",
"nips_2022_Oq2bdIQQOIZ",
"nips_2022_Oq2bdIQQOIZ",
"nips_2022_Oq2bdIQ... |