| paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29-10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_fhO6vCGuuag | On the inability of Gaussian process regression to optimally learn compositional functions | We rigorously prove that deep Gaussian process priors can outperform Gaussian process priors if the target function has a compositional structure. To this end, we study information-theoretic lower bounds for posterior contraction rates for Gaussian process regression in a continuous regression model. We show that if the true function is a generalized additive function, then the posterior based on any mean-zero Gaussian process can only recover the truth at a rate that is strictly slower than the minimax rate by a factor that is polynomially suboptimal in the sample size $n$. | Accept | The reviewers unanimously agree that the theory here exhibiting a particular case where Gaussian process priors are inferior to deep Gaussian processes is interesting, and furthermore that the proof techniques themselves are novel. Indeed, reviewers had minimal or no substantial concerns about the paper, and most of the questions asked by reviewers txpX and sPbe read as simple follow up questions that the authors may choose to include discussion on. | train | [
"518ec8iJz2O",
"MsxoNJqWU2Q",
"IzIWEARD0c3",
"t5RY0cjPPLi",
"l4v46ytwvtg",
"m2fdwXc9h7o"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the positive assessment of our work and kind words.",
" Thank you for the constructive suggestions and helpful comments. In reply to your comments:\n\n1. The symbol $L$ is overloaded to imply both Lipschitz constant and the $L^2$ space.\n\nThe Lipschitz constant has been changed to $\\Lambda.$\n\n... | [
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
5,
4,
3
] | [
"m2fdwXc9h7o",
"l4v46ytwvtg",
"t5RY0cjPPLi",
"nips_2022_fhO6vCGuuag",
"nips_2022_fhO6vCGuuag",
"nips_2022_fhO6vCGuuag"
] |
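
The six list-valued columns in each row are parallel arrays: position i of `review_ids`, `review_writers`, `review_contents`, `review_ratings`, `review_confidences`, and `review_reply_tos` all describe the same forum post, with `-1` serving as a sentinel rating/confidence for unscored posts such as author replies. Below is a minimal sketch of regrouping one row into per-post records; it assumes the row has already been parsed into a Python dict keyed by the column names, and `ForumPost`/`regroup` are illustrative names, not part of the dataset.

```python
from dataclasses import dataclass

@dataclass
class ForumPost:
    post_id: str
    writer: str       # "author" or "official_reviewer"
    content: str      # the post text (truncated in this preview)
    rating: int       # -1 marks an unscored post (e.g., an author reply)
    confidence: int   # -1 marks an unscored post
    reply_to: str     # another post_id, or the paper_id for top-level posts

def regroup(row: dict) -> list[ForumPost]:
    """Zip one row's six parallel list columns into per-post records."""
    return [
        ForumPost(*fields)
        for fields in zip(
            row["review_ids"],
            row["review_writers"],
            row["review_contents"],
            row["review_ratings"],
            row["review_confidences"],
            row["review_reply_tos"],
        )
    ]
```

For the row above, filtering `regroup(row)` to posts with `rating != -1` leaves exactly the three official reviews, with ratings (7, 7, 7) and confidences (5, 4, 3).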
nips_2022_jHIn0U9U6RO | Understanding the Eluder Dimension | We provide new insights on eluder dimension, a complexity measure that has been extensively used to bound the regret of algorithms for online bandits and reinforcement learning with function approximation. First, we study the relationship between the eluder dimension for a function class and a generalized notion of \emph{rank}, defined for any monotone ``activation'' $\sigma : \mathbb{R}\to \mathbb{R}$, which corresponds to the minimal dimension required to represent the class as a generalized linear model. It is known that when $\sigma$ has derivatives bounded away from $0$, $\sigma$-rank gives rise to an upper bound on eluder dimension for any function class; we show however that eluder dimension can be exponentially smaller than $\sigma$-rank. We also show that the condition on the derivative is necessary; namely, when $\sigma$ is the $\mathsf{relu}$ activation, the eluder dimension can be exponentially larger than $\sigma$-rank. For Boolean-valued function classes, we obtain a characterization of the eluder dimension in terms of star number and threshold dimension, quantities which are relevant in active learning and online learning respectively. | Accept | All reviewers and the AC believe this paper is a valuable contribution to the theoretical understanding of reinforcement learning. | train | [
"1Y73ARmCdji",
"_7SiqlpdJ07",
"W2MvIQHDJf5",
"5z9oNa2oGSV5",
"YMDxWPn6jG3",
"uDC8rPWmAO",
"uTnscUU_nxt",
"vX_lCYP21AC"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Makes sense! Thanks for your explanation.",
" Answering the reviewers questions:\n1. **Does comparing $\\sigma$-rank and eluder dimension help us understand when eluder dimension is bounded? What is the consequence of eluder being exponentially smaller than $\\sigma$-rank?** Yes, understanding the connection be... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"5z9oNa2oGSV5",
"W2MvIQHDJf5",
"vX_lCYP21AC",
"uTnscUU_nxt",
"uDC8rPWmAO",
"nips_2022_jHIn0U9U6RO",
"nips_2022_jHIn0U9U6RO",
"nips_2022_jHIn0U9U6RO"
] |
nips_2022_jcIIVkbCaHO | Pessimism for Offline Linear Contextual Bandits using $\ell_p$ Confidence Sets | We present a family $\{\widehat{\pi}_p\}_{p\ge 1}$ of pessimistic learning rules for offline learning of linear contextual bandits, relying on confidence sets with respect to different $\ell_p$ norms, where $\widehat{\pi}_2$ corresponds to Bellman-consistent pessimism (BCP), while $\widehat{\pi}_\infty$ is a novel generalization of lower confidence bound (LCB) to the linear setting. We show that the novel $\widehat{\pi}_\infty$ learning rule is, in a sense, adaptively optimal, as it achieves the minimax performance (up to log factors) against all $\ell_q$-constrained problems, and as such it strictly dominates all other predictors in the family, including $\widehat{\pi}_2$. | Accept | The reviewers are in agreement that this paper provides a minimax optimal solution to the problem of offline linear contextual bandits. This new family of learning rules beat state of the art approaches and provide a unified view on existing approaches, such as Lower Confidence Bound and Bellman-Consistent Pessimism. The theoretical results are backed by reasonable numerical simulations. Accept. | train | [
"BISWZ0opbjt",
"AGMInIajTZH",
"nMaRL4loyv",
"I2pKmfnpCgYh",
"M21UENsOQW",
"qauVJt3sU2s",
"6prQjRVcK7E",
"cG7mVIKAtzA",
"lCMhu4FgAAO",
"Qwvf2RbLiq_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the clarification and my question is well-addressed. I believe this is a good work and I'll thus keep my score.",
" I thank the authors for their response. They address my concern about the theoretical contribution of the work. But I still doubt its real-world applicability since real-wo... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
2,
3
] | [
"I2pKmfnpCgYh",
"nMaRL4loyv",
"Qwvf2RbLiq_",
"lCMhu4FgAAO",
"cG7mVIKAtzA",
"6prQjRVcK7E",
"nips_2022_jcIIVkbCaHO",
"nips_2022_jcIIVkbCaHO",
"nips_2022_jcIIVkbCaHO",
"nips_2022_jcIIVkbCaHO"
] |
nips_2022_sc7bBHAmcN | Understanding and Extending Subgraph GNNs by Rethinking Their Symmetries | Subgraph GNNs are a recent class of expressive Graph Neural Networks (GNNs) which model graphs as collections of subgraphs. So far, the design space of possible Subgraph GNN architectures as well as their basic theoretical properties are still largely unexplored. In this paper, we study the most prominent form of subgraph methods, which employs node-based subgraph selection policies such as ego-networks or node marking and deletion. We address two central questions: (1) What is the upper-bound of the expressive power of these methods? and (2) What is the family of equivariant message passing layers on these sets of subgraphs?. Our first step in answering these questions is a novel symmetry analysis which shows that modelling the symmetries of node-based subgraph collections requires a significantly smaller symmetry group than the one adopted in previous works. This analysis is then used to establish a link between Subgraph GNNs and Invariant Graph Networks (IGNs). We answer the questions above by first bounding the expressive power of subgraph methods by 3-WL, and then proposing a general family of message-passing layers for subgraph methods that generalises all previous node-based Subgraph GNNs. Finally, we design a novel Subgraph GNN dubbed SUN, which theoretically unifies previous architectures while providing better empirical performance on multiple benchmarks. | Accept | This paper studies the recent hot topic in GNN, namely subgraph-based GNNs which apply GNN to each node-centered subgraph copy of the original graph instead of directly applying GNN to the full graph. These GNNs were shown to be more expressive than 1-WL but were unknown in terms of their upper bound of expressive power. This paper shows that all these subgraph-based GNNs, including Nested GNN, ID-GNN, reconstruction GNN, GNN-AK etc., can be implemented by 3-IGN which is upper bounded by 3-WL, thus giving an upper bound to subgraph-based GNNs' expressive power. The novel perspective that views subgraphs as an additional tensor dimension which is also equivariant to node permutation is very insightful, and is the key to the 3-IGN implementations. Overall, I believe this paper is of great theoretical contribution to the GNN community and opens up some new design space. | train | [
"Mx071i5o4lS",
"uC7UdHJd_3T",
"YYoaflKIENe",
"F7ymMFHJtZn",
"pLygWmGGkY",
"nF95jDan9Q7",
"9-zmAvBbu47",
"NPzrQvkmQv",
"WI3vsIIQPsI",
"sznMeQVAz5Z",
"vOBDwssPglD"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We kindly bring the attention of the Reviewers to a new manuscript revision we have just uploaded. The revision implements the additions discussed in the previous general comment and in specific responses to Reviewers.\n\nChanges are visually signalled in _blue_; they include:\n- A more thorough and detailed intr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"nips_2022_sc7bBHAmcN",
"nips_2022_sc7bBHAmcN",
"vOBDwssPglD",
"pLygWmGGkY",
"sznMeQVAz5Z",
"WI3vsIIQPsI",
"NPzrQvkmQv",
"nips_2022_sc7bBHAmcN",
"nips_2022_sc7bBHAmcN",
"nips_2022_sc7bBHAmcN",
"nips_2022_sc7bBHAmcN"
] |
nips_2022_e3qH65r_eZS | Semi-supervised Semantic Segmentation with Prototype-based Consistency Regularization | Semi-supervised semantic segmentation requires the model to effectively propagate the label information from limited annotated images to unlabeled ones. A challenge for such a per-pixel prediction task is the large intra-class variation, i.e., regions belonging to the same class may exhibit a very different appearance even in the same picture. This diversity will make the label propagation hard from pixels to pixels. To address this problem, we propose a novel approach to regularize the distribution of within-class features to ease label propagation difficulty. Specifically, our approach encourages the consistency between the prediction from a linear predictor and the output from a prototype-based predictor, which implicitly encourages features from the same pseudo-class to be close to at least one within-class prototype while staying far from the other between-class prototypes. By further incorporating CutMix operations and a carefully-designed prototype maintenance strategy, we create a semi-supervised semantic segmentation algorithm that demonstrates superior performance over the state-of-the-art methods from extensive experimental evaluation on both Pascal VOC and Cityscapes benchmarks. | Accept | This paper proposes a teacher-student scheme for semi-supervised semantic segmentation. A consistency regularization is setup between a prototypical classifier and a linear classifier and different augmentation degrees (weak vs. strong) are applied to the teacher and student networks. On the positive side, the reviewers have found the ideas in this paper simple and strong in practice and they have indicated that the proposed setting is interesting. While the novelty of this paper may seem incremental since consistency regularization, in general, is heavily explored in semi-supervised training, the proposed setting is new for the semantic segmentation problem. One of the main criticisms of this submission is that it consists of many moving parts that are not well motivated and how they are orchestrated during training is missing from the original submission. After careful discussion, I believe that the merits of this submission outweigh the issues, and I am happy to recommend this paper for acceptance.
Last but not least, I strongly recommend that the authors bring the algorithms into the main paper (if possible), provide additional implementation details, and make their code publicly available. | test | [
"O_dJihF5AF-",
"GZo_FzF6ZNa",
"QyyJDE5S613",
"NpQz5rlNWvq",
"TGM5e45yytT",
"WnSiDnsCW3R",
"o8632XKKleq",
"xZkgjt-8iDL",
"DrZWi_TXtlQ",
"H6gEh6mToeB",
"CtJgMj2aW2-",
"2jf6M6YbL0O",
"lxX42O3_RvD",
"KXM6dSoNRTq",
"dK-Td-AfT4-",
"uCkPpy5gtjd"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the author responses to questions raised by other reviewers, and decided to further increase the rating. I am highly impressed by the simplicity of this approach (à la x-Match style works in SSL), and very surprised that this simple method can achieve the results it does. This line of thought deserves... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"GZo_FzF6ZNa",
"WnSiDnsCW3R",
"lxX42O3_RvD",
"KXM6dSoNRTq",
"dK-Td-AfT4-",
"xZkgjt-8iDL",
"nips_2022_e3qH65r_eZS",
"uCkPpy5gtjd",
"uCkPpy5gtjd",
"dK-Td-AfT4-",
"KXM6dSoNRTq",
"lxX42O3_RvD",
"nips_2022_e3qH65r_eZS",
"nips_2022_e3qH65r_eZS",
"nips_2022_e3qH65r_eZS",
"nips_2022_e3qH65r_eZ... |
nips_2022_5L-wxm0YLcZ | CoupAlign: Coupling Word-Pixel with Sentence-Mask Alignments for Referring Image Segmentation | Referring image segmentation aims at localizing all pixels of the visual objects described by a natural language sentence. Previous works learn to straightforwardly align the sentence embedding and pixel-level embedding for highlighting the referred objects, but ignore the semantic consistency of pixels within the same object, leading to incomplete masks and localization errors in predictions. To tackle this problem, we propose CoupAlign, a simple yet effective multi-level visual-semantic alignment method, to couple sentence-mask alignment with word-pixel alignment to enforce object mask constraint for achieving more accurate localization and segmentation. Specifically, the Word-Pixel Alignment (WPA) module performs early fusion of linguistic and pixel-level features in intermediate layers of the vision and language encoders. Based on the word-pixel aligned embedding, a set of mask proposals are generated to hypothesize possible objects. Then in the Sentence-Mask Alignment (SMA) module, the masks are weighted by the sentence embedding to localize the referred object, and finally projected back to aggregate the pixels for the target. To further enhance the learning of the two alignment modules, an auxiliary loss is designed to contrast the foreground and background pixels. By hierarchically aligning pixels and masks with linguistic features, our CoupAlign captures the pixel coherence at both visual and semantic levels, thus generating more accurate predictions. Extensive experiments on popular datasets (e.g., RefCOCO and G-Ref) show that our method achieves consistent improvements over state-of-the-art methods, e.g., about 2% oIoU increase on the validation and testing set of RefCOCO. Especially, CoupAlign has remarkable ability in distinguishing the target from multiple objects of the same class. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/CoupAlign. | Accept | The paper was reviewed by four reviewers and received all positive scores at the end: 2 x Borderline Accepts and 2 x Weak Accepts. Most initial concerns with the paper were with exposition and experimental validation. These concerns, however, were addressed convincingly during the rebuttal period with additional experiments and ablations, as well as direct edits to the manuscript itself. In the current form the paper would be a valuable contribution to NeurIPS program. | train | [
"Rg8LYtmTkIC",
"SOaYDpPKvL",
"zT0RE0-WyH",
"Jzcf1vWZT_h",
"avTqqobEXZv",
"UMuFrU0gPv5",
"yB6W1dO7OiM",
"4micdEBIcDm",
"-wLYglob9v3",
"SbHbNz-BrcS",
"tHmkXJ1TFfF",
"Kxu5HXiEMr-",
"eTZ3yEH8DS",
"voy10ZiwcrY",
"stUL42Asjk-",
"tDT8k4vmxaV",
"PoGEGBimkU",
"FmnonUmP3hh",
"Qmz6SSE-XkK",... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Dear reviewer,\n\nThank you very much for your support!\n",
" Thanks for solving my concerns with new experiments and detailed explanation. And I am glad to raise my rating up for accepting the paper.",
" Dear reviewer,\n\nThank you very much for your support!",
" Thanks for the new experiments on comparing... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"SOaYDpPKvL",
"4micdEBIcDm",
"Jzcf1vWZT_h",
"UMuFrU0gPv5",
"yB6W1dO7OiM",
"-wLYglob9v3",
"stUL42Asjk-",
"SbHbNz-BrcS",
"voy10ZiwcrY",
"Kxu5HXiEMr-",
"nips_2022_5L-wxm0YLcZ",
"eTZ3yEH8DS",
"SufrqIMEU-",
"Qmz6SSE-XkK",
"FmnonUmP3hh",
"PoGEGBimkU",
"nips_2022_5L-wxm0YLcZ",
"nips_2022_... |
nips_2022_noyKGZYvHH | coVariance Neural Networks | Graph neural networks (GNN) are an effective framework that exploit inter-relationships within graph-structured data for learning. Principal component analysis (PCA) involves the projection of data on the eigenspace of the covariance matrix and draws similarities with the graph convolutional filters in GNNs. Motivated by this observation, we study a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs. We theoretically establish the stability of VNNs to perturbations in the covariance matrix, thus, implying an advantage over standard PCA-based data analysis approaches that are prone to instability due to principal components associated with close eigenvalues. Our experiments on real-world datasets validate our theoretical results and show that VNN performance is indeed more stable than PCA-based statistical approaches. Moreover, our experiments on multi-resolution datasets also demonstrate that VNNs are amenable to transferability of performance over covariance matrices of different dimensions; a feature that is infeasible for PCA-based approaches. | Accept | This paper proposes coVariance neural networks (VNN), a new graph neural network architecture that is more robust to perturbations in the covariance matrix. Most reviewers liked the new architecture, as the intuition is clearly presented and the experimental results are interesting (in particular the results demonstrating multi-scale transferability). There are some concerns that this new architecture can be viewed as a more direct modification of GNNs; I recommend that the authors clarify this relationship and emphasize the motivation. | train | [
"pR2roJ-6v1K",
"ciM0a9HDR9S",
"7lefPejZhnKx",
"uX5nJ0eSz65s",
"uPQ7XFRhCnk6",
"ha_L3DK_9_g1",
"O5GeMELKAA",
"Byfa96xBZVs",
"F1acbaHS4EW"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for considering our previous response. We address further concerns raised by the reviewer as follows.\n\n>*The analogy of VNNs to GNNs with CNN vs GNN does not quite hold since graph convolutions are SIGNIFICANTLY different from image convolutions.*\n\nPlease note that there exists a rich literature in ... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"ciM0a9HDR9S",
"uPQ7XFRhCnk6",
"O5GeMELKAA",
"Byfa96xBZVs",
"F1acbaHS4EW",
"F1acbaHS4EW",
"nips_2022_noyKGZYvHH",
"nips_2022_noyKGZYvHH",
"nips_2022_noyKGZYvHH"
] |
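
The `review_reply_tos` column encodes the forum's thread structure: an entry equal to the paper id marks a top-level post, while any other entry names the post being answered. A short sketch of rebuilding that tree follows, under the same assumption that rows are Python dicts; the helper names are illustrative.

```python
from collections import defaultdict

def build_thread_tree(row: dict) -> dict:
    """Map each parent id (paper_id or post_id) to the ids replying to it."""
    children = defaultdict(list)
    for post_id, parent in zip(row["review_ids"], row["review_reply_tos"]):
        children[parent].append(post_id)
    return children

def print_thread(children: dict, root: str, depth: int = 0) -> None:
    """Depth-first walk from the paper id: reviews first, replies nested."""
    for post_id in children.get(root, []):
        print("  " * depth + post_id)
        print_thread(children, post_id, depth + 1)
```

Applied to the coVariance Neural Networks row above, the children of "nips_2022_noyKGZYvHH" are the three reviews ("O5GeMELKAA", "Byfa96xBZVs", "F1acbaHS4EW"), with the author responses and reviewer follow-ups nested beneath them.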
nips_2022_kB9jrZDenff | Unsupervised Cross-Task Generalization via Retrieval Augmentation | Humans can perform unseen tasks by recalling relevant skills acquired previously and then generalizing them to the target tasks, even if there is no supervision at all. In this paper, we aim to improve this kind of cross-task generalization ability of massive multi-task language models, such as T0 and FLAN, in an unsupervised setting. We propose a retrieval-augmentation method named ReCross that takes a few unlabelled examples as queries to retrieve a small subset of upstream data and uses them to update the multi-task model for better generalization. ReCross is a straightforward yet effective retrieval method that combines both efficient dense retrieval and effective pair-wise reranking. Our results and analysis show that it significantly outperforms both non-retrieval methods and other baseline methods. | Accept | This paper presents an approach called ReCross that improves zero-shot task performance by retrieving and fine-tuning on examples of similar supervised tasks. This method is shown to help multi-task finetuned models when evaluated zero-shot on novel tasks.
The interesting finding of the paper is that fine-tuning on relevant examples from different but possibly related tasks can help. This finding can help researchers in the areas of zero-shot learning and multitask models.
Otherwise, the method, although conceptually simple, includes significant additional machinery, which likely makes it practically difficult to use, as the reviewers point out. Similarly, the relative contribution of the re-ranking step seems small, and the step appears to add significant complexity. As one of the reviewers points out, the paper and the method may be clearer without that step.
The review process included a lengthy and productive discussion, which helped the paper clarify and improve on several points. As a result, two of the reviewers increased their scores. There is now consensus among the three reviewers that the paper should be accepted.
| train | [
"APpb-aofGO",
"U7Cvnnei0JH",
"veRU2Qckoax",
"BXB1OEJt5To",
"i2qNx3irkA",
"13LUkU81gd",
"C81PrmuHV91",
"cyrJSSD0zTp",
"e7TLPAfvo55Y",
"IdzEMFLyPYx",
"si2TefgGDf",
"8zHjw6dxseu",
"J6y02YWJdA",
"a1AesqcUEA",
"n6vt8rRpASL",
"pV4bG3CeeiWR"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your detailed reply and raised score! \n\nWe will revise the final version accordingly based on these valuable suggestions and comments. Specifically, we will reframe the introduction of the reranker such that we have more space to add our analysis to the main paper. We will also rephrase ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"veRU2Qckoax",
"i2qNx3irkA",
"8zHjw6dxseu",
"J6y02YWJdA",
"si2TefgGDf",
"e7TLPAfvo55Y",
"cyrJSSD0zTp",
"a1AesqcUEA",
"n6vt8rRpASL",
"nips_2022_kB9jrZDenff",
"pV4bG3CeeiWR",
"n6vt8rRpASL",
"a1AesqcUEA",
"nips_2022_kB9jrZDenff",
"nips_2022_kB9jrZDenff",
"nips_2022_kB9jrZDenff"
] |
nips_2022_WESmKHEH5nJ | Fast Stochastic Composite Minimization and an Accelerated Frank-Wolfe Algorithm under Parallelization | We consider the problem of minimizing the sum of two convex functions. One of those functions has Lipschitz-continuous gradients, and can be accessed via stochastic oracles, whereas the other is ``simple''. We provide a Bregman-type algorithm with accelerated convergence in function values to a ball containing the minimum. The radius of this ball depends on problem-dependent constants, including the variance of the stochastic oracle. We further show that this algorithmic setup naturally leads to a variant of Frank-Wolfe achieving acceleration under parallelization. More precisely, when minimizing a smooth convex function on a bounded domain, we show that one can achieve an $\epsilon$ primal-dual gap (in expectation) in $\tilde{O}(1 /\sqrt{\epsilon})$ iterations, by only accessing gradients of the original function and a linear maximization oracle with $O(1 / \sqrt{\epsilon})$ computing units in parallel. We illustrate this fast convergence on synthetic numerical experiments. | Accept | The authors design an algorithm for composite stochastic optimization that leverages both smoothness and strong convexity with respect to the same (general) norm, using a stochastic counterpart to recent work by Diakonikolas and Guzman. They then show how to leverage this algorithm and randomized smoothing in order to create an algorithm for constrained smooth convex optimization based on exact gradient evaluations and linear optimization computations. Compared to Frank-Wolfe, the algorithm requires strictly less gradient evaluations and parallelizes the same amount of linear optimization computations.
The paper received generally favorable reviews, with the exception of reviewer 3QVT who did not engage in discussion and whose critique I found unclear. I agree with reviewer rQnJ’s assessment that even though “all the building block are quite known in optimization community (accelerated methods, duality, Bregman distances, smoothing, etc.), the whole approach fits perfectly together and provides the reader with a number of nice and useful observations.” Consequently, I recommend acceptance. | train | [
"iWkpb1DImCP",
"ErysE6CfxxJ",
"qDx_aOY0mAl",
"1ZWIsjeMc9i",
"7PE69WHbwvD",
"rSF_B8AvVdxN",
"Z7gBYHbavy",
"QGjH3nVq5G",
"nEDRpQkusGe",
"Ixuan0zTW6S"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear AC,\n\nThank you for your question. We are replying now, as soon as possible after your question, because you asked this directly to us, expecting an answer.\n\nThe work that you reference does not put in question our novelty claims, as explained in the following points. Overall, there are indeed many papers... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"ErysE6CfxxJ",
"nips_2022_WESmKHEH5nJ",
"nips_2022_WESmKHEH5nJ",
"rSF_B8AvVdxN",
"Ixuan0zTW6S",
"nEDRpQkusGe",
"QGjH3nVq5G",
"nips_2022_WESmKHEH5nJ",
"nips_2022_WESmKHEH5nJ",
"nips_2022_WESmKHEH5nJ"
] |
nips_2022_CTqjKUAyRBt | Sampling without Replacement Leads to Faster Rates in Finite-Sum Minimax Optimization | We analyze the convergence rates of stochastic gradient algorithms for smooth finite-sum minimax optimization and show that, for many such algorithms, sampling the data points \emph{without replacement} leads to faster convergence compared to sampling with replacement. For the smooth and strongly convex-strongly concave setting, we consider gradient descent ascent and the proximal point method, and present a unified analysis of two popular without-replacement sampling strategies, namely \emph{Random Reshuffling} (RR), which shuffles the data every epoch, and \emph{Single Shuffling} or \emph{Shuffle Once} (SO), which shuffles only at the beginning. We obtain tight convergence rates for RR and SO and demonstrate that these strategies lead to faster convergence than uniform sampling. Moving beyond convexity, we obtain similar results for smooth nonconvex-nonconcave objectives satisfying a two-sided Polyak-\L{}ojasiewicz inequality. Finally, we demonstrate that our techniques are general enough to analyze the effect of \emph{data-ordering attacks}, where an adversary manipulates the order in which data points are supplied to the optimizer. Our analysis also recovers tight rates for the \emph{incremental gradient} method, where the data points are not shuffled at all. | Accept | All reviewers acknowledge that the paper fills a gap in the literature, with good results for a wide variety of settings. | train | [
"EeKYai0kES3",
"qVoUpUUgZSt",
"l8ZEKtymr34",
"_St1Vl_8gxQ",
"0bWG4Exgqfkg",
"gkdWiXksUH",
"L-LWu-5bkoc",
"AkX-rBHeFJg",
"dC3jjldRQ2XJ",
"4bDVik7dbsk",
"4iZ8x1lGYT",
"RWSHDIxuoMA",
"EKoV0tupZ-N",
"hfFZuMurO_j",
"v6Eaewj9L15"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for answering my questions. I will keep my score.",
" Thanks for the authors' response!\n\nI am not fully convinced about the technical novelty here. As Reviewer R12f pointed out, \"the main difficulty (or let's say the main difference to the existing analysis of RR) lies in rewriting the G... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"dC3jjldRQ2XJ",
"gkdWiXksUH",
"4iZ8x1lGYT",
"L-LWu-5bkoc",
"EKoV0tupZ-N",
"EKoV0tupZ-N",
"v6Eaewj9L15",
"v6Eaewj9L15",
"hfFZuMurO_j",
"RWSHDIxuoMA",
"RWSHDIxuoMA",
"nips_2022_CTqjKUAyRBt",
"nips_2022_CTqjKUAyRBt",
"nips_2022_CTqjKUAyRBt",
"nips_2022_CTqjKUAyRBt"
] |
nips_2022_AQgmyyEWg8 | Beyond spectral gap: the role of the topology in decentralized learning | In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster. We consider the setting in which all workers sample from the same dataset, and communicate over a sparse graph (decentralized). In this setting, current theory fails to capture important aspects of real-world behavior. First, the ‘spectral gap’ of the communication graph is not predictive of its empirical performance in (deep) learning. Second, current theory does not explain that collaboration enables larger learning rates than training alone. In fact, it prescribes smaller learning rates, which further decrease as graphs become larger, failing to explain convergence in infinite graphs. This paper aims to paint an accurate picture of sparsely-connected distributed optimization when workers share the same data distribution. We quantify how the graph topology influences convergence in a quadratic toy problem and provide theoretical results for general smooth and (strongly) convex objectives. Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies. | Accept | The paper studies decentralized optimization and considers the setting where all machines work on data that follow the same distribution. Most of the reviewers think the paper is interesting. I recommend acceptance. | val | [
"zOaPDkl66iZ",
"NNhaZrn0RR",
"k2QUbbDHgcy",
"dzHnMoIFXGP",
"vd1W4bueNg",
"UBcAFuXEM9",
"vUrouopdKk",
"isgveAfLAA6",
"58ZUzTsND67"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your detailed responses, which have answered most of my questions.",
" Thank you for your quick reply.\nWe agree with all your concrete suggestions on clarity and typos, and are very grateful for your in-depth review and contributions to the quality of the paper.\n\nFor the initial rebuttal, we had a... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"vd1W4bueNg",
"k2QUbbDHgcy",
"dzHnMoIFXGP",
"58ZUzTsND67",
"isgveAfLAA6",
"vUrouopdKk",
"nips_2022_AQgmyyEWg8",
"nips_2022_AQgmyyEWg8",
"nips_2022_AQgmyyEWg8"
] |
nips_2022_F02H1zNl213 | Are GANs overkill for NLP? | This work offers a novel theoretical perspective on why, despite numerous attempts, adversarial approaches to generative modeling (e.g., GANs) have not been as successful for certain generation tasks, particularly sequential tasks such as Natural Language Generation, as they have in others, such as Computer Vision. In particular, on sequential data such as text, maximum-likelihood approaches are significantly more utilized than GANs. We show that, while it may seem that maximizing likelihood is inherently different than minimizing distinguishability, this distinction is largely an artifact of the limited representational capacity of the model family, for a wide class of adversarial objectives. We give a theoretical model in which minimizing KL-divergence (i.e., maximizing likelihood) is a more efficient approach to effectively minimizing the same distinguishability criteria that adversarial models seek to optimize. Reductions show that minimizing distinguishability can be seen as simply boosting likelihood for certain families of models including n-gram models and neural networks with a softmax output layer. To achieve a full polynomial-time reduction, a novel next-token distinguishability model is considered. Some preliminary empirical evidence is also provided to substantiate our theoretical analyses. | Accept | In the context of text generation, the paper gives a theoretical argument that GAN objectives are equivalent to maximum-likelihood training when the generator and discriminator families are 'paired'. Reviewers generally felt that the perspective was interesting (broM, jtPN, UUT1) and the theory was insightful (jtPN, UUT1). Reviewer vzAW raises the concern that the original draft of this paper overclaimed throughout, but I feel this has been addressed well enough in a revision. Reviewers broM and vzAW felt empirical validation was lacking, but since the paper's focus is clearly theoretical I don't see this as preventing acceptance. Overall this paper is borderline but I feel that it's interesting enough to merit acceptance despite flaws.
| train | [
"Mq3kQWRq1d",
"sOzTGMS-38c",
"SlAYeZml-I",
"uxAtqMWEfrf",
"p04cmF4tDf",
"TGFcbjg0l35",
"hZTy743skt",
"bF2_esz-SCQ",
"MHUL-cGysRU",
"53f5njtiCFn",
"SweKBcLxj-2",
"GAMh_ylN_b",
"GsjarHqw5NU",
"JRddaGGnsW",
"SlWLBI9qdbG",
"A6XcqAAFTc",
"c3oaoTwpKEh",
"vIAu3eGC2A6",
"7qjsqB2K95k",
... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response! Our purpose is to attract greater attention of the community to an important area of research, and we believe the conceptual and mathematical contributions of this paper would be broadly useful in the context of generative models. We've edited the paper at several places including t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
3
] | [
"sOzTGMS-38c",
"vIAu3eGC2A6",
"MHUL-cGysRU",
"hZTy743skt",
"MHUL-cGysRU",
"SlWLBI9qdbG",
"GsjarHqw5NU",
"nips_2022_F02H1zNl213",
"53f5njtiCFn",
"A6XcqAAFTc",
"GAMh_ylN_b",
"GsjarHqw5NU",
"9wCI82UNEFb",
"7qjsqB2K95k",
"c3oaoTwpKEh",
"vIAu3eGC2A6",
"nips_2022_F02H1zNl213",
"nips_2022... |
nips_2022_xONqm0NUJc | Relational Proxies: Emergent Relationships as Fine-Grained Discriminators | Fine-grained categories that largely share the same set of parts cannot be discriminated based on part information alone, as they mostly differ in the way the local parts relate to the overall global structure of the object. We propose Relational Proxies, a novel approach that leverages the relational information between the global and local views of an object for encoding its semantic label. Starting with a rigorous formalization of the notion of distinguishability between fine-grained categories, we prove the necessary and sufficient conditions that a model must satisfy in order to learn the underlying decision boundaries in the fine-grained setting. We design Relational Proxies based on our theoretical findings and evaluate it on seven challenging fine-grained benchmark datasets and achieve state-of-the-art results on all of them, surpassing the performance of all existing works with a margin exceeding 4% in some cases. We also experimentally validate our theory on fine-grained distinguishability and obtain consistent results across multiple benchmarks. Implementation is available at https://github.com/abhrac/relational-proxies.
| Accept | This paper proposes a novel approach for fine-grained image recognition, which utilizes the relational information between the global and local views of an object. It is a reasonable and important finding that not only representing local parts but relating them are critical to establishing superior performance. The authors validate their proposal’s effectiveness with both theoretical explanations and positive empirical results on various benchmarks. The authors also did a great job in rebuttal. They provide more clarifications, extra experiments on large datasets, and newly included error bars. Most of the reviewers are satisfied with the rebuttals and discussions, and all reviewers have a consistent recommendation. We think this paper can bring new insights to the visual recognition community and help people understand how the key features and their relations work. Please also include the newly added experiments and clarifications in the new revision.
| val | [
"XGowPicg2qA",
"dOOs3Y1PdHA",
"GLX8eBgMP7",
"fawBt7P2c1U",
"k-eFNMoL5-7Z",
"58AkdzIhFt",
"9B8iXtGlqyb",
"XA6qg4Vuqzt",
"Fg0b_2gsZOf",
"Eu_v14hcmz",
"NPzyzl5v71w",
"0JHyaaknHpQ",
"PpSx7RDZdX-",
"0y1yic1l98q",
"zXWKCL1bPNp",
"l4iGnf1QNRo",
"Rvj9lGjmM88"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for taking the time to go through the rebuttal, appreciating our intuitive explanations and additional experiments, and increasing their score.\n",
" I thank the reviewers for their response, and appreciate their efforts in providing additional clarifications and revising the paper. The an... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3,
3
] | [
"dOOs3Y1PdHA",
"Fg0b_2gsZOf",
"fawBt7P2c1U",
"XA6qg4Vuqzt",
"nips_2022_xONqm0NUJc",
"0JHyaaknHpQ",
"zXWKCL1bPNp",
"Rvj9lGjmM88",
"zXWKCL1bPNp",
"l4iGnf1QNRo",
"0y1yic1l98q",
"PpSx7RDZdX-",
"nips_2022_xONqm0NUJc",
"nips_2022_xONqm0NUJc",
"nips_2022_xONqm0NUJc",
"nips_2022_xONqm0NUJc",
... |
nips_2022_vMQ1V_z0TxU | Out-of-Distribution Detection with An Adaptive Likelihood Ratio on Informative Hierarchical VAE | Unsupervised out-of-distribution (OOD) detection is essential for the reliability of machine learning. In the literature, existing work has shown that higher-level semantics captured by hierarchical VAEs can be used to detect OOD instances.
However, we empirically show that the inherent issue of hierarchical VAEs, i.e., ``posterior collapse'', would seriously limit their capacity for OOD detection.
Based on a thorough analysis of ``posterior collapse'', we propose a novel informative hierarchical VAE to alleviate this issue by enhancing the connections between the data sample and its multi-layer stochastic latent representations during training.
Furthermore, we propose a novel score function for unsupervised OOD detection, referred to as Adaptive Likelihood Ratio. With this score function, one can selectively aggregate the semantic information on multiple hidden layers of hierarchical VAEs, leading to a strong separability between in-distribution and OOD samples.
Experimental results demonstrate that our method can significantly outperform existing state-of-the-art unsupervised OOD detection approaches. | Accept | This paper studies unsupervised out-of-distribution detection based on hierarchical VAE models. In particular, it (1) investigates the posterior collapse issue, (2) proposes a training procedure by increasing the mutual information between the input and latent representations, and (3) proposes an adaptive likelihood ratio score for detecting OOD inputs. Multiple reviewers found the method interesting and technically sound.
Post rebuttal, all reviewers unanimously supported this paper positively. The contribution and insights presented in this paper will be valuable for the OOD detection community. The AC recommends acceptance.
Please incorporate the reviewers' requested discussions (e.g., computational footprint) in the final version. Several published papers in the reference section are cited in arXiv format, which necessitates proper citations in the camera-ready version.
| train | [
"5zJ1dOyPq06",
"P1K4uv4il5w",
"Z1abMj_3vb7",
"hbakHmSQpMK",
"2RuxaQTF0AY",
"g0Wl7jzp8Hm",
"9qC4mhcINEX",
"SxTRoxbxUvr",
"UABDFtlrYar",
"sWLkfM5boXK",
"NCw9AhX13eT",
"eSX-_A9LfPl",
"6jnyZBhiIXk",
"n76THBZ-Xso",
"G9jZPscqAiT",
"2nos73R0EW8",
"zg3XRhFCb27",
"duXswuDelz",
"NfOju3OS4S... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Dear Reviewer 4FiY:\n\nThanks again for your effort in reviewing our paper and give us a great chance to improve the quality of this paper . \n\nConsidering that the discussion period is coming to an end, we would like to know if you have any other questions about our paper, and we are still glad to have a discus... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
4
] | [
"dtG_FJqqbpr",
"Z1abMj_3vb7",
"hbakHmSQpMK",
"2RuxaQTF0AY",
"g0Wl7jzp8Hm",
"9qC4mhcINEX",
"UABDFtlrYar",
"Flv0ymK2q-q",
"LMbD-6Fe4ft",
"dtG_FJqqbpr",
"n76THBZ-Xso",
"6jnyZBhiIXk",
"LMbD-6Fe4ft",
"a_p5KW5sIR7",
"2nos73R0EW8",
"zg3XRhFCb27",
"Flv0ymK2q-q",
"NfOju3OS4So",
"dtG_FJqqb... |
nips_2022_o762mMj4XK | Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation | Modern approaches for simulation-based inference build upon deep learning surrogates to enable approximate Bayesian inference with computer simulators. In practice, the estimated posteriors' computational faithfulness is, however, rarely guaranteed. For example, Hermans et al., 2021 have shown that current simulation-based inference algorithms can produce posteriors that are overconfident, hence risking false inferences. In this work, we introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative, hence improving their reliability, while sharing the same Bayes optimal solution. We achieve this by enforcing a balancing condition that increases the quantified uncertainty in low simulation budget regimes while still converging to the exact posterior as the budget increases. We provide theoretical arguments showing that BNRE tends to produce posterior surrogates that are more conservative than NRE's. We evaluate BNRE on a wide variety of tasks and show that it produces conservative posterior surrogates on all tested benchmarks and simulation budgets. Finally, we emphasize that BNRE is straightforward to implement over NRE and does not introduce any computational overhead. | Accept | The paper proposes a modification to the neural ratio estimation algorithm in the context of SBI (simulation-based inference) that tends to avoid overconfident posteriors. This is important for applications (for example in scientific discovery) where excluding plausible inferences can be more detrimental than including implausible ones.
The reviewers found the paper to be well written, technically solid, and a useful contribution to the SBI literature. Most concerns were addressed during the discussion period, with the paper strengthening its discussion of limitations as a result. In the end, the reviewers unanimously awarded the paper a score of 6 (weak accept). Therefore, I'm happy to recommend this paper for acceptance. | train | [
"WrgBz6ddS2m",
"U3ky3JJojRV",
"AsEt7fj5s0b",
"-ZEXJTjzPw3",
"hq7dhKU8Rqq",
"3aJJ6UeOpc",
"KVt5VMOmqQ",
"1MvK46KhP0l",
"pzyto-H0izC",
"_IP_aiPyt99",
"c-vPJs_G0in",
"NLbqLe_qwHM",
"JBtRJeZvjloL",
"glaLONgO91j",
"ytKjy_lDwGE",
"G4pnS9EGS8",
"d71VQgqr_yi",
"SydkzNUhyZ7",
"PFshcbP6ec9... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_r... | [
" We have now updated the limitations in Section 6 to reflect this:\n\n> Third, the benefits of BNRE remain to be assessed in high-dimensional parameter spaces. In particular, the posterior density must be evaluated on a discretized grid over the parameter space to compute credibility regions, which currently prohi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"U3ky3JJojRV",
"AsEt7fj5s0b",
"3aJJ6UeOpc",
"hq7dhKU8Rqq",
"KVt5VMOmqQ",
"c-vPJs_G0in",
"1MvK46KhP0l",
"_IP_aiPyt99",
"_IP_aiPyt99",
"2oJEZOpRDYy",
"NLbqLe_qwHM",
"JBtRJeZvjloL",
"PFshcbP6ec9",
"SydkzNUhyZ7",
"d71VQgqr_yi",
"nips_2022_o762mMj4XK",
"nips_2022_o762mMj4XK",
"nips_2022... |
nips_2022_d229wqASHOT | Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition | Deep learning models have shown their vulnerability when dealing with adversarial attacks. Existing attacks almost perform on low-level instances, such as pixels and super-pixels, and rarely exploit semantic clues. For face recognition attacks, existing methods typically generate the l_p-norm perturbations on pixels, however, resulting in low attack transferability and high vulnerability to denoising defense models. In this work, instead of performing perturbations on the low-level pixels, we propose to generate attacks through perturbing on the high-level semantics to improve attack transferability. Specifically, a unified flexible framework, Adversarial Attributes (Adv-Attribute), is designed to generate inconspicuous and transferable attacks on face recognition, which crafts the adversarial noise and adds it into different attributes based on the guidance of the difference in face recognition features from the target. Moreover, the importance-aware attribute selection and the multi-objective optimization strategy are introduced to further ensure the balance of stealthiness and attacking strength. Extensive experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates while maintaining better visual effects against recent attack methods. | Accept | This paper studies adversarial attacks on facial recognition systems. The key contribution is that, instead of directly manipulating pixel space, this paper proposed to perturb the facial attributes for generating inconspicuous and transferable adversarial examples. The initial concerns are mostly about requiring 1) more ablations/comparisons, and 2) clarifications on experiment details and visualization (especially Figure 6).
Most concerns are well addressed in the rebuttal, and 3 (out of 4) reviewers agree to accept this paper. Reviewer RXXP is still (slightly) concerned about the novelty contribution and rates it as a borderline case. Given its effectiveness and comprehensive analysis, the AC agrees that this paper has its own merits and will be of interest to the general NeurIPS community, and therefore recommends accepting it.
In the final version, the authors should include all the clarifications and the additional empirical results provided in the rebuttal.
| train | [
"UTMz7UK2Yot",
"Q8454TVbR6",
"8eqafMy2z0e",
"OJuAq2DDn4D",
"9vNAHpn-WdV",
"O918jL5ubQK",
"LC7b7wvpi3I",
"Vko4bl1F1e1",
"UURI7QzRcRF",
"TU2Cya-A8ie",
"NNw94j-0lJY",
"AvOTKbOB2yW"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for your reply and we will further address your concerns as follows.\n\n**[Q3: Semantic inconsistency.]** In the revised supplementary material, we provide more qualitative results from the FFHQ and CelebA-HQ datasets. Figure E and Figure F compare the original source faces, the edited faces by origi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"8eqafMy2z0e",
"OJuAq2DDn4D",
"9vNAHpn-WdV",
"O918jL5ubQK",
"UURI7QzRcRF",
"TU2Cya-A8ie",
"AvOTKbOB2yW",
"NNw94j-0lJY",
"nips_2022_d229wqASHOT",
"nips_2022_d229wqASHOT",
"nips_2022_d229wqASHOT",
"nips_2022_d229wqASHOT"
] |
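
Because `-1` entries are sentinels rather than scores, any aggregate over `review_ratings` or `review_confidences` must drop them first. A minimal sketch, again assuming dict-shaped rows and an illustrative function name:

```python
def mean_reviewer_rating(row: dict) -> float | None:
    """Average the scored reviews in one row, ignoring the -1 sentinels."""
    scores = [r for r in row["review_ratings"] if r != -1]
    return sum(scores) / len(scores) if scores else None
```

For the Adv-Attribute row above, the scored ratings are (4, 5, 7, 7), giving a mean of 5.75, consistent with its "Accept" decision.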
nips_2022_nX-gReQ0OT | Gold-standard solutions to the Schrödinger equation using deep learning: How much physics do we need? | Finding accurate solutions to the Schrödinger equation is the key unsolved challenge of computational chemistry. Given its importance for the development of new chemical compounds, decades of research have been dedicated to this problem, but due to the large dimensionality even the best available methods do not yet reach the desired accuracy.
Recently the combination of deep learning with Monte Carlo methods has emerged as a promising way to obtain highly accurate energies and moderate scaling of computational cost. In this paper we significantly contribute towards this goal by introducing a novel deep-learning architecture that achieves 40-70% lower energy error at 6x lower computational cost compared to previous approaches. Using our method we establish a new benchmark by calculating the most accurate variational ground state energies ever published for a number of different atoms and molecules.
We systematically break down and measure our improvements, focusing in particular on the effect of increasing physical prior knowledge.
We surprisingly find that increasing the prior knowledge given to the architecture can actually decrease accuracy. | Accept | There is a clear consensus among the reviewers that this is a quality paper and worthy of acceptance (in fact, this may be the first time I've ever seen 4 reviewers give the exact same score), so I recommend accept.
I do however have one additional comment. I find the current title somewhat unwieldy and wonder if it would be possible for the authors to condense it at all. This is not a critical issue, of course, but one that the authors may want to consider (if the program chairs allow it). | train | [
"oOfGUR5Yjj",
"CHOWTb1CXD_",
"yjG4yiywau",
"4Lj66G-Z8E",
"dtch0ur1HUt",
"9loB3M4kwBrR",
"BCKGbpUnQ3W",
"vATYRI-UftQ",
"XGiFILiqHrh",
"TqeKdUiehpK",
"bsNygndN_LAr",
"-pIxqKcoQTp",
"AsVt6zn4K3M",
"9EgffXp72yK",
"jbUaHN_lp07",
"VtjKOqcQOFf",
"c1MbEXRO0Qm"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed and clarifying response.",
" Thank you once more, for reviewing our paper and helping us to improve it!",
" Thank you for reviewing our paper and your constructive feedback!\n\nYes, for the systems such as 4th row atoms (K, Fe), and large molecules (e.g. Glycine), there are no publi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
3
] | [
"bsNygndN_LAr",
"9loB3M4kwBrR",
"BCKGbpUnQ3W",
"dtch0ur1HUt",
"-pIxqKcoQTp",
"XGiFILiqHrh",
"TqeKdUiehpK",
"-pIxqKcoQTp",
"c1MbEXRO0Qm",
"VtjKOqcQOFf",
"jbUaHN_lp07",
"9EgffXp72yK",
"nips_2022_nX-gReQ0OT",
"nips_2022_nX-gReQ0OT",
"nips_2022_nX-gReQ0OT",
"nips_2022_nX-gReQ0OT",
"nips_... |
nips_2022_7hhH95QKKDX | Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks | The score-based query attacks (SQAs) pose practical threats to deep neural networks by crafting adversarial perturbations within dozens of queries, only using the model's output scores. Nonetheless, we note that if the loss trend of the outputs is slightly perturbed, SQAs could be easily misled and thereby become much less effective. Following this idea, we propose a novel defense, namely Adversarial Attack on Attackers (AAA), to confound SQAs towards incorrect attack directions by slightly modifying the output logits. In this way, (1) SQAs are prevented regardless of the model's worst-case robustness; (2) the original model predictions are hardly changed, i.e., no degradation on clean accuracy; (3) the calibration of confidence scores can be improved simultaneously. Extensive experiments are provided to verify the above advantages. For example, by setting $\ell_\infty=8/255$ on CIFAR-10, our proposed AAA helps WideResNet-28 secure 80.59% accuracy under Square attack (2500 queries), while the best prior defense (i.e., adversarial training) only attains 67.44%. Since AAA attacks SQA's general greedy strategy, such advantages of AAA over 8 defenses can be consistently observed on 8 CIFAR-10/ImageNet models under 6 SQAs, using different attack targets, bounds, norms, losses, and strategies. Moreover, AAA calibrates better without hurting the accuracy. Our code is available at https://github.com/Sizhe-Chen/AAA. | Accept | This paper proposes a defense against score-based black-box attacks by post-processing the output probabilities to misguide the attacker. The method enjoys several advantages such as not reducing test-time accuracy or increasing the train-/test-time cost, improving calibration for the model, and superior performance under black-box attack compared to prior work. The authors also included additional experiments during the discussion phase that show effectiveness against adaptive attacks.
One weakness is that the method does not improve robustness of the underlying model and hence is still susceptible to surrogate model and/or hard-label attacks. However, most reviewers consider this weakness as minor and that the paper’s contribution is significant enough for publication. AC therefore recommends acceptance for publication at NeurIPS.
| train | [
"NQqbKli24Z",
"aLkLjiRw_yd",
"KtBKLo7HGQ_",
"hqq5u6eS0R8",
"cacoukOUvw7",
"itSr8-BUvd",
"-NwL_TnNb8QC",
"jiKSENTHhmX",
"1bFRe2KvfAc",
"S5dn0Dj7jZ",
"zn1sb-CWWQS",
"yVnwY3jIY5v",
"0Z1wxB5KGXr",
"H9oqtwVcw_",
"2TwoWG0-FK"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Program Chairs, Area Chairs, and Reviewers,\n\nThanks for the constructive comments and helpful discussions, we have carefully modified the manuscript according to the reviewers’ suggestions.\n\n- Descriptions on AAA-sine for adaptive attacks **(Line 53-56, 62-64, 162-183, 216-219, 312-332)**\n- Discussions ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"nips_2022_7hhH95QKKDX",
"S5dn0Dj7jZ",
"itSr8-BUvd",
"cacoukOUvw7",
"1bFRe2KvfAc",
"-NwL_TnNb8QC",
"H9oqtwVcw_",
"2TwoWG0-FK",
"0Z1wxB5KGXr",
"yVnwY3jIY5v",
"nips_2022_7hhH95QKKDX",
"nips_2022_7hhH95QKKDX",
"nips_2022_7hhH95QKKDX",
"nips_2022_7hhH95QKKDX",
"nips_2022_7hhH95QKKDX"
] |
nips_2022_yW5zeRSFdZ | Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models | Transformer architecture has become the fundamental element of the widespread natural language processing~(NLP) models. With the trends of large NLP models, the increasing memory and computation costs hinder their efficient deployment on resource-limited devices. Therefore, transformer quantization attracts wide research interest. Recent work recognizes that structured outliers are the critical bottleneck for quantization performance. However, their proposed methods increase the computation overhead and still leave the outliers there. To fundamentally address this problem, this paper delves into the inherent inducement and importance of the outliers. We discover that $\boldsymbol \gamma$ in LayerNorm (LN) acts as a sinful amplifier for the outliers, and the importance of outliers varies greatly where some outliers provided by a few tokens cover a large area but can be clipped sharply without negative impacts. Motivated by these findings, we propose an outlier suppression framework including two components: Gamma Migration and Token-Wise Clipping. The Gamma Migration migrates the outlier amplifier to subsequent modules in an equivalent transformation, contributing to a more quantization-friendly model without any extra burden. The Token-Wise Clipping takes advantage of the large variance of token range and designs a token-wise coarse-to-fine pipeline, obtaining a clipping range with minimal final quantization loss in an efficient way. This framework effectively suppresses the outliers and can be used in a plug-and-play mode. Extensive experiments prove that our framework surpasses the existing works and, for the first time, pushes the 6-bit post-training BERT quantization to the full-precision (FP) level. Our code is available at https://github.com/wimh966/outlier_suppression. | Accept | This paper proposes an outlier suppression method to improve transformer quantization. The method is derived based on careful analysis and thorough experiments demonstrate the efficacy of it. All reviewers agreed that this is a good paper. I recommend acceptance. | train | [
"WzfMlzneog9",
"u_Giy2f3mp",
"KPi4xVKsfiS",
"2gVh7U0pv3o",
"EsKMXbbybee",
"fMqljYmE1eE",
"BFDKWp_VhYA",
"I4JtIPGyT2C",
"ovm4S1Cg9uY",
"IAKuEfK8poF",
"0bjfZMrd0Vx",
"G3b7IZuEjOk",
"tGK7Qdoxc87"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the detailed responses and the additional experiments conducted to answer my questions. I have increased the soundness score for the paper.",
" Thanks for the responses to my questions. The explanations and the experiments added answered my questions well, which makes this manuscript more solid. In... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"IAKuEfK8poF",
"fMqljYmE1eE",
"0bjfZMrd0Vx",
"0bjfZMrd0Vx",
"tGK7Qdoxc87",
"tGK7Qdoxc87",
"G3b7IZuEjOk",
"G3b7IZuEjOk",
"0bjfZMrd0Vx",
"0bjfZMrd0Vx",
"nips_2022_yW5zeRSFdZ",
"nips_2022_yW5zeRSFdZ",
"nips_2022_yW5zeRSFdZ"
] |
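A note on the Outlier Suppression record above: the "Gamma Migration" it describes — moving LayerNorm's scale $\gamma$ into the subsequent affine module so the LayerNorm output itself is free of $\gamma$-amplified outliers — can be verified numerically. Below is a minimal NumPy sketch of the single-branch case only (the paper's transformation also covers the residual branch); all names here are illustrative, not the authors' API.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # (tokens, hidden)
gamma, beta = rng.normal(size=8), rng.normal(size=8)
W, b = rng.normal(size=(8, 8)), rng.normal(size=8)

# Original path: LayerNorm(gamma, beta) followed by a linear layer.
y_orig = layer_norm(x, gamma, beta) @ W.T + b

# Migrated path: gamma and beta are folded into the linear layer,
# leaving a plain (gamma=1, beta=0) LayerNorm whose output no longer
# carries the amplified outliers at quantization time.
W_mig = W * gamma                            # == W @ diag(gamma)
b_mig = b + W @ beta
y_mig = layer_norm(x, np.ones(8), np.zeros(8)) @ W_mig.T + b_mig

assert np.allclose(y_orig, y_mig)            # exact equivalence
```

The equivalence is exact because the LayerNorm output is $\gamma \odot \hat{x} + \beta$, so the following linear map absorbs $\gamma$ column-wise and $\beta$ into its bias.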
nips_2022_aAs8KTbZvc9 | Fine-Grained Analysis of Stability and Generalization for Modern Meta Learning Algorithms | The support/query episodic training strategy has been widely applied in modern meta learning algorithms. Supposing the $n$ training episodes and the test episodes are sampled independently from the same environment, previous work has derived a generalization bound of $O(1/\sqrt{n})$ for smooth non-convex functions via algorithmic stability analysis. In this paper, we provide fine-grained analysis of stability and generalization for modern meta learning algorithms by considering more general situations. Firstly, we develop matching lower and upper stability bounds for meta learning algorithms with two types of loss functions: (1) nonsmooth convex functions with $\alpha$-H{\"o}lder continuous subgradients $(\alpha \in [0,1))$; (2) smooth (including convex and non-convex) functions. Our tight stability bounds show that, in the nonsmooth convex case, meta learning algorithms can be inherently less stable than in the smooth convex case. For the smooth non-convex functions, our stability bound is sharper than the existing one, especially in the setting where the number of iterations is larger than the number $n$ of training episodes. Secondly, we derive improved generalization bounds for meta learning algorithms that hold with high probability. Specifically, we first demonstrate that, under the independent episode environment assumption, the generalization bound of $O(1/\sqrt{n})$ via algorithmic stability analysis is near optimal. To attain faster convergence rate, we show how to yield a deformed generalization bound of $O(\ln{n}/n)$ with the curvature condition of loss functions. Finally, we obtain a generalization bound for meta learning with dependent episodes whose dependency relation is characterized by a graph. Experiments on regression problems are conducted to verify our theoretical results. | Accept | The reviewers and AC are in agreement that this paper is a solid work, and its contributions are significant. The theoretical results of this paper advance the theory of meta-learning, and, in particular, the provided generalization guarantees are strong. All reviewers were satisfied with the responses provided by the authors and even one of the reviewers increased their score. Overall, this is a good paper and my recommendation is "Accept".
AC | train | [
"gLCPi-RYfIC",
"6KZolUIjzOI",
"zj2gOGM9G8s",
"ylMyQv6WXC",
"LPUqh5HhMd",
"dy83VBaw--",
"bjW8UcaZCcX",
"T5VgsijGoeGJ",
"_nUS3yq5na",
"wK6Lj2Iw-Ft",
"i9nLvL8RJq5",
"e0sJlLK-dcP",
"Yfe3WioyGf4",
"I5zwatDfJyJ",
"goohdZaO6J",
"OTyjbmapI9C",
"fnGyC1A2e3P",
"3pPcuy1_i6Z"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your support very much!",
" I updated the review and increased the score as promised.",
" We appreciate your support very much!",
" **Q1. The authors did not mention that the fast generalization bound for PL functions is \"deformed\" neither in the abstract nor in the contribution section. I w... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
2
] | [
"6KZolUIjzOI",
"ylMyQv6WXC",
"LPUqh5HhMd",
"dy83VBaw--",
"Yfe3WioyGf4",
"_nUS3yq5na",
"T5VgsijGoeGJ",
"fnGyC1A2e3P",
"3pPcuy1_i6Z",
"fnGyC1A2e3P",
"fnGyC1A2e3P",
"OTyjbmapI9C",
"goohdZaO6J",
"nips_2022_aAs8KTbZvc9",
"nips_2022_aAs8KTbZvc9",
"nips_2022_aAs8KTbZvc9",
"nips_2022_aAs8KTb... |
nips_2022_xaWO6bAY0xM | Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective | Designing neural networks with bounded Lipschitz constant is a promising way to obtain certifiably robust classifiers against adversarial examples. However, the relevant progress for the important $\ell_\infty$ perturbation setting is rather limited, and a principled understanding of how to design expressive $\ell_\infty$ Lipschitz networks is still lacking. In this paper, we bridge the gap by studying certified $\ell_\infty$ robustness from a novel perspective of representing Boolean functions. We derive two fundamental impossibility results that hold for any standard Lipschitz network: one for robust classification on finite datasets, and the other for Lipschitz function approximation. These results identify that networks built upon norm-bounded affine layers and Lipschitz activations intrinsically lose expressive power even in the two-dimensional case, and shed light on how recently proposed Lipschitz networks (e.g., GroupSort and $\ell_\infty$-distance nets) bypass these impossibilities by leveraging order statistic functions. Finally, based on these insights, we develop a unified Lipschitz network that generalizes prior works, and design a practical version that can be efficiently trained (making certified robust training free). Extensive experiments show that our approach is scalable, efficient, and consistently yields better certified robustness across multiple datasets and perturbation radii than prior Lipschitz networks. | Accept | The paper presents novel theoretical results and a novel architecture for designing Lipschitz constrained neural networks (with respect to the infinity norm). The authors have addressed all the concerns from the reviewers properly. All the reviewers agreed that the paper contains significant contributions and should be accepted at NeurIPS 2022. | train | [
"xYGx06fRjL9",
"-tybpQJZRTy",
"5tNunerdaeD",
"5D3-Y1f7oD0",
"VcUuDpaLntkm",
"noObnzZGIym",
"-wfZBo2O3ym",
"prM3p7aVdRF",
"SbNhFcz0IxZ",
"uTK-droDpzf",
"1GbWfjtlNw",
"xRwBlZjTNW9",
"CkaS5pPHcDr",
"hi34jv2_VtB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed discussion of the points that I raised.\n\nThe new theoretical result is exciting and helps to complete the previously presented theoretical work. The additional results, that provide the error bars, are also a great addition.\n\nThe discussion here on other $\\ell_p$ norms is interesti... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"SbNhFcz0IxZ",
"uTK-droDpzf",
"nips_2022_xaWO6bAY0xM",
"hi34jv2_VtB",
"CkaS5pPHcDr",
"CkaS5pPHcDr",
"xRwBlZjTNW9",
"xRwBlZjTNW9",
"xRwBlZjTNW9",
"1GbWfjtlNw",
"nips_2022_xaWO6bAY0xM",
"nips_2022_xaWO6bAY0xM",
"nips_2022_xaWO6bAY0xM",
"nips_2022_xaWO6bAY0xM"
] |
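A note on the Lipschitz-network record above: the "order statistic functions" the abstract credits for bypassing the impossibility results are exemplified by the GroupSort activation, which sorts coordinates within fixed-size groups and is 1-Lipschitz (sorting only permutes coordinates). A minimal NumPy illustration — not the authors' implementation:

```python
import numpy as np

def groupsort(x, group_size=2):
    # Sort coordinates within consecutive groups; an order-statistic
    # activation that is 1-Lipschitz with respect to the l_inf norm.
    n = x.shape[-1]
    assert n % group_size == 0
    g = x.reshape(*x.shape[:-1], n // group_size, group_size)
    return np.sort(g, axis=-1).reshape(x.shape)

x = np.array([3.0, -1.0, 0.5, 2.0])
print(groupsort(x))            # [-1.   3.   0.5  2. ]
# With group_size == n this yields full order statistics such as
# min/max -- loosely, the kind of function that the paper's
# impossibility results show standard componentwise-activation
# Lipschitz networks cannot express.
```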
nips_2022_mMdRZipvld2 | Deconfounded Representation Similarity for Comparison of Neural Networks | Similarity metrics such as representational similarity analysis (RSA) and centered kernel alignment (CKA) have been used to understand neural networks by comparing their layer-wise representations. However, these metrics are confounded by the population structure of data items in the input space, leading to inconsistent conclusions about the \emph{functional} similarity between neural networks, such as spuriously high similarity of completely random neural networks and inconsistent domain relations in transfer learning. We introduce a simple and generally applicable fix to adjust for the confounder with covariate adjustment regression, which improves the ability of CKA and RSA to reveal functional similarity and also retains the intuitive invariance properties of the original similarity measures. We show that deconfounding the similarity metrics increases the resolution of detecting functionally similar neural networks across domains. Moreover, in real-world applications, deconfounding improves the consistency between CKA and domain similarity in transfer learning, and increases the correlation between CKA and model out-of-distribution accuracy similarity. | Accept | The paper makes the observation that neural network similarity indexes can be misleading when compared across domains with different examples. The paper presents a fix via covariate adjustment, which improves quality of similarity indexes across neural networks across domains. The approach is simple, and the reviewers unanimously agree that the paper is worthy of publication at NeurIPS. | train | [
"gkGGAo_r4uA",
"aA06dFgBtVRw",
"zHmJSMD4EYe",
"hZNkvikEBZN",
"vRkFaW95ewl",
"rnw-Fk6AJXI",
"sUVBcSyq78J",
"lZXnVHxcZPj",
"FtpbQqyvlzi",
"kPCJz0u1h9Z"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for clarifications.",
" Thank you for the good comments again!\n\n> Q2. Thanks for the clarification. Actually, I didn't understand that the authors \"averaged the evaluation metrics of layer-wise similarities\" when I first saw Figure 2, which is now very clear. I feel that this point (how to measure... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"sUVBcSyq78J",
"zHmJSMD4EYe",
"vRkFaW95ewl",
"nips_2022_mMdRZipvld2",
"kPCJz0u1h9Z",
"FtpbQqyvlzi",
"lZXnVHxcZPj",
"nips_2022_mMdRZipvld2",
"nips_2022_mMdRZipvld2",
"nips_2022_mMdRZipvld2"
] |
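A sketch for the deconfounded-similarity record above. The paper defines the exact estimator; what follows is only one plausible reading of "covariate adjustment regression" — regress each representation's (vectorized) similarity matrix on the input-space similarity matrix and compute CKA on the residuals. All function names are hypothetical.

```python
import numpy as np

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def cka(Kx, Ky):
    Kx, Ky = center(Kx), center(Ky)
    return (Kx * Ky).sum() / (np.linalg.norm(Kx) * np.linalg.norm(Ky))

def regress_out(K, K0):
    # Residual of K after an entrywise least-squares fit on the
    # confounder kernel K0 (intercept included).
    k, k0 = K.ravel(), K0.ravel()
    A = np.stack([k0, np.ones_like(k0)], axis=1)
    coef, *_ = np.linalg.lstsq(A, k, rcond=None)
    return (k - A @ coef).reshape(K.shape)

def deconfounded_cka(X, Y, X_input):
    K0 = X_input @ X_input.T          # input-space (confounder) kernel
    Kx = regress_out(X @ X.T, K0)
    Ky = regress_out(Y @ Y.T, K0)
    return cka(Kx, Ky)

rng = np.random.default_rng(0)
Xin = rng.normal(size=(50, 10))       # 50 input items
X = Xin @ rng.normal(size=(10, 6))    # two layer representations
Y = Xin @ rng.normal(size=(10, 6))
print(deconfounded_cka(X, Y, Xin))
```

The intent of the adjustment is that two random networks applied to the same inputs no longer look spuriously similar merely because both kernels inherit the population structure of the inputs.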
nips_2022_kCU2pUrmMih | Mirror Descent with Relative Smoothness in Measure Spaces, with application to Sinkhorn and EM | Many problems in machine learning can be formulated as optimizing a convex functional over a vector space of measures. This paper studies the convergence of the mirror descent algorithm in this infinite-dimensional setting. Defining Bregman divergences through directional derivatives, we derive the convergence of the scheme for relatively smooth and convex pairs of functionals. Such assumptions allow to handle non-smooth functionals such as the Kullback--Leibler (KL) divergence. Applying our result to joint distributions and KL, we show that Sinkhorn's primal iterations for entropic optimal transport in the continuous setting correspond to a mirror descent, and we obtain a new proof of its (sub)linear convergence. We also show that Expectation Maximization (EM) can always formally be written as a mirror descent. When optimizing only on the latent distribution while fixing the mixtures parameters -- which corresponds to the Richardson--Lucy deconvolution scheme in signal processing -- we derive sublinear rates of convergence. | Accept | All reviewers recommend the paper. The authors should think about ways to make the paper more accessible to a machine learning audience, but I recommend accepting. When preparing the camera-ready version, please take into account the reviewers comments and please also specifically address these two points raised in the discussion:
"I'm of the opinion that authors should try put more effort in making current submission more accessible to general audience helping the reader to understand why certain notions of differentiability have been chosen over others etc."
"Providing a concrete example where relative smoothness fails but the proposed approach applies would increase the potential audience among non-experts." | train | [
"qvNTOCB3IsD",
"VyMfOyUUHQm",
"8_1zrfFrkEp",
"HsZ0hPB2vvs",
"q_rVwSsRD-M",
"6YVLWYGeQAm",
"WGEwDh0LyR",
"7FWH3qvgih",
"Hmb8I3LSIE8",
"r4dSIzj-A5X"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the authors for answering our questions. It would be nice if the authors could include this discussion about the rates of convergence in the paper or in the supplementary material. We keep our rating unchanged.",
" We thank the reviewer his positive comments and interest.\n\nQuestion 1.: As written in... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
9,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"HsZ0hPB2vvs",
"r4dSIzj-A5X",
"Hmb8I3LSIE8",
"7FWH3qvgih",
"WGEwDh0LyR",
"nips_2022_kCU2pUrmMih",
"nips_2022_kCU2pUrmMih",
"nips_2022_kCU2pUrmMih",
"nips_2022_kCU2pUrmMih",
"nips_2022_kCU2pUrmMih"
] |
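A note on the mirror-descent record above, spelling out the two objects the abstract relies on (standard finite-dimensional forms; the paper's measure-space versions replace gradients with directional derivatives). $F$ is $L$-smooth relative to the Bregman potential $\phi$ if

$$F(\mu) \;\le\; F(\nu) + \langle \nabla F(\nu),\, \mu - \nu\rangle + L\, D_\phi(\mu, \nu),$$

and mirror descent iterates

$$\mu_{t+1} \;=\; \operatorname*{arg\,min}_{\mu}\; \langle \nabla F(\mu_t),\, \mu\rangle + L\, D_\phi(\mu, \mu_t).$$

With $\phi$ the negative entropy, $D_\phi$ is the KL divergence and the update becomes multiplicative, $\mu_{t+1} \propto \mu_t\, e^{-\nabla F(\mu_t)/L}$ — the lens under which the abstract reads Sinkhorn's primal iterations (and, formally, EM) as mirror descent steps.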
nips_2022_CmD5z_2DVuM | Learning Energy Networks with Generalized Fenchel-Young Losses | Energy-based models, a.k.a. energy networks, perform inference by optimizing
an energy function, typically parametrized by a neural network.
This allows one to capture potentially complex relationships between inputs and
outputs.
To learn the parameters of the energy function, the solution to that
optimization problem is typically fed into a loss function.
The key challenge for training energy networks lies in computing loss gradients,
as this typically requires argmin/argmax differentiation.
In this paper, building upon a generalized notion of conjugate function,
which replaces the usual bilinear pairing with a general energy function,
we propose generalized Fenchel-Young losses, a natural loss construction for
learning energy networks. Our losses enjoy many desirable properties and their
gradients can be computed efficiently without argmin/argmax differentiation.
We also prove the calibration of their excess risk in the case of linear-concave
energies. We demonstrate our losses on multilabel classification and
imitation learning tasks. | Accept | This paper introduces a new notion of regularized energy function using generalized Fenchel conjugates. Reviewers were leaning towards accept; the least convinced reviewer discussed at length with the authors the contribution of the paper and the comparison of the proposed method to prior work, and also leaned towards accept after the rebuttal and paper revision. Accept. | train | [
"h-1r2zIdMh",
"KaYiyvqWIs",
"lF71vlTyTpl",
"Fe-5NDgJet",
"Yck7SSI2X9A",
"GCMdkezCfkYH",
"Ogrfa51rSisl",
"3Cq1AVJSScm",
"vF9xjiQrZ7",
"hBBeano50Rr",
"hfd6j3Xlzac4",
"U2kZSpPghLk",
"U2JYUW2mcvw",
"yedKwILEYkh"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for making these amendments. I have now raised my score to 5 to account for this",
" Thank you very much for the constructive comments. We hope that your concerns are now addressed satisfactorily. \n\n> I think the references should be discussed earlier in the intro\n\nThis is now addressed in the rev... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"KaYiyvqWIs",
"lF71vlTyTpl",
"Fe-5NDgJet",
"Yck7SSI2X9A",
"hBBeano50Rr",
"U2JYUW2mcvw",
"vF9xjiQrZ7",
"nips_2022_CmD5z_2DVuM",
"U2kZSpPghLk",
"U2JYUW2mcvw",
"yedKwILEYkh",
"nips_2022_CmD5z_2DVuM",
"nips_2022_CmD5z_2DVuM",
"nips_2022_CmD5z_2DVuM"
] |
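A note on the Fenchel-Young record above, making the abstract's construction concrete (as we read it; the notation is ours, not necessarily the paper's). Replacing the bilinear pairing $\langle \theta, y\rangle$ with an energy $\Phi(\theta, y)$ gives the generalized conjugate

$$\Omega^{*}_{\Phi}(\theta) \;=\; \max_{y}\; \Phi(\theta, y) - \Omega(y),$$

and the generalized Fenchel-Young loss

$$L_{\Phi,\Omega}(\theta;\, y) \;=\; \Omega^{*}_{\Phi}(\theta) + \Omega(y) - \Phi(\theta, y) \;\ge\; 0,$$

which is non-negative by definition of the max and vanishes exactly when $y$ attains it. By Danskin's theorem, $\nabla_\theta L = \nabla_\theta \Phi(\theta, \hat{y}) - \nabla_\theta \Phi(\theta, y)$ with $\hat{y}$ the maximizer, so computing the loss gradient needs only the argmax itself, never its derivative — the "no argmin/argmax differentiation" property the abstract highlights.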
nips_2022_wcBXsXIf-n9 | Reaching Nirvana: Maximizing the Margin in Both Euclidean and Angular Spaces for Deep Neural Network Classification | The classification loss functions used in deep neural network classifiers can be grouped into two categories based on maximizing the margin in either Euclidean or angular spaces. Euclidean distances between sample vectors are used during classification for the methods maximizing the margin in Euclidean spaces whereas the Cosine similarity distance is used during the testing stage for the methods maximizing margin in the angular spaces. This paper introduces a novel classification loss that maximizes the margin in both the Euclidean and angular spaces at the same time. This way, the Euclidean and Cosine distances will produce similar and consistent results and complement each other, which will in turn improve the accuracies. The proposed loss function enforces the samples of classes to cluster around the centers that represent them. The centers approximating classes are chosen from the boundary of a hypersphere, and the pairwise distances between class centers are always equivalent. This restriction corresponds to choosing centers from the vertices of a regular simplex. There is not any hyperparameter that must be set by the user in the proposed loss function, therefore the use of the proposed method is extremely easy for classical classification problems. Moreover, since the class samples are compactly clustered around their corresponding means, the proposed classifier is also very suitable for open set recognition problems where test samples can come from the unknown classes that are not seen in the training phase. Experimental studies show that the proposed method achieves the state-of-the-art accuracies on open set recognition despite its simplicity. | Reject | This paper proposed to use least-squares loss functions in training deep neural networks. The main idea is to encode class means whose mutual distances are equivalent. The method is simple but efficient. However, a similar idea has been widely used in multi-class classification (SVM and Fisher discriminant analysis) and spectral clustering. More specifically, one reviewer commented that this work encodes class labels as high-dimensional vectors similar to one-hot encodings, and then uses a least-squares loss. Although the authors did not accept this comment, it is essentially right. This idea has been used, for example, in the following references:
1) Multicategory Support Vector Machines: Theory and Application to the Classification of Microarray Data and Satellite Radiance Data
Yoonkyung Lee, Yi Lin & Grace Wahba
2) Prevalence of neural collapse during the terminal phase of deep learning training. Vardan Papyan, X. Y. Han, and David L. Donoho
| val | [
"fm6zlb8uua2",
"cE2g4BAvUQ2",
"K6tCupmTs3u",
"sT30yYcve5T",
"OUkyecs1Aw",
"tUpt67GoRh4",
"RJQQj8jhBNX",
"YHIPBIj_v6",
"P18NlXgHQ2s0",
"KFdIL9E9_gu",
"dvZ5zJaXgjZ",
"q5Me_f0m9Mmi",
"rja5nFr-Ol7",
"NohUECiqEous",
"SMG52nWZOrj",
"LsSQQVxKxxE",
"i7iN_0Svvc",
"zgf4E5Q8Uw6",
"Lm2GrdZbs... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" 1) We have written a Motivation subsection to explain our motivation. There are theoretical proofs showing that the data samples lie on the vertices of a regular simplex (equivalently on the boundary of a hypersphere) in high-dimensional spaces. Therefore, it makes perfect sense to map the class-specific data sam... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"cE2g4BAvUQ2",
"NohUECiqEous",
"sT30yYcve5T",
"OUkyecs1Aw",
"YHIPBIj_v6",
"RJQQj8jhBNX",
"SMG52nWZOrj",
"P18NlXgHQ2s0",
"KFdIL9E9_gu",
"i7iN_0Svvc",
"rja5nFr-Ol7",
"rja5nFr-Ol7",
"nips_2022_wcBXsXIf-n9",
"LsSQQVxKxxE",
"Lm2GrdZbsZl",
"6dmycsk2Of2",
"_PIdOxESwc2",
"Lm2GrdZbsZl",
"... |
nips_2022_bBgNsEKUxmJ | Universally Expressive Communication in Multi-Agent Reinforcement Learning | Allowing agents to share information through communication is crucial for solving complex tasks in multi-agent reinforcement learning. In this work, we consider the question of whether a given communication protocol can express an arbitrary policy. By observing that many existing protocols can be viewed as instances of graph neural networks (GNNs), we demonstrate the equivalence of joint action selection to node labelling. With standard GNN approaches provably limited in their expressive capacity, we draw from existing GNN literature and consider augmenting agent observations with: (1) unique agent IDs and (2) random noise. We provide a theoretical analysis as to how these approaches yield universally expressive communication, and also prove them capable of targeting arbitrary sets of actions for identical agents. Empirically, these augmentations are found to improve performance on tasks where expressive communication is required, whilst, in general, the optimal communication protocol is found to be task-dependent. | Accept | Reviewers found the paper's connections between MARL and GNNs interesting and well-written, and the experiments convincing. Given the unanimous support, I recommend acceptance. That said, I encourage the authors to integrate reviewer feedback, including trying to move some of the details and plots requested to the main text. | test | [
"JDPuNP7Rgc3",
"8YkyOlHW_6_",
"VRMD6kucME",
"8mO8WHlr7v8",
"pYBDaOY7Mc4",
"Fq3c9mnSWwr",
"H5XMYr-mRn",
"Pln4yoSCYB",
"YoOhHXtN1Rm",
"oslt6Bl9tXy",
"fIX7FyNJ4v",
"ru_MwuRprS",
"8ZO0JO9dJ_q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. If the purpose of including epochs was to illustrate the convergence rates of different experiment setups, I still believe that training curve figures is better than adding additional rows indicating the best performing epoch. Best performing epoch information can be rather deceptive in case evaluations in pre... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"8YkyOlHW_6_",
"8mO8WHlr7v8",
"ru_MwuRprS",
"Pln4yoSCYB",
"nips_2022_bBgNsEKUxmJ",
"8ZO0JO9dJ_q",
"ru_MwuRprS",
"fIX7FyNJ4v",
"oslt6Bl9tXy",
"nips_2022_bBgNsEKUxmJ",
"nips_2022_bBgNsEKUxmJ",
"nips_2022_bBgNsEKUxmJ",
"nips_2022_bBgNsEKUxmJ"
] |
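A sketch for the multi-agent communication record above: the two augmentations the abstract studies are simple to implement — concatenate either a unique one-hot agent ID or freshly sampled random noise to each agent's observation before the GNN-style communication rounds. A hypothetical minimal version (names are ours):

```python
import numpy as np

def augment_observations(obs, mode, rng=None):
    """obs: (n_agents, d) array of per-agent observations."""
    n = obs.shape[0]
    if mode == "unique_id":
        extra = np.eye(n)                 # one-hot agent IDs
    elif mode == "random_noise":
        # Resampled each episode; lets otherwise identical agents be
        # distinguished (and hence target distinct actions) with high
        # probability, while preserving symmetry in distribution.
        extra = rng.uniform(-1.0, 1.0, size=(n, 1))
    else:
        return obs
    return np.concatenate([obs, extra], axis=1)

rng = np.random.default_rng(0)
obs = np.zeros((3, 4))                    # three identical agents
print(augment_observations(obs, "random_noise", rng))
```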
nips_2022_Q-HOv_zn6G | Efficient and Modular Implicit Differentiation | Automatic differentiation (autodiff) has revolutionized machine learning. It
allows to express complex computations by composing elementary ones in creative
ways and removes the burden of computing their derivatives by hand. More
recently, differentiation of optimization problem solutions has attracted
widespread attention with applications such as optimization layers, and in
bi-level problems such as hyper-parameter optimization and meta-learning.
However, so far, implicit differentiation remained difficult to use for
practitioners, as it often required case-by-case tedious mathematical
derivations and implementations. In this paper, we propose
automatic implicit differentiation, an efficient
and modular approach for implicit differentiation of optimization problems. In
our approach, the user defines directly in Python a function $F$ capturing the
optimality conditions of the problem to be differentiated. Once this is done, we
leverage autodiff of $F$ and the implicit function theorem to automatically
differentiate the optimization problem. Our approach thus combines the benefits
of implicit differentiation and autodiff. It is efficient as it can be added on
top of any state-of-the-art solver and modular as the optimality condition
specification is decoupled from the implicit differentiation mechanism. We show
that seemingly simple principles allow to recover many existing implicit
differentiation methods and create new ones easily. We demonstrate the ease of
formulating and solving bi-level optimization problems using our framework. We
also showcase an application to the sensitivity analysis of molecular dynamics. | Accept | The reviewers have discussed the paper at length and have reached a consensus after the authors have clarified the applicability and limitations of their proposed method. I recommend that the authors continue to polish their manuscript with the points they raised in their summary to the Area Chairs and congratulate them on the acceptance of their submission. | train | [
"DmpwX8N11Qo",
"pcOCXCWHCN",
"QXlvhjTeuoE",
"soN-FgXiK-Y",
"gr2cBxZPV5",
"Slsogjob9mS",
"WVFuaMLbkdV",
"gTQmnHprbDi",
"g5mXEZ20yjH",
"VUl0iP3eZSH",
"F7mteGJtY50",
"JOFhKKzifSJ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" >While we agree that the hypothesis of the smooth implicit function theorem may be challenging to check for general nonsmooth optimization problems, we would like to clarify that they hold at least for lasso regression, under mild hypothesis over the design matrix. To support this claim, we added Appendix E with ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
9,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"pcOCXCWHCN",
"QXlvhjTeuoE",
"gr2cBxZPV5",
"gTQmnHprbDi",
"Slsogjob9mS",
"WVFuaMLbkdV",
"VUl0iP3eZSH",
"JOFhKKzifSJ",
"F7mteGJtY50",
"nips_2022_Q-HOv_zn6G",
"nips_2022_Q-HOv_zn6G",
"nips_2022_Q-HOv_zn6G"
] |
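A sketch for the implicit-differentiation record above. In the paper's framework the user writes only the optimality condition $F$ and autodiff supplies the Jacobians; the hand-rolled NumPy version below is meant purely to illustrate the underlying implicit function theorem on a ridge-regression toy of our own (not the library's API).

```python
import numpy as np

def solve(theta, A, b):
    # x*(theta) = argmin_x 0.5*||A x - b||^2 + theta*||x||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + 2 * theta * np.eye(n), A.T @ b)

def implicit_grad(theta, A, b):
    # Optimality condition F(x, theta) = A^T(A x - b) + 2*theta*x = 0.
    # Implicit function theorem: dx*/dtheta = -(dF/dx)^{-1} dF/dtheta.
    x = solve(theta, A, b)
    n = A.shape[1]
    dF_dx = A.T @ A + 2 * theta * np.eye(n)
    dF_dtheta = 2 * x
    return -np.linalg.solve(dF_dx, dF_dtheta)

rng = np.random.default_rng(0)
A, b = rng.normal(size=(10, 3)), rng.normal(size=10)
g = implicit_grad(0.5, A, b)

eps = 1e-6                               # finite-difference check
fd = (solve(0.5 + eps, A, b) - solve(0.5 - eps, A, b)) / (2 * eps)
assert np.allclose(g, fd, atol=1e-4)
print(g)
```

The point of the modular design discussed in the meta-review is that `dF_dx` and `dF_dtheta` never need to be derived by hand: once $F$ is written in Python, autodiff produces them, and no differentiation through the solver's iterations is required.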
nips_2022_Z4kZxAjg8Y | Autoregressive Search Engines: Generating Substrings as Document Identifiers | Knowledge-intensive language tasks require NLP systems to both provide the correct answer and retrieve supporting evidence for it in a given corpus. Autoregressive language models are emerging as the de-facto standard for generating answers, with newer and more powerful systems emerging at an astonishing pace. In this paper we argue that all this (and future) progress can be directly applied to the retrieval problem with minimal intervention to the models' architecture. Previous work has explored ways to partition the search space into hierarchical structures and retrieve documents by autoregressively generating their unique identifier. In this work we propose an alternative that doesn't force any structure in the search space: using all ngrams in a passage as its possible identifiers. This setup allows us to use an autoregressive model to generate and score distinctive ngrams, that are then mapped to full passages through an efficient data structure. Empirically, we show this not only outperforms prior autoregressive approaches but also leads to an average improvement of at least 10 points over more established retrieval solutions for passage-level retrieval on the KILT benchmark, establishing new state-of-the-art downstream performance on some datasets, while using a considerably lighter memory footprint than competing systems. Code available in the supplementary materials. Pre-trained models will be made available. | Accept | This paper proposes a method (SEAL) for document retrieval where a language model (LM) conditioned on a question generates n-grams as document identifiers. This is done by training BART on question and n-gram pairs, where the n-grams are sampled from the gold passages, and at test time constraining generation to output valid n-grams that correspond to document identifiers. Experiments on Natural Questions (NQ) Open dataset and the KILT tasks obtain strong results.
Overall, all reviewers agree that this is a strong paper that proposes a simple but effective approach. I agree with their assessments and recommend acceptance. However, a weakness that has been pointed out is that the paper does not evaluate on other common QA benchmarks (MSMARCO, TriviaQA, SQuAD, WebQuestions, and Entity Questions) where the performance of baseline models is well established. I strongly encourage the authors to train SEAL on at least some of those datasets and compare with stronger baselines. | test | [
"oT-LIxjPQ5U",
"Wuh8VjJIiv",
"FGF1NrzazG-X",
"OaZ2xPVTPRm",
"hrj27-HBV2J",
"2cc3kcs5xUx",
"iXxGqu3a6x",
"X9X0xDxuSH",
"p6Zj8JJ2YRp",
"qhApTs72KJP"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the score increase and for all the suggestions on how to strengthen the paper! We will revise the paper accordingly.",
" Thanks for providing the response!\nBased on the response to my review and the author's responses to other reviews, I am happy to increase my score to 6.\n\nSome followup commen... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
"Wuh8VjJIiv",
"OaZ2xPVTPRm",
"qhApTs72KJP",
"X9X0xDxuSH",
"iXxGqu3a6x",
"p6Zj8JJ2YRp",
"nips_2022_Z4kZxAjg8Y",
"nips_2022_Z4kZxAjg8Y",
"nips_2022_Z4kZxAjg8Y",
"nips_2022_Z4kZxAjg8Y"
] |
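A sketch for the SEAL record above, showing only the final step the abstract describes — mapping LM-scored ngrams back to full passages and aggregating. The real system performs constrained decoding over an FM-index and uses a more careful scoring rule; everything below is a deliberately naive, hypothetical stand-in.

```python
def retrieve(scored_ngrams, passages, k=2):
    """scored_ngrams: [(ngram, score)] produced by the LM for a query,
    with higher scores better (e.g. probabilities)."""
    ranked = []
    for pid, text in enumerate(passages):
        s = sum(sc for ng, sc in scored_ngrams if ng in text)
        ranked.append((s, pid))
    return sorted(ranked, reverse=True)[:k]

passages = ["the eiffel tower is in paris",
            "paris is the capital of france",
            "the colosseum is in rome"]
ngrams = [("eiffel tower", 0.8), ("in paris", 0.6), ("in rome", 0.1)]
print(retrieve(ngrams, passages))   # passage 0 wins: 1.4
```

The key property the abstract emphasizes is that no structure is imposed on the identifier space: any substring of any passage is a valid identifier, so generation quality translates directly into retrieval quality.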
nips_2022_7-bMGPCQCm7 | Heatmap Distribution Matching for Human Pose Estimation | For tackling the task of 2D human pose estimation, the great majority of the recent methods regard this task as a heatmap estimation problem, and optimize the heatmap prediction using the Gaussian-smoothed heatmap as the optimization objective and using the pixel-wise loss (e.g. MSE) as the loss function. In this paper, we show that optimizing the heatmap prediction in such a way, the model performance of body joint localization, which is the intrinsic objective of this task, may not be consistently improved during the optimization process of the heatmap prediction. To address this problem, from a novel perspective, we propose to formulate the optimization of the heatmap prediction as a distribution matching problem between the predicted heatmap and the dot annotation of the body joint directly. By doing so, our proposed method does not need to construct the Gaussian-smoothed heatmap and can achieve a more consistent model performance improvement during the optimization of the heatmap prediction. We show the effectiveness of our proposed method through extensive experiments on the COCO dataset and the MPII dataset. | Accept | This paper proposes to use earth mover distance to measure the loss function between a predicted heatmap and ground truth heatmap. It initially received mixed reviews. After rebuttal and discussion, all reviewers converged to acceptance of the paper. Reviewers believe this paper is novel and achieved significant practical performance across several models. AC follows the consensus and recommends acceptance of the paper. | test | [
"3SZk8PGMYlH",
"FcHuQxkWmez",
"aYyHJXk_9eQ",
"1sxt_fL1rTe",
"F1q4BaiFYPu",
"wayHXr2hHfl",
"XbXZYU3OSjd",
"IREZ5n0MYv6",
"GoNJU3j8VKZ",
"STyjzKKg6yq",
"K4x5AO3vbe",
"_ZbnwJ1Hgno",
"USbh0kO0TaX",
"swkU4QS2u1F",
"VDl_iC17e32",
"kKYD44TMxJJ",
"bCcXNkALcem"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Your replies generally answered my concerns and thus I change my rating. The suggestion of clarifying the core idea and supplementing the missing ablation study in the revised version, as mentioned in *Weakness*, still holds.",
" We thank the reviewer for the additional thoughtful discussions. In the following,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
5
] | [
"FcHuQxkWmez",
"aYyHJXk_9eQ",
"XbXZYU3OSjd",
"nips_2022_7-bMGPCQCm7",
"wayHXr2hHfl",
"swkU4QS2u1F",
"IREZ5n0MYv6",
"GoNJU3j8VKZ",
"VDl_iC17e32",
"K4x5AO3vbe",
"kKYD44TMxJJ",
"USbh0kO0TaX",
"bCcXNkALcem",
"nips_2022_7-bMGPCQCm7",
"nips_2022_7-bMGPCQCm7",
"nips_2022_7-bMGPCQCm7",
"nips... |
nips_2022_q-FRENiEP_d | SageMix: Saliency-Guided Mixup for Point Clouds | Data augmentation is key to improving the generalization ability of deep learning models. Mixup is a simple and widely-used data augmentation technique that has proven effective in alleviating the problems of overfitting and data scarcity. Also, recent studies of saliency-aware Mixup in the image domain show that preserving discriminative parts is beneficial to improving the generalization performance. However, these Mixup-based data augmentations are underexplored in 3D vision, especially in point clouds. In this paper, we propose SageMix, a saliency-guided Mixup for point clouds to preserve salient local structures. Specifically, we extract salient regions from two point clouds and smoothly combine them into one continuous shape. With a simple sequential sampling by re-weighted saliency scores, SageMix preserves the local structure of salient regions. Extensive experiments demonstrate that the proposed method consistently outperforms existing Mixup methods in various benchmark point cloud datasets. With PointNet++, our method achieves an accuracy gain of 2.6% and 4.0% over standard training in ModelNet40 and ScanObjectNN, respectively. In addition to generalization performance, SageMix improves robustness and uncertainty calibration. Moreover, when adopting our method to various tasks including part segmentation and standard image classification, our method achieves competitive performance. Code is available at https://github.com/mlvlab/SageMix. | Accept |
This paper studies point cloud mixup with saliency guidance. The proposed SageMix focuses on mixup over local regions to preserve salient structures, which are more informative for downstream tasks. The whole paper is well organized, with clear logic to follow. The proposed method is simple but effective. Moreover, there are solid experiments on various tasks, including object classification, part segmentation, and calibration, to comprehensively evaluate the proposed method. One of the major concerns is the limited improvement over standard mixup on PointNet++ (Reviewer VLSt). The discussion of 2D versus 3D mixup could also be enriched with respect to technical challenges and novelty (Reviewer YgrL). This paper includes five different tasks and four benchmarks in its experimental studies, which strongly addresses the third major concern, limited evaluation, raised by Reviewer YgrL, who, however, did not provide any feedback after the authors' rebuttal. Considering the overall contributions in method and the solid evaluation, this submission is slightly above the acceptance bar. | train | [
"jIJ3IURrn1U",
"0bdOVAo127g",
"c4LiKLigG9m",
"Exdbx6DAxTE",
"5Ep2H6EFOtc",
"pFVlkVYQB3s",
"EpNXkB2hRsf",
"t8jOQt-9ges",
"7-5xyNQO2Ie",
"t0G2R-HiXqL"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer yU1N, we appreciate the reviewer for constructive feedback and comments.\n\nThe end of the Author-Reviewer Discussion is close. Through rebuttal, we have addressed all your concerns, and we believe that our responses have answered your suggestions and questions. So, would it be possible to check our... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"Exdbx6DAxTE",
"pFVlkVYQB3s",
"EpNXkB2hRsf",
"t0G2R-HiXqL",
"7-5xyNQO2Ie",
"7-5xyNQO2Ie",
"t8jOQt-9ges",
"nips_2022_q-FRENiEP_d",
"nips_2022_q-FRENiEP_d",
"nips_2022_q-FRENiEP_d"
] |
nips_2022_yQDC5ZcqX6l | Efficient and Effective Optimal Transport-Based Biclustering | Bipartite graphs can be used to model a wide variety of dyadic information such as user-rating, document-term, and gene-disorder pairs. Biclustering is an extension of clustering to the underlying bipartite graph induced from this kind of data. In this paper, we leverage optimal transport (OT) which has gained momentum in the machine learning community to propose a novel and scalable biclustering model that generalizes several classical biclustering approaches. We perform extensive experimentation to show the validity of our approach compared to other OT biclustering algorithms along both dimensions of the dyadic datasets. | Accept | The reviewers discussed strengths and weaknesses of the paper. One potential issue (to which the author's answer was rather unhelpful) was resolved by a reviewer running the experiments with higher precision output. Reviewers were mostly convinced by the strong empirical improvements.
| train | [
"4_PfQVKqzQ",
"yT6X_0u9nv",
"aQakfo1kWnj",
"ye6tQI0cFC7",
"hA796uTdIIf",
"FXEMdwmuz82",
"Bk5mX_8U7VO",
"n4CVQsoDHSg",
"Wa8it9NLGTe"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their comments. Please revise the mentioned part in the manuscript and probably add some more details about the computational complexity (answer 9) in the manuscript. The sd =0 still look suspicious and need more clarifications.",
" We thank you for your response and the interest you sho... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"hA796uTdIIf",
"aQakfo1kWnj",
"FXEMdwmuz82",
"Wa8it9NLGTe",
"n4CVQsoDHSg",
"Bk5mX_8U7VO",
"nips_2022_yQDC5ZcqX6l",
"nips_2022_yQDC5ZcqX6l",
"nips_2022_yQDC5ZcqX6l"
] |
nips_2022_B_LdLljS842 | Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions | One of the most important AI research questions is to trade off computation versus performance since ``perfect rationality" exists in theory but is impossible to achieve in practice. Recently, Monte-Carlo tree search (MCTS) has attracted considerable attention due to the significant performance improvement in various challenging domains. However, the expensive time cost during search severely restricts its scope for applications. This paper proposes the Virtual MCTS (V-MCTS), a variant of MCTS that spends more search time on harder states and less search time on simpler states adaptively. We give theoretical bounds of the proposed method and evaluate the performance and computations on $9 \times 9$ Go board games and Atari games. Experiments show that our method can achieve comparable performances to the original search algorithm while requiring less than $50\%$ search time on average. We believe that this approach is a viable alternative for tasks under limited time and resources. The code is available at \url{https://github.com/YeWR/V-MCTS.git}. | Accept | I found this to be an interesting paper. As the reviewers indicated, it could be improved in terms of clarity, and I strongly encourage the authors to consider those comments carefully, as ultimately this could only make their paper more impactful.
In particular, the authors could consider how to be clearer about their claims, and how to provide stronger evidence for these. For instance, a claim like "It can maintain comparable performances while reducing half of the time to search adaptively" is very general, and it is unclear that it is really true: for instance, is this true under _all_ conditions?
That said, I believe the paper is clear enough, and the method is simple enough, that it might be of interest to the community, and I think it would be good to accept it for presentation at the conference. This agrees with most reviewers, three of whom voted to accept the paper. I do agree with the one reviewer voting to reject that I'm somewhat unsure how this compares to other reasonable approaches, but I think this can be further discussed in follow-up papers as well. | train | [
"V-imCg8efHS",
"XxQkxowhVk_",
"fhhiFcmiNSV",
"cTME-veTe4",
"I1i26J6KzK8",
"YdW1cVlFfj",
"WpMGchitMkh",
"a58NooVzSEX",
"QxGg0TO-cyG"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer oRW8,\n\nWe kindly remind you that the final stage of discussion is ending soon, and so please kindly let us know if our response has addressed your concerns.\n\nHere is a summary of the revisions:\n\n- We further clarified the **main distinctions** between our work and the Time Management algorithm... | [
-1,
-1,
-1,
-1,
-1,
3,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"YdW1cVlFfj",
"QxGg0TO-cyG",
"a58NooVzSEX",
"WpMGchitMkh",
"YdW1cVlFfj",
"nips_2022_B_LdLljS842",
"nips_2022_B_LdLljS842",
"nips_2022_B_LdLljS842",
"nips_2022_B_LdLljS842"
] |
nips_2022_4MT-e8mn3X | Local Linear Convergence of Gradient Methods for Subspace Optimization via Strict Complementarity | We consider optimization problems in which the goal is find a $k$-dimensional subspace of $\mathbb{R}^n$, $k<<n$, which minimizes a convex and smooth loss. Such problems generalize the fundamental task of principal component analysis (PCA) to include robust and sparse counterparts, and logistic PCA for binary data, among others. This problem could be approached either via nonconvex gradient methods with highly-efficient iterations, but for which arguing about fast convergence to a global minimizer is difficult or, via a convex relaxation for which arguing about convergence to a global minimizer is straightforward, but the corresponding methods are often inefficient. In this work we bridge these two approaches under a strict complementarity assumption, which in particular implies that the optimal solution to the convex relaxation is unique and is also the optimal solution to the original nonconvex problem. Our main result is a proof that a natural nonconvex gradient method which is \textit{SVD-free} and requires only a single QR-factorization of an $n\times k$ matrix per iteration, converges locally with a linear rate. We also establish linear convergence results for the nonconvex projected gradient method, and the Frank-Wolfe method when applied to the convex relaxation. | Accept | The submitted work presents a local linear convergence guarantee for a projected gradient descent (PGD) algorithm on an explicit parameterization of the Stiefel manifold. Such a guarantee is easy to make if the convex objective f is assumed to be strongly convex. Instead, this work considers allowing f to be non-strongly convex. Under a strict complementarity assumption, which this paper shows is equivalent to an eigen-gap condition, the authors prove that the problem enjoys a standard quadratic growth condition that allows PGD to converge at a linear rate.
Reviewers Lsrb, tL19, Fujo concur that the theoretical contribution is worthy of publication. The past few years have seen a large number of local linear convergence guarantees by directly optimizing the factor matrix $U$ in the low-rank factorization $X=UU^T$, but all of these work have assumed some notion of strong convexity or restricted strong convexity. Indeed, I remark here that local linear convergence is actually lost in many of these cases (e.g. matrix sensing) if the objective f is not (restrictedly) strongly convex. In comparison, the present work allows f to be an arbitrary smooth convex function, while showing that local linear convergence is surprisingly still possible under a strict complementarity condition.
However, the impact of the work is obfuscated by repeated assertions to the practical aspects of the proposed algorithm, which in my opinion are difficult to defend. The authors repeatedly assert that their nonconvex algorithm requires only a single QR decomposition, and therefore "much faster and simpler to implement". This may be the case, but the actual reduction in the number of QR decompositions is only a logarithmic factor $O(\log(1/\epsilon))$ under the eigen-gap assumption. On the other hand, global convergence is lost with the nonconvex formulation, and random initialization leads to sublinear convergence in practice. Reviewer mnGb remarks that the numerical experiments are very brief, and do not make a strong case for the practical aspects of the algorithm.
Nevertheless, the technical novelty of the analysis pushes this paper towards acceptance. In the camera-ready version, the authors are advised to:
* Revise their summary of contributions to better compare with existing techniques in the literature, as outlined by Reviewers Lsrb, tL19, Fujo;
* Expand on their experimental section to answer the questions posed by Reviewer mnGb on global convergence, the existence of bad local minima. Answers to these questions can and should be supported or disproved by numerical experiments.
| train | [
"j4SBb1N04F",
"ks3ywFaQJgc",
"-lhAtq0gXrC",
"bkzgyKl3IFm",
"sdj2Vgd8dSJ",
"YNgGeltn-1b",
"MG_eAR5fLZq",
"7SQ-cJ1a0w3",
"1mAKTyh9cf-",
"GPnUg5qciqk",
"ieEqzPAVUH-",
"sLucwT_3DJ",
"dm8i53cfXko",
"ueK5JRbsiQV",
"GAE9tE7Dixe"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer Lsrb,\n\nHave we answered your main concerns? If so, would you consider raising your score? Otherwise, we will be very happy to try and answer additional concerns.",
" Dear Reviewer FuJo,\n\nHave we answered your main concerns? If so, would you consider raising your score? Otherwise, we will be ve... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"GPnUg5qciqk",
"7SQ-cJ1a0w3",
"ieEqzPAVUH-",
"sdj2Vgd8dSJ",
"YNgGeltn-1b",
"1mAKTyh9cf-",
"nips_2022_4MT-e8mn3X",
"GAE9tE7Dixe",
"ueK5JRbsiQV",
"dm8i53cfXko",
"sLucwT_3DJ",
"nips_2022_4MT-e8mn3X",
"nips_2022_4MT-e8mn3X",
"nips_2022_4MT-e8mn3X",
"nips_2022_4MT-e8mn3X"
] |
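A sketch for the subspace-optimization record above: the "SVD-free, single QR per iteration" step the meta-review debates, in the most naive form we can reconstruct — a gradient step on $U \mapsto f(UU^\top)$ followed by a QR retraction onto matrices with orthonormal columns. Purely illustrative; the paper's actual method and step sizes may differ.

```python
import numpy as np

def qr_gradient_step(U, grad_f, eta):
    """U: (n, k) with orthonormal columns; grad_f: X -> symmetric (n, n)."""
    G = 2.0 * grad_f(U @ U.T) @ U        # chain rule for f(U U^T)
    Q, _ = np.linalg.qr(U - eta * G)     # the single n x k QR factorization
    return Q

# Toy PCA instance: f(X) = -<C, X>, minimized by the top-k eigenspace of C.
rng = np.random.default_rng(0)
C = rng.normal(size=(8, 8))
C = C @ C.T
U = np.linalg.qr(rng.normal(size=(8, 2)))[0]
for _ in range(200):
    U = qr_gradient_step(U, lambda X: -C, 0.01)

print(np.sort(np.linalg.eigvalsh(C))[-2:])       # top-2 eigenvalues of C
print(np.sort(np.linalg.eigvalsh(U.T @ C @ U)))  # matched by the iterate
```

On this linear objective the step reduces to an orthogonal (power-type) iteration, which is why the per-iteration cost is dominated by one $n \times k$ QR plus the gradient application — the efficiency claim being weighed against the extra $O(\log(1/\epsilon))$ factor in the meta-review.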
nips_2022_mSiPuHIP7t8 | GraphDE: A Generative Framework for Debiased Learning and Out-of-Distribution Detection on Graphs | Despite the remarkable success of graph neural networks (GNNs) for graph representation learning, they are generally built on the (unreliable) i.i.d. assumption across training and testing data. However, real-world graph data are universally comprised of outliers in training set and out-of-distribution (OOD) testing samples from unseen domains, which solicits effective models for i) debiased learning and ii) OOD detection, towards general trustworthy purpose. In this paper, we first mathematically formulate the two challenging problems for graph data and take an initiative on tackling them under a unified probabilistic model. Specifically, we model the graph generative process to characterize the distribution shifts of graph data together with an additionally introduced latent environment variable as an indicator. We then define a variational distribution, i.e., a recognition model, to infer the environment during training of GNN. By instantiating the generative models as two-component mixtures, we derive a tractable learning objective and theoretically justify that the model can i) automatically identify and down-weight outliers in the training procedure, and ii) induce an effective OOD detector simultaneously. Experiments on diverse datasets with different types of OOD data prove that our model consistently outperforms strong baselines for both debiasing and OOD detection tasks. The source code has been made publicly available at https://github.com/Emiyalzn/GraphDE. | Accept | The authors propose a mixture modeling approach to train GNNs so that out-of-distribution data can be properly down-weighted during training and detected during testing. The reviews were mixed, with some reviewers criticizing the technical novelty and experimental comparison. Indeed, the authors could have explained their contribution more transparently, and emphasized a bit more on the new challenges in the GNN setting, which the response has largely addressed. Perhaps it is also worthwhile to discuss classic works on mixture of experts, as well as variational Bayesian approaches (e.g. https://ieeexplore.ieee.org/document/5563102). As to the experimental comparison, I think the authors made some good explanations in the response and it is perhaps too ambitious for anyone to compare to every possible alternative.
In the end, we think the application of the mixture modeling approach to GNNs is sufficiently interesting, and the initial experimental results appear to be encouraging. We urge the authors to further revise their work by incorporating all changes during the response and better positioning the contributions in historical context. | test | [
"XPrF61b5cA-",
"9tdYv_8jUnX",
"C6vE-PFuP4f",
"-yfl6nx_Ql",
"r4xrwpqF5e",
"GHTNPM0vzl",
"cf-w9sUvkcY",
"2L7lYLEup4h",
"_XfRnkd6uGh",
"ZVfvPR3CKfB",
"on24RX4vAb8",
"QnTZHKgx3p6",
"HU809loT765",
"KW7z_mVFZQY",
"uc8-nsBTKCI",
"RQFb2L7zjaT",
"7ba4NhJdNki",
"nYg0Y4pGPnS",
"85-LwVG-3T-"... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",... | [
" While the other reviewers have acknowledged our rebuttal and raised their rating accordingly, we are wondering whether our responses have addressed your concerns properly. Your feedback will definitely help reach a more reasonable decision on our submission. Thank you!",
" While the other reviewers have acknowl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
3,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4,
3
] | [
"Orf7LqzZmH1",
"CtB6b_wgtPm",
"r4xrwpqF5e",
"GHTNPM0vzl",
"ZSSxDGr3FGT",
"2L7lYLEup4h",
"nips_2022_mSiPuHIP7t8",
"_XfRnkd6uGh",
"28OVnYATqxs",
"on24RX4vAb8",
"QnTZHKgx3p6",
"Orf7LqzZmH1",
"KW7z_mVFZQY",
"uc8-nsBTKCI",
"CtB6b_wgtPm",
"7ba4NhJdNki",
"ZSSxDGr3FGT",
"Dg7_l9TMqUf",
"n... |
nips_2022_AREqvTvv6gG | Frank-Wolfe-based Algorithms for Approximating Tyler's M-estimator | Tyler's M-estimator is a well known procedure for robust and heavy-tailed covariance estimation. Tyler himself suggested an iterative fixed-point algorithm for computing his estimator however, it requires super-linear (in the size of the data) runtime per iteration, which maybe prohibitive in large scale. In this work we propose, to the best of our knowledge, the first Frank-Wolfe-based algorithms for computing Tyler's estimator. One variant uses standard Frank-Wolfe steps, the second also considers \textit{away-steps} (AFW), and the third is a \textit{geodesic} version of AFW (GAFW). AFW provably requires, up to a log factor, only linear time per iteration, while GAFW runs in linear time (up to a log factor) in a large $n$ (number of data-points) regime. All three variants are shown to provably converge to the optimal solution with sublinear rate, under standard assumptions, despite the fact that the underlying optimization problem is not convex nor smooth. Under an additional fairly mild assumption, that holds with probability 1 when the (normalized) data-points are i.i.d. samples from a continuous distribution supported on the entire unit sphere, AFW and GAFW are proved to converge with linear rates. Importantly, all three variants are parameter-free and use adaptive step-sizes. | Accept | The scores on this paper were quite spread (and the reviews at times a little imprecise), however looking more closely at the discussion as well as reading the paper myself, I believe this paper should be accepted. | train | [
"PZZ3PU3Xve0",
"FmriOBVrOrG",
"lsNbTcBGjyJ",
"YskgiReUctZ",
"Ytlha5sVuA",
"qN89lI-IFG3",
"pH6TSn5CW9j",
"vUbt4KarAQN",
"VjHpnB7CqgP0",
"vl31Srr8Bik",
"N8RGNY3hRq",
"gLafsu2NcLon",
"RvXgN7o7jfF",
"FKqKa8pbqP",
"OhEEOjgn0nA",
"Jhlspx2RWll",
"g-x495XQdV7",
"wU-jttT-F00"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for reconsidering your score! We really appreciate it. \n\nRegarding projected gradient: this will have complexity the same as fixed point iterations, since projecting onto the feasible set and inverting the matrix iterates will require O(p^3) time, and computing the gradient will take O(np^2) time - sa... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
3
] | [
"lsNbTcBGjyJ",
"RvXgN7o7jfF",
"YskgiReUctZ",
"qN89lI-IFG3",
"vl31Srr8Bik",
"FKqKa8pbqP",
"vUbt4KarAQN",
"gLafsu2NcLon",
"nips_2022_AREqvTvv6gG",
"N8RGNY3hRq",
"g-x495XQdV7",
"wU-jttT-F00",
"Jhlspx2RWll",
"OhEEOjgn0nA",
"nips_2022_AREqvTvv6gG",
"nips_2022_AREqvTvv6gG",
"nips_2022_AREq... |
nips_2022_9Qjn_3gWLDc | Object-Category Aware Reinforcement Learning | Object-oriented reinforcement learning (OORL) is a promising way to improve the sample efficiency and generalization ability over standard RL. Recent works that try to solve OORL tasks without additional feature engineering mainly focus on learning the object representations and then solving tasks via reasoning based on these object representations. However, none of these works tries to explicitly model the inherent similarity between different object instances of the same category. Objects of the same category should share similar functionalities; therefore, the category is the most critical property of an object. Following this insight, we propose a novel framework named Object-Category Aware Reinforcement Learning (OCARL), which utilizes the category information of objects to facilitate both perception and reasoning. OCARL consists of three parts: (1) Category-Aware Unsupervised Object Discovery (UOD), which discovers the objects as well as their corresponding categories; (2) Object-Category Aware Perception, which encodes the category information and is also robust to the incompleteness of (1) at the same time; (3) Object-Centric Modular Reasoning, which adopts multiple independent and object-category-specific networks when reasoning based on objects. Our experiments show that OCARL can improve both the sample efficiency and generalization in the OORL domain. | Accept | This paper received three positive reviews and one borderline reject. In the rebuttal, the negative reviewer did not propose a response, but the authors have given detailed responses to the problems. And the other reviewers did not propose further concerns. Thus, taking the comments of the reviewers into account, the AC decides to accept this paper. | test | [
"X-CtcKUxJuC",
"Vi-nEQpfrjW",
"JxfzaVuMrEFK",
"QlzrN4gAUAl",
"OIZPjHHaqEO",
"_q2kpbnY5wt",
"-soaAakCA1",
"bM0B-CGoEfR",
"ETYSwlqH4kn",
"ReODMDfYiU",
"T3J_RgpQSjse",
"hC15WMEZm8P",
"XtBfK4oQ3o2",
"LxMBHufC0-j",
"LHTZMomMW0K",
"V3wlNUiUyq4"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your advice. We have uploaded a new revision that includes this explanation.",
" Thanks the author for the rebuttal, my concerns are resolved",
" > Yes, in the current paper, we are more interested in the generalization to unseen object combinations. Generalization to novel object instances does ma... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
5,
3
] | [
"JxfzaVuMrEFK",
"T3J_RgpQSjse",
"ETYSwlqH4kn",
"nips_2022_9Qjn_3gWLDc",
"V3wlNUiUyq4",
"-soaAakCA1",
"LHTZMomMW0K",
"ETYSwlqH4kn",
"LxMBHufC0-j",
"XtBfK4oQ3o2",
"hC15WMEZm8P",
"nips_2022_9Qjn_3gWLDc",
"nips_2022_9Qjn_3gWLDc",
"nips_2022_9Qjn_3gWLDc",
"nips_2022_9Qjn_3gWLDc",
"nips_2022... |
nips_2022_LdAxczs3m0 | Efficient Risk-Averse Reinforcement Learning | In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns. A risk measure often focuses on the worst returns out of the agent's experience. As a result, standard methods for risk-averse RL often ignore high-return strategies. We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a mechanism we call soft risk to bypass it. We also devise a novel cross entropy module for sampling, which (1) preserves risk aversion despite the soft risk; (2) independently improves sample efficiency. By separating the risk aversion of the sampler and the optimizer, we can sample episodes with poor conditions, yet optimize with respect to successful strategies. We combine these two concepts in CeSoR - Cross-entropy Soft-Risk optimization algorithm - which can be applied on top of any risk-averse policy gradient (PG) method. We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks, including in scenarios where standard risk-averse PG completely fails. | Accept | Overall, the reviewers were satisfied with the author response and overall recommend acceptance. However, there were many discussion points and nuanced details that arose during post-rebuttal author-reviewer discussion. Reviewers would like to see these discussion points, clarifications, and requests for revision addressed in the camera-ready. To this last point, I specifically highlight the writing/illustrative example discussion that the authors had with reviewer QDcc. I fully agree that refactoring a paper is challenging, but ultimately, the suggested modifications will improve the accessibility of the ideas and contributions in the paper. | train | [
"njPD9UUtIJ",
"_M-rH6ty4V",
"IZxh4IVxHhv",
"vwjpwaJC9WY",
"2nZhD-HV9sw",
"15ibH1DMU7",
"LAHA0oHhgT",
"nI6ZiF2DCST",
"AAMnsS1Jfcm",
"KB8iNx48_xu",
"CFwk8w96Ipj",
"zX3nR_qchmP"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for your response, and apologies about the late reply. \n\nIn terms of algorithm choice, I was only suggesting to extend the comparison from Guarded maze to traffic and server control domains, since I suspect (though could be wrong) that DRL should be less brittle on those problems than it would on t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
2
] | [
"IZxh4IVxHhv",
"2nZhD-HV9sw",
"vwjpwaJC9WY",
"nI6ZiF2DCST",
"zX3nR_qchmP",
"CFwk8w96Ipj",
"KB8iNx48_xu",
"AAMnsS1Jfcm",
"nips_2022_LdAxczs3m0",
"nips_2022_LdAxczs3m0",
"nips_2022_LdAxczs3m0",
"nips_2022_LdAxczs3m0"
] |
nips_2022_qtZac7A3-F | Enhance the Visual Representation via Discrete Adversarial Training | Adversarial Training (AT), which is commonly accepted as one of the most effective approaches for defending against adversarial examples, can largely harm standard performance and thus has limited usefulness in industrial-scale production and applications. Surprisingly, this phenomenon is the total opposite in Natural Language Processing (NLP) tasks, where AT can even benefit generalization. We notice that the merit of AT in NLP tasks could derive from the discrete and symbolic input space. To borrow this advantage from NLP-style AT, we propose Discrete Adversarial Training (DAT). DAT leverages VQGAN to reform the image data into discrete text-like inputs, i.e., visual words. Then it minimizes the maximal risk on such discrete images with symbolic adversarial perturbations. We further give an explanation from the perspective of distribution to demonstrate the effectiveness of DAT. As a plug-and-play technique for enhancing the visual representation, DAT achieves significant improvement on multiple tasks including image classification, object detection and self-supervised learning. Especially, the model pre-trained with Masked Auto-Encoding (MAE) and fine-tuned by our DAT without extra data can achieve 31.40 mCE on ImageNet-C and 32.77% top-1 accuracy on Stylized-ImageNet, building the new state-of-the-art. The code will be available at https://github.com/alibaba/easyrobust. | Accept | This paper proposes a discrete adversarial training scheme for improving the robustness of vision models. Reviewers find that the paper is well written, the proposed idea is novel and interesting, and the approach leads to improved empirical performance. This work may also inspire new approaches for improving both robustness and generalization together. Therefore, I recommend accepting the paper, while also encouraging the authors to address the remaining issues pointed out by the reviewers.
| train | [
"peS_HFyyRE9",
"e2EpqV79LYG",
"SrtGTL8rOU6",
"CuEabBdxPA",
"MNaDTdG2UXg",
"gXeYEd7rA55",
"bPvciiSlgz5",
"hLc9NFqHrGG",
"JKF_iztQzR7",
"h6XcIEiKUHC",
"r7DYUmhUdGP",
"6iwIrEWuzy4O",
"Juif_1WK9iNS",
"V996WVz7bpR",
"oxv-Q87Mmu",
"cOA0WnGQMdn",
"dC1nWx-ayqR",
"reuDqe3USKT",
"CUe1oHD9W... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" Dear reviewer wZbH,\n\nWe are appreciate for getting an affirmation from you about our response. Many thanks again for your precious review time and valuable comments to help us improve the paper. \n\nBest, \n\nAuthors of Paper 2664",
" Authors response has convincingly addressed my concerns and I am willing to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
5
] | [
"e2EpqV79LYG",
"CUe1oHD9WwA",
"V996WVz7bpR",
"MNaDTdG2UXg",
"h6XcIEiKUHC",
"O5Wm2NX7rJa",
"hLc9NFqHrGG",
"JKF_iztQzR7",
"h6XcIEiKUHC",
"r7DYUmhUdGP",
"Juif_1WK9iNS",
"V996WVz7bpR",
"oxv-Q87Mmu",
"GLwQt-bz9eH",
"cOA0WnGQMdn",
"dC1nWx-ayqR",
"O5Wm2NX7rJa",
"nips_2022_qtZac7A3-F",
"... |
nips_2022_8AB7AXaLIX5 | Concept Activation Regions: A Generalized Framework For Concept-Based Explanations | Concept-based explanations permit understanding the predictions of a deep neural network (DNN) through the lens of concepts specified by users. Existing methods assume that the examples illustrating a concept are mapped in a fixed direction of the DNN's latent space. When this holds true, the concept can be represented by a concept activation vector (CAV) pointing in that direction. In this work, we propose to relax this assumption by allowing concept examples to be scattered across different clusters in the DNN's latent space. Each concept is then represented by a region of the DNN's latent space that includes these clusters and that we call concept activation region (CAR). To formalize this idea, we introduce an extension of the CAV formalism that is based on the kernel trick and support vector classifiers. This CAR formalism yields global concept-based explanations and local concept-based feature importance. We prove that CAR explanations built with radial kernels are invariant under latent space isometries. In this way, CAR assigns the same explanations to latent spaces that have the same geometry. We further demonstrate empirically that CARs offer (1) more accurate descriptions of how concepts are scattered in the DNN's latent space; (2) global explanations that are closer to human concept annotations; and (3) concept-based feature importance that meaningfully relates concepts with each other. Finally, we use CARs to show that DNNs can autonomously rediscover known scientific concepts, such as the prostate cancer grading system. | Accept | All reviewers have found the paper to be a solid contribution on a highly important topic, addressing major shortcomings of the notable work CAV in the concept-based explainability area. One such shortcoming is that CAV assumes that examples corresponding to a concept are all mapped in a fixed direction in the DNN's latent feature space, which can be restrictive in practice. The proposed technique relaxes a fundamental assumption made in CAV, thereby increasing its effectiveness. As one main contribution, the reviewers have found the relaxation of the linear separability in the latent space sound, and the implemented concept activation regions to capture well the spread of concept-related features in the latent space. There are some concerns about the experimental analysis: not many DNN architectures have been considered, results without human annotations for concepts are lacking, and thorough robustness analyses are missing. The authors have somewhat addressed these, although there is still some room for improvement. Overall, the positive aspects of the paper outweigh these concerns and I suggest acceptance of the paper. | train | [
"XbjrLkV6s2x",
"V3acIj-TF84",
"VR1w_6n_-j",
"EZk7ggxpWW8",
"i3mzRkyWEP0",
"qQILpanzQWX",
"CIWceAjIOsG",
"XlPO4pEmRzF",
"M0kQoVrLlEH",
"3-hn0-auiDo",
"WaGm0j74oR3",
"PcgZJkSKQd",
"jeFKFsFAsJT",
"_2ibg3z6imj",
"1xwZWXvoM4P",
"ZW-GK-V7W2u",
"Y45Kj6l2rv",
"dR7XgsdFH7f"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the extensive rebuttal. \n\nAfter reading the other reviews and the authors answers, I find that the authors addressed most of the concerns. I therefore raise my score accordingly. ",
" Dear reviewer,\n\nas requested, we have performed an analysis of TCAR by using the CAR sensitivity\n\n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"CIWceAjIOsG",
"VR1w_6n_-j",
"PcgZJkSKQd",
"i3mzRkyWEP0",
"dR7XgsdFH7f",
"Y45Kj6l2rv",
"ZW-GK-V7W2u",
"dR7XgsdFH7f",
"dR7XgsdFH7f",
"dR7XgsdFH7f",
"Y45Kj6l2rv",
"Y45Kj6l2rv",
"Y45Kj6l2rv",
"ZW-GK-V7W2u",
"ZW-GK-V7W2u",
"nips_2022_8AB7AXaLIX5",
"nips_2022_8AB7AXaLIX5",
"nips_2022_8A... |
nips_2022_pUPFRSxfACD | ZIN: When and How to Learn Invariance Without Environment Partition? | It is commonplace to encounter heterogeneous data, of which some aspects of the data distribution may vary but the underlying causal mechanisms remain constant. When data are divided into distinct environments according to the heterogeneity, recent invariant learning methods have proposed to learn robust and invariant models using this environment partition. It is hence tempting to utilize the inherent heterogeneity even when environment partition is not provided. Unfortunately, in this work, we show that learning invariant features under this circumstance is fundamentally impossible without further inductive biases or additional information. Then, we propose a framework to jointly learn environment partition and invariant representation, assisted by additional auxiliary information. We derive sufficient and necessary conditions for our framework to provably identify invariant features under a fairly general setting. Experimental results on both synthetic and real world datasets validate our analysis and demonstrate an improved performance of the proposed framework. Our findings also raise the need to make the role of inductive biases more explicit when learning invariant models without environment partition in future works. Codes are available at https://github.com/linyongver/ZIN_official . | Accept | This paper has been well received by the reviewers - all reviewers are positive, including significant score revisions upwards after the rebuttal. Notable strengths are clarifying when one can/cannot identify environments for invariant learning and proposing sufficient and necessary conditions for the same. Further, some reviewers have expressed a positive opinion of the experiments in the paper, which are valuable as well.
To the authors: Please do take the reviewers' questions into account when preparing the camera-ready.
| train | [
"6yR8MQqJosg",
"KvTDupQjSKy",
"l3e478f4PfI",
"xQDBdjdA2I",
"hL_xAI3_O7f",
"Ch8o837tuqV",
"KL7bGF6PuWH",
"DNRB-mo-6Q",
"q6fW52PRhUw",
"-wUvmA28Qij",
"yjja9FTUBR",
"YQc6V0k_ff",
"-y-YuaiNEPJ",
"RzrgE6Wm5Nu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your clarifications! Then I have no further questions. ",
" Thanks for the responses and congratulations on a nice paper.",
" Thanks for clarifying the questions and taking the suggestions into account!",
" I thank the authors for providing their feedback and addressing all my concerns. \n\nI ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"DNRB-mo-6Q",
"hL_xAI3_O7f",
"q6fW52PRhUw",
"KL7bGF6PuWH",
"RzrgE6Wm5Nu",
"-y-YuaiNEPJ",
"-y-YuaiNEPJ",
"YQc6V0k_ff",
"yjja9FTUBR",
"nips_2022_pUPFRSxfACD",
"nips_2022_pUPFRSxfACD",
"nips_2022_pUPFRSxfACD",
"nips_2022_pUPFRSxfACD",
"nips_2022_pUPFRSxfACD"
] |
nips_2022_NXHXoYMLIG | EfficientFormer: Vision Transformers at MobileNet Speed | Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks.
However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally many times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation complexity of ViT through network architecture search or hybrid design with MobileNet blocks, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves $79.2\%$ top-1 accuracy on ImageNet-1K with only $1.6$ ms inference latency on iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2$\times 1.4$ ($1.6$ ms, $74.7\%$ top-1), and our largest model, EfficientFormer-L7, obtains $83.3\%$ accuracy with only $7.0$ ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance. | Accept | This work proposes a purely transformer-based vision model for mobile vision purposes.
This proposition is somewhat surprising, since transformers did not excel at low-latency inference on resource-constrained hardware, especially compared to convolutional networks.
This is achieved by using a clever design that allows for reshape operations without actually needing to copy data, as well as new techniques for latency-optimized network pruning.
While the methods themselves are very technical and engineering-oriented, the overall result, a purely transformer-based low-latency, high-quality vision network, is of general interest and worth sharing with the wider community. Therefore, I propose this paper be accepted for NeurIPS 2022.
| train | [
"HeQ0Ex8TgNS",
"p0d3toOs0D7",
"MLgu9DeoG6e",
"3L6_A7_V_pf",
"ELPORpDi5B",
"NY-mmtZWnLk",
"XXJgNO9NHO",
"FNpadAD9sgK7",
"PYJTD_al9fcF",
"-FFzfBJtr7v",
"DbrtZSU5mRZ",
"UqYItUwc-OI",
"CpBcoV_LZFZ",
"MWwxIl5vHOG",
"wgz-vCdFQCG"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer mbmg,\n\nThanks again for your time and reviewing efforts to help improve our work! We appreciate your positive rating and insightful comments. \n\nAs a kind reminder, we provide suggested results and comparisons in the authors' response, including the demonstration of the advantageous performance o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"wgz-vCdFQCG",
"CpBcoV_LZFZ",
"3L6_A7_V_pf",
"UqYItUwc-OI",
"MWwxIl5vHOG",
"wgz-vCdFQCG",
"wgz-vCdFQCG",
"wgz-vCdFQCG",
"wgz-vCdFQCG",
"CpBcoV_LZFZ",
"CpBcoV_LZFZ",
"MWwxIl5vHOG",
"nips_2022_NXHXoYMLIG",
"nips_2022_NXHXoYMLIG",
"nips_2022_NXHXoYMLIG"
] |
nips_2022_MIhgxhsJMtY | A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP | As an important framework for safe Reinforcement Learning, the Constrained Markov Decision Process (CMDP) has been extensively studied in the recent literature. However, despite the rich results under various on-policy learning settings, essential understanding of the offline CMDP problem is still lacking, in terms of both the algorithm design and the information theoretic sample complexity lower bound. In this paper, we focus on solving the CMDP problems where only offline data are available. By adopting the concept of the single-policy concentrability coefficient $C^*$, we establish an $\Omega\left(\frac{\min\left\{|\mathcal{S}||\mathcal{A}|,|\mathcal{S}|+I\right\} C^*}{(1-\gamma)^3\epsilon^2}\right)$ sample complexity lower bound for the offline CMDP problem, where $I$ stands for the number of constraints. By introducing a simple but novel deviation control mechanism, we propose a near-optimal primal-dual learning algorithm called DPDL. This algorithm provably guarantees zero constraint violation and its sample complexity matches the above lower bound except for an $\tilde{\mathcal{O}}((1-\gamma)^{-1})$ factor. A comprehensive discussion on how to deal with the unknown constant $C^*$ and the potential asynchronous structure of the offline dataset is also included. | Accept | This paper considers offline reinforcement learning in the constrained MDP framework. It proposes an algorithm that provably obtains a near-optimal policy (under a single-policy concentrability assumption) and proves an upper bound (and a corresponding lower bound) on the resulting sample complexity.
The reviewers found the paper well-motivated and technically sound, and unanimously recommend acceptance. Please incorporate the reviewers' feedback in the final version of the paper. In order to strengthen the final paper, it would be helpful to:
- Incorporate toy experiments and empirically validate some of the paper's claims
- Include a discussion about the tightness of the upper/lower bound.
| train | [
"Scg4nLj0n6k",
"Zbk41ikwWm2",
"FvBYhX5_WN0",
"m3WjAckPtfP",
"W6d4iwZfGAk",
"fbYQjkLJrp1",
"bDJ2zuJgfj3",
"djNyoYt-S2H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for her/his time and thoughtful feedback. We address the comments in detail as follows.\n\n$\\mathbf{Weakness1.}$ I would question the relevance of the manuscript as the assumptions needed to conclude the sample complexity are heavy. Despite one of them is necessary (Slater), it is not clear... | [
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"fbYQjkLJrp1",
"djNyoYt-S2H",
"bDJ2zuJgfj3",
"W6d4iwZfGAk",
"nips_2022_MIhgxhsJMtY",
"nips_2022_MIhgxhsJMtY",
"nips_2022_MIhgxhsJMtY",
"nips_2022_MIhgxhsJMtY"
] |
nips_2022_4qR780g2Mg | Distributional Reward Estimation for Effective Multi-agent Deep Reinforcement Learning | Multi-agent reinforcement learning has drawn increasing attention in practice, e.g., robotics and automatic driving, as it can explore optimal policies using samples generated by interacting with the environment. However, high reward uncertainty still remains a problem when we want to train a satisfactory model, because obtaining high-quality reward feedback is usually expensive and even infeasible. To handle this issue, previous methods mainly focus on passive reward correction. At the same time, recent active reward estimation methods have proven to be a recipe for reducing the effect of reward uncertainty. In this paper, we propose a novel Distributional Reward Estimation framework for effective Multi-Agent Reinforcement Learning (DRE-MARL). Our main idea is to design the multi-action-branch reward estimation and policy-weighted reward aggregation for stabilized training. Specifically, we design the multi-action-branch reward estimation to model reward distributions on all action branches. Then we utilize reward aggregation to obtain stable updating signals during training. Our intuition is that consideration of all possible consequences of actions could be useful for learning policies. The superiority of the DRE-MARL is demonstrated using benchmark multi-agent scenarios, compared with the SOTA baselines in terms of both effectiveness and robustness. | Accept | The reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field. They believe that the NeurIPS community could benefit from the ideas and techniques presented in this work. They argued, e.g., that the paper is novel and interesting, technically sound, clearly written, and that the method is clearly motivated and introduced. One reviewer expressed a few technical concerns, to which the authors responded appropriately. The authors have also, post-submission, further compared their model and other baselines from different perspectives. One reviewer pointed out that a limitation of the paper is the lack of discussion and experimental comparison with other work related to Distributional MARL. The authors responded to this, but the reviewer requested further details and a more thorough discussion; the authors then expanded their initial response via two detailed rebuttal messages, which were considered to be satisfactory. Finally, another reviewer (who also expressed positive views on this work) mentioned that the authors could have provided more details on the limitations of their method. Overall, all reviewers were positively impressed with the quality of this work and look forward to an updated version of the paper that addresses the suggestions mentioned in their reviews. | train | [
"7JD0P7J6V0r",
"am62vuVWnT0",
"rS44RXpN_cq",
"4p5iuiMAd3c",
"X6OJ9jg4rZI",
"3P-F1jVXfA",
"C2hK-kKk2h",
"6kxkjUYD9te",
"yim4V_Rvaq9",
"nHT7i7-vkAf",
"T3u684HmJkG",
"HUcbgXFTXv",
"Nc1HN_DDxwB",
"PQMIKacOzAJ",
"F_Ct61PngUK",
"nGRdOL16rc",
"fnoll44TjEG",
"9qwY83DsjpH",
"VIxb9nPV9Uj",... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer JEsw,\n\nWe appreciate the reviewer's positive feedback and worthy suggestions for our paper. Furthermore, the recommendations of ablation studies and the clarification of our framework help us improve the quality of our paper further. As the end of the discussion is approaching, we are wondering if... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4,
2
] | [
"fnoll44TjEG",
"nGRdOL16rc",
"4p5iuiMAd3c",
"yim4V_Rvaq9",
"3P-F1jVXfA",
"C2hK-kKk2h",
"T3u684HmJkG",
"VsQQMAAwNOm",
"nHT7i7-vkAf",
"VIxb9nPV9Uj",
"9qwY83DsjpH",
"Nc1HN_DDxwB",
"fnoll44TjEG",
"F_Ct61PngUK",
"nGRdOL16rc",
"nips_2022_4qR780g2Mg",
"nips_2022_4qR780g2Mg",
"nips_2022_4q... |
nips_2022_9-SZkJLkCcB | KSD Aggregated Goodness-of-fit Test | We investigate properties of goodness-of-fit tests based on the Kernel Stein Discrepancy (KSD). We introduce a strategy to construct a test, called KSDAgg, which aggregates multiple tests with different kernels. KSDAgg avoids splitting the data to perform kernel selection (which leads to a loss in test power), and rather maximises the test power over a collection of kernels. We provide theoretical guarantees on the power of KSDAgg: we show it achieves the smallest uniform separation rate of the collection, up to a logarithmic term. For compactly supported densities with bounded score function for the model, we derive the rate for KSDAgg over restricted Sobolev balls; this rate corresponds to the minimax optimal rate over unrestricted Sobolev balls, up to an iterated logarithmic term. KSDAgg can be computed exactly in practice as it relies either on a parametric bootstrap or on a wild bootstrap to estimate the quantiles and the level corrections. In particular, for the crucial choice of bandwidth of a fixed kernel, it avoids resorting to arbitrary heuristics (such as median or standard deviation) or to data splitting. We find on both synthetic and real-world data that KSDAgg outperforms other state-of-the-art quadratic-time adaptive KSD-based goodness-of-fit testing procedures. | Accept | The paper proposes a novel method of statistical tests with Kernel Stein Discrepancy, aggregating multiple tests with different kernels. The method can avoid data splitting, which is commonly used to choose a kernel aiming at better power but may not be effective with a smaller sample size. The paper gives theoretical analysis, and also experimental results outperforming other relevant methods. The work gives solid theoretical and methodological advances in the field of kernel-based tests. We think the work is worth being presented in NeurIPS. | train | [
"9vq-EgGFg6L",
"uW15Vgw2Cd",
"04heoo8p-MS",
"5Xq5-8ofPaC",
"nSU-N0P4-6",
"o_Voi5JcJnQ",
"Z6FrpioBpzZ",
"7c56NiCrdzg",
"Oskp49RIfXY",
"UEDkRUpzPP",
"X7MURglWkUa",
"fpUlJ3r_RYB",
"LXuucxdJHyO",
"R1y1pUORQ3a"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank reviewer ZqFj for their reply, and for increasing their score! \n\nWe will follow their suggestion and include a discussion of the advantages of the multiple testing strategy used against the classical Bonferroni correction.\n\nYes, KSDAgg selects the bandwidth 0.002 and split extra selects 2437. Split e... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Z6FrpioBpzZ",
"nSU-N0P4-6",
"o_Voi5JcJnQ",
"nips_2022_9-SZkJLkCcB",
"Oskp49RIfXY",
"UEDkRUpzPP",
"7c56NiCrdzg",
"R1y1pUORQ3a",
"LXuucxdJHyO",
"fpUlJ3r_RYB",
"nips_2022_9-SZkJLkCcB",
"nips_2022_9-SZkJLkCcB",
"nips_2022_9-SZkJLkCcB",
"nips_2022_9-SZkJLkCcB"
] |
nips_2022_pkzwYftNcqY | Efficient Aggregated Kernel Tests using Incomplete $U$-statistics | We propose a series of computationally efficient, nonparametric tests for the two-sample, independence and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), Hilbert Schmidt Independence Criterion (HSIC), and Kernel Stein Discrepancy (KSD), respectively. Our test statistics are incomplete $U$-statistics, with a computational cost that interpolates between linear time in the number of samples, and quadratic time, as associated with classical $U$-statistic tests. The three proposed tests aggregate over several kernel bandwidths to detect departures from the null on various scales: we call the resulting tests MMDAggInc, HSICAggInc and KSDAggInc. This procedure provides a solution to the fundamental kernel selection problem as we can aggregate a large number of kernels with several bandwidths without incurring a significant loss of test power. For the test thresholds, we derive a quantile bound for wild bootstrapped incomplete $U$-statistics, which is of independent interest. We derive non-asymptotic uniform separation rates for MMDAggInc and HSICAggInc, and quantify exactly the trade-off between computational efficiency and the attainable rates: this result is novel for tests based on incomplete $U$-statistics, to our knowledge. We further show that in the quadratic-time case, the wild bootstrap incurs no penalty to test power over more widespread permutation-based approaches, since both attain the same minimax optimal rates (which in turn match the rates that use oracle quantiles). We support our claims with numerical experiments on the trade-off between computational efficiency and test power. In all three testing frameworks, our proposed linear-time tests outperform the current linear-time state-of-the-art tests (or at least match their test power). | Accept | The paper discusses fast computation methods for kernel-based statistical tests: MMD, HSIC, and KSD. The paper uses incomplete U statistics in constructing the methods, shows decent theoretical results including the rate analysis, and confirms favorable numerical results. The paper has significant theoretical contributions to the topic, and also demonstrates the practical usefulness of the methods. After the revision, all the reviewers agree to accept this paper to NeurIPS.
| train | [
"MFP7RFfr3g",
"1Y0-j2aBWlR",
"uWrGLVtlpLo",
"J9LyXat879h",
"JQvx2LfdwvN",
"mAGzNXC22_W",
"lSRQFBin4jP",
"3VCv2cSAKS",
"5rrgvO5RLZvK",
"AkGV_fj41m",
"UI8GewXzJB",
"Xu8TZgu9auu",
"jxbEgKOM46u",
"xcw-bmiX8fN"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We warmly thank reviewer aMao for increasing their score! \n\nWe will make sure to clarify the following points in the final version:\n\n(i) The tests we propose have a computational cost which can be specified by the user (the size of the design between $1$ and $N^2$), there is a tradeoff between test power and ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"mAGzNXC22_W",
"JQvx2LfdwvN",
"J9LyXat879h",
"AkGV_fj41m",
"lSRQFBin4jP",
"3VCv2cSAKS",
"5rrgvO5RLZvK",
"xcw-bmiX8fN",
"jxbEgKOM46u",
"Xu8TZgu9auu",
"nips_2022_pkzwYftNcqY",
"nips_2022_pkzwYftNcqY",
"nips_2022_pkzwYftNcqY",
"nips_2022_pkzwYftNcqY"
] |
nips_2022_YPoRoad6gzY | OST: Improving Generalization of DeepFake Detection via One-Shot Test-Time Training | State-of-the-art deepfake detectors perform well in identifying forgeries when they are evaluated on a test set similar to the training set, but struggle to maintain good performance when the test forgeries exhibit different characteristics from the training images, e.g., forgeries created by unseen deepfake methods. Such a weak generalization capability hinders the applicability of deepfake detectors. In this paper, we introduce a new learning paradigm specially designed for the generalizable deepfake detection task. Our key idea is to construct a test-sample-specific auxiliary task to update the model before applying it to the sample. Specifically, we synthesize pseudo-training samples from each test image and create a test-time training objective to update the model. Moreover, we propose to leverage meta-learning to ensure that a fast single-step test-time gradient descent, dubbed one-shot test-time training (OST), can be sufficient for good deepfake detection performance. Extensive results across several benchmark datasets demonstrate that our approach performs favorably against existing methods in terms of generalization to unseen data and robustness to different post-processing steps. | Accept | The reviewers unanimously recommend accepting the paper, and the final decision follows accordingly. | train | [
"BGRUCGoFEHc",
"S1BJWeoX-HH",
"W4lyeGyn008",
"fdaQ6KSuWCy",
"Z_--U2tVhRwL",
"vdB7i1DvrQ",
"IWV_eMbXdqY",
"IhfKwQIfBO8",
"q9zCcqcWrd",
"Jc1wN4tYzek",
"V2GdwTKxlCI",
"g5GL1lzOp7",
"3EuZnIvBhj6",
"GRCL4wlwYsP"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for the valuable suggestions. These works will be included and discussed in our future version.",
" Thank you authors for the great effort on the rebuttal. Authors have addressed my concerns to some extent. \n\n**In the revised version, please consider including a short description to compare agains... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
3
] | [
"S1BJWeoX-HH",
"fdaQ6KSuWCy",
"3EuZnIvBhj6",
"g5GL1lzOp7",
"vdB7i1DvrQ",
"GRCL4wlwYsP",
"3EuZnIvBhj6",
"g5GL1lzOp7",
"V2GdwTKxlCI",
"nips_2022_YPoRoad6gzY",
"nips_2022_YPoRoad6gzY",
"nips_2022_YPoRoad6gzY",
"nips_2022_YPoRoad6gzY",
"nips_2022_YPoRoad6gzY"
] |
nips_2022_adFLKRqRu1h | Fuzzy Learning Machine | Classification is one of the most important problems in machine learning, and its nature is concept cognition. So far, dozens of different classifiers have been designed. Although their working mechanisms vary widely, few of them fully consider concept cognition. In this paper, a new learning machine, the fuzzy learning machine (FLM), is proposed from the perspective of concept cognition. Inspired by cognitive science, its working mechanism is highly interpretable. At the same time, FLM is rooted in set theory and fuzzy set theory, so it has a solid mathematical foundation. Systematic experimental results on a large number of data sets show that FLM can achieve excellent performance, even with a simple implementation. | Accept | The paper proposes an approach for the design of neural networks for classification based on fuzzy theory, and a specific implementation is presented and experimentally assessed. Arguments from cognition are also used to justify the proposed approach, although at the level of inspiration. The lack of reference to fuzzy-system-based neural network models from the relevant literature in the initial version of the paper has been resolved in the revised version, and the authors' rebuttal seems to have clarified most of the issues raised by reviewers. The experimental assessment seems to be robust. Personally, I find the jargon used in the paper a bit unfit for NeurIPS standards; however, I do not think this should be a valid reason for rejecting a paper for which no serious drawback has emerged. In any case, I think it is good for NeurIPS to diversify the range of approaches and methodologies covered by the scientific program. | train | [
"Uz9LQIpywCb",
"7n1Ppzt0JoK",
"IPhgQp9QIfP",
"fPVfcBd8eUh",
"1wTs6e6tqdY",
"d_L_zfF5iFs",
"f1er7W5Bj-B",
"qHCfxKL5JHt",
"VAAaNCS6n65"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your advice. You are right.\n\nFrom a biological point of view, the concepts of “cat” and “dog” can be defined according to their DAN features. At this time, the concepts are crisp.\n\nIn the field of ML, for example, in most image classification task, the goal is to learn the concepts from the images ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"7n1Ppzt0JoK",
"1wTs6e6tqdY",
"fPVfcBd8eUh",
"VAAaNCS6n65",
"qHCfxKL5JHt",
"f1er7W5Bj-B",
"nips_2022_adFLKRqRu1h",
"nips_2022_adFLKRqRu1h",
"nips_2022_adFLKRqRu1h"
] |
nips_2022_tYAS1Rpys5 | Simulation-guided Beam Search for Neural Combinatorial Optimization | Neural approaches for combinatorial optimization (CO) equip a learning mechanism to discover powerful heuristics for solving complex real-world problems. While neural approaches capable of producing high-quality solutions in a single shot are emerging, state-of-the-art approaches are often unable to take full advantage of the solving time available to them. In contrast, hand-crafted heuristics perform highly effective search and exploit the computation time given to them, but contain heuristics that are difficult to adapt to a dataset being solved. With the goal of providing a powerful search procedure to neural CO approaches, we propose simulation-guided beam search (SGBS), which examines candidate solutions within a fixed-width tree search that both a neural net-learned policy and a simulation (rollout) identify as promising. We further hybridize SGBS with efficient active search (EAS), where SGBS enhances the quality of solutions backpropagated in EAS, and EAS improves the quality of the policy used in SGBS. We evaluate our methods on well-known CO benchmarks and show that SGBS significantly improves the quality of the solutions found under reasonable runtime assumptions. | Accept | The paper follows in the footsteps of AlphaGo and presents two methods for neural-network guided search, targeting in particular beam search. The paper was deemed a bit incremental, but the method is simple, is easier to parallelize than MCTS and obtains good results on problems under-explored in machine learning. Please review related literature in AI for games and neural guided search techniques in discrete inference.
| train | [
"cBab4SKUbJ",
"OJ8pFRi2Lk",
"rKC2Xzp1gM",
"yBkwUQ-P2dv",
"Fh3Ucuoi2x",
"2iUNEbOKTx1",
"5soqdw4_PsJ",
"M3FcX_wcUD",
"Aiqc7oLxe7",
"Wenf_JGEvuo",
"o-L2Tgth0b4",
"LKx2UlXg2FI",
"HTrnFtLESRt",
"2RTDTDSyaAf",
"7XqyiWvaZ_J"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Sorry for confusing you with our ambiguous use of the term 'policy likelihood'.\nYour description and understanding of the method is accurate, indeed. \n\nWe value your opinion and we thank you again for your hard work and time for reviewing our work.",
" I don't understand the authors when they mention that SG... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
4
] | [
"OJ8pFRi2Lk",
"5soqdw4_PsJ",
"yBkwUQ-P2dv",
"2iUNEbOKTx1",
"7XqyiWvaZ_J",
"7XqyiWvaZ_J",
"HTrnFtLESRt",
"HTrnFtLESRt",
"HTrnFtLESRt",
"2RTDTDSyaAf",
"LKx2UlXg2FI",
"nips_2022_tYAS1Rpys5",
"nips_2022_tYAS1Rpys5",
"nips_2022_tYAS1Rpys5",
"nips_2022_tYAS1Rpys5"
] |
nips_2022_3r0yLLCo4fF | Quo Vadis: Is Trajectory Forecasting the Key Towards Long-Term Multi-Object Tracking? | Recent developments in monocular multi-object tracking have been very successful in tracking visible objects and bridging short occlusion gaps, mainly relying on data-driven appearance models.
While we have significantly advanced short-term tracking performance, bridging longer occlusion gaps remains elusive: state-of-the-art object trackers only bridge less than 10% of occlusions longer than three seconds.
We suggest that the missing key is reasoning about future trajectories over a longer time horizon. Intuitively, the longer the occlusion gap, the larger the search space for possible associations.
In this paper, we show that even a small yet diverse set of trajectory predictions for moving agents will significantly reduce this search space and thus improve long-term tracking robustness. Our experiments suggest that the crucial components of our approach are reasoning in a bird's-eye view space and generating a small yet diverse set of forecasts while accounting for their localization uncertainty. This way, we can advance state-of-the-art trackers on the MOTChallenge dataset and significantly improve their long-term tracking performance. This paper's source code and experimental data are available at https://github.com/dendorferpatrick/QuoVadis. | Accept | The paper initially had mixed reviews (scores 4, 5, 6, 7). The main concerns of the reviewers were:
1. can better show the improvement on long-term occlusions (cbmW)
2. lack of results on autonomous driving datasets w/ camera parameters. (cbmW)
3. Questions about the evaluation metrics used (yuJE, Tgjz)
4. In Tab 1, most of the HOTA gain comes from linear prediction in 3D space, i.e., Kalman filters. (yuJE)
5. comparison on 3D MOT 2015 (yuJE)
6. missing ablation study on association threshold (yuJE)
7. what is the tracking / efficiency tradeoff for forecasting (XrjC)
8. how to deal with moving cameras (XrjC, Tgjz)
9. complex pipeline requires training separate sub-models (Tgjz)
10. ablation study on the different view projection methods (Tgjz)
The authors wrote a response to address these concerns. The reviewers were largely satisfied with the response. Reviewer yuJE still had a concern about the message of the paper (Point 4; Reviewer's point [A.1]), and responded:
> The authors replied by assessing that working in BED is already trajectory forecasting. I do not agree with that, that is just 3D or metric tracking. And metric tracking + kalman filter, which explain 90% of the contribution of the paper, should not be advertised as novelty, nor as trajectory forecasting. This view that I am suggesting here, clearly help the reader in understanding that trajectory forecasting is really of little help in MTT (~0.5% HOTA), which is the opposite of what the paper is claiming.
> As I see it, the paper has merits, e.g. ways to go from image to BED in static as well as in moving sequences, but that is not the story told by this paper (the most interesting part being in the supplementary material).
Nonetheless, the final ratings were positive (5, 6, 6, 7), and the reviewers appreciated the proposed solution for handling long-term occlusions, which brings a promising direction for future research. The AC agrees and recommends acceptance. The authors should revise the paper according to the reviewers' comments and the discussion.
| train | [
"o43vkBpKHFy",
"4NuEMlFeP7P",
"m5cGaXglJ6C",
"_aZ2fFZaSuu",
"pCQt-GvZwBY",
"MF8D486mQi",
"4H3BqEuPF-A",
"5qFN2s6VsZj",
"hMxvisqGIih",
"5tjqUxnV3e7",
"xlMOXJjrJ50",
"r-jJL5Ca4vd",
"r-nQzm78JhF",
"3Mxa-OCb_Xt",
"vbEJoxftums"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the rebuttal!Most of my concerns are adequately addressed. I keep my postive rating.",
" I have read the responses from the reviewers, and they addressed my concerns. I will increase my rating after the Reviewer-Meta Reviewer Discussion phase. \n\nI recommend the authors highlight these performance a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
"5tjqUxnV3e7",
"4H3BqEuPF-A",
"3Mxa-OCb_Xt",
"r-nQzm78JhF",
"r-jJL5Ca4vd",
"xlMOXJjrJ50",
"r-jJL5Ca4vd",
"r-nQzm78JhF",
"r-nQzm78JhF",
"3Mxa-OCb_Xt",
"vbEJoxftums",
"nips_2022_3r0yLLCo4fF",
"nips_2022_3r0yLLCo4fF",
"nips_2022_3r0yLLCo4fF",
"nips_2022_3r0yLLCo4fF"
] |
nips_2022_cYeYzaP-5AF | Meta-Reinforcement Learning with Self-Modifying Networks | Deep Reinforcement Learning has demonstrated the potential of neural networks tuned with gradient descent for solving complex tasks in well-delimited environments. However, these neural systems are slow learners producing specialized agents with no mechanism to continue learning beyond their training curriculum. On the contrary, biological synaptic plasticity is persistent and manifold, and has been hypothesized to play a key role in executive functions such as working memory and cognitive flexibility, potentially supporting more efficient and generic learning abilities. Inspired by this, we propose to build networks with dynamic weights, able to continually perform self-reflexive modification as a function of their current synaptic state and action-reward feedback, rather than a fixed network configuration. The resulting model, MetODS (for Meta-Optimized Dynamical Synapses), is a broadly applicable meta-reinforcement learning system able to learn efficient and powerful control rules in the agent policy space. A single layer with dynamic synapses can perform one-shot learning, generalize navigation principles to unseen environments and demonstrate a strong ability to learn adaptive motor policies, comparing favorably with previous meta-reinforcement learning approaches. | Accept | This is exciting work that demonstrates the ability of self-modifying networks to solve meta-reinforcement learning problems. The reviewers all agree that this is strong work, and the authors have convincingly addressed most of the concerns the reviewers brought up during the reviewing phase. There are a few lingering questions about the applicability of the baselines, but these are quite minor. The authors have further promised to add analytical comparisons and additional details/motivation on the Hebbian update. Given this, I view this paper quite positively and encourage the authors to integrate the additional experiments and details they mentioned in the feedback stage. | test | [
"WDS5R5cT7h7",
"UQZxk0qbWgf",
"71YuwtONQnj",
"nZqGD4lrn0Q",
"QbhWR-qNREIl",
"Ed9J8gzNQFS",
"1tqcFQrkQjE",
"0JtnoR4Ukz5",
"F7XY0KWVDoB",
"GjVHFlvQgv_",
"ZWgkiaQcwQ",
"-c2alWfalf",
"3KVlDNW2XWv",
"J-yjRplRa8G"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank reviewers for their diligence in evaluating our modifications and willingness to increase their score. We are enthusiastic about our work forming a stronger contribution thanks to their feedback! \n\nRegarding last comments from reviewers, we are currently working on delivering an additional analytical c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"nips_2022_cYeYzaP-5AF",
"ZWgkiaQcwQ",
"GjVHFlvQgv_",
"Ed9J8gzNQFS",
"nips_2022_cYeYzaP-5AF",
"1tqcFQrkQjE",
"0JtnoR4Ukz5",
"F7XY0KWVDoB",
"3KVlDNW2XWv",
"J-yjRplRa8G",
"-c2alWfalf",
"nips_2022_cYeYzaP-5AF",
"nips_2022_cYeYzaP-5AF",
"nips_2022_cYeYzaP-5AF"
] |
nips_2022_5K3uopkizS | Robust Models are less Over-Confident | Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world is still facing fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks and ideally a better model generalization ability by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident in their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & Project website: https://github.com/GeJulia/robustness_confidences_evaluation | Accept | This paper empirically demonstrates that adversarially trained models are better calibrated than naturally trained counterparts. The reviewers found this paper interesting, and the initial concerns are mainly about 1) missing discussions of prior works, and 2) requiring more ablations.
The rebuttal addresses most concerns well (especially regarding the novelty w.r.t. prior works). As a result, three (out of four) reviewers agree to accept this submission. Reviewer itb7 is the only one against accepting this paper; nonetheless, the original review from itb7 is somewhat vague and does not provide useful information that would let the authors prepare a high-quality rebuttal accordingly. Also, as the AC, I cannot see any significant concerns/drawbacks raised in reviewer itb7's comments, and therefore decide to ignore it.
In the final version, the authors should include all the clarifications and the additional empirical results provided in the rebuttal.
| train | [
"VeZeWmwHqOC",
"SMO3-gOlQ7",
"dIR5wPW5SJ0",
"BxRybqJEJI",
"KHF63cv0ej3",
"i5aysvd7LR",
"VTIiNodMGN5",
"WNmQ1Rt1jXT",
"ros6J3vHhHb",
"kKx5aGs48SA",
"m7K7mEp-cQe",
"te85nc0R5M4",
"We6q89kvsWt",
"vfti1vnaRVx",
"PYahQ2KkK_Y",
"UQQ-egjPTIt",
"olr7zP9Lv2FQ",
"hjlbEh-KU6",
"p79-zaN84oN"... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" We would also like to point out our following comments regarding related work:\n\nhttps://openreview.net/forum?id=5K3uopkizS¬eId=nq_2pyo_YgA\n\nhttps://openreview.net/forum?id=5K3uopkizS¬eId=We6q89kvsWt\n\nhttps://openreview.net/forum?id=5K3uopkizS¬eId=p79-zaN84oN\n\nPlease also pay attention that the ot... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"dIR5wPW5SJ0",
"dIR5wPW5SJ0",
"KHF63cv0ej3",
"WNmQ1Rt1jXT",
"N9YTfafZdb",
"ros6J3vHhHb",
"hjlbEh-KU6",
"ros6J3vHhHb",
"kKx5aGs48SA",
"p79-zaN84oN",
"te85nc0R5M4",
"We6q89kvsWt",
"vfti1vnaRVx",
"nq_2pyo_YgA",
"r-tumqwhndq",
"70SIueqjUL",
"nips_2022_5K3uopkizS",
"ousUEyFjbDM",
"70S... |
nips_2022_tmUGnBjchSC | Generalizing Bayesian Optimization with Decision-theoretic Entropies | Bayesian optimization (BO) is a popular method for efficiently inferring optima of an expensive black-box function via a sequence of queries. Existing information-theoretic BO procedures aim to make queries that most reduce the uncertainty about optima, where the uncertainty is captured by Shannon entropy. However, an optimal measure of uncertainty would, ideally, factor in how we intend to use the inferred quantity in some downstream procedure. In this paper, we instead consider a generalization of Shannon entropy from work in statistical decision theory (DeGroot 1962, Rao 1984), which contains a broad class of uncertainty measures parameterized by a problem-specific loss function corresponding to a downstream task. We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures such as knowledge gradient, expected improvement, and entropy search. We then show how alternative choices for the loss yield a flexible family of acquisition functions that can be customized for use in novel optimization settings. Additionally, we develop gradient-based methods to efficiently optimize our proposed family of acquisition functions, and demonstrate strong empirical performance on a diverse set of sequential decision making tasks, including variants of top-$k$ optimization, multi-level set estimation, and sequence search. | Accept | The paper proposed a novel acquisition function for BO, based on a generalization of Shannon entropy that enables one to incorporate problem-specific loss functions corresponding to a downstream task. The authors show that the proposed acquisition criterion generalizes a number of well-known BO acquisition functions, including EI/KG/ES/PES. A detailed training procedure for optimizing the acquisition function was discussed in the paper, and experimental results show that the proposed acquisition function with the optimization procedure performs well over a diverse set of tasks.
All reviewers agree that this paper is well written, and the idea of unifying a collection of “classical” BO acquisition functions is interesting. There were a few concerns about the sufficiency/significance of the experiments, mainly due to the (lack of) baselines considered in the tasks. The authors clarified the concerns by including preliminary runs of several new experiments, and highlighting that the proposed approaches were targeting novel tasks that went beyond the vanilla optimization tasks. There were no other critical concerns in the reviews. The authors are strongly encouraged to address the questions raised in the reviews when preparing a revision of this paper.
| train | [
"-qm1397_Ir",
"N967pn6zYC",
"htPUt2TbtE",
"khI9ukrSZ-f",
"4huYRc6HHSv",
"H4bFFn1fhLa",
"QpIUsZst4ad",
"uWIYyuuDO81"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors have adequately addressed my concern and questions.\nI am glad they have also shown \"Probability of Improvement\" to be a special case of their approach in response to another reviewer.\nMy rating remains unchanged after considering the discussion between authors and reviewers thus far.\n",
" Thank... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"htPUt2TbtE",
"uWIYyuuDO81",
"QpIUsZst4ad",
"4huYRc6HHSv",
"H4bFFn1fhLa",
"nips_2022_tmUGnBjchSC",
"nips_2022_tmUGnBjchSC",
"nips_2022_tmUGnBjchSC"
] |
nips_2022_wZk69kjy9_d | Deep Hierarchical Planning from Pixels | Intelligent agents need to select long sequences of actions to solve complex tasks. While humans easily break down tasks into subgoals and reach them through millions of muscle commands, current artificial intelligence is limited to tasks with horizons of a few hundred decisions, despite large compute budgets. Research on hierarchical reinforcement learning aims to overcome this limitation but has proven to be challenging, current methods rely on manually specified goal spaces or subtasks, and no general solution exists. We introduce Director, a practical method for learning hierarchical behaviors directly from pixels by planning inside the latent space of a learned world model. The high-level policy maximizes task and exploration rewards by selecting latent goals and the low-level policy learns to achieve the goals. Despite operating in latent space, the decisions are interpretable because the world model can decode goals into images for visualization. Director learns successful behaviors across a wide range of environments, including visual control, Atari games, and DMLab levels and outperforms exploration methods on tasks with very sparse rewards, including 3D maze traversal with a quadruped robot from an egocentric camera and proprioception, without access to the global position or top-down view used by prior work. | Accept | This paper studies an interesting problem, and overall the reviewers agreed the exposition and validation are sufficient. We encourage the authors to consider the issues raised by the reviewers and further improve the work in the final version. | train | [
"Zja5XQBsMe",
"fKk69TDTwy3p",
"E8P69PAE1b4",
"-Za9xX4q_Od",
"V2C9VbtOHKl",
"eVLn1BG1Yrj",
"piP4zTVwwB",
"oFZKKNeCET",
"PZc4mYYIE_Z",
"bdyObEErLWZ",
"rxjnLkCVb38"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer q27r,\n\nThe discussion period is coming to an end soon and we haven't received a response from you yet. Could we please ask you to confirm whether our response has resolved your concerns or whether you see any remaining issues that motivate your current rating? If there are remaining issues, we wou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"rxjnLkCVb38",
"bdyObEErLWZ",
"PZc4mYYIE_Z",
"PZc4mYYIE_Z",
"PZc4mYYIE_Z",
"rxjnLkCVb38",
"rxjnLkCVb38",
"bdyObEErLWZ",
"nips_2022_wZk69kjy9_d",
"nips_2022_wZk69kjy9_d",
"nips_2022_wZk69kjy9_d"
] |
nips_2022_HFm7AxNa9Wo | Multi-Scale Adaptive Network for Single Image Denoising | Multi-scale architectures have shown effectiveness in a variety of tasks thanks to appealing cross-scale complementarity. However, existing architectures treat different scale features equally without considering the scale-specific characteristics, \textit{i.e.}, the within-scale characteristics are ignored in the architecture design. In this paper, we reveal this missing piece for multi-scale architecture design and accordingly propose a novel Multi-Scale Adaptive Network (MSANet) for single image denoising. Specifically, MSANet simultaneously embraces the within-scale characteristics and the cross-scale complementarity thanks to three novel neural blocks, \textit{i.e.}, adaptive feature block (AFeB), adaptive multi-scale block (AMB), and adaptive fusion block (AFuB). In brief, AFeB is designed to adaptively preserve image details and filter noises, which is highly expected for the features with mixed details and noises. AMB can enlarge the receptive field and aggregate the multi-scale information, which meets the need for contextually informative features. AFuB is devoted to adaptively sampling and transferring the features from one scale to another scale, which fuses the multi-scale features with varying characteristics from coarse to fine. Extensive experiments on three real and six synthetic noisy image datasets show the superiority of MSANet compared with 12 methods. The code can be accessed at https://github.com/XLearning-SCU/2022-NeurIPS-MSANet | Accept | All reviewers are positive about this paper. Although this paper does not achieve the best performance, it reveals some insights about the scale characteristics of features, which are model-agnostic and have the potential to guide the design of more powerful networks. Also, the proposed method can reduce FLOPs noticeably. | train | [
"LOoD0pQTXa",
"6NmhGfcIDOC",
"tY8czY6bCeH",
"b5UDX1HkJaR",
"A8LBDrs5V6",
"t7Y6OZxmt",
"0EiuKiyC2r6",
"kYnPY4QH2F",
"5kl4ErYU1Kv",
"nuaqdwgEDqZp",
"U_P2om0ZPGg",
"tRFHErSv_Lx",
"4-cP4E9coJW",
"ZTM_3cFG08Y",
"mi7p-eA6DGo",
"JG2Bld8q-90",
"JVAdFYnzNF",
"AKxLioyYjrt",
"AmJAp4s1Use",
... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for your positive comments and suggestions. We would improve our manuscript for a clearer presentation in the next version.",
" Thanks for your positive comments and suggestions. We would accordingly revise the problems and include some discussions about the concerns in the next version for a clearer pre... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4,
4
] | [
"b5UDX1HkJaR",
"t7Y6OZxmt",
"A8LBDrs5V6",
"rPrT6V62Qq4",
"U_P2om0ZPGg",
"kYnPY4QH2F",
"rPrT6V62Qq4",
"_PbaoqR0LGz",
"uoEYQWc2jOY",
"64WafatBSlw",
"AmJAp4s1Use",
"rPrT6V62Qq4",
"_PbaoqR0LGz",
"mi7p-eA6DGo",
"uoEYQWc2jOY",
"64WafatBSlw",
"AKxLioyYjrt",
"AmJAp4s1Use",
"nips_2022_HFm... |
nips_2022_NyAJzgHLAr | Intermediate Prototype Mining Transformer for Few-Shot Semantic Segmentation | Few-shot semantic segmentation aims to segment the target objects in a query image under the condition of a few annotated support images. Most previous works strive to mine more effective category information from the support to match with the corresponding objects in the query. However, they all ignore the category information gap between query and support images. If the objects in them show large intra-class diversity, forcibly migrating the category information from the support to the query is ineffective. To solve this problem, we are the first to introduce an intermediate prototype for mining both deterministic category information from the support and adaptive category knowledge from the query. Specifically, we design an Intermediate Prototype Mining Transformer (IPMT) to learn the prototype in an iterative way. In each IPMT layer, we propagate the object information in both support and query features to the prototype and then use it to activate the query feature map. By conducting this process iteratively, both the intermediate prototype and the query feature can be progressively improved. At last, the final query feature is used to yield precise segmentation prediction. Extensive experiments on both PASCAL-5i and COCO-20i datasets clearly verify the effectiveness of our IPMT and show that it outperforms previous state-of-the-art methods by a large margin. Code is available at https://github.com/LIUYUANWEI98/IPMT | Accept | All reviewers lean toward accepting this paper, and it is a clear acceptance. | train | [
"l9iNyM_akA-",
"xQ27gqnj75P",
"LoeViNXYP06",
"L2C4ZmUIlOT",
"5-XAIYd1OBnD",
"NjTjDg5bXza",
"AhIwdHidZC3",
"oeJxNPIPYq"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for your continued interest and positive responses to our work. Here are further responses to your concerns.\n\n**Q1. Threshold of diverse samples**\n\nIn the previous rebuttal, we set 1.5 as a threshold to define diverse support and 8.2\\% samples are categorized as \"diverse support\". To furt... | [
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"xQ27gqnj75P",
"LoeViNXYP06",
"oeJxNPIPYq",
"NjTjDg5bXza",
"AhIwdHidZC3",
"nips_2022_NyAJzgHLAr",
"nips_2022_NyAJzgHLAr",
"nips_2022_NyAJzgHLAr"
] |
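A minimal numpy sketch of the "intermediate prototype" idea from the IPMT record above: a prototype is pooled from the support mask, used to activate the query, and then re-pooled from the query's own prediction so it carries information from both sides. Masked average pooling, the cosine activation, the 0.5 mixing weight, and the 0.5 threshold are illustrative assumptions; the paper implements this with transformer layers.

```python
import numpy as np

def masked_avg_pool(feat, mask):
    """Prototype = average of foreground features. feat: (H, W, C), mask: (H, W) in {0, 1}."""
    w = mask[..., None]
    return (feat * w).sum((0, 1)) / (w.sum() + 1e-6)

def activate(query_feat, proto):
    """Cosine similarity between each query location and the prototype."""
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-6)
    p = proto / (np.linalg.norm(proto) + 1e-6)
    return q @ p

rng = np.random.default_rng(0)
support, s_mask = rng.standard_normal((16, 16, 32)), (rng.random((16, 16)) > 0.7).astype(float)
query = rng.standard_normal((16, 16, 32))
proto = masked_avg_pool(support, s_mask)                      # deterministic support-side information
# one refinement step also pools from the query's current prediction:
q_mask = (activate(query, proto) > 0.5).astype(float)
proto = 0.5 * proto + 0.5 * masked_avg_pool(query, q_mask)    # intermediate prototype mixing both sides
```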
nips_2022_0-uBrFiOVf | DTG-SSOD: Dense Teacher Guidance for Semi-Supervised Object Detection | The Mean-Teacher (MT) scheme is widely adopted in semi-supervised object detection (SSOD). In MT, sparse pseudo labels, offered by the final predictions of the teacher (e.g., after Non Maximum Suppression (NMS) post-processing), are adopted for the dense supervision for the student via hand-crafted label assignment. However, the "sparse-to-dense'' paradigm complicates the pipeline of SSOD, and simultaneously neglects the powerful direct, dense teacher supervision. In this paper, we attempt to directly leverage the dense guidance of teacher to supervise student training, i.e., the "dense-to-dense'' paradigm. Specifically, we propose the Inverse NMS Clustering (INC) and Rank Matching (RM) to instantiate the dense supervision, without the widely used, conventional sparse pseudo labels. INC leads the student to group candidate boxes into clusters in NMS as the teacher does, which is implemented by learning grouping information revealed in NMS procedure of the teacher. After obtaining the same grouping scheme as the teacher via INC, the student further imitates the rank distribution of the teacher over clustered candidates through Rank Matching. With the proposed INC and RM, we integrate Dense Teacher Guidance into Semi-Supervised Object Detection (termed "DTG-SSOD''), successfully abandoning sparse pseudo labels and enabling more informative learning on unlabeled data. On COCO benchmark, our DTG-SSOD achieves state-of-the-art performance under various labelling ratios. For example, under 10% labelling ratio, DTG-SSOD improves the supervised baseline from 26.9 to 35.9 mAP, outperforming the previous best method Soft Teacher by 1.9 points. | Accept |
This paper proposes a dense-to-dense semi-supervised object detection method, where the teacher's NMS is used to guide the clustering and ranking of bounding box candidates from the student. This is motivated from potential noise resulting from sparse-to-dense pseudo-label supervision in existing methods. Results are shown on standard semi-supervised object detection benchmarks, with improvements over the current state of art.
The reviewers all thought that the paper had an interesting idea, strong results, and thorough experiments, ablations, and analysis. Some concerns included generalization to other architectures (e.g. DETR or single-stage CNN), comparison to feature distillation, and poor communication especially through the figures. The rebuttal provided answers to these, including new experiments showing generalization to a single-stage method, and all reviewers have recommended acceptance (and the reviewer with borderline accept mentioned it is a good paper). As a result, I recommend accepting this paper as it provides an interesting new contribution to the common mean teacher paradigm. I highly encourage the authors to add new elements that came out in the rebuttal, especially generalization to single-stage methods and failure cases. | train | [
"wZ-yVDZfSfk",
"AAo1PgoWLei",
"uSQ8DlWWC9",
"0Rg928RWGbG",
"HmlaW42_ci",
"dSSYeuMIiXY",
"65-L5TJ-EqZ",
"qXjso-9vRnh",
"wFv7yx_u1Qa",
"V3LbA5wBpJ9",
"NM5TGhtbkfF"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your rebuttal. The author provides detailed experiments to prove this. It has addressed my concerns in the rebuttal. So I think it is a good paper.",
" Thanks for the rebuttal. The authors have properly addressed the reviewer's questions in the rebuttal. Thus, the reviewer decided to keep the origina... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"uSQ8DlWWC9",
"V3LbA5wBpJ9",
"NM5TGhtbkfF",
"HmlaW42_ci",
"V3LbA5wBpJ9",
"wFv7yx_u1Qa",
"qXjso-9vRnh",
"nips_2022_0-uBrFiOVf",
"nips_2022_0-uBrFiOVf",
"nips_2022_0-uBrFiOVf",
"nips_2022_0-uBrFiOVf"
] |
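The Inverse NMS Clustering idea in the DTG-SSOD record above boils down to running NMS while recording which suppressed candidates each kept box absorbs; that grouping is the dense teacher signal the student imitates. Below is a runnable sketch; the [x1, y1, x2, y2] box format, the IoU threshold, and the function names are illustrative assumptions.

```python
import numpy as np

def iou(a, b):
    x1, y1 = np.maximum(a[:2], b[:2])
    x2, y2 = np.minimum(a[2:], b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms_with_clusters(boxes, scores, thr=0.5):
    """Standard NMS, but it also records which suppressed candidates each kept box absorbed."""
    order = [int(i) for i in np.argsort(-scores)]
    clusters = {}
    while order:
        i = order.pop(0)
        members = [j for j in order if iou(boxes[i], boxes[j]) >= thr]
        clusters[i] = members                  # the grouping signal a student can be trained to imitate
        order = [j for j in order if j not in members]
    return clusters

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms_with_clusters(boxes, scores))        # {0: [1], 2: []}
```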
nips_2022_LIKlL1Br9AT | Contact-aware Human Motion Forecasting | In this paper, we tackle the task of scene-aware 3D human motion forecasting, which consists of predicting future human poses given a 3D scene and a past human motion. A key challenge of this task is to ensure consistency between the human and the scene, accounting for human-scene interactions. Previous attempts to do so model such interactions only implicitly, and thus tend to produce artifacts such as ``ghost motion" because of the lack of explicit constraints between the local poses and the global motion. Here, by contrast, we propose to explicitly model the human-scene contacts. To this end, we introduce distance-based contact maps that capture the contact relationships between every joint and every 3D scene point at each time instant. We then develop a two-stage pipeline that first predicts the future contact maps from the past ones and the scene point cloud, and then forecasts the future human poses by conditioning them on the predicted contact maps. During training, we explicitly encourage consistency between the global motion and the local poses via a prior defined using the contact maps and future poses. Our approach outperforms the state-of-the-art human motion forecasting and human synthesis methods on both synthetic and real datasets. Our code is available at https://github.com/wei-mao-2019/ContAwareMotionPred. | Accept | Three expert reviewers have recommended accepting the paper after the discussion period. Reviewers like the overall idea and framework. The AC agrees and recommends acceptance. Please carefully revise the paper based on the reviews. | train | [
"EcVXc_zUNCk",
"afWvOKiqRMk",
"0Tqd9BeHwIc",
"YQIgRUeX0vQ",
"YQxWRBYU5er",
"guh8pBoYUeG",
"htr-86LXGpE",
"SdPix4XY7Np",
"_507hdQuFVu",
"ms6dAWB_Cg4"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the suggestion.\n\nWe have added additional results to the supplemental material. As also mentioned in the Checklist, we will release our source code upon the acceptance of this paper which also includes the code to visualize our results. ",
" Thanks for the detailed response. \n\nMost of my concerns... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"afWvOKiqRMk",
"htr-86LXGpE",
"YQIgRUeX0vQ",
"ms6dAWB_Cg4",
"guh8pBoYUeG",
"_507hdQuFVu",
"SdPix4XY7Np",
"nips_2022_LIKlL1Br9AT",
"nips_2022_LIKlL1Br9AT",
"nips_2022_LIKlL1Br9AT"
] |
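A small sketch of the distance-based contact maps in the record above: a joints-by-points matrix of soft contact scores derived from pairwise distances. The exponential falloff and the tau scale are illustrative assumptions; the paper predicts these maps for future frames and conditions pose forecasting on them.

```python
import numpy as np

def contact_map(joints, scene, tau=0.1):
    """Soft contact score between every joint and every scene point at one time instant."""
    d = np.linalg.norm(joints[:, None, :] - scene[None, :, :], axis=-1)  # (J, P) pairwise distances
    return np.exp(-d / tau)                                              # near 1 when touching, toward 0 when far

rng = np.random.default_rng(0)
joints = rng.random((22, 3))          # e.g. SMPL-like joint positions at one frame
scene = rng.random((500, 3))          # scene point cloud
C = contact_map(joints, scene)
print(C.shape)                        # (22, 500)
```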
nips_2022_7CONgGdxsV | Understanding Programmatic Weak Supervision via Source-aware Influence Function | Programmatic Weak Supervision (PWS) aggregates the source votes of multiple weak supervision sources into probabilistic training labels, which are in turn used to train an end model. With its increasing popularity, it is critical to have some tool for users to understand the influence of each component (\eg, the source vote or training data) in the pipeline and interpret the end model behavior. To achieve this, we build on Influence Function (IF) and propose source-aware IF, which leverages the generation process of the probabilistic labels to decompose the end model's training objective and then calculate the influence associated with each (data, source, class) tuple. These primitive influence scores can then be used to estimate the influence of individual components of PWS, such as source vote, supervision source, and training data. On datasets of diverse domains, we demonstrate multiple use cases: (1) interpreting incorrect predictions from multiple angles, which reveals insights for debugging the PWS pipeline, (2) identifying mislabeling of sources with a gain of 9\%-37\% over baselines, and (3) improving the end model's generalization performance by removing harmful components in the training objective (13\%-24\% better than ordinary IF). | Accept | This paper proposes source-aware Influence Function (IF) to study the “influence” of individual data, source, and class tuples on the performance of different label functions in the programmatic weak supervision paradigm. The proposed method has the capability to work with diverse data domains (tabular, image, textual). An ample number of datasets are used in the experiments.
The reviewers agree that the proposed method is interesting and sound, the experiments are thorough, and the results provide valuable insights for future work. The concerns and questions raised by the reviewers were properly addressed by the authors' response. | test | [
"QU1Q-MNYVv",
"iOlUfX0XO4G",
"pXFDuGrnq5i",
"LZnP-mvyLCx",
"DdtfKqcMB96",
"K6R1BevR9Hl",
"Pw9lhotw6qy",
"OKA67Zxes23"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \n| | | MV | | | | | | DS | | | | | Snorkel | | | |\n| ------------ | :--: | :-------: | :----: | :-------: | :-------: | :-------: | :--: | :-------: | :-------: | :-------... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"iOlUfX0XO4G",
"Pw9lhotw6qy",
"K6R1BevR9Hl",
"OKA67Zxes23",
"nips_2022_7CONgGdxsV",
"nips_2022_7CONgGdxsV",
"nips_2022_7CONgGdxsV",
"nips_2022_7CONgGdxsV"
] |
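For readers unfamiliar with influence functions, the record above builds on the classic estimate I(z_i) = -grad L(z_test)^T H^{-1} grad L(z_i); the paper's contribution is further decomposing grad L(z_i) per (data, source, class) tuple via the label model. Below is a sketch of the classic quantity for a regularized logistic regression; the weight vector, random data, and damping term lam are illustrative assumptions.

```python
import numpy as np

def logistic_grads_hessian(w, X, y, lam=1e-3):
    p = 1 / (1 + np.exp(-X @ w))
    grads = (p - y)[:, None] * X                                  # per-example gradients
    H = (X * (p * (1 - p))[:, None]).T @ X / len(y) + lam * np.eye(len(w))
    return grads, H

def influence_on_test(w, X_tr, y_tr, x_te, y_te):
    """Classic IF: influence of each training point on the test loss."""
    grads, H = logistic_grads_hessian(w, X_tr, y_tr)
    p_te = 1 / (1 + np.exp(-x_te @ w))
    g_te = (p_te - y_te) * x_te
    return -grads @ np.linalg.solve(H, g_te)                      # one score per training point

rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 3)), rng.integers(0, 2, 50).astype(float)
w = np.zeros(3)                                                   # stand-in for a fitted parameter vector
print(influence_on_test(w, X, y, X[0], y[0])[:5])
```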
nips_2022_k7FuTOWMOc7 | Elucidating the Design Space of Diffusion-Based Generative Models | We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of a previously trained ImageNet-64 model from 2.07 to near-SOTA 1.55, and after re-training with our proposed improvements to a new SOTA of 1.36. | Accept | Ratings: 8/9/8/7.
Confidence: 4/4/4/5.
Discussion among reviewers: No.
Summary: This is an excellent paper analyzing the design space of diffusion models. The paper clarifies the design space by disentangling the effects of (1) parameterization, (2) sampling, and (3) training separately. The reviewers uniformly agree that the paper is well written and that the empirical results are impressive. Given the enormous interest in diffusion models in the research community, and the likely high impact of advancements in this subfield, this paper is well timed, and will probably be very well received by the NeurIPS community.
Decision: I highly recommend accepting this paper.
"b1M7dY_e9C",
"2jNQZ5NMJK4",
"JK2tbKgI6h_v",
"VsMxG6fNCi1",
"Q8k7apk1UdC",
"AZvQgzXI_SN",
"AqtWGbdOLDr",
"kmfBuNSIYOm",
"EHQP3cnRbLU",
"TwVg5ExbRhs",
"XyqsKy2paNd"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for clarifying the motivation of your tailored stochastic sampler. My rating about this paper stays unchanged.",
" Thanks for your response. I'll stick to my original rating recommending a Strong Accept.",
" Thank you for the response. I am looking forward to see Fig. 5(b) for ImageNet in the camera... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
9,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"AZvQgzXI_SN",
"AqtWGbdOLDr",
"VsMxG6fNCi1",
"XyqsKy2paNd",
"TwVg5ExbRhs",
"EHQP3cnRbLU",
"kmfBuNSIYOm",
"nips_2022_k7FuTOWMOc7",
"nips_2022_k7FuTOWMOc7",
"nips_2022_k7FuTOWMOc7",
"nips_2022_k7FuTOWMOc7"
] |
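To make the sampling side of the record above concrete: a numpy sketch of a Karras-style noise schedule and a deterministic 2nd-order (Heun) sampler, consistent with the design space the paper lays out. The toy closed-form denoiser (the posterior mean under a standard-normal data prior) stands in for a trained network, and the schedule constants are illustrative defaults, not a definitive implementation.

```python
import numpy as np

def edm_sigmas(n=18, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    i = np.arange(n)
    s = (sigma_max ** (1 / rho) + i / (n - 1) * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
    return np.append(s, 0.0)                           # final step reaches sigma = 0

def heun_sample(denoise, shape, n=18, seed=0):
    rng = np.random.default_rng(seed)
    sig = edm_sigmas(n)
    x = rng.standard_normal(shape) * sig[0]
    for i in range(n):
        d = (x - denoise(x, sig[i])) / sig[i]          # dx/dsigma from the denoiser
        x_next = x + (sig[i + 1] - sig[i]) * d         # Euler step
        if sig[i + 1] > 0:                             # 2nd-order (Heun) correction
            d2 = (x_next - denoise(x_next, sig[i + 1])) / sig[i + 1]
            x_next = x + (sig[i + 1] - sig[i]) * 0.5 * (d + d2)
        x = x_next
    return x

# toy denoiser: exact posterior mean when the data prior is N(0, I)
denoise = lambda x, sigma: x / (1.0 + sigma ** 2)
sample = heun_sample(denoise, shape=(4,))
```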
nips_2022_HEcYYV5MPxa | Dict-TTS: Learning to Pronounce with Prior Dictionary Knowledge for Text-to-Speech | Polyphone disambiguation aims to capture accurate pronunciation knowledge from natural text sequences for reliable Text-to-speech (TTS) systems. However, previous approaches require substantial annotated training data and additional efforts from language experts, making it difficult to extend high-quality neural TTS systems to out-of-domain daily conversations and countless languages worldwide. This paper tackles the polyphone disambiguation problem from a concise and novel perspective: we propose Dict-TTS, a semantic-aware generative text-to-speech model with an online website dictionary (the existing prior information in the natural language). Specifically, we design a semantics-to-pronunciation attention (S2PA) module to match the semantic patterns between the input text sequence and the prior semantics in the dictionary and obtain the corresponding pronunciations; The S2PA module can be easily trained with the end-to-end TTS model without any annotated phoneme labels. Experimental results in three languages show that our model outperforms several strong baseline models in terms of pronunciation accuracy and improves the prosody modeling of TTS systems. Further extensive analyses demonstrate that each design in Dict-TTS is effective. The code is available at https://github.com/Zain-Jiang/Dict-TTS. | Accept | The reviewers generally liked the proposed approach in this paper, agreed that it is novel, and that the experiments showed good improvements over reasonable baselines. There was broad concern about the ablation study in the original paper (one shared by the AC), but the authors revised that section during the discussion period to the satisfaction of three of the reviewers. While three reviewers recommend that the paper be accepted, one reviewer recommends a borderline reject. The reviewer stuck to this recommendation after the discussion period, primarily citing concerns about whether or not the method is broadly applicable versus being limited primarily to being useful for logographic languages. While I am recommending that this paper be accepted, I urge the authors to expand their discussion of the limitations of the method in Appendix G. I think the discussion with reviewer ksPw of the JSUT results and the fact that Japanese writing comprises both more alphabetic and more logographic elements would be a valuable addition to that appendix and would help to clarify the contributions and limitations of the proposed method.
| test | [
"uIyZ_t27FL9",
"UvZqzAiuG0T",
"MzLeN_NcKRI",
"2iJFV05vZU5",
"eHP8JmamRHT",
"iYXCU_MwylK",
"lZ-4oGxISlb",
"vBQbgBgoA1",
"V0Kmi153VJo",
"kR70CKfVFkH",
"GI1tqKa4zy_",
"QmMjs-LLz8k",
"OlLILDjgYuw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for revising the paper. The response answers my questions. I am updating the score accordingly.",
" Thanks again for your great efforts and valuable comments. \n\nWe have carefully addressed the main concerns and provided detailed responses to each reviewer. We hope you might find the resp... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"2iJFV05vZU5",
"nips_2022_HEcYYV5MPxa",
"nips_2022_HEcYYV5MPxa",
"OlLILDjgYuw",
"iYXCU_MwylK",
"QmMjs-LLz8k",
"vBQbgBgoA1",
"GI1tqKa4zy_",
"kR70CKfVFkH",
"nips_2022_HEcYYV5MPxa",
"nips_2022_HEcYYV5MPxa",
"nips_2022_HEcYYV5MPxa",
"nips_2022_HEcYYV5MPxa"
] |
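A toy version of the semantics-to-pronunciation matching in the Dict-TTS record above: dot-product attention between a character's contextual embedding and the embeddings of its dictionary senses, whose weights then mix the per-sense pronunciation representations. All embeddings here are random stand-ins and the dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def s2pa_step(char_emb, sense_embs, sense_prons):
    """Match a character's context embedding against its dictionary senses; mix their pronunciations."""
    attn = softmax(char_emb @ sense_embs.T / np.sqrt(char_emb.shape[-1]))  # weight per sense
    pron = attn @ sense_prons                                              # soft pronunciation embedding
    return attn, pron

rng = np.random.default_rng(0)
d = 16
char_emb = rng.standard_normal(d)             # semantics of the polyphonic character in context
sense_embs = rng.standard_normal((3, d))      # dictionary gloss embeddings for its 3 senses
sense_prons = rng.standard_normal((3, 8))     # pronunciation embeddings, one per sense
attn, pron = s2pa_step(char_emb, sense_embs, sense_prons)
print(attn.round(2))                          # which dictionary sense the model believes applies
```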
nips_2022_vhKaBdOOobB | GhostNetV2: Enhance Cheap Operation with Long-Range Attention | Light-weight convolutional neural networks (CNNs) are specially designed for applications on mobile devices with faster inference speed. The convolutional operation can only capture local information in a window region, which prevents performance from being further improved. Introducing self-attention into convolution can capture global information well, but it will largely encumber the actual speed. In this paper, we propose a hardware-friendly attention mechanism (dubbed DFC attention) and then present a new GhostNetV2 architecture for mobile applications. The proposed DFC attention is constructed based on fully-connected layers, which can not only execute fast on common hardware but also capture the dependence between long-range pixels. We further revisit the expressiveness bottleneck in previous GhostNet and propose to enhance expanded features produced by cheap operations with DFC attention, so that a GhostNetV2 block can aggregate local and long-range information simultaneously. Extensive experiments demonstrate the superiority of GhostNetV2 over existing architectures. For example, it achieves 75.3% top-1 accuracy on ImageNet with 167M FLOPs, significantly surpassing GhostNetV1 (74.5%) with a similar computational cost. The source code will be available at https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnetv2_pytorch and https://gitee.com/mindspore/models/tree/master/research/cv/ghostnetv2. | Accept | This paper aims to augment efficient CNNs with self-attention. However, since the naive approach to self-attention is computationally expensive and would contradict the point of efficient CNNs, the authors introduce a new attention mechanism which captures long-range information without substantially added computation cost. The paper demonstrates that GhostNetV2 exhibits markedly better performance at various compute limits as compared to previously proposed efficient networks. Three of the reviewers were quite positive on this paper, noting the novelty of the approach and the strength of the empirical results. One reviewer had several concerns, primarily regarding comparison to NAS based approaches and the novelty of the approach. I agree with the other reviewers that it is not reasonable to compare NAS approaches to non-NAS approaches, and agree that there are marked differences between this work and the previous work cited. I therefore recommend acceptance. I think this will be a valuable contribution to the efficient network community. | val | [
"VS4hQobanBh",
"EPtLi4LZTaE",
"B6AmHzyy0Sv",
"swqgwmTN6Lq",
"edvNiHYiDEx",
"vWNJ4wxaFZ",
"RTwOorslz7G",
"kNGapMAj-8",
"k3JWlEJAjD1",
"Hya34Za4oB9",
"8C1AgMTFes",
"oby17P7M50",
"3NlUgry_7Dt",
"oURXecLNmwd",
"L8OIDMusXmS",
"SmKnmcr_lAL",
"u6iftgwjhb"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear area chair and anonymous reviewers,\n\nThanks for your constructive comments and valuable suggestions to improve this paper. We have revised the manuscript and supplemental materials by improving the presentation and including more experiments, discussions, and explanations. If you have any questions, we are... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
5
] | [
"nips_2022_vhKaBdOOobB",
"edvNiHYiDEx",
"swqgwmTN6Lq",
"8C1AgMTFes",
"Hya34Za4oB9",
"kNGapMAj-8",
"oURXecLNmwd",
"k3JWlEJAjD1",
"u6iftgwjhb",
"SmKnmcr_lAL",
"L8OIDMusXmS",
"oURXecLNmwd",
"oURXecLNmwd",
"nips_2022_vhKaBdOOobB",
"nips_2022_vhKaBdOOobB",
"nips_2022_vhKaBdOOobB",
"nips_2... |
nips_2022_-zYfrOl2I6O | CASA: Category-agnostic Skeletal Animal Reconstruction | Recovering a skeletal shape from a monocular video is a longstanding challenge. Prevailing nonrigid animal reconstruction methods often adopt a control-point driven animation model and optimize bone transforms individually without considering skeletal topology, yielding unsatisfactory shape and articulation. In contrast, humans can easily infer the articulation structure of an unknown character by associating it with a seen articulated object in their memory. Inspired by this fact, we present CASA, a novel category-agnostic articulated animal reconstruction method. Our method consists of two components, a video-to-shape retrieval process and a neural inverse graphics framework. During inference, CASA first finds a matched articulated shape from a 3D character assets bank so that the input video scores highly with the rendered image, according to a pretrained image-language model. It then integrates the retrieved character into an inverse graphics framework and jointly infers the shape deformation, skeleton structure, and skinning weights through optimization. Experiments validate the efficacy of our method in shape reconstruction and articulation. We further show that we can use the resulting skeletal-animated character for re-animation.
| Accept | The paper shows how to combine 3d model retrieval with an inverse graphics framework to recover 3D models of a diverse range of animals from video. The paper also introduces a new dataset of 3D animals that is projected to be of value in future works.
While one reviewer considers the technical problem to be "an engineering work", the other reviewers and the AC consider that the implementation and experimental study of this idea, which is novel in this context, are valuable.
Based on calibration across other papers and reviews in this AC's stack, the average review score is generally inconsistent with the review text, even given the effective rebuttal. I mention this only because a poster acceptance might seem at odds with average score, but of course the point of meta reviewing is to make a judgement which looks at more than average score. The key decision that might be affected in this case is oral vs poster, so it is perhaps useful to clarify: an oral presentation needs to be of value to the broad NeurIPS community. 3D computer vision is an important subfield, and animal reconstruction is an emerging topic in the subfield, but the learnings of this paper remain essentially within a subfield, so I am confident that poster is the appropriate disposition of this paper.
| train | [
"VF7oPoJf3B3",
"1tNX2xG4mm",
"pXL96kKHOZ6",
"4hNKy6CUspJ",
"f4g18gNqH6T",
"jWeQ3RGE2YW",
"NlI6IS-NT7t",
"VFydtWFE-cJ",
"VnZTuK6QUN",
"6XzKgobF_FU"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Most of my questions are answered adequately. I'd like to raise score to 7. The interesting part of the paper is the 3D skeletal model retrieval given a large database, which provides reasonable constraints when the target object falls roughly within the database. The remaining concern is that the method does not... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"4hNKy6CUspJ",
"jWeQ3RGE2YW",
"nips_2022_-zYfrOl2I6O",
"f4g18gNqH6T",
"6XzKgobF_FU",
"VnZTuK6QUN",
"VFydtWFE-cJ",
"nips_2022_-zYfrOl2I6O",
"nips_2022_-zYfrOl2I6O",
"nips_2022_-zYfrOl2I6O"
] |
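The retrieval stage in the CASA record above reduces to scoring rendered views of each asset against the input video in a shared embedding space and keeping the best-scoring asset. In the sketch below the embeddings are random stand-ins (a real system would use CLIP-style image-language features), and all names and dimensions are illustrative assumptions.

```python
import numpy as np

def retrieve_asset(video_emb, asset_render_embs):
    """Score each asset's rendered views against the video embedding; return the best asset index."""
    v = video_emb / np.linalg.norm(video_emb)
    r = asset_render_embs / np.linalg.norm(asset_render_embs, axis=-1, keepdims=True)
    scores = (r @ v).max(axis=1)                        # best rendered view per asset
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
video_emb = rng.standard_normal(512)                    # stand-in embedding of the input frames
asset_render_embs = rng.standard_normal((10, 8, 512))   # 10 assets x 8 rendered views
best, scores = retrieve_asset(video_emb, asset_render_embs)
# the retrieved articulated asset then initializes the inverse-graphics optimization
```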
nips_2022_6H00JM-DZjU | Fair and Efficient Allocations Without Obvious Manipulations | We consider the fundamental problem of allocating a set of indivisible goods among strategic agents with additive valuation functions. It is well known that, in the absence of monetary transfers, Pareto efficient and truthful rules are dictatorial, while there is no deterministic truthful mechanism that allocates all items and achieves envy-freeness up to one item (EF1), even for the case of two agents. In this paper, we investigate the interplay of fairness and efficiency under a relaxation of truthfulness called non-obvious manipulability (NOM), recently proposed by~\citet{troyan2020obvious}. We show that this relaxation allows us to bypass the aforementioned negative results in a very strong sense. Specifically, we prove that there are deterministic and EF1 algorithms that are not obviously manipulable, and the algorithm that maximizes utilitarian social welfare (the sum of agents' utilities), which is Pareto efficient but not dictatorial, is not obviously manipulable for $n \geq 3$ agents (but obviously manipulable for $n=2$ agents). At the same time, maximizing the egalitarian social welfare (the minimum of agents' utilities) or the Nash social welfare (the product of agents' utilities) is obviously manipulable for any number of agents and items. Our main result is an approximation-preserving black-box reduction from the problem of designing EF1 and NOM mechanisms to the problem of designing EF1 algorithms. En route, we prove an interesting structural result about EF1 allocations, as well as new ``best-of-both-worlds'' results (for the problem without incentives), that might be of independent interest. | Accept | Reviewers agreed that this paper explored a natural and interesting strategic aspect of fair division (non-obvious manipulability). This helped escape classical impossibility results in fair division. Minor concerns were raised about the practical significance of NOM, but overall the sentiment was quite positive. | train | [
"jIwMat57jNj",
"CMjz54AGT0v",
"Rwc40XjGwbr",
"CMuiwB9X2Pn",
"wHBnJSBp5p",
"mnwgYpaTI7J3",
"RROxOv3mdU9O",
"SUQHYw3zJ9p",
"yV3xyunBi8O",
"t6QsAxE3RCK",
"zubo2tQnpqG",
"u9xZs5YDhA",
"jzTRXWlKSS"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We will certainly incorporate this discussion in the final version. We hope you will reconsider your score in light of the response. Please let us know if you have any further questions.",
" Thanks for the additional discussion of these issues. I think bringing discussion along these lines into appropriate pla... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"CMjz54AGT0v",
"Rwc40XjGwbr",
"CMuiwB9X2Pn",
"wHBnJSBp5p",
"jzTRXWlKSS",
"u9xZs5YDhA",
"zubo2tQnpqG",
"t6QsAxE3RCK",
"nips_2022_6H00JM-DZjU",
"nips_2022_6H00JM-DZjU",
"nips_2022_6H00JM-DZjU",
"nips_2022_6H00JM-DZjU",
"nips_2022_6H00JM-DZjU"
] |
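Since EF1 is central to the record above, here is a self-contained checker for additive valuations: agent i may envy agent j only if the envy survives the removal of every single item from j's bundle. The example allocation is what round-robin picking would produce on these valuations; the data itself is an illustrative assumption.

```python
def is_ef1(allocation, valuations):
    """allocation: one set of items per agent; valuations[i][g]: agent i's additive value for item g."""
    n = len(allocation)
    for i in range(n):
        v_own = sum(valuations[i][g] for g in allocation[i])
        for j in range(n):
            if i == j or not allocation[j]:
                continue
            v_other = sum(valuations[i][g] for g in allocation[j])
            # EF1: envy must vanish after removing some single item from j's bundle
            if v_own < v_other - max(valuations[i][g] for g in allocation[j]):
                return False
    return True

vals = [[7, 3, 2, 5], [4, 6, 1, 8]]        # 2 agents, 4 goods
alloc = [{0, 1}, {3, 2}]                    # round-robin picking order: items 0, 3, 1, 2
print(is_ef1(alloc, vals))                  # True
```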
nips_2022_xwBdjfKt7_W | SNN-RAT: Robustness-enhanced Spiking Neural Network through Regularized Adversarial Training | Spiking neural networks (SNNs) are promising to be widely deployed in real-time and safety-critical applications with the advance of neuromorphic computing. Recent work has demonstrated the insensitivity of SNNs to small random perturbations due to the discrete internal information representation. The variety of training algorithms and the involvement of the temporal dimension pose more threats to the robustness of SNNs than that of typical neural networks. We account for the vulnerability of SNNs by constructing adversaries based on different differentiable approximation techniques. By deriving a Lipschitz constant specifically for the spike representation, we first theoretically answer the question of how much adversarial invulnerability is retained in SNNs. Hence, to defend against the broad attack methods, we propose a regularized adversarial training scheme with low computational overheads. SNNs can benefit from the constraint of the perturbed spike distance's amplification and the generalization on multiple adversarial $\epsilon$-neighbourhoods. Our experiments on the image recognition benchmarks have proven that our training scheme can defend against powerful adversarial attacks crafted from strong differentiable approximations. To be specific, our approach makes the black-box attacks of the Projected Gradient Descent attack nearly ineffective. We believe that our work will facilitate the spread of SNNs for safety-critical applications and help understand the robustness of the human brain. | Accept | This paper proposes an adversarial training method for spiking neural networks (SNNs). One challenge is that spiking networks are non-differentiable, so the paper develops various gradient approximation methods and builds on previous attack methods like FGSM and PGD with approximate gradients. An additional innovation is the development of a regularization method that estimates Lipschitz constants. Estimating Lipschitz constants of spiking neural networks is another technical challenge, and the paper develops a rigorous bound using a concept they call spike distance, which is an upper bound on the usual Lipschitz constant.
Several concerns were raised by the reviewers, including incomplete discussion of prior work, clarifications on the performed ablation study, and comparison to prior SOTA. Overall, the authors did a good job in their rebuttal and discussion of convincing the reviewers and this meta-reviewer that the paper merits publication. This is a somewhat niche problem setting, but the paper has several theoretical and practical innovations that are interesting and suitable for publication.
| test | [
"E4WEPpQBR_g",
"uATX9PK9nUA",
"qgQ_zEQsbT",
"jS1cuqYKab",
"EPPiP_R9kI2",
"IXvET_EefiO",
"G4-S-vbfvl5",
"WuUMJ_r7tN",
"H1-yxyxMkO",
"KsxmZdJmar",
"7yF83O_4Dd8",
"2Cgc6SU9_LQ",
"n2MpYT8yEc1",
"6VA-Tk5zCtX",
"mmnFAs8SgdL",
"W1xNu1uOujz",
"WvVi_GeRRc",
"u_fE3pJrGa9",
"enAt10tm_jR",
... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" After reading the rebuttal, most concerns are addressed. I will increase my score to borderline accept. The combination of adversarial training with SNN seems promising but the current version lacks theoretical contribution.",
" **Tabel R4: Layerwise Matrix Norm of Batch Normalization**\n| Performance | RA... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"6VA-Tk5zCtX",
"IXvET_EefiO",
"EPPiP_R9kI2",
"H3Z0zMz1LuH",
"R9UTY7uy4Av",
"H1-yxyxMkO",
"H3Z0zMz1LuH",
"R9UTY7uy4Av",
"KsxmZdJmar",
"2Cgc6SU9_LQ",
"nips_2022_xwBdjfKt7_W",
"mmnFAs8SgdL",
"R9UTY7uy4Av",
"H3Z0zMz1LuH",
"W1xNu1uOujz",
"WvVi_GeRRc",
"u_fE3pJrGa9",
"2Muk4sH2d7",
"nip... |
nips_2022_cFOhdl1cyU- | M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design | Multi-task learning (MTL) encapsulates multiple learned tasks in a single model and often lets those tasks learn better jointly. Multi-tasking models have become successful and often essential for many sophisticated systems such as autonomous driving and indoor robots. However, when deploying MTL onto those real-world systems that are often resource-constrained or latency-sensitive, two prominent challenges arise: (i) during training, simultaneously optimizing all tasks is often difficult due to gradient conflicts across tasks, and the challenge is amplified when a growing number of tasks have to be squeezed into one compact model; (ii) at inference, current MTL regimes have to activate nearly the entire model even to just execute a single task. Yet most real systems demand only one or two tasks at each moment, while flexibly switching between tasks per need: therefore such “all tasks activated” inference is also highly inefficient and non-scalable in practice.
In this paper, we present a model-accelerator co-design framework to enable efficient on-device MTL, that tackles both training and inference bottlenecks. Our framework, dubbed M³ViT, customizes mixture-of-experts (MoE) layers into a vision transformer (ViT) backbone for MTL, and sparsely activates task-specific experts during training, which effectively disentangles the parameter spaces to avoid different tasks’ training conflicts. Then at inference with any task of interest, the same design allows for activating only the task-corresponding sparse “expert” pathway, instead of the full model. Our new model design is further enhanced by hardware-level innovations, in particular, a novel computation reordering scheme tailored for memory-constrained MTL that achieves zero-overhead switching between tasks and can scale to any number of experts. Extensive experiments on PASCAL-Context and NYUD-v2 datasets at both software and hardware levels are conducted to demonstrate the effectiveness of the proposed design. When executing the practical scenario of single-task inference, M³ViT achieves higher accuracies than encoder-focused MTL methods, while significantly reducing 88% inference FLOPs. When implemented on a hardware platform of one Xilinx ZCU104 FPGA, our co-design framework reduces the memory requirement by 2.40×, while achieving energy efficiency (as the product of latency and power) up to 9.23× times higher than a comparable FPGA baseline. | Accept | This paper presents a model-accelerator co-design framework to enable on-device Multi-task Learning (MTL). At the model level, customized mixture-of-expert (MOE) layers are introduced for MTL, which alleviate gradient conflict at training time and improve the efficiency at inference time via sparse activation. At the accelerator level, the paper proposes computation reordering which allows zero-overhead switching between tasks. The algorithm is verified the on popular multi-task datasets, and the accelerator is implemented on commercial FPGAs, demonstrating improved efficiency.
The paper is very well written, and the details of the algorithm and hardware implementation are clearly explained. The authors chose a particular setting of MTL, then designed the model and tailored the parameters to enable efficient on-device MTL. The work is complete, covering everything from algorithm design to hardware implementation with sufficient innovations.
Reviewers have raised concerns such as
1). Evaluation on small datasets. During the rebuttal period, the authors provided more experimental results on the large-scale Taskonomy dataset.
2). Overclaiming. For example, double buffering is a well-known technique for dataflow optimization. The technique itself is by no means novel. However, I think using it to solve a practical problem still has value.
Overall, it is a solid paper and is recommended for acceptance.
| train | [
"QxGckJISqnY",
"Qn2UA6sG_XW",
"9U1tEAVCiSK",
"R0Uf74TLDrS",
"Gpq8ORTV1kl",
"ertzBwxe6A-H",
"y9Jk4RbmnON0",
"-W-Yh2m_RSD",
"JIrol0AaZyV",
"1yVI4Vzi6b3",
"f7gb3Up7mPl",
"UzA_XxIpOS",
"XgORurZT_Ud",
"r9cUm6ih1ZV",
"G_NAXruH3I2",
"L8Al9lhUbm5"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. The rebuttal has well addressed my questions. I support this paper for its novelty and solid experiments, and I will keep my original score.",
" Dear Reviewer V1Gi:\n\nSince the author-reviewer discussion period will end by tomorrow, we will appreciate if you could check our response t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
2
] | [
"-W-Yh2m_RSD",
"Gpq8ORTV1kl",
"R0Uf74TLDrS",
"Gpq8ORTV1kl",
"1yVI4Vzi6b3",
"r9cUm6ih1ZV",
"L8Al9lhUbm5",
"G_NAXruH3I2",
"1yVI4Vzi6b3",
"r9cUm6ih1ZV",
"XgORurZT_Ud",
"nips_2022_cFOhdl1cyU-",
"nips_2022_cFOhdl1cyU-",
"nips_2022_cFOhdl1cyU-",
"nips_2022_cFOhdl1cyU-",
"nips_2022_cFOhdl1cyU... |
nips_2022_NaZwgxp-mT_ | Training Uncertainty-Aware Classifiers with Conformalized Deep Learning | Deep neural networks are powerful tools to detect hidden patterns in data and leverage them to make predictions, but they are not designed to understand uncertainty and estimate reliable probabilities. In particular, they tend to be overconfident. We begin to address this problem in the context of multi-class classification by developing a novel training algorithm producing models with more dependable uncertainty estimates, without sacrificing predictive power. The idea is to mitigate overconfidence by minimizing a loss function, inspired by advances in conformal inference, that quantifies model uncertainty by carefully leveraging hold-out data. Experiments with synthetic and real data demonstrate this method can lead to smaller conformal prediction sets with higher conditional coverage, after exact calibration with hold-out data, compared to state-of-the-art alternatives. | Accept | Decision: Accept
This paper extends conformal prediction techniques to multi-class classification using deep neural networks and makes the training of the neural network aware of the conformal inference post-processing. The main technical contribution is a differentiable objective to approximate a CDF-based test on the conformity score. The paper provides both theoretical analysis as well as empirical evaluation results of the proposed approach.
Reviewers found the paper to be well written and the approach to be well motivated and supported. There were a few technical concerns but many of them were addressed in author feedback.
Still, the main technical downside is the expensiveness of the approach, with experiments that are relatively small scale in terms of network size and dataset size. Also, several practical issues are not discussed, e.g., data augmentation and distribution shift.
However, this is indeed an early work in the conformal prediction area that tries to make neural network training adaptive to conformal inference "post-processing", and I believe the work is going in the right direction and will have a good impact on the "conformal prediction + DL" area.
As a side note, I'd encourage the authors to add discussions on related work that proposes regularisers for better neural network calibration, e.g., MMCE https://proceedings.mlr.press/v80/kumar18a/kumar18a.pdf. | train | [
"MWQ40uZ86p",
"ky-KOEzDU0_",
"luTqRfaHUX",
"TOk5gBkQyGJ",
"9tDeUTly8zI",
"zoWfSQH0yFK",
"U9iDarhnHrg",
"yRVBlTBy6Mm",
"pdCp6KmoHo4",
"bPKi8Thg6HV",
"5D1vE2-isEj",
"lHiEfbN0KY_",
"Vi-6KuqupI_",
"O6goads0k1d",
"8y9Qx6AyrM3",
"5-Mf2nPzZC",
"AhPtj6ROenb",
"QPD2xxvu1MZ",
"AYJjsj0wUDa"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_rev... | [
" Thank you for the stimulating discussion! We will incorporate these ideas in the paper, and the extra analyses in the appendix. You have also successfully convinced us to look at data augmentation more closely in the near future.",
" In any event, thanks for engaging. Given the amount of discussion that has ari... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
2
] | [
"ky-KOEzDU0_",
"luTqRfaHUX",
"TOk5gBkQyGJ",
"9tDeUTly8zI",
"zoWfSQH0yFK",
"U9iDarhnHrg",
"yRVBlTBy6Mm",
"bPKi8Thg6HV",
"5-Mf2nPzZC",
"5D1vE2-isEj",
"DrrdSoQVjb6",
"ZAbPk_ZEsu",
"iAgcSKyV1GN",
"8y9Qx6AyrM3",
"AhPtj6ROenb",
"pPEsGlpA-wZ",
"QPD2xxvu1MZ",
"f3XAPkRJAO_",
"nips_2022_Na... |
nips_2022_aPXMGv7aeOn | Compressible-composable NeRF via Rank-residual Decomposition | Neural Radiance Field (NeRF) has emerged as a compelling method to represent 3D objects and scenes for photo-realistic rendering.
However, its implicit representation makes it difficult to manipulate the models in the way explicit mesh representations allow.
Recent advances in NeRF manipulation are usually restricted by a shared renderer network or suffer from large model sizes.
To circumvent the hurdle, in this paper, we present a neural field representation that enables efficient and convenient manipulation of models.
To achieve this goal, we learn a hybrid tensor rank decomposition of the scene without neural networks.
Motivated by the low-rank approximation property of the SVD algorithm, we propose a rank-residual learning strategy to encourage the preservation of primary information in lower ranks.
The model size can then be dynamically adjusted by rank truncation to control the levels of detail, achieving near-optimal compression without extra optimization.
Furthermore, different models can be arbitrarily transformed and composed into one scene by concatenating along the rank dimension.
The growth of storage cost can also be mitigated by compressing the unimportant objects in the composed scene.
We demonstrate that our method is able to achieve comparable rendering quality to state-of-the-art methods, while enabling extra capability of compression and composition.
Code is available at https://github.com/ashawkey/CCNeRF. | Accept | This paper presents a new NeRF method based on tensor decomposition. The method supports both compression and composability, while achieving similar results compared to standard NeRF models. The method does not use a neural network. Several reviewers found the paper easy to follow, the method novel & sound, and the comparisons comprehensive. Two reviewers mentioned the similarity between the proposed work and TensoRF. The rebuttal addressed most concerns and highlighted the differences between the two works. As TensoRF is a concurrent ECCV submission, the existence of TensoRF should not be used against the proposed work. The AC agreed with most of the reviewers and recommended accepting the paper.
| val | [
"Ik_iKtjrzE9",
"lQTGB3hyhxK",
"llF5ba0ciFh",
"Ihwt1rGrYqd",
"T19cyBMnQNc",
"BkGV6zM11d",
"-VABUxny1dt",
"Xzn4-PdvaEV",
"w2B90SL8YN",
"9HEzTid0moy",
"jGRqqJWY30",
"aXbDFvjTpW",
"pJ-TqZ6mTJH",
"OC89pgR7GFs",
"a3wgB6d1UB0",
"Ljk2nMqWNM5",
"7640BB9fdA6",
"xf4TatTslIS",
"DLds5Sz5WMH"
... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"... | [
" Thanks for the answer. The authors have addressed my concerns and I will keep my score the same.",
" Dear reviewers, \n\nThank you all for providing valuable comments. The authors have provided detailed responses to your comments. Has the response addressed your concerns?\n\nIf you haven't, I would appreciate i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"pJ-TqZ6mTJH",
"nips_2022_aPXMGv7aeOn",
"Ihwt1rGrYqd",
"T19cyBMnQNc",
"BkGV6zM11d",
"-VABUxny1dt",
"Xzn4-PdvaEV",
"aXbDFvjTpW",
"9HEzTid0moy",
"OC89pgR7GFs",
"nips_2022_aPXMGv7aeOn",
"DLds5Sz5WMH",
"xf4TatTslIS",
"7640BB9fdA6",
"Ljk2nMqWNM5",
"nips_2022_aPXMGv7aeOn",
"nips_2022_aPXMG... |
nips_2022_uxc8hDSs_xh | Can Hybrid Geometric Scattering Networks Help Solve the Maximum Clique Problem? | We propose a geometric scattering-based graph neural network (GNN) for approximating solutions of the NP-hard maximum clique (MC) problem. We construct a loss function with two terms, one which encourages the network to find highly connected nodes and the other which acts as a surrogate for the constraint that the nodes form a clique. We then use this loss to train an efficient GNN architecture that outputs a vector representing the probability for each node to be part of the MC and apply a rule-based decoder to make our final prediction. The incorporation of the scattering transform alleviates the so-called oversmoothing problem that is often encountered in GNNs and would degrade the performance of our proposed setup. Our empirical results demonstrate that our method outperforms representative GNN baselines in terms of solution accuracy and inference speed as well as conventional solvers like Gurobi with limited time budgets. Furthermore, our scattering model is very parameter efficient with only $\sim$ 0.1\% of the number of parameters compared to previous GNN baseline models. | Accept | All reviewers agree that the proposed approach to use the geometric scattering transform is simple and effective both computationally and in terms of the ability of the method to identify larger cliques for the max-clique problem (except perhaps for one reviewer on the last point).
The work would have more impact if it could be demonstrated that using the geometric scattering transform yields improvement for other combinatorial optimization problems on graphs, or if it could outperform classical heuristics even if they are run for a longer time. Currently, the experiments presented in the appendix are more compelling than the experiments presented in the main paper.
Given the elements they provided in the discussion with the reviewers, the authors should also emphasize more clearly in the paper how their proposed architecture differs from other scattering GCNs that have been proposed, and I would suggest doing an ablation study to show that the enhancements they introduced in the architecture are actually useful.
A consensus between all reviewers could unfortunately not be found:
- Two reviewers were satisfied with the way the authors had addressed their concerns and with the additional experiments proposed.
- One reviewer considers that the idea of using the scattering transform in this application is not a sufficient contribution to grant publication.
Given that
- two reviewers find the contribution compelling and their concerns are well addressed
- the use the geometric scattering transform is simple and yet effective both computationally and in terms of the ability of the method to identify larger cliques
- the sole motivation of the reviewer who votes for rejection is a claim that the scientific contribution is insufficient, against the opinion of the two other reviewers and that of the AC,
the AC is in favor of acceptance.
### Acknowledging that the proposed loss function is the same as in Karalias and Loukas (2021)!
One element which is very important is that the discussion with one of the reviewers has clearly established that **the loss function introduced in this paper is exactly the same** (up to a constant and a multiplicative factor) **as the loss function $\ell_{\text{clique}}$ obtained in** Corollary 1 of **Karalias and Loukas (2021)**.
In the discussion with the reviewer, the authors wrote
"We are happy to add discussion and clarification of the loss terms to our manuscript. This discussion and clarification can also help readers to understand the model better." (which I entirely agree with) but they did not act upon that, yet...
It would now be more than **absolutely necessary to add that discussion**! This will add value to the paper as it will show that the proposed loss is less ad hoc than it might seem, given that it can be obtained via at least two routes. Moreover, establishing connections between approaches in the literature is clearly a valuable contribution.
Currently, the conclusion says: "We further construct a two-term loss function which [...]", which still strongly suggests that the loss function is novel, and it is therefore very problematic ethically. The sentence added in blue on line 186 is not sufficient to address the issue.
**The authors should** at the very least **add a sentence** at the beginning of section 3.4 **saying** something like: "We propose a simple derivation of a multi-objective loss function, and retrieve **a loss function which was also obtained by Karalias and Loukas (2021)** as a natural upper bound to the probabilistic penalty loss that they propose".
And at the end of section 3.4, the authors should add a sentence saying: **"The proposed loss matches the loss $\ell_{\text{clique}}$ obtained in Corollary 1, Section 4.1 of Karalias and Loukas (2021)."** | train | [
"2TcnilNRcL8",
"9C6teTzY1z",
"gR4cjDRb8GB",
"dEdJ7-3fqsi",
"utzLmsWXyCp",
"ht2Y2JgYrM1",
"VkDiWor7Fru",
"7JBZDph2Cyp",
"M2pVf4F5PKs",
"e6s2l5qH6tU",
"rnQbSpk5M-6",
"8Uv5539t2GC",
"K8B_5_k28ves"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad that some of your concerns have been addressed. \nAnd we are happy that the reviewer agrees with us on the following points:\n1. Our model is lightweight (~0.1 % parameters count) and performs well compared to previous work.\n2. We get a more noticeable benefit on the hardness dataset.\n3. Our struct... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"9C6teTzY1z",
"VkDiWor7Fru",
"dEdJ7-3fqsi",
"utzLmsWXyCp",
"ht2Y2JgYrM1",
"VkDiWor7Fru",
"K8B_5_k28ves",
"8Uv5539t2GC",
"rnQbSpk5M-6",
"nips_2022_uxc8hDSs_xh",
"nips_2022_uxc8hDSs_xh",
"nips_2022_uxc8hDSs_xh",
"nips_2022_uxc8hDSs_xh"
] |
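The rule-based decoder mentioned in the record above can be as simple as the greedy routine below: visit nodes in decreasing order of the GNN's output probability and keep a node only if it is adjacent to everything kept so far, which guarantees the output is a clique. The probabilities and the threshold-free greedy rule are illustrative assumptions.

```python
import numpy as np

def greedy_clique_decode(A, probs):
    """Visit nodes by decreasing probability; keep those adjacent to all nodes kept so far."""
    clique = []
    for v in np.argsort(-probs):
        if all(A[v, u] for u in clique):
            clique.append(int(v))
    return clique

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
probs = np.array([0.9, 0.8, 0.85, 0.3])   # stand-in for the GNN output probabilities
print(greedy_clique_decode(A, probs))     # [0, 2, 1]: the triangle {0, 1, 2}
```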
nips_2022_R5KjUket6w | CEIP: Combining Explicit and Implicit Priors for Reinforcement Learning with Demonstrations | Although reinforcement learning has found widespread use in dense reward settings, training autonomous agents with sparse rewards remains challenging. To address this difficulty, prior work has shown promising results when using not only task-specific demonstrations but also task-agnostic albeit somewhat related demonstrations. In most cases, the available demonstrations are distilled into an implicit prior, commonly represented via a single deep net. Explicit priors in the form of a database that can be queried have also been shown to lead to encouraging results. To better benefit from available demonstrations, we develop a method to Combine Explicit and Implicit Priors (CEIP). CEIP exploits multiple implicit priors in the form of normalizing flows in parallel to form a single complex prior. Moreover, CEIP uses an effective explicit retrieval and push-forward mechanism to condition the implicit priors. In three challenging environments, we find the proposed CEIP method to improve upon sophisticated state-of-the-art techniques. | Accept | All three reviewers have elected to accept the paper, with accept ratings of 5,6,7.
The reviews were thorough and demonstrated an understanding of the paper, and the authors have addressed many of the suggested edits. I like that the paper tackles the combination of parametric vs. non-parametric learning. One weakness of the paper, from a reproducibility POV (and also mentioned by the authors in limitations), is that there are a lot of moving pieces in the system (RL, non-parametric dataset lookup, one flow per task + 1 additional one for distilling them). It would seem quite annoying to implement correctly, if starting from scratch (but this is just an aesthetic feedback).
Despite the authors saying that the paper is "not too good to be true", I still find the stark contrast between baselines and the proposed method a bit hard to believe. I believe the code (if released) by the authors would reproduce the stated results in the paper, but what I am more skeptical of is that the baselines couldn't be tuned to perform much better. This is important for this specific paper, given the complexity of the method: a practitioner would want to know whether there is a simpler way to implement the improvements proposed here. For example, authors mention "The key of our strong results are due to our combination of 1-layer flows with explicit prior, which are missing in the baselines. SKiLD and FIST have an LSTM-VAE architecture, which is too heavy with few task-specific trajectories compared to 1-layer flows; PARROT includes neither explicit prior nor flow combination."
This makes me wonder whether there isn't some simpler way to implement this, e.g., k-NN retrieval paired with contrastive embeddings + small networks for behavior cloning.
A minor nit: The explicit / implicit priors terminology was also confusing to me, as I typically think of this as "amortized inference + retrieval" or "parametric learning + non-parametric learning".
Recommendation: accept.
| train | [
"VTzSNrtfgq",
"HdPgfwAvW4v",
"Y4E7Tc-9go",
"c5VY3K6UApI",
"CDGlNBwuBaD",
"0BN5xEK3BRz",
"FwB1MYBdHhf",
"7xuy38Di2V",
"duz0mc4-AN1",
"ezoNxEp7D3_",
"dGAZ7EfH08g",
"-ymyYOznsIW",
"8M9qlkCbnNw",
"B0L9oRjZ_FG",
"0zlYybwuPBm",
"VJ-TiRi8O6Y"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification. I will keep my rating as a result of the author-reviewer discussion.",
" Thanks for the response. I have updated my rating accordingly. ",
" We thank all reviewers for their valuable and insightful comments. We have updated the pdf which integrates all advice and all new exper... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"c5VY3K6UApI",
"CDGlNBwuBaD",
"nips_2022_R5KjUket6w",
"FwB1MYBdHhf",
"0BN5xEK3BRz",
"B0L9oRjZ_FG",
"duz0mc4-AN1",
"VJ-TiRi8O6Y",
"VJ-TiRi8O6Y",
"0zlYybwuPBm",
"0zlYybwuPBm",
"B0L9oRjZ_FG",
"B0L9oRjZ_FG",
"nips_2022_R5KjUket6w",
"nips_2022_R5KjUket6w",
"nips_2022_R5KjUket6w"
] |
nips_2022_lMMaNf6oxKM | Recipe for a General, Powerful, Scalable Graph Transformer | We propose a recipe on how to build a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks. Graph Transformers (GTs) have gained popularity in the field of graph representation learning with a variety of recent publications but they lack a common foundation about what constitutes a good positional or structural encoding, and what differentiates them. In this paper, we summarize the different types of encodings with a clearer definition and categorize them as being $\textit{local}$, $\textit{global}$ or $\textit{relative}$. The prior GTs are constrained to small graphs with a few hundred nodes; here we propose the first architecture with a complexity linear in the number of nodes and edges $O(N+E)$ by decoupling the local real-edge aggregation from the fully-connected Transformer. We argue that this decoupling does not negatively affect the expressivity, with our architecture being a universal function approximator on graphs. Our GPS recipe consists of choosing 3 main ingredients: (i) positional/structural encoding, (ii) local message-passing mechanism, and (iii) global attention mechanism. We provide a modular framework $\textit{GraphGPS}$ that supports multiple types of encodings and that provides efficiency and scalability both in small and large graphs. We test our architecture on 16 benchmarks and show highly competitive results in all of them, showcasing the empirical benefits gained by the modularity and the combination of different strategies. | Accept | This paper presents a powerful, general, scalable, and linearly complex graph Transformer. Positional encodings and structural encodings are redefined with local, global, and relative categories, and an attempt has been made to include both local and global attention in a graph Transformer. All of the reviewers acknowledged the novelty of this work, particularly within the context of the domain, and therefore voted for its acceptance. Please take feedback from reviewers into account when preparing the camera-ready version. | train | [
"Z7ikPmJiNuB",
"WORCGPyI-8Z",
"G9ppKtmRqw3",
"jd0mwxwDK9i",
"fDxtu-BLC8U",
"F1mTGUo1aiw",
"pL0DPsUC8Vr",
"WnutjwaE7yO",
"gxzpCWCPgjI",
"Cvvx24DrLMw",
"Y7t6COO-OLv"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 719v,\n\nThank you again for your review and comments! We have tried our best to address your questions and accordingly we revised the paper.\n\nAs we are near the end of the discussion period, we sincerely hope that you could provide us with a feedback on our revision and whether it has addressed y... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"pL0DPsUC8Vr",
"F1mTGUo1aiw",
"jd0mwxwDK9i",
"Y7t6COO-OLv",
"Y7t6COO-OLv",
"Cvvx24DrLMw",
"gxzpCWCPgjI",
"nips_2022_lMMaNf6oxKM",
"nips_2022_lMMaNf6oxKM",
"nips_2022_lMMaNf6oxKM",
"nips_2022_lMMaNf6oxKM"
] |
nips_2022_NjeEfP7e3KZ | Revisiting Heterophily For Graph Neural Networks | Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using graph structures based on the relational inductive bias (homophily assumption). While GNNs have been commonly believed to outperform NNs in real-world tasks, recent work has identified a non-trivial set of datasets where their performance compared to NNs is not satisfactory. Heterophily has been considered the main cause of this empirical observation and numerous works have been put forward to address it. In this paper, we first revisit the widely used homophily metrics and point out that their consideration of only graph-label consistency is a shortcoming. Then, we study heterophily from the perspective of post-aggregation node similarity and define new homophily metrics, which are potentially advantageous compared to existing ones. Based on this investigation, we prove that some harmful cases of heterophily can be effectively addressed by a local diversification operation. Then, we propose the Adaptive Channel Mixing (ACM), a framework to adaptively exploit aggregation, diversification and identity channels to extract richer localized information in each baseline GNN layer. ACM is more powerful than the commonly used uni-channel framework for node classification tasks on heterophilic graphs. When evaluated on 10 benchmark node classification tasks, ACM-augmented baselines consistently achieve significant performance gains, exceeding state-of-the-art GNNs on most tasks without incurring significant computational burden. | Accept | In this submission, the authors revisit the existing homophily metrics and point out the limitations of existing metrics in analyzing the performance of GNNs. Then the authors propose a novel homophily metric that specifies harmful heterophily, and further propose the Adaptive Channel Mixing (ACM) framework to handle the harmful heterophily.
Although there exist some concerns about the novelty of the idea (as pointed out by 9hm2 and Y2Du), overall, the proposed metric and framework are well-motivated, interesting, and effective (as pointed out by icHY, Y2Du, and S2KT), and the experiments are comprehensive and convincing (as pointed out by icHY, Y2Du, and vQE7). For these reasons, I recommend accepting this submission.
This submission can also be improved based on the reviewers' suggestions (such as on writing and typesetting); I hope the authors find the discussion useful and use it to make this submission a better one.
| val | [
"MZkSJe3OQ0S",
"TR2PXzxoVAy",
"ZYDvj584PDZ",
"NENn38_ftnP",
"0UTujAXuPo",
"tPxNZ1zrlF",
"OD-BvmeR1Yg",
"eFqy0OjZfUmS",
"eoAPjWeJ3PK",
"0T3p2SJtJ6X",
"ohrJRr5JTs",
"EwRKKMIkumM",
"YrSSrsZH2vk",
"GggAs0KKnv2",
"vWFdN9AHFED",
"4XVcI25MMH",
"DbZnBG5YmPX",
"6zOWqKyMDIE"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 9hm2,\n\nThanks for spending your time evaluating our paper. Since you have negative rating on our paper, we would like to know if you still have any question left to discuss. If your concerns are addressed, we respectfully request a raise of your rating. We will appreciate that.\n\nAuthors",
" \n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3,
5
] | [
"0T3p2SJtJ6X",
"ZYDvj584PDZ",
"eFqy0OjZfUmS",
"6zOWqKyMDIE",
"tPxNZ1zrlF",
"OD-BvmeR1Yg",
"DbZnBG5YmPX",
"eoAPjWeJ3PK",
"4XVcI25MMH",
"ohrJRr5JTs",
"vWFdN9AHFED",
"YrSSrsZH2vk",
"GggAs0KKnv2",
"nips_2022_NjeEfP7e3KZ",
"nips_2022_NjeEfP7e3KZ",
"nips_2022_NjeEfP7e3KZ",
"nips_2022_NjeEf... |
nips_2022_2vYmjZVT29T | Hamiltonian Latent Operators for content and motion disentanglement in image sequences | We introduce \textit{HALO} -- a deep generative model utilising HAmiltonian Latent Operators to reliably disentangle content and motion information in image sequences. The \textit{content} represents summary statistics of a sequence, and \textit{motion} is a dynamic process that determines how information is expressed in any part of the sequence. By modelling the dynamics as a Hamiltonian motion, important desiderata are ensured: (1) the motion is reversible, (2) the symplectic, volume-preserving structure in phase space means paths are continuous and are not divergent in the latent space. Consequently, the nearness of sequence frames is realised by the nearness of their coordinates in the phase space, which proves valuable for disentanglement and long-term sequence generation. The sequence space is generally comprised of different types of dynamical motions. To ensure long-term separability and allow controlled generation, we associate every motion with a unique Hamiltonian that acts in its respective subspace. We demonstrate the utility of \textit{HALO} by swapping the motion of a pair of sequences, controlled generation, and image rotations. | Accept | This paper proposes a novel type of variational autoencoder, referred to as HALO. The latent space is decomposed into a content space and a motion space, and the main contribution is the proposal to model the motion space using Hamiltonian dynamics. All reviewers agree that the idea of using Hamiltonian dynamics is interesting and novel. One main critique, which the authors agreed with, was that the operator does not contain any stochasticity and that this might be a limitation when applying the idea to model more complex data. Another remark was that the experiments are limited and experiments on less constrained data are missing. A quick look at the baseline methods revealed that they also use the same kind of data sets to evaluate their methods, so this latter concern might be of minor importance.
All in all, the potential positive outcomes of this paper outweigh its current limitations, so we recommend acceptance at this point, while urging the authors to address the remaining concerns in the final version.
| train | [
"VvDIC94tiV4",
"zEsy_ewT0F",
"TEThRlTKZqA",
"PmzjpO3D_Dy",
"lFN4oq4ycv",
"jg-48zG-hvU",
"vYmmj1SQHsM",
"4Hkyzaumbu",
"522HjYJyrpx",
"AZ6bD7BOLBa",
"SB_BBFqS0Jn",
"Xy5b3_kPEjz"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are very thankful for your detailed feedback on the paper and for responding to the rebuttal. We appreciate it a lot. We replied to your reviews to the best of our effort and promise to incorporate feedback in the final version. Due to limited time and computational issues pointed out in our rebuttal, we consi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"PmzjpO3D_Dy",
"Xy5b3_kPEjz",
"AZ6bD7BOLBa",
"lFN4oq4ycv",
"4Hkyzaumbu",
"SB_BBFqS0Jn",
"Xy5b3_kPEjz",
"SB_BBFqS0Jn",
"AZ6bD7BOLBa",
"nips_2022_2vYmjZVT29T",
"nips_2022_2vYmjZVT29T",
"nips_2022_2vYmjZVT29T"
] |
nips_2022_upuYKQiyxa_ | Optimizing Relevance Maps of Vision Transformers Improves Robustness | It has been observed that visual classification models often rely mostly on spurious cues such as the image background, which hurts their robustness to distribution changes.
To alleviate this shortcoming, we propose to monitor the model's relevancy signal and direct the model to base its prediction on the foreground object.
This is done as a finetuning step, involving relatively few samples consisting of pairs of images and their associated foreground masks. Specifically, we encourage the model's relevancy map (i) to assign lower relevance to background regions, (ii) to consider as much information as possible from the foreground, and (iii) we encourage the decisions to have high confidence. When applied to Vision Transformer (ViT) models, a marked improvement in robustness to domain-shifts is observed. Moreover, the foreground masks can be obtained automatically, from a self-supervised variant of the ViT model itself; therefore no additional supervision is required. Our code is available at: https://github.com/hila-chefer/RobustViT. | Accept | Initially, this paper received positive reviews. The rebuttal addresses the remaining concerns. All reviewers feel that the contributions of this work are sufficient to merit its acceptance. The area chair agrees with the reviewers and recommends it be accepted at this conference. | train | [
"GQjSo_9NJJ8",
"Ebst6DGCMB3",
"NrdACp8KqaY",
"-hjJkg92TTE",
"U4SEw6We6U9",
"MFmddy_DvHa",
"OQR48A2JnZ",
"Xs2vHRAosS-",
"Kx1M1YXIIzL",
"FsFA3lL8ei1",
"8WqioCggj9t",
"NKfGWmTJ0jD",
"U4C0GC2XR5w",
"-1brURHx6r0",
"sXv9iTSb2PM",
"U0e_CX4xBQ",
"l4I5IgaCUxp",
"ZTjsOyDf4fa",
"t2r6oVx4fI"... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for an extended discussion. \n\n1. Whether supervising GAE faithfully changes the inner mechanisms of the Transformer\n\nAfter going through the references [8] on GAE and the ICML 2022 paper [35] evaluating multiple explanation methods on attention-based models, I'm convinced that GAE is indeed ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"Ebst6DGCMB3",
"NrdACp8KqaY",
"OQR48A2JnZ",
"U4SEw6We6U9",
"Xs2vHRAosS-",
"U4C0GC2XR5w",
"Kx1M1YXIIzL",
"ZTjsOyDf4fa",
"FsFA3lL8ei1",
"t2r6oVx4fI",
"NKfGWmTJ0jD",
"ZTjsOyDf4fa",
"l4I5IgaCUxp",
"sXv9iTSb2PM",
"U0e_CX4xBQ",
"nips_2022_upuYKQiyxa_",
"nips_2022_upuYKQiyxa_",
"nips_2022... |
nips_2022_eN2lQxjWL05 | Decision-Focused Learning without Decision-Making: Learning Locally Optimized Decision Losses | Decision-Focused Learning (DFL) is a paradigm for tailoring a predictive model to a downstream optimization task that uses its predictions in order to perform better \textit{on that specific task}. The main technical challenge associated with DFL is that it requires being able to differentiate through the optimization problem, which is difficult due to discontinuous solutions and other challenges. Past work has largely gotten around this issue by \textit{handcrafting} task-specific surrogates to the original optimization problem that provide informative gradients when differentiated through. However, the need to handcraft surrogates for each new task limits the usability of DFL. In addition, there are often no guarantees about the convexity of the resulting surrogates and, as a result, training a predictive model using them can lead to inferior local optima. In this paper, we do away with surrogates altogether and instead \textit{learn} loss functions that capture task-specific information. To the best of our knowledge, ours is the first approach that entirely replaces the optimization component of decision-focused learning with a loss that is automatically learned. Our approach (a) only requires access to a black-box oracle that can solve the optimization problem and is thus \textit{generalizable}, and (b) can be \textit{convex by construction} and so can be easily optimized over. We evaluate our approach on three resource allocation problems from the literature and find that our approach outperforms learning without taking into account task-structure in all three domains, and even hand-crafted surrogates from the literature. | Accept | This paper considers the problem of making decision-focused learning (DFL) more usable for both researchers and practitioners. It proposes a novel approach referred to as locally-optimized decision losses (LODL), which learns the parameters of surrogate intermediate losses to match the decision loss. Experimental results clearly demonstrate that the LODL approach is able to learn effective surrogates for the considered tasks.
All the reviewers appreciated the LODL idea, but also raised a number of concerns. There was a lot of discussion, and the authors have addressed most of the concerns and also acknowledged some limitations pointed out by some reviewers. One expert reviewer who deeply engaged with the authors to both clarify and improve the paper was willing to strongly champion the paper. In their words: "It's a brilliant idea that will be foundational in the space and will be engaging and thought-provoking at the conference." A couple of reviewers raised a few points beyond the author-reviewer discussion which the authors could not see/respond to. However, I think the overall strengths of the paper outweigh these concerns.
Therefore, I recommend accepting the paper. I strongly encourage the authors to improve the paper in terms of clarity, exposition, and additional experimental results to reflect the discussion with reviewers. | train | [
"2owQcx51Kvh",
"apTD57W2vRm",
"I1tcvBOC8rQ",
"Uk-KbbW9RB-",
"eizXYGZ1aaG",
"l8owqhJhodu",
"eLMqnbR6OR1",
"d10zsF4V6E",
"RBxoXKL1mJn",
"ef8CHnc-IyK",
"7XbTNM8splV",
"udvM9gpWgi",
"tZZHvz4aU3J",
"h87cvf_HOts",
"HPeR1pLd-kp",
"Xz2GmevwoTH",
"fXQUc2hpLJk",
"fTYNTAYphf7",
"h3kg09N_nYk... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_re... | [
" Thank you for your response! I appreciate that you decide to include scalability results in the camera-ready and promise to clarify the issues I mentioned. I suggest the authors can also discuss more scalability in the camera-ready. Incorporating information in the common response above will be helpful. ",
" I ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
4
] | [
"HPeR1pLd-kp",
"I1tcvBOC8rQ",
"Uk-KbbW9RB-",
"eizXYGZ1aaG",
"l8owqhJhodu",
"eLMqnbR6OR1",
"d10zsF4V6E",
"RBxoXKL1mJn",
"ef8CHnc-IyK",
"7XbTNM8splV",
"udvM9gpWgi",
"tZZHvz4aU3J",
"Xz2GmevwoTH",
"h3kg09N_nYk",
"pxgQ3TLdWR",
"rcOMSCpWM-",
"5CFbirRWnup",
"nips_2022_eN2lQxjWL05",
"nip... |
nips_2022_RuNhbvX9o9S | Learning General World Models in a Handful of Reward-Free Deployments | Building generally capable agents is a grand challenge for deep reinforcement learning (RL). To approach this challenge practically, we outline two key desiderata: 1) to facilitate generalization, exploration should be task agnostic; 2) to facilitate scalability, exploration policies should collect large quantities of data without costly centralized retraining. Combining these two properties, we introduce the reward-free deployment efficiency setting, a new paradigm for RL research. We then present CASCADE, a novel approach for self-supervised exploration in this new setting. CASCADE seeks to learn a world model by collecting data with a population of agents, using an information theoretic objective inspired by Bayesian Active Learning. CASCADE achieves this by specifically maximizing the diversity of trajectories sampled by the population through a novel cascading objective. We provide theoretical intuition for CASCADE which we show in a tabular setting improves upon naïve approaches that do not account for population diversity. We then demonstrate that CASCADE collects diverse task-agnostic datasets and learns agents that generalize zero-shot to novel, unseen downstream tasks on Atari, MiniGrid, Crafter and the DM Control Suite. Code and videos are available at https://ycxuyingchen.github.io/cascade/ | Accept | This paper proposes a method to learn world models without rewards, using a collection of agents that explore an environment. The key idea is to maximize diversity between the trajectories collected by the agents to obtain a good world model, with an emphasis on being as efficient as possible. The authors present some theoretical justification for using a population of agents and their empirical results on several datasets provide a good demonstration of the method. The reviewers all agree this is an interesting and important setting and the author response significantly improves the paper on aspects of clarity and empirical results, based on the reviewer concerns. Overall, I believe this work provides interesting ideas and will encourage more work in this direction in the future. I encourage the authors to revise their paper taking the reviewer suggestions into account and add in the new experiments to make it stronger. | train | [
"ue6iCgBkEU",
"WCJsWhlmMPi",
"swLWDA0UhV",
"UA1K40tRCL",
"ITmuKmq8RQI",
"45CkpvW9Qsn",
"BpbH-UHuNwu",
"x_qN4mPlRvN",
"RP74IK81ykJ",
"M-TanDVG7f",
"ZAVxBnnyv0G",
"blF009OGmuQS",
"nBarta-SX0K",
"AW4K0gjkyRi",
"Y-ChXaRtMl0h",
"5D4Mm1Z8SDH",
"_wgHiTE2T9",
"o9FsPC3xnh",
"qVLmXF8jjjo",... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" Hi Reviewer zCLK,\n\nThank you for coming back, and for increasing your score to a \"weak accept\". It seems your only remaining concern is regarding the use of rewarding episodes as a metric for evaluating exploration. We want to reiterate that it is just being used as a proxy for depth of exploration, which we ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"WCJsWhlmMPi",
"45CkpvW9Qsn",
"RP74IK81ykJ",
"ITmuKmq8RQI",
"blF009OGmuQS",
"jJTTnPmtkYE",
"nips_2022_RuNhbvX9o9S",
"ZAVxBnnyv0G",
"5D4Mm1Z8SDH",
"ZAVxBnnyv0G",
"o9FsPC3xnh",
"nBarta-SX0K",
"ASx3SgtsbG",
"Y-ChXaRtMl0h",
"nips_2022_RuNhbvX9o9S",
"_wgHiTE2T9",
"96JD7H-SwU1",
"qVLmXF8... |
nips_2022_3-3XMModtrx | Is a Modular Architecture Enough? | Inspired from human cognition, machine learning systems are gradually revealing advantages of sparser and more modular architectures. Recent work demonstrates that not only do some modular architectures generalize well, but they also lead to better out of distribution generalization, scaling properties, learning speed, and interpretability. A key intuition behind the success of such systems is that the data generating system for most real-world settings is considered to consist of sparse modular connections, and endowing models with similar inductive biases will be helpful. However, the field has been lacking in a rigorous quantitative assessment of such systems because these real-world data distributions are complex and unknown. In this work, we provide a thorough assessment of common modular architectures, through the lens of simple and known modular data distributions. We highlight the benefits of modularity and sparsity and reveal insights on the challenges faced while optimizing modular systems. In doing so, we propose evaluation metrics that highlight the benefits of modularity, the regimes in which these benefits are substantial, as well as the sub-optimality of current end-to-end learned modular systems as opposed to their claimed potential. | Accept | This study investigates modular architectures, their properties, and their effectiveness in a class of synthetic yet informative scenarios. The reviewers unanimously recommend this paper for acceptance, some of them with high praise, and I enjoyed it as well: I suspect it will be read widely and have a lasting impact on our thinking about modularity. | train | [
"GPoKbSvWDz",
"rVwLFfr0H8W",
"TN4WSe8uXl8",
"OuWRrLu_nJS",
"bKGw0-hMRe8",
"3Jt3tGS5QSG",
"6jJGQigVlXj",
"Eq_-dEyaJv6",
"j5ax1FNU6Ce",
"eyhEivyrbR",
"dGF0yxJV5q-",
"RGh9wPKxDg2",
"WXsy9qgG71J",
"byFbhK9cF12",
"IEF5IW6-XNk",
"I5_kIOTV4rr",
"PMssHUCyC5B",
"5K7GuKyQmws",
"lGSjptQ_p7"... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_... | [
" Thanks so much for your rebuttal! I understand your points, and believe this paper to have merits - I think it should be accepted, and my score currently reflects that! \n\nHoping that the other reviewers can similarly see the merits of this work!",
" We thank the reviewer for their time and response and are gr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"bKGw0-hMRe8",
"TN4WSe8uXl8",
"j5ax1FNU6Ce",
"WXsy9qgG71J",
"RGh9wPKxDg2",
"j5ax1FNU6Ce",
"Eq_-dEyaJv6",
"dGF0yxJV5q-",
"eyhEivyrbR",
"mvCLvBUsdHz",
"ALqfTKrmN2D",
"lGSjptQ_p7",
"5K7GuKyQmws",
"IEF5IW6-XNk",
"nips_2022_3-3XMModtrx",
"PMssHUCyC5B",
"nips_2022_3-3XMModtrx",
"nips_202... |
nips_2022_68EuccCtO5i | Differentially Private Model Compression | Recent papers have shown that large pre-trained language models (LLMs) such as BERT and GPT-2 can be fine-tuned on private data to achieve performance comparable to non-private models for many downstream Natural Language Processing (NLP) tasks while simultaneously guaranteeing differential privacy. The inference cost of these models -- which consist of hundreds of millions of parameters -- however, can be prohibitively large. Hence, often in practice, LLMs are compressed before they are deployed in specific applications. In this paper, we initiate the study of differentially private model compression and propose frameworks for achieving 50% sparsity levels while maintaining nearly full performance. We demonstrate these ideas on standard GLUE benchmarks using BERT models, setting benchmarks for future research on this topic. | Accept | This work proposes and empirically evaluates algorithms for compressing and fine-tuning a large model for a downstream task, while satisfying DP for the downstream task training data. The setup is the following: we have a large pre-trained language model such as BERT. We would like to fine-tune it for a task using a dataset D, as well as compress it to a smaller model. The paper studies algorithms that are DP with respect to D and do fine-tuning+compression. The authors propose and evaluate different strategies for this problem and compare the privacy-utility tradeoffs.
The reviewers found the empirical evaluation to be thorough. Some of the other concerns raised by the reviewers have been addressed to my (and in most cases, their) satisfaction.
I think the problem studied by the paper is timely and important. I view the paper largely as a solid empirical study of natural algorithms for this problem. While the paper can be improved as discussed in the reviews and rebuttal, I believe it brings attention to an important problem and makes solid progress on it. I would therefore recommend acceptance. | train | [
"jJfhpv_pf0X",
"JHTD5yV_iHg",
"yMNwKVRfgq",
"ZWGZAEa9xcF",
"76WUyEhFgZC",
"AUJlrJ2HuP",
"4IsA7kinovg",
"yF8J6SFbPTl",
"nJAkJ_m5l9w",
"wEf-N-k5xC6",
"JIXOX-cBW_is",
"uGejnq1vRox",
"4S815jBCJjY",
"W3sSdPAmGJ",
"e3fUoSTkCwL",
"4J6vRMl4iV",
"L7u1VHSTEqO",
"SaGpF500-N8",
"H1vv3AagDt",... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" We thank you for participating in the discussions. We will include experiments with epsilon = 1 in the future revisions of the paper. We will add more discussions on pros/cons of our approach DistillBERT (or any pre-trained compressed models) and elaborate on where our work is applicable. We appreciate your tim... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"76WUyEhFgZC",
"ZWGZAEa9xcF",
"4IsA7kinovg",
"W3sSdPAmGJ",
"AUJlrJ2HuP",
"yF8J6SFbPTl",
"nJAkJ_m5l9w",
"H1vv3AagDt",
"e3fUoSTkCwL",
"JIXOX-cBW_is",
"L7u1VHSTEqO",
"nips_2022_68EuccCtO5i",
"nips_2022_68EuccCtO5i",
"jUYHYJk5N9L",
"4J6vRMl4iV",
"QePr2azemVc",
"SaGpF500-N8",
"30wS-tc1I... |
nips_2022_wwyiEyK-G5D | REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering | This paper revisits visual representation in knowledge-based visual question answering (VQA) and demonstrates that using regional information in a better way can significantly improve the performance. While visual representation is extensively studied in traditional VQA, it is under-explored in knowledge-based VQA even though these two tasks share the common spirit, i.e., rely on visual input to answer the question. Specifically, we observe in most state-of-the-art knowledge-based VQA methods: 1) visual features are extracted either from the whole image or in a sliding window manner for retrieving knowledge, and the important relationship within/among object regions is neglected; 2) visual features are not well utilized in the final answering model, which is counter-intuitive to some extent. Based on these observations, we propose a new knowledge-based VQA method REVIVE, which tries to utilize the explicit information of object regions not only in the knowledge retrieval stage but also in the answering model. The key motivation is that object regions and their inherent relationships are important for knowledge-based VQA. We perform extensive experiments on the standard OK-VQA dataset and achieve new state-of-the-art performance, i.e., 58.0 accuracy, surpassing the previous state-of-the-art method by a large margin (+3.6%). We also conduct detailed analysis and show the necessity of regional information in different framework components for knowledge-based VQA. Code is publicly available at https://github.com/yzleroy/REVIVE. | Accept | The paper incorporates regional features to better retrieve relevant knowledge and makes direct use of the visual signal in answer prediction, whereas the previous SOTA methods simply rely on the retrieved knowledge for the final prediction. The proposed method outperforms SOTA on OK-VQA by a large margin, effectively showing the efficacy of the direct use of visual information in the answer prediction. I agree with reviewer 5EMT that showing that the information contained in the image is important for answering knowledge-based visual questions is an important contribution to the field, as most of the attention was put on the language and knowledge signal.
The author rebuttal also resolved most of the reviewers' concerns and questions, and led the reviewers to reach a consensus toward acceptance. | train | [
"fg6uLQgwIc",
"rTwYoamIfPv",
"COosSKOeia8",
"lhbnKiYqRp",
"SCiG3BBEKWz",
"QrBXFdWHl0l",
"w7wPOxtyf1K",
"6Tth2lzVXmk",
"n922P0utmR_",
"9TpWi19Sg8h",
"3gtcjzqkPSt",
"ZzZfeN79eul",
"duA_wKEoHWU",
"84ibeUA0RBb",
"HfmTnvDhPdST",
"8IHy-eXRQ39",
"H7lKXbcBSh_",
"TScHQ1ueEom",
"D3bIQfmZAj... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 5EMT, thanks for your effort again! We are happy that our rebuttal well addressed your concerns!\n\n",
" Dear reviewer uNM2, thanks for your effort again! We are happy that our rebuttal well addressed your concerns!",
" Thank you for the detailed author response. After reading all the reviews an... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"COosSKOeia8",
"lhbnKiYqRp",
"QrBXFdWHl0l",
"duA_wKEoHWU",
"6Tth2lzVXmk",
"84ibeUA0RBb",
"9TpWi19Sg8h",
"n922P0utmR_",
"8IHy-eXRQ39",
"duA_wKEoHWU",
"H7lKXbcBSh_",
"nips_2022_wwyiEyK-G5D",
"H7lKXbcBSh_",
"TScHQ1ueEom",
"D3bIQfmZAjK",
"D3bIQfmZAjK",
"nips_2022_wwyiEyK-G5D",
"nips_20... |
nips_2022__keb_XuP5oI | Generative Neural Articulated Radiance Fields | Unsupervised learning of 3D-aware generative adversarial networks (GANs) using only collections of single-view 2D photographs has very recently made much progress. These 3D GANs, however, have not been demonstrated for human bodies and the generated radiance fields of existing frameworks are not directly editable, limiting their applicability in downstream tasks. We propose a solution to these challenges by developing a 3D GAN framework that learns to generate radiance fields of human bodies or faces in a canonical pose and warp them using an explicit deformation field into a desired body pose or facial expression. Using our framework, we demonstrate the first high-quality radiance field generation results for human bodies. Moreover, we show that our deformation-aware training procedure significantly improves the quality of generated bodies or faces when editing their poses or facial expressions compared to a 3D GAN that is not trained with explicit deformations. | Accept | The reviewers all recognize the quality of the work, particularly its technical soundness and the quality of the experimental setting, and there is a clear consensus for acceptance. I ask the authors to address the reviewers' concerns, particularly to clear up any confusion in the manuscript and to provide a better analysis of the synthesis results.
| train | [
"bsC3t18VNcF",
"Jbrbh3nbrXH",
"lQQqgVkbJx6",
"COSGoRk_ohr",
"2H-r48VbKV_",
"-e5OmiEMLHf"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank Reviewer UgRe for their time spent reviewing and commenting on our work. We appreciate the note that integrating advanced radiance field implementation architectures, generative models, and articulation is not trivial and is an important contribution for future applications.\n\n**Lack of technical contri... | [
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
5,
5,
4
] | [
"-e5OmiEMLHf",
"2H-r48VbKV_",
"COSGoRk_ohr",
"nips_2022__keb_XuP5oI",
"nips_2022__keb_XuP5oI",
"nips_2022__keb_XuP5oI"
] |
nips_2022_nYrFghNHzz | Learning Individualized Treatment Rules with Many Treatments: A Supervised Clustering Approach Using Adaptive Fusion | Learning an optimal Individualized Treatment Rule (ITR) is a very important problem in precision medicine. This paper is concerned with the challenge when the number of treatment arms is large, and some groups of treatments in the large treatment space may work similarly for the patients. Motivated by the recent development of supervised clustering, we propose a novel adaptive fusion based method to cluster the treatments with similar treatment effects together and estimate the optimal ITR simultaneously through a single convex optimization. The problem is formulated as balancing \textit{loss}$+$\textit{penalty} terms with a tuning parameter, which allows the entire solution path of the treatment clustering process to be clearly visualized hierarchically. For computation, we propose an efficient algorithm based on accelerated proximal gradient and further conduct a novel group-lasso based algorithm for variable selection to boost the performance. Moreover, we demonstrate the theoretical guarantee of recovering the underlying true clustering structure of the treatments for our method. Finally, we demonstrate the superior performance of our method via both simulations and a real data application on cancer treatment, which may assist the decision making process for doctors. | Accept | This paper proposes a method for learning the optimal individualized treatment rule (ITR). The proposed approach uses a fusion penalty term that encourages clustering between treatments. A dendrogram of the treatments is generated by running the proposed algorithm using different tuning parameters as a solution path. The effectiveness of the proposed approach is empirically validated on synthetic and real data. The paper is well written and technically sound. A thorough analysis/interpretation of the resulting model/results will further improve the paper. | train | [
"gcCAE29yv7U",
"3K07sqDpaP",
"3FsS3ZCMUk",
"JAbYHmVFiIj",
"pjtNrbvRTFX",
"MTfm1qNn6IL",
"4cv__hfF139",
"Avcd5ZRyI4J",
"h0tDYJjkh6V",
"vGgzTaa1pZi",
"P09UwO0zaG9",
"6PRfNw-4rRG"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your response and acknowledgement of our clarifications for the paper. Thanks for your further suggestions about the group lasso step. As you suggested, to better clarify the group lasso step, we will add some results in the supplements.",
" Thank you for your comments clarifying the PDX data and ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"3K07sqDpaP",
"4cv__hfF139",
"JAbYHmVFiIj",
"MTfm1qNn6IL",
"6PRfNw-4rRG",
"P09UwO0zaG9",
"vGgzTaa1pZi",
"h0tDYJjkh6V",
"nips_2022_nYrFghNHzz",
"nips_2022_nYrFghNHzz",
"nips_2022_nYrFghNHzz",
"nips_2022_nYrFghNHzz"
] |
nips_2022_sipwrPCrIS | Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks | We analyze feature learning in infinite-width neural networks trained with gradient flow through a self-consistent dynamical field theory. We construct a collection of deterministic dynamical order parameters which are inner-product kernels for hidden unit activations and gradients in each layer at pairs of time points, providing a reduced description of network activity through training. These kernel order parameters collectively define the hidden layer activation distribution, the evolution of the neural tangent kernel, and consequently output predictions. We show that the field theory derivation recovers the recursive stochastic process of infinite-width feature learning networks obtained from Yang & Hu with Tensor Programs. For deep linear networks, these kernels satisfy a set of algebraic matrix equations. For nonlinear networks, we provide an alternating sampling procedure to self-consistently solve for the kernel order parameters. We provide comparisons of the self-consistent solution to various approximation schemes including the static NTK approximation, gradient independence assumption, and leading order perturbation theory, showing that each of these approximations can break down in regimes where general self-consistent solutions still provide an accurate description. Lastly, we provide experiments in more realistic settings which demonstrate that the loss and kernel dynamics of CNNs at fixed feature learning strength is preserved across different widths on a CIFAR classification task. | Accept | This paper analyzes a dynamical mean field theory that describes feature learning via gradient flow for certain infinite-width neural networks. Self-consistent equations for the order parameters characterizing the dynamics are presented and methods for approximate numerical evaluation are discussed. Overall, this is a solid paper that advances the theory and understanding of feature learning for neural networks of large width and the reviewers and I unanimously support acceptance.
| train | [
"CqgRDH3II06",
"ZytGDh-hIIQ",
"nmoQp2HdAf3",
"-tJyXXATcHW",
"4F7SlhCmcAV",
"fN2B_AKwdnQ8",
"9C89tFueSO6l",
"SyKL5dbsmT9D",
"eZzX_uTZ2n2",
"hfK6alENLU7i",
"VOnvByZFV5T",
"kt3ed3BDw8C",
"ge5iRWS860y",
"PFDy_5LUVp",
"NFRhnntgIlH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed responses. I appreciate that the authors make an effect to further address my concerns. Thus I decide to keep my score and recommend acceptance.",
" I'm grateful to the authors for their detailed response to all my questions. I still feel confident that the paper should be accepted a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"eZzX_uTZ2n2",
"9C89tFueSO6l",
"NFRhnntgIlH",
"4F7SlhCmcAV",
"fN2B_AKwdnQ8",
"NFRhnntgIlH",
"SyKL5dbsmT9D",
"VOnvByZFV5T",
"hfK6alENLU7i",
"PFDy_5LUVp",
"ge5iRWS860y",
"nips_2022_sipwrPCrIS",
"nips_2022_sipwrPCrIS",
"nips_2022_sipwrPCrIS",
"nips_2022_sipwrPCrIS"
] |
nips_2022_GFgjnk2Q-ju | Parametrically Retargetable Decision-Makers Tend To Seek Power | If capable AI agents are generally incentivized to seek power in service of the objectives we specify for them, then these systems will pose enormous risks, in addition to enormous benefits. In fully observable environments, most reward functions have an optimal policy which seeks power by keeping options open and staying alive. However, the real world is neither fully observable, nor must trained agents be even approximately reward-optimal. We consider a range of models of AI decision-making, from optimal, to random, to choices informed by learning and interacting with an environment. We discover that many decision-making functions are retargetable, and that retargetability is sufficient to cause power-seeking tendencies. Our functional criterion is simple and broad. We show that a range of qualitatively dissimilar decision-making procedures incentivize agents to seek power. We demonstrate the flexibility of our results by reasoning about learned policy incentives in Montezuma's Revenge. These results suggest a safety risk: Eventually, retargetable training procedures may train real-world agents which seek power over humans. | Accept | The paper studies an alignment problem - that of agents seeking power - and extends previous work (Turner, 2021), which showed that optimal policies seek power, to demonstrate more generally that parametrically retargetable policies (policies whose 'target' can be changed by a simple change of the agent's hyperparameters) also tend to seek power. The problem is interesting and under-studied, and all reviewers agreed that the work was 'original, non-trivial and significant'. Most concerns were regarding presentation, which could be at times vague and imprecise (in the mathematical parts) or unintuitive (in the informal parts). The authors presented a plan to significantly address the clarity of the paper, which alleviated many of the reviewers' concerns. Please do ensure that the final version includes these improvements. | val | [
"G05hans_6VP",
"slJUEDpWMy",
"mdC2TNY4jTb",
"90k3DpvIN6t",
"77SNZSYqtom",
"DFCtkICsWWY",
"N9QKPkIEPhX",
"snEWbolGK4T",
"eu86zhKrGkw",
"uRhBaDdOQaf"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for gathering some of the feedback on clarity. Here are some details concerning our current plan:\n> \"Overall, the writing is a bit light on “scaffolding”\" \n\nIn the beginning of each section, we will add signposting and scaffolding. For example, at the beginning of section 3, we will write: \n\n\"Se... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"slJUEDpWMy",
"nips_2022_GFgjnk2Q-ju",
"uRhBaDdOQaf",
"snEWbolGK4T",
"eu86zhKrGkw",
"N9QKPkIEPhX",
"nips_2022_GFgjnk2Q-ju",
"nips_2022_GFgjnk2Q-ju",
"nips_2022_GFgjnk2Q-ju",
"nips_2022_GFgjnk2Q-ju"
] |
nips_2022_Z6BFQqzwuS4 | Bayesian Persuasion for Algorithmic Recourse | When subjected to automated decision-making, decision subjects may strategically modify their observable features in ways they believe will maximize their chances of receiving a favorable decision. In many practical situations, the underlying assessment rule is deliberately kept secret to avoid gaming and maintain competitive advantage. The resulting opacity forces the decision subjects to rely on incomplete information when making strategic feature modifications. We capture such settings as a game of Bayesian persuasion, in which the decision maker offers a form of recourse to the decision subject by providing them with an action recommendation (or signal) to incentivize them to modify their features in desirable ways. We show that when using persuasion, the decision maker and decision subject are never worse off in expectation, while the decision maker can be significantly better off. While the decision maker’s problem of finding the optimal Bayesian incentive compatible (BIC) signaling policy takes the form of optimization over infinitely many variables, we show that this optimization can be cast as a linear program over finitely-many regions of the space of possible assessment rules. While this reformulation simplifies the problem dramatically, solving the linear program requires reasoning about exponentially-many variables, even in relatively simple cases. Motivated by this observation, we provide a polynomial-time approximation scheme that recovers a near-optimal signaling policy. Finally, our numerical simulations on semi-synthetic data empirically demonstrate the benefits of using persuasion in the algorithmic recourse setting. | Accept | The paper formulates the problem of algorithmic recourse under partial transparency as a Bayesian persuasion game. It is shown that the decision-maker can design an incentive-compatible action signaling strategy with guarantees that both the decision-maker and decision-subjects are not worse off in terms of expected utility. The results provide several insights into the complexity of computing an optimal signaling strategy; moreover, a polynomial-time approximation algorithm is provided to compute a near-optimal signaling strategy. The reviewers acknowledged that the paper considers an important problem setting and provides new technical insights into algorithmic recourse using the framework of Bayesian persuasion. However, the reviewers also raised several concerns and questions in their initial reviews. We want to thank the authors for their detailed responses and for actively engaging with the reviewers during the discussion phase. The reviewers appreciated the responses, which helped in answering their key questions. The reviewers have an overall positive assessment of the paper, and there is a consensus for acceptance. The reviewers have provided detailed feedback in their reviews, and we strongly encourage the authors to incorporate this feedback when preparing the final version of the paper. | train | [
"bdaSvZFxHGz",
"E6upms1jdM_",
"ccG87ebyiv",
"eWL4r6_s-nt",
"sCZLODtXMUD",
"MWV8K2FV7wz",
"sTxT9JKevRp",
"G_z9yxFcHhz",
"T0tHVWXIOT0",
"aQd1HOVxSgW",
"EYagEYKJFUw",
"dk4HwsacppA",
"o76UwI-aDky5",
"5Sz35CQfFMk",
"uRPVNdAD9lu",
"jJ7nPtX0LPO",
"mbqQE1zg0_F",
"YSNUauR2XXy",
"nMuA8WRQ1... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" I think there is a distinction. In your model, the sender chooses to not disclose full information not because they are not allowed to but because they are better off not doing that. This are no restrictions on how much information the decision-maker can disclose in your model, and the case with such restrictions... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"ccG87ebyiv",
"dk4HwsacppA",
"sCZLODtXMUD",
"MWV8K2FV7wz",
"5Sz35CQfFMk",
"uRPVNdAD9lu",
"mbqQE1zg0_F",
"T0tHVWXIOT0",
"aQd1HOVxSgW",
"EYagEYKJFUw",
"nMuA8WRQ1R_",
"o76UwI-aDky5",
"YSNUauR2XXy",
"mbqQE1zg0_F",
"jJ7nPtX0LPO",
"nips_2022_Z6BFQqzwuS4",
"nips_2022_Z6BFQqzwuS4",
"nips_2... |
nips_2022_0IywQ8uxJx | Graph Neural Networks as Gradient Flows | Dynamical systems minimizing an energy are ubiquitous in geometry and physics. We propose a gradient flow framework for GNNs where the equations follow the direction of steepest descent of a learnable energy. This approach allows us to analyse the GNN evolution from a multi-particle perspective as learning attractive and repulsive forces in feature space via the positive and negative eigenvalues of a symmetric `channel-mixing' matrix. We perform spectral analysis of the solutions and conclude that gradient flow graph convolutional models can induce a dynamics dominated by the graph high frequencies, which is desirable for heterophilic datasets. We also describe structural constraints on common GNN architectures allowing us to interpret them as gradient flows. We perform thorough ablation studies corroborating our theoretical analysis and show competitive performance of simple and lightweight models on real-world homophilic and heterophilic datasets. | Reject | The authors present a graph neural network for heterophilic data using gradient flows. The proposed architecture is quite simple: large sections of the architecture are fully linear dynamical systems rather than neural networks, and still achieve roughly SotA results on standard graph learning benchmarks. There was a significant amount of disagreement between the reviewers. Some seemed to think the strength of mostly linear methods meant that the benchmarks were too easy, but these are standard graph neural network benchmarks. A simple model performing well is not a negative, and can often be useful for puncturing hype (e.g. https://arxiv.org/abs/2206.13211). Simple architectures can also be useful for providing analytic insights which might get obscured in more complex models. Some reviewers seemed concerned about the scaling of certain tools (e.g. graph Laplacian eigenvectors), but these tools are only used for analysis, not for training. Nevertheless, I feel that there were enough general concerns around the paper that I have a difficult time recommending acceptance. Even if the purpose of the paper is primarily to drive analytic insights rather than achieve SotA results on big benchmarks, I would recommend that the authors show how these analytic insights can be used to improve models on big datasets to strengthen the paper. | train | [
"lBW95jdWgB",
"3is4MgxD7Nj",
"tSTeYcDuGRJ",
"-jq6jCQJF06",
"qoUYZ4bc3rTm",
"kku2Q_X6hov",
"O675c48meXv",
"a71FXRX1YOJ",
"J7aG-6fvpaU",
"49yUH8Eq2A0",
"i1txvnK97-p",
"uJnCGKuejv",
"7aZKQESKNqX-",
"mXjLJIdiNTQ",
"t6WbmVZAMSO",
"l0_mNZ4v0yI",
"K9JmBtEsalG",
"HgeILay1TX9",
"lgkAtxhil... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",... | [
" Thank you for the response and finding our contribution significant. Some final points:\n\n- The analysis of non-linear activations in Proposition 3.2 and the whole Section E in the SM is quite novel in the GNN literature and in fact as you acknowledged it is much more common to have theoretical analysis restrict... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
4
] | [
"tSTeYcDuGRJ",
"-jq6jCQJF06",
"a71FXRX1YOJ",
"7aZKQESKNqX-",
"kku2Q_X6hov",
"uJnCGKuejv",
"HgeILay1TX9",
"J7aG-6fvpaU",
"49yUH8Eq2A0",
"F-Oh_ruEYzA",
"R2n3t2ffnkS",
"zkVQHhU_NoS",
"mXjLJIdiNTQ",
"t6WbmVZAMSO",
"l0_mNZ4v0yI",
"K9JmBtEsalG",
"jxDn0fiTC3U",
"lgkAtxhil1N",
"6ayQowmlD... |
nips_2022_kRgOlgFW9aP | Thompson Sampling Efficiently Learns to Control Diffusion Processes | Diffusion processes that evolve according to linear stochastic differential equations are an important family of continuous-time dynamic decision-making models. Optimal policies are well-studied for them, under full certainty about the drift matrices. However, little is known about data-driven control of diffusion processes with uncertain drift matrices as conventional discrete-time analysis techniques are not applicable. In addition, while the task can be viewed as a reinforcement learning problem involving exploration and exploitation trade-off, ensuring system stability is a fundamental component of designing optimal policies. We establish that the popular Thompson sampling algorithm learns optimal actions fast, incurring only a square-root of time regret, and also stabilizes the system in a short time period. To the best of our knowledge, this is the first such result for Thompson sampling in a diffusion process control problem. We validate our theoretical results through empirical simulations with real matrices. Moreover, we observe that Thompson sampling significantly improves (worst-case) regret, compared to the state-of-the-art algorithms, suggesting Thompson sampling explores in a more guarded fashion. Our theoretical analysis involves characterization of a certain \emph{optimality manifold} that ties the local geometry of the drift parameters to the optimal control of the diffusion process. We expect this technique to be of broader interest. | Accept | This paper proposes and analyzes a Thompson-Sampling based method to learn to control continuous-time linear systems when the costs are quadratic. The authors first propose an algorithm that guarantees stabilization of the diffusion process and then give a second, Thompson-Sampling-based method with regret bounds and estimation rates for the parameters of the linear system.
The reviews for this paper were generally positive and found this work to positively contribute to our understanding of linear control, though several reviews noted the similarities with reference [2] and a general lack of contextualization of the work in the general adaptive and Bayesian control literatures. Nevertheless, the results were sound and extended our understanding of learning and control in the LQ setting, and the paper was well written and easy to follow. | train | [
"Amahqzs7Yb_",
"FMp4LRUhR3",
"1HTqNnGBbqF",
"nf_hXTfFNp9",
"8AwzIScQmq",
"RFL5JD63BZ",
"kUfI8F-O0j",
"4mrxfdFoWO"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the feedback. The authors will be happy to provide point-by-point explanations to all questions of the reviewer.\n",
" Thanks for the deep conceptual and technical comments the reviewer correctly provided. The authors appreciate the comprehensive review and the constructive comments, are grateful tha... | [
-1,
-1,
-1,
-1,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
1,
4,
4,
4
] | [
"8AwzIScQmq",
"4mrxfdFoWO",
"kUfI8F-O0j",
"RFL5JD63BZ",
"nips_2022_kRgOlgFW9aP",
"nips_2022_kRgOlgFW9aP",
"nips_2022_kRgOlgFW9aP",
"nips_2022_kRgOlgFW9aP"
] |
nips_2022_lJHkZbX6Ic1 | Is this the Right Neighborhood? Accurate and Query Efficient Model Agnostic Explanations | There have been multiple works that try to ascertain explanations for decisions of black box models on particular inputs by perturbing the input or by sampling around it, creating a neighborhood and then fitting a sparse (linear) model (e.g. LIME). Many of these methods are unstable and so more recent work tries to find stable or robust alternatives. However, stable solutions may not accurately represent the behavior of the model around the input. Thus, the question we ask in this paper is: are we approximating the local boundary around the input accurately? In particular, are we sampling the right neighborhood so that a linear approximation of the black box is faithful to its true behavior around that input, given that the black box can be highly non-linear (viz. a deep relu network with many linear pieces)? It is difficult to know the correct neighborhood width (or radius), as too small a width can lead to a bad condition number of the inverse covariance matrix of function fitting procedures, resulting in unstable predictions, while too large a width may lead to accounting for multiple linear pieces and consequently a poor local approximation. We in this paper propose a simple approach that is robust across neighborhood widths in recovering faithful local explanations. In addition to a naive implementation of our approach which can still be accurate, we propose a novel adaptive neighborhood sampling scheme (ANS) that we formally show can be much more sample and query efficient. We then empirically evaluate our approach on real data where our explanations are significantly more sample and query efficient than the competitors, while also being faithful and stable across different widths. | Accept | The paper attacks the problem of how to define "local" when generating local linear explanations (e.g. LIME). Forming the linear approximation using multiple points, the proposed method attempts to balance robustness of the explanation vs. its specificity. The approach of using multidimensional piecewise linear segmented regression is sensible for this end, albeit at the cost of increased runtime. The majority of reviewers had a favorable opinion of the work, recognizing the paper's contribution as targeted but important, given the popularity of local linear explanations. Even reviewer hHAF, who recommended rejection, recognized the work's practical benefit ("experimentally it is beneficial"). Thus, I recommend acceptance. | train | [
"H-0buaAJa26",
"7IUw3PI2fSl",
"uHmNV6lxChO",
"_GB8sS_eMVc",
"4Mvj2eHdn2C",
"kqBSHt8kjt",
"8hOFzXRmqoH",
"6VnykH2hGeI",
"wVZtQBqzpHnI",
"NPYMtJUPLxW",
"dv-84d3z5PC",
"8SHaeoRw0D3",
"dmuUikECKPg"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Since the response period ends tomorrow. Please let us know if you have any further questions/concerns. Thank you.",
" We are glad that most of your concerns have been addressed. Yes, most other methods sample in input space. Even manifold methods such as MeLime which sample in the latent space end up decoding ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
3
] | [
"8hOFzXRmqoH",
"uHmNV6lxChO",
"4Mvj2eHdn2C",
"6VnykH2hGeI",
"dmuUikECKPg",
"8SHaeoRw0D3",
"dv-84d3z5PC",
"NPYMtJUPLxW",
"nips_2022_lJHkZbX6Ic1",
"nips_2022_lJHkZbX6Ic1",
"nips_2022_lJHkZbX6Ic1",
"nips_2022_lJHkZbX6Ic1",
"nips_2022_lJHkZbX6Ic1"
] |
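Illustrative note on the row above: a minimal local-linear-explanation baseline of the kind the abstract discusses, with the neighborhood width as an explicit knob. This is our own sketch for illustration, not the paper's ANS scheme; the black-box `f`, Gaussian sampling, and least-squares fit are assumptions.

```python
# Minimal sketch (not the paper's ANS): fit a local linear surrogate to a
# black-box model f around an input x for a given neighborhood width.
import numpy as np

def local_linear_explanation(f, x, width, n_samples=256, rng=None):
    """Return (coef, intercept) of a least-squares local fit around x."""
    if rng is None:
        rng = np.random.default_rng(0)
    Z = x + width * rng.normal(size=(n_samples, x.size))  # neighborhood samples
    y = np.array([f(z) for z in Z])                       # black-box queries
    A = np.hstack([Z, np.ones((n_samples, 1))])           # design matrix + bias
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[:-1], beta[-1]
```

Sweeping `width` over a grid and checking how much `coef` changes gives a quick picture of the stability-versus-faithfulness trade-off the abstract describes.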
nips_2022_ZQcpYaE1z1r | A Quantitative Geometric Approach to Neural-Network Smoothness | Fast and precise Lipschitz constant estimation of neural networks is an important task for deep learning. Researchers have recently found an intrinsic trade-off between the accuracy and smoothness of neural networks, so training a network with a loose Lipschitz constant estimation imposes a strong regularization, and can hurt the model accuracy significantly. In this work, we provide a unified theoretical framework, a quantitative geometric approach, to address the Lipschitz constant estimation. By adopting this framework, we can immediately obtain several theoretical results, including the computational hardness of Lipschitz constant estimation and its approximability. We implement the algorithms induced from this quantitative geometric approach, which are based on semidefinite programming (SDP). Our empirical evaluation demonstrates that they are more scalable and precise than existing tools on Lipschitz constant estimation for $\ell_\infty$-perturbations. Furthermore, we also show their intricate relations with other recent SDP-based techniques, both theoretically and empirically. We believe that this unified quantitative geometric perspective can bring new insights and theoretical tools to the investigation of neural-network smoothness and robustness. | Accept | All the reviewers agree that the paper is novel and interesting and it should be accepted. Please take into account the reviewers' comments while preparing the camera-ready version, particularly the ones on the clarity of the paper. | train | [
"hEjhb-6ix8W",
"df7EmIJvtbp",
"YK2UDZW9qeH",
"QJZyQAD96m6",
"lT7Is6FsHFN",
"tU_-GkdbeB",
"EEG_ckDPNLN",
"akn03o95q-H",
"yypPzzRiG6q",
"keIXIrqAIOM",
"Q-tJAJ0gag",
"I38kHNA-GAI",
"omxrrEh-xwN",
"VXZkdm7Vrzb",
"GRw98_X-XwB",
"T_L6oMlPaQ",
"UEd1w89AlHM",
"3qDyjcn06tx",
"oZ9eIBt2EKX"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" We appreciate your continuing engagement in the discussion. \n- Changing the underlying geometry: yes.\n\n- Evaluation in the context of adversarial robustness: Thanks for the clarification. Given the limited time remaining for the discussion, we would not be able to provide additional experimental results. For e... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
2,
3
] | [
"QJZyQAD96m6",
"VXZkdm7Vrzb",
"tU_-GkdbeB",
"Q-tJAJ0gag",
"akn03o95q-H",
"omxrrEh-xwN",
"yypPzzRiG6q",
"omxrrEh-xwN",
"I38kHNA-GAI",
"nips_2022_ZQcpYaE1z1r",
"3qDyjcn06tx",
"oZ9eIBt2EKX",
"1RefYRy8i2g",
"UEd1w89AlHM",
"T_L6oMlPaQ",
"nips_2022_ZQcpYaE1z1r",
"nips_2022_ZQcpYaE1z1r",
... |
nips_2022_foNVYPnQbhk | SCONE: Surface Coverage Optimization in Unknown Environments by Volumetric Integration | Next Best View computation (NBV) is a long-standing problem in robotics, and consists in identifying the next most informative sensor position(s) for reconstructing a 3D object or scene efficiently and accurately. Like most current methods, we consider NBV prediction from a depth sensor like Lidar systems. Learning-based methods relying on a volumetric representation of the scene are suitable for path planning, but have lower accuracy than methods using a surface-based representation. However, the latter do not scale well with the size of the scene and constrain the camera to a small number of poses. To obtain the advantages of both representations, we show that we can maximize surface metrics by Monte Carlo integration over a volumetric representation. In particular, we propose an approach, SCONE, that relies on two neural modules: The first module predicts occupancy probability in the entire volume of the scene. Given any new camera pose, the second module samples points in the scene based on their occupancy probability and leverages a self-attention mechanism to predict the visibility of the samples. Finally, we integrate the visibility to evaluate the gain in surface coverage for the new camera pose. NBV is selected as the pose that maximizes the gain in total surface coverage. Our method scales to large scenes and handles free camera motion: It takes as input an arbitrarily large point cloud gathered by a depth sensor as well as camera poses to predict NBV. We demonstrate our approach on a novel dataset made of large and complex 3D scenes. | Accept | The paper describes an approach to next-best-view (NBV) planning for the reconstruction of large-scale 3D scenes using depth sensors. The proposed framework models the scene using a probabilistic occupancy map and chooses the next-best-view as the free camera pose that maximizes the gain in surface coverage. Integral to the approach's ability to handle large-scale scenes is the paper's formulation of surface coverage estimation as sample-based volumetric integration. Based on this formulation, the approach employs one neural network to predict the visibilities that are used to calculate surface coverage gain, and a second network to estimate the probabilistic occupancy map from the point cloud input. The paper presents experimental evaluations on the benchmark ShapeNet dataset as well as a proposed large-scale dataset, demonstrating gains over contemporary methods.
The paper was reviewed by three reviewers who read the author response and discussed the paper with the AC. The reviewers agree that the proposal to approximate surface coverage via sample-based volumetric integration, which is integral to the approach, is both novel and principled. To that end, the reviewers appreciate that the proposed architecture is well grounded in rigorous theoretical foundations. The experimental evaluation is thorough, with ablations that clearly demonstrate the advantages of the proposed architectural components. A key concern raised by several reviewers is that the readability of the submission is poor, which makes it difficult to relate the formal derivations to the neural network architecture. This lack of clarity led to notable misunderstandings on the part of at least two reviewers. During the discussion phase, the reviewers acknowledged that the author response largely resolves this concern, but it is critical that the paper be updated to address these issues as well. | train | [
"7u4h6Svx6R",
"7B51TekbLmU",
"cLepXPm7f_",
"KVoyXjAEfES",
"4UZ-O36ErK3",
"sRyYvef_vn",
"swIq_bomfa",
"V40GNEqt3r1",
"yuBntlB-rcc",
"VrvROZ7Mt7c"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for the rebuttle. Many of the doubts are clear. Please include the monte carlo and neural aggregator discussion briefly in the paper.",
" In this third comment, we would like to answer the last questions asked by the reviewer.\n\n**Q7: L292: ‘Model suffers ... to compute coverage gain’ - d... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"4UZ-O36ErK3",
"VrvROZ7Mt7c",
"VrvROZ7Mt7c",
"VrvROZ7Mt7c",
"yuBntlB-rcc",
"yuBntlB-rcc",
"V40GNEqt3r1",
"nips_2022_foNVYPnQbhk",
"nips_2022_foNVYPnQbhk",
"nips_2022_foNVYPnQbhk"
] |
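Illustrative note on the SCONE row above: a minimal sketch, not the authors' implementation, of the Monte Carlo coverage-gain estimate described in the abstract; `occupancy_fn`, `visibility_fn`, and `seen_fn` are hypothetical stand-ins for the paper's two neural modules and its coverage bookkeeping.

```python
# Minimal sketch: estimate the surface-coverage gain of a candidate camera
# pose by Monte Carlo integration over an occupancy field.
import numpy as np

def coverage_gain(occupancy_fn, visibility_fn, seen_fn, pose,
                  bounds, n_samples=4096, rng=None):
    """MC estimate of the newly covered surface for camera `pose`."""
    if rng is None:
        rng = np.random.default_rng(0)
    lo, hi = bounds                                 # axis-aligned scene bounds
    pts = rng.uniform(lo, hi, size=(n_samples, 3))  # uniform volume samples
    occ = occupancy_fn(pts)                         # P(point lies on a surface)
    vis = visibility_fn(pts, pose)                  # P(point visible from pose)
    new = 1.0 - seen_fn(pts)                        # fraction not yet covered
    return np.prod(hi - lo) * np.mean(occ * vis * new)
```

The next best view is then the candidate pose that maximizes this estimate.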
nips_2022_lxdWr1jN8-h | Integrating Symmetry into Differentiable Planning | We study how group symmetry helps improve data efficiency and generalization for end-to-end differentiable planning algorithms, specifically on 2D robotic path planning problems: navigation and manipulation. We first formalize the idea from Value Iteration Networks (VINs) on using convolutional networks for path planning, because it avoids explicitly constructing equivalence classes and enables end-to-end planning. We then show that value iteration can always be represented as some convolutional form for (2D) path planning, and name the resulting paradigm Symmetric Planner (SymPlan). In implementation, we use steerable convolution networks to incorporate symmetry. Our algorithms on navigation and manipulation, with given or learned maps, improve training efficiency and generalization performance by large margins over non-equivariant counterparts, VIN and GPPN. | Reject | The paper addresses path planning with RGB inputs by leveraging the workspace symmetry. To that end, the authors propose an end-to-end differentiable planner that builds on top of VINs and evaluate the method on several 2D-grid planning tasks.
The reviewers recognized that the method presents a performance improvement compared to VIN-like methods, but raised questions around the accessibility, scalability, and overall benefits of the method. During the rebuttal, the authors added new experiments to show the method's efficiency in larger environments, and reorganized the manuscript for better accessibility.
I have read the final version of the manuscript. Based on the current state of the manuscript and the reviewers' feedback, I do not believe that it is ready for publication. My main concerns are around the positioning of the paper and its accessibility.
Positioning of the work -- the authors present the work as addressing robot path planning. The environments and evaluation tasks, even with the new experiments, are toy problems for robotics. The 2D C-Space with image observations and no robot dynamics is not a suitable robotics problem. See PRM-RL [Faust et al., ICRA 2018], RL-RRT [Chiang et al., RA-L 2019], Critical PRMs [Ichter et al., ICRA 2020], and optimal control w/ visual navigation [Bansal et al., CORL 2020] for methods that combine motion planning, controls, and perception (ego sensors, motion planning, and non-trivial robot dynamics and geometry). Granted, they are not differentiable planning, but they solve path planning in more realistic and complex settings. (In the rebuttal, the authors comment that differentiable planning can jointly train perception with the transition model, which is intractable for RRT or A*. However, the transition model is trivial here -- there are no kinodynamic constraints or complicated geometry.) Perhaps a better framing for the presented work is as incorporating symmetry into latent planning, instead of framing it around robotics.
It is not clear what problem the paper is seeking to solve. Please add a clear definition of the path planning problem. Are the policies goal-conditioned? Is the generalization over the workspaces, the initial configurations, or both? What is in the training set? Are the connections between the planning points known or not? And are there any other constraints on the transition function (beyond the workspace constraints)?
Accessibility -- Even after the rebuttal, Sections 3 and 4 are not clear. The symmetry is not introduced well for a non-expert. Some questions -- If I understand correctly, the symmetric NNs map inputs to equivalent states. How is that different from latent spaces? Is the proposed method too specific to CNNs, which are rapidly becoming obsolete in favor of newer models? How would the method compare to a VAE? I suggest that the authors give an intuitive example of the symmetry (for example, we expect the planner to learn that when it sees a wall in a given direction, the transition in that direction is not possible; the same will hold for left, right, top, or bottom, so we hope that by exploiting symmetry we can speed up learning, since the agent would only need to learn on a single instance of the equivalence class and generalize to the others). Lastly, the paper would be stronger with a more in-depth analysis of the method. Where and how exactly did the symmetry help?
Overall, the exploitation of workspace symmetry in E2E differentiable planning has merit. But the framing around robotics, VINs, and CNNs is too specific, yielding results whose significance is not clear. With a more general framing rooted in current ML trends, this paper can make a strong and valuable contribution. | train | [
"0VVFxi3E5p7",
"-py5oH4mgs6",
"k786IkHKBhC",
"fqtMB7jIJgt",
"mpILW2Ar6bq",
"zQgRMQ00pdC",
"aT_jLsUgJH",
"9N3ySU9xux",
"u7thZdG07nV",
"EWe-2DSsXUo",
"phH-_8k8DKu",
"FCQZ4JEdkg0",
"ggbLYM9CKNO",
"PoRrJJt9VjR",
"BkMT4Vef6nz",
"s-zxjhwEIAI"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As a kind reminder, additional to the first revision on adding pseudocode and a new experiment, we just add a new intuitive version of the technical sections (method + framework) in appendix Section D, which is written from scratch and contains minimal terminology for equivariant networks / steerable CNNs. We hop... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
2
] | [
"BkMT4Vef6nz",
"PoRrJJt9VjR",
"zQgRMQ00pdC",
"aT_jLsUgJH",
"zQgRMQ00pdC",
"EWe-2DSsXUo",
"nips_2022_lxdWr1jN8-h",
"BkMT4Vef6nz",
"s-zxjhwEIAI",
"s-zxjhwEIAI",
"PoRrJJt9VjR",
"PoRrJJt9VjR",
"BkMT4Vef6nz",
"nips_2022_lxdWr1jN8-h",
"nips_2022_lxdWr1jN8-h",
"nips_2022_lxdWr1jN8-h"
] |
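Illustrative note on the SymPlan row above: the abstract's observation that value iteration for 2D grid path planning admits a convolutional form (as in VINs) can be demonstrated with per-action shifts, each equivalent to a fixed sparse 3x3 convolution kernel. This is our own sketch, not the paper's code; the reward/obstacle maps and discount are assumptions.

```python
# Minimal sketch of value iteration on a 2D grid written with per-action
# shifts, i.e., one fixed sparse 3x3 "kernel" per action.
import numpy as np

def conv_value_iteration(reward, obstacles, gamma=0.95, n_iters=100):
    """reward, obstacles: (H, W) arrays; returns the value map V."""
    H, W = reward.shape
    V = np.zeros((H, W))
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # 4-connected actions
    for _ in range(n_iters):
        pad = np.pad(V, 1, constant_values=-1e9)
        Q = np.stack([pad[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
                      for dy, dx in moves])     # one channel per action
        V = reward + gamma * Q.max(axis=0)      # Bellman backup
        V[obstacles.astype(bool)] = -1e9        # walls stay unreachable
    return V
```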
nips_2022_jQR9YF2-Jhg | Respecting Transfer Gap in Knowledge Distillation | Knowledge distillation (KD) is essentially a process of transferring a teacher model's behavior, e.g., network response, to a student model. The network response serves as additional supervision to formulate the machine domain, which uses the data collected from the human domain as a transfer set. Traditional KD methods hold an underlying assumption that the data collected in the human domain and the machine domain are both independent and identically distributed (IID). We point out that this naive assumption is unrealistic and there is indeed a transfer gap between the two domains. Although the gap offers the student model external knowledge from the machine domain, the imbalanced teacher knowledge would make us incorrectly estimate how much to transfer from teacher to student per sample on the non-IID transfer set. To tackle this challenge, we propose Inverse Probability Weighting Distillation (IPWD), which estimates the propensity of a training sample belonging to the machine domain and assigns its inverse amount to compensate for under-represented samples. Experiments on CIFAR-100 and ImageNet demonstrate the effectiveness of IPWD for both two-stage distillation and one-stage self-distillation. | Accept | This paper analyzes the way in which most previous knowledge distillation methods violate IID assumptions and it aims to address the drop in performance on student models through this analysis. The paper proposes an Inverse Probability Weighting Distillation (IPWD) technique, derived in part through a causal analysis of the distillation setting. Results are mainly presented for CIFAR-100, but some ImageNet results are given and these results show that the proposed approach does indeed outperform a wide variety of prior work for distillation. The review scores for this paper place it right at the borderline of acceptance, with two weak accepts and one weak reject.
Given the paper was at the borderline of numerical acceptance and the signals from reviews and subsequent discussions were not conclusive, the Area Chair also read this paper and found the underlying idea to be quite interesting and novel. The application of causal analysis to the problem in this way does a nice job of bringing together an important branch of machine learning (causal analysis) with deep learning and knowledge distillation. The AC also judged that the experimental work in this paper was substantial. Given that the method also yields better results than many other prior methods, the AC recommends accepting this paper.
| train | [
"LI5cWy6Bi0x",
"MZi-KlZqo9i",
"yMbQ1tnXxA",
"pVcIn0Z7GS",
"JT9Yjei1eDd",
"AhklVs9q4xe",
"6Jrp6ou7CTU",
"-c8Bbfr-sN",
"hjqzjrnpXUe",
"UjsqdMlWrw",
"lLuoU59rfLJ",
"sJYLD78tLv",
"YldNQ3NtwQ",
"vgTzGNqb8mz",
"toqKWM5xUyy",
"clergAPJ-04",
"USK2rArPAaD",
"z-bNO_kiBJJ"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the acknowledgement of our responses and for upgrading the rating! For your remaining concerns, we would like to summarize the motivation, contributions (especially the **technical contribution** of propensity score estimation), and empirical performance on ImageNet in the following:\n\n* **Motivati... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"MZi-KlZqo9i",
"YldNQ3NtwQ",
"pVcIn0Z7GS",
"toqKWM5xUyy",
"-c8Bbfr-sN",
"6Jrp6ou7CTU",
"vgTzGNqb8mz",
"z-bNO_kiBJJ",
"USK2rArPAaD",
"clergAPJ-04",
"nips_2022_jQR9YF2-Jhg",
"z-bNO_kiBJJ",
"z-bNO_kiBJJ",
"USK2rArPAaD",
"clergAPJ-04",
"nips_2022_jQR9YF2-Jhg",
"nips_2022_jQR9YF2-Jhg",
... |
nips_2022_mfxq7BrMfga | Generalized One-shot Domain Adaptation of Generative Adversarial Networks | The adaptation of a Generative Adversarial Network (GAN) aims to transfer a pre-trained GAN to a target domain with limited training data. In this paper, we focus on the one-shot case, which is more challenging and rarely explored in previous works. We consider that the adaptation from a source domain to a target domain can be decoupled into two parts: the transfer of global style like texture and color, and the emergence of new entities that do not belong to the source domain. While previous works mainly focus on style transfer, we propose a novel and concise framework to address the \textit{generalized one-shot adaptation} task for both style and entity transfer, in which a reference image and its binary entity mask are provided. Our core idea is to constrain the gap between the internal distributions of the reference and syntheses by sliced Wasserstein distance. To better achieve it, style fixation is used at first to roughly obtain the exemplary style, and an auxiliary network is introduced to the generator to disentangle entity and style transfer. Besides, to realize cross-domain correspondence, we propose the variational Laplacian regularization to constrain the smoothness of the adapted generator. Both quantitative and qualitative experiments demonstrate the effectiveness of our method in various scenarios. Code is available at \url{https://github.com/zhangzc21/Generalized-One-shot-GAN-adaptation}. | Accept | This paper focuses on the one-shot domain adaptation of GAN models. The idea of disentangling style and entity transfer is simple and effective. The meta-reviewer recommends acceptance of the paper, and the authors are encouraged to take the reviews into consideration when preparing a final version of the paper. | train | [
"hsiD26gpvdC",
"cAE_szxsbrJ",
"QeQy-Up1NEF",
"wj81F2gCTUN",
"Q407KHuMRTm",
"jqrXa4XzR6u",
"u_zgrZoJ53P",
"0jMWltjrp7e",
"HL6Z2D41woH",
"GNfMuIxERD",
"uzAn-WduQh6",
"E03tlvtrlRW",
"gQ3-gX4tK3m",
"ACwlD9gVT3C"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The author addressed most of my concerns. Thus, I tend to raise my score.",
" Thank you for answers and extra experiments. Most of my concerns were addressed and I am raising the score accordingly. ",
" __Q4. Where can see other methods produce artifacts when the entities are big?__\n\nA4. Please see the last... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"u_zgrZoJ53P",
"QeQy-Up1NEF",
"wj81F2gCTUN",
"ACwlD9gVT3C",
"gQ3-gX4tK3m",
"E03tlvtrlRW",
"0jMWltjrp7e",
"HL6Z2D41woH",
"uzAn-WduQh6",
"nips_2022_mfxq7BrMfga",
"nips_2022_mfxq7BrMfga",
"nips_2022_mfxq7BrMfga",
"nips_2022_mfxq7BrMfga",
"nips_2022_mfxq7BrMfga"
] |
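Illustrative note on the row above: a minimal numpy sketch of the sliced Wasserstein distance the abstract uses to match the internal feature distributions of the reference and the syntheses. The projection count and the equal-sample-size assumption are ours, not the paper's setup.

```python
# Minimal sketch: sliced Wasserstein-2 distance between two feature sets.
import numpy as np

def sliced_wasserstein(X, Y, n_proj=64, rng=None):
    """X, Y: (n, d) feature sets of equal size; returns an SW-2 estimate."""
    if rng is None:
        rng = np.random.default_rng(0)
    dirs = rng.normal(size=(n_proj, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # random unit directions
    px = np.sort(X @ dirs.T, axis=0)   # sorted 1D projections of X
    py = np.sort(Y @ dirs.T, axis=0)   # sorted 1D projections of Y
    return np.sqrt(np.mean((px - py) ** 2))  # average 1D W2 over directions
```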
nips_2022_htUvh7xPoa | Random Sharpness-Aware Minimization | Sharpness-Aware Minimization (SAM) was recently proposed to seek parameters that lie in a flat region and thereby improve generalization when training neural networks. In particular, a minimax optimization objective is defined to find the maximum loss value centered on the weight, with the purpose of simultaneously minimizing loss value and loss sharpness. For the sake of simplicity, SAM applies one-step gradient ascent to approximate the solution of the inner maximization. However, one-step gradient ascent may not be sufficient, and multi-step gradient ascents will cause additional training costs. Based on this observation, we propose a novel random smoothing based SAM (R-SAM) algorithm. To be specific, R-SAM essentially smooths the loss landscape, based on which we are able to apply one-step gradient ascent on the smoothed weights to improve the approximation of the inner maximization. Further, we evaluate our proposed R-SAM on CIFAR and ImageNet datasets. The experimental results illustrate that R-SAM can consistently improve the performance of ResNet and Vision Transformer (ViT) training. | Accept | All reviewers except one agreed that this paper should be accepted because of the strong author response during the rebuttal phase. Specifically, the reviewers appreciated the significance of the problem being addressed, the clarity of the paper, the simplicity of the method, and the analysis. Authors: please carefully revise the manuscript based on the suggestions by the reviewers: they made many careful suggestions to improve the work and stressed that the paper should only be accepted once these changes are implemented. Once these are done, the paper will be a nice addition to the conference! | train | [
"z1LZuatxMOD",
"EPc2L6vvxbh",
"poNL1Gg5BX8G",
"Jzga_pj59p6",
"51hzizhZq_sv",
"vDGFc9xkDmCM",
"msihoDDRAD",
"GXFt8247X_D",
"vl1GRud_nmJ",
"hLhcnFC65sR",
"zpsvtTxl37",
"2E2zVUc1Yi2",
"K-5dAOXnViD",
"dIGDYo8l07Z",
"SMdKl8qeio",
"BJxQxxDvN9j",
"RACcRH0dqV7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers! Thank you so much for your time on this paper so far.\n\nThe authors have written a detailed response to your concerns. How does this change your review?\n\nPlease engage with the authors in the way that you would like reviewers to engage your submitted papers: critically and open to changing your... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
3
] | [
"nips_2022_htUvh7xPoa",
"GXFt8247X_D",
"Jzga_pj59p6",
"51hzizhZq_sv",
"vDGFc9xkDmCM",
"msihoDDRAD",
"RACcRH0dqV7",
"vl1GRud_nmJ",
"BJxQxxDvN9j",
"SMdKl8qeio",
"2E2zVUc1Yi2",
"K-5dAOXnViD",
"dIGDYo8l07Z",
"nips_2022_htUvh7xPoa",
"nips_2022_htUvh7xPoa",
"nips_2022_htUvh7xPoa",
"nips_20... |
nips_2022_9ND8fMUzOAr | Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning | Vision transformers have recently achieved competitive results across various vision tasks but still suffer from heavy computation costs when processing a large number of tokens. Many advanced approaches have been developed to reduce the total number of tokens in the large-scale vision transformers, especially for image classification tasks. Typically, they select a small group of essential tokens according to their relevance with the [\texttt{class}] token, then fine-tune the weights of the vision transformer. Such fine-tuning is less practical for dense prediction due to the much heavier computation and GPU memory cost than image classification.
In this paper, we focus on a more challenging problem, i.e., accelerating large-scale vision transformers for dense prediction without any additional re-training or fine-tuning. In response to the fact that high-resolution representations are necessary for dense prediction, we present two non-parametric operators, a \emph{token clustering layer} to decrease the number of tokens and a \emph{token reconstruction layer} to increase the number of tokens. The following steps are performed to achieve this: (i) we use the token clustering layer to cluster the neighboring tokens together, resulting in low-resolution representations that maintain the spatial structures; (ii) we apply the following transformer layers only to these low-resolution representations or clustered tokens; and (iii) we use the token reconstruction layer to re-create the high-resolution representations from the refined low-resolution representations. The results obtained by our method are promising on five dense prediction tasks including object detection, semantic segmentation, panoptic segmentation, instance segmentation, and depth estimation. Accordingly, our method accelerates $40\%\uparrow$ FPS and saves $30\%\downarrow$ GFLOPs of ``Segmenter+ViT-L/$16$'' while maintaining $99.5\%$ of the performance on ADE$20$K without fine-tuning the official weights. | Accept | This paper presents a method to reduce the computational cost of a trained vision transformer for dense prediction. According to the authors' presented experiments, the method can accelerate the transformers effectively without retraining. Although some experiments are not thorough (as discussed below), I see potential in this method and would give the research community a chance to see whether the method can be further generalized to other architectures.
The AC does see some strange experimental setups. For instance, it is odd that the authors chose Mask2Former to conduct experiments but use Segmenter to conduct experiments on ADE20K. Mask2Former already provides quite a strong model on ADE20K. Why use Segmenter for ADE20K experiments? The AC also observes that the authors compare with ACT on Segmenter as well.
The authors are strongly encouraged to release the code so that the general public can test the method on other architectures. | train | [
"YOtJMWU1KGK",
"_ljYXctMSFO",
"UqD4w1icGP1",
"MA7Wcmg6wP8",
"8CrxXqj5nQ0",
"EYopl8wEyee",
"G-nrv1FAnmo",
"qktSXEYvwN-",
"tJ1XOCsdZn9",
"2NEEQdq38c9",
"QFf9S0S2A8K",
"2-KNyTq5j_",
"s-kzfNBDej7",
"r6AT4zJCIEq"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the previous careful reviews and valuable suggestions.\n\nWe have learned a lot through the suggested comparisons with TokenPooling[47]/DynamicViT[52]/TokenLearner[55]. We also hope to learn more from your further valuable suggestions.",
" We thank the reviewer for the previous careful... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
4,
4,
4
] | [
"s-kzfNBDej7",
"2-KNyTq5j_",
"QFf9S0S2A8K",
"8CrxXqj5nQ0",
"EYopl8wEyee",
"r6AT4zJCIEq",
"s-kzfNBDej7",
"2-KNyTq5j_",
"QFf9S0S2A8K",
"nips_2022_9ND8fMUzOAr",
"nips_2022_9ND8fMUzOAr",
"nips_2022_9ND8fMUzOAr",
"nips_2022_9ND8fMUzOAr",
"nips_2022_9ND8fMUzOAr"
] |
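Illustrative note on the row above: a minimal PyTorch sketch of the two non-parametric operators the abstract describes, clustering neighboring tokens into a low-resolution map and reconstructing high-resolution tokens via similarity to the pre-clustering tokens. Average pooling and the temperature `tau` are assumptions of this sketch; the paper's exact operators may differ.

```python
# Minimal sketch: token clustering and similarity-based token reconstruction.
import torch
import torch.nn.functional as F

def cluster_tokens(x, h, w, stride=2):
    """x: (B, h*w, C) tokens on an h x w grid -> (B, (h*w)//stride**2, C)."""
    B, _, C = x.shape
    grid = x.transpose(1, 2).reshape(B, C, h, w)
    return F.avg_pool2d(grid, stride).flatten(2).transpose(1, 2)

def reconstruct_tokens(high_before, low_before, low_after, tau=0.1):
    """Map refined low-res tokens back to every high-res position."""
    sim = torch.einsum('bnc,bmc->bnm',
                       F.normalize(high_before, dim=-1),
                       F.normalize(low_before, dim=-1)) / tau
    return torch.softmax(sim, dim=-1) @ low_after  # (B, n_high, C)
```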
nips_2022_TjVU5Lipt8F | When Privacy Meets Partial Information: A Refined Analysis of Differentially Private Bandits | We study the problem of multi-armed bandits with ε-global Differential Privacy (DP). First, we prove the minimax and problem-dependent regret lower bounds for stochastic and linear bandits that quantify the hardness of bandits with ε-global DP. These bounds suggest the existence of two hardness regimes depending on the privacy budget ε. In the high-privacy regime (small ε), the hardness depends on a coupled effect of privacy and partial information about the reward distributions. In the low-privacy regime (large ε), bandits with ε-global DP are not harder than bandits without privacy. For stochastic bandits, we further propose a generic framework to design a near-optimal ε-global DP extension of an index-based optimistic bandit algorithm. The framework consists of three ingredients: the Laplace mechanism, arm-dependent adaptive episodes, and usage of only the rewards collected in the last episode for computing private statistics. Specifically, we instantiate ε-global DP extensions of the UCB and KL-UCB algorithms, namely AdaP-UCB and AdaP-KLUCB. AdaP-KLUCB is the first algorithm that both satisfies ε-global DP and yields a regret upper bound that matches the problem-dependent lower bound up to multiplicative constants. | Accept | This paper studies the problem of multi-armed bandits under differential privacy. The reviewers are all positive about the results and presentation of the paper. | test | [
"rFMQT2jwMvh",
"asXUKzGs6e",
"jDPR1mgoGIA",
"eBkhpBMyuj",
"2t_Q2uVrWl3",
"NBnkMPY1Ge",
"Sc5-Mz_xSii",
"iBEILsLK1b",
"VFaZDiLN0V",
"T7kwfq9PnTq",
"pgw0RISwMgJB",
"qBkLCsnGULi",
"CWZ5OY39dN1",
"DdaT3b8IMvk",
"2xNRAmsJdKV",
"yOj4MnL4_mf",
"29fdSce3mVr",
"IYQwvWwpjt",
"fikrpuDE2u6",
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" We are glad that all of your concerns have been addressed and thank you for raising your score. We will add a paragraph explaining the details discussed here and the comparaison to [1, 20, R1, 3] after Theorem 2.",
" Thanks for the update.\n\nI hope the authors can add one paragraph to carefully discuss the cur... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
3
] | [
"asXUKzGs6e",
"jDPR1mgoGIA",
"eBkhpBMyuj",
"2t_Q2uVrWl3",
"NBnkMPY1Ge",
"iBEILsLK1b",
"qBkLCsnGULi",
"VFaZDiLN0V",
"CWZ5OY39dN1",
"pgw0RISwMgJB",
"DdaT3b8IMvk",
"_jD7zvtP1Nq",
"fikrpuDE2u6",
"IYQwvWwpjt",
"29fdSce3mVr",
"nips_2022_TjVU5Lipt8F",
"nips_2022_TjVU5Lipt8F",
"nips_2022_T... |
nips_2022_uOQNvEfjpaC | What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs | Given an input image, and nothing else, our method returns the bounding boxes of objects in the image and phrases that describe the objects. This is achieved within an open world paradigm, in which the objects in the input image may not have been encountered during the training of the localization mechanism. Moreover, training takes place in a weakly supervised setting, where no bounding boxes are provided. To achieve this, our method combines two pre-trained networks: the CLIP image-to-text matching score and the BLIP image captioning tool. Training takes place on COCO images and their captions and is based on CLIP. Then, during inference, BLIP is used to generate a hypothesis regarding various regions of the current image. Our work generalizes weakly supervised segmentation and phrase grounding and is shown empirically to outperform the state of the art in both domains. It also shows very convincing results in the novel task of weakly-supervised open-world purely visual phrase-grounding presented in our work.
For example, on the datasets used for benchmarking phrase-grounding, our method results in a very modest degradation in comparison to methods that employ human captions as an additional input. | Accept | The paper presents a new approach, using two pre-trained models (CLIP and BLIP) as supervision to enable three tasks, including the newly proposed task WWbL, which is a joint open vocabulary description and grounding/localization task trained only with weak supervision.
I recommend acceptance based on the revised paper, the reviewer's comments, and the author response. I think the paper sufficiently contributes:
- Overall idea and architecture
- The WWbL task, even if similar to previous task
- Extensive experimental evaluation and comparison to prior work
- Solid ablation study
The paper received mixed review scores with 2 Borderline rejects, 1 Borderline accept, and 1 strong accept.
The authors have, in my opinion, largely addressed the concerns and revised the paper; one of the remaining concerns of the weak-reject reviewers is novelty, which I think is sufficient.
My recommendation for acceptance is under expectation that the authors revise the paper to address any outstanding points made by reviewers, e.g.
- additional alternative models (reviewer MVej) if possible
Additionally, I think it would be great if the authors discuss the relationship of WWbL to the task of dense captioning more clearly in the paper. | test | [
"_FvDhxCL0J5",
"LDCgPpP_SJW",
"j199srD8iX",
"SKa4GTws8Q9",
"MYUeVg9na_r",
"R0mDqVPUlKh",
"HJaGo6pkvHf",
"nnMUQq_2mhJ",
"Ss7tGKqWVTU",
"8AxZZuTxAtE",
"it6u6y0VauV",
"5zQ7FJl4cl",
"xZ6v_UdSN_n",
"_FTsFCbyMI",
"tutiBrOkaO",
"SdyZyw8-Xj"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer aWBt,\n\nplease look at the author response to your review, and comment on the corresponding author response, and if this changes your ratings / understanding / resolves your concerns / creates new concerns/questions.\n\nThank you, your AC\n\nPS: Don't respond to this message but directly to the aut... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"tutiBrOkaO",
"j199srD8iX",
"Ss7tGKqWVTU",
"Ss7tGKqWVTU",
"R0mDqVPUlKh",
"5zQ7FJl4cl",
"nips_2022_uOQNvEfjpaC",
"nips_2022_uOQNvEfjpaC",
"SdyZyw8-Xj",
"tutiBrOkaO",
"_FTsFCbyMI",
"xZ6v_UdSN_n",
"nips_2022_uOQNvEfjpaC",
"nips_2022_uOQNvEfjpaC",
"nips_2022_uOQNvEfjpaC",
"nips_2022_uOQNvE... |
nips_2022_hFni381edL | SAPA: Similarity-Aware Point Affiliation for Feature Upsampling | We introduce point affiliation into feature upsampling, a notion that describes the affiliation of each upsampled point to a semantic cluster formed by local decoder feature points with semantic similarity. By rethinking point affiliation, we present a generic formulation for generating upsampling kernels. The kernels encourage not only semantic smoothness but also boundary sharpness in the upsampled feature maps. Such properties are particularly useful for some dense prediction tasks such as semantic segmentation. The key idea of our formulation is to generate similarity-aware kernels by comparing the similarity between each encoder feature point and the spatially associated local region of decoder features. In this way, the encoder feature point can function as a cue to inform the semantic cluster of upsampled feature points. To embody the formulation, we further instantiate a lightweight upsampling operator, termed Similarity-Aware Point Affiliation (SAPA), and investigate its variants. SAPA invites consistent performance improvements on a number of dense prediction tasks, including semantic segmentation, object detection, depth estimation, and image matting. Code is available at: https://github.com/poppinace/sapa | Accept | The paper focuses on the task of feature upsampling, specifically in decoder layers for dense prediction problems. The proposed point affiliation module can be used in upsampling kernels to produce semantically smooth and boundary-preserving upsampled sets. The paper received four detailed reviews from experts. There was a healthy discussion between authors and reviewers during the discussion period and the extra analyses, explanation, and experiments from the authors helped resolve most of the concerns raised by the reviewers. With these extra items presented in the discussion period, the paper has reached the level of impact and contribution expected of NeurIPS papers. The authors are recommended to include them in the final version of the paper. | train | [
"BFhhkJlE6hj",
"vJjKQThIYya",
"JQ5Nt4rZav8",
"fzydfh-2XdV",
"9DdWooU8D17",
"XOsTNBcLwe",
"3gPGSInzJGX",
"NKdgQ2bON_1"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for positive comments and consider our approach useful. We answer the questions as follows.\n\n**Actual runtime comparison.**\n\nWe test the runtime on a single NVIDIA GeForce RTX 1080Ti GPU with Intel Xeon CPU E5-1620 v4 @ 3.50GHz CPU. \nBy upsampling a random feature map of size 256\\*120\... | [
-1,
-1,
-1,
-1,
5,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"NKdgQ2bON_1",
"3gPGSInzJGX",
"XOsTNBcLwe",
"9DdWooU8D17",
"nips_2022_hFni381edL",
"nips_2022_hFni381edL",
"nips_2022_hFni381edL",
"nips_2022_hFni381edL"
] |
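Illustrative note on the SAPA row above: a rough PyTorch sketch of the kernel-generation idea in the abstract, where each encoder point is compared with its local decoder neighborhood to form a similarity-aware upsampling kernel. Dot-product similarity, a 3x3 window, and x2 upsampling are assumptions of this sketch, not the paper's exact operator.

```python
# Rough sketch: similarity-aware upsampling kernels from encoder/decoder features.
import torch
import torch.nn.functional as F

def similarity_aware_upsample(decoder, encoder, k=3):
    """decoder: (B, C, H, W); encoder: (B, C, 2H, 2W) -> (B, C, 2H, 2W)."""
    B, C, H, W = decoder.shape
    up = F.interpolate(decoder, scale_factor=2, mode='nearest')
    # k*k decoder neighborhood around each of the 4*H*W output positions.
    win = F.unfold(up, k, padding=k // 2).view(B, C, k * k, 4 * H * W)
    q = encoder.view(B, C, 1, 4 * H * W)             # encoder point as a query
    kernel = torch.softmax((q * win).sum(1), dim=1)  # similarity-aware kernel
    return (win * kernel.unsqueeze(1)).sum(2).view(B, C, 2 * H, 2 * W)
```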
nips_2022_iQpaHC7cPfR | SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections | Inverse rendering of an object under entirely unknown capture conditions is a fundamental challenge in computer vision and graphics. Neural approaches such as NeRF have achieved photorealistic results on novel view synthesis, but they require known camera poses. Solving this problem with unknown camera poses is highly challenging as it requires joint optimization over shape, radiance, and pose. This problem is exacerbated when the input images are captured in the wild with varying backgrounds and illuminations. Standard pose estimation techniques fail in such image collections in the wild due to very few estimated correspondences across images. Furthermore, NeRF cannot relight a scene under any illumination, as it operates on radiance (the product of reflectance and illumination). We propose a joint optimization framework to estimate the shape, BRDF, and per-image camera pose and illumination. Our method works on in-the-wild online image collections of an object and produces relightable 3D assets for several use-cases such as AR/VR. To our knowledge, our method is the first to tackle this severely unconstrained task with minimal user interaction. | Accept | This paper had notably consistent reviews. All reviews were thoughtful, and there was a consensus that this paper tackles an important problem in a way that has not been explored. While there were some weaknesses highlighted in the review process, discussion and the author rebuttal ameliorated all major concerns. Therefore, I am accepting this paper. | train | [
"qGJwhSgFoLt",
"kDFqs5jNAkhb",
"gDp5CoxUFaa",
"vxG70-LEQSV",
"F4odPg3o94B",
"z2k7n3cn2ra",
"tQ8LCZ-VhFN",
"5h2e2G_girj"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers, Thanks for your constructive feedback. We hope to have clarified most of the reviewer questions in our response. As we are nearing the end of the author-reviewer discussion period, we would like to give a gentle reminder in case you have any more questions or concerns.",
" **Comparison with nois... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"F4odPg3o94B",
"5h2e2G_girj",
"tQ8LCZ-VhFN",
"z2k7n3cn2ra",
"nips_2022_iQpaHC7cPfR",
"nips_2022_iQpaHC7cPfR",
"nips_2022_iQpaHC7cPfR",
"nips_2022_iQpaHC7cPfR"
] |
nips_2022_HjicdpP-Nth | Generalized Laplacian Eigenmaps | Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. COLES, a recent graph contrastive method, combines traditional graph embedding and negative sampling into one framework. COLES in fact minimizes the trace difference between the within-class scatter matrix encapsulating the graph connectivity and the total scatter matrix encapsulating negative sampling. In this paper, we propose a more essential framework for graph embedding, called Generalized Laplacian EigeNmaps (GLEN), which learns a graph representation by maximizing the rank difference between the total scatter matrix and the within-class scatter matrix, resulting in the minimum class separation guarantee. However, the rank difference minimization is an NP-hard problem. Thus, we replace the trace difference that corresponds to the difference of nuclear norms by the difference of LogDet expressions, which we argue is a more accurate surrogate for the NP-hard rank difference than the trace difference. While enjoying a lower computational cost, the difference of LogDet terms is lower-bounded by the Affine-invariant Riemannian metric (AIRM) and the Jensen-Bregman LogDet Divergence (JBLD), and upper-bounded by AIRM scaled by a factor of $\sqrt{m}$. We show that GLEN offers favourable accuracy/scalability compared to state-of-the-art baselines. | Accept | The authors provided a nice rebuttal and addressed the major issues in the last round. Therefore, I recommend accepting this paper. | train | [
"m6Rf6iLflKV",
"FwuxOkxYaGi",
"W2nLudS5zlT",
"P94g2PP1S6x",
"-e2A0L0wzJP",
"7Vx19nn6VW4",
"u0ISM-i_nZx",
"BbPgi_Y85k9",
"Iimyw9XR2O1",
"0LKFYg6ifJc",
"427K1wlrPCx"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers and the AC for their work.\n\nAs the reviewer-author discussion period is finishing in the next few hours, we just wanted to say that we are here to help should you have any additional questions.\n\nBest regards,\nAuthors.\n\n",
" # Response to Rev. 3 (3VJW)\n\n***We thank the reviewer** ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2022_HjicdpP-Nth",
"427K1wlrPCx",
"0LKFYg6ifJc",
"0LKFYg6ifJc",
"Iimyw9XR2O1",
"Iimyw9XR2O1",
"Iimyw9XR2O1",
"Iimyw9XR2O1",
"nips_2022_HjicdpP-Nth",
"nips_2022_HjicdpP-Nth",
"nips_2022_HjicdpP-Nth"
] |
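Illustrative rendering for the GLEN row above of the three objectives the abstract contrasts, with $S_t$ the total scatter and $S_w$ the within-class scatter; the regularizer $\delta > 0$ is an assumption added here for invertibility, not taken from the paper:

```latex
% S_t: total scatter, S_w: within-class scatter (both PSD); for PSD
% matrices Tr(S) equals the nuclear norm, which gives the second line.
\begin{aligned}
\text{rank objective (NP-hard):}\quad & \max\; \operatorname{rank}(S_t) - \operatorname{rank}(S_w)\\
\text{trace surrogate (COLES):}\quad  & \max\; \operatorname{Tr}(S_t) - \operatorname{Tr}(S_w)
   = \lVert S_t\rVert_* - \lVert S_w\rVert_*\\
\text{LogDet surrogate (GLEN):}\quad  & \max\; \log\det(S_t + \delta I) - \log\det(S_w + \delta I)
\end{aligned}
```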
nips_2022_g_bqn4ewVG | PatchComplete: Learning Multi-Resolution Patch Priors for 3D Shape Completion on Unseen Categories | While 3D shape representations enable powerful reasoning in many visual and perception applications, learning 3D shape priors tends to be constrained to the specific categories trained on, leading to an inefficient learning process, particularly for general applications with unseen categories. Thus, we propose PatchComplete, which learns effective shape priors based on multi-resolution local patches, which are often more general than full shapes (e.g., chairs and tables often both share legs) and thus enable geometric reasoning about unseen class categories. To learn these shared substructures, we learn multi-resolution patch priors across all train categories, which are then associated to input partial shape observations by attention across the patch priors, and finally decoded into a complete shape reconstruction. Such patch-based priors avoid overfitting to specific train categories and enable reconstruction on entirely unseen categories at test time. We demonstrate the effectiveness of our approach on synthetic ShapeNet data as well as challenging real-scanned objects from ScanNet, which include noise and clutter, improving over state of the art in novel-category shape completion by 19.3% in chamfer distance on ShapeNet, and 9.0% for ScanNet. | Accept | This is an interesting paper on class-independent 3d shape completion. Reviewers agree that the paper has good quality and is moderately original. There were initially some questions about the level of generalization to new classes, but after a strong rebuttal all reviewers find the results compelling and all of them suggest acceptance. I agree with their assessment. | train | [
"iP0I2My5C_r",
"982PpeRSyyO",
"ZZMg34Zp-mQ",
"rXs_MuyRofA",
"JIYEIKdMl-5",
"GjpeeSdtSWg",
"siWbwWdOdHg",
"sEC7eyVnfSj"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your valuable review; we are glad that our method was found to be 'novel' and to enable 'good generalization to unseen categories', with 'experimental evaluation [that] is done well'.\n\n**Applications.** Our method focuses on the problem of shape completion on objects from unseen categories.\nWe be... | [
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"sEC7eyVnfSj",
"siWbwWdOdHg",
"GjpeeSdtSWg",
"JIYEIKdMl-5",
"nips_2022_g_bqn4ewVG",
"nips_2022_g_bqn4ewVG",
"nips_2022_g_bqn4ewVG",
"nips_2022_g_bqn4ewVG"
] |
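Illustrative note on the PatchComplete row above: a toy sketch of attending over a bank of learned patch priors, as the abstract describes. The prior bank, feature shapes, and scaled dot-product attention are assumptions of this sketch.

```python
# Toy sketch: associate input features with a bank of patch priors by attention.
import torch

def attend_patch_priors(queries, prior_keys, prior_values):
    """queries: (B, N, C); prior_keys/values: (M, C) -> (B, N, C) mixture."""
    logits = queries @ prior_keys.t() / prior_keys.shape[1] ** 0.5
    return torch.softmax(logits, dim=-1) @ prior_values
```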
nips_2022_pkfpkWU536D | Neural Shape Deformation Priors | We present Neural Shape Deformation Priors, a novel method for shape manipulation that predicts mesh deformations of non-rigid objects from user-provided handle movements. State-of-the-art methods cast this problem as an optimization task, where the input source mesh is iteratively deformed to minimize an objective function according to hand-crafted regularizers such as ARAP. In this work, we learn the deformation behavior based on the underlying geometric properties of a shape, while leveraging a large-scale dataset containing a diverse set of non-rigid deformations. Specifically, given a source mesh and desired target locations of handles that describe the partial surface deformation, we predict a continuous deformation field that is defined in 3D space to describe the space deformation. To this end, we introduce transformer-based deformation networks that represent a shape deformation as a composition of local surface deformations. It learns a set of local latent codes anchored in 3D space, from which we can learn a set of continuous deformation functions for local surfaces.
Our method can be applied to challenging deformations and generalizes well to unseen deformations. We validate our approach in experiments using the DeformingThing4D dataset, and compare to both classic optimization-based and recent neural network-based methods. | Accept | While some of the scores on this paper are mixed, even the negative reviews highlight the quality and interest of the work and have specific (and somewhat debatable) technical concerns. Overall, the AE recommends acceptance, especially in light of the detailed and thoughtful responses during the rebuttal phase.
In the camera-ready, the authors are encouraged to see if they can squeeze some of the new results (e.g., the transfer learning attempt in Figure 6 and comparisons to Shapeflow) into the main body of the paper, where they're more likely to be noticed. | val | [
"VYdPXnwycA_",
"kKOpZtOo27l",
"U9nGvfEWajV",
"5ZPxzh8nvxg",
"Q1BXyYI5BII",
"uXeMer-cIQ8",
"q1oGYs1Qfcu",
"JmGjq-PKZ2s",
"ywcU4zezp5",
"TI180jT6d6K",
"JbqUdqQFOVN",
"rne54RJ8NFk"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the positive feedback! \n\nOur model learns deformation priors from a dataset containing realistic non-rigid motions. When it is directly evaluated on non-realistic or non-physical-aware handles, it will try to find the most similar realistic deformation that can best explain the given handles.\nHoweve... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"kKOpZtOo27l",
"Q1BXyYI5BII",
"uXeMer-cIQ8",
"nips_2022_pkfpkWU536D",
"rne54RJ8NFk",
"JbqUdqQFOVN",
"TI180jT6d6K",
"ywcU4zezp5",
"nips_2022_pkfpkWU536D",
"nips_2022_pkfpkWU536D",
"nips_2022_pkfpkWU536D",
"nips_2022_pkfpkWU536D"
] |