paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_IsHRUzXPqhI | SHINE: SubHypergraph Inductive Neural nEtwork | Hypergraph neural networks can model multi-way connections among nodes of the graphs, which are common in real-world applications such as genetic medicine. In particular, genetic pathways or gene sets encode molecular functions driven by multiple genes, naturally represented as hyperedges. Thus, hypergraph-guided embedding can capture functional relations in learned representations. Existing hypergraph neural network models often focus on node-level or graph-level inference. There is an unmet need in learning powerful representations of subgraphs of hypergraphs in real-world applications. For example, a cancer patient can be viewed as a subgraph of genes harboring mutations in the patient, while all the genes are connected by hyperedges that correspond to pathways representing specific molecular functions. For accurate inductive subgraph prediction, we propose SubHypergraph Inductive Neural nEtwork (SHINE). SHINE uses informative genetic pathways that encode molecular functions as hyperedges to connect genes as nodes. SHINE jointly optimizes the objectives of end-to-end subgraph classification and hypergraph nodes' similarity regularization. SHINE simultaneously learns representations for both genes and pathways using strongly dual attention message passing. The learned representations are aggregated via a subgraph attention layer and used to train a multilayer perceptron for subgraph inferencing. We evaluated SHINE against a wide array of state-of-the-art (hyper)graph neural networks, XGBoost, NMF and polygenic risk score models, using large scale NGS and curated datasets. SHINE outperformed all comparison models significantly, and yielded interpretable disease models with functional insights. | Accept | The paper proposed a GNN that explicitly treats hyperedges, and makes use of strongly dual attention, hypergraph regularization, and weighted subgraph attention. 
The proposed method shows better performance than existing baselines on two genetic medicine datasets. Explainability is also demonstrated.
Reviewers originally raised many concerns about presentation (too specialized for the target application), lack of ablation (effectiveness of each proposed component is not clearly shown), novelty (combination of small modifications of existing methods), and explainability (existing methods can do the same). The authors did an excellent job of addressing most of the concerns: they reported additional ablation and baseline results, showing that the proposed method still performs better and that each proposed component plays a significant role.
Two reviewers were convinced by the authors' response, while the other two were not, insisting that the novelty issue remains and that, given the limited novelty, more careful investigation is required for publication.
This is a borderline paper, and I recommend acceptance because I think adjusting existing methods to target applications is important research even if the modifications are small. The proposed method significantly outperforms existing baselines (including the ones reviewers suggested), and the additional ablation study shows each of the proposed components is effective.
On the other hand, I also sympathize with the reviewers who gave negative evaluations, regarding the following comments:
"formalising the key differences with existing similar methods (e.g., HyperGAT in lines 159-169) and confirming the differences with convincing (synthetic/real-world) experiments, e.g., on a dataset chosen cleverly to show clear failure of HyperGAT but success of SHINE, would improve the paper's quality."
"The paper can be strengthened by positioning strongly dual attention in SHINE with different attention mechanisms in heterogeneous graph neural network literature (some are listed below):
Heterogeneous Graph Attention Network, In WWW'19
HetGNN: Heterogeneous Graph Neural Network, In KDD'19
Metapath enhanced graph attention encoder for HINs representation learning, In BigData'19.
MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding, In WWW'20
Heterogeneous Graph Transformer, In WWW'20.
There is no need to empirically compare and run them as baselines but explaining the key differences conceptually to make hypergraphs a more compelling choice for genetic medicine than heterogeneous graphs can strengthen the paper."
I hope the authors would make a bit more effort to incorporate these suggestions in the final version.
| train | [
"3Xa1WvXErgo",
"EbEeD7kMR5",
"RllXdyULtc",
"8WQH8TtaC6j",
"yam4CVNAqux",
"AlqJ2FDqqYo1",
"6QnXfIv7CDb",
"ZREzLf4f6Ke",
"GUgS_KB-X7I",
"soQcEa9QjBx",
"hR-X65HwHae",
"xrsJ4_Q5L2s",
"cIwGbi4eZg",
"-Ms8W_5J3x",
"5YZI5BfHtM",
"uZUAcvyQtOf"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer FtLi,\n\nThank you for your constructive feedbacks and suggestions again! We greatly appreciate the additional rigorous ablation studies and state-of-the-art baselines (e.g., AllSetTransformer and AllDeepSets) that you suggested, and our new results have further strengthened the paper. We also highl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"AlqJ2FDqqYo1",
"6QnXfIv7CDb",
"soQcEa9QjBx",
"hR-X65HwHae",
"nips_2022_IsHRUzXPqhI",
"ZREzLf4f6Ke",
"xrsJ4_Q5L2s",
"uZUAcvyQtOf",
"nips_2022_IsHRUzXPqhI",
"-Ms8W_5J3x",
"cIwGbi4eZg",
"5YZI5BfHtM",
"nips_2022_IsHRUzXPqhI",
"nips_2022_IsHRUzXPqhI",
"nips_2022_IsHRUzXPqhI",
"nips_2022_Is... |
nips_2022_0TDki1mlcwz | LASSIE: Learning Articulated Shapes from Sparse Image Ensemble via 3D Part Discovery | Creating high-quality articulated 3D models of animals is challenging either via manual creation or using 3D scanning tools. Therefore, techniques to reconstruct articulated 3D objects from 2D images are crucial and highly useful. In this work, we propose a practical problem setting to estimate 3D pose and shape of animals given only a few (10-30) in-the-wild images of a particular animal species (say, horse). Contrary to existing works that rely on pre-defined template shapes, we do not assume any form of 2D or 3D ground-truth annotations, nor do we leverage any multi-view or temporal information. Moreover, each input image ensemble can contain animal instances with varying poses, backgrounds, illuminations, and textures. Our key insight is that 3D parts have much simpler shape compared to the overall animal and that they are robust w.r.t. animal pose articulations. Following these insights, we propose LASSIE, a novel optimization framework which discovers 3D parts in a self-supervised manner with minimal user intervention. A key driving force behind LASSIE is the enforcing of 2D-3D part consistency using self-supervisory deep features. Experiments on Pascal-Part and self-collected in-the-wild animal datasets demonstrate considerably better 3D reconstructions as well as both 2D and 3D part discovery compared to prior arts. Project page: https://chhankyao.github.io/lassie/ | Accept | This paper had substantial discussion amongst reviewers, and concluded with mixed reviews (7, 7, 7, 4). The positive reviewers (mH51, rsf7, LjEQ) actively and strongly championed the paper. The remaining concern comes from boNQ, who (in reviewer-to-reviewer discussions) was primarily concerned with making sure that the authors are clear about specifying limitations in method and evaluation. 
In particular, boNQ was concerned about making sure the following aspects were clearly and prominently discussed in the paper: (a) generalization to new images; (b) assumption of the rest-pose prior; (c) lack of adjusting the shape per-instance; and (d) non-guarantee of gaps between parts. Additionally, boNQ was concerned about keypoint-based evaluations as a proxy for 3D (although was convinced by the videos in the supplemental).
The AC has examined the paper, reviews, and discussion, and is inclined to agree with the accepting reviewers. The paper will be a strong contribution to the literature and will be of great interest to the community. However, the AC notes that many of the reviewers may have similar questions to boNQ. Thus, the AC strongly encourages the authors to address boNQ's concerns in the final version of the paper. With the additional space, the AC believes it will be feasible to specify the limitations more clearly and earlier in the manuscript, and also incorporate additional visualizations demonstrating the effectiveness of the reconstructions. While there is no mechanism for enforcing these changes, they will substantially improve the reception of the paper. | train | [
"6HEUTxqH4_0",
"yU1CFFpb6Z",
"yHiCUhkT7lv",
"Z41TEJOoCG1",
"Qt82lSP0Krl",
"9xPLXT1AhtJ",
"YFcBFuprjb",
"gyzxOGH74_7",
"1ylsrqlhqh",
"dYUfJyHObmW",
"iWwjzdo9lCB",
"5G9bqMqZbzA",
"Vdb6l9FlDql",
"YedAzEZVM8Md",
"Bzlv9nj2eB_",
"Im5D4RRxfou",
"VwTi-2vjRJT",
"Ecz3KPbPn_O",
"qTHSMJFN7Gh... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_re... | [
" I appreciate the authors' detailed response. I like the new experiments on novel image \"inference\". I am also satisfied with the response to the current limitations and have no further questions. I think this submission present important insight on reconstructing articulated objects from sparse images.",
" Th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"YedAzEZVM8Md",
"yHiCUhkT7lv",
"9xPLXT1AhtJ",
"5G9bqMqZbzA",
"nips_2022_0TDki1mlcwz",
"iWwjzdo9lCB",
"dYUfJyHObmW",
"1ylsrqlhqh",
"Vdb6l9FlDql",
"Vdb6l9FlDql",
"Vdb6l9FlDql",
"9czt0pTJERN",
"ASzaLqRnCmD",
"qTHSMJFN7Gh",
"Ecz3KPbPn_O",
"nips_2022_0TDki1mlcwz",
"nips_2022_0TDki1mlcwz",... |
nips_2022_PO6cKxILdi | Bayesian Risk Markov Decision Processes | We consider finite-horizon Markov Decision Processes where parameters, such as transition probabilities, are unknown and estimated from data. The popular distributionally robust approach to addressing the parameter uncertainty can sometimes be overly conservative. In this paper, we propose a new formulation, Bayesian risk Markov decision process (BR-MDP), to address parameter uncertainty in MDPs, where a risk functional is applied in nested form to the expected total cost with respect to the Bayesian posterior distributions of the unknown parameters. The proposed formulation provides more flexible risk attitudes towards parameter uncertainty and takes into account the availability of data in future time stages. To solve the proposed formulation with the conditional value-at-risk (CVaR) risk functional, we propose an efficient approximation algorithm by deriving an analytical approximation of the value function and utilizing the convexity of CVaR. We demonstrate the empirical performance of the BR-MDP formulation and proposed algorithms on a gambler’s betting problem and an inventory control problem. | Accept | Motivated by the often overly conservative characteristics of distributionally robust MDPs, this paper employs (nested) Bayesian posterior distributions to model the uncertainty over MDP parameters. The programming solution is similar to belief state approximation methods for POMDPs. The experiments (after revision) seem to demonstrate the advantages of this approach. The reviewers believe the paper could be improved with better theoretical analyses and/or more compelling experiments (higher dimensional tasks in particular). The paper lacks a strong advocate among the reviewers, but their aggregate sentiment is that it is worth accepting to the conference unless there are other more deserving works that it would displace. | test | [
"3mIlAn1fPdS",
"_0jWgzld7S",
"D_Mb-LjQJ-",
"TFGjEgW-av",
"odOtcTwD7IE",
"37iEV8Sxb2l",
"TlNjywKErww3",
"KcyoQ3qQQ1t",
"umkwerzRnSn",
"5ZWXgMvVB31",
"VskWNCXhYS7",
"TXFOPE_ukbR",
"b3Q1oRqdXRY",
"jJM7DkwOJCn",
"0MPwW4GV8k7",
"gYu3w85eC6U",
"z4wv5X_K88O",
"BY5TBbu5ypn",
"EivJXf0zS42... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thank you for your reply and acknowledgement! We look forward to your final recommendation!",
" I want to thank the authors for their further clarifications over my follow-up comments. I believe I now have a better understanding of the merits of this submission, and I will discuss them with other reviewers befo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"_0jWgzld7S",
"37iEV8Sxb2l",
"TFGjEgW-av",
"gYu3w85eC6U",
"nips_2022_PO6cKxILdi",
"TlNjywKErww3",
"KcyoQ3qQQ1t",
"6tKNg66mjLJ",
"6tKNg66mjLJ",
"6tKNg66mjLJ",
"dust7sxSeA3",
"dust7sxSeA3",
"dust7sxSeA3",
"u53tsiI4tQY",
"u53tsiI4tQY",
"u53tsiI4tQY",
"EivJXf0zS42",
"EivJXf0zS42",
"n... |
nips_2022_16nVkS8Twxo | Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization | Variance reduction techniques such as SPIDER/SARAH/STORM have been extensively studied to improve the convergence rates of stochastic non-convex optimization, which usually maintain and update a sequence of estimators for a single function across iterations. What if we need to track multiple functional mappings across iterations but only with access to stochastic samples of $\mathcal{O}(1)$ functional mappings at each iteration? There is an important application in solving an emerging family of coupled compositional optimization problems in the form of $\sum_{i=1}^m f_i(g_i(\mathbf{w}))$, where $g_i$ is accessible through a stochastic oracle. The key issue is to track and estimate a sequence of $\mathbf g(\mathbf{w})=(g_1(\mathbf{w}), \ldots, g_m(\mathbf{w}))$ across iterations, where $\mathbf g(\mathbf{w})$ has $m$ blocks and it is only allowed to probe $\mathcal{O}(1)$ blocks to attain their stochastic values and Jacobians. To improve the complexity for solving these problems, we propose a novel stochastic method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to track the sequence of $\mathbf g(\mathbf{w})$. It is inspired by STORM but introduces a customized error correction term to alleviate the noise not only in stochastic samples for the selected blocks but also in those blocks that are not sampled. With the help of the MSVR estimator, we develop several algorithms for solving the aforementioned compositional problems with improved complexities across a spectrum of settings with non-convex/convex/strongly convex/Polyak-{\L}ojasiewicz (PL) objectives. Our results improve upon prior ones in several aspects, including the order of sample complexities and dependence on the strong convexity parameter. Empirical studies on multi-task deep AUC maximization demonstrate the better performance of using the new estimator. 
| Accept | The paper makes a nice contribution to the growing field of stochastic compositional optimization. In particular, it considers the case of coupled compositional problems and provides an algorithm that tracks all the inner-level objective information required in an efficient manner. Sample complexities (which are, intuitively, optimal) are established.
The authors **must** emphasize that they work under the stronger Assumption 3 in the revision. | train | [
"cP4gjWLE1fo",
"Aklc2meAJW",
"0mbPb2XjRaO",
"xQsOB5sn_T",
"LbQ6Ba5iAm8",
"8IrmRA4hzWs",
"cG-6bcvTPXj",
"Msems-vxL5x",
"_c-0v9zaNwY",
"6QICfl0PKE9"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Authors, \n\nThanks for your detailed comments. It makes more sense now. ",
" Dear authors,\n\nThank you for the response! It clearly addressed all my concerns.",
" Thank you very much for your constructive comments and suggestions! We will revise accordingly.\n\n---\n\nQ1: It would be much more clear if... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"8IrmRA4hzWs",
"0mbPb2XjRaO",
"Msems-vxL5x",
"_c-0v9zaNwY",
"6QICfl0PKE9",
"cG-6bcvTPXj",
"nips_2022_16nVkS8Twxo",
"nips_2022_16nVkS8Twxo",
"nips_2022_16nVkS8Twxo",
"nips_2022_16nVkS8Twxo"
] |
nips_2022_tuC6teLFZD | Synergy-of-Experts: Collaborate to Improve Adversarial Robustness | Learning adversarially robust models require invariant predictions to a small neighborhood of its natural inputs, often encountering insufficient model capacity. There is research showing that learning multiple sub-models in an ensemble could mitigate this insufficiency, further improving the generalization and the robustness. However, the ensemble's voting-based strategy excludes the possibility that the true predictions remain with the minority. Therefore, this paper further improves the ensemble through a collaboration scheme---Synergy-of-Experts (SoE). Compared with the voting-based strategy, the SoE enables the possibility of correct predictions even if there exists a single correct sub-model. In SoE, every sub-model fits its specific vulnerability area and reserves the rest of the sub-models to fit other vulnerability areas, which effectively optimizes the utilization of the model capacity. Empirical experiments verify that SoE outperforms various ensemble methods against white-box and transfer-based adversarial attacks. | Accept | This paper proposes an ensemble-type solution for improving the adversarial robustness of a model. The proposed idea is simple, yet novel with theoretical supports. The authors did a good job clarifying reviewers' concerns and all reviewers finally recommend acceptance. AC also thinks that this is a good paper in various aspects (novel idea, good write-up, solid theoretical supports) and has a potential to be a generic ensemble-type solution even in non-adversarial/standard setups, e.g., see [1]. Furthermore, the confidence prediction can be used for other purposes, e.g., out-of-distribution detection [2] and active learning [3]. It is useful for readers to discuss about these extensions.
[1] Confident Multiple Choice Learning, Kimin Lee et al., ICML 2017.
[2] Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers, Apoorv Vyas et al., ECCV 2018.
[3] Learning Loss for Active Learning, Donggeun Yoo and In So Kweon, CVPR 2019. | val | [
"06fWOcaoqn2",
"UHwoqXRg1sy",
"B7FY2zmeTJu",
"9M35tIpBksX",
"z1OVmQa5JSm",
"xd7qyoykPo5",
"qk45WHwz5SR",
"xLhZ10Pb3b",
"88SZdQrcLFe3",
"cSPqL_O6bkW",
"ZtDaIE03drm",
"eNhKGjl7XUC",
"WvTWvShbAnCG",
"xKEnJ3g_3o",
"Zk_6DhGXNAD",
"77jTnko3Kgx"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your clarification. I do not have other questions, and I would raise the score.",
" Thanks for your reply! We would like to make every effort to clarify the unclear points.\n\n**Question 1:First, the generation of adversarial examples $\\tilde{x}''$ in Part 3 of your response is not discussed in t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"UHwoqXRg1sy",
"B7FY2zmeTJu",
"z1OVmQa5JSm",
"xd7qyoykPo5",
"Zk_6DhGXNAD",
"qk45WHwz5SR",
"xLhZ10Pb3b",
"77jTnko3Kgx",
"Zk_6DhGXNAD",
"Zk_6DhGXNAD",
"Zk_6DhGXNAD",
"Zk_6DhGXNAD",
"xKEnJ3g_3o",
"nips_2022_tuC6teLFZD",
"nips_2022_tuC6teLFZD",
"nips_2022_tuC6teLFZD"
] |
nips_2022_Q8GnGqT-GTJ | Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning | Prompt learning approaches have made waves in natural language processing by inducing better few-shot performance while they still follow a parametric-based learning paradigm; the oblivion and rote memorization problems in learning may encounter unstable generalization issues. Specifically, vanilla prompt learning may struggle to utilize atypical instances by rote during fully-supervised training or overfit shallow patterns with low-shot data. To alleviate such limitations, we develop RetroPrompt with the motivation of decoupling knowledge from memorization to help the model strike a balance between generalization and memorization. In contrast with vanilla prompt learning, RetroPrompt constructs an open-book knowledge-store from training instances and implements a retrieval mechanism during the process of input, training and inference, thus equipping the model with the ability to retrieve related contexts from the training corpus as cues for enhancement. Extensive experiments demonstrate that RetroPrompt can obtain better performance in both few-shot and zero-shot settings. Besides, we further illustrate that our proposed RetroPrompt can yield better generalization abilities with new datasets. Detailed analysis of memorization indeed reveals RetroPrompt can reduce the reliance of language models on memorization; thus, improving generalization for downstream tasks. Code is available in https://github.com/zjunlp/PromptKG/tree/main/research/RetroPrompt. | Accept | The paper proposes RetroPrompt, which builds a knowledge-store with training examples and improves few-shot and zero-shot performance.
All reviewers appreciate the improvements over competitive baselines and the quality of presentation. The main weaknesses are the lack of ablations to clarify where the gains come from and unclear positioning with respect to previous works (KNN-LM, RETRO, REALM, RAG). The reviewers were unanimous in accepting and the authors addressed some of the issues raised. | test | [
"hyCgs-J09YJ",
"GuU_U7r5OXw",
"vwo0rOEq-dY",
"ykRpRpd30qW",
"edIY11fmtTL",
"WMCFr1-n8H1",
"qxJTDdP4AJf",
"AiARcUhgMfu",
"6YYq5k8rCDs",
"FCCkOsDkfcH",
"doQ8GeJEXHV",
"dVX_lBc8sw5",
"vQC_sYC9NYj",
"vR142V3oUUm"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nWe hope that you've had a chance to read our response. We would really appreciate a reply as to whether our response and clarifications have addressed the issues raised in your review, or whether there is anything else we can address.",
" Dear reviewers, we sincerely appreciate any suggestions... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"vR142V3oUUm",
"nips_2022_Q8GnGqT-GTJ",
"ykRpRpd30qW",
"FCCkOsDkfcH",
"nips_2022_Q8GnGqT-GTJ",
"qxJTDdP4AJf",
"vR142V3oUUm",
"vQC_sYC9NYj",
"dVX_lBc8sw5",
"doQ8GeJEXHV",
"nips_2022_Q8GnGqT-GTJ",
"nips_2022_Q8GnGqT-GTJ",
"nips_2022_Q8GnGqT-GTJ",
"nips_2022_Q8GnGqT-GTJ"
] |
nips_2022_Soadfc-JMeX | HSDF: Hybrid Sign and Distance Field for Modeling Surfaces with Arbitrary Topologies | Neural implicit function based on signed distance field (SDF) has achieved impressive progress in reconstructing 3D models with high fidelity. However, such approaches can only represent closed shapes.
Recent works based on unsigned distance function (UDF) are proposed to handle both watertight and open surfaces.
Nonetheless, as UDF is signless, its direct output is limited to point cloud, which imposes an additional challenge on extracting high-quality meshes from discrete points.
To address this issue, we present a new learnable implicit representation, coded HSDF, that connects the good ends of SDF and UDF. In particular, HSDF is able to represent arbitrary topologies containing both closed and open surfaces while being compatible with existing iso-surface extraction techniques for easy field-to-mesh conversion. In addition to predicting a UDF, we propose to learn an additional sign field via a simple classifier. Unlike traditional SDF, HSDF is able to locate the surface of interest before level surface extraction by generating surface points following NDF~\cite{chibane2020ndf}. We are then able to obtain open surfaces via an adaptive meshing approach that only instantiates regions containing surface into a polygon mesh. We also propose HSDF-Net, a dedicated learning framework that factorizes the learning of HSDF into two easier problems.
Experiments on multiple datasets show that HSDF outperforms state-of-the-art techniques both qualitatively and quantitatively. | Accept | The reviewers agree that the paper's idea to include both sign and distance fields is a valuable contribution to 3D computer vision research.
Reviewers ask sensible clarifying questions (e.g. orienting the training data, sign network continuity) and the rebuttal's answers are illuminating and to the point.
A short note on terminology: I agree that "adversarial" should not be used here as it has a special meaning for the wider NeurIPS audience. Regarding other wording suggestions, I add no extra vote for or against.
| train | [
"BKGqAO_Ye-h",
"5w5ehCuIJFT",
"7zOGEdWwNUE",
"ixSAzRzzrJ",
"8dGgN01Hwbl",
"MGOogJlOvdH",
"1GuEZ2jtB1r",
"PQmNtS7KWiC",
"DkhJCTT_1Sl",
"hc0av-EJdtYz",
"QPhvP7fFvkG0",
"8F-t8eUpqopP",
"eaHAgmV46_-",
"UiBkByyU-hq",
"qOETV4zKql"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the reviewer nGbx for further explanation, we provide some point cloud data[A] (mentioned by reviewer oe6G) and the reconstruction script[B] provided by NDF's author in these links below. If you are interested, you can try it for the reconstruction of the point cloud.\n\n[A] https://ufile.io/kdu0hoja\n\... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"ixSAzRzzrJ",
"8dGgN01Hwbl",
"MGOogJlOvdH",
"8dGgN01Hwbl",
"QPhvP7fFvkG0",
"hc0av-EJdtYz",
"nips_2022_Soadfc-JMeX",
"qOETV4zKql",
"UiBkByyU-hq",
"UiBkByyU-hq",
"eaHAgmV46_-",
"eaHAgmV46_-",
"nips_2022_Soadfc-JMeX",
"nips_2022_Soadfc-JMeX",
"nips_2022_Soadfc-JMeX"
] |
nips_2022_cRNl08YWRKq | Obj2Seq: Formatting Objects as Sequences with Class Prompt for Visual Tasks | Visual tasks vary a lot in their output formats and concerned contents, therefore it is hard to process them with an identical structure. One main obstacle lies in the high-dimensional outputs in object-level visual tasks. In this paper, we propose an object-centric vision framework, Obj2Seq. Obj2Seq takes objects as basic units, and regards most object-level visual tasks as sequence generation problems of objects. Therefore, these visual tasks can be decoupled into two steps. First recognize objects of given categories, and then generate a sequence for each of these objects. The definition of the output sequences varies for different tasks, and the model is supervised by matching these sequences with ground-truth targets. Obj2Seq is able to flexibly determine input categories to satisfy customized requirements, and be easily extended to different visual tasks. When experimenting on MS COCO, Obj2Seq achieves 45.7% AP on object detection, 89.0% AP on multi-label classification and 65.0% AP on human pose estimation. These results demonstrate its potential to be generally applied to different visual tasks. Code has been made available at: https://github.com/CASIA-IVA-Lab/Obj2Seq. | Accept | The paper proposes an approach for formulating a few visual tasks as sequence prediction with class prompt. Reviewers are overall positive about the paper, especially the direction towards a unified vision model where the paper is exploring. However, it is also pointed out the paper should be more explicit about how the sequence is modeled with object queries and bipartite graph matching loss, which are significant differences from standard sequence modeling, as presented in language models, or Pix2Seq v1/v2. 
The authors should consider pointing out these differences in the abstract and Figures 1/2, to avoid misleading readers into thinking this is just like language modeling with an autoregressive loss. Overall I'd recommend accepting the paper given it is a good attempt towards a unified vision model, but I also encourage the authors to further improve the writing and clarify the sequence modeling part as mentioned above.
| train | [
"zav4cScyGk1",
"I4C9ueIQslD",
"84hIRL-8_vg",
"qVcwiyGtfcE",
"3bjv2friH7a",
"PIa0b71KPPU",
"rgTLKCf2Uzb",
"1ANpEBB26PA",
"gV74kBO8xhH",
"KWuYU3in46v",
"ylHzrPXYu-V",
"iwVa7TnHUNZ",
"LkjsjbQ-RWz",
"hRfuoR9g1p",
"I2YT3QKJXSp",
"2nBLDGYJB2O"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for their careful thoughts and kindly comments. We have updated the manuscript to emphasize more on how Obj2Seq is able to solve object-level visual tasks in a unified way, and therefore make our description more accurate and easier to understand. We will keep working to generalize Obj2Seq ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
1
] | [
"3bjv2friH7a",
"qVcwiyGtfcE",
"PIa0b71KPPU",
"ylHzrPXYu-V",
"gV74kBO8xhH",
"iwVa7TnHUNZ",
"1ANpEBB26PA",
"KWuYU3in46v",
"2nBLDGYJB2O",
"I2YT3QKJXSp",
"hRfuoR9g1p",
"LkjsjbQ-RWz",
"nips_2022_cRNl08YWRKq",
"nips_2022_cRNl08YWRKq",
"nips_2022_cRNl08YWRKq",
"nips_2022_cRNl08YWRKq"
] |
nips_2022_1ryTomA0iKa | Riemannian Neural SDE: Learning Stochastic Representations on Manifolds | In recent years, the neural stochastic differential equation (NSDE) has gained attention for modeling stochastic representations with great success in various types of applications. However, it typically loses expressivity when the data representation is manifold-valued. To address this issue, we suggest a principled method for expressing the stochastic representation with the Riemannian neural SDE (RNSDE), which extends the conventional Euclidean NSDE. Empirical results for various tasks demonstrate that the proposed method significantly outperforms baseline methods. | Accept | There was a consensus towards weak acceptance among all the reviewers, and I agree with this consensus. This paper solves an important problem of applying SDEs to manifolds. It is clearly written, and all the reviewers agree that the claims are well-supported by strong experimental results. On the other hand, this clarity of writing perhaps relies overmuch on familiarity with the area, and some effort should be made to smooth out the presentation for a general NeurIPS audience. Beyond this, the weaknesses pointed out by the reviewers were well addressed by the author response, including an additional experimental result that provides a comparison to Moser Flow: there is no strong unaddressed weakness that would merit rejection. I think that the manifold learning community, as well as the Neural ODE community more broadly, at NeurIPS will find this work interesting and useful, as it expands the range of methods they can apply on manifolds. As such, I lean towards accepting this paper. | train | [
"Xmgfi7voG3H",
"br3zpxZq_d",
"HK1_M-bqGWw",
"qI_A1GZmXJa",
"20RnQXosm8P",
"7cpUOuIXL8M",
"8Ivqt_XIt7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad to inform that every reviewers appreciated the contributions and acknowledged the strength of our paper. In this section, we compare our model with prior work [D] and give a brief explanation and comparison. The contents in this comment will be added in an additional content page for the camera-ready ... | [
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"nips_2022_1ryTomA0iKa",
"20RnQXosm8P",
"7cpUOuIXL8M",
"8Ivqt_XIt7",
"nips_2022_1ryTomA0iKa",
"nips_2022_1ryTomA0iKa",
"nips_2022_1ryTomA0iKa"
] |
nips_2022_C9yUwd72yy | Learning Latent Seasonal-Trend Representations for Time Series Forecasting | Forecasting complex time series is ubiquitous and vital in a range of applications but challenging. Recent advances endeavor to achieve progress by incorporating various deep learning techniques (e.g., RNN and Transformer) into sequential models. However, clear patterns are still hard to extract since time series are often composed of several intricately entangled components. Motivated by the success of disentangled variational autoencoder in computer vision and classical time series decomposition, we plan to infer a couple of representations that depict seasonal and trend components of time series. To achieve this goal, we propose LaST, which, based on variational inference, aims to disentangle the seasonal-trend representations in the latent space. Furthermore, LaST supervises and disassociates representations from the perspectives of themselves and input reconstruction, and introduces a series of auxiliary objectives. Extensive experiments prove that LaST achieves state-of-the-art performance on time series forecasting task against the most advanced representation learning and end-to-end forecasting models. For reproducibility, our implementation is publicly available on Github. | Accept | The paper presents a novel learning approach named LaST for time-series forecasting based on variational inference to disentangle the seasonal-trend representations in the latent space. Empirical results validate the effectiveness of the proposed method in comparison with several strong baselines. Reviewers generally agree the work is technically solid, the idea is novel, the experiments are convincing, and the paper is well presented. Thus, the paper is clearly above the acceptance bar. Authors are encouraged to incorporate all the discussions and the additional results during the rebuttal in the final version. | train | [
"xgkBP6cE25T",
"PmQZsEWYDre",
"dX0-aFFUF6mf",
"RdnCN-MIFDC",
"XYynOTFts0F",
"i2Ro0Cy9h_8",
"qPy03NJWtxQ",
"kXu2QFq97dK",
"ynME5FnsLJe",
"sRmTTlhqtKA",
"r6OxGF_cWAz",
"ovVwsDvcg6",
"6u_gBx4gZKF",
"zO7QCIwaOvP",
"FqjOEGdKLx7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nThe authors have provided the rebuttal responses. The discussion period between authors and reviewers will end soon. \nPlease do check the author's response, acknowledge your reading, and update your review if needed. \nIf there is any further question, please do ask the authors to clarify befo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"nips_2022_C9yUwd72yy",
"dX0-aFFUF6mf",
"RdnCN-MIFDC",
"zO7QCIwaOvP",
"zO7QCIwaOvP",
"zO7QCIwaOvP",
"zO7QCIwaOvP",
"FqjOEGdKLx7",
"6u_gBx4gZKF",
"6u_gBx4gZKF",
"ovVwsDvcg6",
"nips_2022_C9yUwd72yy",
"nips_2022_C9yUwd72yy",
"nips_2022_C9yUwd72yy",
"nips_2022_C9yUwd72yy"
] |
nips_2022_tbId-oAOZo | QueryPose: Sparse Multi-Person Pose Regression via Spatial-Aware Part-Level Query | We propose a sparse end-to-end multi-person pose regression framework, termed QueryPose, which can directly predict multi-person keypoint sequences from the input image. The existing end-to-end methods rely on dense representations to preserve the spatial detail and structure for precise keypoint localization. However, the dense paradigm introduces complex and redundant post-processes during inference. In our framework, each human instance is encoded by several learnable spatial-aware part-level queries associated with an instance-level query. First, we propose the Spatial Part Embedding Generation Module (SPEGM) that considers the local spatial attention mechanism to generate several spatial-sensitive part embeddings, which contain spatial details and structural information for enhancing the part-level queries. Second, we introduce the Selective Iteration Module (SIM) to adaptively update the sparse part-level queries via the generated spatial-sensitive part embeddings stage-by-stage. Based on the two proposed modules, the part-level queries are able to fully encode the spatial details and structural information for precise keypoint regression. With the bipartite matching, QueryPose avoids the hand-designed post-processes. Without bells and whistles, QueryPose surpasses the existing dense end-to-end methods with 73.6 AP on MS COCO mini-val set and 72.7 AP on CrowdPose test set. Code is available at https://github.com/buptxyb666/QueryPose. | Accept | The authors propose a novel framework for end-to-end multi-person pose estimation by employing a set of learnable part-level queries along with instance-level queries. Promising results are demonstrated on the challenging COCO and CrowdPose datasets. The provided author rebuttal successfully addressed all reviewer concerns. As a result, all four reviewers recommend accepting the paper. 
The AC has read the paper, reviewer comments, author rebuttal, and all the discussions. The AC agrees with the reviewer recommendations. The authors are encouraged to include the rebuttal results (e.g., runtime analysis) to their camera-ready. | train | [
"AS-SsXbmuMj",
"snUv03txNV8",
"CUFsG2vv6D",
"sQ_XoyN1OhQ",
"E5WAAOvoht6",
"W3WzaiZnt6_",
"GR9_8qWy5gb",
"jTh-x0JioAD",
"i6tYPys7Rvp",
"fQuPBOB5EZ5",
"8exCq9F5UO",
"9Zm4EN68RJ-",
"9ljvollRits",
"ymKHlXYAZAg"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the previous insightful comments. We also would like to receive your further response about our clarifications.",
" We are glad to hear that the concerns have been addressed. Thanks again for the time and effort in reviewing our paper. The constructive suggestions help us make our paper better.",
... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"GR9_8qWy5gb",
"sQ_XoyN1OhQ",
"i6tYPys7Rvp",
"jTh-x0JioAD",
"W3WzaiZnt6_",
"nips_2022_tbId-oAOZo",
"ymKHlXYAZAg",
"9ljvollRits",
"9Zm4EN68RJ-",
"8exCq9F5UO",
"nips_2022_tbId-oAOZo",
"nips_2022_tbId-oAOZo",
"nips_2022_tbId-oAOZo",
"nips_2022_tbId-oAOZo"
] |
nips_2022_OHkq7qNr72- | A Mixture Of Surprises for Unsupervised Reinforcement Learning | Unsupervised reinforcement learning aims at learning a generalist policy in a reward-free manner for fast adaptation to downstream tasks. Most of the existing methods propose to provide an intrinsic reward based on surprise. Maximizing or minimizing surprise drives the agent to either explore or gain control over its environment. However, both strategies rely on a strong assumption: the entropy of the environment's dynamics is either high or low. This assumption may not always hold in real-world scenarios, where the entropy of the environment's dynamics may be unknown. Hence, choosing between the two objectives is a dilemma. We propose a novel yet simple mixture of policies to address this concern, allowing us to optimize an objective that simultaneously maximizes and minimizes the surprise. Concretely, we train one mixture component whose objective is to maximize the surprise and another whose objective is to minimize the surprise. Hence, our method does not make assumptions about the entropy of the environment's dynamics. We call our method a $\textbf{M}\text{ixture }\textbf{O}\text{f }\textbf{S}\text{urprise}\textbf{S}$ (MOSS) for unsupervised reinforcement learning. Experimental results show that our simple method achieves state-of-the-art performance on the URLB benchmark, outperforming previous pure surprise maximization-based objectives. Our code is available at: https://github.com/LeapLabTHU/MOSS. | Accept | This paper proposes a method for unsupervised skill discovery, which learns a mixture of policies that simultaneously maximizes and minimizes the surprise. All the reviewers agree that the paper tackles an important active research area. The paper is well written; the motivation is well explained; the proposed method is simple and easy-to-implement; and it performs well on the benchmark. 
Several concerns were raised by the reviewers, including novelty, delta beyond the interpolated CIC, and the choice of M. The rebuttal and the additional experimental results have addressed some of these concerns. One of the main criticisms, the choice of when to switch the objectives, still remains after the rebuttal. Although the current heuristic is simple, it seems ad-hoc, without good justification. After discussion with reviewers, we agree that this limitation is compensated for by the other contributions of the paper, and helps to set the stage for future work that does optimize M or devises a new method that does not rely on it. Thus, we recommend accepting this paper. | train | [
"WcEprqb2qF",
"D1ZKeQ9NO0P",
"cvVk_kBg-So",
"isIvJmqOCHv",
"LHDNuxz1ecB",
"JUkXFvOoPK",
"hepydie_US_",
"QdyLeY1rsdm",
"hdFH16srl7z",
"DC7do0cQ9_C",
"QhvG92gH0zP",
"AXEBllPVy8X",
"-mPx8SsbOag"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your responses. They comprehensively addressed my concerns.",
" We appreciate the follow-up response from the reviewer. Although our objective switching mechanism appears to be ad-hoc, we consider its simplicity a strength for the following reasons: it does not increase the computational cost duri... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"JUkXFvOoPK",
"cvVk_kBg-So",
"hepydie_US_",
"nips_2022_OHkq7qNr72-",
"-mPx8SsbOag",
"-mPx8SsbOag",
"AXEBllPVy8X",
"QhvG92gH0zP",
"DC7do0cQ9_C",
"nips_2022_OHkq7qNr72-",
"nips_2022_OHkq7qNr72-",
"nips_2022_OHkq7qNr72-",
"nips_2022_OHkq7qNr72-"
] |
nips_2022_pCrB8orUkSq | Monocular Dynamic View Synthesis: A Reality Check | We study the recent progress on dynamic view synthesis (DVS) from monocular video. Though existing approaches have demonstrated impressive results, we show a discrepancy between the practical capture process and the existing experimental protocols, which effectively leaks in multi-view signals during training. We define effective multi-view factors (EMFs) to quantify the amount of multi-view signal present in the input capture sequence based on the relative camera-scene motion. We introduce two new metrics: co-visibility masked image metrics and correspondence accuracy, which overcome the issue in existing protocols. We also propose a new iPhone dataset that includes more diverse real-life deformation sequences. Using our proposed experimental protocol, we show that the state-of-the-art approaches observe a 1-2 dB drop in masked PSNR in the absence of multi-view cues and 4-5 dB drop when modeling complex motion. Code and data can be found at http://hangg7.com/dycheck. | Accept | Pre-rebuttal, this paper had mixed reviews. Post-rebuttal, the paper had two strong supporters, A6gt and vDbH, who argued that the paper provides valuable insights into an important field, as well as a supporter dLU6, who commented in the discussion below that they are in favor of the paper (although did not update their review). The only remaining criticism comes from 2BcV. The AC does not find 2BcV's review persuasive (A6gt's comments summarize the AC's perspective well) and 2BcV did not participate in discussion. The AC is inclined to accept the paper and encourages the authors to use their extra page to integrate their responses to the reviewers. | test | [
"y6X2UEzJIpJ",
"D_j-GoSB9f",
"G2u-3KO6N4Q",
"A_M9uad6L9",
"wiBGHcuP605",
"Ka3-Ozb-NLB",
"vy_vt0cuN-q",
"UcUugUcVq9P",
"2z_9k7qbRFj",
"zTqu_Xjhls7",
"8EX8_O33Hzw",
"Ch1bNawxkfB",
"PDN1Abae72",
"yPNVkq5NFJv",
"H5_3FlzWouL",
"Xqe-Mv7xpxx"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We’d like to thank the reviewers for the discussion. We are glad that they are now positive about the work, and would request the reviewer to kindly update their rating/review to reflect this. We will modify the text in the final version using the comments from Reviewer vDhB to help position the work and its impo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
4
] | [
"D_j-GoSB9f",
"G2u-3KO6N4Q",
"PDN1Abae72",
"wiBGHcuP605",
"vy_vt0cuN-q",
"2z_9k7qbRFj",
"UcUugUcVq9P",
"Xqe-Mv7xpxx",
"H5_3FlzWouL",
"yPNVkq5NFJv",
"PDN1Abae72",
"nips_2022_pCrB8orUkSq",
"nips_2022_pCrB8orUkSq",
"nips_2022_pCrB8orUkSq",
"nips_2022_pCrB8orUkSq",
"nips_2022_pCrB8orUkSq"
... |
nips_2022_jSorGn2Tjg | Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures | Antibodies are immune system proteins that protect the host by binding to specific antigens such as viruses and bacteria. The binding between antibodies and antigens is mainly determined by the complementarity-determining regions (CDR) of the antibodies. In this work, we develop a deep generative model that jointly models sequences and structures of CDRs based on diffusion probabilistic models and equivariant neural networks. Our method is the first deep learning-based method that generates antibodies explicitly targeting specific antigen structures and is one of the earliest diffusion probabilistic models for protein structures. The model is a "Swiss Army Knife" capable of sequence-structure co-design, sequence design for given backbone structures, and antibody optimization. We conduct extensive experiments to evaluate the quality of both sequences and structures of designed antibodies. We find that our model could yield competitive results in binding affinity measured by biophysical energy functions and other protein design metrics. | Accept | This is a very exciting and timely paper that elegantly enables CDR sequence-structure co-design, sequence design given a certain backbone, and antibody optimization.
The reviewers and AC all appreciate the extensive feedback provided by the authors and the additional studies included in the supplements. We strongly encourage the authors to also incorporate in their manuscript certain points made in their feedback. In particular please include comments to
- contrast the proposed approach with (i) the work "Iterative Refinement Graph Neural Network for Antibody Sequence-Structure Co-design" and (ii) neutralization prediction approaches
- highlight the limitations of docking algorithms and the pertinent future work direction of generating antibody orientations for antigens
- clarify various points, such as description of Figure 1. | train | [
"GoPhnphp6w",
"ToM4S0PHJIn",
"saZNNC5zryk",
"T1rH8mSdctM",
"kbnjaU4sqH",
"UyCKO5jIrWNW",
"U-jiuHMHlCko",
"_XsZRYOtyr",
"64q5oVGC0f0",
"kkNqs5DuFQT",
"al5S-1vHUc39",
"24poOHqLJeT",
"AMytTHJwwq8",
"0Ck8zXfinzA"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nThanks for addressing all of my comments! I increased my score to “weak accept”.\n\n[Q2] I agree with you that MSA based approaches such as AlphaFold2 or RosettaFold are less accurate for predicting CDR loops due to often insufficient homologs for building a reliable MSA. However, recent approach... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"kbnjaU4sqH",
"64q5oVGC0f0",
"T1rH8mSdctM",
"al5S-1vHUc39",
"UyCKO5jIrWNW",
"U-jiuHMHlCko",
"_XsZRYOtyr",
"0Ck8zXfinzA",
"kkNqs5DuFQT",
"AMytTHJwwq8",
"24poOHqLJeT",
"nips_2022_jSorGn2Tjg",
"nips_2022_jSorGn2Tjg",
"nips_2022_jSorGn2Tjg"
] |
nips_2022_QLGuUwDx4S | DropCov: A Simple yet Effective Method for Improving Deep Architectures | Previous works show global covariance pooling (GCP) has great potential to improve deep architectures especially on visual recognition tasks, where post-normalization of GCP plays a very important role in final performance. Although several post-normalization strategies have been studied, these methods pay closer attention to the effect of normalization on covariance representations rather than on the whole GCP networks, and their effectiveness requires further understanding. Meanwhile, existing effective post-normalization strategies (e.g., matrix power normalization) usually suffer from high computational complexity (e.g., $O(d^{3})$ for $d$-dimensional inputs). To handle the above issues, this work first analyzes the effect of post-normalization from the perspective of training GCP networks. Particularly, we show for the first time that \textit{effective post-normalization can make a good trade-off between representation decorrelation and information preservation for GCP, which are crucial to alleviate over-fitting and increase representation ability of deep GCP networks, respectively}. Based on this finding, we can improve existing post-normalization methods with some small modifications, providing further support to our observation. Furthermore, this finding encourages us to propose a novel pre-normalization method for GCP (namely DropCov), which develops an adaptive channel dropout on features right before GCP, aiming to reach a trade-off between representation decorrelation and information preservation in a more efficient way. Our DropCov only has a linear complexity of $O(d)$, while being free for inference. 
Extensive experiments on various benchmarks (i.e., ImageNet-1K, ImageNet-C, ImageNet-A, Stylized-ImageNet, and iNat2017) show our DropCov is superior to the counterparts in terms of efficiency and effectiveness, and provides a simple yet effective method to improve performance of deep architectures involving both deep convolutional neural networks (CNNs) and vision transformers (ViTs). | Accept | Reviewer Fzo6, reviewer VWPn, and reviewer 5eq4 all have concerns and questions regarding equation 5. Please add the clarifications to the paper, along with intuition and more discussion of Eq. 5.
The paper and comments from the authors indicate that dropout-based regularizations are effective (Maxdropout, Maxout and Decov also outperform GCP). This does mean that a large part of the benefits of the proposed method (no inference processing, lower complexity) are the result of the dropout/regularization in training, which takes away from the contributions.
Overall, I think the paper is borderline, leaning to acceptance, as the proposed DropCov does outperform other dropout/regularization methods and I believe the paper might benefit the community. I'd strongly encourage the authors to review their manuscript and address the reviewers’ concerns as best as possible in the revised manuscript. | train | [
"hYxrbNCEApM",
"lwA2Pin-DEU",
"nKTIpsVcVL",
"YuEhvamr4oq",
"0LK9Ncl_n6f",
"uvQAFPfdmRF",
"5yysqDOh1Uh",
"mkK0-YOZNA",
"1CUngiee1q",
"2WqE6OvkFR_"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" [Q1]: For the experiments in Table 2, the counterpart of DropChannel seems analogous to ACD except that the probability in ACD is determined by features, while the performance gap is so significant and even higher than DropElement. Could the authors analyze the phenomenon? How does the DropChannel and DropElement... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"lwA2Pin-DEU",
"2WqE6OvkFR_",
"1CUngiee1q",
"nips_2022_QLGuUwDx4S",
"5yysqDOh1Uh",
"mkK0-YOZNA",
"nips_2022_QLGuUwDx4S",
"nips_2022_QLGuUwDx4S",
"nips_2022_QLGuUwDx4S",
"nips_2022_QLGuUwDx4S"
] |
nips_2022_XrECTbqRCfX | Approximate Secular Equations for the Cubic Regularization Subproblem | The cubic regularization method (CR) is a popular algorithm for unconstrained non-convex optimization. At each iteration, CR solves a cubically regularized quadratic problem, called the cubic regularization subproblem (CRS). One way to solve the CRS relies on solving the secular equation, whose computational bottleneck lies in the computation of all eigenvalues of the Hessian matrix. In this paper, we propose and analyze a novel CRS solver based on an approximate secular equation, which requires only some of the Hessian eigenvalues and is therefore much more efficient. Two approximate secular equations (ASEs) are developed. For both ASEs, we first study the existence and uniqueness of their roots and then establish an upper bound on the gap between the root and that of the standard secular equation. Such an upper bound can in turn be used to bound the distance from the approximate CRS solution based on ASEs to the true CRS solution, thus offering a theoretical guarantee for our CRS solver. A desirable feature of our CRS solver is that it requires only matrix-vector multiplication but not matrix inversion, which makes it particularly suitable for high-dimensional applications of unconstrained non-convex optimization, such as low-rank recovery and deep learning. Numerical experiments with synthetic and real datasets are conducted to investigate the practical performance of the proposed CRS solver. Experiment results show that the proposed solver outperforms two state-of-the-art methods. | Accept | This paper proposes a new method for solving the cubic subproblem in the cubic regularized Newton method. The proposed method is simple, but works very well in practice. The numerical experiments demonstrated that the ARC algorithm combined with the proposed new subproblem solver significantly outperforms ARC with different subproblem solvers. 
Moreover, the accuracy of the solution generated by the proposed method can be several orders better than the one generated by other methods. Error analysis of the proposed method is also provided. Overall, this is a very nice contribution to the cubic regularized Newton method, which has the potential to accelerate this important method. | train | [
"1JNQo5N6BYY",
"r51mTVIXku4",
"Lhb-7O13k4",
"8QMmhdaz7nQ",
"h3a4lyvmct",
"hrbw0UewJ_k",
"3Goka3XdFJD",
"OLSYTN14gtd",
"SZ16x6VCVSr",
"T1FgLoYC--g9",
"EzQU_swpPZ",
"rdJkxO2d9W",
"ujErMiQJ9S8",
"yUfGcTHkHV",
"4znGsKepf1j",
"1OL6YAbibWS",
"iuj_MH2mEYE",
"Q_PtvDK2Wp"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer MvBV,\n\nThanks again for your review and comments. Do you have further comments? We hope our response answers all your concerns well.\n\nWe would like to state something further.\n## Contribution\nin this paper, **our main contribution** is the proposed novel ASEM in solving cubic subproblems, theo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
4,
4
] | [
"4znGsKepf1j",
"1OL6YAbibWS",
"8QMmhdaz7nQ",
"h3a4lyvmct",
"hrbw0UewJ_k",
"SZ16x6VCVSr",
"OLSYTN14gtd",
"T1FgLoYC--g9",
"Q_PtvDK2Wp",
"iuj_MH2mEYE",
"1OL6YAbibWS",
"1OL6YAbibWS",
"4znGsKepf1j",
"4znGsKepf1j",
"nips_2022_XrECTbqRCfX",
"nips_2022_XrECTbqRCfX",
"nips_2022_XrECTbqRCfX",
... |
nips_2022_QqWqFLbllZh | Spatial Pruned Sparse Convolution for Efficient 3D Object Detection | 3D scenes are dominated by a large number of background points, which are redundant for the detection task that mainly needs to focus on foreground objects. In this paper, we analyze major components of existing sparse 3D CNNs and find that 3D CNNs ignore the redundancy of data and further amplify it in the down-sampling process, which brings a huge amount of extra and unnecessary computational overhead. Inspired by this, we propose a new convolution operator named spatial pruned sparse convolution (SPS-Conv), which includes two variants, spatial pruned submanifold sparse convolution (SPSS-Conv) and spatial pruned regular sparse convolution (SPRS-Conv), both of which are based on the idea of dynamically determining crucial areas for performing computations to reduce redundancy. We empirically find that the magnitude of features can serve as an important cue to determine crucial areas, which gets rid of the heavy computations of learning-based methods. The proposed modules can easily be incorporated into existing sparse 3D CNNs without extra architectural modifications. Extensive experiments on the KITTI and nuScenes datasets demonstrate that our method can achieve more than 50% reduction in GFLOPs without compromising the performance. | Accept | The paper shows that it is possible to obtain good savings in terms of both FLOPs and latency using sparse convolutions for 3D object detection by leveraging the magnitude of features. After a strong rebuttal, all 4 reviewers vote for acceptance of the paper with high confidence.
I suggest that the authors incorporate the comments from reviewers and some of the results from the rebuttal to make the paper more immediately convincing upon a first reading. | train | [
"VLS98bvtQv",
"VJAg41xViH",
"uwthCSy6CCx",
"2ApmzBcmzlO",
"jAGnJfKpQS4",
"OkjJlNs2m84",
"2Sv-sPclQ2C",
"KcsaNv8iDD23",
"Lnr0U-W7w2R",
"HwGY9dx-vci",
"7mlXBxC0IEdb",
"RdqYOYgp6qR",
"LQZAKxxSqgG",
"WFg8LnOVn-U",
"t151OooZatl",
"qRj2gV6q4L-",
"o2yFdphOjcS",
"dbnbi7oRQq",
"aaJwubmzNI... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer TzZm:\n\nWe are sincerely grateful for your positive feedback of our work. Thanks for your remind that the visualization should still be further explored. We are preparing and will add these to our paper.\n\nBest regards,\n\nPaper 1506 authors",
" Thanks for the authors for providing the visualiza... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"VJAg41xViH",
"WFg8LnOVn-U",
"2ApmzBcmzlO",
"LQZAKxxSqgG",
"qRj2gV6q4L-",
"nips_2022_QqWqFLbllZh",
"KcsaNv8iDD23",
"aaJwubmzNIB",
"aaJwubmzNIB",
"aaJwubmzNIB",
"dbnbi7oRQq",
"dbnbi7oRQq",
"o2yFdphOjcS",
"qRj2gV6q4L-",
"nips_2022_QqWqFLbllZh",
"nips_2022_QqWqFLbllZh",
"nips_2022_QqWqF... |
nips_2022_H5z5Q--YdYd | BMU-MoCo: Bidirectional Momentum Update for Continual Video-Language Modeling | Video-language models suffer from forgetting old/learned knowledge when trained with streaming data. In this work, we thus propose a continual video-language modeling (CVLM) setting, where models are supposed to be sequentially trained on five widely-used video-text datasets with different data distributions. Although most of the existing continual learning methods have achieved great success by exploiting extra information (e.g., memory data of past tasks) or dynamically extended networks, they cause enormous resource consumption when transferred to our CVLM setting. To overcome the challenges (i.e., catastrophic forgetting and heavy resource consumption) in CVLM, we propose a novel cross-modal MoCo-based model with bidirectional momentum update (BMU), termed BMU-MoCo. Concretely, our BMU-MoCo has two core designs: (1) Different from the conventional MoCo, we apply the momentum update to not only momentum encoders but also encoders (i.e., bidirectional) at each training step, which enables the model to review the learned knowledge retained in the momentum encoders. (2) To further enhance our BMU-MoCo by utilizing earlier knowledge, we additionally maintain a pair of global momentum encoders (only initialized at the very beginning) with the same BMU strategy. Extensive results show that our BMU-MoCo remarkably outperforms recent competitors w.r.t. video-text retrieval performance and forgetting rate, even without using any extra data or dynamic networks. | Accept | This work presents a study on continual video-language modeling. In addition to the modeling side of things (BMU-MoCo), the authors construct a new benchmark in which they compare a number of existing methods. 
While I think it's great that the authors came up with a new benchmark, it's always a somewhat difficult analysis when a paper comes up with both a new benchmark and a method that beats the previous methods on this new benchmark. This is a shared concern with at least one of the reviewers. I do note that computational limitations make it difficult for the authors to thoroughly test on many other benchmarks. The authors do provide some UCL results in the rebuttal, which strengthen their case.
All in all, I have to agree with reviewer ZHhB that the method is somewhat complicated and that the gains from the global branch seem overstated. I found the methods part hard to follow as well. All in all, the work does seem interesting and important, but perhaps I am more convinced about the benchmark rather than the method. As is, I would still recommend it for acceptance, but I feel it would need a good amount of work to improve clarity of the exposition and ideally more robust empirical evidence (that doesn't involve benchmarks created by the authors themselves).
| train | [
"kzlelIfENv",
"6zl3ymWKhKK",
"S1FDyfIpfOZ",
"xPfmENZFPc",
"Hclm6svFN2WN",
"lSUq70lL6-8",
"KZ5Id0HpUMm",
"W3xlJKxni5l",
"3uxolWR4eHh",
"DcopD6h7jSS",
"anXLYgLcawF",
"2TPPIFGSs-_",
"wFUi9weFn0h",
"pwAgPw6jSiY",
"srJqej-zfHT",
"H3SiN9K4NZ_"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely appreciate all reviewers’ time and efforts in reviewing our paper. We are glad to find that reviewers generally recognized our contributions: \n\n* **Model.** Proposing a novel cross-modal MoCo-based model with bidirectional momentum update for continual learning [6RoA, ZHhB].\n* **Setting.** Introd... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2022_H5z5Q--YdYd",
"Hclm6svFN2WN",
"xPfmENZFPc",
"anXLYgLcawF",
"lSUq70lL6-8",
"3uxolWR4eHh",
"H3SiN9K4NZ_",
"pwAgPw6jSiY",
"wFUi9weFn0h",
"H3SiN9K4NZ_",
"H3SiN9K4NZ_",
"srJqej-zfHT",
"pwAgPw6jSiY",
"nips_2022_H5z5Q--YdYd",
"nips_2022_H5z5Q--YdYd",
"nips_2022_H5z5Q--YdYd"
] |
nips_2022_GAUwreODU5L | GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images | As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, thus immediately usable in downstream applications. Prior works on 3D generative modeling either lack geometric details, are limited in the mesh topology they can produce, typically do not support textures, or utilize neural renderers in the synthesis process, which makes their use in common 3D software non-trivial. In this work, we introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high fidelity textures. We bridge recent success in the differentiable surface modeling, differentiable rendering as well as 2D Generative Adversarial Networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings, achieving significant improvements over previous methods. | Accept |
The paper proposes a generative model for synthesizing textured 3D meshes given only a collection of 2D images. The paper has received overwhelmingly positive reviews. Many reviewers find the idea interesting, the paper well-written, the results compelling, and the experiments comprehensive. The rebuttal further addressed the concerns such as camera poses and missing comparisons. The AC agreed with the reviewers’ consensus and recommended accepting the paper. | train | [
"JTbtH5FEXg",
"_W9jeqEu10",
"IGbMrHNj7_",
"1Yt5-YGW0_",
"4y8gR9uaBQ",
"E2h7HsKveZ",
"jN2KkfNEedU",
"mEI_HhX42Bz",
"bozwr55AvRr",
"wNqjHJ7XPNE",
"opUHMgrgGJ_",
"QCKD5SL9ZmE",
"JUdeoF7_cMs",
"I2X9p8gxZt",
"ZzGCMQ-7anQ",
"Mcwlix9NNcg"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the feedback! \n\nThat's really great point to interpolate the geometry code and texture code individually! We provide two such results in our revised main paper (see Sec 6.5, Fig 16 & 17). Please check it.\n\nFor each figure, at every row, we interpolate the geometry latent code while ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"IGbMrHNj7_",
"opUHMgrgGJ_",
"wNqjHJ7XPNE",
"bozwr55AvRr",
"jN2KkfNEedU",
"nips_2022_GAUwreODU5L",
"QCKD5SL9ZmE",
"nips_2022_GAUwreODU5L",
"Mcwlix9NNcg",
"ZzGCMQ-7anQ",
"I2X9p8gxZt",
"JUdeoF7_cMs",
"nips_2022_GAUwreODU5L",
"nips_2022_GAUwreODU5L",
"nips_2022_GAUwreODU5L",
"nips_2022_GA... |
nips_2022_3vYkhJIty7E | Learning Optical Flow from Continuous Spike Streams | Spike camera is an emerging bio-inspired vision sensor with ultra-high temporal resolution. It records scenes by accumulating photons and outputting continuous binary spike streams. Optical flow is a key task for spike cameras and their applications. A previous attempt has been made for spike-based optical flow. However, the previous work only focuses on motion between two moments, and it uses graphics-based data for training, whose generalization is limited. In this paper, we propose a tailored network, Spike2Flow that extracts information from binary spikes with temporal-spatial representation based on the differential of spike firing time and spatial information aggregation. The network utilizes continuous motion clues through joint correlation decoding. Besides, a new dataset with real-world scenes is proposed for better generalization. Experimental results show that our approach achieves state-of-the-art performance on existing synthetic datasets and real data captured by spike cameras. The source code and dataset are available at \url{https://github.com/ruizhao26/Spike2Flow}. | Accept | This work is focused on the estimation of optical flow from a neuromorphic camera that produces Poisson spiking at each pixel with a rate governed by overall intensity. The authors use local space-time aggregation of spike-time differentials to identify features that are then corresponded via a convGRU decoder.
The reviewers found the application interesting and noted the good performance of the method. There were, however, a number of concerns about the innovation and novelty of the method: specifically, aggregating spikes to operate on point process data is a standard approach, and the spiking source of the data was not assessed. Regardless of the similarity to past methods, overall the reviewers felt that the strengths of the paper, specifically the combination of methods brought together to solve a unique problem, outweighed the weaknesses. Thus I recommend that this work be accepted. | train | [
"rVNfHZ5rMRq",
"SeOeHhGlL91",
"2soTGHJ5-kq",
"Z5OKEFLB0q",
"twsT-ipE7mC",
"noYzkM0WIHT",
"lphZOpi2_Y",
"MDafw_TfeLg",
"Y9F1gWduQ0U",
"vUvsI0ugtPV",
"zRIyKakHpu5",
"ZQEgAEmheSA",
"bmdsOSNV9DW",
"2WzeIT3cQxg",
"qh_fMBqlHK-",
"flzzpgQU92Z",
"Zj-1hFl_Usf",
"0bD6A6FEqk7",
"_C7XeuntLTp... | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thank you for your precious time and valuable comments. Please let us know if you have any further questions, and we will be glad to discuss with you to provide further details about our work if you like.",
" Thank you for your precious time and valuable comments. Please let us know if you have any further ques... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"lphZOpi2_Y",
"MDafw_TfeLg",
"vUvsI0ugtPV",
"_C7XeuntLTp",
"lphZOpi2_Y",
"lphZOpi2_Y",
"flzzpgQU92Z",
"GoxQM9UD0Tz",
"vUvsI0ugtPV",
"2WzeIT3cQxg",
"_C7XeuntLTp",
"DE16BQcVrQo",
"DE16BQcVrQo",
"DE16BQcVrQo",
"GoxQM9UD0Tz",
"HDlrCXrYgb6",
"HDlrCXrYgb6",
"HDlrCXrYgb6",
"nips_2022_3v... |
nips_2022_7KBzV5IL7W | INRAS: Implicit Neural Representation for Audio Scenes | The spatial acoustic information of a scene, i.e., how sounds emitted from a particular location in the scene are perceived in another location, is key for immersive scene modeling. Robust representation of scene's acoustics can be formulated through a continuous field formulation along with impulse responses varied by emitter-listener locations. The impulse responses are then used to render sounds perceived by the listener. While such representation is advantageous, parameterization of impulse responses for generic scenes presents itself as a challenge. Indeed, traditional pre-computation methods have only implemented parameterization at discrete probe points and require large storage, while other existing methods such as geometry-based sound simulations still suffer from inability to simulate all wave-based sound effects. In this work, we introduce a novel neural network for light-weight Implicit Neural Representation for Audio Scenes (INRAS), which can render a high fidelity time-domain impulse responses at any arbitrary emitter-listener positions by learning a continuous implicit function. INRAS disentangles scene’s geometry features with three modules to generate independent features for the emitter, the geometry of the scene, and the listener respectively. These lead to an efficient reuse of scene-dependent features and support effective multi-condition training for multiple scenes. Our experimental results show that INRAS outperforms existing approaches for representation and rendering of sounds for varying emitter-listener locations in all aspects, including the impulse response quality, inference speed, and storage requirements. | Accept | This is a technically good paper, with some flaws. Parts of the paper are hard to read. Several questions remain, e.g. how to determine the optimal number and location of bouncepoints, and how they depend on room layout and content.
The motivation behind some of the comparisons, e.g. to AACs, is unclear.
Regardless, the overall paper presents a well-defined novel idea and represents a significant contribution. The authors have also addressed most of the reviewers' comments satisfactorily.
I am recommending that the paper be accepted.
| val | [
"-42w23NjxwK",
"4Q4NexdYhST",
"VioZK76GhfQ",
"FJnxoLuE70v",
"VOTP1wSYq7-",
"pgWyXofd5jL",
"jmmrNTnCI6-",
"i2A5mDe4KCl"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for the response, which addresses many of my concerns. The new comparions are very helpful. It makes sense to use a compact neural representation to encode the acoustic fields, which can be computationally expensive to simulate. However, I am still not very convinced that the dataset used in the... | [
-1,
-1,
-1,
-1,
-1,
8,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"4Q4NexdYhST",
"i2A5mDe4KCl",
"jmmrNTnCI6-",
"jmmrNTnCI6-",
"pgWyXofd5jL",
"nips_2022_7KBzV5IL7W",
"nips_2022_7KBzV5IL7W",
"nips_2022_7KBzV5IL7W"
] |
nips_2022_Zzi8Od19DSU | Posterior and Computational Uncertainty in Gaussian Processes | Gaussian processes scale prohibitively with the size of the dataset. In response, many approximation methods have been developed, which inevitably introduce approximation error. This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior. Therefore in practice, GP models are often as much about the approximation method as they are about the data. Here, we develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended. The most common GP approximations map to an instance in this class, such as methods based on the Cholesky factorization, conjugate gradients, and inducing points. For any method in this class, we prove (i) convergence of its posterior mean in the associated RKHS, (ii) decomposability of its combined posterior covariance into mathematical and computational covariances, and (iii) that the combined variance is a tight worst-case bound for the squared error between the method's posterior mean and the latent function. Finally, we empirically demonstrate the consequences of ignoring computational uncertainty and show how implicitly modeling it improves generalization performance on benchmark datasets. | Accept | Gaussian Processes are a very nice modelisation tool in Bayesian nonparametrics, with very nice uncertainty quantification. But they also lead to serious computational issues. So, in practice, it is difficult to know what part of the uncertainty is due to the data, and what is due to approximations in the computations. Here, the authors propose a new (and cheap) iterative approximation, IterGP. They analyse carefully its approximation error in Section 3. Experimental results corroborate this analysis. 
Overall, IterGP can reach the same accuracy as previous methods with a limited number of steps, and thus a smaller computational burden. Increasing the number of steps will of course make it even more accurate.
The four reviewers agreed on the relevance of the algorithm, the quality of its technical analysis and the quality of the experimental results (I agree with them). Reviewer 5Mot praised the high quality of the writing. The other reviewers overall agreed, but provided a list of minor points that could be fixed to improve the paper. I therefore recommend to accept this paper. | train | [
"RGkMla_GnZa",
"H_FKlhmIrEH",
"TNEfXFm_JCs",
"xFnTlU5_vvT",
"ds3q3_ikncf",
"4KvCzVA5ba_",
"sGbrvQ4EJ6w",
"GgBki4KHVIr",
"PlPom84hajb",
"ElqZ8lzBl8W",
"5aflQp4nTU",
"K3tbPQFcg6",
"rXpXrUxHKFm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed explanations. I will keep my score as it is. ",
" Thank you for your response and feedback! We will make sure to include the details on the stopping criteria used and a clarification about the grey lines in Algorithm 1 in the final version.",
" Thank you for addressing my comments a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
3,
4
] | [
"GgBki4KHVIr",
"TNEfXFm_JCs",
"4KvCzVA5ba_",
"ds3q3_ikncf",
"PlPom84hajb",
"ElqZ8lzBl8W",
"5aflQp4nTU",
"K3tbPQFcg6",
"rXpXrUxHKFm",
"nips_2022_Zzi8Od19DSU",
"nips_2022_Zzi8Od19DSU",
"nips_2022_Zzi8Od19DSU",
"nips_2022_Zzi8Od19DSU"
] |
nips_2022_9s3CbJh4vRP | Precise Regret Bounds for Log-loss via a Truncated Bayesian Algorithm | We study sequential general online regression, known also as sequential probability assignments, under logarithmic loss when compared against a broad class of experts. We obtain tight, often matching, lower and upper bounds for sequential minimax regret, which is defined as the excess loss incurred by the predictor over the best expert in the class. After proving a general upper bound we consider some specific classes of experts from Lipschitz class to bounded Hessian class and derive matching lower and upper bounds with provably optimal constants. Our bounds work for a wide range of values of the data dimension and the number of rounds. To derive lower bounds, we use tools from information theory (e.g., Shtarkov sum) and for upper bounds, we resort to new "smooth truncated covering" of the class of experts. This allows us to find constructive proofs by applying a simple and novel truncated Bayesian algorithm. Our proofs are substantially simpler than the existing ones and yet provide tighter (and often optimal) bounds. | Accept | This paper considers the problem of online learning with the logarithmic loss, and provides a new algorithm based on smoothing of the log loss which matches certain rates that were previously only achieved through non-constructive methods.
Reviewers agreed that the algorithm and proof technique are novel, and that the resulting regret bounds improve over the state of the art. For the final version of the paper, the authors are encouraged to incorporate the reviewers' comments regarding presentation. | train | [
"LmTWlASJQuG",
"6xl1RGCbBz1",
"mKHWKuYYYrg",
"RP7b2XSi81",
"kLdUOMffq00",
"5VCm_vJd7m",
"2XkS45eWoNn",
"bK_t8ccFlOR"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I'd like to thank the authors to carefully address my concerns. I was the only reviewer that had a hard time with the presentation, so do not feel obligated to drastically change the paper because of it. My guess is that I am more acquainted with the study of (simulatable) experts in the online learning literatur... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
2,
3,
5
] | [
"kLdUOMffq00",
"mKHWKuYYYrg",
"bK_t8ccFlOR",
"2XkS45eWoNn",
"5VCm_vJd7m",
"nips_2022_9s3CbJh4vRP",
"nips_2022_9s3CbJh4vRP",
"nips_2022_9s3CbJh4vRP"
] |
nips_2022_UVF3yybAjF | Robust Testing in High-Dimensional Sparse Models | We consider the problem of robustly testing the norm of a high-dimensional sparse signal vector under two different observation models. In the first model, we are given $n$ i.i.d. samples from the distribution $\mathcal{N}\left(\theta,I_d\right)$ (with unknown $\theta$), of which a small fraction has been arbitrarily corrupted. Under the promise that $\|\theta\|_0\le s$, we want to correctly distinguish whether $\|\theta\|_2=0$ or $\|\theta\|_2>\gamma$, for some input parameter $\gamma>0$. We show that any algorithm for this task requires $n=\Omega\left(s\log\frac{ed}{s}\right)$ samples, which is tight up to logarithmic factors. We also extend our results to other common notions of sparsity, namely, $\|\theta\|_q\le s$ for any $0 < q < 2$. In the second observation model that we consider, the data is generated according to a sparse linear regression model, where the covariates are i.i.d. Gaussian and the regression coefficient (signal) is known to be $s$-sparse. Here too we assume that an $\epsilon$-fraction of the data is arbitrarily corrupted. We show that any algorithm that reliably tests the norm of the regression coefficient requires at least $n=\Omega\left(\min(s\log d,{1}/{\gamma^4})\right)$ samples. Our results show that the complexity of testing in these two settings significantly increases under robustness constraints. This is in line with the recent observations made in robust mean testing and robust covariance testing. | Accept | The reviewers and I agree that this result is a solid, if not groundbreaking, result in the theory of robust statistics. It considers a very natural testing problem, and when the fraction of corrupted samples is constant, completely settles the statistical complexity of the problem. 
There are some concerns that its immediate practical impact is limited, but overall, the consensus is that the paper represents a solid technical contribution, and will be of interest to the robust statistics community, and the learning theory community at large. Therefore, we believe this paper is above the bar for acceptance at NeurIPS. | val | [
"Fif6n9dyjL",
"ON50mzBeEmbu",
"yUDyie5CpUI",
"QbS53lfqBrG",
"WKa9WPPp-bTu",
"2QtiQK4tY9B",
"ozLDMBNOq_G",
"JydQ_ggJr3I",
"HYlynQ6B8vj",
"UIZLfgwUH8R"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I'm happy with the author response and will maintain my score.",
" Your response addresses most of my questions and helps my assessment of the significance of your results. \nDue to the overlap in the techniques required to prove the current lower bounds with lower bounds in the non-sparse setting, I will maint... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2
] | [
"WKa9WPPp-bTu",
"yUDyie5CpUI",
"ozLDMBNOq_G",
"JydQ_ggJr3I",
"HYlynQ6B8vj",
"UIZLfgwUH8R",
"nips_2022_UVF3yybAjF",
"nips_2022_UVF3yybAjF",
"nips_2022_UVF3yybAjF",
"nips_2022_UVF3yybAjF"
] |
nips_2022_VrJWseIN98 | VER: Scaling On-Policy RL Leads to the Emergence of Navigation in Embodied Rearrangement | We present Variable Experience Rollout (VER), a technique for efficiently scaling batched on-policy reinforcement learning in heterogeneous environments (where different environments take vastly different times to generate rollouts) to many GPUs residing on, potentially, many machines. VER combines the strengths of and blurs the line between synchronous and asynchronous on-policy RL methods (SyncOnRL and AsyncOnRL, respectively). Specifically, it learns from on-policy experience (like SyncOnRL) and has no synchronization points (like AsyncOnRL), enabling high throughput.
We find that VER leads to significant and consistent speed-ups across a broad range of embodied navigation and mobile manipulation tasks in photorealistic 3D simulation environments. Specifically, for PointGoal navigation and ObjectGoal navigation in Habitat 1.0, VER is 60-100% faster (1.6-2x speedup) than DD-PPO, the current state of art for distributed SyncOnRL, with similar sample efficiency. For mobile manipulation tasks (open fridge/cabinet, pick/place objects) in Habitat 2.0 VER is 150% faster (2.5x speedup) on 1 GPU and 170% faster (2.7x speedup) on 8 GPUs than DD-PPO. Compared to SampleFactory (the current state-of-the-art AsyncOnRL), VER matches its speed on 1 GPU, and is 70% faster (1.7x speedup) on 8 GPUs with better sample efficiency.
We leverage these speed-ups to train chained skills for GeometricGoal rearrangement tasks in the Home Assistant Benchmark (HAB). We find a surprising emergence of navigation in skills that do not ostensibly require any navigation. Specifically, the Pick skill involves a robot picking an object from a table. During training the robot was always spawned close to the table and never needed to navigate. However, we find that if base movement is part of the action space, the robot learns to navigate then pick an object in new environments with 50% success, demonstrating surprisingly high out-of-distribution generalization. | Accept | The paper proposes a novel method that takes the best of both worlds: synchronous and asynchronous on-policy RL methods.
The rebuttal nicely addressed the concerns of most reviewers.
Why the method makes sense and has benefits is rather straightforward and intuitive (which is a good thing!). The paper is clearly an experimental paper / systems paper with very extensive evaluations that show significant practical benefits. The method potentially has large practical impact, which is an important contribution to the community (and a valid - though less common - NeurIPS paper format). Hence I disagree with reviewer 45yB about requiring a theoretical novelty/contribution.
*** not visible to authors as posted after discussion phase ***
Agree to raise the score
NeurIPS 2022 Conference Paper1458 Reviewer ep2X
18 Aug 2022
Excellent work, impressive results!
I will raise my score.
| test | [
"dhZLgbShIr",
"K8OrmRbR8bh",
"TuxPvkPGFWo",
"wYNN90p9omZ",
"-yCz3vjZQd",
"UITutjrQ7o0",
"FV1RBE5_Lw",
"hsfK2Wcg9Y-",
"3RMUAvuqovIK",
"Qa1f7TqmMuj",
"PymNiM_Kcv9",
"QmZIHqh7fcq",
"uI69wY6zO5w",
"OFJWa5xNZUY",
"gfaSuQaGLgc",
"jGjAPVli4w",
"zaQ5ZEB7hQL",
"AScXMIxnP_-"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response, I will be maintaining my score in favor of acceptance. ",
" We hope that we have addressed your concerns. Are you satisfied with our response or do you have additional questions?",
" Thank you for your suggestions, we agree that these plots have increased the clarity of... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"QmZIHqh7fcq",
"zaQ5ZEB7hQL",
"wYNN90p9omZ",
"UITutjrQ7o0",
"FV1RBE5_Lw",
"hsfK2Wcg9Y-",
"uI69wY6zO5w",
"3RMUAvuqovIK",
"AScXMIxnP_-",
"zaQ5ZEB7hQL",
"zaQ5ZEB7hQL",
"jGjAPVli4w",
"gfaSuQaGLgc",
"nips_2022_VrJWseIN98",
"nips_2022_VrJWseIN98",
"nips_2022_VrJWseIN98",
"nips_2022_VrJWseI... |
nips_2022_KFxIsdIvUj | Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning | While Reinforcement Learning (RL) aims to train an agent from a reward function in a given environment, Inverse Reinforcement Learning (IRL) seeks to recover the reward function from observing an expert's behavior. It is well known that, in general, various reward functions can lead to the same optimal policy, and hence, IRL is ill-defined. However, \cite{cao2021identifiability} showed that, if we observe two or more experts with different discount factors or acting in different environments, the reward function can under certain conditions be identified up to a constant. This work starts by showing an equivalent identifiability statement from multiple experts in tabular MDPs based on a rank condition, which is easily verifiable and is shown to be also necessary. We then extend our result to various different scenarios, i.e., we characterize reward identifiability in the case where the reward function can be represented as a linear combination of given features, making it more interpretable, or when we have access to approximate transition matrices. Even when the reward is not identifiable, we provide conditions characterizing when data on multiple experts in a given environment allows to generalize and train an optimal agent in a new environment. Our theoretical results on reward identifiability and generalizability are validated in various numerical experiments. | Accept | The paper provides an investigation of conditions for recovering the reward function up to a constant from multiple experts. While the assumption of observing multiple (entropy-regularized) experts acting in different environments is quite strong, the authors did a good job in justifying and further explaining the setting in the rebuttal. While the paper is incremental, I agree with the reviewers that the paper is solid and interesting. | train | [
"gi3vJUWlen-",
"Wd8oOszDwpO",
"Ia11C9GEdiY",
"52_PoFdyO9p",
"kyxOVsjhjV7",
"0OXvquvYOU-B",
"bcr83JSDalV",
"O8pTeEFG92H",
"IQPf77_JIjc",
"RXjb9h8Bpf2",
"5MYYIhaPAar",
"ufOYqJGTmjk",
"ct-zqnn3erWV",
"l2-LOy1Jyky",
"_NeFRkdx5mG",
"YHpoQlFngUH",
"1h5TMpLY4S",
"eCA2SPjLsZ",
"ECJ9GecyE... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Dear reviewer,\n\nas the discussion period ends soon we were wondering if you have any other question on the differences between our work and (Ng et al. 1999). If that is the case, we are happy to have further discussion.\n\nBest,\nAuthors ",
" Thank you for the positive feedback. We will add this conclusion i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"52_PoFdyO9p",
"Ia11C9GEdiY",
"0OXvquvYOU-B",
"bcr83JSDalV",
"RXjb9h8Bpf2",
"nips_2022_KFxIsdIvUj",
"O8pTeEFG92H",
"IQPf77_JIjc",
"_NeFRkdx5mG",
"ct-zqnn3erWV",
"l2-LOy1Jyky",
"qW9xNVZ5cpN",
"ECJ9GecyE8u",
"eCA2SPjLsZ",
"1h5TMpLY4S",
"nips_2022_KFxIsdIvUj",
"nips_2022_KFxIsdIvUj",
... |
nips_2022_4kjQZTNz-NH | AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos | This paper studies the problem of real-world video super-resolution (VSR) for animation videos, and reveals three key improvements for practical animation VSR. First, recent real-world super-resolution approaches typically rely on degradation simulation using basic operators without any learning capability, such as blur, noise, and compression. In this work, we propose to learn such basic operators from real low-quality animation videos, and incorporate the learned ones into the degradation generation pipeline. Such neural-network-based basic operators could help to better capture the distribution of real degradations. Second, a large-scale high-quality animation video dataset, AVC, is built to facilitate comprehensive training and evaluations for animation VSR. Third, we further investigate an efficient multi-scale network structure. It takes advantage of the efficiency of unidirectional recurrent networks and the effectiveness of sliding-window-based methods. Thanks to the above delicate designs, our method, AnimeSR, is capable of restoring real-world low-quality animation videos effectively and efficiently, achieving superior performance to previous state-of-the-art methods. | Accept | The paper proposes a method for super-resolution of animation videos. The contribution is three-fold: a new approach to learned image degradations, a dataset of high-resolution animation videos, and a multiscale model architecture. The method demonstrated good empirical results while being substantially faster than prior approaches.
All reviewers are positive about the paper (although to a different degree) and mention that the proposed "learned basic operators" are interesting and new, the dataset is valuable, and the method is thoroughly evaluated and works well.
Overall, the paper is a solid application paper with some interesting new ideas and I recommend acceptance. I highly encourage the authors to update the paper based on the discussions with the reviewers, in particular with the details on dataset creation and the rescaling factor. | train | [
"hlvLkVJg0C_",
"E_Xsf-JE0-K",
"zhPGnxeTmEI",
"CwQGqEsXmbB",
"04Fk-7XXpR7",
"uQoyiEpgYCI",
"mFFqG3i0e0t",
"95CkRBoQUGt",
"t6yxYEP58R",
"T-ir0ZWXhI_",
"n5uFR54A28x",
"zEusZnOSHbZ",
"GUm5Lk-FcY"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your feedback.\n\nWe are glad that the detailed comments and explanations can clarify the unclear parts. We will add those detailed descriptions to the manuscript.\n\nWe agree with the reviewer that the human labor in the rescaling factor selection is a limitation. As explained above, such manual selec... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"E_Xsf-JE0-K",
"T-ir0ZWXhI_",
"CwQGqEsXmbB",
"mFFqG3i0e0t",
"zEusZnOSHbZ",
"GUm5Lk-FcY",
"zEusZnOSHbZ",
"n5uFR54A28x",
"n5uFR54A28x",
"n5uFR54A28x",
"nips_2022_4kjQZTNz-NH",
"nips_2022_4kjQZTNz-NH",
"nips_2022_4kjQZTNz-NH"
] |
nips_2022_NgIf3FpcHie | Rethinking Alignment in Video Super-Resolution Transformers | The alignment of adjacent frames is considered an essential operation in video super-resolution (VSR). Advanced VSR models, including the latest VSR Transformers, are generally equipped with well-designed alignment modules. However, the progress of the self-attention mechanism may violate this common sense. In this paper, we rethink the role of alignment in VSR Transformers and make several counter-intuitive observations. Our experiments show that: (i) VSR Transformers can directly utilize multi-frame information from unaligned videos, and (ii) existing alignment methods are sometimes harmful to VSR Transformers. These observations indicate that we can further improve the performance of VSR Transformers simply by removing the alignment module and adopting a larger attention window. Nevertheless, such designs will dramatically increase the computational burden, and cannot deal with large motions. Therefore, we propose a new and efficient alignment method called patch alignment, which aligns image patches instead of pixels. VSR Transformers equipped with patch alignment could demonstrate state-of-the-art performance on multiple benchmarks. Our work provides valuable insights on how multi-frame information is used in VSR and how to select alignment methods for different networks/datasets. Codes and models will be released at https://github.com/XPixelGroup/RethinkVSRAlignment. | Accept | This paper rethinks the role of alignment in Transformer-based video super-resolution. Video alignment is costly and may require manual effort. The paper makes several inspiring and counter-intuitive observations, such as that alignment is unnecessary and may even be harmful to the Transformer model. The authors present a new model that uses patch alignment instead of pixel-level alignment, together with a larger window size, and achieves non-trivial improvements over the SOTA methods. 
Most of the reviewers agreed on the contribution and significance of this paper to the community. | train | [
"jhPxOIanD1",
"MbITuTiGqAu",
"25oPGa8gC30",
"vmgqDU6JCm4",
"7OB7QjqUFuA",
"lYHVsy3f5jlg",
"_T6hGQ9Cvc3",
"SH7Vluvmf3Tb",
"IFVlfem71pq",
"65fr-g7hzQh",
"0aIfqI2PHD-",
"Qpn9TgOpJix",
"NEbSjGDQY7q",
"CKfYmWO3U4",
"JSVHTMt4c1f",
"DJTriNuno6m",
"OHmgKXO_ifV",
"szaeuSzPZ98",
"gzzvuCpTx... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your detailed response. I would like to raise my rating.",
" Dear Reviewer CC5y:\n\nThanks again for your precious time and valuable comments.\n\nInitially, Reviewer F7UK (denoted as R3) and Reviewer TWT3 (denoted as R4) both thought positively about our work. After rebuttal, Reviewer j86q (denoted a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
4
] | [
"_T6hGQ9Cvc3",
"szaeuSzPZ98",
"lYHVsy3f5jlg",
"7clqEYpHnNa",
"H0ILTT3guth",
"gzzvuCpTxq7",
"szaeuSzPZ98",
"nips_2022_NgIf3FpcHie",
"7clqEYpHnNa",
"H0ILTT3guth",
"H0ILTT3guth",
"gzzvuCpTxq7",
"gzzvuCpTxq7",
"gzzvuCpTxq7",
"szaeuSzPZ98",
"szaeuSzPZ98",
"szaeuSzPZ98",
"nips_2022_NgIf3... |
nips_2022_pGcTocvaZkJ | Censored Quantile Regression Neural Networks for Distribution-Free Survival Analysis | This paper considers doing quantile regression on censored data using neural networks (NNs). This adds to the survival analysis toolkit by allowing direct prediction of the target variable, along with a distribution-free characterisation of uncertainty, using a flexible function approximator. We begin by showing how an algorithm popular in linear models can be applied to NNs. However, the resulting procedure is inefficient, requiring sequential optimisation of an individual NN at each desired quantile. Our major contribution is a novel algorithm that simultaneously optimises a grid of quantiles output by a single NN. To offer theoretical insight into our algorithm, we show firstly that it can be interpreted as a form of expectation-maximisation, and secondly that it exhibits a desirable `self-correcting' property. Experimentally, the algorithm produces quantiles that are better calibrated than existing methods on 10 out of 12 real datasets. | Accept | This paper studies the quantile regression of censored data. Neural network models are used as the statistical model. Numerical results show the proposed algorithm is computationally efficient and attains high prediction accuracy compared to existing methods. Since reviewers agree that this paper is well written and interesting, I recommend accepting the paper. | train | [
"5BhuO_Ls3O", "DKzGbbL5kge", "A2GINEwU7d", "VrFuvD0nTJp", "X0RUh2mGs2o", "lw_XjbGnoRM", "NpIXPgaDMSMe", "uro6jvy6TPM", "lUWNEUBmFX", "VFh4wmBzd3i", "paJ5iGtnptN", "hTX64kd6KXh", "ywvkrYAnsf", "-TJhni2iFfj", "3VrjEXkhPja" ] | [
"author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" Thanks for pushing on this. After another look, we agree that Algorithm 1 \\& 2 should use $\\hat{\\mathbf{w}}$ (it conflicts as is), and it would be better to be consistent in the text about whether we refer to estimates or true quantities (e.g. line 117 should use estimates). As per your suggestion, dropping th... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 6 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4 ] | [
"DKzGbbL5kge", "VFh4wmBzd3i", "X0RUh2mGs2o", "lw_XjbGnoRM", "lUWNEUBmFX", "uro6jvy6TPM", "nips_2022_pGcTocvaZkJ", "3VrjEXkhPja", "-TJhni2iFfj", "ywvkrYAnsf", "hTX64kd6KXh", "nips_2022_pGcTocvaZkJ", "nips_2022_pGcTocvaZkJ", "nips_2022_pGcTocvaZkJ", "nips_2022_pGcTocvaZkJ" ] |
nips_2022_AsH-Tx2U0Ug | Effective Backdoor Defense by Exploiting Sensitivity of Poisoned Samples | Poisoning-based backdoor attacks are serious threat for training deep models on data from untrustworthy sources. Given a backdoored model, we observe that the feature representations of poisoned samples with trigger are more sensitive to transformations than those of clean samples. It inspires us to design a simple sensitivity metric, called feature consistency towards transformations (FCT), to distinguish poisoned samples from clean samples in the untrustworthy training set. Moreover, we propose two effective backdoor defense methods. Built upon a sample-distinguishment module utilizing the FCT metric, the first method trains a secure model from scratch using a two-stage secure training module. And the second method removes backdoor from a backdoored model with a backdoor removal module which alternatively unlearns the distinguished poisoned samples and relearns the distinguished clean samples. Extensive results on three benchmark datasets demonstrate the superior defense performance against eight types of backdoor attacks, to state-of-the-art backdoor defenses. Codes are available at: https://github.com/SCLBD/Effective_backdoor_defense. | Accept | The authors propose a new method for defending against backdoor attacks which is based on the observation that poisoned samples are more sensitive to transformations than clean samples. They design a metric called \textit{feature consistency towards transformations (FCT)} to distinguish poisoned samples from clean samples in the untrustworthy training set.
The paper received favorable reviews and has made substantial updates during the rebuttal phase to the general satisfaction of the reviewers. I thus recommend accept.
| train | [
"FIHJLUpCBJa", "JHLfrLBZaiU", "3dGD4ewg6LQ", "M0lAQb13NGv", "QTvcQmHlDXZZ", "VAdhUeKBVYK", "r1qHqHeqSKn", "mlZufn1NFVd", "yMhYBrvLL9G", "n7nzmn3mUWE", "KLUYeZz2-Bz", "OpH6jt706mD-", "K4GVYRZv44z", "USKcSwP-CTq", "ITZS4wmbEo1", "n63Bm_t8cO", "LAAoEhS-6CM", "ZQdazYtdTFu", "b2yeTSo-... | [
"author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ... | [
" Dear Reviewer,\n\nThanks for you constructive comments which are really beneficial to our work. And we will add the limitation on the feature dimensionality into the final manuscript.",
" I acknowledge I read all the responses and I want to thank the authors for their thorough explanations. My scores stay the s... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 5, 7 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ] | [
"JHLfrLBZaiU", "4xjRo3AA04s", "VAdhUeKBVYK", "QTvcQmHlDXZZ", "r1qHqHeqSKn", "OpH6jt706mD-", "aWqaoddm5gX", "aWqaoddm5gX", "cq2blWJobTA", "cq2blWJobTA", "cq2blWJobTA", "cq2blWJobTA", "4xjRo3AA04s", "4xjRo3AA04s", "4xjRo3AA04s", "4xjRo3AA04s", "4xjRo3AA04s", "b2yeTSo-9F0", "nips_20... |
nips_2022_er4GR0wHWQO | Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm | The Partial Area Under the ROC Curve (PAUC), typically including One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC), measures the average performance of a binary classifier within a specific false positive rate and/or true positive rate interval, which is a widely adopted measure when decision constraints must be considered. Consequently, PAUC optimization has naturally attracted increasing attention in the machine learning community within the last few years. Nonetheless, most of the existing methods could only optimize PAUC approximately, leading to inevitable biases that are not controllable. Fortunately, a recent work presents an unbiased formulation of the PAUC optimization problem via distributional robust optimization. However, it is based on the pair-wise formulation of AUC, which suffers from the limited scalability w.r.t. sample size and a slow convergence rate, especially for TPAUC. To address this issue, we present a simpler reformulation of the problem in an asymptotically unbiased and instance-wise manner. For both OPAUC and TPAUC, we come to a nonconvex strongly concave min-max regularized problem of instance-wise functions. On top of this, we employ an efficient solver that enjoys a linear per-iteration computational complexity w.r.t. the sample size and a time-complexity of $O(\epsilon^{-1/3})$ to reach a $\epsilon$ stationary point. Furthermore, we find that the min-max reformulation also facilitates the theoretical analysis of generalization error as a byproduct. Compared with the existing results, we present new error bounds that are much easier to prove and could deal with hypotheses with real-valued outputs. Finally, extensive experiments on several benchmark datasets demonstrate the effectiveness of our method. 
| Accept | The paper presented a novel reformulation of maximzing PAUC in an asymptotically unbiased and instance-wise manner. Based on this formulation, the authors presented an efficient stochastic min-max algorithm for OPAUC and TPAUC maximization. Convergence and generalization analysis were conducted. The concerns and questions are well addressed in the rebuttal. Following the recommendation from the reviewers, I recommend its acceptance. | train | [
"NC_kPC_FyZ", "my5XGQ2wYAv", "zx9ApGRjs_", "E6gzBn1k0oz", "jYA0GZmRPOZ", "lsAm1SNwur3", "RPmibdUySzT", "C06fF-b0qdb", "7JBCjnmMgX5", "piBdHZRPCCk", "UvDhsX55nbV", "p0NKdSs56V", "-HnuACj0lr3", "bYm0TjxljJe", "URU8OW1IzgH", "rplbj07AGSJ", "bjlaaHXzfFM", "RWM3108R5SF", "4pzM61h-5Hm"... | [
"official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ... | [
" The authors have addressed all my concerns. I'm very appreciated to see the new version with so many revisons done! So, I'm happy to keep my score as a strong accept.",
" Thanks for making modifications. I raised my score since the authors provide solid proof to fix the problems and make necessary modifications... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8, 8 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 5, 5 ] | [
"jYA0GZmRPOZ", "zx9ApGRjs_", "E6gzBn1k0oz", "UvDhsX55nbV", "nips_2022_er4GR0wHWQO", "p0NKdSs56V", "bYm0TjxljJe", "bYm0TjxljJe", "bYm0TjxljJe", "bYm0TjxljJe", "bYm0TjxljJe", "-HnuACj0lr3", "RWM3108R5SF", "TnDol5yBhgaJ", "k92K3JuWulV", "k92K3JuWulV", "k92K3JuWulV", "k92K3JuWulV", "... |
nips_2022_Lpla1jmJkW | Constants of motion network | The beauty of physics is that there is usually a conserved quantity in an always-changing system, known as the constant of motion. Finding the constant of motion is important in understanding the dynamics of the system, but typically requires mathematical proficiency and manual analytical work. In this paper, we present a neural network that can simultaneously learn the dynamics of the system and the constants of motion from data. By exploiting the discovered constants of motion, it can produce better predictions on dynamics and can work on a wider range of systems than Hamiltonian-based neural networks. In addition, the training progresses of our method can be used as an indication of the number of constants of motion in a system which could be useful in studying a novel physical system. | Accept | 2 of the 3 reviewers highly appreciated the rebuttal and are now recommending the paper for acceptance without any reservations. The 3rd, most critical reviewer, FmUK did unfortunately not react. The new experiments "learning from pixels" nicely addresses the reviewer's concern about having to carefully choose the system state. Also the concern about the number of constants of motion is well addressed. The question about the sensitivity to noise could have been stronger: FmUK was talking about physical systems that typically have accelerations as part of their states, but typical sensors only measure positions/angles (which typically already produce slightly noisy measurements). Applying numerical differentiation twice to get to the accelerations often results in very noisy measurements. Hence the question how representative the experiments (where the states - that also include $\dot{x}$ - are assumed to be measured perfectly) are for real systems. I think this is still an interesting point to discuss - but no deal-breaker for me. | val | [
"YDUrRXUyf9J", "rWxz53oNOMm", "WALXgxhJibJ", "iOUdWe3rR5l", "W0kJl_1RGl", "koMUAhowAP", "MopwIkWwmeS", "8uL7eZFYbra" ] | [
"author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" We thank the reviewer for their reviews and for increasing the score.",
" I thank the authors for their informative replies. Ideally I would have liked some more analysis of the QR part, but the paper is otherwise well written, the technical approach appears well founded, and it improves upon relevant baselines... | [
-1, -1, -1, -1, -1, 7, 3, 7 ] | [
-1, -1, -1, -1, -1, 3, 4, 3 ] | [
"rWxz53oNOMm", "WALXgxhJibJ", "8uL7eZFYbra", "MopwIkWwmeS", "koMUAhowAP", "nips_2022_Lpla1jmJkW", "nips_2022_Lpla1jmJkW", "nips_2022_Lpla1jmJkW" ] |
nips_2022_gbXqMdxsZIP | OTKGE: Multi-modal Knowledge Graph Embeddings via Optimal Transport | Multi-modal knowledge graph embeddings (KGE) have caught more and more attention in learning representations of entities and relations for link prediction tasks. Different from previous uni-modal KGE approaches, multi-modal KGE can leverage expressive knowledge from a wealth of modalities (image, text, etc.), leading to more comprehensive representations of real-world entities. However, the critical challenge along this course lies in that the multi-modal embedding spaces are usually heterogeneous. In this sense, direct fusion will destroy the inherent spatial structure of different modal embeddings. To overcome this challenge, we revisit multi-modal KGE from a distributional alignment perspective and propose optimal transport knowledge graph embeddings (OTKGE). Specifically, we model the multi-modal fusion procedure as a transport plan moving different modal embeddings to a unified space by minimizing the Wasserstein distance between multi-modal distributions. Theoretically, we show that by minimizing the Wasserstein distance between the individual modalities and the unified embedding space, the final results are guaranteed to maintain consistency and comprehensiveness. Moreover, experimental results on well-established multi-modal knowledge graph completion benchmarks show that our OTKGE achieves state-of-the-art performance. | Accept | This paper presents a method to learn multi-modal knowledge graph embeddings. To integrate the embeddings from different modalities, which is a difficult task because of the heterogeneity across the different modalities, the paper presents an optimal transport based method to learn multi-modal embeddings.
The paper received positive reviews from all the reviewers. The authors submitted a rebuttal to answer the questions from the reviewers, and the reviewers seem to be satisfied.
Given the unanimously positive reviews and my own reading of the paper, I vote for the acceptance of the paper. | train | [
"mkcGDxBssw", "WcZQDd4CBgb", "M_5874oCZoB", "nMvuqaY44HU", "sgzI8B8IUl3", "e1VKGCggCE7" ] | [
"author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" We thank you for taking the time to carefully read our submission and providing such detailed suggestions. Our responses are as follows.\n\n__Q1:__ In Equation (4), is the symbol $E$ represent the structural embedding? Equation ($4$) is supposed to be further explained. What is the insight behind the multi-modal ... | [
-1, -1, -1, 7, 6, 7 ] | [
-1, -1, -1, 4, 3, 4 ] | [
"e1VKGCggCE7", "sgzI8B8IUl3", "nMvuqaY44HU", "nips_2022_gbXqMdxsZIP", "nips_2022_gbXqMdxsZIP", "nips_2022_gbXqMdxsZIP" ] |
nips_2022_m6DJxSuKuqF | Keypoint-Guided Optimal Transport with Applications in Heterogeneous Domain Adaptation | Existing Optimal Transport (OT) methods mainly derive the optimal transport plan/matching under the criterion of transport cost/distance minimization, which may cause incorrect matching in some cases. In many applications, annotating a few matched keypoints across domains is reasonable or even effortless in annotation burden. It is valuable to investigate how to leverage the annotated keypoints to guide the correct matching in OT. In this paper, we propose a novel KeyPoint-Guided model by ReLation preservation (KPG-RL) that searches for the matching guided by the keypoints in OT. To impose the keypoints in OT, first, we propose a mask-based constraint of the transport plan that preserves the matching of keypoint pairs. Second, we propose to preserve the relation of each data point to the keypoints to guide the matching. The proposed KPG-RL model can be solved by the Sinkhorn's algorithm and is applicable even when distributions are supported in different spaces. We further utilize the relation preservation constraint in the Kantorovich Problem and Gromov-Wasserstein model to impose the guidance of keypoints in them. Meanwhile, the proposed KPG-RL model is extended to partial OT setting. As an application, we apply the proposed KPG-RL model to the heterogeneous domain adaptation. Experiments verified the effectiveness of the KPG-RL model. | Accept | In this paper the authors propose a novel Optimal Transport problem that uses a small number of annotated keypoints in both source and target domain to encode additional information and guide the OT plan in the problem. The authors propose a variant of the sinkhorn algorithm to solve the problem and show that it can be used to solve OT across different spaces with also an extension to Partial OT setting. 
Numerical experiments show the interest of the method on a difficult heterogeneous domain adaptation problem.
The contribution was appreciated, and all reviewers agree about the novelty of the method and the interest of the new model in practical applications. All experiments (in the paper, appendix and the new ones in reply) show that the method works better than existing approaches in semi-supervised HDA. Some concerns were raised by reviewers: missing discussion and citation of Masked OT and other approaches such as Fused GW, but those were mostly addressed in the replies. The question of the choice of d was also well answered with new experiments.
The consensus between reviewers was that the replies were great and that the paper should be accepted at NeurIPS. Nevertheless, it was very clear from the discussion that the new results, discussion, and positioning w.r.t. the state of the art MUST be included in the final version and its supplementary. The paper also lacks a discussion about how to obtain keypoint pairs in practice (other than using existing labels), which is very important to ensure that the method can be used in practice.
| train | [
"Ll8y6xJnRu6", "-DpwAcTxmBL", "zhldE7vuPVYi", "mYEOe1R7G8N", "iHpMbSpLpd", "rZZVsw-bXt3", "OKXG7aBUr_6_", "Gl_CwY4t6F6", "JCV8ulaa1P", "l5A_8GUNCfA", "D2D35e030vQ", "-EbvV_YxfTy", "96Z5_0HBj_U", "u8KLagAnNFa", "ZYOi3Hts8f", "4bDQY_hKtV1", "ah8H7dJXHF4", "1eU1h41eyI", "WtTO81gFyZ"... | [
"author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ... | [
" Thanks for the comments again. We will include the discussions on the related work, the experimental details, and the additional experiments in the final version if accepted. Regarding to Q3, built upon the softmax, the relation scores in Eqs. (7) and (8) model the \"relation\" of each point to the keypoints. Bas... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 7 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ] | [
"zhldE7vuPVYi", "mYEOe1R7G8N", "96Z5_0HBj_U", "4bDQY_hKtV1", "rZZVsw-bXt3", "OKXG7aBUr_6_", "l5A_8GUNCfA", "li4O7btuyS", "li4O7btuyS", "li4O7btuyS", "eOO2QON-zu6", "eOO2QON-zu6", "eOO2QON-zu6", "vvrcEPf0Bs", "vvrcEPf0Bs", "vvrcEPf0Bs", "57aGvEl2wyP", "57aGvEl2wyP", "57aGvEl2wyP",... |
nips_2022_qSYVigfakqS | Weak-shot Semantic Segmentation via Dual Similarity Transfer | Semantic segmentation is a practical and active task, but severely suffers from the expensive cost of pixel-level labels when extending to more classes in wider applications. To this end, we focus on the problem named weak-shot semantic segmentation, where the novel classes are learnt from cheaper image-level labels with the support of base classes having off-the-shelf pixel-level labels. To tackle this problem, we propose a dual similarity transfer framework, which is built upon MaskFormer to disentangle the semantic segmentation task into single-label classification and binary segmentation for each proposal. Specifically, the binary segmentation sub-task allows proposal-pixel similarity transfer from base classes to novel classes, which enables the mask learning of novel classes. We also learn pixel-pixel similarity from base classes and distill such class-agnostic semantic similarity to the semantic masks of novel classes, which regularizes the segmentation model with pixel-level semantic relationship across images. In addition, we propose a complementary loss to facilitate the learning of novel classes. Comprehensive experiments on the challenging COCO-Stuff-10K and ADE20K datasets demonstrate the effectiveness of our method. | Accept | Two reviewers give a weak accept rating while the other one gives a borderline reject rating. Considering the low confidence of the negative comment and the contrary comments in paper writing (confident "easy to follow" vs. unconfident "hard to understand"), the AC would lean to accept this paper. | train | [
"6tL5m967tuS", "zhIzL9_aRYR", "yD1rQRJEUJ3", "-qBnk7OyXIt", "vkMi8Kpqdqr", "Wbs17t_rXfQ", "OsG_3CTVfIF", "Zu4-3OUyfgI", "mzgIQU22N2R", "ZcVvYUbnWf" ] | [
"author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" Thanks for your recommendation. We are pleased to release the code and model for future researchers to further explore.",
" Thanks for the response. \n\nThe authors have replied my question and resolved my concerns. I am happy to recommend this paper to be accepted.\n\nIn addition, the proposed method have mu... | [
-1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ] | [
-1, -1, -1, -1, -1, -1, -1, 1, 4, 3 ] | [
"zhIzL9_aRYR", "Wbs17t_rXfQ", "-qBnk7OyXIt", "vkMi8Kpqdqr", "ZcVvYUbnWf", "mzgIQU22N2R", "Zu4-3OUyfgI", "nips_2022_qSYVigfakqS", "nips_2022_qSYVigfakqS", "nips_2022_qSYVigfakqS" ] |
nips_2022_V03mpOjCwtg | Learning Generalizable Part-based Feature Representation for 3D Point Clouds | Deep networks on 3D point clouds have achieved remarkable success in 3D classification, while they are vulnerable to geometry variations caused by inconsistent data acquisition procedures. This results in a challenging 3D domain generalization (3DDG) problem, that is to generalize a model trained on source domain to an unseen target domain. Based on the observation that local geometric structures are more generalizable than the whole shape, we propose to reduce the geometry shift by a generalizable part-based feature representation and design a novel part-based domain generalization network (PDG) for 3D point cloud classification. Specifically, we build a part-template feature space shared by source and target domains. Shapes from distinct domains are first organized to part-level features and then represented by part-template features. The transformed part-level features, dubbed aligned part-based representations, are then aggregated by a part-based feature aggregation module. To improve the robustness of the part-based representations, we further propose a contrastive learning framework upon part-based shape representation. Experiments and ablation studies on 3DDA and 3DDG benchmarks justify the efficacy of the proposed approach for domain generalization, compared with the previous state-of-the-art methods. Our code will be available on http://github.com/weixmath/PDG. | Accept | The paper works on domain generalization of 3D point cloud classification, and proposes a part-based domain generalization network for the purpose, whose key idea is to build a common feature space of part template and align the part-level features wherein. Three reviewers appreciate the contributions, including the clear motivation, the implicit domain alignment by part-template features, and the proposed part feature aggregation module. 
They also suggest improving the paper through clearer definitions of parts, better organization of the contrastive learning discussion, a more complete citation of closely related works, etc.
After discussions between the authors and reviewers, consensus is reached on accepting the paper. Congratulations!
| train | [
"FZiaT_ioHLe", "ad8ji4FIyl", "JGiZHjx1Mms", "JsjSVAqHA3T", "CDT2Z8UiCJW", "e6KpqvlMhOt", "4D6EGHpOK2", "MNnGj2yzNm", "yAxcxhoilM", "fWklBDO-Aiu", "sj3b80TdN__", "WYGlUXl_G6" ] | [
"author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" Thanks for the suggestions. We will consider to clarify the role of shape-level contrastive learning part, compress or move details in sect.4 to supplementary. Since we are allowed to have one extra page if the paper is accepted, we will include all these discussions in the paper. The results and discussions of o... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ] | [
"ad8ji4FIyl", "JsjSVAqHA3T", "sj3b80TdN__", "sj3b80TdN__", "sj3b80TdN__", "sj3b80TdN__", "WYGlUXl_G6", "WYGlUXl_G6", "fWklBDO-Aiu", "nips_2022_V03mpOjCwtg", "nips_2022_V03mpOjCwtg", "nips_2022_V03mpOjCwtg" ] |
nips_2022_Bv8GV6d76Sy | Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks | Monte Carlo (MC) integration is the _de facto_ method for approximating the predictive distribution of Bayesian neural networks (BNNs). But, even with many MC samples, Gaussian-based BNNs could still yield bad predictive performance due to the posterior approximation's error. Meanwhile, alternatives to MC integration are expensive. In this work, we experimentally show that the key to good MC-approximated predictive distributions is the quality of the approximate posterior itself. However, previous methods for obtaining accurate posterior approximations are expensive and non-trivial to implement. We, therefore, propose to refine Gaussian approximate posteriors with normalizing flows. When applied to last-layer BNNs, it yields a simple, cost-efficient, _post hoc_ method for improving pre-existing parametric approximations. We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo. | Accept | The paper proposes a method to refine Gaussian approximations of the posterior in Bayesian computations by using the normalizing flow. Such Gaussian approximations are usually cheap to obtain, via Laplace approximation or variational Bayes. The method proposed by the authors outperform standard MC approaches and is competitive with the most sophisticated ones (Hamiltonian MC), while cheaper.
The reviewers praised the experimental results. They also liked the nice explanations and illustrations of the failure of the standard MC approaches. Some remarks about the limited novelty of this discussion with respect to existing works (e.g. Wilson and Izmailov, 2020) were satisfactorily addressed by the authors during the discussion with the reviewers. Overall, the reviewers agreed that, while the writing of the paper could be improved in parts, the discussion and the method proposed in this paper are a nice contribution to Bayesian learning, and will be useful to the community. I will therefore recommend to accept the paper. I encourage the authors to take into account the comments of the reviewers (especially Rev. 5mzJ) when preparing the camera-ready version of the paper. | train | [
"3LppUYAZrV", "yRwDKqhO07H", "J9rTh-7RMPC", "DTaUmuV7yThw", "MLLs9WBeOX", "keY60D4GAw6", "BM35OJVKxS0", "GHfRnt_jHE0", "dFQ6I689GQw", "xFkngTNg2od", "Ub_jHE6u0km" ] | [
"author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" We thank all reviewers for their comments and proposals to further clarify our work. We hope that all concerns were sufficiently addressed by our replies. If you feel like your concerns and questions were not addressed to your satisfaction, we would highly appreciate a follow-up comment. Since the discussion peri... | [
-1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 7 ] | [
-1, -1, -1, -1, -1, -1, -1, 2, 3, 4, 4 ] | [
"nips_2022_Bv8GV6d76Sy", "J9rTh-7RMPC", "keY60D4GAw6", "GHfRnt_jHE0", "dFQ6I689GQw", "xFkngTNg2od", "Ub_jHE6u0km", "nips_2022_Bv8GV6d76Sy", "nips_2022_Bv8GV6d76Sy", "nips_2022_Bv8GV6d76Sy", "nips_2022_Bv8GV6d76Sy" ] |
nips_2022_VMU-hMsonit | Training and Inference on Any-Order Autoregressive Models the Right Way | Conditional inference on arbitrary subsets of variables is a core problem in probabilistic inference with important applications such as masked language modeling and image inpainting. In recent years, the family of Any-Order Autoregressive Models (AO-ARMs) -- closely related to popular models such as BERT and XLNet -- has shown breakthrough performance in arbitrary conditional tasks across a sweeping range of domains. But, in spite of their success, in this paper we identify significant improvements to be made to previous formulations of AO-ARMs. First, we show that AO-ARMs suffer from redundancy in their probabilistic model, i.e., they define the same distribution in multiple different ways. We alleviate this redundancy by training on a smaller set of univariate conditionals that still maintains support for efficient arbitrary conditional inference. Second, we upweight the training loss for univariate conditionals that are evaluated more frequently during inference. Our method leads to improved performance with no compromises on tractability, giving state-of-the-art likelihoods in arbitrary conditional modeling on text (Text8), image (CIFAR10, ImageNet32), and continuous tabular data domains. | Accept | This paper introduces an improved training method for auto-regressive generative models. Specifically, the paper identifies a redundancy problem in common auto-regressive models and proposes a way to fix this. The reviewers found the contribution significant and important, and it is likely that the paper will have substantial impact.
| train | [
"k9Q4H1ooYvmd", "dqJfuYC_NNfm", "xqodtj0jrUx", "fbD28zy1_id", "fDoVj5SpY9qq", "UBIaDZhtJBX7", "hKJmF35lsE6", "pOWkb1XdMRe", "kbTPnIcqUR8", "40WTtbj6Z2G", "4V88ipLvGD2", "G-Tq2y9WKCV", "QpTB0GDiuD3", "uINKE7DIEJA", "beSdMKgjpHY" ] | [
"official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" I thank the authors for the detailed explanation, which solves my main confusion. I am happy to increase my score to 6 to support the accept of the paper.",
" > **In Figure 2 (b) or (c), there is no edge between (x2, x4) and x1, or (x1, x2, x4) and x3. Since both p(x1|x2, x4) and p(x3|x1,x2,x4) are never seen b... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8, 7 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ] | [
"dqJfuYC_NNfm", "fbD28zy1_id", "UBIaDZhtJBX7", "4V88ipLvGD2", "pOWkb1XdMRe", "40WTtbj6Z2G", "nips_2022_VMU-hMsonit", "beSdMKgjpHY", "uINKE7DIEJA", "QpTB0GDiuD3", "G-Tq2y9WKCV", "nips_2022_VMU-hMsonit", "nips_2022_VMU-hMsonit", "nips_2022_VMU-hMsonit", "nips_2022_VMU-hMsonit" ] |
nips_2022_nax3ATLrovW | Versatile Multi-stage Graph Neural Network for Circuit Representation | Due to the rapid growth in the scale of circuits and the desire for knowledge transfer from old designs to new ones, deep learning technologies have been widely exploited in Electronic Design Automation (EDA) to assist circuit design. In chip design cycles, we might encounter heterogeneous and diverse information sources, including the two most informative ones: the netlist and the design layout. However, handling each information source independently is sub-optimal. In this paper, we propose a novel way to integrate the multiple information sources under a unified heterogeneous graph named Circuit Graph, where topological and geometrical information is well integrated. Then, we propose Circuit GNN to fully utilize the features of vertices, edges as well as heterogeneous information during the message passing process. It is the first attempt to design a versatile circuit representation that is compatible across multiple EDA tasks and stages. Experiments on the two most representative prediction tasks in EDA show that our solution reaches state-of-the-art performance in both logic synthesis and global placement chip design stages. Besides, it achieves a 10x speed-up on congestion prediction compared to the state-of-the-art model. | Accept | This paper proposes a GNN approach to EDA using the construction of a circuit graph that combines geometric and topological information, as well as features generated from physical properties of circuit components. While reviewers have raised certain concerns (some addressed already in rebuttal), they all settled (post rebuttal) on recommending weak accept of the paper. I agree with them and think the NeurIPS audience would benefit from the inclusion of this work in the program, and therefore I recommend acceptance. 
I would like to encourage the authors to take into account the comments and discussion with the reviewers, as well as incorporate materials presented in their responses, when preparing the camera ready version. | train | [
"jKEVS6PRE5R",
"2N6eK2vaYBe",
"D2Wg35JMDtL",
"7vp-m2CeXa",
"zT9YofH_Vp",
"aviiJRLS0tO",
"-PATnDXsN_md",
"KyH5KMK_K4j",
"TO75D-31Eqo",
"KvnG-JaMxs6",
"aM-7tHpCXNk",
"h18LIFlJ8vd",
"6AY9psURFST",
"s-NDb9krlhC",
"fxG-3WqEPBB"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response and additional results.\n\nRegarding to the support of EDA stages, this is the sentence copied from the Introduction section \"To our best knowledge, this is the first unified circuit representation approach that can be easily compatible across EDA tasks and stages.\", which gives people ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"2N6eK2vaYBe",
"aviiJRLS0tO",
"zT9YofH_Vp",
"nips_2022_nax3ATLrovW",
"TO75D-31Eqo",
"KvnG-JaMxs6",
"6AY9psURFST",
"fxG-3WqEPBB",
"fxG-3WqEPBB",
"fxG-3WqEPBB",
"6AY9psURFST",
"s-NDb9krlhC",
"nips_2022_nax3ATLrovW",
"nips_2022_nax3ATLrovW",
"nips_2022_nax3ATLrovW"
] |
nips_2022_tvwkeAIcRP8 | S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint | In this paper, we address the "dual problem" of multi-view scene reconstruction in which we utilize single-view images captured under different point lights to learn a neural scene representation. Different from existing single-view methods which can only recover a 2.5D scene representation (i.e., a normal / depth map for the visible surface), our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene. Instead of relying on multi-view photo-consistency, our method exploits two information-rich monocular cues, namely shading and shadow, to infer scene geometry. Experiments on multiple challenging datasets show that our method is capable of recovering 3D geometry, including both visible and invisible parts, of a scene from single-view images. Thanks to the neural reflectance field representation, our method is robust to depth discontinuities. It supports applications like novel-view synthesis and relighting. Our code and model can be found at https://ywq.github.io/s3nerf. | Accept | This paper had reviews ranging from borderline reject to strong accept. The most negative reviewer had concerns about the assumptions in the framework (point light sources), and the loss of accuracy as the number of light sources decreases, but the remaining reviewers were compelled by the ability to handle scenes with lights not at infinity and the integration of the shadow constraints to give constraints on the structures of scene parts not directly viewed.
Overall I agree with the three positive reviewers that this paper considers an interesting variation of the photometric stereo problem with a coherent experimental evaluation that shows the contributions of each of the different pieces of their overall system.
Therefore I accept this paper.
| test | [
"KrAGjdkvUVN",
"zYUIrVMR1dL",
"2ACPOTAwWo",
"hlAWNz9DUj3",
"ehDjzGqzptI",
"xZfN-ACVRlP",
"jNDKVOzBQsy",
"IT2lRNgm7dU",
"X582xS8_9W9",
"MgUt6rLteME",
"q-yf-NXnrMH",
"Fo98ntBG61r",
"fqNX1xZ9ro_",
"kxgRqQZXY96",
"N6NgMgnZjY3",
"X9Dek14yHtp",
"GqkNEe-O7U",
"aCxLvsYrVkh"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer bpE7,\n\nWe just noticed that our previous responses posted under your Acknowledgement thread (titled “Author Rebuttal Acknowledgement by Paper1398 Reviewer bpE7”) are invisible to you and other reviewers. We therefore attach our responses again under this original thread for your reference. Sorry f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
4,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"GqkNEe-O7U",
"aCxLvsYrVkh",
"X9Dek14yHtp",
"xZfN-ACVRlP",
"nips_2022_tvwkeAIcRP8",
"fqNX1xZ9ro_",
"nips_2022_tvwkeAIcRP8",
"aCxLvsYrVkh",
"GqkNEe-O7U",
"GqkNEe-O7U",
"X9Dek14yHtp",
"N6NgMgnZjY3",
"N6NgMgnZjY3",
"nips_2022_tvwkeAIcRP8",
"nips_2022_tvwkeAIcRP8",
"nips_2022_tvwkeAIcRP8",... |
nips_2022_YCPmfirAcc | High-dimensional Additive Gaussian Processes under Monotonicity Constraints | We introduce an additive Gaussian process (GP) framework accounting for monotonicity constraints and scalable to high dimensions. Our contributions are threefold. First, we show that our framework enables to satisfy the constraints everywhere in the input space. We also show that more general componentwise linear inequality constraints can be handled similarly, such as componentwise convexity. Second, we propose the additive MaxMod algorithm for sequential dimension reduction. By sequentially maximizing a squared-norm criterion, MaxMod identifies the active input dimensions and refines the most important ones. This criterion can be computed explicitly at a linear cost. Finally, we provide open-source codes for our full framework. We demonstrate the performance and scalability of the methodology in several synthetic examples with hundreds of dimensions under monotonicity constraints as well as on a real-world flood application. | Accept | This paper deals with the problem of regression with an additive Gaussian process prior and a linear inequality constraint. A finite-dimensional approximation is proposed to the Gaussian process in terms of a linear combination of triangular basis functions with Gaussian weights. The weights are then estimated by solving a quadratic program or approximately sampled to handle the inequality constraints. Additionally, the authors consider the problem of variable selection and propose a forward selection method based on the difference between the posterior mode with and without the inclusion of a particular variable. The reviews were mixed, but are leaning towards acceptance. | train | [
"so_mI1tjErg",
"89C0hnHwwZ",
"d72qgfLVXV4i",
"da1gfZzCbzY",
"FyiNk69Mss2",
"DwPeeaFW_Hp",
"P7iEzuF9nTr",
"O6wL2_cQrt",
"_0q0KZmaf2b",
"EmaobDsKNA",
"02RgJJ48ULT",
"DfktHssD1gT",
"okrs-HKL2B9"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nThank you for updating your rating from \"weak accept\" to “accept”. We appreciate it. \n\nThe author(s)",
" We are grateful to the reviewer for the response. We next provide replies to their two questions.\n\n- **Why is extending the proof in the prior reference [18] categorically different f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"da1gfZzCbzY",
"d72qgfLVXV4i",
"_0q0KZmaf2b",
"DwPeeaFW_Hp",
"nips_2022_YCPmfirAcc",
"P7iEzuF9nTr",
"okrs-HKL2B9",
"_0q0KZmaf2b",
"02RgJJ48ULT",
"DfktHssD1gT",
"nips_2022_YCPmfirAcc",
"nips_2022_YCPmfirAcc",
"nips_2022_YCPmfirAcc"
] |
nips_2022_Qt4rKNYzcO | Enhanced Latent Space Blind Model for Real Image Denoising via Alternative Optimization | Motivated by the achievements in model-based methods and the advances in deep networks, we propose a novel enhanced latent space blind model based deep unfolding network, namely ScaoedNet, for complex real image denoising. It is derived by introducing latent space, noise information, and guidance constraint into the denoising cost function. A self-correction alternative optimization algorithm is proposed to split the novel cost function into three alternative subproblems, i.e., guidance representation (GR), degradation estimation (DE) and reconstruction (RE) subproblems. Finally, we implement the optimization process by a deep unfolding network consisting of GR, DE and RE networks. For higher performance of the DE network, a novel parameter-free noise feature adaptive enhancement (NFAE) layer is proposed. To synchronously and dynamically realize internal-external feature information mining in the RE network, a novel feature multi-modulation attention (FM2A) module is proposed. Our approach thereby leverages the advantages of deep learning, while also benefiting from the principled denoising provided by the classical model-based formulation. To the best of our knowledge, our enhanced latent space blind model, optimization scheme, NFAE and FM2A have not been reported in the previous literature. Experimental results show the promising performance of ScaoedNet on real image denoising. Code is available at https://github.com/chaoren88/ScaoedNet. | Accept | The paper under review introduces a deep unrolling network driven by a latent space blind model for image denoising. Although the network combines known components, it has novel elements and good algorithms, the experimental results are robust, the implementation details are rich, and the ablation research is extensive. 
Revisions and rebuttals addressed most of the reviewers' concerns, leading them to improve their scores. Therefore, I accept this paper. | train | [
"jle9pAo1lm_",
"j7lHtgmyz51",
"NlewbzwMfY-",
"vefsm2Hwbs7T",
"vRuSvI-gstg",
"Z3UreSZ1tcZ",
"war4AMBoEo_",
"xUbYbiucJnI",
"aVi0roC7IQG",
"cs5Qu9yXAtE",
"MEAoqydzf9m",
"WBCHVg9lwU"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your useful and kind comment. We appreciate your positive comment about our method, and also appreciate that you raised the final rating. Thank you!\n\nFor the main reason of using LS, your understanding is correct. It is based on the proposed task formulation, which can break through the limitation... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"j7lHtgmyz51",
"NlewbzwMfY-",
"vefsm2Hwbs7T",
"xUbYbiucJnI",
"WBCHVg9lwU",
"MEAoqydzf9m",
"cs5Qu9yXAtE",
"aVi0roC7IQG",
"nips_2022_Qt4rKNYzcO",
"nips_2022_Qt4rKNYzcO",
"nips_2022_Qt4rKNYzcO",
"nips_2022_Qt4rKNYzcO"
] |
nips_2022_2nYz4WZAne4 | Generative Evolutionary Strategy For Black-Box Optimizations | Many scientific and technological problems are related to optimization. Among them, black-box optimization in high-dimensional space is particularly challenging. Recent neural network-based black-box optimization studies have shown noteworthy achievements. However, their capability in high-dimensional search space is still limited. This study proposes a black-box optimization method based on evolution strategy and generative neural network model. We designed the algorithm so that the evolutionary strategy and the generative neural network model work cooperatively with each other. This hybrid model enables reliable training of surrogate networks; it optimizes multi-objective, high-dimensional, and stochastic black-box functions. In this experiment, our method outperforms baseline optimization methods, including evolution strategies and Bayesian optimization. | Reject | While the topic of the paper and the reported experimental results appeared to be of interest to the reviewing team, a number of limitations were put to the fore by the reviewers, who graded the paper with scores between 2 and 5, and often emphasized various issues such as a lacunary literature review (See in particular comments of Reviewer wE4h), insufficient comparison with state-of-the-art approaches and in particular on realistic test-cases (bPMJ), as well as on the lack of clarity (See for instance in Reviewer uzBL’s review: „The paper is poorly structured and written to the point where it is quite difficult to understand“). For all these reasons I recommend rejection. | val | [
"URrcJOKkCt",
"9Qp49S7q6ke",
"eulxqC6A3sm",
"Y6IbK70l6m",
"93Hq1E1ihes",
"R9av0CHDtFk",
"VmDvzKmmM7X",
"3y_j2y_hJS",
"8Wl0CqN2cQz",
"_Ys0WKRx4J3",
"3ZNCrCde4Pe",
"ZoC5zoOK7e",
"g8BswzUN727",
"E2nq74rvYGq",
"78PKyQx30TJK",
"iAhth3NqgT1",
"GL55imJ7D-A",
"YpCyGK0O7EF",
"RItkzVUquQ_"... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thanks to your valuable review, we were able to improve the revision a lot.\n\nIn the first submission, their were a lot of insufficient explanation.\nIn particular, it seems that the purpose of each experiment was not sufficiently explained. (Catpole-V1, LeNet-5)\nIn the revision, we tried to write a detailed de... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
2,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
5
] | [
"9Qp49S7q6ke",
"g8BswzUN727",
"R9av0CHDtFk",
"93Hq1E1ihes",
"YpCyGK0O7EF",
"YpCyGK0O7EF",
"3y_j2y_hJS",
"8Wl0CqN2cQz",
"_Ys0WKRx4J3",
"iAhth3NqgT1",
"GL55imJ7D-A",
"YpCyGK0O7EF",
"iAhth3NqgT1",
"78PKyQx30TJK",
"mQZJRhuqww8",
"QKlduAyTPTW",
"RItkzVUquQ_",
"bUbqRPGHKq",
"nips_2022_... |
nips_2022_q0XxMcbaZH9 | Learning Equivariant Segmentation with Instance-Unique Querying | Prevalent state-of-the-art instance segmentation methods fall into a query-based scheme, in which instance masks are derived by querying the image feature using a set of instance-aware embeddings. In this work, we devise a new training framework that boosts query-based models through discriminative query embedding learning. It explores two essential properties, namely dataset-level uniqueness and transformation equivariance, of the relation between queries and instances. First, our algorithm uses the queries to retrieve the corresponding instances from the whole training dataset, instead of only searching within individual scenes. As querying instances across scenes is more challenging, the segmenters are forced to learn more discriminative queries for effective instance separation. Second, our algorithm encourages both image (instance) representations and queries to be equivariant against geometric transformations, leading to more robust, instance-query matching. On top of four famous, query-based models (i.e., CondInst, SOLOv2, SOTR, and Mask2Former), our training algorithm provides significant performance gains (e.g., +1.6 – 3.2 AP) on COCO dataset. In addition, our algorithm promotes the performance of SOLOv2 by 2.7 AP, on LVISv1 dataset. | Accept | This paper leverages dataset-level uniqueness and transformation equivariance to improve state-of-the-art instance segmentation methods.
The reviews were overall positive about the submission: the reviewers especially highlighted the good experimental results, the relevance of the scene-level query embedding, and its complementarity with the equivariance constraints.
The authors' feedback brings important answers to some of the reviewers' concerns. In particular, the new conclusive experiments on LVIS and the extension of the method to panoptic segmentation widen the approach's applicability and have been appreciated. Other answers in the rebuttal did not convince some reviewers, and there remain issues regarding the novelty of the approach, the terminology and positioning with respect to 'query-based' approaches, and the extension of the method to photometric equivariance.
The AC carefully read the submission. The AC considers that the idea of querying instances from the whole training dataset is interesting. Despite the limited contribution of the equivariance loss, which has been used in several related scenarios, the design of the whole approach in the context of instance segmentation is relevant and well designed. The experiments are also convincing. It is a pity that the authors did not take the opportunity to update the paper during the discussion period, especially the new experiments and the clarifications requested by the reviewers. Based on the relevance of the approach and its good experimental results obtained over various baselines in several datasets, the AC recommends acceptance. He highly encourages the authors to include the elements discussed in the rebuttal to improve the quality of the final paper.
| train | [
"-BBrdp5JIr",
"ibvfhf9PqGX",
"V37CqIzoxap",
"A1XXFSGDAhH",
"mUwpqmRMjEz",
"pEIfrRu8jOA",
"z_EgHqToxBK",
"dbmNjpoutGV",
"F_zZIFHbDX2"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the response to my review and all other reviews, and I think authors covered the issues pretty well. I do believe adding the new results on the additional datasets will make the paper stronger. The newly proposed training paradigm is novel, and experiments support consistent significant improvements o... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"mUwpqmRMjEz",
"F_zZIFHbDX2",
"dbmNjpoutGV",
"z_EgHqToxBK",
"pEIfrRu8jOA",
"nips_2022_q0XxMcbaZH9",
"nips_2022_q0XxMcbaZH9",
"nips_2022_q0XxMcbaZH9",
"nips_2022_q0XxMcbaZH9"
] |
nips_2022_kADW_LsENM | Video-based Human-Object Interaction Detection from Tubelet Tokens | We present a novel vision Transformer, named TUTOR, which is able to learn tubelet tokens, served as highly-abstracted spatial-temporal representations, for video-based human-object interaction (V-HOI) detection. The tubelet tokens structurize videos by agglomerating and linking semantically-related patch tokens along spatial and temporal domains, which enjoy two benefits: 1) Compactness: each token is learned by a selective attention mechanism to reduce redundant dependencies from others; 2) Expressiveness: each token is enabled to align with a semantic instance, i.e., an object or a human, thanks to agglomeration and linking. The effectiveness and efficiency of TUTOR are verified by extensive experiments. Results show our method outperforms existing works by large margins, with a relative mAP gain of $16.14\%$ on VidHOI and a 2 points gain on CAD-120 as well as a $4 \times$ speedup. | Accept | *Summary*
This paper presents a novel vision Transformer TUTOR for human-object interaction detection in videos. TUTOR structurizes a video into a few tubelet tokens by agglomerating and linking semantically-related patch tokens along spatial and temporal domains. Experiments are conducted on VidHOI and CAD-120, showing that the proposed approach is more effective and efficient than previous patch token-based reference methods.
*Reviews*
The paper received 3 reviews, with ratings: 6 (weak accept), 5 (borderline accept) and 5 (borderline accept). All reviewers voted to accept the paper, but raised some concerns:
- the claim that tubelet tokens align with semantic instances is not rigorously supported (authors added additional evidence).
- an ablation study of the global context refining mechanism is required (this has been added by the authors).
- an experimental comparison to 'Detecting Human-Object Relationships in Videos' [17/18] is required (authors note that code is not available, but instead provide a comparison with an alternative similar approach that reported even better performance).
- clarification of the relationship between this model and Deformable DETR is required
- some details of the model require clarification.
*Decision*
I am satisfied that the substantive concerns raised by reviewers have been addressed by the authors, and with all reviewers voting to accept the paper I also agree. I encourage the authors to carefully update the manuscript to address all the discussions below.
| train | [
"65bke6qVK4o",
"zv3eb37RHH7",
"1wvoKxfwE8v",
"xz2OjMH20nV",
"OVZKHj9T7eG",
"tKUEKSCtve",
"DcCN3VF-u-a",
"c59arUS9HU9",
"N-q0gr_DFXA",
"Dl2wXbbcaRJ",
"Y74njvFFTJn",
"QnHTu2chk8",
"OTDSwLaGAkU",
"dXghCT-xeZk",
"reFFUNfxdH1"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for your response and appreciation. It is worth emphasizing that [17] is not specifically proposed for V-HOI detection, but for dynamic scene graph generation (DSGG), which aims to detect the object relationships in a video. Note that, DSGG consists of only several action categories but mostly s... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"1wvoKxfwE8v",
"xz2OjMH20nV",
"c59arUS9HU9",
"tKUEKSCtve",
"DcCN3VF-u-a",
"DcCN3VF-u-a",
"reFFUNfxdH1",
"dXghCT-xeZk",
"Dl2wXbbcaRJ",
"OTDSwLaGAkU",
"nips_2022_kADW_LsENM",
"nips_2022_kADW_LsENM",
"nips_2022_kADW_LsENM",
"nips_2022_kADW_LsENM",
"nips_2022_kADW_LsENM"
] |
nips_2022_r70ZpWKiCW | Semi-Supervised Semantic Segmentation via Gentle Teaching Assistant | Semi-Supervised Semantic Segmentation aims at training the segmentation model with limited labeled data and a large amount of unlabeled data. To effectively leverage the unlabeled data, pseudo labeling, along with the teacher-student framework, is widely adopted in semi-supervised semantic segmentation. Though proved to be effective, this paradigm suffers from incorrect pseudo labels which inevitably exist and are taken as auxiliary training data. To alleviate the negative impact of incorrect pseudo labels, we delve into the current Semi-Supervised Semantic Segmentation frameworks. We argue that the unlabeled data with pseudo labels can facilitate the learning of representative features in the feature extractor, but it is unreliable to supervise the mask predictor. Motivated by this consideration, we propose a novel framework, Gentle Teaching Assistant (GTA-Seg) to disentangle the effects of pseudo labels on feature extractor and mask predictor of the student model. Specifically, in addition to the original teacher-student framework, our method introduces a teaching assistant network which directly learns from pseudo labels generated by the teacher network. The gentle teaching assistant (GTA) is coined gentle since it only transfers the beneficial feature representation knowledge in the feature extractor to the student model in an Exponential Moving Average (EMA) manner, protecting the student model from the negative influences caused by unreliable pseudo labels in the mask predictor. The student model is also supervised by reliable labeled data to train an accurate mask predictor, further facilitating feature representation. Extensive experiment results on benchmark datasets validate that our method shows competitive performance against previous methods. We promise to release our code towards reproducibility. 
| Accept | The paper was reviewed by four expert reviewers in the field. The initial ratings were three weak accept and one weak reject.
In their response to reviewer sFdH (who gave Weak reject), the authors clarify all of the reviewer's questions, including the use of labeled data in GTA, the data split, training details, and the advantages of the re-weighting mechanism. Although reviewer sFdH did not acknowledge the rebuttal, the AC believes that these questions have been sufficiently addressed.
Given the novel approach, extensive quantitative evaluation, and clear writing, the AC agrees with the three reviewers and recommends acceptance. | val | [
"nkwXMg_K4ZB",
"qb2ayODV49n",
"jHSzRXzr59r",
"XYA7QGmDJnm",
"FUsAWIYIUv",
"3YMdH6ezk_x",
"ZfPY2YnGMzi",
"IZd1J17Wc03",
"1bc8XjJNFiZ",
"qAMqtd9H4l"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all our reviewers for your distinguished efforts and insightful comments. We have answered your questions in our responses, hoping that we can address your concerns. \n\n\nWe have also uploaded the revised paper and supplementary materials (the modifications are present in blue color). Here, we summarize... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
5
] | [
"nips_2022_r70ZpWKiCW",
"qAMqtd9H4l",
"XYA7QGmDJnm",
"IZd1J17Wc03",
"ZfPY2YnGMzi",
"1bc8XjJNFiZ",
"nips_2022_r70ZpWKiCW",
"nips_2022_r70ZpWKiCW",
"nips_2022_r70ZpWKiCW",
"nips_2022_r70ZpWKiCW"
] |
nips_2022_JSha3zfdmSo | Faster Stochastic Algorithms for Minimax Optimization under Polyak-{\L}ojasiewicz Condition | This paper considers stochastic first-order algorithms for minimax optimization under Polyak-{\L}ojasiewicz (PL) conditions.
We propose SPIDER-GDA for solving the finite-sum problem of the form $\min_x \max_y f(x,y)\triangleq \frac{1}{n} \sum_{i=1}^n f_i(x,y)$, where the objective function $f(x,y)$ is $\mu_x$-PL in $x$ and $\mu_y$-PL in $y$; and each $f_i(x,y)$ is $L$-smooth. We prove SPIDER-GDA could find an $\epsilon$-approximate solution within ${\mathcal O}\left((n + \sqrt{n}\,\kappa_x\kappa_y^2)\log (1/\epsilon)\right)$ stochastic first-order oracle (SFO) complexity, which is better than the state-of-the-art method whose SFO upper bound is ${\mathcal O}\big((n + n^{2/3}\kappa_x\kappa_y^2)\log (1/\epsilon)\big)$, where $\kappa_x\triangleq L/\mu_x$ and $\kappa_y\triangleq L/\mu_y$.
For the ill-conditioned case, we provide an accelerated algorithm to reduce the computational cost further. It achieves $\tilde{{\mathcal O}}\big((n+\sqrt{n}\,\kappa_x\kappa_y)\log^2 (1/\epsilon)\big)$ SFO upper bound when $\kappa_x\geq\sqrt{n}$. Our ideas also can be applied to the more general setting that the objective function only satisfies PL condition for one variable. Numerical experiments validate the superiority of proposed methods. | Accept | This paper present an algorithm with strong theoretical guarantees for a fundamental problem of broad interest. It is well-written.
| test | [
"cTcNQdT9QK5",
"UdnU5vDb9f",
"XlHhOUJxYM2",
"zm3hiEKHNjI",
"b5UKs2-iN9E",
"YjIyTzf1q02",
"6se_ojsGceJ",
"XqbzSLAg11-",
"b5W8jhjlR3",
"BBb_rP4oZjL",
"gjElqKwO1_J",
"B3LGnFc0-zv",
"-sM17u52Wfny",
"GjngVzrmXCa",
"nNOaK0o1L3",
"WO4g3hPQ3Nv",
"G4gNn4ofCoP"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank Reviewer N7QK for the time and effort. We are glad that the reviewer appreciated our clarification.",
" Thank you for the clarification! I thought they helped me better understand the difference between the new algorithm and the previous ones, as well as the challenge posed by the minimax optimization (co... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
2,
2
] | [
"UdnU5vDb9f",
"gjElqKwO1_J",
"-sM17u52Wfny",
"gjElqKwO1_J",
"-sM17u52Wfny",
"XqbzSLAg11-",
"b5W8jhjlR3",
"BBb_rP4oZjL",
"B3LGnFc0-zv",
"G4gNn4ofCoP",
"WO4g3hPQ3Nv",
"nNOaK0o1L3",
"GjngVzrmXCa",
"nips_2022_JSha3zfdmSo",
"nips_2022_JSha3zfdmSo",
"nips_2022_JSha3zfdmSo",
"nips_2022_JSha... |
nips_2022_mTXQIpXPDbh | Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropogation | Transfer learning from the model trained on large datasets to customized downstream tasks has been widely used as the pre-trained model can greatly boost the generalizability. However, the increasing sizes of pre-trained models also lead to a prohibitively large memory footprints for downstream transferring, making them unaffordable for personal devices. Previous work recognizes the bottleneck of the footprint to be the activation, and hence proposes various solutions such as injecting specific lite modules. In this work, we present a novel memory-efficient transfer framework called Back Razor, that can be plug-and-play applied to any pre-trained network without changing its architecture. The key idea of Back Razor is asymmetric sparsifying: pruning the activation stored for back-propagation, while keeping the forward activation dense. It is based on the observation that the stored activation, that dominates the memory footprint, is only needed for backpropagation. Such asymmetric pruning avoids affecting the precision of forward computation, thus making more aggressive pruning possible. Furthermore, we conduct the theoretical analysis for the convergence rate of Back Razor, showing that under mild conditions, our method retains the similar convergence rate as vanilla SGD. Extensive transfer learning experiments on both Convolutional Neural Networks and Vision Transformers show that Back Razor could yield up to 97% sparsity, saving 9.2x memory usage, without losing accuracy. The code is available at: https://github.com/VITA-Group/BackRazor_Neurips22. | Accept | This paper focuses on pruning the backpropogation activation to reduce the memory footprint in transfer learning. The paper is well structured and the method is simple to understand. All the reviewers acknowledge that the experimental results are convincing. 
Overall, the meta-reviewer recommends acceptance of the paper. | train | [
"nQQaJ8ZSKab",
"hm_x6tm8h1e",
"qY5HyiDIwcf",
"plBt6BUgu2U",
"85QTV0m4dP",
"WjQdCdi9gZ",
"CqY4duQNm0Z",
"z7IKMMnMSnn",
"HmK0Kzn1rPz",
"Z3_R1DKySMW",
"146z1WKCTE6k",
"qtKKF3AFMUZ",
"7STfVHeLky0-",
"-vs-KHuYFG",
"LTnSZx1eSMA",
"UiYpe5NIgvW",
"ECi-8e8JXdW",
"26d9oNy3aQY",
"_EtR1RuPIz... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_r... | [
" I appreciate your reply to address my concerns. Although there is no experimental results about NLP, authors answered all my questions thoroughly. So, I maintain my score to acceptance.",
" Thanks for your response. We conduct new experiments by adapting the channel-wise structural sparsification on BackRazor ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
5,
4
] | [
"WjQdCdi9gZ",
"qY5HyiDIwcf",
"7STfVHeLky0-",
"z7IKMMnMSnn",
"7STfVHeLky0-",
"-vs-KHuYFG",
"LTnSZx1eSMA",
"HmK0Kzn1rPz",
"Z3_R1DKySMW",
"146z1WKCTE6k",
"qtKKF3AFMUZ",
"Bhr3wSg9HS",
"_EtR1RuPIz2",
"26d9oNy3aQY",
"ECi-8e8JXdW",
"nips_2022_mTXQIpXPDbh",
"nips_2022_mTXQIpXPDbh",
"nips_2... |
nips_2022__atSgd9Np52 | DreamShard: Generalizable Embedding Table Placement for Recommender Systems | We study embedding table placement for distributed recommender systems, which aims to partition and place the tables on multiple hardware devices (e.g., GPUs) to balance the computation and communication costs. Although prior work has explored learning-based approaches for the device placement of computational graphs, embedding table placement remains to be a challenging problem because of 1) the operation fusion of embedding tables, and 2) the generalizability requirement on unseen placement tasks with different numbers of tables and/or devices. To this end, we present DreamShard, a reinforcement learning (RL) approach for embedding table placement. DreamShard achieves the reasoning of operation fusion and generalizability with 1) a cost network to directly predict the costs of the fused operation, and 2) a policy network that is efficiently trained on an estimated Markov decision process (MDP) without real GPU execution, where the states and the rewards are estimated with the cost network. Equipped with sum and max representation reductions, the two networks can directly generalize to any unseen tasks with different numbers of tables and/or devices without fine-tuning. Extensive experiments show that DreamShard substantially outperforms the existing human expert and RNN-based strategies with up to 19% speedup over the strongest baseline on large-scale synthetic tables and our production tables. The code is available. | Accept | The paper proposes DreamShard, a RL-based framework for placing embedding tables across multiple devices in distributed recommender systems. DreamShard jointly trains a cost model (to predict the cost of communication and operator fusion for new configurations) and a policy network to make placement decisions based on the cost model. 
This two-step design makes the algorithm more efficient than naive RL solutions, and end-to-end training leads to better generalization than model-based offline strategies.
All reviewers agree that the paper is well written and proposes a practical solution to an important problem that is not well studied in the literature. Furthermore, the paper has a strong empirical section that compares DreamShard to strong baselines on open-sourced and production datasets, shows good results and conveys a broad picture of many aspects of their method.
Overall this is a very well executed paper proposing an efficient and practical solution to an underexplored problem. I recommend acceptance.
For the camera-ready version, the authors should include the new scaling experiments they performed to address the reviewers' comments. I would also recommend integrating some of the clarifications regarding the contributions and distinctions from prior work (comments to Reviewer AKvA) into the paper. Also, it might be worth including the greedy baseline numbers for some experiments, just to put the performance into perspective. | train | [
"-wI8DvETKTK",
"M2hYyGBXk9Ar",
"JH2jc_DxhGs",
"LkwejsLEBQk",
"0c5aa__OLo8",
"dHkTEDZ68e",
"40lG2Pjcyuj",
"r3k6hKa0HyP",
"6jf1tCdm-qP",
"i8xvomYq742",
"Nww6WIzpPUT"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank all the reviewers for the support and for taking the time to provide all the feedback to help improve the paper. As we are approaching the end of the rebuttal/discussion period, we would like to highlight the contributions of our work and summarize the improvements we have made.\n\n**We have ma... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
2,
4
] | [
"nips_2022__atSgd9Np52",
"LkwejsLEBQk",
"Nww6WIzpPUT",
"i8xvomYq742",
"6jf1tCdm-qP",
"r3k6hKa0HyP",
"nips_2022__atSgd9Np52",
"nips_2022__atSgd9Np52",
"nips_2022__atSgd9Np52",
"nips_2022__atSgd9Np52",
"nips_2022__atSgd9Np52"
] |
nips_2022_YBsLfudKlBu | Learning Viewpoint-Agnostic Visual Representations by Recovering Tokens in 3D Space | Humans are remarkably flexible in understanding viewpoint changes due to visual cortex supporting the perception of 3D structure. In contrast, most of the computer vision models that learn visual representation from a pool of 2D images often fail to generalize over novel camera viewpoints. Recently, the vision architectures have shifted towards convolution-free architectures, visual Transformers, which operate on tokens derived from image patches. However, these Transformers do not perform explicit operations to learn viewpoint-agnostic representation for visual understanding. To this end, we propose a 3D Token Representation Layer (3DTRL) that estimates the 3D positional information of the visual tokens and leverages it for learning viewpoint-agnostic representations. The key elements of 3DTRL include a pseudo-depth estimator and a learned camera matrix to impose geometric transformations on the tokens, trained in an unsupervised fashion. These enable 3DTRL to recover the 3D positional information of the tokens from 2D patches. In practice, 3DTRL is easily plugged-in into a Transformer. Our experiments demonstrate the effectiveness of 3DTRL in many vision tasks including image classification, multi-view video alignment, and action recognition. The models with 3DTRL outperform their backbone Transformers in all the tasks with minimal added computation. Our code is available at https://github.com/elicassion/3DTRL. | Accept | This paper presents a method for transformers to upgrade the 2D image input to pseudo-3D. It proposes a neural layer that estimates per-token depth and also a camera pose (pitch yaw roll), then unproject token coordinates to 3D, encodes these coordinates into embeddings, then adds these with the existing embeddings, and proceeds with the rest of the transformer. This gives performance boosts on a variety of tasks, such as video alignment.
The reviewers raised concerns regarding the inferred depth maps looking more like saliency maps, as well as the depth scale ambiguity and the ambiguity of absolute camera pose inference. The rebuttal submitted by the authors included additional results that showed that the inferred depth maps and camera poses correlated with the correct ones. All reviewers appreciated the additional experiments contributed by the authors and suggested that they be included in the main paper.
| train | [
"aV6Xg9md4fZ",
"gd5W4Q64-Pw",
"cFUJ53HnJeY",
"gYs7cQg8-YO",
"HfCGVatnN",
"qGg8ogvRCTR",
"6BL7p-Zg8up",
"Osj9_nN5yVI",
"qRSadUSQRR_",
"5vEranXF7JV",
"dShtErTbYB",
"5hb5_bsZums",
"Y3SlDoA5icz",
"CW4D54SQGou",
"7GNcLa-FcpF",
"gEWa_SEgOZN",
"Sq_ugIhySwe",
"6Z2HwyZhJM9a",
"KEOZmmZyIj4... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_... | [
" Thanks for the suggestions. We have some updates and please check the 10-page version in the updated supplementary to see our changes. \n\n> Add object class on top for Figure 8\n - We have added class labels in the updated Figure 8.\n\n> Epochs\n - We apologize for the confusion, they are from different traini... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4,
4
] | [
"gd5W4Q64-Pw",
"6BL7p-Zg8up",
"dShtErTbYB",
"7GNcLa-FcpF",
"5hb5_bsZums",
"401wiYJa0bV",
"dShtErTbYB",
"5vEranXF7JV",
"KEOZmmZyIj4",
"6Z2HwyZhJM9a",
"Sq_ugIhySwe",
"Y3SlDoA5icz",
"DVSld8rAqGZ",
"nips_2022_YBsLfudKlBu",
"401wiYJa0bV",
"pBeJiy5VXoR",
"pBeJiy5VXoR",
"iAONypffKBW",
"... |
nips_2022_fVslVNBfjd8 | Does Self-supervised Learning Really Improve Reinforcement Learning from Pixels? | We investigate whether self-supervised learning (SSL) can improve online reinforcement learning (RL) from pixels. We extend the contrastive reinforcement learning framework (e.g., CURL) that jointly optimizes SSL and RL losses and conduct an extensive amount of experiments with various self-supervised losses. Our observations suggest that the existing SSL framework for RL fails to bring meaningful improvement over the baselines only taking advantage of image augmentation when the same amount of data and augmentation is used. We further perform evolutionary searches to find the optimal combination of multiple self-supervised losses for RL, but find that even such a loss combination fails to meaningfully outperform the methods that only utilize carefully designed image augmentations. After evaluating these approaches together in multiple different environments including a real-world robot environment, we confirm that no single self-supervised loss or image augmentation method can dominate all environments and that the current framework for joint optimization of SSL and RL is limited. Finally, we conduct the ablation study on multiple factors and demonstrate the properties of representations learned with different approaches. | Accept |
The paper studies an important question, and extends the contrastive reinforcement learning framework to jointly optimize SSL and RL losses. The paper also experiments with various self-supervised losses to empirically validate the main claim -- "the existing SSL framework for RL fails to bring meaningful improvement over the baselines only taking advantage of image augmentation when the same amount of data and augmentation is used"
The paper presents a surprising result and hopefully provides an interesting platform for others to build on.
The main novelty of this work is not necessarily in algorithms or systems, but rather in providing a thorough experimental evaluation of some of the insights that are known either as 'dark knowledge' or as implicit insights aggregated from a number of previous papers.
In that regard, it does a good job. Reviewer opinion is split, and rightly so, given the flaws in the presentation and experimentation:
- conclusions are not very rigorous
- Most tested SSL methods in this paper are naively applied to RL
The rebuttal, however, has yielded a stronger manuscript, which is likely to be useful to the community.
The AC strongly advises the authors to make the claims more objective, and less definitive, opinionated, or catchy/click-bait.
Further, the manuscript should be revised to include the gist of the discussion in the main paper, and additional clarifications in the appendix.
"9ezX7T0esGx",
"ApIV2hvd7y",
"jfKlSogOIoS",
"7P09AEGhZs2",
"pLwh6VgOmLN",
"wuF0nadNjZl",
"eJ9JPd-kpXvj",
"ZhQKERoTDnA",
"VpE61bGVeI7",
"RM7Ist8vmsV",
"qkHH_xXpVT8",
"_HRjKD_tEzy",
"KJTFjaZxbL",
"WICYiWssVoY"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for detailed comments and the revised manuscript. I believe the revised paper organization and the added experiments have improved the submission considerably. Therefore, I increased my score to recommend acceptance.",
" Dear reviewer GjgB,\n\nThis is a kind reminder that we revised the paper with add... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"_HRjKD_tEzy",
"_HRjKD_tEzy",
"7P09AEGhZs2",
"wuF0nadNjZl",
"WICYiWssVoY",
"WICYiWssVoY",
"KJTFjaZxbL",
"_HRjKD_tEzy",
"_HRjKD_tEzy",
"_HRjKD_tEzy",
"nips_2022_fVslVNBfjd8",
"nips_2022_fVslVNBfjd8",
"nips_2022_fVslVNBfjd8",
"nips_2022_fVslVNBfjd8"
] |
nips_2022_8FuITQn6rG3 | CRAFT: explaining using Concepts from Recursive Activation FacTorization | Despite their considerable potential, concept-based explainability methods have received relatively little attention, and explaining what’s driving models’ decisions and where it’s located in the input is still an open problem. To tackle this, we revisit unsupervised concept extraction techniques for explaining the decisions of deep neural networks and present CRAFT – a framework to generate concept-based explanations for understanding individual predictions and the model’s high-level logic for whole classes. CRAFT takes advantage of a novel method for recursively decomposing higher-level concepts into more elementary ones, combined with a novel approach for better estimating the importance of identified concepts with Sobol indices. Furthermore, we show how implicit differentiation can be used to generate concept-wise attribution explanations for individual images. We further demonstrate through fidelity metrics that our proposed concept importance estimation technique is more faithful to the model than previous methods, and, through human psychophysic experiments, we confirm that our recursive decomposition can generate meaningful and accurate concepts. Finally, we illustrate CRAFT’s potential to enable the understanding of predictions of trained models on multiple use-cases by producing meaningful concept-based explanations. | Reject | Reviewers generally agreed that this paper is innovative (the decomposition of high-level concepts into sub-concepts in particular sets this paper apart from existing concept-based methods), and appreciated its potential practical utility for the explainability community (for example by providing localization in input space in addition to concepts, which can be used to debug model errors).
All Reviewers, however, also agreed on the main weaknesses, i.e. the limits of the validation of the method (which is restricted to a single dataset and lacks rigorous quantitative metrics), and a related lack of clarity in terms of technical motivation and the use cases enabled for end-users of the method.
Despite some useful clarifications and improvements in the presentation of results provided in the rebuttals and the ensuing exchange, Reviewers were left unsatisfied by the responses addressing the mentioned main weaknesses, which raised renewed concerns about their seriousness.
As a result, the discussion after the rebuttal period was marked by the opinion, expressed by two Reviewers, that the paper's weaknesses outweigh its (albeit clear) merits and that in its current version the paper is not ready to be accepted.
"ZSNIZn6ZdNM",
"_r9C5lzxD-",
"7DXwSz-PJ0s",
"L4824TFfUi7",
"iJfN4-vycaw",
"hw7neznBt10",
"b_wE-xOwPlU",
"FIi5XI345ht",
"8rPpZPtnoxI",
"2QeD3CmYl7-",
"RWgq8DnvFU7",
"Yz0llkIbY1R",
"anXUsoZmW48",
"tgD-EtHK4dS"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \n**W3.**\n\nWe agree and upon acceptance, the extra page will be dedicated to the broader impact and limitations sections.\n\nRegarding the use of labels, we wish to avoid user confirmation bias - i.e. the user has unconscious expectations of what the explanation should look like for a given class before it is g... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"_r9C5lzxD-",
"iJfN4-vycaw",
"L4824TFfUi7",
"FIi5XI345ht",
"hw7neznBt10",
"tgD-EtHK4dS",
"FIi5XI345ht",
"anXUsoZmW48",
"Yz0llkIbY1R",
"RWgq8DnvFU7",
"nips_2022_8FuITQn6rG3",
"nips_2022_8FuITQn6rG3",
"nips_2022_8FuITQn6rG3",
"nips_2022_8FuITQn6rG3"
] |
nips_2022_MAMOi89bOL | Masked Autoencoders that Listen | This paper studies a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks, outperforming other recent models that use external supervised pre-training. Our code and models is available at https://github.com/facebookresearch/AudioMAE. | Accept | The paper has two strong accepts and two borderline reject reviews. However, as one of the reviewers did not engage with the authors post-rebuttal, I had to interpret the authors' response to the reviewer's concerns, and they seem to properly address them (even including a new experiment into the paper). The work seems to have been executed concurrently with other similar approaches, and while not entirely novel, the paper seems to include in-depth experiments and a discussion that can be beneficial to the research community. | train | [
"h8-EEjr5i2E",
"KO3PBegHYN",
"uC8u4oPVkyA",
"gPWLqOJjIpe",
"wOvG1qAOXGk",
"NL8aEppZrF0",
"V7uc8lj2cMs",
"I3ou_oa1n6D",
"6uwqr9kq9Yv",
"E2g69DcD36z",
"vr0DBQhJoK_",
"gladqu-Tt-F",
"flF80iOB5PD",
"FNNiH84ZhJ",
"EVxEmlFp7x"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the response! We are glad that most of your concerns have been properly addressed. We will experiment and include the speaker verification experiment following the suggested protocol in VoxSRC.",
" Thanks a lot for the explanation! Yes so far I have no problem with the comments from the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
4,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"KO3PBegHYN",
"vr0DBQhJoK_",
"E2g69DcD36z",
"I3ou_oa1n6D",
"6uwqr9kq9Yv",
"vr0DBQhJoK_",
"EVxEmlFp7x",
"EVxEmlFp7x",
"flF80iOB5PD",
"FNNiH84ZhJ",
"gladqu-Tt-F",
"nips_2022_MAMOi89bOL",
"nips_2022_MAMOi89bOL",
"nips_2022_MAMOi89bOL",
"nips_2022_MAMOi89bOL"
] |
nips_2022_Ho_zIH4LA90 | MinVIS: A Minimal Video Instance Segmentation Framework without Video-based Training | We propose MinVIS, a minimal video instance segmentation (VIS) framework that achieves state-of-the-art VIS performance with neither video-based architectures nor training procedures. By only training a query-based image instance segmentation model, MinVIS outperforms the previous best result on the challenging Occluded VIS dataset by over 10% AP. Since MinVIS treats frames in training videos as independent images, we can drastically sub-sample the annotated frames in training videos without any modifications. With only 1% of labeled frames, MinVIS outperforms or is comparable to fully-supervised state-of-the-art approaches on YouTube-VIS 2019/2021. Our key observation is that queries trained to be discriminative between intra-frame object instances are temporally consistent and can be used to track instances without any manually designed heuristics. MinVIS thus has the following inference pipeline: we first apply the trained query-based image instance segmentation to video frames independently. The segmented instances are then tracked by bipartite matching of the corresponding queries. This inference is done in an online fashion and does not need to process the whole video at once. MinVIS thus has the practical advantages of reducing both the labeling costs and the memory requirements, while not sacrificing the VIS performance. | Accept | All four reviewers are positive about this work. Reviewers appreciate the clear writing, simple yet highly effective idea, and strong experimental validation on three Video Instance Segmentation datasets. The authors responses further clarified and sufficiently addressed the concerns from the reviewers. The AC reads the reviews, the rebuttal, and agree with the reviewers to recommend acceptance. | train | [
"hW0cKpI9t2w",
"S-y9H7Biwre",
"Rf5DHj97Cxp",
"D_OYpoCiXo2",
"F9Wctj1MK8R",
"sU0eVOMXv0m",
"ZYHxOlV4Wdt",
"9Gq-S1dWsPn",
"rHwY6DOquAQ",
"sg6Z1sLqoK4"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer, thank you very much for the appreciation of this work!",
" I would like to thank the authors for their responses. Most of my concerns are addressed. I’ve read the comments from other reviewers and the author responses. I’ve also read the revised manuscript and appendix (the paragraphs highlighted... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"S-y9H7Biwre",
"sU0eVOMXv0m",
"sg6Z1sLqoK4",
"rHwY6DOquAQ",
"9Gq-S1dWsPn",
"ZYHxOlV4Wdt",
"nips_2022_Ho_zIH4LA90",
"nips_2022_Ho_zIH4LA90",
"nips_2022_Ho_zIH4LA90",
"nips_2022_Ho_zIH4LA90"
] |
nips_2022_7b7iGkuVqlZ | Unsupervised Learning of Equivariant Structure from Sequences | In this study, we present \textit{meta-sequential prediction} (MSP), an unsupervised framework to learn the symmetry from the time sequence of length at least three.
Our method leverages the stationary property~(e.g. constant velocity, constant acceleration) of the time sequence to learn the underlying equivariant structure of the dataset by simply training the encoder-decoder model to be able to predict the future observations.
We will demonstrate that, with our framework, the hidden disentangled structure of the dataset naturally emerges as a by-product by applying \textit{simultaneous block-diagonalization} to the transition operators in the latent space, the procedure which is commonly used in representation theory to decompose the feature-space based on the type of response to group actions.
We will showcase our method from both empirical and theoretical perspectives.
Our result suggests that finding a simple structured relation and learning a model with extrapolation capability are two sides of the same coin. The code is available at https://github.com/takerum/meta_sequential_prediction. | Accept |
While there was a certain lack of enthusiasm in the reviewers' scores, the authors' answers cleared the concerns of the reviewers participating in the discussion, and overall the recommendation leans towards acceptance. This paper is, in the reviewers' opinions, sound and adds to the literature on unsupervised learning of symmetry. The formulation (of learning symmetry by only modelling linear transitions) is nicely simple. Experiments and evaluations were generally considered to be of adequate quality.
| train | [
"qsOFATNlng3",
"2MAsWYKXaV9",
"IwtBAm74TIJ",
"RSCNYdOPnIz",
"mjoN0KYS8y",
"sQLbdUfBBLX",
"_EuDgakG8_g",
"itNlZP5Xajg",
"vjGtEqfUY9f",
"ZWPyMQB_PAj",
"TUSCUkKtAt8",
"R7oGXWdDe3S_",
"047ikamlCzD",
"qBhCpCrcUti",
"FjO0jJwnbpr",
"1bvMFKhj01",
"RknCLq83Z6e"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the further clarifications and the updates to the draft.",
" Thank you very much for your comment, and we are glad our response clarifies your concerns.\n\n> The reason that the prediction task gives disentanglement is that by representation theory, an equivariant model gives features that can be ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
2
] | [
"2MAsWYKXaV9",
"RSCNYdOPnIz",
"RknCLq83Z6e",
"sQLbdUfBBLX",
"047ikamlCzD",
"_EuDgakG8_g",
"itNlZP5Xajg",
"vjGtEqfUY9f",
"1bvMFKhj01",
"TUSCUkKtAt8",
"R7oGXWdDe3S_",
"RknCLq83Z6e",
"FjO0jJwnbpr",
"nips_2022_7b7iGkuVqlZ",
"nips_2022_7b7iGkuVqlZ",
"nips_2022_7b7iGkuVqlZ",
"nips_2022_7b7... |
nips_2022_HxZpawUrv9Q | A Conditional Randomization Test for Sparse Logistic Regression in High-Dimension | Identifying the relevant variables for a classification model with correct confidence levels is a central but difficult task in high-dimension. Despite the core role of sparse logistic regression in statistics and machine learning, it still lacks a good solution for accurate inference in the regime where the number of features $p$ is as large as or larger than the number of samples $n$. Here we tackle this problem by improving the Conditional Randomization Test (CRT). The original CRT algorithm shows promise as a way to output p-values while making few assumptions on the distribution of the test statistics. As it comes with a prohibitive computational cost even in mildly high-dimensional problems, faster solutions based on distillation have been proposed. Yet, they rely on unrealistic hypotheses and result in low-power solutions. To improve this, we propose \emph{CRT-logit}, an algorithm that combines a variable-distillation step and a decorrelation step that takes into account the geometry of $\ell_1$-penalized logistic regression problem. We provide a theoretical analysis of this procedure, and demonstrate its effectiveness on simulations, along with experiments on large-scale brain-imaging and genomics datasets. | Accept | The decision is to accept this paper.
The paper presents a method for producing asymptotically valid p-values when testing the null hypothesis of conditional randomization tests in sparse logistic regression. The method builds on a previous distillation method that examines correlations between residuals for the label y and the focal covariate x_j when they are projected onto the remaining covariates. The method corrects a bias that arises in this distillation method due to the non-linearity in penalized logistic regression. The authors prove the asymptotic validity of the resulting p-values and study the power and FDR of the procedure.
The reviewers agreed that this is a strong method and a clearly written paper. The authors answered all major questions from the reviewers and made changes in response to reviewer feedback. | train | [
"qWBbw1_Vu8S",
"5uXu-7xf91k",
"mwVvFUdjId",
"f657JpJRTtR",
"JV-HKgaETT",
"fPyMyHEfXop",
"rF44lTztoV8",
"Rj2hJ15u7f",
"O07w4mb9fmJ",
"TV-tavsEhpU",
"8O1MMZ4dv7c",
"TDLn0erASxA",
"aoGxfP2HC_b"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you to the authors for providing detailed answers to all of the reviewers and for the revised manuscript. I have no further questions, and my concerns were addressed.",
" I thank the authors for their responses and for editing the paper. I have no further questions at present.",
" Thank you for the thorough r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
3
] | [
"fPyMyHEfXop",
"Rj2hJ15u7f",
"JV-HKgaETT",
"aoGxfP2HC_b",
"aoGxfP2HC_b",
"TDLn0erASxA",
"8O1MMZ4dv7c",
"TV-tavsEhpU",
"nips_2022_HxZpawUrv9Q",
"nips_2022_HxZpawUrv9Q",
"nips_2022_HxZpawUrv9Q",
"nips_2022_HxZpawUrv9Q",
"nips_2022_HxZpawUrv9Q"
] |
nips_2022_A6EmxI3_Xc | Inducing Neural Collapse in Imbalanced Learning: Do We Really Need a Learnable Classifier at the End of Deep Neural Network? | Modern deep neural networks for classification usually jointly learn a backbone for representation and a linear classifier to output the logit of each class. A recent study has shown a phenomenon called neural collapse that the within-class means of features and the classifier vectors converge to the vertices of a simplex equiangular tight frame (ETF) at the terminal phase of training on a balanced dataset. Since the ETF geometric structure maximally separates the pair-wise angles of all classes in the classifier, it is natural to raise the question, why do we spend an effort to learn a classifier when we know its optimal geometric structure? In this paper, we study the potential of learning a neural network for classification with the classifier randomly initialized as an ETF and fixed during training. Our analytical work based on the layer-peeled model indicates that the feature learning with a fixed ETF classifier naturally leads to the neural collapse state even when the dataset is imbalanced among classes. We further show that in this case the cross entropy (CE) loss is not necessary and can be replaced by a simple squared loss that shares the same global optimality but enjoys a better convergence property. Our experimental results show that our method is able to bring significant improvements with faster convergence on multiple imbalanced datasets. | Accept | This paper examines the use of a random equiangular tight frame (ETF) as a replacement mechanism for the final classification layer in a deep neural network, and demonstrates experimental advantages in class-imbalanced training scenarios.
Reviewers gave drastically different assessments of this paper, with ratings ranging from reject to weak accept. The authors provided extensive responses to all reviewers, and Reviewer amWi participated in an extended discussion with the authors. Author responses directly addressing concerns raised by other reviewers, such as pointing to ImageNet results in response to Reviewer Nj4c asking for such experiments, appear not to have received subsequent engagement from reviewers.
The Area Chair has taken a detailed look at the paper and the entirety of the discussion, and agrees with Reviewer amWi's assessment. The work provides an interesting examination of ETF as a novel mechanism to address class-imbalanced training; the contributions meet the bar for acceptance to NeurIPS.
Reviewer amWi makes several suggestions regarding presentation of the main contributions as well as additional papers for citation and discussion, which the authors may want to take into consideration when preparing the final version of the paper. | train | [
"2OP1pbwoG5D",
"1XxQSIEte5p",
"uhix49Pfxw",
"pA8z1FvEan",
"lR-aSKBmFrB",
"HlkpLqCDpcQ",
"qaIa9S9UJgP",
"1Sg0enDxdOL",
"VoAj80bKMX",
"1I2TB-00DH",
"PKGSNw3fOZ",
"ZxW__SMyDm7",
"c7dDLS0U5n",
"xXU6c8maZNR",
"2waE7v9a4Mw",
"skwJcr04771",
"3f5t9GzlfVG",
"gNadXxrwkng",
"UU7g_LQ9CN3",
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" Our method performs much better than the weighted CE baseline (71.9 vs 68.5 in 0.005 imbalance ratio), while ArcFace is apparently worse than the weighted CE baseline (66.3 vs 68.5). Is this a conflict for ArcFace? \n\nOur Theorem 1 mainly focuses on the imbalanced training setting, compared with previous studies ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"1XxQSIEte5p",
"uhix49Pfxw",
"pA8z1FvEan",
"lR-aSKBmFrB",
"HlkpLqCDpcQ",
"qaIa9S9UJgP",
"1Sg0enDxdOL",
"VoAj80bKMX",
"1I2TB-00DH",
"ZxW__SMyDm7",
"ZxW__SMyDm7",
"c7dDLS0U5n",
"w5cpEtCbfT",
"2waE7v9a4Mw",
"skwJcr04771",
"HIzVMfDQhM",
"gNadXxrwkng",
"UU7g_LQ9CN3",
"HfMzT3AjeHt",
... |
nips_2022_4R7YrAGhnve | SegViT: Semantic Segmentation with Plain Vision Transformers | We explore the capability of plain Vision Transformers (ViTs) for semantic segmentation and propose the SegViT. Previous ViT-based segmentation networks usually learn a pixel-level representation from the output of the ViT. Differently, we make use of the fundamental component—attention mechanism, to generate masks for semantic segmentation. Specifically, we propose the Attention-to-Mask (ATM) module, in which the similarity maps between a set of learnable class tokens and the spatial feature maps are transferred to the segmentation masks. Experiments show that our proposed SegViT using the ATM module outperforms its counterparts using the plain ViT backbone on the ADE20K dataset and achieves new state-of-the-art performance on COCO-Stuff-10K and PASCAL-Context datasets. Furthermore, to reduce the computational cost of the ViT backbone, we propose query-based down-sampling (QD) and query-based up-sampling (QU) to build a Shrunk structure. With our Shrunk structure, the model can save up to 40% computations while maintaining competitive performance. | Accept | This submission has received comments from 4 official reviewers. The authors have made very detailed replies to the reviewer's comments. The authors and reviewers had quite rich discussions. After these discussions, 3 reviewers recommended weak acceptance, and 1 recommended rejection.
For the novelty concerns, the authors clarified them during the rebuttal. The reviewers have also recommended comparing with recent semantic segmentation methods using ViTs. Missing comparisons should be included in the final version, including comparisons with
[1] Ma, Xuezhe, et al. "Luna: Linear unified nested attention." NeurIPS 2021.
[2] Ryoo, Michael, et al. "Tokenlearner: Adaptive space-time tokenization for videos." NeurIPS 2021.
[3] Wu, Yu-Huan, et al. "P2T: Pyramid Pooling Transformer for Scene Understanding", IEEE TPAMI, 2022.
Only reviewer Eyo8 recommended borderline rejection. The authors made quite a detailed rebuttal, but we have not heard from the reviewer after the rebuttal.
Thus, the AC would like to recommend acceptance.
| train | [
"I6nVqXxqxUB",
"QvgxcJ28gSA",
"PQGK_gchOwX",
"IV_a0jjqaF",
"YLLcYI3lXE_",
"LULpVItfJA",
"8VKrIqiMdrL",
"rq7vAnYXwTm",
"e3Nm3Cna_Za",
"RozqhOBRSVZ",
"706DyYC1nDI",
"9rJIby3AtR-",
"chxcAVxKNhp",
"UtxgVeF5OWb",
"wb7UM6E56K0",
"GNPF-8kiwGv",
"PXdCd0hmnqE"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **About QU/QD**\n\n*We thank YDPR for the great advice; we will add those citations to the paper.*\n\n**About Table 4 and Line 234-239**\n*In Maskformer, they first use the queries to perform multiple transformer decoder calculations with the high-level feature maps from the backbone. The results are 100 queries. Then they u... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"8VKrIqiMdrL",
"8VKrIqiMdrL",
"IV_a0jjqaF",
"706DyYC1nDI",
"LULpVItfJA",
"e3Nm3Cna_Za",
"9rJIby3AtR-",
"nips_2022_4R7YrAGhnve",
"PXdCd0hmnqE",
"GNPF-8kiwGv",
"wb7UM6E56K0",
"chxcAVxKNhp",
"UtxgVeF5OWb",
"nips_2022_4R7YrAGhnve",
"nips_2022_4R7YrAGhnve",
"nips_2022_4R7YrAGhnve",
"nips_... |
nips_2022_Vt3_mJNrjt | Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure | This paper presents a new efficient black-box attribution method built on Hilbert-Schmidt Independence Criterion (HSIC). Based on Reproducing Kernel Hilbert Spaces (RKHS), HSIC measures the dependence between regions of an input image and the output of a model using the kernel embedding of their distributions. It thus provides explanations enriched by RKHS representation capabilities. HSIC can be estimated very efficiently, significantly reducing the computational cost compared to other black-box attribution methods.
Our experiments show that HSIC is up to 8 times faster than the previous best black-box attribution methods while being as faithful.
Indeed, we improve or match the state of the art of both black-box and white-box attribution methods for several fidelity metrics on ImageNet with various recent model architectures.
Importantly, we show that these advances can be transposed to efficiently and faithfully explain object detection models such as YOLOv4.
Finally, we extend the traditional attribution methods by proposing a new kernel enabling an ANOVA-like orthogonal decomposition of importance scores based on HSIC, allowing us to evaluate not only the importance of each image patch but also the importance of their pairwise interactions. Our implementation is available at \url{https://github.com/paulnovello/HSIC-Attribution-Method}. | Accept | The paper proposes a novel black-box explanation method. The proposed method uses HSIC to measure the dependence between randomly-masked inputs and the corresponding outputs, and identifies relevant patches. Based on the decomposition property, the proposed method can also find interactions between patches. Experiments quantitatively show that the proposed method outperforms (or is comparable on some evaluation measures) existing black-box methods with lower computation costs. Quantitative gains are demonstrated by finding the cause of a wrong prediction in an object detection task, and interactions between patches.
Reviewers raised concerns mainly about clarity, which the authors addressed well. I expect that the presentation of the final version will be much clearer.
A good paper with an interesting idea of using HSIC, which brings benefits in both explanation performance and computation time. Furthermore, it allows explaining interactions, which most existing methods do not. The advantages of the proposed method are demonstrated quantitatively and qualitatively. | train | [
"zqOQUOBZknr",
"h0SInk7yyAT",
"7USJWAHwToJ",
"t3QM193nJl5",
"e4fUNmDOHgq",
"FmT8peNmHsm",
"1j1Fs7y8Qf4",
"z-bNZzewpa",
"TdOwa78HFCF",
"I9utDw2FS9",
"yHBitF8bvq_Q",
"nS_1l_92KlQ",
"n47yNBtYXI",
"J4uRUY0SjSF",
"jg052CJhnFs",
"MxOtw2o5BfL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response and further explanations. I like the clarification on the patch interaction part. I increase the score accordingly.\n\n",
" Thank you for addressing my concerns. This work is interesting and adds value to the field. ",
" Thank you for the additional explanations. I will take them into... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
3,
3
] | [
"I9utDw2FS9",
"J4uRUY0SjSF",
"FmT8peNmHsm",
"e4fUNmDOHgq",
"z-bNZzewpa",
"MxOtw2o5BfL",
"z-bNZzewpa",
"jg052CJhnFs",
"J4uRUY0SjSF",
"n47yNBtYXI",
"nips_2022_Vt3_mJNrjt",
"nips_2022_Vt3_mJNrjt",
"nips_2022_Vt3_mJNrjt",
"nips_2022_Vt3_mJNrjt",
"nips_2022_Vt3_mJNrjt",
"nips_2022_Vt3_mJNrj... |
nips_2022_-ZQOx6yaVa- | Causally motivated multi-shortcut identification and removal | For predictive models to provide reliable guidance in decision-making processes, they are often required to be accurate and robust to distribution shifts. Shortcut learning--where a model relies on spurious correlations or shortcuts to predict the target label--undermines the robustness property, leading to models with poor out-of-distribution accuracy despite good in-distribution performance. Existing work on shortcut learning either assumes that the set of possible shortcuts is known a priori or is discoverable using interpretability methods such as saliency maps, which might not always be true. Instead, we propose a two-step approach to (1) efficiently identify relevant shortcuts, and (2) leverage the identified shortcuts to build models that are robust to distribution shifts. Our approach relies on having access to a (possibly) high dimensional set of auxiliary labels at training time, some of which correspond to possible shortcuts. We show both theoretically and empirically that our approach is able to identify a sufficient set of shortcuts leading to more efficient predictors in finite samples. | Accept | The reviewers agreed the paper is a worthwhile contribution in a growing area of identifying and removing shortcuts for robustness to distributional shift. Please take the reviewers' feedback into consideration for the camera-ready. | train | [
"RhQH0NHpf6",
"vlHRHZOMPLX",
"q2CsJhtSGx",
"rh7ErxHPYmq",
"-u-Has5CCNq",
"Z7M2oK4SqEg",
"ydgW2jvEn_Ey",
"BJ2_RP29og-",
"K8P7g_BNjTM",
"zXsui6UaDBH"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comment. \nGroup DRO is equivalent to our method when the loss is convex, but this equivalence might break down when the loss is non-convex. \nThis is explored in detail in the group DRO paper [1] on page 8 under theoretical comparison. \n\n[1] Sagawa, \"Distributionally Robust Neural Networks for ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"vlHRHZOMPLX",
"-u-Has5CCNq",
"Z7M2oK4SqEg",
"zXsui6UaDBH",
"K8P7g_BNjTM",
"BJ2_RP29og-",
"nips_2022_-ZQOx6yaVa-",
"nips_2022_-ZQOx6yaVa-",
"nips_2022_-ZQOx6yaVa-",
"nips_2022_-ZQOx6yaVa-"
] |
nips_2022_jtq4KwZ9_n9 | Geometry-aware Two-scale PIFu Representation for Human Reconstruction | Although PIFu-based 3D human reconstruction methods are popular, the quality of recovered details is still unsatisfactory. In a sparse (e.g., 3 RGBD sensors) capture setting, the depth noise is typically amplified in the PIFu representation, resulting in flat facial surfaces and geometry-fallible bodies. In this paper, we propose a novel geometry-aware two-scale PIFu for 3D human reconstruction from sparse, noisy inputs. Our key idea is to exploit the complementary properties of depth denoising and 3D reconstruction, for learning a two-scale PIFu representation to reconstruct high-frequency facial details and consistent bodies separately. To this end, we first formulate depth denoising and 3D reconstruction as a multi-task learning problem. The depth denoising process enriches the local geometry information of the reconstruction features, while the reconstruction process enhances depth denoising with global topology information. We then propose to learn the two-scale PIFu representation using two MLPs based on the denoised depth and geometry-aware features. Extensive experiments demonstrate the effectiveness of our approach in reconstructing facial details and bodies of different poses and its superiority over state-of-the-art methods. | Accept | All reviewers were in favor of acceptance. The AC examined the paper, reviews, and author response, and is inclined to agree. The AC would encourage the authors to incorporate their responses to the reviewers into the final version of the paper. | test | [
"q5sG9pVOk17",
"tNGM84IVOsi",
"lXL0ttKsIYx",
"RaiqQns4bv",
"9N7OAmGcVLS",
"--s78wkZRyh",
"PoWLCyeTCYH",
"PCRog7lu3-b",
"kibxePx4bYu",
"kof0enkL-bg",
"9DUknTGnLDN",
"kq7o7Eop1v",
"GlE7Ts8TK_E",
"jvKW01tNPXr",
"9GRJBmwdyG"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed response. The rebuttal addressed my concerns. I’m leaning towards acceptance.",
" Thanks for the interesting questions. Our two-scale PIFu representation has the following two working prerequisites. First, the independently modeled regions (e.g., the face regions) are salient and easy to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
3
] | [
"PCRog7lu3-b",
"RaiqQns4bv",
"9N7OAmGcVLS",
"9DUknTGnLDN",
"kibxePx4bYu",
"nips_2022_jtq4KwZ9_n9",
"nips_2022_jtq4KwZ9_n9",
"9GRJBmwdyG",
"jvKW01tNPXr",
"GlE7Ts8TK_E",
"kq7o7Eop1v",
"nips_2022_jtq4KwZ9_n9",
"nips_2022_jtq4KwZ9_n9",
"nips_2022_jtq4KwZ9_n9",
"nips_2022_jtq4KwZ9_n9"
] |
nips_2022_SLdfxFdIFeN | A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective | We propose the first unified theoretical analysis of mixed sample data augmentation (MSDA), such as Mixup and CutMix. Our theoretical results show that regardless of the choice of the mixing strategy, MSDA behaves as a pixel-level regularization of the underlying training loss and a regularization of the first layer parameters. Similarly, our theoretical results support that the MSDA training strategy can improve adversarial robustness and generalization compared to the vanilla training strategy. Using the theoretical results, we provide a high-level understanding of how different design choices of MSDA work differently. For example, we show that the most popular MSDA methods, Mixup and CutMix, behave differently, e.g., CutMix regularizes the input gradients by pixel distances, while Mixup regularizes the input gradients regardless of pixel distances. Our theoretical results also show that the optimal MSDA strategy depends on tasks, datasets, or model parameters. From these observations, we propose generalized MSDAs, a Hybrid version of Mixup and CutMix (HMix) and Gaussian Mixup (GMix), simple extensions of Mixup and CutMix. Our implementation leverages the advantages of Mixup and CutMix while remaining very efficient, with computation cost almost as negligible as that of Mixup and CutMix. Our empirical study shows that our HMix and GMix outperform the previous state-of-the-art MSDA methods in CIFAR-100 and ImageNet classification tasks. | Accept | This work proposes a theoretical analysis and unified specification for mixed sample data augmentation methods. The reviewers praise the extensive theoretical analysis as well as the strong empirical results in the paper. The authors and reviewers engaged in substantial discussion, which led multiple reviewers to revise their assessment of the paper upwards. 
I can therefore recommend accepting this paper. | train | [
"5grKWScerZa",
"fSsmFbay6O",
"jlS5xNckvzXo",
"2P6a0yZcbeR",
"nlkvg5XWO_",
"VX5mjdjUVX",
"G6EzvP5tl-y",
"dRxBvFlHvvVi",
"Xevx7iccE9-",
"n1vw6OieRja",
"7tiJel2m9AY",
"YwHPm7xRk0v",
"ei5ImxsJN0p",
"lo9Dh4O6f03",
"IjVRAOK1Pbk"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the rebuttal. \nThe overall results look good to me.\n",
" We are happy to hear that the revised version of the paper clarifies our contribution. Thanks for recommending the state-of-the-art MSDA papers. We note that we already mentioned [1, 2, 4-8] in the original paper. We added the dis... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
5,
4
] | [
"VX5mjdjUVX",
"jlS5xNckvzXo",
"dRxBvFlHvvVi",
"7tiJel2m9AY",
"nips_2022_SLdfxFdIFeN",
"IjVRAOK1Pbk",
"lo9Dh4O6f03",
"lo9Dh4O6f03",
"ei5ImxsJN0p",
"YwHPm7xRk0v",
"YwHPm7xRk0v",
"nips_2022_SLdfxFdIFeN",
"nips_2022_SLdfxFdIFeN",
"nips_2022_SLdfxFdIFeN",
"nips_2022_SLdfxFdIFeN"
] |
nips_2022_qf12cWVSksq | Inception Transformer | Recent studies show that transformer has strong capability of building long-range dependencies, yet is incompetent in capturing high frequencies that predominantly convey local information. To tackle this issue, we present a novel and general-purpose $\textit{Inception Transformer}$, or $\textit{iFormer}$ for short, that effectively learns comprehensive features with both high- and low-frequency information in visual data. Specifically, we design an Inception mixer to explicitly graft the advantages of convolution and max-pooling for capturing the high-frequency information to transformers. Different from recent hybrid frameworks, the Inception mixer brings greater efficiency through a channel splitting mechanism to adopt parallel convolution/max-pooling path and self-attention path as high- and low-frequency mixers, while having the flexibility to model discriminative information scattered within a wide frequency range. Considering that bottom layers play more roles in capturing high-frequency details while top layers more in modeling low-frequency global information, we further introduce a frequency ramp structure, i.e., gradually decreasing the dimensions fed to the high-frequency mixer and increasing those to the low-frequency mixer, which can effectively trade-off high- and low-frequency components across different layers. We benchmark the iFormer on a series of vision tasks, and showcase that it achieves impressive performance on image classification, COCO detection and ADE20K segmentation. For example, our iFormer-S hits the top-1 accuracy of 83.4% on ImageNet-1K, much higher than DeiT-S by 3.6%, and even slightly better than much bigger model Swin-B (83.3%) with only 1/4 parameters and 1/3 FLOPs. Code and models will be released. | Accept | This paper proposes a novel multi-branch style architecture for vision tasks, motivated by a frequency perspective of deep network behaviors. 
All reviewers are very positive about the motivation, presentation and experimental results. The AC believes this should be a good contribution to the neural architecture design community. | val | [
"SAcKXFzB3TS",
"XgTNqIGgOWC",
"WqcfbUdkvDs",
"p5QEk6Cf0Ov",
"x-IhQInmNkp",
"jj9oE2rG2kC",
"fKTBFqNONB",
"mTWPy6FAs_h"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the insightful and constructive comments. Please find the response to your questions below: \n\n**Q1: The proposed method does not provide any results under LARGE configurations. It is necessary to present the comparisons to either Mobile-Former, CVPR 2022/MobileViT, ICLR 2022 or Swin-L/... | [
-1,
-1,
-1,
-1,
7,
8,
8,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"mTWPy6FAs_h",
"fKTBFqNONB",
"jj9oE2rG2kC",
"x-IhQInmNkp",
"nips_2022_qf12cWVSksq",
"nips_2022_qf12cWVSksq",
"nips_2022_qf12cWVSksq",
"nips_2022_qf12cWVSksq"
] |
nips_2022_luGXvawYWJ | Dataset Distillation via Factorization | In this paper, we study dataset distillation (DD) from a novel perspective and introduce a \emph{dataset factorization} approach, termed \emph{HaBa}, which is a plug-and-play strategy portable to any existing DD baseline. Unlike conventional DD approaches that aim to produce distilled and representative samples, \emph{HaBa} explores decomposing a dataset into two components: data \emph{Ha}llucination networks and \emph{Ba}ses, where the latter is fed into the former to reconstruct image samples. The flexible combinations between bases and hallucination networks, therefore, equip the distilled data with exponential informativeness gain, which largely increases the representation capability of distilled datasets. To further increase the data efficiency of compression results, we introduce a pair of adversarial contrastive constraints on the resultant hallucination networks and bases, which increase the diversity of generated images and inject more discriminant information into the factorization. Extensive comparisons and experiments demonstrate that our method can yield significant improvement on downstream classification tasks compared with the previous state of the art, while reducing the total number of compressed parameters by up to 65\%. Moreover, distilled datasets by our approach also achieve \textasciitilde10\% higher accuracy than baseline methods in cross-architecture generalization. Our code is available \href{https://github.com/Huage001/DatasetFactorization}{here}. | Accept | The reviewers originally had concerns but these have been well addressed by the authors in a thorough rebuttal and there is a consensus for acceptance. We encourage the authors to incorporate all the comments from the reviewers in the final version. | test | [
"0LtvIVYSrXT",
"bqykcZd_IC",
"W8qR1qGYvTc",
"Or-Bxa-d3Z",
"Vlm7O_S_0G",
"z0HfmKk-4RU",
"OmEFD2NrCWD",
"HGs-UzXzXJE",
"2iU8l6tmr1g",
"lbhXrLFICIH",
"N6z1PnHylF",
"aI54MNdSh8b",
"B6fUHF4QaxI",
"X4Cr4MqhiL",
"kY8uAotxHf7",
"FJiyiR5tvuk",
"n2TiQRKblk-",
"9SSMEuPwJA",
"b1Vvxqx-9Ao",
... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" Thanks for the response. I appreciate that the authors could conduct such an ablation study. It seems that in the small-budget region, the number of bases dominates the performance, while in the large-budget region, the expressivity of the hallucinators dominates the performance. If my understanding is correct, m... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
2
] | [
"W8qR1qGYvTc",
"OmEFD2NrCWD",
"OmEFD2NrCWD",
"z0HfmKk-4RU",
"N6z1PnHylF",
"HGs-UzXzXJE",
"2iU8l6tmr1g",
"X4Cr4MqhiL",
"lbhXrLFICIH",
"FJiyiR5tvuk",
"quiqq_LX7V1",
"WRkI5WaxMWF",
"WRkI5WaxMWF",
"WRkI5WaxMWF",
"WRkI5WaxMWF",
"Bj8AZEglkxx",
"Bj8AZEglkxx",
"Bj8AZEglkxx",
"Bj8AZEglkxx... |
nips_2022_q6bZruC3dWJ | Teach Less, Learn More: On the Undistillable Classes in Knowledge Distillation | Knowledge distillation (KD) can effectively compress neural networks by training a smaller network (student) to simulate the behavior of a larger one (teacher). A counter-intuitive observation is that a more expansive teacher does not make a better student, but the reasons for this phenomenon remain unclear. In this paper, we demonstrate that this is directly attributed to the presence of \textit{undistillable classes}: when trained with distillation, the teacher's knowledge of some classes is incomprehensible to the student model. We observe that while KD improves the overall accuracy, it is at the cost of the model becoming inaccurate in these undistillable classes. After establishing their widespread existence in state-of-the-art distillation methods, we illustrate their correlation with the capacity gap between teacher and student models. Finally, we present a simple Teach Less Learn More (TLLM) framework to identify and discard the undistillable classes during training. We validate the effectiveness of our approach on multiple datasets with varying network architectures. In all settings, our proposed method is able to exceed the performance of competitive state-of-the-art techniques. | Accept | This paper makes an interesting observation on knowledge distillation such that excluding certain undistillable classes improves performance. This observation is quite interesting and potentially impactful for a better understanding of knowledge distillation. The authors use this observation to consistently improve the existing knowledge distillation methods in the experiments. One weakness is that the explanation of "why" it is beneficial to exclude certain classes is not very satisfactory.
Nevertheless, the strength of this paper outweighs the weakness. All the reviewers are positive about this paper and I also recommend acceptance. | test | [
"iPwXuaOIv8r",
"EAjZDOSQwx",
"hlMnOTVqZ3i",
"osaeg2lotp",
"PRnwyIa_SJO",
"1tSSeS72JBH",
"xULoARL9u-0",
"WJM7CAYsEZff",
"T5LLweSo3DJ",
"Q2Q2QoPLK3g",
"C2acSNOB3M2",
"3K_sozdSWEu",
"z3Ay6p2EfjE",
"WRs496SPxay",
"czJzjh2S3H7",
"jszy-HpoXIL",
"cWh8Hv4KR4",
"Xcpf0lKynj",
"XvvhBJP9XJ8"... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the additional results. I have increased my recommendation, although I stand by my review regarding the lack of concrete understanding of undistillable classes.",
" Thanks for your reply. The additional analysis of the undistillable classes from the Supplementary is interesting and resolves my concerns.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"osaeg2lotp",
"z3Ay6p2EfjE",
"PRnwyIa_SJO",
"1tSSeS72JBH",
"C2acSNOB3M2",
"T5LLweSo3DJ",
"cWh8Hv4KR4",
"jszy-HpoXIL",
"XvvhBJP9XJ8",
"nips_2022_q6bZruC3dWJ",
"cWh8Hv4KR4",
"jszy-HpoXIL",
"Xcpf0lKynj",
"XvvhBJP9XJ8",
"XvvhBJP9XJ8",
"nips_2022_q6bZruC3dWJ",
"nips_2022_q6bZruC3dWJ",
"... |
nips_2022_xnuN2vGmZA0 | VITA: Video Instance Segmentation via Object Token Association | We introduce a novel paradigm for offline Video Instance Segmentation (VIS), based on the hypothesis that explicit object-oriented information can be a strong clue for understanding the context of the entire sequence. To this end, we propose VITA, a simple structure built on top of an off-the-shelf Transformer-based image instance segmentation model. Specifically, we use an image object detector as a means of distilling object-specific contexts into object tokens. VITA accomplishes video-level understanding by associating frame-level object tokens without using spatio-temporal backbone features. By effectively building relationships between objects using the condensed information, VITA achieves the state-of-the-art on VIS benchmarks with a ResNet-50 backbone: 49.8 AP, 45.7 AP on YouTube-VIS 2019 & 2021, and 19.6 AP on OVIS. Moreover, thanks to its object token-based structure that is disjoint from the backbone features, VITA shows several practical advantages that previous offline VIS methods have not explored - handling long and high-resolution videos with a common GPU, and freezing a frame-level detector trained on image domain. Code is available at the link. | Accept | All four reviewers are positive about this work (with three Accept and one Weak Accept). All reviewers appreciate the clear writing, solid results, and the idea of using local attentions in transformers to associate object token extracted at each frame. During the discussion phase, the authors further clarified some of the questions and present additional results (e.g., limitations and ablation on frozen/finetuned detector). After reading the reviews and the rebuttal, the AC agrees with the reviewers that this is a solid work with strong results on video instance segmentation. The AC thus recommends to accept. | train | [
"fJIzT5yHN-y",
"Ui1DeBuQy4h",
"UgmsQgvmobU",
"G2KPFKBJvqK",
"6gJMQC2h9A2",
"R3s4AIiq-zQ",
"vWE9Fr-lorF",
"iUeXUGnWaZ",
"rBa5PpN3JyR",
"W8wZLk1D312",
"qgiwLeLGL4p",
"LT3DgPfS7CL"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the authors' response. They try to address all the concerns. I agree with the authors that, performance-wise, it's a new benchmark across all datasets. However, I still think its two-stage processing (1st token extraction, 2nd main network) is quite burdensome and stronger features from mask2former help i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"vWE9Fr-lorF",
"iUeXUGnWaZ",
"G2KPFKBJvqK",
"6gJMQC2h9A2",
"LT3DgPfS7CL",
"qgiwLeLGL4p",
"W8wZLk1D312",
"rBa5PpN3JyR",
"nips_2022_xnuN2vGmZA0",
"nips_2022_xnuN2vGmZA0",
"nips_2022_xnuN2vGmZA0",
"nips_2022_xnuN2vGmZA0"
] |
nips_2022_kZnGYt-3f_X | Hilbert Distillation for Cross-Dimensionality Networks | 3D convolutional neural networks have revealed superior performance in processing volumetric data such as video and medical imaging. However, the competitive performance by leveraging 3D networks results in huge computational costs, which are far beyond those of 2D networks. In this paper, we propose a novel Hilbert curve-based cross-dimensionality distillation approach that facilitates the knowledge of 3D networks to improve the performance of 2D networks. The proposed Hilbert Distillation (HD) method preserves the structural information via the Hilbert curve, which maps high-dimensional (>=2) representations to one-dimensional continuous space-filling curves. Since the distilled 2D networks are supervised by the curves converted from dimensionally heterogeneous 3D features, the 2D networks are given an informative view in terms of learning structural information embedded in well-trained high-dimensional representations. We further propose a Variable-length Hilbert Distillation (VHD) method to dynamically shorten the walking stride of the Hilbert curve in activation feature areas and lengthen the stride in context feature areas, forcing the 2D networks to pay more attention to learning from activation features. The proposed algorithm outperforms the current state-of-the-art distillation techniques adapted to cross-dimensionality distillation on two classification tasks. Moreover, the distilled 2D networks by the proposed method achieve competitive performance with the original 3D networks, indicating the lightweight distilled 2D networks could potentially be the substitution of cumbersome 3D networks in the real-world scenario. | Accept | This submission was reviewed by four reviewers. All reviewers provided detailed and informative reviews. During rebuttal, the authors actively submitted detailed responses, which led the reviewers to improve their evaluations and scores. 
Overall, this is an interesting paper and an accept is recommended. | test | [
"4xAsOzpbEA",
"bTQnTg4wru",
"aoVAnW4hFtB",
"JJIe8Et3cOW8",
"aPuWIimhu4",
"V_QnDybtkxr",
"B1aScn4H8bW",
"MZC_HGi1SfI",
"L0lXKROnXq",
"xgW4gDCBPx",
"mNV7mNhzAXw",
"pmEiVv5ySy0",
"kjkM-F58RDj",
"SzLSXU_uQYJ",
"9kCkd04JV9h",
"NAgjGsmOj_R"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer iQbj,\n\nWe truly appreciate your recommending the interesting works [1, 2]. In the revision, we will discuss their contributions and highlight the differences from our work in the Related Works. Moreover, we will follow all your constructive comments to improve our paper. Please kindly let us know if y... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
4
] | [
"bTQnTg4wru",
"MZC_HGi1SfI",
"JJIe8Et3cOW8",
"mNV7mNhzAXw",
"kjkM-F58RDj",
"kjkM-F58RDj",
"kjkM-F58RDj",
"kjkM-F58RDj",
"NAgjGsmOj_R",
"NAgjGsmOj_R",
"9kCkd04JV9h",
"SzLSXU_uQYJ",
"nips_2022_kZnGYt-3f_X",
"nips_2022_kZnGYt-3f_X",
"nips_2022_kZnGYt-3f_X",
"nips_2022_kZnGYt-3f_X"
] |
nips_2022_ucNDIDRNjjv | Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting | Transformers have shown great power in time series forecasting due to their global-range modeling ability. However, their performance can degenerate terribly on non-stationary real-world data in which the joint distribution changes over time. Previous studies primarily adopt stationarization to attenuate the non-stationarity of original series for better predictability. But the stationarized series deprived of inherent non-stationarity can be less instructive for real-world bursty events forecasting. This problem, termed over-stationarization in this paper, leads Transformers to generate indistinguishable temporal attentions for different series and impedes the predictive capability of deep models. To tackle the dilemma between series predictability and model capability, we propose Non-stationary Transformers as a generic framework with two interdependent modules: Series Stationarization and De-stationary Attention. Concretely, Series Stationarization unifies the statistics of each input and converts the output with restored statistics for better predictability. To address the over-stationarization problem, De-stationary Attention is devised to recover the intrinsic non-stationary information into temporal dependencies by approximating distinguishable attentions learned from raw series. Our Non-stationary Transformers framework consistently boosts mainstream Transformers by a large margin, which reduces MSE by 49.43% on Transformer, 47.34% on Informer, and 46.89% on Reformer, making them the state-of-the-art in time series forecasting. Code is available at this repository: https://github.com/thuml/Nonstationary_Transformers. | Accept | The paper introduces a transformer-based method for non-stationary time series forecasting.
This research addresses a clear need, as acknowledged by the reviewers. Also, most reviewers found the method clearly described and the experiments compelling, demonstrating an improvement of the state of the art.
The reviewers asked questions about the baselines, evaluation methods and ablation studies. They also made requests related to clarifying the wording and some of the theory. The authors put significant effort into addressing the comments, offering detailed responses to every reviewer. Only one of the reviewers responded during the discussion period, and that response came very late in the period. However, I read the authors' responses and concluded that they adequately addressed most issues raised by the reviewers.
As the model is in the Transformer space, and transformers have previously been shown to be state of the art on a number of tasks, I do not find it necessary to compare against other 'families' of methods. So I will consider that issue addressed as well. | train | [
"CaGagqU0g8",
"ZGnFhvSMhFi",
"jdNFG3_4HE-",
"dUx7DRHqdLE",
"94IuyuI18zb",
"PTGARHYIonP",
"-LEJWdNfdfwp",
"XqzOc4ax-o4",
"NQ5CEYNr_Zwt",
"KgJDHX3UdH4",
"eK7GcrQpL0",
"dZWw2cXfdNA",
"Z0ZJu4xocbH",
"_dsLH_Pv5E3",
"J694mijEMXu",
"brW6RJvSmQ",
"evEKk77-rUm",
"iuRCL_LBw0tm",
"khBkzzwZW... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
... | [
" **Q6: \"why do not consider other statistic features?\"**\n\n**(1) Our proposed Series Stationarization is powerful enough in enhancing the time series stationarity.** The comparison of ADF test statistic is shown as follows. Note that a smaller value of ADF Test Statistic means more likely to be stationarity. \n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"ZGnFhvSMhFi",
"jdNFG3_4HE-",
"dUx7DRHqdLE",
"M1uwH5kVmmD",
"khBkzzwZWh",
"nips_2022_ucNDIDRNjjv",
"XqzOc4ax-o4",
"cRsEi9uUaEf",
"KgJDHX3UdH4",
"0mKB71pMTsN",
"dZWw2cXfdNA",
"Z0ZJu4xocbH",
"_dsLH_Pv5E3",
"M1uwH5kVmmD",
"brW6RJvSmQ",
"evEKk77-rUm",
"iuRCL_LBw0tm",
"khBkzzwZWh",
"n... |
nips_2022_lme1MKnSMb | VCT: A Video Compression Transformer | We show how transformers can be used to vastly simplify neural video compression. Previous methods have been relying on an increasing number of architectural biases and priors, including motion prediction and warping operations, resulting in complex models. Instead, we independently map input frames to representations and use a transformer to model their dependencies, letting it predict the distribution of future representations given the past. The resulting video compression transformer outperforms previous methods on standard video compression data sets. Experiments on synthetic data show that our model learns to handle complex motion patterns such as panning, blurring and fading purely from data. Our approach is easy to implement, and we release code to facilitate future research. | Accept | This paper uses transformers for video compression, using fewer components compared to competing methods. Video compression is an important application in machine learning, and the use of transformers is well-timed w.r.t. generally strong interest in the architecture. There were some concerns over clarity of presentation, as well as issues with some of the experimentation, which the reviewers and authors mostly seem to have been able to work out. There is one exception in one reviewer: the authors seem to have worked very hard to address all of this reviewer's concerns, but said reviewer did not adjust their score at all. In addition there was disagreement on some of that reviewer's more prominent points.
So I recommend acceptance of this paper.
Overall, w64c stood out as an exceptional reviewer, engaging beyond their own review and interacting actively with both the authors and the other reviewers. xvyj brought up some good points, but was almost tyrannical towards the authors and never gave back in terms of score even when the authors clearly satisfied their concerns.
"t4EUBBy1GC",
"P1-Ke2oF_t",
"sz2vuy9Oajh",
"G1gE1sye-uB",
"0zyLYrYVqRO",
"_neza8ITVhr",
"owirTA2Gcsu",
"QQt7v2SvPG",
"EGoljyKuZlU",
"B9XQLZ0law3",
"HOTR4h-9zqB",
"vBTejJZMCGM",
"TclkYX_W1VY",
"iWC9e00irz",
"zDlwNOLegxN",
"5j2HRbzXDMZ",
"mT6BjJMtohP",
"4lVFr7lRD7B",
"Yeu-jPG5XTy",... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
" We have updated the manuscript with the modified figures and addressed the typos. Let us know if Fig. 1 in particular is an improvement from your point of view.",
" (See response above)",
" Thank the authors for providing extra results on VCT. They are an addition to the paper. I decide to keep my current eva... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
5
] | [
"wA3dKxtJEvU",
"B9XQLZ0law3",
"zDlwNOLegxN",
"mT6BjJMtohP",
"_neza8ITVhr",
"iWC9e00irz",
"nips_2022_lme1MKnSMb",
"zDlwNOLegxN",
"HOTR4h-9zqB",
"HOTR4h-9zqB",
"vBTejJZMCGM",
"TclkYX_W1VY",
"Yeu-jPG5XTy",
"zDlwNOLegxN",
"atMi363ppvn",
"nips_2022_lme1MKnSMb",
"r_6zlwehFpe",
"M2HxUoy6C... |
nips_2022_sGugMYr3Hdy | Pragmatically Learning from Pedagogical Demonstrations in Multi-Goal Environments | Learning from demonstration methods usually leverage close to optimal demonstrations to accelerate training. By contrast, when demonstrating a task, human teachers deviate from optimal demonstrations and pedagogically modify their behavior by giving demonstrations that best disambiguate the goal they want to demonstrate. Analogously, human learners excel at pragmatically inferring the intent of the teacher, facilitating communication between the two agents. These mechanisms are critical in the few demonstrations regime, where inferring the goal is more difficult. In this paper, we implement pedagogy and pragmatism mechanisms by leveraging a Bayesian model of Goal Inference from demonstrations. We highlight the benefits of this model in multi-goal teacher-learner setups with two artificial agents that learn with goal-conditioned Reinforcement Learning. We show that combining BGI-agents (a pedagogical teacher and a pragmatic learner) results in faster learning and reduced goal ambiguity over standard learning from demonstrations, especially in the few demonstrations regime. | Accept | After a strong rebuttal from the authors and an extensive discussion among the reviewers, I believe the paper's pros outweigh its cons and this paper will be a valuable contribution to NeurIPS. I recommend it for acceptance and encourage the authors to address the reviewers' comments for the camera-ready version of the paper, especially regarding the weaknesses of the empirical evaluation and differentiation against conventional GCRL.
| train | [
"3-yWVzedpHz",
"FfdrJez4tMe",
"iHKw0OQnXw-",
"ke7_CDav0mk",
"k4gkiaGMgOU",
"A3DHHs4ZhK",
"bPJYKXbuP_t",
"D1dsynpx3-5J",
"FVlDTyeYNcK",
"AxZmNrudgcT",
"GbYUeprZKW",
"FF36DIxsfhz",
"3T_UdhN98z-",
"4UoW-llrhfJ",
"wKwpAbWYvZi",
"kvngpOfSmG",
"BoiFqgiU2lK",
"PEffkr4T8AI",
"hohjcjdZWrC... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" As the deadline for the end of the discussion period approaches, we would like to add an argument that may clarify the concern.\n\nIn real-world problems, inferring goals from demonstrations and more generally actions is a crucial part to exchange information and learn efficiently, as a consensus of work in devel... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"ke7_CDav0mk",
"iHKw0OQnXw-",
"wKwpAbWYvZi",
"k4gkiaGMgOU",
"3T_UdhN98z-",
"hhmnN3lPUvB",
"gMyGXvzyPsK",
"FVlDTyeYNcK",
"AxZmNrudgcT",
"BoiFqgiU2lK",
"FF36DIxsfhz",
"Nx17r22J8Q",
"4UoW-llrhfJ",
"hhmnN3lPUvB",
"kvngpOfSmG",
"gMyGXvzyPsK",
"PEffkr4T8AI",
"Bmah8rnViHc",
"nips_2022_s... |
nips_2022_bntkx18xEb4 | HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes | Learning to generate diverse scene-aware and goal-oriented human motions in 3D scenes remains challenging due to the mediocre characters of the existing datasets on Human-Scene Interaction (HSI); they only have limited scale/quality and lack semantics. To fill in the gap, we propose a large-scale and semantic-rich synthetic HSI dataset, denoted as HUMANISE, by aligning the captured human motion sequences with various 3D indoor scenes. We automatically annotate the aligned motions with language descriptions that depict the action and the individual interacting objects; e.g., sit on the armchair near the desk. HUMANISE thus enables a new generation task, language-conditioned human motion generation in 3D scenes. The proposed task is challenging as it requires joint modeling of the 3D scene, human motion, and natural language. To tackle this task, we present a novel scene-and-language conditioned generative model that can produce 3D human motions of the desirable action interacting with the specified objects. Our experiments demonstrate that our model generates diverse and semantically consistent human motions in 3D scenes.
| Accept | Paper was reviewed by four reviewers and received: 1 x Borderline Accept, 1 x Borderline Reject, 1 x Weak Accept and 1 x Accept. The general sentiment of reviewers was positive. Main identified concerns were with lack of diversity in the dataset and potential realism issues arising from construction of the dataset (placing pre-recorded motions into different scenes). Some of these concerns have been somewhat alleviated through the rebuttal. That said, [bBY2] remained concerned with the realism of interactions. AC agrees with [vzPA] that collecting "natural" real-world data for this problem would be difficult and laborious and that the proposed dataset can serve as a good bridge towards this ultimate goal and could be a stepping stone for future research. Therefore the decision is to Accept the paper. | train | [
"o-VTML9ulBm",
"UfG3YZ7oOU",
"-Ivqe8Yp-Gy",
"obIAjWek_o",
"EIiFmsR4Pl1",
"5LW41n0GRCI",
"PeV9W5sD2tL",
"2z_wQqdzgj6",
"EJ1X-0V9OGE",
"thbYAByoppX",
"MyqoaohH2vh",
"o7wgtUy-Ohm",
"DrGli7HJqWO",
"w58kXUdqz2t",
"NjfbqFkRMft",
"7oPleudlLtP",
"rDyXig_RTv",
"cudd8ExUfO4"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your time and constructive comments. We will integrate the feedback into the revision and further improve the quality and clarity of the paper. If we have resolved all your concerns, we kindly ask you to consider raising the rating. We believe our work would promote future research in the community!",
... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"UfG3YZ7oOU",
"7oPleudlLtP",
"2z_wQqdzgj6",
"thbYAByoppX",
"cudd8ExUfO4",
"7oPleudlLtP",
"NjfbqFkRMft",
"o7wgtUy-Ohm",
"w58kXUdqz2t",
"cudd8ExUfO4",
"7oPleudlLtP",
"rDyXig_RTv",
"NjfbqFkRMft",
"nips_2022_bntkx18xEb4",
"nips_2022_bntkx18xEb4",
"nips_2022_bntkx18xEb4",
"nips_2022_bntkx... |
nips_2022_2-REuflJDT | Fully Convolutional One-Stage 3D Object Detection on LiDAR Range Images | We present a simple yet effective fully convolutional one-stage 3D object detector for LiDAR point clouds of autonomous driving scenes, termed FCOS-LiDAR. Unlike the dominant methods that use the bird-eye view (BEV), our proposed detector detects objects from the range view (RV, a.k.a. range image) of the LiDAR points. Due to the range view's compactness and compatibility with the LiDAR sensors' sampling process on self-driving cars, the range view-based object detector can be realized by solely exploiting the vanilla 2D convolutions, departing from the BEV-based methods which often involve complicated voxelization operations and sparse convolutions.
For the first time, we show that an RV-based 3D detector with standard 2D convolutions alone can achieve comparable performance to state-of-the-art BEV-based detectors while being significantly faster and simpler. More importantly, almost all previous range view-based detectors only focus on single-frame point clouds since it is challenging to fuse multi-frame point clouds into a single range view. In this work, we tackle this challenging issue with a novel range view projection mechanism, and for the first time demonstrate the benefits of fusing multi-frame point clouds for a range-view based detector. Extensive experiments on nuScenes show the superiority of our proposed method and we believe that our work can be strong evidence that an RV-based 3D detector can compare favourably with the current mainstream BEV-based detectors. Code will be made publicly available. | Accept | After the rebuttal and discussion two reviewers recommend acceptance, one rejection. In their rebuttal, the authors were able to convincingly resolve all issues raised. Thus the AC sees no reason to reject this paper. | train | [
"Uf88lsECFNI",
"0ehnKlMjd9i",
"kVGGwqKBjBy",
"ThQDn9yjW-S",
"ucF6AnP5tHS",
"zA7rjjczM2wc",
"R3ITWvldA-8",
"jlTCUwQs5o7",
"n5MpGK03q03",
"0zzFP6kmU3A",
"hlbVeC4nYsv",
"KWtzlXKRzzM"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your prompt response!! I fully understand! Good luck!",
" Thank you very much for your feedback. We will add the current discussion to our manuscript.\n\nIn fact, the convolution with stride 2 can also keep the one-to-one mapping by assuming that each feature location is mapped to the center of th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"0ehnKlMjd9i",
"kVGGwqKBjBy",
"R3ITWvldA-8",
"R3ITWvldA-8",
"hlbVeC4nYsv",
"n5MpGK03q03",
"KWtzlXKRzzM",
"hlbVeC4nYsv",
"0zzFP6kmU3A",
"nips_2022_2-REuflJDT",
"nips_2022_2-REuflJDT",
"nips_2022_2-REuflJDT"
] |
nips_2022_mMuVRbsvPyw | GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models | Prevalent semantic segmentation solutions are, in essence, a dense discriminative classifier of p(class|pixel feature). Though straightforward, this de facto paradigm neglects the underlying data distribution p(pixel feature|class), and struggles to identify out-of-distribution data. Going beyond this, we propose GMMSeg, a new family of segmentation models that rely on a dense generative classifier for the joint distribution p(pixel feature,class). For each class, GMMSeg builds Gaussian Mixture Models (GMMs) via Expectation-Maximization (EM), so as to capture class-conditional densities. Meanwhile, the deep dense representation is end-to-end trained in a discriminative manner, i.e., maximizing p(class|pixel feature). This endows GMMSeg with the strengths of both generative and discriminative models. With a variety of segmentation architectures and backbones, GMMSeg outperforms the discriminative counterparts on three closed-set datasets. More impressively, without any modification, GMMSeg even performs well on open-world datasets. We believe this work brings fundamental insights into the related fields. | Accept | This paper proposes to learn a generative model (a mixture of Gaussians) on the discriminative features. The proposed method achieves strong performance on semantic segmentation and is capable of anomaly detection.
The paper was reviewed by 4 reviewers.
Reviewer o8w9 (rating: 5) pointed out 2 missing references and asked about speed. The authors clarified the difference between their work and the two references, and showed that the inference overhead is negligible.
Reviewer 5AWr (rating: 6) asked about the assumption of a uniform prior on classes. The authors explained that this is a common assumption, and added results with a learned non-uniform class prior.
Reviewer TvcJ (rating: 5) asked about the motivation for the Gaussian mixture model, the meaning of the mixture components, the computational overhead, and comparison with an MRF prior. The authors addressed the questions in detail and added new results. The reviewer read the authors' rebuttal and raised the rating to the current level 5.
Reviewer 1Te2 (rating: 7) wrote a very detailed review and was mostly satisfied with the authors' rebuttal, although this reviewer was still not entirely sure about joint training.
---------------
Overall, for the two reviewers with ratings 5, their concerns did not suggest serious flaws of the paper, and the authors addressed their concerns satisfactorily.
I thus lean toward accepting this paper.
| train | [
"FpK1jHARzm",
"dZv69INwaq9",
"6Y3_J9izJt",
"trUCSZXSz_N",
"QcPgyaP16z9",
"-i6Q_nhJlSa",
"01hMxt2Gg5j",
"DpNAr3Q4Lgs",
"RLnRE_gP4ji",
"XJBy0FOo9xl",
"BZ_iHsk8wS1",
"VgN_x5Jk193",
"hUSxQb3KhMj"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This response clarifies some big things, and I continue to recommend acceptance.\n\nI'm still not sure I follow exactly how joint training is being done, but certainly glad that it is, and I think this comments gives some more hints. Based on this rebuttal, I think there's a good chance that it will be clearer in... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"QcPgyaP16z9",
"-i6Q_nhJlSa",
"VgN_x5Jk193",
"hUSxQb3KhMj",
"hUSxQb3KhMj",
"VgN_x5Jk193",
"VgN_x5Jk193",
"BZ_iHsk8wS1",
"XJBy0FOo9xl",
"nips_2022_mMuVRbsvPyw",
"nips_2022_mMuVRbsvPyw",
"nips_2022_mMuVRbsvPyw",
"nips_2022_mMuVRbsvPyw"
] |
nips_2022_9aLbntHz1Uq | Counterfactual Fairness with Partially Known Causal Graph | Fair machine learning aims to avoid treating individuals or sub-populations unfavourably based on \textit{sensitive attributes}, such as gender and race. Those methods in fair machine learning that are built on causal inference ascertain discrimination and bias through causal effects. Though causality-based fair learning is attracting increasing attention, current methods assume the true causal graph is fully known. This paper proposes a general method to achieve the notion of counterfactual fairness when the true causal graph is unknown. To select features that lead to counterfactual fairness, we derive the conditions and algorithms to identify ancestral relations between variables on a \textit{Partially Directed Acyclic Graph (PDAG)}, specifically, a class of causal DAGs that can be learned from observational data combined with domain knowledge. Interestingly, we find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided: the sensitive attributes do not have ancestors in the causal graph. Results on both simulated and real-world datasets demonstrate the effectiveness of our method. | Accept | This paper received divergent views, in the sense that two reviewers gave positive assessments (6 and 7) while the other reviewer gave a negative assessment (score of 3). The paper also prompted very 'heavy' discussions between the reviewer with the negative opinion and the authors.
First of all, I would like to thank the reviewer involved for patiently discussing with the authors and dedicating valuable personal time.
Let me start with the aspects all reviewers *more or less agree* on:
a) The main technical piece is an efficient algorithm with provable guarantees for identifying definite non-descendants and definite descendants from an MPDAG - a maximally oriented partially directed acyclic graph, i.e., the equivalence class of causal DAGs one obtains after incorporating arbitrary side information. Previous such results were known for CPDAGs and do not carry over to MPDAGs. Therefore it is a non-trivial result (specifically Lemma 4.4). So all reviewers agree that finding definite non-descendants in equivalence classes that also include side information is a very solid contribution.
b) The aspect on which reviewers had divergent opinions is this: the paper's claim to be able to train counterfactually fair classifiers by leveraging the result from [Kusner et al. 2017] that any function of non-descendants is counterfactually fair.
One of the reviewer's strong contentions is that in most fairness datasets, most variables that are highly predictive of outcomes will also be downstream of sensitive attributes like race, etc., and therefore relying only on non-descendants is not exactly a realistic application. The authors cited their empirical structure-learning results, which show very few descendants, and comments from the Kusner et al. 2017 paper to bolster their case. The reviewer responded by citing alternate statements from the same paper, etc.
*My opinion* is that in a specific context where fairness with respect to a specific sensitive attribute is desired, there are also often other features that have no causal relationship with the sensitive attribute but do have a *correlation* (examples include age and race, race and gender, etc.). To cite a recent reference, please see Example 15 in https://arxiv.org/pdf/2207.11385.pdf (this reference is recent and I am *not* expecting the authors or anyone else to have known it - it is cited just to demonstrate the point). The example shows *testable* correlations between sensitive attributes and non-descendants in the COMPAS and Adult datasets.
This shows that neither a) causal sufficiency nor b) the non-existence of non-descendants is realistic. In fact, spuriously related non-descendants give rise to spurious bias, which may not be an object of correction for fairness (broadly speaking). This shows that causal sufficiency is a strong assumption (which the authors have made) and also that non-descendants do exist.
c) Another point to be noted is that Kusner et al. 2017 do consider confounded models, unlike the authors. Once you view the exogenous variables, endogenous (observed) variables and sensitive attribute as one full deterministic system, their point is that ALL exogenous + non-descendant endogenous variables are "non-descendants" topologically and therefore could be used. They did not imply non-descendant endogenous variables 'only', as the authors contend in their discussions. In fact, the algorithm section in Kusner et al. 2017 advocates sampling the exogenous variables from some side information (level 2 and 3 information) and forming a predictor as a function of exogenous *and* non-descendant endogenous variables.
Therefore, the reviewer has a valid point on the discussed aspect as well. The authors may want to pay attention to this.
*In summary*: The authors' contention that the Kusner et al. 2017 paper advocates non-descendant endogenous variables as its main sufficient criterion appears to be not exactly correct. However, non-descendants and their confounding with the sensitive attribute constitute a more realistic model. Nevertheless, the authors' core technical structure-learning contribution is also noteworthy.
If this line of work is pursued to the point where one could find non-descendants even under limited confounding (between sensitive attributes and non-descendants - a mild violation of causal sufficiency), it would be a step towards obtaining counterfactually fair classifiers (although even such a classifier would have to sacrifice a lot of accuracy, depending on how many descendants one observes). However, even the positive reviewers have opined that the main strength of the paper is a solid structure-learning result that identifies non-descendants in a fully observational setting.
*Recommendation*: In the spirit of not blocking valid ideas that are fundamental, and given that one cannot always make the weakest set of assumptions and still make progress, I tend to favor acceptance. A *very strong* suggestion to the authors: I would place structure learning as the centerpiece and motivate it by the need to learn non-descendants (in the general sense), as in Kusner et al. 2017. The authors also need to highlight Fair relax - a relaxation they have proposed that uses possible descendants and definite non-descendants to predict - since it seems closer to the counterfactually fair approach than the other approaches, thereby removing the singular focus on (as the discussions would have one believe) only observed definite non-descendants.
| test | [
"jVND2YMX9GK",
"N6sBmkSaWFm",
"9HLwOOjuj_7",
"X04VQkrm_fk",
"7moC7oAvCtP",
"KrkrH4eYgA",
"zuKOiCzhUcK",
"4wbAzBo3XVm",
"84DZ6azLia",
"E955xneNcfi",
"Pa6m_my4XGP",
"7y40m6iw65k",
"y4PfFI5opj",
"9iETKhzjKM8",
"uxf-NCgHVx",
"xwE7x6ctgNG",
"-6ihAznOPKZ",
"pfnAf1HSATk",
"3bsovDta5sq",... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
... | [
" Error terms (or latent variables) are never observed, they are estimated. The level 2+ methods do that by explicitly using the sensitive attributes (and, depending on the structure, their descendants). Hence the actual function that computes predictions explicitly uses the sensitive attributes (and etc). You can ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"X04VQkrm_fk",
"9HLwOOjuj_7",
"7moC7oAvCtP",
"7moC7oAvCtP",
"zuKOiCzhUcK",
"4wbAzBo3XVm",
"nips_2022_9aLbntHz1Uq",
"E955xneNcfi",
"9iETKhzjKM8",
"Pa6m_my4XGP",
"7y40m6iw65k",
"dG6JZ05zui",
"xDaqcLe0za",
"ZFyVul-ZnWt",
"pfnAf1HSATk",
"nips_2022_9aLbntHz1Uq",
"xDaqcLe0za",
"1yUao1HJS... |
nips_2022_LMuh9bS4tqF | Learning Distinct and Representative Modes for Image Captioning | Over the years, state-of-the-art (SoTA) image captioning methods have achieved promising results on some evaluation metrics (e.g., CIDEr). However, recent findings show that the captions generated by these methods tend to be biased toward the "average" caption that only captures the most general mode (a.k.a, language pattern) in the training corpus, i.e., the so-called mode collapse problem. Affected by it, the generated captions are limited in diversity and usually less informative than natural image descriptions made by humans. In this paper, we seek to avoid this problem by proposing a Discrete Mode Learning (DML) paradigm for image captioning. Our innovative idea is to explore the rich modes in the training caption corpus to learn a set of "mode embeddings", and further use them to control the mode of the generated captions for existing image captioning models. Specifically, the proposed DML optimizes a dual architecture that consists of an image-conditioned discrete variational autoencoder (CdVAE) branch and a mode-conditioned image captioning (MIC) branch. The CdVAE branch maps each image caption to one of the mode embeddings stored in a learned codebook, and is trained with a pure non-autoregressive generation objective to make the modes distinct and representative. The MIC branch can be simply modified from an existing image captioning model, where the mode embedding is added to the original word embeddings as the control signal. In the experiments, we apply the proposed DML to two widely used image captioning models, Transformer and AoANet. The results show that the learned mode embedding successfully facilitates these models to generate high-quality image captions with different modes, further leading to better performance for both diversity and quality on the MS COCO dataset. 
| Accept | The paper tackles the problem of mode collapse in image captioning and provide a method for generating diverse captions. The proposed approach uses a VAE to learn various modes, each of which can produce a different caption, along with various technical innovations to train the model. Experiments with two models on MS COCO demonstrate the effectiveness of the approach. The paper offers useful insights for the challenging and important problem of diverse captioning and will inform future work in this space. I encourage the authors to make the revisions suggested by reviewers to improve the clarity of the writing and also include adequate justification for the choice of their base models. | train | [
"zDfe1Sjsxhj",
"5qJ_uzPplCk",
"SQ-KOm8hncp",
"XUCFY5KgTw",
"ftU_UjXv9iR",
"Fpwf5PxHp0P",
"NSN1DmvHg6-",
"q0dvVGnLzyN",
"JQV7QTuF8LB",
"AErXxR1BcJK",
"eJw8mEYdJb",
"CNf1xtzIBqy",
"0s9JmxVUDgO",
"jEzMX9BPxwV",
"ya41QVm0gF",
"3o8tGqKObDl",
"_QOOdnBNBlr",
"UnysDwy9Ulj",
"uHVOeeylf5",... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thank you very much for your encouragement and valuable comments on our work! We will keep working on the mode collapse problem to get a better understanding.",
" Thank you for the insightful comments on our work! We will include the discussions and explanations of the comparison part w.r.t. the metrics and mod... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"ftU_UjXv9iR",
"q0dvVGnLzyN",
"Fpwf5PxHp0P",
"NSN1DmvHg6-",
"JQV7QTuF8LB",
"ya41QVm0gF",
"UnysDwy9Ulj",
"e94c6ND9XPi",
"AErXxR1BcJK",
"6CVVx0_CIqw",
"CNf1xtzIBqy",
"0s9JmxVUDgO",
"jEzMX9BPxwV",
"e94c6ND9XPi",
"uHVOeeylf5",
"_QOOdnBNBlr",
"UnysDwy9Ulj",
"nips_2022_LMuh9bS4tqF",
"n... |
nips_2022_7rcuQ_V2GFg | Parameter-Efficient Masking Networks | A deeper network structure generally handles more complicated non-linearity and performs more competitively. Nowadays, advanced network designs often contain a large number of repetitive structures (e.g., Transformer). They empower the network capacity to a new level but also increase the model size inevitably, which is unfriendly to either model restoring or transferring. In this study, we are the first to investigate the representative potential of fixed random weights with limited unique values by learning diverse masks and introduce the Parameter-Efficient Masking Networks (PEMN). It also naturally leads to a new paradigm for model compression to diminish the model size. Concretely, motivated by the repetitive structures in modern neural networks, we utilize one random initialized layer, accompanied with different masks, to convey different feature mappings and represent repetitive network modules. Therefore, the model can be expressed as \textit{one-layer} with a bunch of masks, which significantly reduce the model storage cost. Furthermore, we enhance our strategy by learning masks for a model filled by padding a given random weights vector. In this way, our method can further lower the space complexity, especially for models without many repetitive architectures. We validate the potential of PEMN learning masks on random weights with limited unique values and test its effectiveness for a new compression paradigm based on different network architectures.
Code is available at \href{https://github.com/yueb17/PEMN}{\textcolor{magenta}{https://github.com/yueb17/PEMN}}. | Accept | The paper studies the use of random weights together with learnable masks. The authors demonstrate that such a training approach for neural networks can reduce model storage requirements and has applications to network compression.
Reviewers appreciated the novelty of the idea and the extensive experiments on various architectures. Adding experiments that go beyond small-scale datasets would further strengthen the quality of the paper and its potential impact.
"w94uAWJ8C5T",
"Bn5fJNWoPYB",
"fhWzUiE4RkI",
"1y27vPXQnef",
"e07jkflbeqH",
"O5N52SwmNvE",
"igp-nWXxdQ",
"15B1oDmbwym",
"Pw5m0j2bGvIw",
"1v2U7GEqJU4",
"82F608bOz38",
"X9B5vtUf9aq",
"QjYOhgragtL",
"x2oKDQhGqI",
"c9KnauzKDEf",
"8Xv4Qt1Mx3S",
"bb9HGoNOeGT",
"rwtzqGmO_mG",
"p5t3mbiwsG... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Dear Reviewer 9nMH,\n\nWe appreciate the reviewer's recognition and support of our work and rebuttal. We also thank the reviewer's suggestions to help us further improve our work for both scientific exploration and compression evaluation. We will prepare them accordingly for our draft to deliver a better final ve... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 7, 6 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ] | [
"O5N52SwmNvE",
"e07jkflbeqH",
"1y27vPXQnef",
"1v2U7GEqJU4",
"cFl87AKce0",
"KsrCUGqWTh",
"15B1oDmbwym",
"8Xv4Qt1Mx3S",
"KsrCUGqWTh",
"A7Csk-v-i6f",
"KsrCUGqWTh",
"KsrCUGqWTh",
"KsrCUGqWTh",
"cFl87AKce0",
"cFl87AKce0",
"KKKSA4fougQ",
"A7Csk-v-i6f",
"A7Csk-v-i6f",
"A7Csk-v-i6f",
"... |
nips_2022_owZdBnUiw2 | Look More but Care Less in Video Recognition | Existing action recognition methods typically sample a few frames to represent each video to avoid the enormous computation, which often limits the recognition performance. To tackle this problem, we propose Ample and Focal Network (AFNet), which is composed of two branches to utilize more frames but with less computation. Specifically, the Ample Branch takes all input frames to obtain abundant information with condensed computation and provides the guidance for Focal Branch by the proposed Navigation Module; the Focal Branch squeezes the temporal size to only focus on the salient frames at each convolution block; in the end, the results of two branches are adaptively fused to prevent the loss of information. With this design, we can introduce more frames to the network but cost less computation. Besides, we demonstrate AFNet can utilize less frames while achieving higher accuracy as the dynamic selection in intermediate features enforces implicit temporal modeling. Further, we show that our method can be extended to reduce spatial redundancy with even less cost. Extensive experiments on five datasets demonstrate the effectiveness and efficiency of our method. | Accept | The proposed architecture, AFNet, is simple yet effective for end-to-end efficient video action recognition.
The paper has a good idea, is well written, and provides relatively solid experimental results to support its claims.
The emergency reviewer gives the highest score (6), while the other reviewers have some concerns with this paper.
Most of the other reviewers give a rating of borderline accept after the authors made further efforts in the rebuttal period. In light of this, the initial recommendation is acceptance. | val | [
"K3TcQ_uPLrl",
"Ek0_5N_fjID",
"DeMn7YGAl85",
"dUbJwatsor",
"G0X-wDDBSY0",
"-I5Akhkt9w",
"_7VvCnAS4n2",
"sWSQaG9pF0P",
"gQNub5B7NZ7",
"b_yACFh9Kl8d",
"flU5hgQuEbu",
"qw9RST14dyqv",
"E12ZZSHtBe",
"IehRtmcVp2q",
"XXQ08nsiMPn_",
"nlew0T_oSPt",
"cT_epe5jl4y",
"nQVeX3ms3mvg",
"lnB_0omv... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"autho... | [
" Dear Reviewer bMpQ:\n\nWe have submitted the final version of our draft just now. Compared with the previous version, we have rewritten the explanations for implicit temporal modeling in Section 3.2 and included the reasons for comparisons with TSN in Table 1.\n\nIn the previous version, we have already added the... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 4, 5 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 5, 4 ] | [
"dUbJwatsor",
"DeMn7YGAl85",
"w4uRoZaLRHb",
"yiTSSBEOtH-",
"BaOjV3hSrh0",
"w4uRoZaLRHb",
"x5Q0wcuusc",
"nips_2022_owZdBnUiw2",
"x5Q0wcuusc",
"flU5hgQuEbu",
"Ulel-9INRLJ",
"w4uRoZaLRHb",
"yiTSSBEOtH-",
"BaOjV3hSrh0",
"cK33T7udapT",
"yiTSSBEOtH-",
"yiTSSBEOtH-",
"BaOjV3hSrh0",
"BaO... |
nips_2022_evRyKOjOx20 | Optimistic Mirror Descent Either Converges to Nash or to Strong Coarse Correlated Equilibria in Bimatrix Games | We show that, for any sufficiently small fixed $\epsilon > 0$, when both players in a general-sum two-player (bimatrix) game employ optimistic mirror descent (OMD) with smooth regularization, learning rate $\eta = O(\epsilon^2)$ and $T = \Omega(poly(1/\epsilon))$ repetitions, either the dynamics reach an $\epsilon$-approximate Nash equilibrium (NE), or the average correlated distribution of play is an $\Omega(poly(\epsilon))$-strong coarse correlated equilibrium (CCE): any possible unilateral deviation does not only leave the player worse, but will decrease its utility by $\Omega(poly(\epsilon))$. As an immediate consequence, when the iterates of OMD are bounded away from being Nash equilibria in a bimatrix game, we guarantee convergence to an \emph{exact} CCE after only $O(1)$ iterations. Our results reveal that uncoupled no-regret learning algorithms can converge to CCE in general-sum games remarkably faster than to NE in, for example, zero-sum games. To establish this, we show that when OMD does not reach arbitrarily close to a NE, the (cumulative) regret of both players is not only negative, but decays linearly with time. Given that regret is the canonical measure of performance in online learning, our results suggest that cycling behavior of no-regret learning algorithms in games can be justified in terms of efficiency. | Accept | This paper proves a new phenomenon about the Optimistic Mirror Descent (OMD) algorithm in two-player general-sum matrix games (bimatrix games): The iterates either converge to an approximate Nash Equilibrium (NE), or converge to a Strong Coarse Correlated Equilibrium (CCE). 
This result links and improves over two existing understandings: (1) Convergence to NE is unlikely to be generally achievable by any efficient algorithm due to its PPAD-hardness; (2) Convergence to approximate CCE is achievable by any no-regret algorithm, but it is unclear whether such CCE is a strong CCE (in the present paper’s sense).
Given the phenomenon is not only new but also fundamental and important to the game theory community, I believe it is worth the attention of NeurIPS audience, and thus recommend acceptance. | train | [
"zbha40zZJKE",
"q5A2XRyDsvX",
"1tkByKJtv4T",
"yrsSr-OG_fS5",
"B1w9F0nWzjW",
"z0ZaFv9FfpO",
"mgyudeuJ8Tq",
"MQNquEsr47C",
"55Bf-MuKmqr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the response and apologize for the delay in the discussions. I think the authors' remarks about the learning rates mostly address my concerns there---In this paper $\\eta=O(\\epsilon^2)$ with $\\epsilon=\\Theta(1)$ still has interesting implications, unlike in standard no-regret bounds whe... | [
-1, -1, -1, -1, -1, -1, 7, 7, 5 ] | [
-1, -1, -1, -1, -1, -1, 4, 3, 4 ] | [
"z0ZaFv9FfpO", "yrsSr-OG_fS5", "z0ZaFv9FfpO", "55Bf-MuKmqr", "MQNquEsr47C", "mgyudeuJ8Tq", "nips_2022_evRyKOjOx20", "nips_2022_evRyKOjOx20", "nips_2022_evRyKOjOx20" ] |
nips_2022_kEPAmGivMD | Deterministic Langevin Monte Carlo with Normalizing Flows for Bayesian Inference | We propose a general purpose Bayesian inference algorithm for expensive likelihoods, replacing the stochastic term in the Langevin equation with a deterministic density gradient term. The particle density is evaluated from the current particle positions using a Normalizing Flow (NF), which is differentiable and has good generalization properties in high dimensions. We take advantage of NF preconditioning and NF based Metropolis-Hastings updates for a faster convergence. We show on various examples that the method is competitive against state of the art sampling methods. | Accept | This paper proposes an inference method that combines gradient ascent and normalizing flows. The idea is that one could, in principle, simulate the deterministic Fokker-Planck equation, but this would require access to the density of the evolving approximating density, which is intractable. Thus, the paper proposes to maintain a set of particles and update a normalizing flow to approximate this density. The resulting procedure is deterministic, with an accuracy that depends on the number of particles and the power of the normalizing flow.
Reviewers agreed this was an interesting approach and the experimental results are promising (albeit fairly low-dimensional). However, there were a few apparent weaknesses: Firstly, there was a lack of clarity about the theoretical guarantees. The authors state that this is all clear in the manuscript, but readers would undoubtedly benefit from a much more centralized/explicit description of what approximations are involved, and under what guarantees the method is claimed to work, which could be put in a single place. In addition, in trying to understand exactly what was done in the experiments, it is difficult to understand several of the details. Algorithm 1 is very helpful in this regard, but it would be beneficial to have a self-contained elaboration of all of the points (perhaps even in an appendix). Finally, the experimental results are all relatively low-dimensional. Often particle methods do not scale gracefully to higher dimensions. (I do not see this as a huge flaw because the method could be useful even if it does not scale, but reader would benefit from evidence on this point either way.)
In the end, however, most of the above issues are issues of clarity and I am willing to trust the authors to fix these before final submission. The paper appears to present a novel idea and the community would benefit from seeing it and discussing it. | test | [
"OzXXsVkioUf",
"Q_9eoGgDRL",
"mBC_HqBFlHH",
"CfyleVPIyTV",
"EPmfgws6WKL",
"aqR-iP9VvZv",
"Evvy3TiGv-4",
"l1sD92UFFL",
"vPIoQV978EN",
"KnlnpQk4OQA"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the further explanation of the comments. We want to emphasize that in our original response we were not trying to discredit the review, and it was not our intention to be defensive: we simply failed to understand the specific points the reviewer was raising. For example, it was not clear... | [
-1, -1, -1, -1, -1, -1, -1, 7, 6, 4 ] | [
-1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ] | [
"mBC_HqBFlHH", "aqR-iP9VvZv", "CfyleVPIyTV", "EPmfgws6WKL", "KnlnpQk4OQA", "vPIoQV978EN", "l1sD92UFFL", "nips_2022_kEPAmGivMD", "nips_2022_kEPAmGivMD", "nips_2022_kEPAmGivMD" ] |
nips_2022_CZwh1XdAhNv | Uncoupled Learning Dynamics with $O(\log T)$ Swap Regret in Multiplayer Games | In this paper we establish efficient and \emph{uncoupled} learning dynamics so that, when employed by all players in a general-sum multiplayer game, the \emph{swap regret} of each player after $T$ repetitions of the game is bounded by $O(\log T)$, improving over the prior best bounds of $O(\log^4 (T))$. At the same time, we guarantee optimal $O(\sqrt{T})$ swap regret in the adversarial regime as well. To obtain these results, our primary contribution is to show that when all players follow our dynamics with a \emph{time-invariant} learning rate, the \emph{second-order path lengths} of the dynamics up to time $T$ are bounded by $O(\log T)$, a fundamental property which could have further implications beyond near-optimally bounding the (swap) regret. Our proposed learning dynamics combine in a novel way \emph{optimistic} regularized learning with the use of \emph{self-concordant barriers}. Further, our analysis is remarkably simple, bypassing the cumbersome framework of higher-order smoothness recently developed by Daskalakis, Fishelson, and Golowich (NeurIPS'21). | Accept | Given the unanimous support from the reviewers, to which I genuinely agree, the paper is recommended for acceptance. I encourage the authors to pay close attention to the reviewers comments and suggestions (and in particular to the comments of Reviewer Wcat) when working on their final version. | train | [
"4YRAzl0L67",
"IpDjJAovtgzV",
"yKW4kNEZkO2",
"3Ku18-sDD_p",
"yXnmIfighWQ",
"MYultwGy67",
"u62oa6K6FmL"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response.\n\n--- \"While the regret bound of Daskalakis et al. (2021) depends logarithmically on the number of actions...\"\n\nIndeed I missed the fact that for swap regret, the rates must scale polynomially with the number of actions as opposed to external regret - so I retract my c... | [
-1, -1, -1, -1, 8, 8, 8 ] | [
-1, -1, -1, -1, 4, 3, 4 ] | [
"IpDjJAovtgzV", "u62oa6K6FmL", "MYultwGy67", "yXnmIfighWQ", "nips_2022_CZwh1XdAhNv", "nips_2022_CZwh1XdAhNv", "nips_2022_CZwh1XdAhNv" ] |
nips_2022_fT9W53lLxNS | SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos | The visual world can be parsimoniously characterized in terms of distinct entities with sparse interactions. Discovering this compositional structure in dynamic visual scenes has proven challenging for end-to-end computer vision approaches unless explicit instance-level supervision is provided. Slot-based models leveraging motion cues have recently shown great promise in learning to represent, segment, and track objects without direct supervision, but they still fail to scale to complex real-world multi-object videos. In an effort to bridge this gap, we take inspiration from human development and hypothesize that information about scene geometry in the form of depth signals can facilitate object-centric learning. We introduce SAVi++, an object-centric video model which is trained to predict depth signals from a slot-based video representation. By further leveraging best practices for model scaling, we are able to train SAVi++ to segment complex dynamic scenes recorded with moving cameras, containing both static and moving objects of diverse appearance on naturalistic backgrounds, without the need for segmentation supervision. Finally, we demonstrate that by using sparse depth signals obtained from LiDAR, SAVi++ is able to learn emergent object segmentation and tracking from videos in the real-world Waymo Open dataset. | Accept | Three out of four reviewers provided positive reviews and scores for this submission. They agreed that SAVI++ makes meaningful improvements over a previously proposed SAVI model. Importantly, while most past approaches evaluate on synthetic data, this submission evaluates the proposed model on a real world dataset. The proposed model clearly improves over the baseline and a clear ablation analysis shows where the improvements come from.
One reviewer had concerns about the evaluation using just one real world dataset. This was also brought up by other reviewers, who mentioned that the Waymo dataset has less diversity and fewer videos than others. While a more thorough evaluation would make this a stronger submission, the leap from synthetic evaluations to real world evaluations in this line of research is notable and sets the bar for future work. I also note, based on the discussion, that the employed dataset is not trivial and has several challenges for the model.
Another concern by the reviewer was about missing baselines. The authors did provide additional baselines in their response. While these baselines do not exactly match the ones requested by the reviewer, I think they provide good evidence that the proposed method is able to employ the depth signal effectively.
Overall, this paper makes solid progress on the problem, provides value to readers, and shows strong results on a real-world dataset. Given these reasons, I recommend acceptance. | test | [
"EmkN7fQ7Yw",
"Qow2QeN1us5",
"Y1laGmH7ck",
"EzxhNX1JH3dh",
"QAOh2co0qq5",
"aqJYLX2V97n",
"8JTWl4RBcbV0",
"U9sJnnbyeYK",
"C2YjG_xK6w",
"PReW7YEIU_",
"cSZpU80SlJ6",
"xMHjW67dp1x",
"chFz6KmAN3w",
"dBZjrMSEQYy",
"y8LWgMxvfra",
"OoSNH64uIB6",
"Sbjqw-H1l6"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the authors rebuttal as well as the additional experiments. I thank the authors for the detailed response.\n\nUnfortunately, central issue of comparison against weak baselines is not resolved. \n\nIn particular, answers to Cons 4. (i)-(v) remain unsatisfactory to me. Dataset choice is not strong, and ... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 3 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 3 ] | [
"aqJYLX2V97n",
"chFz6KmAN3w",
"C2YjG_xK6w",
"U9sJnnbyeYK",
"aqJYLX2V97n",
"Sbjqw-H1l6",
"U9sJnnbyeYK",
"OoSNH64uIB6",
"y8LWgMxvfra",
"cSZpU80SlJ6",
"dBZjrMSEQYy",
"chFz6KmAN3w",
"nips_2022_fT9W53lLxNS",
"nips_2022_fT9W53lLxNS",
"nips_2022_fT9W53lLxNS",
"nips_2022_fT9W53lLxNS",
"nips_... |
nips_2022_mXP-qQcYCBN | AutoLink: Self-supervised Learning of Human Skeletons and Object Outlines by Linking Keypoints | Structured representations such as keypoints are widely used in pose transfer, conditional image generation, animation, and 3D reconstruction. However, their supervised learning requires expensive annotation for each target domain. We propose a self-supervised method that learns to disentangle object structure from the appearance with a graph of 2D keypoints linked by straight edges. Both the keypoint location and their pairwise edge weights are learned, given only a collection of images depicting the same object class. The resulting graph is interpretable, for example, AutoLink recovers the human skeleton topology when applied to images showing people. Our key ingredients are i) an encoder that predicts keypoint locations in an input image, ii) a shared graph as a latent variable that links the same pairs of keypoints in every image, iii) an intermediate edge map that combines the latent graph edge weights and keypoint locations in a soft, differentiable manner, and iv) an inpainting objective on randomly masked images. Although simpler, AutoLink outperforms existing self-supervised methods on the established keypoint and pose estimation benchmarks and paves the way for structure-conditioned generative models on more diverse datasets. Project website: https://xingzhehe.github.io/autolink/. | Accept | Building from works on unsupervised keypoint discovery for a domain of 2D images, this work proposes to jointly learn a skeletal structure that links discovered keypoints, and further proposes a novel image masking strategy for extracting limited background information, to force the keypoints to capture maximum information about the scene. The evaluations span a variety of datasets, with quantitative numbers on human face and body pose datasets, and show improvements from the proposed approach. 
A novel idea, executed well, and of interest to many at NeurIPS. Congratulations to the authors, and please fix visualization issues etc. before camera-ready / next revision. | train | [
"yLkmvohHq7",
"oJGXx7cSFS",
"i0mCanM_h_E",
"5jrF8T0H7bj",
"6GDhOao1pUh",
"Ycb66c_5eaD",
"oLOAK7gtEp",
"0suLRjc60fT",
"uGTUsVnK5Q-",
"6EGmokjWiS8",
"ahBXuUUp-i2",
"rTpSZohx5mN"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I am happy with the response and have no further concerns about this paper.\nI will keep my original rating.",
" ### For Pascal VOC evaluation one could use the Pascal-part segmentation dataset?\nThank you for the valuable pointer. We had not worked with Pascal VOC before and concluded from the foreground-backg... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ] | [
"0suLRjc60fT", "i0mCanM_h_E", "oLOAK7gtEp", "6GDhOao1pUh", "Ycb66c_5eaD", "rTpSZohx5mN", "ahBXuUUp-i2", "6EGmokjWiS8", "nips_2022_mXP-qQcYCBN", "nips_2022_mXP-qQcYCBN", "nips_2022_mXP-qQcYCBN", "nips_2022_mXP-qQcYCBN" ] |
nips_2022_f3zNgKga_ep | Video Diffusion Models | Generating temporally coherent high fidelity video is an important milestone in generative modeling research. We make progress towards this milestone by proposing a diffusion model for video generation that shows very promising initial results. Our model is a natural extension of the standard image diffusion architecture, and it enables jointly training from image and video data, which we find to reduce the variance of minibatch gradients and speed up optimization. To generate long and higher resolution videos we introduce a new conditional sampling technique for spatial and temporal video extension that performs better than previously proposed methods. We present the first results on a large text-conditioned video generation task, as well as state-of-the-art results on established benchmarks for video prediction and unconditional video generation. Supplementary material is available at https://video-diffusion.github.io/. | Accept | This paper proposes a diffusion model for video capable of generating long and high-resolution videos. Diffusion models have generated some more excitement around generative models as well, so the paper is well-timed. The reviewers had a few concerns regarding additional experiments and clarifications, and it appears that the authors have satisfied those concerns. Overall, the reviews are positive, and there was a decent amount of interaction between the reviewers and authors, though much of the discussion was straightforward and didn't seem to require a good deal of discussion.
I therefore recommend acceptance for the paper, based on the clear consensus and on the discussions and revision satisfying most outstanding concerns.
Overall I have no recommendations for the reviewers based on this paper. The paper might have been very straightforward to read, and given the consensus I think this was a relatively easy review for everyone (neutral reviewer workload for everyone, though maybe due to the paper being easy). | train | [
"nQnD9Vc8QV",
"_7BAkJHfZE7",
"7htANh6dWuQ",
"dSibyDLkyKw",
"czre6QHJmT",
"lcVnNZPoZqA",
"kTRdJcegaEt",
"s8_mHAawBes",
"hAUAZ1g2T7e"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response! \n1. I believe co-train/co-finetune is well studied in image and video recognition domain so directly extending it into video generation sounds a bit weak to me. Though, the guidance method sounds interesting and thanks the author[s] for the contribution.\n2. Yes, a more comprehensive bac... | [
-1, -1, -1, -1, -1, 5, 6, 9, 7 ] | [
-1, -1, -1, -1, -1, 3, 2, 5, 5 ] | [
"dSibyDLkyKw", "hAUAZ1g2T7e", "s8_mHAawBes", "kTRdJcegaEt", "lcVnNZPoZqA", "nips_2022_f3zNgKga_ep", "nips_2022_f3zNgKga_ep", "nips_2022_f3zNgKga_ep", "nips_2022_f3zNgKga_ep" ] |
nips_2022_ZJe-XahpyBf | UDC: Unified DNAS for Compressible TinyML Models for Neural Processing Units | Deploying TinyML models on low-cost IoT hardware is very challenging, due to limited device memory capacity. Neural processing unit (NPU) hardware address the memory challenge by using model compression to exploit weight quantization and sparsity to fit more parameters in the same footprint. However, designing compressible neural networks (NNs) is challenging, as it expands the design space across which we must make balanced trade-offs. This paper demonstrates Unified DNAS for Compressible (UDC) NNs, which explores a large search space to generate state-of-the-art compressible NNs for NPU. ImageNet results show UDC networks are up to 3.35x smaller (iso-accuracy) or 6.25% more accurate (iso-model size) than previous work. | Accept | In this paper, the authors present a new way to obtain compressible neural networks to fit on resource-constrained NPU-based hardware.
Initial reviews were mixed, but the authors successfully managed to respond to reviewers' concerns during the rebuttal period. Several reviewers pointed out clarity issues, but (1) some of these issues came from reviewers not reading the paper carefully enough, and (2) others were properly addressed by the authors. I also want to acknowledge that the most negative review is a short one, falling below NeurIPS quality standards.
After discussion, all reviewers are leaning towards acceptance, agreeing that the paper successfully demonstrates the superiority of the proposed method vs. existing relevant baselines. As a result, I also recommend acceptance. | train | [
"eu4np2yyCNR",
"HkcVUemK2Uc",
"ThGfuxsDdlj",
"wtKSixofaed",
"WGs0Fw5kqK",
"0KGklds9VVk",
"tMAjRF7DBP6",
"DnDe1vmpmUF",
"vcpuK5qYbi",
"LfOBDuLi1rS",
"TsmjiWKhlgl",
"klLT4YWJSFv"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Looking at the authors' response to my and other reviewers' comments. I am happy to see that my concerns are adequately addressed. I have updated my score.",
" I apologize for missing Figure 4 in the main text. Figure 4 indeed justifies the authors' claims. I update my scores accordingly. I still think that the... | [
-1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ] | [
"tMAjRF7DBP6", "DnDe1vmpmUF", "wtKSixofaed", "klLT4YWJSFv", "0KGklds9VVk", "TsmjiWKhlgl", "LfOBDuLi1rS", "vcpuK5qYbi", "nips_2022_ZJe-XahpyBf", "nips_2022_ZJe-XahpyBf", "nips_2022_ZJe-XahpyBf", "nips_2022_ZJe-XahpyBf" ] |
nips_2022_vkGk2HI8oOP | Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias | It has become cognitive inertia to employ cross-entropy loss function in classification related tasks. In the untargeted attacks on graph structure, the gradients derived from the attack objective are the attacker's basis for evaluating a perturbation scheme. Previous methods use negative cross-entropy loss as the attack objective in attacking node-level classification models. However, the suitability of the cross-entropy function for constructing the untargeted attack objective has yet been discussed in previous works. This paper argues about the previous unreasonable attack objective from the perspective of budget allocation. We demonstrate theoretically and empirically that negative cross-entropy tends to produce more significant gradients from nodes with lower confidence in the labeled classes, even if the predicted classes of these nodes have been misled. To free up these inefficient attack budgets, we propose a simple attack model for untargeted attacks on graph structure based on a novel attack objective which generates unweighted gradients on graph structures that are not affected by the node confidence. By conducting experiments in gray-box poisoning attack scenarios, we demonstrate that a reasonable budget allocation can significantly improve the effectiveness of gradient-based edge perturbations without any extra hyper-parameter. | Accept | The authors study graph modification attack (through editing the edges) in the setting of untargeted poisoning and show that negative cross entropy is not a good candidate for the attack loss. Instead they propose a novel attack objective to study the problem.
The reviewers found the topic timely and of interest to the community. They felt that the theoretical and empirical analysis could be improved to validate the claims, but overall the positives seemed to outweigh the negatives perceived by the reviewers.
"dApOlZBBA-t",
"R2SXzCfwge",
"fIG7rhHMFSD",
"r9HfYStbN7",
"ynukfXIPmMX",
"GuFwtWroIOo",
"FtQrDa1Jl-2",
"Si7LQjz4bLG",
"1tfPnJMXTPv",
"9Nlpqva0GY8",
"cCRIjMCTWHc",
"L505chbc8oE",
"mf0Cfk-EY-1",
"G2OLDLlcjX",
"Dfr36_U5oKM",
"UYDqGwdLDj7",
"RS3lM2EfeqP",
"ZxUXaK313sV",
"NIHICjMgGgM"... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"... | [
" I thank the authors for the late reply. \n\nFirst, my point isn't about injection or modification, instead, I am concerned about the theoretical part in the current version of the paper. The authors just show an interesting discovery, which however isn't shown to imply any **rigorous conclusions**. \n\nNo matter ... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ] | [
"R2SXzCfwge",
"ynukfXIPmMX",
"FtQrDa1Jl-2",
"FtQrDa1Jl-2",
"GuFwtWroIOo",
"FtQrDa1Jl-2",
"Si7LQjz4bLG",
"1tfPnJMXTPv",
"9Nlpqva0GY8",
"cCRIjMCTWHc",
"L505chbc8oE",
"UYDqGwdLDj7",
"G2OLDLlcjX",
"Wsle5sxb8Tx",
"UYDqGwdLDj7",
"RS3lM2EfeqP",
"ZxUXaK313sV",
"NIHICjMgGgM",
"gz_ITAEdvni... |
nips_2022_319xcX5qIcO | Signal Recovery with Non-Expansive Generative Network Priors | We study compressive sensing with a deep generative network prior. Initial theoretical guarantees for efficient recovery from compressed linear measurements have been developed for signals in the range of a ReLU network with Gaussian weights and logarithmic expansivity: that is when each layer is larger than the previous one by a logarithmic factor. It was later shown that constant expansivity is sufficient for recovery. It has remained open whether the expansivity can be relaxed, allowing for networks with contractive layers (as often the case of real generators). In this work we answer this question, proving that a signal in the range of a Gaussian generative network can be recovered from few linear measurements provided that the width of the layers is proportional to the input layer size (up to log factors). This condition allows the generative network to have contractive layers. Our result is based on showing that Gaussian matrices satisfy a matrix concentration inequality which we term Range Restricted Weight Distribution Condition (R2WDC) and which weakens the Weight Distribution Condition (WDC) upon which previous theoretical guarantees were based. The WDC has also been used to analyze other signal recovery problems with generative network priors. By replacing the WDC with the R2WDC, we are able to extend previous results for signal recovery with expansive generative network priors to non-expansive ones. We discuss these extensions for phase retrieval, denoising, and spiked matrix recovery. | Accept | This paper focuses on theoretically studying signal reconstruction with non-expansive generative networks. In short the authors show that with a random Gaussian generator, any signal in its range can be reconstructed from Gaussian measurements. This holds as long as the number of measurements and the width of all layers are proportional to the size of input layer. 
Compared to prior work this paper removes the requirement of expansion of the layers. Most reviewers thought the paper was interesting and thought the improved theoretical analysis was nice. The reviewers also raised a variety of technical concerns most of which was addressed during the rebuttal. I concur with the reviewers and think is a nice contribution despite some flaws and am recommending acceptance. I urge the authors to follow the details comments of the reviewers to improve their manuscript for the camera ready version of the paper. | train | [
"kCcSeKyFSVO",
"8FQWEkpx6k7",
"Toh427gvA9s",
"lm3j_JE8Azow",
"dr0IarwpLZi",
"3lE50hOp7FU",
"IDS8RczrKjO",
"i_kGqb0ES5U",
"vrkbdJl9W1M"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nbelow are the answer to your questions\n\n- Previous theoretical results proving $m = \\widetilde{\\Omega}(k)$ lower-bounds were given in [Ra] and [Rb]. Notice that studying (sharp) information-theoretic limits is beyond the scope of this paper, which instead is devoted to analyzing the performa... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
2,
2,
5
] | [
"8FQWEkpx6k7",
"Toh427gvA9s",
"3lE50hOp7FU",
"vrkbdJl9W1M",
"IDS8RczrKjO",
"nips_2022_319xcX5qIcO",
"nips_2022_319xcX5qIcO",
"nips_2022_319xcX5qIcO",
"nips_2022_319xcX5qIcO"
] |
nips_2022_1ItkxrZP0rg | A Spectral Approach to Item Response Theory | The Rasch model is one of the most fundamental models in item response theory and has wide-ranging applications from education testing to recommendation systems. In a universe with $n$ users and $m$ items, the Rasch model assumes that the binary response $X_{li} \in \{0,1\}$ of a user $l$ with parameter $\theta^*_l$ to an item $i$ with parameter $\beta^*_i$ (e.g., a user likes a movie, a student correctly solves a problem) is distributed as $\mathbb{P}(X_{li}=1) = 1/(1 + \exp(-(\theta^*_l - \beta^*_i)))$. In this paper, we propose a new item estimation algorithm for this celebrated model (i.e., to estimate $\beta^*$). The core of our algorithm is the computation of the stationary distribution of a Markov chain defined on an item-item graph. We complement our algorithmic contributions with finite-sample error guarantees, the first of their kind in the literature, showing that our algorithm is consistent and enjoys favorable optimality properties. We discuss practical modifications to accelerate and robustify the algorithm that practitioners can adopt. Experiments on synthetic and real-life datasets, ranging from small education testing datasets to large recommendation systems datasets show that our algorithm is scalable, accurate, and competitive with the most commonly used methods in the literature. | Accept | This is a strong paper with interesting theoretical results and important practical contributions (much faster parameter learning than prior methods with little performance drop-off). All reviewers agreed it was above the bar for acceptance to NeurIPS. | train | [
"Z1PfbV0gnc",
"uCMtcM9E-Ek",
"-ihXeFBct6",
"_y8E0HDOuey",
"LfT-WNCQuN",
"dnmnq8Tyykz",
"3qTQmnxzSWh",
"kQYktGNVG-r",
"cvCM9-W1LV3",
"QyiB-XjL4pi",
"czKBB9agFID",
"iCZm8ZowkZR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. My concerns are addressed.\n\nI encourage the authors to describe the similarities and differences in the proof technique as compare to the Rank Centrality paper. This will make the paper stronger, not weaker.",
" Thank you very much for the targeted and professional response. I thi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
4
] | [
"3qTQmnxzSWh",
"dnmnq8Tyykz",
"kQYktGNVG-r",
"iCZm8ZowkZR",
"czKBB9agFID",
"QyiB-XjL4pi",
"cvCM9-W1LV3",
"nips_2022_1ItkxrZP0rg",
"nips_2022_1ItkxrZP0rg",
"nips_2022_1ItkxrZP0rg",
"nips_2022_1ItkxrZP0rg",
"nips_2022_1ItkxrZP0rg"
] |
nips_2022_1pHC-yZfaTK | Regret Bounds for Information-Directed Reinforcement Learning | Information-directed sampling (IDS) has revealed its potential as a data-efficient algorithm for reinforcement learning (RL). However, theoretical understanding of IDS for Markov Decision Processes (MDPs) is still limited. We develop novel information-theoretic tools to bound the information ratio and cumulative information gain about the learning target. Our theoretical results shed light on the importance of choosing the learning target such that the practitioners can balance the computation and regret bounds. As a consequence, we derive prior-free Bayesian regret bounds for vanilla-IDS which learns the whole environment under tabular finite-horizon MDPs. In addition, we propose a computationally-efficient regularized-IDS that maximizes an additive form rather than the ratio form and show that it enjoys the same regret bound as vanilla-IDS. With the aid of rate-distortion theory, we improve the regret bound by learning a surrogate, less informative environment. Furthermore, we extend our analysis to linear MDPs and prove similar regret bounds for Thompson sampling as a by-product. | Accept | This paper has been well-received by the reviewers already in the initial round, and the reviewers were all happy with the authors' responses. The updates already made to the manuscript clearly showed the commitment of the authors to take all the reviewers' comments into account for the final version. After some discussion, all reviewers agreed that the paper should be accepted for publication at NeurIPS 2022. I encourage the authors to finalize the promised changes for the camera-ready version, and in particular complete the preliminary experimental section provided in the revision. | train | [
"n62OaB0bG3q",
"mSrDnc2lJxv",
"aUPGN2EP48g",
"0HY4amrm3sx",
"hd0e1rWmYWA",
"C71a-jPvH6I1",
"P7t852WOBEp",
"1UYlzOpEEIm",
"qk3eLccl7wV",
"CQo_Bb8d4rH",
"XRsSRMb0mmo",
"XSxjFseQzna",
"vZf8g-tCFoW",
"jg0D8CTuM6F",
"pOBAWa5g96L",
"Tpn0-rrxq6w"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my comments. ",
" I appreciate the authors's response, which address all of my questions. Sorry for the late reply. And I am actually satisfied with the rebuttal and especially the newly added algorithm and its implementaton. I have raised my score for this work. ",
" Thanks a lot for... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"XSxjFseQzna",
"XRsSRMb0mmo",
"C71a-jPvH6I1",
"P7t852WOBEp",
"jg0D8CTuM6F",
"qk3eLccl7wV",
"CQo_Bb8d4rH",
"nips_2022_1pHC-yZfaTK",
"Tpn0-rrxq6w",
"pOBAWa5g96L",
"jg0D8CTuM6F",
"vZf8g-tCFoW",
"nips_2022_1pHC-yZfaTK",
"nips_2022_1pHC-yZfaTK",
"nips_2022_1pHC-yZfaTK",
"nips_2022_1pHC-yZfa... |
nips_2022_VOPiHQUevh5 | TUSK: Task-Agnostic Unsupervised Keypoints | Existing unsupervised methods for keypoint learning rely heavily on the assumption that a specific keypoint type (e.g. elbow, digit, abstract geometric shape) appears only once in an image. This greatly limits their applicability, as each instance must be isolated before applying the method—an issue that is never discussed or evaluated. We thus propose a novel method to learn Task-agnostic, UnSupervised Keypoints (TUSK) which can deal with multiple instances. To achieve this, instead of the commonly-used strategy of detecting multiple heatmaps, each dedicated to a specific keypoint type, we use a single heatmap for detection, and enable unsupervised learning of keypoint types through clustering. Specifically, we encode semantics into the keypoints by teaching them to reconstruct images from a sparse set of keypoints and their descriptors, where the descriptors are forced to form distinct clusters in feature space around learned prototypes. This makes our approach amenable to a wider range of tasks than any previous unsupervised keypoint method: we show experiments on multiple-instance detection and classification, object discovery, and landmark detection—all unsupervised—with performance on par with the state of the art, while also being able to deal with multiple instances. | Accept | The meta reviewer has carefully read the paper, reviews, rebuttals, and discussions. The authors did a good job in rebuttal. The additional results and clarifications addressed the reviewers' concerns. The manuscript crosses the acceptance bar. The authors are still suggested to revise the paper considering the reviewers' comments. | train | [
"oB5CtDBKonF",
"HX84vGiyX0NS",
"rg3jK_yMf00",
"lqpGx88ocfs",
"ijt_Of1f3O3j",
"K8MlQNSrqWF",
"E8IdISoyIto",
"mlc9fsvKUDe"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As the author-reviewer discussion period will end on Tuesday, we would like to know if our response answered the reviewers' concerns regarding the paper. Please let us know if you have any further questions, and we will do our best to reply by tomorrow.",
" We thank all reviewers for their input. We have addres... | [
-1,
-1,
-1,
-1,
-1,
5,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"HX84vGiyX0NS",
"nips_2022_VOPiHQUevh5",
"nips_2022_VOPiHQUevh5",
"nips_2022_VOPiHQUevh5",
"nips_2022_VOPiHQUevh5",
"nips_2022_VOPiHQUevh5",
"nips_2022_VOPiHQUevh5",
"nips_2022_VOPiHQUevh5"
] |
nips_2022_mmzkqUKNVm | Semantic Diffusion Network for Semantic Segmentation | Precise and accurate predictions over boundary areas are essential for semantic segmentation. However, the commonly used convolutional operators tend to smooth and blur local detail cues, making it difficult for deep models to generate accurate boundary predictions. In this paper, we introduce an operator-level approach to enhance semantic boundary awareness, so as to improve the prediction of the deep semantic segmentation model. Specifically, we formulate the boundary feature enhancement process as an anisotropic diffusion process.
We propose a novel learnable approach called semantic diffusion network (SDN) for approximating the diffusion process, which contains a parameterized semantic difference convolution operator followed by a feature fusion module and constructs a differentiable mapping from original backbone features to advanced boundary-aware features. The proposed SDN is an efficient and flexible module that can be plugged into existing encoder-decoder segmentation models. Extensive experiments show that our approach can achieve consistent improvements over several typical state-of-the-art segmentation baseline models on challenging public benchmarks. | Accept | This submission got a mixed rating: 1 borderline reject, 2 weak accepts and 1 accept.
Most of the concerns lie in the explanations on the details and experimental comparison with certain baselines/variants. The authors addressed them well by providing additional experiment results in their response.
The remaining concern from the reviewer who gave a borderline reject lies in the theoretical justification of the proposed operation. The authors managed to provide a theoretical interpretation from the viewpoint of the diffusion process, which partially addresses the reviewer's question.
Overall, all the reviewers agree that this submission introduces a simple and effective method for the segmentation field. The effectiveness of the proposed method has been validated via extensive experiments. The performance improvement is significant. The manuscript is written clearly. The contribution is sufficient.
Based on the above considerations, AC recommends accept for this submission. | test | [
"GsXGEcqXnI",
"P9P3sVR9onQ",
"HpTr6xyASsE",
"GL3rN-oV06",
"LIiWVtFrerV",
"--34I9uRzb_",
"tDNmn0t9Sn3",
"aymS7T9leO7",
"aVi__MHx_7d",
"_bZcgvHBOWr",
"fBol6WxEeRF",
"0yAo3U9V-X",
"f8Lm8YWO-NA",
"lHqnuTWkErG",
"M1dTtAZBlh",
"1oXkI9GaXa",
"stKc-4GCeQWN",
"aup8-JJn_Z0",
"Y75Q0hcLXEG",... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"... | [
" Dear reviewer:\n\nWe feel very honored and glad to hear from you! Your comments have helped our paper become stronger. I would like to extend my sincere thanks to you!\n\n**About the code:** The codebase involved in this paper, training config, and model checkpoint will be public!\n\n**About rating score:** We no... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"HpTr6xyASsE",
"HpTr6xyASsE",
"PbmvXh5ylsT2",
"--34I9uRzb_",
"aymS7T9leO7",
"O6j1khky9IU",
"aVi__MHx_7d",
"T1hWTRYNTIE",
"lHqnuTWkErG",
"vS9ZHl1gGB",
"vS9ZHl1gGB",
"47bxz-N6N9O",
"XO1wjhUFHEj",
"RxJH8D_mN7z",
"RxJH8D_mN7z",
"RxJH8D_mN7z",
"vS9ZHl1gGB",
"vS9ZHl1gGB",
"vS9ZHl1gGB",... |
nips_2022_q__FmUtPZd9 | Social-Inverse: Inverse Decision-making of Social Contagion Management with Task Migrations | Considering two decision-making tasks $A$ and $B$, each of which wishes to compute an effective decision $Y$ for a given query $X$, can we solve task $B$ by using query-decision pairs $(X, Y)$ of $A$ without knowing the latent decision-making model? Such problems, called inverse decision-making with task migrations, are of interest in that the complex and stochastic nature of real-world applications often prevents the agent from completely knowing the underlying system. In this paper, we introduce such a new problem with formal formulations and present a generic framework for addressing decision-making tasks in social contagion management. On the theory side, we present a generalization analysis for justifying the learning performance of our framework. In empirical studies, we perform a sanity check and compare the presented method with other possible learning-based and graph-based methods. We have acquired promising experimental results, confirming for the first time that it is possible to solve one decision-making task by using the solutions associated with another one. | Accept | Strengths:
* novel formulation for task migration in social management tasks
* theoretical analysis: generalization bound
* results shed light on certain possible design choices
* adequate empirical evaluation on simulated data
Weaknesses:
* formalization may be too restrictive to capture realistic settings (e.g., observing only one type of task)
* connections to some related literature not clearly established
* some concerns regarding baselines used in experiments, or lack thereof
* societal implication not discussed or properly acknowledged in authors’ response **(see ethics section below)**
Summary:
All reviewers agree that the proposed problem of task migration for network diffusion is interesting, and that the proposed formal framework is elegant; some reviewers, however, found the framework to be somewhat restrictive in its ability to relate to real-world diffusive processes. Theoretical results seem sound, but to fully appreciate their novelty, it would help if the authors provide more details and make concrete connections to the literature on which they draw. The authors’ responses to reviewers’ questions were helpful in this regard. Experimental results look encouraging; nonetheless, several reviewers raised concerns regarding the adequacy of certain baselines, or the lack of comparison to methods that include explicit diffusion modeling. Authors’ responses in this regard were only partially satisfactory.
**Ethics:**
Reviews by two designated ethics reviewers strongly suggest that the paper frames the decision task it studies in a very one-sided manner, presenting mostly possible benefits (e.g., minimizing the spread of misinformation) and lacking to acknowledge for its evident risks (e.g., maximizing the spread of misinformation). The dual objectives studied—diffusion enhancement (Problem 1) and containment (Problem 2)—make it very clear that virtually every concrete task in this space has potential dual use. **Unfortunately, the authors do not discuss this in the paper, nor were their responses in the discussion appeasing.** Due to this, acceptance is made conditional on the authors’ ability to clearly convey in their writing, and *in the early parts of the paper*, the risks that naturally follow from their proposed work, as they relate to the societally-significant applications they discuss.
| train | [
"v6NF7-09kW",
"7jl01F0qUR0",
"fJZr3E8JJXP",
"kk980vE8zBJ",
"VC-IZ6BDak18",
"rEH2gZhkGLR",
"y0ffXENGzUW",
"1MEFkrn505M",
"unOKGH_oOhl",
"zE_Cw3hc8NP",
"padG3XRFMr",
"7KI2qgB2ds"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Authors,\n\nThank you very much for your in depth response I am satisfied with your response and have upped my score as a result.\nI think the main piece of intuition I'd like to see added would be for the stochastic diffusion model defined in 2.1, and the various tasks defined in 2.2. As someone that was un... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
4,
3
] | [
"1MEFkrn505M",
"y0ffXENGzUW",
"nips_2022_q__FmUtPZd9",
"nips_2022_q__FmUtPZd9",
"zE_Cw3hc8NP",
"7KI2qgB2ds",
"padG3XRFMr",
"unOKGH_oOhl",
"nips_2022_q__FmUtPZd9",
"nips_2022_q__FmUtPZd9",
"nips_2022_q__FmUtPZd9",
"nips_2022_q__FmUtPZd9"
] |
nips_2022_LCIZmSw1DuE | Fair and Optimal Decision Trees: A Dynamic Programming Approach | Interpretable and fair machine learning models are required for many applications, such as credit assessment and in criminal justice. Decision trees offer this interpretability, especially when they are small. Optimal decision trees are of particular interest because they offer the best performance possible for a given size. However, state-of-the-art algorithms for fair and optimal decision trees have scalability issues, often requiring several hours to find such trees even for small datasets. Previous research has shown that dynamic programming (DP) performs well for optimizing decision trees because it can exploit the tree structure. However, adding a global fairness constraint to a DP approach is not straightforward, because the global constraint violates the condition that subproblems should be independent. We show how such a constraint can be incorporated by introducing upper and lower bounds on final fairness values for partial solutions of subproblems, which enables early comparison and pruning. Our results show that our model can find fair and optimal trees several orders of magnitude faster than previous methods, and now also for larger datasets that were previously beyond reach. Moreover, we show that with this substantial improvement our method can find the full Pareto front in the trade-off between accuracy and fairness. | Accept | I recommend acceptance due to the strengths identified by the positive reviews, despite some doubts expressed by more negative reviews. This paper modifies existing dynamic programming approaches for learning decision trees to accommodate non-monotonic constraints, motivated in particular by group fairness. Experiment show that this approach is orders of magnitude faster than existing alternatives.
The main unresolved reviewer concern is novelty---how much of a contribution is the ability to handle non-monotonic constraints? If this paper were exclusively targeted at the decision tree community, this would be an important concern. However, I view the significance of this contribution in terms of making decision trees a computationally tractable option for designing fair decisions. Interpretability is an important concern in many domains where fairness is an issue, and thus this is an important contribution. | train | [
"n6ec7S38Ihv",
"Bpspo1x4WvY",
"WWKVq1GSRO1",
"3Gym-fmgA35",
"JhIAK2dt_i",
"Rpqwfz76Hs",
"mnCW8j86YEl",
"MGhYree5Sn0",
"d47B1ZlR4Vx"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed and thorough response, you have clarified my doubts.",
" Dear reviewer,\n\nThank you for your review of our work. Thank you also for your detailed feedback on some of our writing. Based on your comments we have been able to improve the clarity of our writing and explain better how thi... | [
-1,
-1,
-1,
-1,
-1,
4,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
1,
4,
3
] | [
"WWKVq1GSRO1",
"d47B1ZlR4Vx",
"MGhYree5Sn0",
"mnCW8j86YEl",
"Rpqwfz76Hs",
"nips_2022_LCIZmSw1DuE",
"nips_2022_LCIZmSw1DuE",
"nips_2022_LCIZmSw1DuE",
"nips_2022_LCIZmSw1DuE"
] |
nips_2022_v6CqBssIwYw | Instance-Based Uncertainty Estimation for Gradient-Boosted Regression Trees | Gradient-boosted regression trees (GBRTs) are hugely popular for solving tabular regression problems, but provide no estimate of uncertainty. We propose Instance-Based Uncertainty estimation for Gradient-boosted regression trees (IBUG), a simple method for extending any GBRT point predictor to produce probabilistic predictions. IBUG computes a non-parametric distribution around a prediction using the $k$-nearest training instances, where distance is measured with a tree-ensemble kernel. The runtime of IBUG depends on the number of training examples at each leaf in the ensemble, and can be improved by sampling trees or training instances. Empirically, we find that IBUG achieves similar or better performance than the previous state-of-the-art across 22 benchmark regression datasets. We also find that IBUG can achieve improved probabilistic performance by using different base GBRT models, and can more flexibly model the posterior distribution of a prediction than competing methods. We also find that previous methods suffer from poor probabilistic calibration on some datasets, which can be mitigated using a scalar factor tuned on the validation data. Source code is available at https://github.com/jjbrophy47/ibug. | Accept | This paper presents a method for extending any GBRT point predictor to produce probabilistic predictions such that the aleatoric uncertainty can be quantified. It computes a nonparametric distribution around a prediction using the $k$ nearest neighbors, where the distance is measured by a kernel similar to the random forest kernel. The paper is well written and easy to read. All reviewers agree that it is a simple, practical method that is well engineered. But all the techniques used in this system are existing ones, so its technical novelty is limited. During the discussion period, I had more than a few communications with reviewers.
On one hand, there were some concerns about the limited novelty, which I also agree with. In fact, this concern became more notable in the discussion. On the other hand, a strength is in its simplicity, practicability, and its excellence in engineering and design: how to critically evaluate alternative approaches, and how to design experiments that evaluate those approaches. A few things that I would like the authors to consider in their future submissions include: (1) the method is applied to quantify only aleatoric uncertainty, which should be clearly mentioned earlier in the paper, since these days we observe a few interesting methods for quantifying the predictive uncertainty (that is, both aleatoric and epistemic uncertainty); (2) a kernel similar to the random forest kernel is used as a distance metric. Unlike RF, GBRTs construct trees with small depth, so it is expected that many instances fall in the same leaf. The behavior might be different from the case of RF. Despite the concern about limited novelty, most reviewers feel that this work can be accepted, so I recommend it for acceptance.
| train | [
"L84zZlAOC4Q",
"K8-rdNSRbpa",
"Dkf6p7JXv2n",
"nn4t0KOI3_Y",
"cu0GoVA3FiZ",
"KFCeV2jnaym",
"1nmah7Yz5my8",
"3M6NeFB0utd",
"ZoikA6jlqyh",
"uOc7hCh8Htg",
"Wy1ll5z7zY-",
"0XlCCi70BkI",
"ZwOOsyQoTEN",
"H5pQgwokEY"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarifications and additional experimental results. I increased my score.",
" Thanks for the response.",
" We thank the reviewer for their thoughtful feedback and appreciate their recognition of the engineering effort put into IBUG.\n\n**Q: Comparison to Davies and Ghahramani (2014)?**\n\n**... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"3M6NeFB0utd",
"Dkf6p7JXv2n",
"H5pQgwokEY",
"ZwOOsyQoTEN",
"ZwOOsyQoTEN",
"0XlCCi70BkI",
"0XlCCi70BkI",
"0XlCCi70BkI",
"Wy1ll5z7zY-",
"nips_2022_v6CqBssIwYw",
"nips_2022_v6CqBssIwYw",
"nips_2022_v6CqBssIwYw",
"nips_2022_v6CqBssIwYw",
"nips_2022_v6CqBssIwYw"
] |
nips_2022_zkQho-Jxky9 | Counterfactual harm | To act safely and ethically in the real world, agents must be able to reason about harm and avoid harmful actions. However, to date there is no statistical method for measuring harm and factoring it into algorithmic decisions. In this paper we propose the first formal definition of harm and benefit using causal models. We show that any factual definition of harm must violate basic intuitions in certain scenarios, and show that standard machine learning algorithms that cannot perform counterfactual reasoning are guaranteed to pursue harmful policies following distributional shifts. We use our definition of harm to devise a framework for harm-averse decision making using counterfactual objective functions. We demonstrate this framework on the problem of identifying optimal drug doses using a dose-response model learned from randomized control trial data. We find that the standard method of selecting doses using treatment effects results in unnecessarily harmful doses, while our counterfactual approach allows us to identify doses that are significantly less harmful without sacrificing efficacy. | Accept | All reviewers agreed that this paper should be accepted because of the strong author response during the rebuttal phase. Specifically the reviewers appreciated the motivation of the paper, its clarity, and the author clarification of the method, its assumptions, and scope during the rebuttal. Authors: please carefully revise the manuscript based on the suggestions by the reviewers: they made many careful suggestions to improve the work and stressed that the paper should only be accepted once these changes are implemented. Once these are done the paper will be a nice addition to the conference! | train | [
"srvKUolIQMd",
"LUyrUYxzhTZ",
"ljVGxGg9g5T",
"TKVBR-FIhv4",
"CbPHfOJsSLB",
"LHJtfKOkRtD",
"QcBQgwuj01I",
"GGwBj-monES",
"cU2s9_PoWK8",
"8DFXgo6ftK",
"T0tGjTj2gc",
"TxmAHB0HkL3",
"pjw8uJ_sTuy",
"LdkloKhW6ol",
"ZpYJYK4E3d",
"hYAVOQvTprP",
"x0qwVvmjqa-",
"D6JUI7Jx2J",
"ay5fgXm0UKn",... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_rev... | [
" I revised my assessment of the work based on the answers provided by the authors.",
" Thank you for addressing the comments and extending the submitted materials. The paper has been made more comprehensible.",
" Massive thank you for your comments which have really improved the paper with the inclusion of the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"LUyrUYxzhTZ",
"hYAVOQvTprP",
"TKVBR-FIhv4",
"cU2s9_PoWK8",
"LHJtfKOkRtD",
"QcBQgwuj01I",
"GGwBj-monES",
"8DFXgo6ftK",
"9KGFy-sFHM",
"T0tGjTj2gc",
"pjw8uJ_sTuy",
"nips_2022_zkQho-Jxky9",
"ZpYJYK4E3d",
"ay5fgXm0UKn",
"Cs0Kd5nU25",
"x0qwVvmjqa-",
"msGhKUDcGg6",
"nips_2022_zkQho-Jxky9... |
nips_2022_DDEwoD608_l | Hand-Object Interaction Image Generation | In this work, we are dedicated to a new task, i.e., hand-object interaction image generation, which aims to conditionally generate the hand-object image under the given hand, object and their interaction status. This task is challenging and research-worthy in many potential application scenarios, such as AR/VR games and online shopping, etc. To address this problem, we propose a novel HOGAN framework, which utilizes the expressive model-aware hand-object representation and leverages its inherent topology to build the unified surface space. In this space, we explicitly consider the complex self- and mutual occlusion during interaction. During final image synthesis, we consider different characteristics of hand and object and generate the target image in a split-and-combine manner. For evaluation, we build a comprehensive protocol to access both the fidelity and structure preservation of the generated image. Extensive experiments on two large-scale datasets, i.e., HO3Dv3 and DexYCB, demonstrate the effectiveness and superiority of our framework both quantitatively and qualitatively. The code will be available at https://github.com/play-with-HOI-generation/HOIG. | Accept | On the surface, this paper seems to be split between three borderline rejects (4) and one strong champion of the paper (10). However, this is not the full story, since two of the reject-inclined reviewers, Bc9p and zW9y did not participate post-rebuttal, despite multiple prods from the AC. The AC examined the stated weaknesses from Bc9p and zW9y, the authors' response to them, as well as the paper. The AC does not think that the concerns are paper stopping if they were left unaddressed (e.g., a glaring experimental weakness, an incorrect statement) and moreover finds the authors' response to these concerns satisfactory. The AC is then left with the reviews from p4XG (4) and aYmL (10). 
The remaining primary concern comes from p4XG, who points out that the method has a lot of inputs, which makes the problem considerably easier. The AC understands this concern (and thinks lowering the input requirements would be a great next step), but on balance is inclined to accept the paper. This is motivated by the quality of the results as well as aYmL's enthusiasm for the work. The AC would encourage the authors to use the extra page to give clear responses to the reviewers' questions (e.g., what are the applications, why not just do a direct render) in the final version of the paper. Others will have similar questions.
| train | [
"0GT7KBLx9C_",
"HTuo1Ay92e",
"OVxPlcWr89",
"Y3KXfmCWlUh",
"ydYn7I5QDe",
"tUyURI-9A7",
"XSY5ED5XxYbK",
"xwe8EPvoq0P",
"EJIh67NRlRN",
"ugJ875jiFL6",
"HsUIE-tfiZJ",
"TwtFjpqLArG",
"bwCSu03lCI_",
"OSglgaIxkNW",
"ozBistzqtda",
"8icyokoPXhk",
"DZoHGNHgaCK"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer p4XG,\n\nWe would like to thank again for your time and valuable comments.\nHopefully, our rebuttal and new submitted revision could properly address your concerns.\nWe look forward to your feedback and will appreciate it if you could upgrade your score.\n\nWish you a nice day.\n\nBest,\n\nAuthors o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
10,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
3
] | [
"DZoHGNHgaCK",
"8icyokoPXhk",
"OSglgaIxkNW",
"nips_2022_DDEwoD608_l",
"DZoHGNHgaCK",
"8icyokoPXhk",
"OSglgaIxkNW",
"DZoHGNHgaCK",
"8icyokoPXhk",
"HsUIE-tfiZJ",
"ozBistzqtda",
"bwCSu03lCI_",
"OSglgaIxkNW",
"nips_2022_DDEwoD608_l",
"nips_2022_DDEwoD608_l",
"nips_2022_DDEwoD608_l",
"nip... |
nips_2022_8wtaJ9dE9Y2 | Predicting Label Distribution from Multi-label Ranking | Label distribution can provide richer information about label polysemy than logical labels in multi-label learning. There are currently two strategies including LDL (label distribution learning) and LE (label enhancement) to predict label distributions. LDL requires experts to annotate instances with label distributions and learn a predictive mapping on such a training set. LE requires experts to annotate instances with logical labels and generates label distributions from them. However, LDL requires costly annotation, and the performance of the LE is unstable. In this paper, we study the problem of predicting label distribution from multi-label ranking which is a compromise w.r.t. annotation cost but has good guarantees for performance. On the one hand, we theoretically investigate the relation between multi-label ranking and label distribution. We define the notion of EAE (expected approximation error) to quantify the quality of an annotation, give the bounds of EAE for multi-label ranking, and derive the optimal range of label distribution corresponding to a particular multi-label ranking. On the other hand, we propose a framework of label distribution predicting from multi-label ranking via conditional Dirichlet mixtures. This framework integrates the processes of recovering and learning label distributions end-to-end and allows us to easily encode our knowledge about current tasks by a scoring function. Finally, we implement extensive experiments to validate our proposal. | Accept | This paper studies the problem of predicting label distribution from multi-label ranking. First, the authors give a theoretical analysis to prove the superiority of the multi-label ranking over the logical labels. Then an end-to-end framework called DRAM is proposed for recovering and learning label distributions. The corresponding experiments validate the effectiveness of the proposed algorithms. 
Overall, this work is technically solid. The concerns raised by reviewers are not so serious and have been answered. I recommend accepting this paper. | train | [
"_3wxVtiHYYdB",
"CuGyNf5IJ7",
"fhem4Jv8fO",
"_d0otkqdRn",
"SG9wdAsr77q",
"IaIn2W2yIVI",
"SNu_osq8WJ",
"9-_wBdkaN6d"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your suggestions. Below we give the point-by-point responses to your questions.\n\n**Q1: It seems that the theoretical analysis has little relatedness with the proposed framework and there are not experiments to support the theoretical results. It is something that looks beautiful.**\n\nA: The theor... | [
-1,
-1,
-1,
-1,
5,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"9-_wBdkaN6d",
"SNu_osq8WJ",
"IaIn2W2yIVI",
"SG9wdAsr77q",
"nips_2022_8wtaJ9dE9Y2",
"nips_2022_8wtaJ9dE9Y2",
"nips_2022_8wtaJ9dE9Y2",
"nips_2022_8wtaJ9dE9Y2"
] |
nips_2022_tbdk6XLYmZj | Learning Best Combination for Efficient N:M Sparsity | By forcing N out of M consecutive weights to be non-zero, the recent N:M fine-grained network sparsity has received increasing attention with its two attractive advantages over traditional irregular network sparsity methods: 1) Promising performance at a high sparsity. 2) Significant speedups when performed on NVIDIA A100 GPUs. Current implementation on N:M sparsity requires a tedious pre-training phase or computationally heavy from-scratch training. To circumvent these problems, this paper presents an efficient solution for achieving N:M fine-grained sparsity from scratch. Specifically, we first make a re-formulation to convert the N:M fine-grained sparsity into a combinatorial problem, in which, the object falls into choosing the best weight combination among $C_M^N$ candidates. Then, we equip each combination with a learnable importance score, which can be jointly optimized along with its associated weights. Through rigorous proof, we demonstrate that the magnitude of the optimized score well reflects the importance of its corresponding weights combination to the training loss. Therefore, by gradually removing combinations with smaller scores till the best one is left, N:M fine-grained sparsity can be efficiently optimized during the normal training phase without any extra expenditure. Comprehensive experimental results have demonstrated that our proposed method for learning best combination, dubbed as LBC, consistently increases the efficacy of the off-the-shelf N:M methods across varying networks and datasets. Our project is released at https://github.com/zyxxmu/LBC.
| Accept | The paper presents a novel method on training N:M sparse-weight neural networks, which can be significantly accelerated by NVIDIA A100 GPUs. The optimal N:M pattern can be found via jointly solving a series of combinatorial problems with finite collections of candidates. Majority of the reviewers found the paper
The AC believes that the concern of reviewer 11EU can be addressed by re-phrasing the descriptions and does not affect the effectiveness and novelty of this paper. | train | [
"uvMZVQK27k2",
"iAXdxkkEWwl",
"qt2LrB-IWPt",
"6g1wcdDVEC",
"ZKKWwTjWGyu",
"4U_ghpcF65a",
"aUkaOGnFAOy",
"1BHL8Ggqb5K",
"1w09gsKz66v",
"9oHhSisvYqAZ",
"-Qk6bouBqY5",
"GBInIo0p3p9",
"l5U-UHBcYey",
"aXWq5W3qXP_",
"w4wOswr4Bxx"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank you for your timely and constructive feedback. Also, we appreciate your effort in reviewing this paper. We believe we introduce a sound approach and have made a strong contribution to N:M sparsity, which are already appraised by the other three reviewers. Our further responses are provided belo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
3
] | [
"iAXdxkkEWwl",
"9oHhSisvYqAZ",
"4U_ghpcF65a",
"w4wOswr4Bxx",
"aXWq5W3qXP_",
"l5U-UHBcYey",
"GBInIo0p3p9",
"w4wOswr4Bxx",
"aXWq5W3qXP_",
"GBInIo0p3p9",
"l5U-UHBcYey",
"nips_2022_tbdk6XLYmZj",
"nips_2022_tbdk6XLYmZj",
"nips_2022_tbdk6XLYmZj",
"nips_2022_tbdk6XLYmZj"
] |
nips_2022_6H2pBoPtm0s | ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation | Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision transformers as backbones to extract features for a given person instance and a lightweight decoder for pose estimation. It can be scaled up from 100M to 1B parameters by taking the advantages of the scalable model capacity and high parallelism of transformers, setting a new Pareto front between throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, pre-training and finetuning strategy, as well as dealing with multiple pose tasks. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our basic ViTPose model outperforms representative methods on the challenging MS COCO Keypoint Detection benchmark, while the largest model sets a new state-of-the-art. The code and models are available at https://github.com/ViTAE-Transformer/ViTPose. | Accept | This submission received positive reviews. After rebuttal and discussions, all the reviewers feel positive about this submission with raised concerns addressed. After checking all the reviews and rebuttal, the AC stands on the reviewers' side and believe the current work is suitable for publication in this venue. 
The authors shall revise the manuscript according to the suggestions from the reviewers in the camera-ready submissions. | train | [
"g1zIftl3BW",
"fUFmPud53T_",
"oQGVgj1TsZU",
"qpWxKv4yI-g",
"wArqLFBkdnI",
"bj759I0exbM",
"0pmrjJgt2t",
"ZMMvMh8ctVy",
"46AzOFeL0nd",
"F1K_2ig6GI",
"E9jPdR4MC0I",
"I4F4wdyIyWm",
"DPqq0H43in_",
"Rzcqq8Teoc0",
"7dG5O3_FS2",
"O1hmiFQgI4I",
"G1nIEvXS55U",
"V3QsQXQklIA",
"1-KjMQxveOd",... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for your valuable comments and suggestions! We are encouraged by the resolution of your major concerns and appreciate your constructive comments to improve our work. We promise that we will incorporate all feedback in the revised version and carefully amend the paper.",
" The authors have well addressed ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
"fUFmPud53T_",
"I4F4wdyIyWm",
"qpWxKv4yI-g",
"E9jPdR4MC0I",
"ZMMvMh8ctVy",
"0pmrjJgt2t",
"F1K_2ig6GI",
"nips_2022_6H2pBoPtm0s",
"nips_2022_6H2pBoPtm0s",
"5Edx4cWukJV",
"TO2Drlc6R98",
"DPqq0H43in_",
"Rzcqq8Teoc0",
"qeph9O4Yy3z",
"O1hmiFQgI4I",
"G1nIEvXS55U",
"V3QsQXQklIA",
"oNWIMSRZ... |
nips_2022_QNBzcgY0f4e | Easy incremental learning methods to consider for commercial fine-tuning applications | Fine-tuning deep learning models for commercial use cases is growing exponentially as more and more companies are adopting AI to enhance their core products and services, as well as automate their diurnal processes and activities. However, not many countries like the U.S. and those in Europe follow quality data collection methods for AI vision or NLP related automation applications. Thus, on many of these kinds of data, existing state-of-the-art pre-trained deep learning models fail to perform accurately, and when fine-tuning is done on these models, issues like catastrophic forgetting or being less specific in predictions as expected occur. Hence, in this paper, simplified incremental learning methods are introduced to be considered in existing fine-tuning infrastructures of pre-trained models (such as those available in huggingface.com) to help mitigate the aforementioned issues for commercial applications. The methods introduced are: 1) Fisher Shut-off, 2) Fractional Data Retention and 3) Border Control. Results show that when applying these methods on vanilla pre-trained models, the models are in fact able to add more to their knowledge without hurting much on what they had learned previously. | Reject | This paper motivates problems related to fine tuning of pre-trained deep learning models for commercial applications and proposes three solutions for incremental learning: Fisher Shut-off, Fractional Data Retention and Border Control). The reviewers thought the work was well-motivated and they were in agreement that this is a timely and important topic. However, they all found the novelty to be too low and the experiments unconvincing for NeurIPS. Therefore the recommendation is to reject the paper. | train | [
"DdIu6Jddvoj",
"2Vjvv7sP_D",
"Zr7qul0-gHG",
"D8PyJntYJNB",
"v68PMXKalQN",
"_IkoFCytqcy"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" FDR may not be novel, but Border Control or anything similar has not been proposed yet.\n\nIdea of the paper was to introduce easy incremental learning methods for commercial applications so that an infrastructure could be built out of it. And so visuals on the performance of these methods was key for the paper t... | [
-1,
-1,
-1,
2,
1,
2
] | [
-1,
-1,
-1,
4,
5,
5
] | [
"D8PyJntYJNB",
"v68PMXKalQN",
"_IkoFCytqcy",
"nips_2022_QNBzcgY0f4e",
"nips_2022_QNBzcgY0f4e",
"nips_2022_QNBzcgY0f4e"
] |
nips_2022_Ho6oWAslz5L | Saliency-Aware Neural Architecture Search | Recently a wide variety of NAS methods have been proposed and achieved considerable success in automatically identifying highly-performing architectures of neural networks for the sake of reducing the reliance on human experts. Existing NAS methods ignore the fact that different input data elements (e.g., image pixels) have different importance (or saliency) in determining the prediction outcome. They treat all data elements as being equally important and therefore lead to suboptimal performance. To address this problem, we propose an end-to-end framework which dynamically detects saliency of input data, reweights data using saliency maps, and searches architectures on saliency-reweighted data. Our framework is based on four-level optimization, which performs four learning stages in a unified way. At the first stage, a model is trained with its architecture tentatively fixed. At the second stage, saliency maps are generated using the trained model. At the third stage, the model is retrained on saliency-reweighted data. At the fourth stage, the model is evaluated on a validation set and the architecture is updated by minimizing the validation loss. Experiments on several datasets demonstrate the effectiveness of our framework. | Accept | This paper proposed a novel method that reweights data using saliency maps and searches architecture using saliency-reweighted data.
There are four official reviewers for this submission. The reviewers consistently agree with the novelty, presentation, and experimental validation of this submission. The ratings are: borderline accept/accept/weak accept. The concerns raised by the reviewers are well addressed during the rebuttal.
Thus the AC would like to recommend acceptance.
| train | [
"YlZ3cz-4ms9",
"KcMezNfu05s",
"l0jFlQRR3Uy",
"OOWKUrakmZ",
"UFJQU87EDmu",
"QP2qfml677G",
"DMibAvHci1e",
"-zZGKtjmewF",
"HWdsUHTnFW9",
"Gx7pkD8gon1",
"df_5cUA2F_",
"7XDBFzKilW"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for reading our response and raising the score. We highly appreciate the reviewer's constructive suggestions.",
" Thank you for your feedback. The paper seems a good one to be further discussed officially in the community. I am raising the score.",
" We would like to thank the reviewer f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"KcMezNfu05s",
"UFJQU87EDmu",
"OOWKUrakmZ",
"DMibAvHci1e",
"7XDBFzKilW",
"df_5cUA2F_",
"Gx7pkD8gon1",
"HWdsUHTnFW9",
"nips_2022_Ho6oWAslz5L",
"nips_2022_Ho6oWAslz5L",
"nips_2022_Ho6oWAslz5L",
"nips_2022_Ho6oWAslz5L"
] |