paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_WrZZcwxMNhT | One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement | Multi-label learning (MLL) learns from examples, each associated with multiple labels simultaneously, where the high cost of annotating all relevant labels for each training example is challenging for real-world applications. To cope with this challenge, we investigate single-positive multi-label learning (SPMLL), where each example is annotated with only one relevant label, and show that one can successfully learn a theoretically grounded multi-label classifier for the problem. In this paper, a novel SPMLL method named SMILE, i.e., Single-positive MultI-label learning with Label Enhancement, is proposed. Specifically, an unbiased risk estimator is derived, which is guaranteed to approximately converge to the optimal risk minimizer of fully supervised learning and shows that one positive label per instance is sufficient to train the predictive model. Then, the corresponding empirical risk estimator is established via recovering the latent soft label as a label enhancement process, where the posterior density of the latent soft labels is approximated by a variational Beta density parameterized by an inference model. Experiments on benchmark datasets validate the effectiveness of the proposed method. | Accept | This paper studies the single-positive multi-label learning problem, in which each example is annotated with only one relevant label. The problem is practical and challenging. To address this problem, this paper proposes a new unbiased estimator with a theoretical guarantee. The idea is novel and technically sound. The experimental results also demonstrate the effectiveness of the proposal. All reviewers agree that this study is novel and solid. So I recommend acceptance. | train | [
"lzunL7j4o-0",
"qBT-1jhyJoe",
"Tvvhxil2ePV",
"u9ndfvf-DdnG",
"eyA9PuS2fut8",
"7vCh5llaTku",
"RUHoxTdYSvn",
"MjgjEAlDyo_",
"e0KodBGU6ZL"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Table 3. Recovery performance on measured by cosine coefficient $\\uparrow$\n\n|\t|ML\t|GLLE\t|LESC\t|PLEML\t|Ours|\n| ---- | ----| ---- | ---- | ----| ---- |\n| SJAFFE\t| 0.8039 \t| 0.9590 \t| 0.9713 \t| 0.9609 \t|$ 0.9729 $|\n| Yeast_spoem|0.8721 \t|0.9747 \t|0.9760 \t|0.9627 \t|$0.9835 $|\n| Yeast_spo5\t|0.790... | [
-1,
-1,
-1,
-1,
-1,
6,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"eyA9PuS2fut8",
"e0KodBGU6ZL",
"MjgjEAlDyo_",
"RUHoxTdYSvn",
"7vCh5llaTku",
"nips_2022_WrZZcwxMNhT",
"nips_2022_WrZZcwxMNhT",
"nips_2022_WrZZcwxMNhT",
"nips_2022_WrZZcwxMNhT"
] |
nips_2022_ohk8bILFDkk | Semi-infinitely Constrained Markov Decision Processes | We propose a generalization of constrained Markov decision processes (CMDPs) that we call the \emph{semi-infinitely constrained Markov decision process} (SICMDP).
Particularly, in a SICMDP model, we impose a continuum of constraints instead of a finite number of constraints as in the case of ordinary CMDPs.
We also devise a reinforcement learning algorithm for SICMDPs that we call SI-CRL.
We first transform the reinforcement learning problem into a linear semi-infinite programming (LSIP) problem and then use the dual exchange method from the LSIP literature to solve it.
To the best of our knowledge, we are the first to apply tools from semi-infinite programming (SIP) to solve reinforcement learning problems.
We present a theoretical analysis for SI-CRL, identifying its sample complexity and iteration complexity.
We also present extensive numerical examples to illustrate the SICMDP model and validate the SI-CRL algorithm. | Accept | This paper develops an RL method for semi-infinitely constrained MDPs. Apparently they are the first to do this, though the algorithmic contributions are not particularly innovative and the theoretical results are not particularly surprising. I see this as a fine, albeit technical, contribution to the literature. I am on the fence with regards to acceptance. | train | [
"T9spOIkYYK5",
"oz9llOL41XDC",
"qrjFCcAkyXM",
"vb48_zzINZ0",
"6iLA8cLaJ-R",
"OTeaUcbw5x",
"WGMj_BDLyUG"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer V6h8,\n\nAs the author-reviewer discussion period will end soon, we will greatly appreciate it if you could let us know whether our response satisfactorily addresses your concerns. If not, we are glad to give further explanations.\n\nYours,\nAuthors",
" Thanks for your feedback and constructive ad... | [
-1,
-1,
-1,
-1,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"6iLA8cLaJ-R",
"6iLA8cLaJ-R",
"OTeaUcbw5x",
"WGMj_BDLyUG",
"nips_2022_ohk8bILFDkk",
"nips_2022_ohk8bILFDkk",
"nips_2022_ohk8bILFDkk"
] |
nips_2022_pELM0QgWIjn | Quasi-Newton Methods for Saddle Point Problems | This paper studies quasi-Newton methods for strongly-convex-strongly-concave saddle point problems.
We propose random Broyden family updates, which have an explicit local superlinear convergence rate of ${\mathcal O}\big(\big(1-1/(d\varkappa^2)\big)^{k(k-1)/2}\big)$, where $d$ is the dimension of the problem, $\varkappa$ is the condition number, and $k$ is the number of iterations. The design and analysis of the proposed algorithm are based on estimating the square of the indefinite Hessian matrix, which differs from classical quasi-Newton methods in convex optimization. We also present two specific Broyden family algorithms with BFGS-type and SR1-type updates, which enjoy a faster local convergence rate of $\mathcal O\big(\big(1-1/d\big)^{k(k-1)/2}\big)$. Our numerical experiments show that the proposed algorithms outperform classical first-order methods. | Accept | Thank you for your submission to NeurIPS. The reviewers unanimously found the work to address an important, relevant problem, and the paper to be clear and generally well-written. All four reviewers unanimously recommend accepting the paper.
The work has obvious impact for the ML community: the idea of rewriting the Newton update $z_+ = z - H^{-1}g$ in terms of the positive definite squared Hessian $z_+ = z - H^{-2} (H g)$ and then using a quasi-Newton scheme to approximate $H^{-2}$ is immediately intuitive and applicable to a wide range of practical problems. The paper provides a rigorous guarantee that such a quasi-Newton scheme converges superlinearly within a neighborhood of the saddle point.
Please incorporate reviewer feedback in preparing the camera ready version. In particular, please take care to include the Comparison to Concurrent Work [23, 46] in your response to reviewer MK9. | train | [
"Wo-Z6lwChRj",
"wTfjHFEFeOs",
"t9TAdFaQstL",
"ZJHunLG_fQ0",
"2nePUEBsoJi",
"SUfE_1eMSvW",
"pENHTnz_3MB",
"h4m6BybQsgR",
"cFDdeIdfq3f",
"StIK-Tktcgi",
"wT4hpe7GSEd"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciated the authors for their detailed response and clarification on each comments. I remained my point of view. Thanks.",
" Thank you for the clarifications. I have no further concerns and would like to keep my current score. ",
" I thank the authors for their detailed response. In particular, I like t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
3
] | [
"2nePUEBsoJi",
"SUfE_1eMSvW",
"ZJHunLG_fQ0",
"h4m6BybQsgR",
"cFDdeIdfq3f",
"StIK-Tktcgi",
"wT4hpe7GSEd",
"nips_2022_pELM0QgWIjn",
"nips_2022_pELM0QgWIjn",
"nips_2022_pELM0QgWIjn",
"nips_2022_pELM0QgWIjn"
] |
nips_2022_i3k6WjDXECC | Hierarchical Channel-spatial Encoding for Communication-efficient Collaborative Learning | Collaborative learning (CL) systems often face the performance bottleneck of limited bandwidth, where multiple low-end devices continuously generate data and transmit intermediate features to the cloud for incremental training. To this end, improving communication efficiency by reducing traffic size is one of the most crucial issues for realistic deployment. Existing systems mostly compress features at the pixel level and ignore the characteristics of feature structure, which could be further exploited for more efficient compression. In this paper, we offer new insights into implementing scalable CL systems through a hierarchical compression on features, termed Stripe-wise Group Quantization (SGQ). Different from previous unstructured quantization methods, SGQ captures both channel and spatial similarity in pixels, and simultaneously encodes features at these two levels to gain a much higher compression ratio. In particular, we refactor the feature structure based on inter-channel similarity and bound the gradient deviation caused by quantization, in the forward and backward passes, respectively. Such a double-stage pipeline gives SGQ the same sublinear convergence order as vanilla SGD-based optimization. Extensive experiments show that SGQ achieves a traffic reduction ratio of up to 15.97 times and provides a 9.22 times image processing speedup over uniform quantized training, while preserving model accuracy comparable to FP32, even using 4-bit quantization. This verifies that SGQ can be applied to a wide spectrum of edge intelligence applications. | Accept | This paper proposes a novel communication-efficient learning method that significantly reduces feature size and communication traffic. The rebuttal resolved the reviewers' concerns about the dataset size and the accuracy/latency trade-off. | train | [
"T6rczj6WmKZ",
"Xu-AC4l698",
"TV4FsdY0esK",
"Z1SaEYo5tua",
"3VcshIUpEiY",
"D8nzXsHp80",
"KSmDX5ybScS",
"lqDBEcCTsvU",
"6_PPaD3c3-QU",
"YRN2aJB7C6HF",
"uWFGcZU2NGs",
"vj4Kdzf-2xM",
"EzzAKMZ7cRZ",
"YPBdEC5hY7w",
"Er8fFzLXub9",
"qKgzmHhUElB",
"yeiWpkG-QWi"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer jfAG,\n\nThank you very much for your kind comments and suggestions. We are sincerely encouraged by your acknowledgment of our work. In the final version, we will add detailed ImageNet-1K results to the appendix and try our best to improve the paper. \n\nThank you for your time again!\n\nBest,\n\nAu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
1,
3
] | [
"Xu-AC4l698",
"lqDBEcCTsvU",
"qKgzmHhUElB",
"YPBdEC5hY7w",
"nips_2022_i3k6WjDXECC",
"nips_2022_i3k6WjDXECC",
"YPBdEC5hY7w",
"YPBdEC5hY7w",
"yeiWpkG-QWi",
"yeiWpkG-QWi",
"qKgzmHhUElB",
"qKgzmHhUElB",
"Er8fFzLXub9",
"nips_2022_i3k6WjDXECC",
"nips_2022_i3k6WjDXECC",
"nips_2022_i3k6WjDXECC... |
nips_2022_XEoih0EwCwL | Retrospective Adversarial Replay for Continual Learning | Continual learning is an emerging research challenge in machine learning that addresses the problem where models quickly fit the most recently trained-on data but suffer from catastrophic forgetting of previous data due to distribution shifts --- it does this by maintaining a small historical replay buffer in replay-based methods. To avoid these problems, this paper proposes a method, ``Retrospective Adversarial Replay (RAR)'', that synthesizes adversarial samples near the forgetting boundary. RAR perturbs a buffered sample towards its nearest neighbor drawn from the current task in a latent representation space. By replaying such samples, we are able to refine the boundary between previous and current tasks, hence combating forgetting and reducing bias towards the current task. To mitigate the severity of a small replay buffer, we develop a novel MixUp-based strategy to increase replay variation by replaying mixed augmentations. Combined with RAR, this achieves a holistic framework that helps to alleviate catastrophic forgetting. We show that this excels on broadly-used benchmarks and outperforms other continual learning baselines especially when only a small buffer is available. We conduct a thorough ablation study over each key component as well as a hyperparameter sensitivity analysis to demonstrate the effectiveness and robustness of RAR. | Accept | This paper proposes adversarially perturbed samples from a replay buffer to simulate examples that are on "forgetting boundary" in the continual learning setting. After reading the paper, I found it interesting and insightful: the methodology makes sense, the analysis of their method is thorough, and the results validate their method. The reviewers asked some good questions, and I believe the authors did a good job at answering them sufficiently.
I therefore recommend the paper for acceptance. We ask, however, that the authors fix the first line of the abstract, which seems to suggest that all CL methods use a memory buffer.
I'm disappointed the reviewers did not participate more: they were reminded both by me and by the authors about the discussion, yet no one participated after they gave their initial, albeit useful, signal. | train | [
"vczWUTEotKw",
"6BWa2GVOyLS",
"p8Rq-zwNB8c",
"WZwSN_EbFpk",
"sXGWBlJiUmf",
"V_mqW1FVGqV",
"bf8r0bY0ikm",
"2plNlH5-DL8",
"JfTrsNZgLYu",
"pGIwbEKBGz",
"XFSPjOKf97Q",
"ocvwMLOdusG",
"Xuhwfpjzt67",
"M9_UPEewYZv"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my comments and questions. I sustain my score and weakly recommend acceptance of this work.",
" We thank you sincerely for your thorough and helpful review! **In order to address your concerns about the computational cost reduction and the comparison to standard data augmentation method... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
3
] | [
"6BWa2GVOyLS",
"bf8r0bY0ikm",
"JfTrsNZgLYu",
"V_mqW1FVGqV",
"2plNlH5-DL8",
"pGIwbEKBGz",
"M9_UPEewYZv",
"Xuhwfpjzt67",
"ocvwMLOdusG",
"XFSPjOKf97Q",
"nips_2022_XEoih0EwCwL",
"nips_2022_XEoih0EwCwL",
"nips_2022_XEoih0EwCwL",
"nips_2022_XEoih0EwCwL"
] |
nips_2022_BbaSRgUHW3 | LTMD: Learning Improvement of Spiking Neural Networks with Learnable Thresholding Neurons and Moderate Dropout | Spiking Neural Networks (SNNs) have shown substantial promise in processing spatio-temporal data, mimicking biological neuronal mechanisms, and saving computational power. However, most SNNs use a fixed neuronal model regardless of location in the network. This limits SNNs' capability of transmitting precise information through the network, which becomes worse for deeper SNNs. Some researchers try to use specified parametric models in different network layers or regions, but most still use preset or suboptimal parameters. Inspired by the neuroscience observation that different neuronal mechanisms exist in disparate brain regions, we propose a new spiking neuronal mechanism, named learnable thresholding, to address this issue. Utilizing learnable threshold values, learnable thresholding enables flexible neuronal mechanisms across layers, proper information flow within the network, and fast network convergence. In addition, we propose a moderate dropout method to serve as an enhancement technique that minimizes inconsistencies between independent dropout runs. Finally, we evaluate the robustness of the proposed learnable thresholding and moderate dropout for image classification with different initial thresholds on various types of datasets. Our proposed methods produce superior results compared to other approaches on almost all datasets with fewer timesteps. Our codes are available at https://github.com/sq117/LTMD.git. | Accept | This paper introduces a trainable threshold and a dropout variant to improve training of spiking neural networks. There were serious issues raised by the reviewers, primarily about 1) the idea of trainable spiking thresholds not being new, and 2) the paper providing insufficient computational validation. The authors addressed the second point.
About the first point, the authors provided the following: "The aim of our work is not to show that neuronal heterogeneity is a critical property for spiking neurons and can benefit SNN performance, some other works [R7, R8] have already introduced that. In this work, we want to show a practical threshold optimization methodology to enhance neuronal heterogeneity to those who are already interested in this special property of SNNs and expecting an easy implementation." The AC agrees that the paper achieves this goal, and, after much thought, agreed that this is a sufficient contribution for acceptance to NeurIPS.
In addition, as a minor point, the AC agrees with the opinion of Reviewer BGk9 that the term "dynamic thresholding" is misleading. | train | [
"vle-IzEthD4",
"4IIgwu3os1",
"AoUCq-mgpFA",
"1P-khL1wO5VA",
"TaREzUi2RIv",
"XlBN4m5ClGZ",
"kZUEVz8Qbt",
"csgfyJONWhh",
"TO6TK0TVtdl",
"23yehIT93wJ",
"SU00tJEGJoh",
"igJ18huJxjx",
"xwupdBxcFLv",
"8QZtFjku0o7",
"y_80si1DO9Q",
"GuvDx05YSD"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the additional experimental results and more details related to my questions! \n\n1. The proposed method is not a dynamic threshold, especially since many dynamic threshold schemes have been proposed and tested. The proposed method is a learnable threshold, specifically learned for a specific task. The... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"8QZtFjku0o7",
"AoUCq-mgpFA",
"1P-khL1wO5VA",
"y_80si1DO9Q",
"8QZtFjku0o7",
"GuvDx05YSD",
"GuvDx05YSD",
"y_80si1DO9Q",
"y_80si1DO9Q",
"y_80si1DO9Q",
"8QZtFjku0o7",
"8QZtFjku0o7",
"nips_2022_BbaSRgUHW3",
"nips_2022_BbaSRgUHW3",
"nips_2022_BbaSRgUHW3",
"nips_2022_BbaSRgUHW3"
] |
nips_2022_lCGDKJGHoUv | Benign Underfitting of Stochastic Gradient Descent | We study to what extent may stochastic gradient descent (SGD) be understood as a ``conventional'' learning rule that achieves generalization performance by obtaining a good fit to training data. We consider the fundamental stochastic convex optimization framework, where (one pass, $\textit{without}$-replacement) SGD is classically known to minimize the population risk at rate $O(1/\sqrt n)$, and prove that, surprisingly, there exist problem instances where the SGD solution exhibits both empirical risk and generalization gap of $\Omega(1)$. Consequently, it turns out that SGD is not algorithmically stable in $\textit{any}$ sense, and its generalization ability cannot be explained by uniform convergence or any other currently known generalization bound technique for that matter (other than that of its classical analysis). We then continue to analyze the closely related $\textit{with}$-replacement SGD, for which we show that an analogous phenomenon does not occur and prove that its population risk does in fact converge at the optimal rate. Finally, we interpret our main results in the context of without-replacement SGD for finite-sum convex optimization problems, and derive upper and lower bounds for the multi-epoch regime that significantly improve upon previously known results. | Accept | This paper analyzes the behavior of SGD for stochastic convex optimization, showing that there exist problem instances where the SGD solution exhibits both significant empirical risk and generalization gap in the without replacement case. The finding is potentially impactful in the context of SGD generalization bounds. The paper is well-structured and provides good intuition for the proof technique. However, I encourage the author(s) to provide a construction of the failure that is more general and less artificial. | val | [
"j3LyehJuFVK",
"2iMkmkiLbrSy",
"oJt4XOBuLZl",
"xcY7KPsCZxK",
"f_iWanEf7GU",
"Z6bHkvLgKe",
"qopLudnhclc",
"GxIqO9kc4W1",
"OA8ycknaZwB",
"jGkqBJ2J3wi"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for this message. We are pleased to have been able to address most of your concerns - we would be glad to try and clarify any remaining ones. Thanks!",
" Thanks for your response which addressed nearly all of my concerns. Thanks for pointing out that Theorem 1, 2 covers the last iteration solution!"... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
4
] | [
"2iMkmkiLbrSy",
"oJt4XOBuLZl",
"jGkqBJ2J3wi",
"OA8ycknaZwB",
"GxIqO9kc4W1",
"qopLudnhclc",
"nips_2022_lCGDKJGHoUv",
"nips_2022_lCGDKJGHoUv",
"nips_2022_lCGDKJGHoUv",
"nips_2022_lCGDKJGHoUv"
] |
nips_2022_ywxtmG1nU_6 | Equivariant Graph Hierarchy-Based Neural Networks | Equivariant Graph neural Networks (EGNs) are powerful in characterizing the dynamics of multi-body physical systems. Existing EGNs conduct flat message passing, which, however, is unable to capture the spatial/dynamical hierarchy of complex systems in particular, limiting substructure discovery and global information fusion. In this paper, we propose Equivariant Hierarchy-based Graph Networks (EGHNs), which consist of three key components: generalized Equivariant Matrix Message Passing (EMMP), E-Pool, and E-UnPool. In particular, EMMP is able to improve the expressivity of conventional equivariant message passing, E-Pool assigns the quantities of the low-level nodes into high-level clusters, while E-UnPool leverages the high-level information to update the dynamics of the low-level nodes. As their names imply, both E-Pool and E-UnPool are guaranteed to be equivariant to meet physical symmetry. Considerable experimental evaluations verify the effectiveness of our EGHN on several applications including multi-object dynamics simulation, motion capture, and protein dynamics modeling. | Accept | All reviewers agree that this work makes progress on the front of E3-equivariant message-passing neural networks. Though not surprising in light of previous work (e.g., on graph coarsening and pooling for GNNs), the introduced modifications appear to be novel in the context of EGNN. The proposed ideas are well motivated and explained while also yielding numerical benefits. | train | [
"tibHv-DNhgS",
"_MN6JX8CB3",
"yHsISp2L6nF",
"WdXV00uZDt4",
"e4iy5K4JqZE",
"72IGeh54Km",
"hU8QDGRXALY",
"OQTs0tVd1d",
"Xi7IHR2wgq3",
"MrSL_b9b8Ds",
"VnQIiCQJjY",
"HNMxB28B12L",
"Y2KoWYs2Njp"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 2KPJ,\n\nThank you for the support and comments which help improve the paper!\n\nBest,\n\nAuthors",
" I thank the authors for their reply. I believe they have fairly address my concerns. Thus, I have increased my score.",
" Dear Reviewer LcX5,\n\nThank you for the support and helpful advice on t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"_MN6JX8CB3",
"MrSL_b9b8Ds",
"e4iy5K4JqZE",
"72IGeh54Km",
"OQTs0tVd1d",
"hU8QDGRXALY",
"Xi7IHR2wgq3",
"Y2KoWYs2Njp",
"HNMxB28B12L",
"VnQIiCQJjY",
"nips_2022_ywxtmG1nU_6",
"nips_2022_ywxtmG1nU_6",
"nips_2022_ywxtmG1nU_6"
] |
nips_2022_GXOC0zL0ZI | Learning Neural Set Functions Under the Optimal Subset Oracle | Learning set functions becomes increasingly important in many applications like product recommendation and compound selection in AI-aided drug discovery. The majority of existing works study methodologies of set function learning under the function value oracle, which, however, requires expensive supervision signals. This renders it impractical for applications with only weak supervisions under the Optimal Subset (OS) oracle, the study of which is surprisingly overlooked. In this work, we present a principled yet practical maximum likelihood learning framework, termed as EquiVSet, that simultaneously meets the following desiderata of learning neural set functions under the OS oracle: i) permutation invariance of the set mass function being modeled; ii) permission of varying ground set; iii) minimum prior and iv) scalability. The main components of our framework involve: an energy-based treatment of the set mass function, DeepSet-style architectures to handle permutation invariance, mean-field variational inference, and its amortized variants. Thanks to the delicate combination of these advanced architectures, empirical studies on three real-world applications (including Amazon product recommendation, set anomaly detection, and compound selection for virtual screening) demonstrate that EquiVSet outperforms the baselines by a large margin. | Accept | Reviewers have expressed strongly in favour of acceptance, two improving their score after the rebuttal and discussion. I’m happy to recommend acceptance. | train | [
"cP1oW0gyItl",
"BIOTiO4YwO",
"wnh1u9NA344",
"Z-cH23rgNG4",
"L7YEBNvSEXI",
"w8Yy6Rss9Nt",
"BsF1uzj7VDO",
"V7lCyST7Yyed",
"hJeRfaR-1kT",
"i8hdiAKSsE",
"yUS6KkoyMYN",
"FPKCdqVo7ZY",
"QCRqoJkBQu",
"3vPLs3eaV6z",
"NUveaYO1U9",
"CjkyPQ9enzd",
"DewuLMGD_8",
"9ybg3wgBIv",
"DnShvnSBtdu",
... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Dear Reviewers,\n\nThanks a lot for your favor in accepting our work, and thanks for your helpful and constructive comments. We have revised the manuscript accordingly. The main changes are marked in blue. Here are they:\n\n- Appendix E.4 for R**6kuR:** \"*assumption on the data distribution*\";\n- Appendix F.3 f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4,
4
] | [
"nips_2022_GXOC0zL0ZI",
"wnh1u9NA344",
"QCRqoJkBQu",
"L7YEBNvSEXI",
"w8Yy6Rss9Nt",
"BsF1uzj7VDO",
"V7lCyST7Yyed",
"hJeRfaR-1kT",
"FPKCdqVo7ZY",
"yUS6KkoyMYN",
"nips_2022_GXOC0zL0ZI",
"PnvghDoczu1",
"3vPLs3eaV6z",
"_l6z1hjBAcq",
"CjkyPQ9enzd",
"c_Vflp6072V",
"9ybg3wgBIv",
"4195mTmks... |
nips_2022_W23_S057z94 | Conditional Independence Testing with Heteroskedastic Data and Applications to Causal Discovery | Conditional independence (CI) testing is frequently used in data analysis and machine learning for various scientific fields and it forms the basis of constraint-based causal discovery. Oftentimes, CI testing relies on strong, rather unrealistic assumptions. One of these assumptions is homoskedasticity, in other words, a constant conditional variance is assumed. We frame heteroskedasticity in a structural causal model framework and present an adaptation of the partial correlation CI test that works well in the presence of heteroskedastic noise, given that expert knowledge about the heteroskedastic relationships is available. Further, we provide theoretical consistency results for the proposed CI test which carry over to causal discovery under certain assumptions. Numerical causal discovery experiments demonstrate that the adapted partial correlation CI test outperforms the standard test in the presence of heteroskedasticity and is on par for the homoskedastic case. Finally, we discuss the general challenges and limits as to how expert knowledge about heteroskedasticity can be accounted for in causal discovery. | Accept | In real problems, data often exhibit the heterogeneity property. As a consequence, conditional independence tests that rely on the data homoskedasticity assumption may perform suboptimally, leading to inaccurate causal discovery results. This paper adapts the partial correlation tests to account for heteroskedastic noise and provides some necessary theoretical guarantees and empirical results. Reviewers agree that the studied problem is well motivated and that the solution is sensible. | train | [
"thE63PHVTZ_",
"zFFS9Vk3FmZ",
"iRJeR3TyDGF",
"e1yQG9nH1rK",
"FhW06oAyWzI",
"Uezy6JIzXmu",
"yDCHKcdqaXb",
"wggSWbb4Wx"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers, \n\nThank you for reviewing this paper. Could you respond to the author feedback, or at least acknowledge that you have read the response (thanks to reviewer vG5d for having done it)? Please indicate whether the author response addresses your concerns.\n\nThanks,\nAC",
" Thanks for a detailed re... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2022_W23_S057z94",
"FhW06oAyWzI",
"wggSWbb4Wx",
"yDCHKcdqaXb",
"Uezy6JIzXmu",
"nips_2022_W23_S057z94",
"nips_2022_W23_S057z94",
"nips_2022_W23_S057z94"
] |
nips_2022_ATfARCRmM-a | Molecule Generation by Principal Subgraph Mining and Assembling | Molecule generation is central to a variety of applications. Recent attention has been paid to approaching the generation task as subgraph prediction and assembling. Nevertheless, these methods usually rely on hand-crafted or external subgraph construction, and the subgraph assembling depends solely on local arrangement. In this paper, we define a novel notion, the principal subgraph, which is closely related to the informative patterns within molecules. Interestingly, our proposed merge-and-update subgraph extraction method can automatically discover frequent principal subgraphs from the dataset, which previous methods are incapable of. Moreover, we develop a two-step subgraph assembling strategy, which first predicts a set of subgraphs in a sequence-wise manner and then assembles all generated subgraphs globally as the final output molecule. Built upon a graph variational auto-encoder, our model is demonstrated to be effective in terms of several evaluation metrics and efficiency, compared with state-of-the-art methods on distribution learning and (constrained) property optimization tasks. | Accept | This paper proposes a molecule generation method using frequent subgraphs. There was a positive consensus among reviewers that the paper is novel (B6na, EktL) and well-analyzed (B6na, ESyH), and minor suggestions for improved evaluations and presentation (EktL, ESyH) were well-handled during rebuttals to alleviate reviewer concerns. | val | [
"i3liXuSeoVh",
"OhZH66OEpSdg",
"F-_qVLdWtp",
"PRTdyMaVOfz",
"QtHdoilbxuY",
"OLFgPFZTvDH",
"VGbg9bItY8I",
"JzYWvDt2OIP",
"04HZxWglMBG",
"3RoYTetEa_z",
"hqlog8ZUmvM",
"6rBoT2NDrAS",
"7wzEmAiy91N"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your insightful comments, which help improve our paper.",
" Thank you for providing additional experimental results and detailed complexity analysis. This largely alleviates my reservations.",
" Thank you for your insightful review. We first answer your questions (Q1-Q3) and then provide responses ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"OhZH66OEpSdg",
"QtHdoilbxuY",
"7wzEmAiy91N",
"7wzEmAiy91N",
"7wzEmAiy91N",
"6rBoT2NDrAS",
"6rBoT2NDrAS",
"6rBoT2NDrAS",
"hqlog8ZUmvM",
"nips_2022_ATfARCRmM-a",
"nips_2022_ATfARCRmM-a",
"nips_2022_ATfARCRmM-a",
"nips_2022_ATfARCRmM-a"
] |
nips_2022_CgkjJaKBvkX | Receding Horizon Inverse Reinforcement Learning | Inverse reinforcement learning (IRL) seeks to infer a cost function that explains the underlying goals and preferences of expert demonstrations. This paper presents Receding Horizon Inverse Reinforcement Learning (RHIRL), a new IRL algorithm for high-dimensional, noisy, continuous systems with black-box dynamic models. RHIRL addresses two key challenges of IRL: scalability and robustness. To handle high-dimensional continuous systems, RHIRL matches the induced optimal trajectories with expert demonstrations locally in a receding horizon manner and "stitches" together the local solutions to learn the cost; it thereby avoids the "curse of dimensionality". This contrasts sharply with earlier algorithms that match with expert demonstrations globally over the entire high-dimensional state space. To be robust against imperfect expert demonstrations and control noise, RHIRL learns a state-dependent cost function ``disentangled'' from system dynamics under mild conditions. Experiments on benchmark tasks show that RHIRL outperforms several leading IRL algorithms in most instances. We also prove that the cumulative error of RHIRL grows linearly with the task duration. | Accept | This paper presents a new algorithm for inverse reinforcement learning (IRL) that uses receding horizon to locally match expert demonstrations, which generally results in better performance compared to global methods, especially with long horizons. The algorithm is shown to outperform some state of the art alternatives on a few simulated robotic environments.
The paper is overall well-written, technically strong, and the proposed idea is interesting. The main issue, raised by one of the reviewers, is the assumption that the algorithm has access to a resettable dynamics model, but it seems like this assumption is also made by other model-free methods that are compared here, such as GAIL. | train | [
"XeG42SA7O62",
"CZ05tgA4txR",
"-hiN9Aesxcd",
"UFzNRBVN9m",
"7ITx8n-LHsh",
"0olEfXwcon",
"Ge5v7_di_EQ",
"ID1sm7Wp2vV",
"tlNKZYLAtm",
"TEoSehIClxh",
"aloA8ckSWtz",
"yPlHwEKcVGK",
"vRFAb03TTZ",
"2PRdq7iHRVi",
"t9JO4bKcQA",
"IAC7lpy3LjE",
"P3_gt4v_FR",
"TAGS1eJarX",
"o_Qe3qAxfLs",
... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
... | [
" Thanks for sharing these additional results, which do provide strong evidence for the good performance of the proposed method. I will take them into consideration.",
" You are welcome! \n\nThank you for taking the time to review and help us improve this work.\n\n> FYI: The top/bottom margins on the current pape... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
2
] | [
"7ITx8n-LHsh",
"Ge5v7_di_EQ",
"0olEfXwcon",
"2PRdq7iHRVi",
"t9JO4bKcQA",
"ebLRGPZ_BFZ",
"ID1sm7Wp2vV",
"nips_2022_CgkjJaKBvkX",
"TEoSehIClxh",
"aloA8ckSWtz",
"yPlHwEKcVGK",
"vRFAb03TTZ",
"IAC7lpy3LjE",
"TAGS1eJarX",
"P3_gt4v_FR",
"pIBw7vdejU",
"skbSF0ITiCn",
"L8NCFzpm4ee",
"NzY5a... |
nips_2022__7bphw9JosH | AgraSSt: Approximate Graph Stein Statistics for Interpretable Assessment of Implicit Graph Generators | We propose and analyse a novel statistical procedure, coined AgraSSt, to assess the quality of graph generators which may not be available in explicit forms. In particular, AgraSSt can be used to determine whether a learned graph generating process is capable of generating graphs which resemble a given input graph. Inspired by Stein operators for random graphs, the key idea of AgraSSt is the construction of a kernel discrepancy based on an operator obtained from the graph generator. AgraSSt can provide interpretable criticisms for a graph generator training procedure and help identify reliable sample batches for downstream tasks. We give theoretical guarantees for a broad class of random graph models. Moreover, we provide empirical results on both synthetic input graphs with known graph generation procedures, and real-world input graphs that the state-of-the-art (deep) generative models for graphs are trained on. | Accept | After the author response period, the reviewer ratings were all positive. The reviewers felt that the paper tackles an important problem in assessing the quality of graph generators and proposes a novel, general purpose, and effective approach.
The reviewers raised several points for clarification in their reviews, which we hope the authors will address in the final version of the paper. Many of these have been addressed during the author response period. | val | [
"amIesef9qK_",
"EDzJfSxk2ph",
"1Z8zsOA6HpH",
"zVocKbuepYL",
"UTyxI1oSrW",
"V2SCeZwcV2J",
"CGOAHfBRZ92",
"eNgJdV1wio-",
"51G-xL-_Ez1",
"CnwPuI7NxHy",
"Mz1LB9vH_W3",
"tZ2pXnWIJlP",
"2t35BwGFuOT",
"ibgVt685XYH",
"-hMkth2m_ij",
"QKw0HJdtvsj"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for your revision of your score and for your helpful suggestion. We may have misunderstood before what you meant. In our latest version, we have now clarified the definition of edge-exchangeable graphs just before Theorem 3.3, to read\n\n\n''In SI.A we prove the following result for edge-exchangeable ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
3
] | [
"EDzJfSxk2ph",
"tZ2pXnWIJlP",
"zVocKbuepYL",
"51G-xL-_Ez1",
"V2SCeZwcV2J",
"Mz1LB9vH_W3",
"eNgJdV1wio-",
"QKw0HJdtvsj",
"2t35BwGFuOT",
"QKw0HJdtvsj",
"-hMkth2m_ij",
"ibgVt685XYH",
"nips_2022__7bphw9JosH",
"nips_2022__7bphw9JosH",
"nips_2022__7bphw9JosH",
"nips_2022__7bphw9JosH"
] |
nips_2022_yfrDD_rmD5 | First is Better Than Last for Language Data Influence | The ability to identify influential training examples enables us to debug training data and explain model behavior. Existing techniques to do so are based on the flow of training data influence through the model parameters. For large models in NLP applications, it is often computationally infeasible to study this flow through all model parameters, therefore techniques usually pick the last layer of weights. However, we observe that since the activation connected to the last layer of weights contains "shared logic", the data influence calculated via the last-layer weights is prone to a "cancellation effect", where the data influences of different examples have large magnitudes that contradict each other. The cancellation effect lowers the discriminative power of the influence score, and deleting influential examples according to this measure often does not change the model's behavior by much. To mitigate this, we propose a technique called TracIn-WE that modifies a method called TracIn to operate on the word embedding layer instead of the last layer, where the cancellation effect is less severe. One potential concern is that influence based on the word embedding layer may not encode sufficient high-level information. However, we find that gradients (unlike embeddings) do not suffer from this, possibly because they chain through higher layers. We show that TracIn-WE significantly outperforms other data influence methods applied on the last layer on the case deletion evaluation on three language classification tasks for different models. In addition, TracIn-WE can produce scores not just at the level of the overall training input, but also at the level of words within the training input, a further aid in debugging. | Accept | This paper studies an important problem: identifying influential training examples. 
It exposes a potential shortcoming in prior work, of focusing on the last layer, and proposes an alternative method. The approach cleverly assures looking at word overlap via overlapping word embeddings while still aggregating high level information from the back flowing gradients. The reviewers appreciated the topic, the insights and observations, and the empirical observations. Overall this paper is making novel contributions to an important area.
There was some worry about novelty but I agree that the findings are novel and meaningful. The reviewers also raised various questions about vagueness of terms, which the authors addressed in their response. There were also comments on missing controls and ablations, which the authors partly addressed in their response. To this, during the discussion, a suggestion was made "to add the variance of multiple runs or a significant test, since the robustness towards randomness is also very important to a reliable data influence measurement". I strongly agree.
Some technical questions by Reviewer iZ6x were answered. Please make sure to include the answers to them in the next revision, as well as all the clarifications and ablations provided in the author responses.
Finally, I would strongly suggest adding experiments with at least one stronger model besides BERT, such as RoBERTa or DeBERTa. This would help give confidence that the approach is relevant for newer models.
AC | train | [
"Vv_9Om0_Qy",
"cPnWpRnC0PS",
"S2wO9CffM6o",
"QEkC7qUEhwn",
"dRexFqu9iS2",
"XpvG63B6H-S",
"0VUimykXDeM",
"5iHmGfCjcV",
"Pao4Fc3hlvQ",
"TIRBqCrcMfz"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nWe hope that our response clarifies our motivation example 3.1 and the meaning of cancellation. If there are any remaining questions we hope to have a chance to answer before discussion deadline.",
" Dear reviewer,\n\nWe would like to thank you for your feedback. We hope that our rebuttal has ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"Pao4Fc3hlvQ",
"TIRBqCrcMfz",
"5iHmGfCjcV",
"Pao4Fc3hlvQ",
"TIRBqCrcMfz",
"Pao4Fc3hlvQ",
"5iHmGfCjcV",
"nips_2022_yfrDD_rmD5",
"nips_2022_yfrDD_rmD5",
"nips_2022_yfrDD_rmD5"
] |
nips_2022_0vJH6C_h4- | Learning to Share in Multi-Agent Reinforcement Learning | In this paper, we study the problem of networked multi-agent reinforcement learning (MARL), where a number of agents are deployed as a partially connected network and each interacts only with nearby agents. Networked MARL requires all agents to make decisions in a decentralized manner to optimize a global objective with restricted communication between neighbors over the network. Inspired by the fact that sharing plays a key role in human's learning of cooperation, we propose LToS, a hierarchically decentralized MARL framework that enables agents to learn to dynamically share reward with neighbors so as to encourage agents to cooperate on the global objective through collectives. For each agent, the high-level policy learns how to share reward with neighbors to decompose the global objective, while the low-level policy learns to optimize the local objective induced by the high-level policies in the neighborhood. The two policies form a bi-level optimization and learn alternately. We empirically demonstrate that LToS outperforms existing methods in both social dilemma and networked MARL scenarios across scales. | Accept | While the scores are borderline, reviewers found the paper interesting and the experiments convincing, so I recommend acceptance. While there were originally concerns about the appropriateness of comparison with QMIX and the relationship to related work, I think these were adequately addressed in the rebuttal. | val | [
"V8XwnaHxdAa",
"vgnDLlDl0-p",
"Ako-LcB2s9N",
"yYNJwT9-_2E",
"J_Ex9p7BG6i",
"sI7hezLxNdN",
"6HOLjXRDb_",
"SmDOnBbVIGAI",
"kBpkcQQAD5_",
"nEbtaQkNAi",
"1Njq1cs_7nm",
"FktuShs1pjf",
"Fv7bdD0W361",
"gfMyGL7fl3M"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for providing clarification regarding the fixed weight sharing baseline and the need for dynamic weight sharing. That makes sense to me. I also appreciate the author’s clarification about Figure 3 and connections to some of the related work I mentioned. As some of my main concerns about the paper have b... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nEbtaQkNAi",
"Fv7bdD0W361",
"6HOLjXRDb_",
"J_Ex9p7BG6i",
"sI7hezLxNdN",
"SmDOnBbVIGAI",
"kBpkcQQAD5_",
"1Njq1cs_7nm",
"gfMyGL7fl3M",
"Fv7bdD0W361",
"FktuShs1pjf",
"nips_2022_0vJH6C_h4-",
"nips_2022_0vJH6C_h4-",
"nips_2022_0vJH6C_h4-"
] |
nips_2022_PeJO709WUup | EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization | In distributed or federated optimization and learning, communication between the different computing units is often the bottleneck and gradient compression is widely used to reduce the number of bits sent within each communication round of iterative methods. There are two classes of compression operators and separate algorithms making use of them. In the case of unbiased random compressors with bounded variance (e.g., rand-k), the DIANA algorithm of Mishchenko et al. (2019), which implements a variance reduction technique for handling the variance introduced by compression, is the current state of the art. In the case of biased and contractive compressors (e.g., top-k), the EF21 algorithm of Richtárik et al. (2021), which instead implements an error-feedback mechanism, is the current state of the art. These two classes of compression schemes and algorithms are distinct, with different analyses and proof techniques. In this paper, we unify them into a single framework and propose a new algorithm, recovering DIANA and EF21 as particular cases. Our general approach works with a new, larger class of compressors, which has two parameters, the bias and the variance, and includes unbiased and biased compressors as particular cases. This allows us to inherit the best of the two worlds: like EF21 and unlike DIANA, biased compressors, like top-k, whose good performance in practice is recognized, can be used. And like DIANA and unlike EF21, independent randomness at the compressors allows to mitigate the effects of compression, with the convergence rate improving when the number of parallel workers is large. This is the first time that an algorithm with all these features is proposed. We prove its linear convergence under certain conditions. 
Our approach takes a step towards better understanding of two so-far distinct worlds of communication-efficient distributed learning. | Accept | It is strongly suggested by the reviewers that the authors explicitly mention the limitations in their paper -- essentially everything that came out of the discussion period, and not exaggerate the results. It has been noted that the advantage of EF-BV over the existing EF21 scheme is marginal. Further, the authors seem to be oblivious to some relevant works on gradient quantization. However, the generalization framework of error feedback is an interesting contribution and the community will benefit from this knowledge. | train | [
"WXUAbqV85Ll",
"2Ha7bXeM9Z",
"_33i4mhpcD6",
"lEWFtJFmSPL",
"0afN9VAHCq-u",
"0U1YQIhAeh1",
"mR9ud1Uw9jI",
"kWu_pRI-yr-",
"oTpnAFladLa",
"iPH1fyWQCJ",
"qI-zDTQ4k_k",
"TjLWs3Efebd",
"Ds3_ke5ueM",
"Au0cg7Nq8a",
"hfuqcDDLvN2"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer LBgw,\n\n> I thank you for the detailed point-by-point response, and also apologize for not actively participating in the author-reviewer discussions, although I have been staying on track with the rebuttal.\n\nThanks, appreciated!\n\n> All my queries were adequately answered. \n\nWe are happy to he... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"2Ha7bXeM9Z",
"0afN9VAHCq-u",
"qI-zDTQ4k_k",
"hfuqcDDLvN2",
"Au0cg7Nq8a",
"Ds3_ke5ueM",
"Ds3_ke5ueM",
"Au0cg7Nq8a",
"Au0cg7Nq8a",
"Ds3_ke5ueM",
"hfuqcDDLvN2",
"nips_2022_PeJO709WUup",
"nips_2022_PeJO709WUup",
"nips_2022_PeJO709WUup",
"nips_2022_PeJO709WUup"
] |
nips_2022_Magl9CSHB87 | Improving Generative Adversarial Networks via Adversarial Learning in Latent Space | For Generative Adversarial Networks which map a latent distribution to the target distribution, in this paper, we study how the sampling in latent space can affect the generation performance, especially for images. We observe that, as the neural generator is a continuous function, two close samples in latent space would be mapped into two nearby images, while their quality can differ much as the quality generally does not exhibit a continuous nature in pixel space. From such a continuous mapping function perspective, it is also possible that two distant latent samples can be mapped into two close images (if not exactly the same). In particular, if the latent samples are mapped in aggregation into a single mode, mode collapse occurs. Accordingly, we propose adding an implicit latent transform before the mapping function to improve latent $z$ from its initial distribution, e.g., Gaussian. This is achieved using well-developed adversarial sample mining techniques, e.g. iterative fast gradient sign method (I-FGSM). We further propose new GAN training pipelines to obtain better generative mappings w.r.t quality and diversity by introducing targeted latent transforms into the bi-level optimization of GAN. Experimental results on visual data show that our method can effectively achieve improvement in both quality and diversity. | Accept | The paper under consideration proposes to improve sample quality in GANs by performing optimization-based adversarial mining on the latent space as a pre-processing step, using the fast gradient sign method of Goodfellow et al (2015). Optimization in the latent space is not a new area, and I was surprised to see e.g. neither Bojanowski et al, 2017's "Optimizing the Latent Space of Generative Networks" nor Azadi et al, 2018's "Discriminator rejection sampling" cited as related ideas.
Reviewers found the idea interesting and the experiments mostly convincing, the contributions clear. The motivation for this paper hinges upon a proof under certain assumptions that complex distributions require a latent prior with disjoint/disconnected support; some reviewers were unclear how this was connected to FGSM but this was cleared up in rebuttal. Several reviewers expressed concerns about the far-from-state-of-the-art GAN "backbone"/small scale of the experiments, however new experiments involving larger resolution datasets and a StyleGAN2 base model have quelled these concerns.
To the AC, the method is intriguing, appears well-motivated and well-validated, especially in light of new experiments on larger scale data. Leveraging an adversarial example discovery procedure for the purpose of improving the prior sampling distribution is a clever and non-obvious innovation, and as the authors note in their rebuttal, while other feed-forward generative networks may suffer from the same issues around being "too continuous", the training procedure used by GANs (wherein the discriminator defines a non-stationary objective function, and its gradients are used as the generator's learning signal) uniquely position them to exploit this trick of performing "surgery" on the latent prior. For all of their difficulties, GANs have a reputation for accomplishing quite a lot in terms of sample fidelity for a fixed model capacity, and ideas like the one presented herein may serve to further alleviate training challenges, as well as elucidate niches in which GANs remain useful (at a time when the preponderance of attention has shifted to diffusion models). I recommend acceptance. | val | [
"7aYKnOSEqUJ",
"jc__OLWtRGi",
"E1eq4vdriBy",
"D7RUWw5Np3b",
"5B7flVC83ZG",
"lLVUHvJzJRD",
"_PY-furoN7_",
"gQr5bWPtqbA",
"j2beW13XEk3",
"Ip-jNrV7sOa",
"P2y-CrSk5nf",
"iWE8zl8GATZ",
"VVDEm4H8Oh",
"ClKY3gypJqc",
"wLquJiXuZjK",
"wis0gN2wn3j",
"lBD1N1v9VV5",
"HXNJyww6wKn",
"5gNAlNspbX... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_... | [
" I have read the reply from the author and other reviews. I appreciate the author's effort, all my concerns have been resolved. I think this paper is technically sound. I tend to accept this paper and increase my score to 8.",
" Thanks for your nice feedback and valuable suggestion. Since we can no longer updat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"K3c4Ek2S5Pv",
"E1eq4vdriBy",
"wis0gN2wn3j",
"jtMXyvzN9EI",
"_PY-furoN7_",
"nips_2022_Magl9CSHB87",
"gQr5bWPtqbA",
"iWE8zl8GATZ",
"nips_2022_Magl9CSHB87",
"5gNAlNspbXi",
"5gNAlNspbXi",
"5gNAlNspbXi",
"kfJTGAWAotx",
"wLquJiXuZjK",
"K3c4Ek2S5Pv",
"lBD1N1v9VV5",
"jtMXyvzN9EI",
"nips_2... |
nips_2022_wlqb_RfSrKh | Self-supervised Amodal Video Object Segmentation | Amodal perception requires inferring the full shape of an object that is partially occluded. This task is particularly challenging on two levels: (1) it requires more information than what is contained in the instant retina or imaging sensor, (2) it is difficult to obtain enough well-annotated amodal labels for supervision. To this end, this paper develops a new framework of Self-supervised amodal Video object segmentation (SaVos). Our method efficiently leverages the visual information of video temporal sequences to infer the amodal mask of objects. The key intuition is that the occluded part of an object can be explained away if that part is visible in other frames, possibly deformed as long as the deformation can be reasonably learned. Accordingly, we derive a novel self-supervised learning paradigm that efficiently utilizes the visible object parts as the supervision to guide the training on videos. In addition to learning type prior to complete masks for known types, SaVos also learns the spatiotemporal prior, which is also useful for the amodal task and could generalize to unseen types. The proposed framework achieves the state-of-the-art performance on the synthetic amodal segmentation benchmark FISHBOWL and the real world benchmark KINS-Video-Car. Further, it lends itself well to being transferred to novel distributions using test-time adaptation, outperforming existing models even after the transfer to a new distribution. | Accept | The paper develops a system for amodal object segmentation in video, trained in a self-supervised manner by requiring consistency between amodal and modal masks, as well as amodal masks estimated for a frame and those temporally propagated according to estimated object motion. After rebuttal and discussion, all reviewers favor accepting the paper, citing novelty of the approach and convincing results. The Area Chair agrees with the reviewer consensus. 
| train | [
"7Bt89qXP_mT",
"2hyJHabGXuv",
"22orSN6J7sA",
"2s3WVfjrVA8",
"BBJjeJJSe-y",
"KWaHaDBjzsA",
"4EOUCkK0q-2",
"6yv_faaVlsv",
"GtmiLJCS93b",
"NX4Ss54r12w",
"6uTRyBHn3Yr",
"57FiVUtm2Ky",
"XfKHe0S-Ykg",
"3LWwU-6s96y",
"7Y5pt4YZV-",
"EbZvBJDdGJQ",
"5gJC6P5Srtw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the other reviews and all author responses. The responses contain many new experiments and ablations that make the paper stronger. I am happy to raise my rating to accept.",
" Thanks again for the detailed review as well as the suggestions for improvement. \n\nWe tried our best to address each of th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"GtmiLJCS93b",
"5gJC6P5Srtw",
"EbZvBJDdGJQ",
"7Y5pt4YZV-",
"3LWwU-6s96y",
"4EOUCkK0q-2",
"6yv_faaVlsv",
"5gJC6P5Srtw",
"NX4Ss54r12w",
"EbZvBJDdGJQ",
"57FiVUtm2Ky",
"7Y5pt4YZV-",
"3LWwU-6s96y",
"nips_2022_wlqb_RfSrKh",
"nips_2022_wlqb_RfSrKh",
"nips_2022_wlqb_RfSrKh",
"nips_2022_wlqb_... |
nips_2022_9h3KsOVXhLZ | SwinTrack: A Simple and Strong Baseline for Transformer Tracking | Recently Transformer has been largely explored in tracking and shown state-of-the-art (SOTA) performance. However, existing efforts mainly focus on fusing and enhancing features generated by convolutional neural networks (CNNs). The potential of Transformer in representation learning remains under-explored. In this paper, we aim to further unleash the power of Transformer by proposing a simple yet efficient fully-attentional tracker, dubbed SwinTrack, within classic Siamese framework. In particular, both representation learning and feature fusion in SwinTrack leverage the Transformer architecture, enabling better feature interactions for tracking than pure CNN or hybrid CNN-Transformer frameworks. Besides, to further enhance robustness, we present a novel motion token that embeds historical target trajectory to improve tracking by providing temporal context. Our motion token is lightweight with negligible computation but brings clear gains. In our thorough experiments, SwinTrack exceeds existing approaches on multiple benchmarks. Particularly, on the challenging LaSOT, SwinTrack sets a new record with 0.713 SUC score. It also achieves SOTA results on other benchmarks. We expect SwinTrack to serve as a solid baseline for Transformer tracking and facilitate future research. Our codes and results are released at https://github.com/LitingLin/SwinTrack. | Accept | Authors present a method for single object tracking (SOT) that is entirely comprised of transformers. The architecture is simple:
1) Swin is used to generate embeddings for both template and search region
2) Embeddings are concatenated
3) An encoder transformer performs MHSA of the embeddings.
4) A decoder performs cross-attention from search tokens to template tokens and a special "motion token" which is constructed from a linear operation over the prior motion trajectory relative to the frame.
5) Output token is fed to final layers that perform IoU aware classification and bounding box regression.
Evaluations are performed on 5 SOT datasets, achieving SOTA on all of them.
Pros:
- [AC/R] Important problem, technically sound, and new SOTA on this task.
- [AC/R] New motion token approach is novel and provides significant improvement.
- [AC/R] Simple and elegant architecture
- [AC/R] Insightful discussions
- [AC/R] Clearly written and easy to follow
- [AC/R] Interesting ablations
- [AC/R] High frame rate
Cons:
- [R] Low novelty of transformer approach, but this is negated by the novelty of the motion token.
- [R] Details regarding pretraining are missing. Authors provide in response.
- [R] Motivation of motion token design is not clear. Authors provided further ablations of different implementations of the motion token, showing the current form performs best.
- [R] Provide additional details regarding where and how SwinTrack outperforms other approaches on the benchmarks. Authors provided additional granularity of performance stratifications within the LaSOT benchmark.
- [R] Add more recent high performing trackers. Authors added several methods published in 2022.
- [R] Some additional questions about various details were posed by reviewers, which were all answered by the authors.
The single reviewer with reject recommendation changed to accept in their comments after the discussion period but did not update their score. Given unanimous agreement on accept, AC recommendation is accept.
AC Rating: Strong Accept | train | [
"O0iLb5XMT0Q",
"wgRREJ-zDfF",
"cUdn-IsilR",
"RrX-FS0ZloW",
"96CON2PcTbu",
"hZPFptJ14i",
"levOdFWjcVA",
"TFL_9NGX4v3",
"7P2UUHOCp5E",
"QPIfsoTgeqm",
"bRs70MHxi4s",
"G-DWsV-PQhM",
"lh9DUjUPyD"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for providing valuable and thoughtful comments on our work. Below, we address the concerns and remain committed to clarifying further questions that may arise during the discussion period.\n\n***\n***Q1: Although the common experimental procedure on standard datasets including ablation studies has been ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
5
] | [
"G-DWsV-PQhM",
"G-DWsV-PQhM",
"bRs70MHxi4s",
"bRs70MHxi4s",
"bRs70MHxi4s",
"G-DWsV-PQhM",
"bRs70MHxi4s",
"lh9DUjUPyD",
"QPIfsoTgeqm",
"nips_2022_9h3KsOVXhLZ",
"nips_2022_9h3KsOVXhLZ",
"nips_2022_9h3KsOVXhLZ",
"nips_2022_9h3KsOVXhLZ"
] |
nips_2022_T2DBbSh6_uY | MaskPlace: Fast Chip Placement via Reinforced Visual Representation Learning | Placement is an essential task in modern chip design, aiming at placing millions of circuit modules on a 2D chip canvas. Unlike the human-centric solution, which requires months of intense effort by hardware engineers to produce a layout to minimize delay and energy consumption, deep reinforcement learning has become an emerging autonomous tool. However, the learning-centric method is still in its early stage, impeded by a massive design space of size ten to the order of a few thousand. This work presents MaskPlace to automatically generate a valid chip layout design within a few hours, whose performance can be superior or comparable to recent advanced approaches. It has several appealing benefits that prior arts do not have. Firstly, MaskPlace recasts placement as a problem of learning pixel-level visual representation to comprehensively describe millions of modules on a chip, enabling placement in a high-resolution canvas and a large action space. It outperforms recent methods that represent a chip as a hypergraph. Secondly, it enables training the policy network by an intuitive reward function with dense reward, rather than a complicated reward function with sparse reward from previous methods. Thirdly, extensive experiments on many public benchmarks show that MaskPlace outperforms existing RL approaches in all key performance metrics, including wirelength, congestion, and density. For example, it achieves 60%-90% wirelength reduction and guarantees zero overlaps. We believe MaskPlace can improve AI-assisted chip layout design. The deliverables are released at https://laiyao1.github.io/maskplace. | Accept | The reviewers are enthusiastic about the work, and all recommended for the acceptance of the paper. The reviewers think the work is solid and novel, and potentially impactful. 
For example, Reviewer MdDg noted "This paper transforms the geometric placement problem into multiple visual representations using three masks, which opens the possibility of using mature convolution networks to extract the global layout information." Thanks to the authors for the detailed rebuttal and the thorough discussion with the reviewers. Incorporating these points raised in the communication will further improve the paper. | test | [
"k6I3709OP35",
"dtr4YOFM1Bl",
"DzEOyD-BeML",
"x4obO_iLGI",
"6t6GQ0ij5av",
"PFsqhc6RbGI",
"DlN74MbR8-_",
"YrIjH0rsEg8",
"ZqAAA5BJTP",
"03L__lgOIaY",
"FZ3qkJ-xCTD",
"qLnxk6GTj9v",
"AFo9s8cP4Ze",
"t-VkVI1p2te",
"RhVNVCugcAg",
"ckQ37N5LyCF",
"jSmF3aIHjL_"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for updating and constructive suggestions. We will consider including the pin offset information in the graph in our future work.",
" In [1], node embedding and edge embeddings can be jointly learnt with a bi-directional GNN. I don't think anything you've mentioned, including the pin offset,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"dtr4YOFM1Bl",
"x4obO_iLGI",
"6t6GQ0ij5av",
"6t6GQ0ij5av",
"YrIjH0rsEg8",
"jSmF3aIHjL_",
"nips_2022_T2DBbSh6_uY",
"jSmF3aIHjL_",
"jSmF3aIHjL_",
"ckQ37N5LyCF",
"RhVNVCugcAg",
"t-VkVI1p2te",
"t-VkVI1p2te",
"nips_2022_T2DBbSh6_uY",
"nips_2022_T2DBbSh6_uY",
"nips_2022_T2DBbSh6_uY",
"nips... |
nips_2022_-zlJOVc580 | Mask-based Latent Reconstruction for Reinforcement Learning | For deep reinforcement learning (RL) from pixels, learning effective state representations is crucial for achieving high performance. However, in practice, limited experience and high-dimensional inputs prevent effective representation learning. To address this, motivated by the success of mask-based modeling in other research fields, we introduce mask-based reconstruction to promote state representation learning in RL. Specifically, we propose a simple yet effective self-supervised method, Mask-based Latent Reconstruction (MLR), to predict complete state representations in the latent space from the observations with spatially and temporally masked pixels. MLR enables better use of context information when learning state representations to make them more informative, which facilitates the training of RL agents. Extensive experiments show that our MLR significantly improves the sample efficiency in RL and outperforms the state-of-the-art sample-efficient RL methods on multiple continuous and discrete control benchmarks. Our code is available at https://github.com/microsoft/Mask-based-Latent-Reconstruction. | Accept | Unanimous accept (scoring 5675, with confidence, from reviewers who have published in similar areas)
All four reviewers agree that the work is novel (using such mask-based representations/auxiliary losses in an RL setting in the _latent_ space as opposed to the original RGB space), which doesn't sound like an obvious idea a priori, but the authors back this up with extensive experiments and ablations of their own method in Atari and DMC. The authors were very responsive with reviewers, resulting in 3 reviewers increasing their scores each by one. All reviewers agree this paper is easy to follow, and most of their stated weaknesses/confusions have now been addressed or clarified.
My view of this work is that the main contribution is a latent reconstruction loss, as an additional objective to whatever the RL task objective is. And this is useful for using data augmentation / self-supervised learning in RL tasks where representations that can do pixel-reconstructions aren't what's really required (and they show this experimentally) e.g. distractions exist in images, so they focus on the lossy latents instead. This seems more novel and distinct from simply pre-training some representation using contrastive / masked methods common in the literature. And then in their (extensive) DMC+Atari experiments they augment by removing (pixel) spatial-time cubes in video sequences (given correlations nearby etc) and force the latent structure to capture whatever still needs capturing, perhaps a Q-function input for SAC, trained jointly. So I agree with the other reviewers that this work seems interesting and novel, and the NeurIPS community would benefit from reading + understanding it | val | [
"aSKRZ5L19CL",
"ejblkiOydPD",
"mza4ZcrgQyH",
"AkxTLRCfyg_",
"9BzAG77D9Md",
"--pi0gY8L3x",
"H2gqyaht6Uv",
"IDkKTepYJ6G",
"gA3Hd5x7hO0",
"UjgrrDgo7AM",
"NxDn8iUYIEL",
"8mGnImwArDX",
"ZlgwCW_PNz5",
"pYPK8nN-epYi",
"kO-vo46Yt_U",
"kiAL7mL44NV",
"2tTCUVoTAXU",
"P1k4k_iZ0dE",
"BrRw_0Su... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your positive comments and very valuable suggestions on our work! Our proposed method is generally applicable. It can still handle but does not gain as that much in the environments where background/viewpoint changes are very drastic (the spatial prediction capability still helps while the temporal ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"mza4ZcrgQyH",
"AkxTLRCfyg_",
"ZlgwCW_PNz5",
"--pi0gY8L3x",
"kO-vo46Yt_U",
"H2gqyaht6Uv",
"IDkKTepYJ6G",
"NxDn8iUYIEL",
"UjgrrDgo7AM",
"pYPK8nN-epYi",
"8mGnImwArDX",
"BrRw_0SuJH",
"P1k4k_iZ0dE",
"2tTCUVoTAXU",
"kiAL7mL44NV",
"nips_2022_-zlJOVc580",
"nips_2022_-zlJOVc580",
"nips_202... |
nips_2022_ocg4JWjYZ96 | Debugging and Explaining Metric Learning Approaches: An Influence Function Based Perspective | Deep metric learning (DML) learns a generalizable embedding space where the representations of semantically similar samples are closer. Despite achieving good performance, the state-of-the-art models still suffer from the generalization errors such as farther similar samples and closer dissimilar samples in the space. In this work, we design an empirical influence function (EIF), a debugging and explaining technique for the generalization errors of state-of-the-art metric learning models. EIF is designed to efficiently identify and quantify how a subset of training samples contributes to the generalization errors. Moreover, given a user-specific error, EIF can be used to relabel a potentially noisy training sample as mitigation. In our quantitative experiment, EIF outperforms the traditional baseline in identifying more relevant training samples with statistical significance and 33.5% less time. In the field study on well-known datasets such as CUB200, CARS196, and InShop, EIF identifies 4.4%, 6.6%, and 17.7% labelling mistakes, indicating the direction of the DML community to further improve the model performance. Our code is available at https://github.com/lindsey98/Influence_function_metric_learning. | Accept | This paper approaches the problem of debugging failures in deep metric learning, developing and applying a more efficient version of influence functions called EIF (empirical influence functions) that is shown to effectively help root-cause failures to ambiguous or poorly labeled examples in standard training sets.
The reviewers found this paper to be deeply interesting, appreciating the efficiency but moreover the novelty of applying IF-style approaches to the problem of DML. From my perspective, I think the novelty point is even stronger than some of the reviewers called out -- yes, there is some novelty in the development of EIF, but the real novelty is in creating and evaluating effective debugging strategies for learned metric spaces. This kind of analysis and debugging is sorely lacking in the overall literature, and I am very happy to see this work present a compelling and intuitive approach -- both for its own merit, and also for the similar ideas it may well inspire in adjacent areas of research. | train | [
"_bGrPA20XR",
"yrhAy-N4-fd",
"fAB6UI8YLp8",
"PyXUyrQ9aA",
"YyreT6BCeM",
"tqhhicMGhhC",
"Q2qt5ohoRor",
"_VeFDBMYipc",
"MN9qvtZ8LvK",
"_FrlEqtfmu",
"n1-Xvezn70t",
"NgyRns6mm9v"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I would like to maintain my score, as the authors have addressed the questions appropriately.",
" While I'm still not convinced about the usefulness of the observations about how the dataset itself is causing problems (which may simply be because I do not work in DML), I do think th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"MN9qvtZ8LvK",
"fAB6UI8YLp8",
"PyXUyrQ9aA",
"NgyRns6mm9v",
"tqhhicMGhhC",
"NgyRns6mm9v",
"_VeFDBMYipc",
"_FrlEqtfmu",
"n1-Xvezn70t",
"nips_2022_ocg4JWjYZ96",
"nips_2022_ocg4JWjYZ96",
"nips_2022_ocg4JWjYZ96"
] |
nips_2022_sYDX_OxNNjh | Unsupervised Skill Discovery via Recurrent Skill Training | Being able to discover diverse useful skills without external reward functions is beneficial in reinforcement learning research. Previous unsupervised skill discovery approaches mainly train different skills in parallel. Although impressive results have been provided, we found that parallel training procedure can sometimes block exploration when the state visited by different skills overlap, which leads to poor state coverage and restricts the diversity of learned skills. In this paper, we take a deeper look into this phenomenon and propose a novel framework to address this issue, which we call Recurrent Skill Training (ReST). Instead of training all the skills in parallel, ReST trains different skills one after another recurrently, along with a state coverage based intrinsic reward. We conduct experiments on a number of challenging 2D navigation environments and robotic locomotion environments. Evaluation results show that our proposed approach outperforms previous parallel training approaches in terms of state coverage and skill diversity. Videos of the discovered skills are available at https://sites.google.com/view/neurips22-rest. | Accept | The paper identifies a problem in prior works on skill-learning: some states that are visited in training by diverse skills may not be visited after skills are learned (dubbed as exploration degradation problem). The problem is well-motivated and is an important one in unsupervised skill learning. Authors then propose a method to overcome the identified issue. Comparisons are made to state-of-the-art methods such as DADS and LEXA.
The reviewers are split in their opinion: UvQx recommends an accept, nWfC recommends borderline accept, while kkQh recommends rejecting the paper. Authors addressed the primary concern of nWfC on comparison with LEXA. kkQh's main concerns are about the solution not being elegant. While I agree that more elegant solutions can be found, I think that identifying a bottleneck in learning diverse skills is of good value to the community. Furthermore, the proposed solution works across a range of environments. Therefore, I lean towards a positive recommendation for this paper. I encourage the authors to clearly address the reviewers' suggestions and comments in the camera-ready version. | test | [
"K60odXhMVJw",
"QEGu8cU-PdD",
"ZvBdhdln7N",
"PAhJKy0c1KGd",
"8xKvHryr5bi",
"sgK4BKs6ob9",
"zVLojnu7981S",
"tr9chuPHPOf",
"ot9H9wu3Em2",
"9kErrDpHHbN",
"VvITeboaKaV",
"cXPBdJzsE3C",
"fnjCy_IlUe8",
"jtYRqUHmpy4",
"NZ8sGk02MaC",
"8PggJCBSZ1V",
"PGgSOIRPdh4",
"3_wrJY6msQK",
"rPuPHssH... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" Dear reviewer nWfC and reviewer UvQx: \n\nWe have uploaded the revised paper and appendix. Please refer to Appendix F.1, F.2 and F.3 for detailed empirical results and analysis. \n\nThank you for helping us making our paper a stronger submission! \n\nBest regards, \n\nThe authors",
" Dear reviewer kKQH: \n\nWe ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"tr9chuPHPOf",
"rPuPHssHA-B",
"tr9chuPHPOf",
"rPuPHssHA-B",
"rPuPHssHA-B",
"tr9chuPHPOf",
"tr9chuPHPOf",
"PGgSOIRPdh4",
"9kErrDpHHbN",
"VvITeboaKaV",
"fnjCy_IlUe8",
"3_wrJY6msQK",
"KVMPLpJZLFl",
"KVMPLpJZLFl",
"KVMPLpJZLFl",
"rQCWv-az2G",
"rQCWv-az2G",
"rPuPHssHA-B",
"nips_2022_s... |
nips_2022_jxPJ4QA0KAb | Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited | Designing spectral convolutional networks is a challenging problem in graph learning. ChebNet, one of the early attempts, approximates the spectral graph convolutions using Chebyshev polynomials. GCN simplifies ChebNet by utilizing only the first two Chebyshev polynomials while still outperforming it on real-world datasets. GPR-GNN and BernNet demonstrate that the Monomial and Bernstein bases also outperform the Chebyshev basis in terms of learning the spectral graph convolutions. Such conclusions are counter-intuitive in the field of approximation theory, where it is established that the Chebyshev polynomial achieves the optimum convergent rate for approximating a function.
In this paper, we revisit the problem of approximating the spectral graph convolutions with Chebyshev polynomials. We show that ChebNet's inferior performance is primarily due to illegal coefficients learnt by ChebNet approximating analytic filter functions, which leads to over-fitting. We then propose ChebNetII, a new GNN model based on Chebyshev interpolation, which enhances the original Chebyshev polynomial approximation while reducing the Runge phenomenon. We conducted an extensive experimental study to demonstrate that ChebNetII can learn arbitrary graph convolutions and achieve superior performance in both full- and semi-supervised node classification tasks. Most notably, we scale ChebNetII to a billion graph ogbn-papers100M, showing that spectral-based GNNs have superior performance. Our code is available at https://github.com/ivam-he/ChebNetII. | Accept | The authors consider the traditional problem of approximating graph convolutional networks using Chebyshev polynomials, which is known as ChebNet. Then, the authors propose a new GNN model called ChebNetII that enhances the Chebyshev polynomial approximation while reducing the Runge phenomenon; this is an important contribution to GNNs. Overall, the reviewers are positive about the paper. Thus, I also vote for acceptance. | test | [
"f5Yiugg0KOD",
"SYkvGt7Irv",
"LIZ0vCMAA5O",
"2sVzWvKb9_R",
"5bSZ0JJXxuI",
"FwcAIJXCAI9",
"vaun9oL_DOy",
"WC25vjC7S9",
"1ieJGgVWzEG",
"IZnDIPI5esi",
"Qwf4y2vPT35",
"A6KSIIrNXkV"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your helpful feedback and for supporting our work! We will include the new results in the final version of the paper.",
" Thank you for your useful feedback and for supporting our work!",
" The authors have addressed all the issues raised in my previous comments. I am happy to hold the evaluatio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"2sVzWvKb9_R",
"LIZ0vCMAA5O",
"1ieJGgVWzEG",
"vaun9oL_DOy",
"nips_2022_jxPJ4QA0KAb",
"A6KSIIrNXkV",
"A6KSIIrNXkV",
"Qwf4y2vPT35",
"IZnDIPI5esi",
"nips_2022_jxPJ4QA0KAb",
"nips_2022_jxPJ4QA0KAb",
"nips_2022_jxPJ4QA0KAb"
] |
nips_2022_XDZhagjfMP | Gradient Methods Provably Converge to Non-Robust Networks | Despite a great deal of research, it is still unclear why neural networks are so susceptible to adversarial examples. In this work, we identify natural settings where depth-$2$ ReLU networks trained with gradient flow are provably non-robust (susceptible to small adversarial $\ell_2$-perturbations), even when robust networks that classify the training dataset correctly exist. Perhaps surprisingly, we show that the well-known implicit bias towards margin maximization induces bias towards non-robust networks, by proving that every network which satisfies the KKT conditions of the max-margin problem is non-robust. | Accept | The result of the paper provides a particular vignette to the problem of lp-robustness of trained (1 hidden layer) neural networks.
The authors do not consider the statistical learning setting, but define a *robust model* essentially in terms of smoothness/robustness of the function around the *training* examples - hence not discussing generalization or function spaces (allowing arbitrary labelings). Their result states that even though a robust model (around the training samples) exists, the neural network trained with gradient descent does not find it; or, more severely, the NN solution is non-robust at every training sample.
The intuition of the existence of a robust solution goes as follows (see proof sketch): Since the points are far away, we can pick each (of the m non-zero) weights roughly in the direction of each training point (s.t. only one neuron is active for one sample point) and a corresponding bias that makes sure that each training point lies "deep" inside the active region. This then leads to the network being robust (having the same prediction) around each training point in a radius of ~\sqrt{d}/2.
The lower bound for max-margin implicit bias of GD on the neural networks relies heavily on prior work by Lyu & Li '19 and Ji & Telgarsky '20.
As the reviewers agree, the story is interesting and worth publishing at NeurIPS, though the proof does not require significant new techniques or insights. For the camera-ready version, the authors are encouraged to add a discussion on the positioning of the insight - contrasting the current result with other works on robust *generalization*. For future work it would also be interesting to extend the results to tell a similar story in the usual learning-theoretic setting. | train | [
"D3udm68lFS2",
"Ej46KcAQIv3I",
"nRQIsjK0lf1",
"-edNVldofDH",
"fABUXfrTO3B",
"_ADmKzkeYWG",
"kTJJgbenZui",
"WOCiqDw9nCi"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the feedback and the comments.\n\n“This main contribution of this paper is just Theorem 4.1 and a naive result Theorem 3.1 and simulation experiments, and the models and datas is somehow simple, I think the amount of work is not so enough in some sense, but the phenomenon it reveals is q... | [
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
4,
2,
3,
4
] | [
"WOCiqDw9nCi",
"kTJJgbenZui",
"_ADmKzkeYWG",
"fABUXfrTO3B",
"nips_2022_XDZhagjfMP",
"nips_2022_XDZhagjfMP",
"nips_2022_XDZhagjfMP",
"nips_2022_XDZhagjfMP"
] |
nips_2022_74fJwNrBlPI | TA-GATES: An Encoding Scheme for Neural Network Architectures | Neural architecture search tries to shift the manual design of neural network (NN) architectures to algorithmic design. In these cases, the NN architecture itself can be viewed as data and needs to be modeled. A better modeling could help explore novel architectures automatically and open the black box of automated architecture design. To this end, this work proposes a new encoding scheme for neural architectures, the Training-Analogous Graph-based ArchiTecture Encoding Scheme (TA-GATES). TA-GATES encodes an NN architecture in a way that is analogous to its training. Extensive experiments demonstrate that the flexibility and discriminative power of TA-GATES lead to better modeling of NN architectures. We expect our methodology of explicitly modeling the NN training process to benefit broader automated deep learning systems. The code is available at https://github.com/walkerning/aw_nas. | Accept | The paper proposes a new encoding scheme for neural architecture search. All reviewers agreed that operations with different trainable parameters need different encodings. The paper is clearly written and well-motivated. It will shed great light on future work to consider the trainable parameter for NAS.
As suggested by dXbo and zsxx, the paper still needs polish: it should strengthen the comparison with competitor methods, especially by providing other strong NAS baselines and NN encoding schemes. | train | [
"6sDW1t45IB",
"YNWAGv1enxr",
"dyZXFEW9IB",
"cbNphIn0kM",
"2H4p0IfkGl",
"F9QuxJNicwt",
"jcGTCf9qJZX",
"s5YK9MLjcQI",
"aVEWsiSPII",
"K7k78TL5LVD",
"IECpjqjGpuU",
"T2d8C4stcJu",
"LXxr3fNhfIA",
"lT8WhkYKKlw",
"dgQdQ_x_pdb"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We're encouraged to see your recognition to our work! Also, your constructive feedback really helps us to improve our paper. We'll follow the suggestions to prepare the revision.",
" Thank you for the additional metrics provided and for the comments addressing my concerns. It seems like TA-GATES is an effective... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"YNWAGv1enxr",
"jcGTCf9qJZX",
"cbNphIn0kM",
"K7k78TL5LVD",
"F9QuxJNicwt",
"aVEWsiSPII",
"dgQdQ_x_pdb",
"lT8WhkYKKlw",
"lT8WhkYKKlw",
"LXxr3fNhfIA",
"T2d8C4stcJu",
"nips_2022_74fJwNrBlPI",
"nips_2022_74fJwNrBlPI",
"nips_2022_74fJwNrBlPI",
"nips_2022_74fJwNrBlPI"
] |
nips_2022_LYXTPNWJLr | PaCo: Parameter-Compositional Multi-task Reinforcement Learning | The purpose of multi-task reinforcement learning (MTRL) is to train a single policy that can be applied to a set of different tasks. Sharing parameters allows us to take advantage of the similarities among tasks. However, the gaps between contents and difficulties of different tasks bring us challenges on both which tasks should share the parameters and what parameters should be shared, as well as the optimization challenges due to parameter sharing.
In this work, we introduce a parameter-compositional approach (PaCo) as an attempt to address these challenges. In this framework, a policy subspace represented by a set of parameters is learned. Policies for all the single tasks lie in this subspace and can be composed by interpolating with the learned set. It allows not only flexible parameter sharing, but also a natural way to improve training.
We demonstrate the state-of-the-art performance on Meta-World benchmarks, verifying the effectiveness of the proposed approach. | Accept | This paper proposes a multi-task RL architecture that composes task-independent parameters and task-dependent parameters to construct a task-specific policy. The results on the MetaWorld multi-task learning benchmark show that the proposed method outperforms relevant baselines. In general, the reviewers found the idea interesting and novel, and the results are solid. Several concerns about the lack of single-task baseline and the lack of scores given intermediate steps were addressed with updated results during the rebuttal period, which made all the reviewers lean towards acceptance. Thus, I recommend accepting this paper. In the meantime, there are still remaining comments about the result beyond 2M steps (suggested by Reviewer CCrp) and additional baselines (suggested by Reviewer DW33). I highly suggest the authors include these results in the camera-ready version. | test | [
"dSCAuUcGkvd",
"mUw6q1EbMPV",
"ANGikGfRR4O",
"O-Em7c3pwyv",
"7ut_CzQy3o2",
"YO0fRx2gBn",
"8O9_SyrevtS",
"EtCwU9CJgj",
"kopTrv60HLe",
"Chj7kcGlGpK",
"GibaRVE0vmB",
"EeZkYE5Z0l",
"nitvqk_M_kF",
"ANuxHpuOHIo",
"4IKKNyR0ZR",
"JkSFM3oA17",
"C8VEfbxcWir",
"HDKtUvjbfq",
"fFuc1H3Uj-"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Evaluation protocols**\n\nWe thank reviewer CCrp for the valuable feedback and discussions on evaluation protocols. In our work, we put the most effort into improving the performance and stability of the method and put less effort into evaluation. We do agree that how to evaluate MTRL methods is very important.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"ANGikGfRR4O",
"ANGikGfRR4O",
"7ut_CzQy3o2",
"HDKtUvjbfq",
"8O9_SyrevtS",
"8O9_SyrevtS",
"ANuxHpuOHIo",
"ANuxHpuOHIo",
"fFuc1H3Uj-",
"fFuc1H3Uj-",
"fFuc1H3Uj-",
"HDKtUvjbfq",
"HDKtUvjbfq",
"C8VEfbxcWir",
"C8VEfbxcWir",
"nips_2022_LYXTPNWJLr",
"nips_2022_LYXTPNWJLr",
"nips_2022_LYXT... |
nips_2022_0Kv7cLhuhQT | NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis | Infinite visual synthesis aims to generate high-resolution images, long-duration videos, and even visual generation of infinite size. Some recent work tried to solve this task by first dividing data into processable patches and then training the models on them without considering the dependencies between patches. However, since they fail to model global dependencies between patches, the quality and consistency of the generation can be limited. To address this issue, we propose NUWA-Infinity, a patch-level \emph{``render-and-optimize''} strategy for infinite visual synthesis. Given a large image or a long video, NUWA-Infinity first splits it into non-overlapping patches and uses the ordered patch chain as a complete training instance, a rendering model autoregressively predicts each patch based on its contexts. Once a patch is predicted, it is optimized immediately and its hidden states are saved as contexts for the next \emph{``render-and-optimize''} process. This brings two advantages: ($i$) The autoregressive rendering process with information transfer between contexts provides an implicit global probabilistic distribution modeling; ($ii$) The timely optimization process alleviates the optimization stress of the model and helps convergence. Based on the above designs, NUWA-Infinity shows a strong synthesis ability on high-resolution images and long-duration videos. The homepage link is \url{https://nuwa-infinity.microsoft.com}. | Accept | This paper proposes a patch-level autoregressive model for infinite visual synthesis based on two extensions: (1) transfer information across patches via context vectors and (2) timely optimization for each patch. Note that hierarchical autoregressive models have been explored in previous works such as VQ-VAE2 (https://arxiv.org/pdf/1906.00446.pdf). 
But the idea of a context pool and arbitrary-direction modeling seems interesting. The paper has received consistently positive reviews. Reviewers found the idea intuitive and the results visually compelling. The rebuttal further addressed concerns such as the missing comparisons (w/ InfinityGAN and ALIS), running speed, and clarifications w.r.t. existing works. The AC agreed with the reviewers' consensus and recommended accepting the paper. | train | [
"ceU9XZE5fv",
"XW6ttSmdPTp",
"XoTz4DI_2RP",
"Iaz5DlRfNX8",
"-yD71lpeYAM",
"oKPUAL4HNN0",
"mLBe4QafmS",
"fiGI7-9E-HJ",
"Bt9igq4CpkI",
"B9X4vEVtQtR",
"R24yX96tUeK",
"QaOb3DWQSi4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response from authors. Better to include the above discussion and analysis in the revised version. I will keep my acceptance rating. It is a good work.",
" Thanks for your detailed response. The rebuttal addressed my concerns. I will keep my acceptance rating.",
" Dear reviewers, \n\nThank you ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
2
] | [
"mLBe4QafmS",
"-yD71lpeYAM",
"nips_2022_0Kv7cLhuhQT",
"oKPUAL4HNN0",
"QaOb3DWQSi4",
"R24yX96tUeK",
"B9X4vEVtQtR",
"Bt9igq4CpkI",
"nips_2022_0Kv7cLhuhQT",
"nips_2022_0Kv7cLhuhQT",
"nips_2022_0Kv7cLhuhQT",
"nips_2022_0Kv7cLhuhQT"
] |
nips_2022_mxzIrQIOGIK | Multi-Objective Online Learning | This paper presents a systematic study of multi-objective online learning. We first formulate the framework of Multi-Objective Online Convex Optimization, which encompasses two novel multi-objective regret definitions. The regret definitions build upon an equivalent transformation of the multi-objective dynamic regret based on the commonly used Pareto suboptimality gap metric in zero-order multi-objective bandits, making it amenable to be optimized via first-order iterative methods. To motivate the algorithm design, we give an explicit example in which equipping OMD with the vanilla min-norm solver for gradient composition will incur a linear regret, which shows that only regularizing the iterates, as in single-objective online learning, is not enough to guarantee sublinear regrets in the multi-objective setting. To resolve this issue, we propose a novel min-regularized-norm solver that regularizes the composite weights. Combining min-regularized-norm with OMD results in the Doubly Regularized Online Mirror Multiple Descent algorithm. We further derive both the static and dynamic regret bounds for the proposed algorithm, each of which matches the corresponding optimal bound in the single-objective setting. Extensive experiments on both simulation and real-world datasets verify the effectiveness of the proposed algorithm. | Reject | First, I must say that this is quite an interesting and elegant work, and regardless of the decision on this paper, I would like the authors to continue along this path. The paper is nicely written with a good flow, and the authors naturally arrive at their new algorithm, Doubly Regularized Online Mirror Multiple Descent. In the static case, the regret bounds the authors can show are solid and there are no complaints from any of the reviewers. 
This is almost true for the dynamic regret case, except for the large caveat that (in terms of theory) the authors' algorithm needs to know $V_T$. I discuss this issue more below. From the experiments side, although the authors show some improvement over the linearization baseline, the sense from the reviewers is that this improvement is not that large. For this reason, I truly think this paper needs to be solid theoretically.
**Regarding knowledge of $V_T$:**
First, to make it seem fair to request adaptivity to $V_T$, let me mention that in, e.g., the work of Campolongo and Orabona (2021) — which the authors cited in the context of their assumption of known $V_T$ — there are algorithms that automatically adapt to unknown $V_T$. Now, the authors' current need for knowledge of $V_T$ wouldn't necessarily be a problem if a doubling trick or meta-algorithm could be used. It is quite unclear whether the former could work, and in the reviewer discussion we have doubts about whether VariationHedge could be used in order to get adaptivity to $V_T$. I strongly suggest the authors look into this. Also, I think it is worth mentioning that the authors mentioned in one of their responses to a reviewer that their algorithm only uses first-order (gradient) feedback. It seems that this would mean that, even at the end of the game, $V_T$ could not be computed.
In light of the currently unclear empirical benefit over linearization, and also the issue with adapting to $V_T$ for the dynamic regret results, this paper does not quite meet the bar for acceptance. However, the decision was a close one, and I strongly encourage the authors to increase discussion of both issues (and technically address adaptivity to $V_T$, working out details using VariationHedge if the authors can indeed do this). | train | [
"ELcQ4FfUZYc",
"qXVSl9sYJCJ",
"4w6IQ6DBgX0D",
"HyD4PSt5pnJ",
"ZV2MfR3K0fy",
"JMwG_CcOKgT",
"JXT45fL5DI",
"u_czsnur8MG",
"JVLi9oxIQ_h",
"O2Ec4qV0fgG",
"c9U5L9XIgSI",
"GgGDc9uyvim",
"GbVyZjh4do",
"Atf8D3mPbD1",
"BrqrGRhsUSu"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the second round of feedback and the detailed explanations! My concerns on the motivation (especially on the DNN part), comparison with the previous work, and the technical contribution are properly addressed by the rebuttal.\n\nAs for the step size tuning issue, I am not still quite sure whether th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"qXVSl9sYJCJ",
"4w6IQ6DBgX0D",
"u_czsnur8MG",
"nips_2022_mxzIrQIOGIK",
"BrqrGRhsUSu",
"Atf8D3mPbD1",
"BrqrGRhsUSu",
"BrqrGRhsUSu",
"Atf8D3mPbD1",
"GbVyZjh4do",
"GgGDc9uyvim",
"nips_2022_mxzIrQIOGIK",
"nips_2022_mxzIrQIOGIK",
"nips_2022_mxzIrQIOGIK",
"nips_2022_mxzIrQIOGIK"
] |
nips_2022_-KPNRZ8i0ag | A Differentiable Semantic Metric Approximation in Probabilistic Embedding for Cross-Modal Retrieval | Cross-modal retrieval aims to build correspondence between multiple modalities by learning a common representation space. Typically, an image can match multiple texts semantically and vice versa, which significantly increases the difficulty of this task. To address this problem, probabilistic embedding is proposed to quantify these many-to-many relationships. However, existing datasets (e.g., MS-COCO) and metrics (e.g., Recall@K) cannot fully represent these diversity correspondences due to non-exhaustive annotations. Based on this observation, we utilize semantic correlation computed by CIDEr to find the potential correspondences. Then we present an effective metric, named Average Semantic Precision (ASP), which can measure the ranking precision of semantic correlation for retrieval sets. Additionally, we introduce a novel and concise objective, coined Differentiable ASP Approximation (DAA). Concretely, DAA can optimize ASP directly by making the ranking function of ASP differentiable through a sigmoid function. To verify the effectiveness of our approach, extensive experiments are conducted on MS-COCO, CUB Captions, and Flickr30K, which are commonly used in cross-modal retrieval. The results show that our approach obtains superior performance over the state-of-the-art approaches on all metrics. The code and trained models are released at https://github.com/leolee99/2022-NeurIPS-DAA. | Accept | This paper investigates the multiple correspondence issue of cross-modal retrieval and proposes a method to improve and evaluate the multiplicity of probabilistic embedding. Sufficient experiments are carried out to prove the effectiveness of the proposed method. In addition, the rebuttal successfully addressed the major concerns and, in the end, there is a general consensus about accepting the paper. | train | [
"7pUY4fr-92Z",
"c0xo7yXaPOX",
"m4BvMKifQx_",
"bT0y60J5V7Q",
"ZKLiSmIFsub",
"3r4Set0F_kH",
"pp1VhEUsU7",
"9EqW2pl4Nje",
"m51TkJii8MN",
"gxOzoIaaLEk",
"xKRjKrEDqM2",
"5ajJ48FDsDy",
"B-iMwGWCYee",
"VcoBUDjtJOj",
"kWMZ_0PHSCL",
"Ld7ypoLe-4e",
"0Nd1bd6TDCB",
"f4LLjo7ZVPY"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed rebuttal. My concerns have been addressed. I have raised my rating.",
" We have added more analyses and all new experiments to the supplementary materials. The details are as follows:\n- The motivation of the specific formulation of ASP (Eq.4) is clarified in **Appendix.A**.\n- The comp... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
5
] | [
"pp1VhEUsU7",
"VcoBUDjtJOj",
"Ld7ypoLe-4e",
"kWMZ_0PHSCL",
"f4LLjo7ZVPY",
"0Nd1bd6TDCB",
"f4LLjo7ZVPY",
"0Nd1bd6TDCB",
"0Nd1bd6TDCB",
"Ld7ypoLe-4e",
"kWMZ_0PHSCL",
"kWMZ_0PHSCL",
"nips_2022_-KPNRZ8i0ag",
"nips_2022_-KPNRZ8i0ag",
"nips_2022_-KPNRZ8i0ag",
"nips_2022_-KPNRZ8i0ag",
"nips... |
nips_2022_m97Cdr9IOZJ | Para-CFlows: $C^k$-universal diffeomorphism approximators as superior neural surrogates | Invertible neural networks based on Coupling Flows (CFlows) have various applications such as image synthesis and data compression. The approximation universality for CFlows is of paramount importance to ensure the model expressiveness. In this paper, we prove that CFlows can approximate any diffeomorphism in $C^k$-norm if its layers can approximate certain single-coordinate transforms. Specifically, we derive that a composition of affine coupling layers and invertible linear transforms achieves this universality. Furthermore, in parametric cases where the diffeomorphism depends on some extra parameters, we prove the corresponding approximation theorems for parametric coupling flows named Para-CFlows. In practice, we apply Para-CFlows as a neural surrogate model in contextual Bayesian optimization tasks, to demonstrate its superiority over other neural surrogate models in terms of optimization performance and gradient approximations. | Accept | The paper proves universal approximation in C^k norm for general families of invertible flows that can approximate a certain class of "entrywise" diffeomorphisms. An example of such a family is, for instance, compositions of affine couplings (even entrywise) and invertible linear maps. This improves upon prior results from Teshima et al. '22 by handling arbitrary k, and the proof technique is fairly natural and clean. They also consider a certain "parametric" scenario where we want to approximate maps that are "conditional" diffeomorphisms for any fixing of some of the coordinates. Finally, they propose a new class of models, Para-CFlows, which shows promising experimental results (albeit on low-dimensional, simple data). | train | [
"8VXt6DNaDFZ",
"z0mb7o1zaA",
"KW9XsNP8NjP",
"2watNCWSZ7",
"EGL76CKxPOYm",
"PYaycTESpZ",
"kiqAkv2MRsi",
"a2FbVr6gZbQ",
"-GpgvkVvvpa",
"KH38KRfoWJC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the explanation! I would expect the authors to clarify necessary definitions, and make the appropriate changes in the theorem statement, but I would keep my score.",
" I thank the authors for the response. I expect the authors to fix the aformentioned points in the revised version. I keep my score.",... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3
] | [
"KW9XsNP8NjP",
"EGL76CKxPOYm",
"2watNCWSZ7",
"kiqAkv2MRsi",
"KH38KRfoWJC",
"-GpgvkVvvpa",
"a2FbVr6gZbQ",
"nips_2022_m97Cdr9IOZJ",
"nips_2022_m97Cdr9IOZJ",
"nips_2022_m97Cdr9IOZJ"
] |
nips_2022_stAKQ6vnFti | Learning Contrastive Embedding in Low-Dimensional Space | Contrastive learning (CL) pretrains feature embeddings to scatter instances in the feature space so that the training data can be well discriminated. Most existing CL techniques usually encourage learning such feature embeddings in the high-dimensional space to maximize the instance discrimination. However, this practice may lead to undesired results where the scattering instances are sparsely distributed in the high-dimensional feature space, making it difficult to capture the underlying similarity between pairwise instances. To this end, we propose a novel framework called contrastive learning with low-dimensional reconstruction (CLLR), which adopts a regularized projection layer to reduce the dimensionality of the feature embedding. In CLLR, we build the sparse / low-rank regularizer to adaptively reconstruct a low-dimensional projection space while preserving the basic objective for instance discrimination, and thus successfully learning contrastive embeddings that alleviate the above issue. Theoretically, we prove a tighter error bound for CLLR; empirically, the superiority of CLLR is demonstrated across multiple domains. Both theoretical and experimental results emphasize the significance of learning low-dimensional contrastive embeddings. | Accept | This paper proposes a new plug-in style method for contrastive self-supervised learning to yield low-dimensional embeddings. The authors argue that the high-dimensional embedding induced by negative pair-based contrastive learning leads to the "curse of dimensionality", which might harm the generalization performance. So, the authors employ a sparse projection layer on top of a typical CSSL encoder for low-dimensional representation.
The scores of the reviewers are split: two strong acceptances (8 and 7) and two rejections (3 and 4).
Unfortunately, the overall situation does not change (8, 6->7, 4->3, 3->4) after the rebuttal and two discussion periods.
AC carefully read the paper, the author rebuttal, and discussion comments.
All reviewers agree that this paper has novelty, an interesting idea, theoretical justification, and extensive, promising results.
The main controversial point is the motivation that the "curse of dimensionality" induced by high-dimensional representation harms the generalization performance. On this issue, three reviewers debated very enthusiastically. It is difficult to decide whether the performance degradation of high-dimensional embedding is meaningful or not. Also, it is not very clear that this degradation is directly caused by the curse of dimensionality. However, empirical results show that the proposed low-dimensional method consistently improves on the high-dimensional method with sparse regularization, including negative-free CL methods. Practically, low-dimensional embedding has advantages in memory footprint and inference time in real-world applications, though these merits are not discussed.
Overall, this paper can contribute to the ML community despite its controversial motivation. So the AC recommends accepting this paper.
Please clarify the motivation considering two reviewers' comments.
| train | [
"4aBy_ZGM_F",
"tWvQgnRicI-",
"XiJnx_v-f-m",
"-lwjVNnLX3",
"J4CNNGj5IoC",
"HVPsqSADWr1",
"iyJM-gPC7I8-",
"Wq__yWEWNWK",
"Le_Cy_1wlq",
"7wiydoikD2S",
"dKMXR2krry",
"ZB49pQwNZW9",
"Q0yU6OJh-QX",
"KSzdA9-zohT",
"7I3hG8ICLnb",
"u0Bc-6Ssiv"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer-cX3t,\n\nI feel that the authors have actually addressed your Question (Comment) 1.\n\nFor one thing, I think the extremely bad cases (namely, very poor performance) may not really exist because the well-known CL methods (such as SimCLR, CMC, and CO2) could already get good and acceptable classifica... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
3
] | [
"XiJnx_v-f-m",
"XiJnx_v-f-m",
"ZB49pQwNZW9",
"dKMXR2krry",
"HVPsqSADWr1",
"Wq__yWEWNWK",
"nips_2022_stAKQ6vnFti",
"u0Bc-6Ssiv",
"7I3hG8ICLnb",
"7I3hG8ICLnb",
"KSzdA9-zohT",
"Q0yU6OJh-QX",
"nips_2022_stAKQ6vnFti",
"nips_2022_stAKQ6vnFti",
"nips_2022_stAKQ6vnFti",
"nips_2022_stAKQ6vnFti"... |
nips_2022_0VFQhPGF1M3 | Improving Transformer with an Admixture of Attention Heads | Transformers with multi-head self-attention have achieved remarkable success in sequence modeling and beyond. However, they suffer from high computational and memory complexities for computing the attention matrix at each head. Recently, it has been shown that those attention matrices lie on a low-dimensional manifold and, thus, are redundant. We propose the Transformer with a Finite Admixture of Shared Heads (FiSHformers), a novel class of efficient and flexible transformers that allow the sharing of attention matrices between attention heads. At the core of FiSHformer is a novel finite admixture model of shared heads (FiSH) that samples attention matrices from a set of global attention matrices. The number of global attention matrices is much smaller than the number of local attention matrices generated. FiSHformers directly learn these global attention matrices rather than the local ones as in other transformers, thus significantly improving the computational and memory efficiency of the model. We empirically verify the advantages of the FiSHformer over the baseline transformers in a wide range of practical applications including language modeling, machine translation, and image classification. On the WikiText-103, IWSLT'14 De-En and WMT'14 En-De, FiSHformers use much fewer floating-point operations per second (FLOPs), memory, and parameters compared to the baseline transformers. | Accept | This work proposes a version of transformers with an admixture of attention heads. The reviewers find the idea interesting. They find the paper to be well organized and presented, and with sufficient empirical support for the main conclusions. There was some initial concern over similarity to prior work, but the reviewer indicated this has been resolved in the discussion with the authors. I therefore recommend accepting the paper. | train | [
"YJB24QQGHl_",
"zWbwVNceBH",
"ney_am-qx-",
"fHJSkQdVtuW",
"9VCY9eOfm9i",
"fNYstxHmafs",
"PWV8Mui7lzO",
"nBxXqCDDSUx",
"y8ksKIPsX2",
"tDvBdfd7yDi",
"tApbQvaa2g_",
"DFH09v59_X",
"YwqjoWkR9-2M",
"X52SyGg4kuE",
"rILTsQZ5vlB",
"b0iUU1tyTOV",
"vCHd-SFB5mg",
"fN2uZWVHmY_",
"Bc22zn9T2ic"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"... | [
" Thanks for your further feedback and we appreciate your endorsement. We have fixed Figure 3 as you suggested in our revision. \n\nRegarding your concern that “it is hard to judge whether the model performs with \"much less computational cost\"”, **Figure 7 and 8 in Appendix C of our paper plot the ratio of the n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"zWbwVNceBH",
"PWV8Mui7lzO",
"fHJSkQdVtuW",
"X52SyGg4kuE",
"YwqjoWkR9-2M",
"rILTsQZ5vlB",
"Bc22zn9T2ic",
"nips_2022_0VFQhPGF1M3",
"nips_2022_0VFQhPGF1M3",
"tApbQvaa2g_",
"DFH09v59_X",
"nips_2022_0VFQhPGF1M3",
"X52SyGg4kuE",
"lG2ALRYec3",
"b0iUU1tyTOV",
"vCHd-SFB5mg",
"fN2uZWVHmY_",
... |
nips_2022_cNrglG_OAeu | On the Theoretical Properties of Noise Correlation in Stochastic Optimization | Studying the properties of stochastic noise to optimize complex non-convex functions has been an active area of research in the field of machine learning. Prior work~\citep{zhou2019pgd, wei2019noise} has shown that the noise of stochastic gradient descent improves optimization by overcoming undesirable obstacles in the landscape. Moreover, injecting artificial Gaussian noise has become a popular idea to quickly escape saddle points.
Indeed, in the absence of reliable gradient information, the noise is used to explore the landscape, but it is unclear what type of noise is optimal in terms of exploration ability. In order to narrow this gap in our knowledge, we study a general type of continuous-time non-Markovian process, based on fractional Brownian motion, that allows for the increments of the process to be correlated. This generalizes processes based on Brownian motion, such as the Ornstein-Uhlenbeck process. We demonstrate how to discretize such processes which gives rise to the new algorithm ``fPGD''. This method is a generalization of the known algorithms PGD and Anti-PGD~\citep{orvieto2022anti}. We study the properties of fPGD both theoretically and empirically, demonstrating that it possesses exploration abilities that, in some cases, are favorable over PGD and Anti-PGD. These results open the field to novel ways to exploit noise for training machine learning models. | Accept | This paper investigates the use of non-Gaussian (specifically "fractional Brownian") noise in SDEs. The reviewers found a variety of weaknesses, but overall were positive; I point in particular to the many valuable comments left by reviewer XeCx. As such, I mark this paper as accept, though I strongly urge the authors to further refine their work based on the copious review feedback below. | train | [
"uEbtSDP8HF",
"mswlX43KvK",
"6aehnxzhmw",
"iGBNqz_Ybhx",
"vFVJWuWvUpu",
"9n2qwHXR62B",
"YACgldBVW0l",
"NAAGO8Jyt1",
"lY0LAw8o_2K",
"wFQss7XelAj",
"kBET285Zn5Z",
"4byTJ5QA58S",
"LadUkU5267",
"YQIsF4W7Sww",
"BrgjUlaZrT",
"Q7SeP763nM",
"EPOSVPm6wDO",
"Ywo_Ubbniea"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nThank you for your feedback, this is sincerely greatly appreciated.\n\nRegarding your question in point 4, yes the bound appears to be tight experimentally, see the newly added experiment in section B.1. \n\nWe will follow your suggestions regarding the other comments your made, thank you.\n\nBe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"6aehnxzhmw",
"iGBNqz_Ybhx",
"YQIsF4W7Sww",
"vFVJWuWvUpu",
"9n2qwHXR62B",
"YACgldBVW0l",
"NAAGO8Jyt1",
"lY0LAw8o_2K",
"4byTJ5QA58S",
"Ywo_Ubbniea",
"Q7SeP763nM",
"Ywo_Ubbniea",
"EPOSVPm6wDO",
"BrgjUlaZrT",
"Q7SeP763nM",
"nips_2022_cNrglG_OAeu",
"nips_2022_cNrglG_OAeu",
"nips_2022_c... |
nips_2022_Vi-sZWNA_Ue | Temporally Disentangled Representation Learning | Recently in the field of unsupervised representation learning, strong identifiability results for disentanglement of causally-related latent variables have been established by exploiting certain side information, such as class labels, in addition to independence. However, most existing work is constrained by functional form assumptions such as independent sources or further with linear transitions, and distribution assumptions such as stationary, exponential family distribution. It is unknown whether the underlying latent variables and their causal relations are identifiable if they have arbitrary, nonparametric causal influences in between. In this work, we establish the identifiability theories of nonparametric latent causal processes from their nonlinear mixtures under fixed temporal causal influences and analyze how distribution changes can further benefit the disentanglement. We propose TDRL, a principled framework to recover time-delayed latent causal variables and identify their relations from measured sequential data under stationary environments and under different distribution shifts. Specifically, the framework can factorize unknown distribution shifts into transition distribution changes under fixed and time-varying latent causal relations, and under global changes in observation. Through experiments, we show that time-delayed latent causal influences are reliably identified and that our approach considerably outperforms existing baselines that do not correctly exploit this modular representation of changes. | Accept | Decision: Accept
This paper presents identifiability results of causal relationships in non-stationary time series under the assumption of latent dynamics with non-linear mixtures. The authors also propose an implementation of the assumed causal model as a sequential deep generative model. Experiments compare the proposed method with existing causal ML methods on time series and sequential VAEs, and obtain better results.
Reviewers commended the paper as well-motivated and agreed that the paper's assumptions are more practical than existing causal ML work on time series data. There were questions on the assumptions & technical results from some reviewers, which were largely addressed in author feedback. I'd also encourage the authors to include some of the replies from the author feedback in the camera ready, as this will help clarify the approach further.
As a side note, some reviewers are not very comfortable with the paper title "Causal Disentanglement for Time Series", as they think the title is perhaps "too ambitious and general to reflect this paper's main focus and disappoints audiences who imagine something else". I'd encourage the authors to consider this comment from the reviewers. | train | [
"4XbMb0NqGYC",
"S2XhyeTBX3a",
"0iXH3DFDqsk",
"MuroAzmJ3CS",
"Qhg1WmJ2-Zx",
"EfH3JSejn_k",
"41UMF7ol9sk",
"nvfFhR3nCV-",
"ST4_0SBJoB4",
"RIxv57uSOva",
"kVK9wzpZjgr",
"XJvhR3o2ljt",
"dIvqwYwGsLa",
"0vKNBZTNwQf",
"J0eDbaBVxlA",
"rRXHDe4gP7bv",
"LXWF-hW95ND",
"zYzevoFJOal",
"rmTGpxGw... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Dear Reviewer U8Ti,\n\nWe understand you are busy and appreciate your time. Here we are re-sending a previous message to make sure you see it. Thank you very much for letting us know that your major concerns have been nicely addressed. In this case, could you please consider updating your recommendation to refl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"0iXH3DFDqsk",
"0iXH3DFDqsk",
"kVK9wzpZjgr",
"Qhg1WmJ2-Zx",
"kcJ8b8oZgKj",
"sz_UjR2M5Of",
"gSYBPO1SgR",
"RIxv57uSOva",
"dIvqwYwGsLa",
"XJvhR3o2ljt",
"sdpQqHEw-mc",
"J0eDbaBVxlA",
"rmTGpxGw1M",
"nips_2022_Vi-sZWNA_Ue",
"rRXHDe4gP7bv",
"LXWF-hW95ND",
"zYzevoFJOal",
"YSVrJPJzFtt",
"... |
nips_2022_eRBVi61Vct1 | Robust Rent Division | In fair rent division, the problem is to assign rooms to roommates and fairly split the rent based on roommates' reported valuations for the rooms. Envy-free rent division is the most popular application on the fair division website Spliddit. The standard model assumes that agents can correctly report their valuations for each room. In practice, agents may be unsure about their valuations, for example because they have had only limited time to inspect the rooms. Our goal is to find a robust rent division that remains fair even if agent valuations are slightly different from the reported ones. We introduce the lexislack solution, which selects a rent division that remains envy-free for valuations within as large a radius as possible of the reported valuations. We also consider robustness notions for valuations that come from a probability distribution, and use results from learning theory to show how we can find rent divisions that (almost) maximize the probability of being envy-free, or that minimize the expected envy. We show that an almost optimal allocation can be identified based on polynomially many samples from the valuation distribution. Finding the best allocation given these samples is NP-hard, but in practice such an allocation can be found using integer linear programming. | Accept | Reviewers are all positive and excited about the paper: interesting and natural model, novel and robust mechanism with theoretical guarantees, nice sample complexity analysis, experiments on real-world data. | train | [
"J1RMEJShF5",
"LhncFcPVbZu",
"oNHgH79CBN",
"_PqyZmaV2WP",
"Uf1yp2BkCMb",
"v4gPHBz4zsM",
"J5sEniRilHd",
"MAk7ozFnvIM",
"Yosb-wTsT9Z",
"XQPYoUJHRkh",
"se_1oUXSXEE",
"Bab6cRLEeeM"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I don’t really see any ethical issues beyond the use of potentially sensitive data from spliddit, which the authors ought to write a quick section on how the data was acquired, handled, any IRB applications, etc. The authors address this issue to the reviewer who raised it. Should the authors write an ethical con... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"nips_2022_eRBVi61Vct1",
"oNHgH79CBN",
"J5sEniRilHd",
"nips_2022_eRBVi61Vct1",
"Bab6cRLEeeM",
"se_1oUXSXEE",
"XQPYoUJHRkh",
"Yosb-wTsT9Z",
"nips_2022_eRBVi61Vct1",
"nips_2022_eRBVi61Vct1",
"nips_2022_eRBVi61Vct1",
"nips_2022_eRBVi61Vct1"
] |
nips_2022_I4aSjFR7jOm | Truncated Matrix Power Iteration for Differentiable DAG Learning | Recovering underlying Directed Acyclic Graph (DAG) structures from observational data is highly challenging due to the combinatorial nature of the DAG-constrained optimization problem. Recently, DAG learning has been cast as a continuous optimization problem by characterizing the DAG constraint as a smooth equality one, generally based on polynomials over adjacency matrices. Existing methods place very small coefficients on high-order polynomial terms for stabilization, since they argue that large coefficients on the higher-order terms are harmful due to numeric exploding. On the contrary, we discover that large coefficients on higher-order terms are beneficial for DAG learning, when the spectral radiuses of the adjacency matrices are small, and that larger coefficients for higher-order terms can approximate the DAG constraints much better than the small counterparts. Based on this, we propose a novel DAG learning method with efficient truncated matrix power iteration to approximate geometric series based DAG constraints. Empirically, our DAG learning method outperforms the previous state-of-the-arts in various settings, often by a factor of $3$ or more in terms of structural Hamming distance. | Accept | During the initial review phase, the reviewers were mostly positive in their opinions of the manuscript, noting that it is well-written, novel, and solves an important problem. However, the reviewers noted a few perceived flaws and technical weaknesses in this manuscript as well. Fortunately, these issues appear to have been positively resolved in a productive discussion during the author response period, particularly with reviewer iGwL.
Following the response period, the reviewers agreed that the proposed resolutions would significantly strengthen the paper and render it suitable for acceptance.
I strongly encourage the authors to make the requested changes in preparing an updated version of their manuscript. | train | [
"5C2SOrBU3eT",
"kGi_rq7DXmc",
"zzp5MIITwRX",
"g-zti86XAyu",
"ri_c19hQwu",
"Lj-RsTAfx86",
"jgw_-LgmKPH",
"iTself5Muf1",
"eqHLy8Xy33E",
"CRHRXuoOuJ8",
"TYU6Ohb-fgP",
"mNfewmphvx",
"zAiWzrzwjft",
"rq10Bt6Mpms",
"qK04rbI4SeK",
"ERYl6KM1IDP",
"q-25gC1N0R",
"pSU7t0VGTB_",
"SXwTFqbOc-1"... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" We are thankful to the reviewer for reading our response and for the acknowledgement. We will incorporate all the suggestions in the final version. Many thanks again.",
" Dear Reviewer XcDH,\n\nWe sincerely thank you for your feedback and time. We have provided responses to your comments and an updated submissi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"g-zti86XAyu",
"pSU7t0VGTB_",
"ri_c19hQwu",
"rq10Bt6Mpms",
"eqHLy8Xy33E",
"pSU7t0VGTB_",
"nips_2022_I4aSjFR7jOm",
"CRHRXuoOuJ8",
"CRHRXuoOuJ8",
"TYU6Ohb-fgP",
"mNfewmphvx",
"zAiWzrzwjft",
"0_qw1gPqkW",
"38Eqj9BVbdZ",
"SXwTFqbOc-1",
"pSU7t0VGTB_",
"nips_2022_I4aSjFR7jOm",
"nips_2022... |
nips_2022_3AxaYRmJ2KY | Semantic Field of Words Represented as Non-Linear Functions | State-of-the-art word embeddings presume a linear vector space, but this approach does not easily incorporate the nonlinearity that is necessary to represent polysemy. We thus propose a novel semantic FIeld REpresentation, called FIRE, which is a $D$-dimensional field in which every word is represented as a set of its locations and a nonlinear function covering the field. The strength of a word's relation to another word at a certain location is measured as the function value at that location. With FIRE, compositionality is represented via functional additivity, whereas polysemy is represented via the set of points and the function's multimodality. By implementing FIRE for English and comparing it with previous representation methods via word and sentence similarity tasks, we show that FIRE produces comparable or even better results. In an evaluation of polysemy to predict the number of word senses, FIRE greatly outperformed BERT and Word2vec, providing evidence of how FIRE represents polysemy. The code is available at https://github.com/kduxin/firelang. | Accept | Two reviewers who suggested that we accept the paper had significant engagement with the authors, and that engagement improved the paper quite a bit. The fourth reviewer also made several suggestions regarding further analysis, which were also incorporated by the authors to a large extent, and this improved the paper. I disagree with the fourth reviewer that Neurips is not the right audience for this paper, while a CL conference is, and don't agree that it is a reason for rejection. The other review, suggesting a strong reject, does not hold much ground, since it asks the authors to compare with more baselines (and they do compare with several reasonable baselines) while ignoring the interesting method that the authors have presented.
Overall, I think the paper is quite interesting even though the results are not extraordinarily SOTA and hence we should give it an audience at Neurips. | train | [
"jgrJO-VOLzx",
"bHsLBqx5Ov",
"pkJp0Da6eBR",
"DM5hC8f248d",
"kWLiHB_Lauy",
"MV_6oCcADdm",
"y2PRt2ZpZj",
"e4h9TRly_Tw",
"DsLbbSzgaAg",
"ZZDQENA8Xv",
"NCgvh0R9tNo",
"9IRAEgEk1E6"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for your kind comments. Yes, we will do exactly as you say. For other points that you kindly indicated, we intend to integrate as much as possible. Thank you again for your wonderful advice.",
" Thank you for addressing my concerns. I think you should integrate your last reply to the paper and... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
8,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"bHsLBqx5Ov",
"MV_6oCcADdm",
"DM5hC8f248d",
"kWLiHB_Lauy",
"9IRAEgEk1E6",
"NCgvh0R9tNo",
"ZZDQENA8Xv",
"DsLbbSzgaAg",
"nips_2022_3AxaYRmJ2KY",
"nips_2022_3AxaYRmJ2KY",
"nips_2022_3AxaYRmJ2KY",
"nips_2022_3AxaYRmJ2KY"
] |
nips_2022_AhccnBXSne | VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training | Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking with an extremely high ratio. This simple design makes video reconstruction a more challenging and meaningful self-supervision task, thus encouraging extracting more effective video representations during the pre-training process. We obtain three important findings with VideoMAE: (1) An extremely high proportion of masking ratio (i.e., 90% to 95%) still yields favorable performance for VideoMAE. The temporally redundant video content enables higher masking ratio than that of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. This is partially ascribed to the challenging task of video reconstruction to enforce high-level structure learning. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between pre-training and target datasets is an important factor. Notably, our VideoMAE with the vanilla ViT backbone can achieve 87.4% on Kinetics-400, 75.4% on Something-Something V2, 91.3% on UCF101, and 62.6% on HMDB51, without using any extra data. Code is available at https://github.com/MCG-NJU/VideoMAE. | Accept | This paper studies the application of masked autoencoders to video data. It is a very empirical paper with lots of ablations and experiments. All three reviewers lean toward the acceptance of the paper. Reviewer C915 has a slight concern regarding the novelty of the paper over concurrent works including [50,53].
The reviewers believe that the ablation study is exhaustive and the paper has good reproducibility. The authors are encouraged to add new experiments with Kinetics pretraining in the final version. | train | [
"0VF67jp3pyN",
"o9UHhIPeJ2c",
"AUAIkjK_Gww",
"LH66xmajxia",
"hUZnkCYAIrZ",
"S2tx0ewqrWu",
"RMa3SDUmXjT",
"6vSndhkcyPo",
"Py8RzLewo8C"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks the authors for the details response. Most of my concerns are addresses. I would consider raise my rating in my final review.",
" I thank the authors for their effort in preparing the rebuttal. While I still have not fully convinced about the extent of the novelty of this approach, most of my other conce... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"LH66xmajxia",
"hUZnkCYAIrZ",
"Py8RzLewo8C",
"6vSndhkcyPo",
"RMa3SDUmXjT",
"nips_2022_AhccnBXSne",
"nips_2022_AhccnBXSne",
"nips_2022_AhccnBXSne",
"nips_2022_AhccnBXSne"
] |
nips_2022_NySDKS9SxN | Most Activation Functions Can Win the Lottery Without Excessive Depth | The strong lottery ticket hypothesis has highlighted the potential for training deep neural networks by pruning, which has inspired interesting practical and theoretical insights into how neural networks can represent functions. For networks with ReLU activation functions, it has been proven that a target network with depth L can be approximated by the subnetwork of a randomly initialized neural network that has double the target's depth 2L and is wider by a logarithmic factor. We show that a depth L+1 is sufficient. This result indicates that we can expect to find lottery tickets at realistic, commonly used depths while only requiring logarithmic overparametrization. Our novel construction approach applies to a large class of activation functions and is not limited to ReLUs. Code is available on Github (RelationalML/LT-existence). | Accept | The submission provides a nice extension of the previous work that shows a random network with 2L layers contains a subnetwork (or lottery ticket) that approximates the target network of depth L well. Instead of 2L, they show that a depth of L+1 plus a logarithmic widening factor suffices. Overall it is a nice solid contribution to the community. In the camera-ready version, the AC would advise the authors to clearly specify the difference between this submission and [1].
The concerns raised by reviewer Puqv are due to a misunderstanding of the lottery ticket hypothesis (LTH) and are largely not valid. For example, "Linear Mode Connectivity and the Lottery Ticket Hypothesis" by Frankle et al. in fact shows that, *after the initial phase of network training*, all lottery ticket solutions are within the same basin. For different randomly initialized networks, the resulting lottery tickets can be very different. | train | [
"NwDMY_pTWCr",
"tBjzZUYM5k",
"DhGQ57-tsMZ",
"O-oQbzp8ml",
"xxkIYrSN9Cd",
"a0qUTfXuOXZ",
"mhgXRn8bqx",
"Wm0Wf3E1Hd",
"ABsimqHvcly",
"LxhOuYB2jbs",
"mhEJjAYftpa",
"EQlkxxMbv_F",
"DrkyRytKYHX",
"PsFzzxygn28"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Feedback on rebuttal**\n\nWe appreciate the feedback by Reviewer Puqv and the time spent on reviewing.\nWe did not mean to hurt their feelings with the statement that they misunderstood the paper but we had to point it out to clarify the problem. \nIn fact, we answered every raised question, also the ones that ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"tBjzZUYM5k",
"ABsimqHvcly",
"Wm0Wf3E1Hd",
"xxkIYrSN9Cd",
"LxhOuYB2jbs",
"mhgXRn8bqx",
"PsFzzxygn28",
"ABsimqHvcly",
"DrkyRytKYHX",
"EQlkxxMbv_F",
"nips_2022_NySDKS9SxN",
"nips_2022_NySDKS9SxN",
"nips_2022_NySDKS9SxN",
"nips_2022_NySDKS9SxN"
] |
nips_2022_zVglD2W0EAS | Debiased, Longitudinal and Coordinated Drug Recommendation through Multi-Visit Clinic Records | AI-empowered drug recommendation has become an important task in healthcare research areas, which offers an additional perspective to assist human doctors with more accurate and more efficient drug prescriptions. Generally, drug recommendation is based on patients' diagnosis results in the electronic health records. We assume that there are three key factors to be addressed in drug recommendation: 1) elimination of recommendation bias due to limitations of observable information, 2) better utilization of historical health condition and 3) coordination of multiple drugs to control safety. To this end, we propose DrugRec, a causal inference based drug recommendation model. The causal graphical model can identify and deconfound the recommendation bias with front-door adjustment. Meanwhile, we model the multi-visit in the causal graph to characterize a patient's historical health conditions. Finally, we model the drug-drug interactions (DDIs) as the propositional satisfiability (SAT) problem, and solving the SAT problem can help better coordinate the recommendation. Comprehensive experiment results show that our proposed model achieves state-of-the-art performance on the widely used datasets MIMIC-III and MIMIC-IV, demonstrating the effectiveness and safety of our method. | Accept | This paper proposes a drug recommendation model, called DrugRec, based on causal inference. The two proposed modeling schemes are capable of handling multiple patient visits to better model the patient's past health status. The authors performed large-scale experiments and showed the effectiveness of the proposed model. The model formulates bias as a confounder through causal effect. The authors clearly answered the reviewers' questions, and the revised paper is worthy of acceptance as a NeurIPS paper. | train | [
"lUuMa5zuqr_",
"UpmX4MSihSH",
"imhH9UlqMGh",
"w82r8Kop4iBY",
"J7VbRrhgfRC",
"bTbFg-ufGHj",
"ps5CBfQmyle",
"RiFHc8HxzO-",
"PufmiyTnWhX",
"uMC1bUYgyCX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my concerns. I've updated my rating accordingly.",
" Thank you for looking into the concerns/suggestions and including the ones which were possible in the current timeline. I do not have any further concerns. I believe my current overall recommendation would remain the same for the pape... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
3
] | [
"imhH9UlqMGh",
"J7VbRrhgfRC",
"RiFHc8HxzO-",
"uMC1bUYgyCX",
"ps5CBfQmyle",
"PufmiyTnWhX",
"nips_2022_zVglD2W0EAS",
"nips_2022_zVglD2W0EAS",
"nips_2022_zVglD2W0EAS",
"nips_2022_zVglD2W0EAS"
] |
nips_2022_NmUWaaFEDdn | On the Double Descent of Random Features Models Trained with SGD | We study generalization properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD) in under-/over-parameterized regimes. In this work, we derive precise non-asymptotic error bounds of RF regression under both constant and polynomial-decay step-size SGD settings, and observe the double descent phenomenon both theoretically and empirically. Our analysis shows how to cope with multiple randomness sources of initialization, label noise, and data sampling (as well as stochastic gradients) with no closed-form solution, and also goes beyond the commonly-used Gaussian/spherical data assumption. Our theoretical results demonstrate that, with SGD training, RF regression still generalizes well for interpolation learning, and is able to characterize the double descent behavior by the unimodality of variance and monotonic decrease of bias. Besides, we also prove that the constant step-size SGD setting incurs no loss in convergence rate when compared to the exact minimum-norm interpolator, as a theoretical justification of using SGD in practice. | Accept | While the reviewers showed some disagreement, the majority of them considered the paper novel and interesting. Moreover, during the discussion period the authors improved the clarity of the presentation and of the proved results. Overall, the paper makes a good contribution to the conference. | val | [
"jWpXXcnyT5",
"Trh3EPlB8oy",
"jDNNc7lM5KY",
"yNIb4G8QNEg",
"cymMv-57Ln",
"mlRhejO0D8c",
"jeEYM7Hw0mL",
"qYSnXaCdW4E",
"rrRzzRR8YdJ",
"fg3Gojoso_Z",
"9fX-GMMVa3O",
"5IcbFv20CG",
"RRS56dmvxGk",
"MEyDhGgDfd6",
"7eEITwARTjy",
"McL2lnQJN7U",
"tHRtg3UzA8y",
"fLc-UP3c0wR"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nWe thank all the reviewers' effort and constructive feedback on this work during the author-reviewer discussion period.\n\nAccording to suggestions from all the reviewers, we have revised this paper for better presentation and added more discussions on the obtained result and techniques.\n\n- W... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
3,
4
] | [
"nips_2022_NmUWaaFEDdn",
"jDNNc7lM5KY",
"jeEYM7Hw0mL",
"nips_2022_NmUWaaFEDdn",
"mlRhejO0D8c",
"rrRzzRR8YdJ",
"5IcbFv20CG",
"fg3Gojoso_Z",
"fg3Gojoso_Z",
"fLc-UP3c0wR",
"tHRtg3UzA8y",
"RRS56dmvxGk",
"McL2lnQJN7U",
"7eEITwARTjy",
"nips_2022_NmUWaaFEDdn",
"nips_2022_NmUWaaFEDdn",
"nips... |
nips_2022_157Usp_kbi | Knowledge Distillation from A Stronger Teacher | Unlike existing knowledge distillation methods, which focus on the baseline settings, where the teacher models and training strategies are not that strong and competing as state-of-the-art approaches, this paper presents a method dubbed DIST to distill better from a stronger teacher. We empirically find that the discrepancy of predictions between the student and a stronger teacher tends to be fairly severe. As a result, the exact match of predictions in KL divergence would disturb the training and make existing methods perform poorly. In this paper, we show that simply preserving the relations between the predictions of teacher and student would suffice, and propose a correlation-based loss to capture the intrinsic inter-class relations from the teacher explicitly. Besides, considering that different instances have different semantic similarities to each class, we also extend this relational match to the intra-class level. Our method is simple yet practical, and extensive experiments demonstrate that it adapts well to various architectures, model sizes and training strategies, and can achieve state-of-the-art performance consistently on image classification, object detection, and semantic segmentation tasks. Code is available at: https://github.com/hunto/DIST_KD. | Accept | The paper presents a new KD loss different from the widely used KL divergence for learning from strong teachers that have large capability gaps with their students. The authors provide a comprehensive study and improve results on challenging benchmarks. The contribution is significant to the KD community and the AC recommends acceptance. The authors may want to carefully update the paper according to the constructive comments for the camera-ready version.
| train | [
"2WstvYeRkc",
"njQ9hh6K09j",
"Go2KV7azYrj",
"qYrZe2Cgka7",
"WHiGcTzLzeC",
"qTpapEBX_mz",
"rVVyb95t_Gt",
"P054lKAbTa5",
"5EH6Pl33WW",
"ZhqLZBa7Nv",
"8MpHlTusmme",
"4oqLcxHDOv-",
"p_jsNkiK56z"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. My concerns have been well addressed. This is a novel and effective method, which may inspire the community. ",
" Dear Reviewer FMUS,\n\nWe sincerely thank you for your efforts in reviewing our paper. We have provided corresponding responses and results, which we believe have covered yo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
5
] | [
"qTpapEBX_mz",
"p_jsNkiK56z",
"P054lKAbTa5",
"p_jsNkiK56z",
"p_jsNkiK56z",
"4oqLcxHDOv-",
"8MpHlTusmme",
"ZhqLZBa7Nv",
"nips_2022_157Usp_kbi",
"nips_2022_157Usp_kbi",
"nips_2022_157Usp_kbi",
"nips_2022_157Usp_kbi",
"nips_2022_157Usp_kbi"
] |
nips_2022_bF4eYy3LTR9 | Generic bounds on the approximation error for physics-informed (and) operator learning | We propose a very general framework for deriving rigorous bounds on the approximation error for physics-informed neural networks (PINNs) and operator learning architectures such as DeepONets and FNOs as well as for physics-informed operator learning. These bounds guarantee that PINNs and (physics-informed) DeepONets or FNOs will efficiently approximate the underlying solution or solution-operator of generic partial differential equations (PDEs). Our framework utilizes existing neural network approximation results to obtain bounds on more-involved learning architectures for PDEs. We illustrate the general framework by deriving the first rigorous bounds on the approximation error of physics-informed operator learning and by showing that PINNs (and physics-informed DeepONets and FNOs) mitigate the curse of dimensionality in approximating nonlinear parabolic PDEs. | Accept | The paper studies approximation error bounds for physics-informed neural networks and operator learning. The result is technically sound and useful for practical applications. The authors also adequately addressed concerns raised by the referees. The meta-reviewer recommends acceptance of the paper. | train | [
"HwNVWfyej7",
"RIXh98RJWr",
"J6rbxvAQ_mJ",
"xB55NqMuRxW",
"wm9_H8K-Eci",
"lNJfZEFj2l0",
"y9KKb68lbud",
"pSa2GowVDr",
"WZYsQsoXESG",
"vlFGulcJ6FLt",
"LmQY26CfR6",
"TdOjw3L8zIR",
"Wl-lJ7RakfF",
"iVxn_a-HeTC"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As the author-reviewer discussion phase will soon conclude, we take the opportunity to thank the reviewer again for your very positive appreciation of our paper and your constructive comments and suggestions which enables us to improve the quality of our paper. We hope that we have addressed the remaining concern... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"WZYsQsoXESG",
"vlFGulcJ6FLt",
"xB55NqMuRxW",
"wm9_H8K-Eci",
"lNJfZEFj2l0",
"y9KKb68lbud",
"pSa2GowVDr",
"iVxn_a-HeTC",
"Wl-lJ7RakfF",
"TdOjw3L8zIR",
"nips_2022_bF4eYy3LTR9",
"nips_2022_bF4eYy3LTR9",
"nips_2022_bF4eYy3LTR9",
"nips_2022_bF4eYy3LTR9"
] |
nips_2022_3AV_53iRfTi | Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop | No-reference image quality assessment (NR-IQA) aims to quantify how humans perceive visual distortions of digital images without access to their undistorted references. NR-IQA models are extensively studied in computational vision, and are widely used for performance evaluation and perceptual optimization of man-made vision systems. Here we make one of the first attempts to examine the perceptual robustness of NR-IQA models. Under a Lagrangian formulation, we identify insightful connections of the proposed perceptual attack to previous beautiful ideas in computer vision and machine learning. We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models (as approximations to human perception of just-noticeable differences). Through carefully designed psychophysical experiments, we find that all four NR-IQA models are vulnerable to the proposed perceptual attack. More interestingly, we observe that the generated counterexamples are not transferable, manifesting themselves as distinct design flaws of respective NR-IQA methods. | Accept | All the reviewers are in agreement in their recommendations to accept the paper. The topic of the paper brings along many different sub-fields, including adversarial attacks and image quality assessment, and should be interesting to several folks in the community. There are several constructive comments by the reviewers that I’d encourage the authors to address, especially the ones asking to clearly state the motivation of this line of research, as well as the synthesis of ideas from the study that would enable better defenses against such attacks. | train | [
"EMznf8DPeUE",
"KaREF3X1fje",
"CY8g9_zlYx",
"wXZc_FMIdpL",
"fWsDdlmSV4G",
"Nvvu_MNnlA",
"nqOsXf5_eT",
"mqFGNaTiJAF",
"wNPjyy810Qf",
"clRhX0Q2AQm",
"iT1Pm_rHgc9"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Regarding the basis for choosing these 6 images and their corresponding distortions**: We selected the 6 images from the test set of the LIVE dataset, the very first ``large-scale’’ human-rated IQA dataset. The four distortions (i.e., JPEG compression, JPEG2000 compression, Gaussian blur, and white noise) are t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
3,
3
] | [
"wXZc_FMIdpL",
"CY8g9_zlYx",
"iT1Pm_rHgc9",
"clRhX0Q2AQm",
"wNPjyy810Qf",
"nqOsXf5_eT",
"mqFGNaTiJAF",
"nips_2022_3AV_53iRfTi",
"nips_2022_3AV_53iRfTi",
"nips_2022_3AV_53iRfTi",
"nips_2022_3AV_53iRfTi"
] |
nips_2022_eCUeRHHupF | Refining Low-Resource Unsupervised Translation by Language Disentanglement of Multilingual Translation Model | Numerous recent work on unsupervised machine translation (UMT) implies that competent unsupervised translations of low-resource and unrelated languages, such as Nepali or Sinhala, are only possible if the model is trained in a massive multilingual environment, where these low-resource languages are mixed with high-resource counterparts. Nonetheless, while the high-resource languages greatly help kick-start the target low-resource translation tasks, the language discrepancy between them may hinder their further improvement. In this work, we propose a simple refinement procedure to separate languages from a pre-trained multilingual UMT model for it to focus on only the target low-resource task. Our method achieves the state of the art in the fully unsupervised translation tasks of English to Nepali, Sinhala, Gujarati, Latvian, Estonian and Kazakh, with BLEU score gains of 3.5, 3.5, 3.3, 4.1, 4.2, and 3.3, respectively. Our codebase is available at https://github.com/nxphi47/refine_unsup_multilingual_mt | Accept | This paper presents a four-stage process for training completely unsupervised machine translation models. The results are fairly strong. After some discussion all reviewers are convinced that the evaluation is sound.
The reviewers are somewhat split about the novelty, some say:
+ "The proposed method is very interesting, intuitive, innovative, and well-motivated."
Others say:
- "The idea itself is not novel in that there exists prior work on isolating target language specific representation by feed-forward network in the decoder side."
Some reviewers find the approach with its four steps cumbersome, while others don't have an issue with this.
On balance, just above the decision boundary. | test | [
"iuYIHGI_HyN",
"vkS1wWWkEis",
"TD-kh4f__W",
"ObVo-4DwK7O",
"_9CAtmV7qT4",
"NHJGF871rov",
"7_X-ogwfOqv",
"T0436jxGDbg",
"gJdTXSIXaAj",
"hH2GLdz1UBn",
"ADeg3UIe2zN",
"oSdywGXasRA",
"a44DXj7Zyp_",
"NYIJ9bRPQkj"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers,\n\nWe greatly appreciate the reviewers’ effort and time in reviewing our paper. We have submitted our responses to each reviewer and uploaded a rebuttal revision of the paper with details of the changes. While we are greatly grateful that reviewer *hAiK* has replied and discussed our responses, we... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
4
] | [
"nips_2022_eCUeRHHupF",
"ObVo-4DwK7O",
"T0436jxGDbg",
"gJdTXSIXaAj",
"nips_2022_eCUeRHHupF",
"NYIJ9bRPQkj",
"a44DXj7Zyp_",
"gJdTXSIXaAj",
"oSdywGXasRA",
"ADeg3UIe2zN",
"nips_2022_eCUeRHHupF",
"nips_2022_eCUeRHHupF",
"nips_2022_eCUeRHHupF",
"nips_2022_eCUeRHHupF"
] |
nips_2022_ez6VHWvuXEx | GT-GAN: General Purpose Time Series Synthesis with Generative Adversarial Networks | Time series synthesis is an important research topic in the field of deep learning, which can be used for data augmentation. Time series data types can be broadly classified into regular or irregular. However, there are no existing generative models that show good performance for both types without any model changes. Therefore, we present a general purpose model capable of synthesizing regular and irregular time series data. To our knowledge, we are the first to design a general purpose time series synthesis model, which is one of the most challenging settings for time series synthesis. To this end, we design a generative adversarial network-based method, where many related techniques are carefully integrated into a single framework, ranging from neural ordinary/controlled differential equations to continuous time-flow processes. Our method outperforms all existing methods. | Accept | This paper presents a new model for time series data that can handle data sampled at irregular time intervals. The proposed model makes extensive use of continuous time processes in a GAN framework. Experiment results show that the proposed model consistently outperforms existing approaches. All reviewers lean on the accept side and I support that consensus. | train | [
"Kb39IIMQ_1",
"tYwZLstHCVE",
"icO4bHgDBxN",
"lAm8khz6qDG",
"btGmYZVZN9e",
"kQGfuOxQ8yr",
"BrOgXeVSV2o",
"Vbf7NJ_9gW1",
"QUmp-IGoIP3",
"3pVv0vvszlP",
"FnE-a2mgHU6",
"hPONiaAGhg3",
"etov3fijwyU",
"pGVZ-u7oW7i"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear All Reviewers,\n\nFirst of all, thanks for your comments. We could significantly improve our paper during the discussion period. We revised the following points and uploaded a new version:\n\n1. We revised Introduction to strengthen our motivations for the general purpose time series synthesis model.\n\n2. W... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"nips_2022_ez6VHWvuXEx",
"3pVv0vvszlP",
"lAm8khz6qDG",
"btGmYZVZN9e",
"kQGfuOxQ8yr",
"QUmp-IGoIP3",
"etov3fijwyU",
"pGVZ-u7oW7i",
"pGVZ-u7oW7i",
"etov3fijwyU",
"hPONiaAGhg3",
"nips_2022_ez6VHWvuXEx",
"nips_2022_ez6VHWvuXEx",
"nips_2022_ez6VHWvuXEx"
] |
nips_2022_rxrLt7rTlAr | Fair Wrapping for Black-box Predictions | We introduce a new family of techniques to post-process (``wrap") a black-box classifier in order to reduce its bias. Our technique builds on the recent analysis of improper loss functions whose optimization can correct any twist in prediction, unfairness being treated as a twist. In the post-processing, we learn a wrapper function which we define as an $\alpha$-tree, which modifies the prediction. We provide two generic boosting algorithms to learn $\alpha$-trees. We show that our modification has appealing properties in terms of composition of $\alpha$-trees, generalization, interpretability, and KL divergence between modified and original predictions. We exemplify the use of our technique in three fairness notions: conditional value-at-risk, equality of opportunity, and statistical parity; and provide experiments on several readily available datasets. | Accept | This paper tackles the important and interesting problem of how to transform black-box models so that their outputs have improved fairness. The proposal of using an "alpha-tree", an axis-aligned decision tree that re-weights the existing model, seems elegant and does indeed have some useful form of interpretability. The primary issue---which was essentially raised by all reviewers---was that of the paper's clarity: the presentation often prioritizes rigor over readability. I think the authors have done a good job in the comments explaining their work, and as no technical issues were raised, I recommend the paper for acceptance. However, the authors should take steps to improve readability (e.g. by incorporating many of the discussion comments within the text, as space allows). | train | [
"WEZyRalP-Cl",
"hdKoNPrsxt-",
"N15I7EvaRbY",
"TMUR9BPZRz",
"EIT4tHSg6vI",
"xDHLNW7iqGS",
"Wgnn3ByG2A",
"edmmjdIl2I2",
"-5zvf98sPm7",
"_kluxkas-Z_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot for the detailed response and the clarifications and changes! They strengthen the paper and address some of my concerns, esp. regarding clarity. I have updated my rating accordingly.\n\nHowever, I still have some remaining issues with the work, primarily wrt. its practical applicability to fairness p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3
] | [
"EIT4tHSg6vI",
"N15I7EvaRbY",
"_kluxkas-Z_",
"-5zvf98sPm7",
"xDHLNW7iqGS",
"edmmjdIl2I2",
"nips_2022_rxrLt7rTlAr",
"nips_2022_rxrLt7rTlAr",
"nips_2022_rxrLt7rTlAr",
"nips_2022_rxrLt7rTlAr"
] |
nips_2022_L_1GMG_7UTL | Fast Instrument Learning with Faster Rates | We investigate nonlinear instrumental variable (IV) regression given high-dimensional instruments. We propose a simple algorithm which combines kernelized IV methods and an arbitrary, adaptive regression algorithm, accessed as a black box. Our algorithm enjoys faster-rate convergence and adapts to the dimensionality of informative latent features, while avoiding an expensive minimax optimization procedure, which has been necessary to establish similar guarantees. It further brings the benefit of flexible machine learning models to quasi-Bayesian uncertainty quantification, likelihood-based model selection, and model averaging. Simulation studies demonstrate the competitive performance of our method.
| Accept | Instrumental variable (IV) regression for high-dimensional data is a challenging problem as it involves either a cumbersome two-stage method or a complex minimax procedure. Kernel methods have recently arisen as promising tools that enable the solutions to be computed in closed form. Unfortunately, the convergence rate of the vanilla estimators is sub-optimal as it depends on the choice of kernel functions on the instruments (i.e., similar to how the second-stage estimation depends on the first-stage estimation). This paper contributes by showing that the convergence rate can be improved if the kernels on instruments are learned from data. The expert reviewers agree that the paper provides a novel approach to learning the instrument kernels, rigorous theoretical analyses, and convincing empirical results. The major limitation, which can be improved in future work, is that most of the theoretical analyses rely on the assumption that the true structural function $f_0$ lies in a reproducing kernel Hilbert space (RKHS). | train | [
"R6KbdbPh5L",
"fQPUg4UGeMt",
"byXHP_JiD1W",
"OTFepFiE10d",
"dL6E0fs2M0I",
"X3dCb9UrC11",
"_1KPCUBxCWb",
"v0VvPmECAk6",
"kh6JTZQg6AO",
"-FbUAMwng3e",
"dwggpakebUkl",
"7QymoN0p3N1",
"hbvZwkCOoYB",
"0faHkvhrUB8",
"JJDg46yHTQK",
"CzzEwjN2IyM",
"VVEuzIIiaWI",
"Xw7NV6o-XaS4",
"ASqPwz2C... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Dear Reviewer bZoU,\n\nThank you very much for the clarification. We highly appreciate that! We are also very happy to see that most of the reviewers (including you) appreciated our contributions and found our response satisfactory. \n\nBest, \nAuthors\n\n",
" Thank you very much for the feedback. We're happy t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2,
3
] | [
"Xw7NV6o-XaS4",
"-FbUAMwng3e",
"v0VvPmECAk6",
"X3dCb9UrC11",
"kh6JTZQg6AO",
"0faHkvhrUB8",
"WmKaZyMOtMe",
"JJDg46yHTQK",
"VVEuzIIiaWI",
"CzzEwjN2IyM",
"hbvZwkCOoYB",
"BmmyfXPGIm1",
"BmmyfXPGIm1",
"WmKaZyMOtMe",
"HF1NjKLcAoc",
"xD8_m8WA23r",
"ASqPwz2CRo7",
"BmmyfXPGIm1",
"nips_202... |
nips_2022_3e3IQMLDSLP | Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination | The learned policy of model-free offline reinforcement learning (RL) methods is often constrained to stay within the support of datasets to avoid possible dangerous out-of-distribution actions or states, making it challenging to handle out-of-support region. Model-based RL methods offer a richer dataset and benefit generalization by generating imaginary trajectories with either trained forward or reverse dynamics model. However, the imagined transitions may be inaccurate, thus downgrading the performance of the underlying offline RL method. In this paper, we propose to augment the offline dataset by using trained bidirectional dynamics models and rollout policies with double check. We introduce conservatism by trusting samples that the forward model and backward model agree on. Our method, confidence-aware bidirectional offline model-based imagination, generates reliable samples and can be combined with any model-free offline RL method. Experimental results on the D4RL benchmarks demonstrate that our method significantly boosts the performance of existing model-free offline RL algorithms and achieves competitive or better scores against baseline methods. | Accept | **Strengths**: The paper introduces a new and interesting idea of "double checking" with bi-directional models, and thoroughly evaluates the idea on a variety of offline RL datasets and through multiple ablations.
**Weaknesses**: The main weaknesses seem to be that (1) some of the performance improvements are small, and (2) like prior methods, the method is heavily reliant on tuning a hyperparameter based on the quality of the dataset. It also is somewhat strange that the paper uses v0 datasets from D4RL, since the more recent versions have fixed bugs in the datasets. Otherwise, the author response did a good job at discussing and addressing the other reviewer concerns.
The reviewers and AC agree that the strengths outweigh the weaknesses, and would make a valuable addition to NeurIPS. | val | [
"oCu6Ici6yw",
"gJ6M1NS9t6v",
"RnxEaalR9zfG",
"IeE_pq7m9dm",
"nnTZqe5H6O1",
"OxpmMPAI_6q",
"fiInbAyKd05",
"Z6Y1vCVkERr",
"-wxBIP_CqJ",
"60H8TNYibCK",
"Qr0zvj800yg"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer rJ1S,\n\nWe first would like to thank the reviewer's efforts and time in reviewing our work. We were wondering if our responses have resolved your concerns. We will be happy to have further discussions with the reviewer if there are still some remaining questions! More discussions and suggestions on... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"60H8TNYibCK",
"RnxEaalR9zfG",
"IeE_pq7m9dm",
"nnTZqe5H6O1",
"Qr0zvj800yg",
"fiInbAyKd05",
"60H8TNYibCK",
"-wxBIP_CqJ",
"nips_2022_3e3IQMLDSLP",
"nips_2022_3e3IQMLDSLP",
"nips_2022_3e3IQMLDSLP"
] |
nips_2022_ZFjPtJsQPOv | Bootstrapped Transformer for Offline Reinforcement Learning | Offline reinforcement learning (RL) aims at learning policies from previously collected static trajectory data without interacting with the real environment. Recent works provide a novel perspective by viewing offline RL as a generic sequence generation problem, adopting sequence models such as the Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. However, the training datasets utilized in general offline RL tasks are quite limited and often suffer from insufficient distribution coverage, which could be harmful to training sequence generation models yet has not drawn enough attention in previous works. In this paper, we propose a novel algorithm named Bootstrapped Transformer, which incorporates the idea of bootstrapping and leverages the learned model to self-generate more offline data to further boost the training of the sequence model. We conduct extensive experiments on two offline RL benchmarks and demonstrate that our model can largely remedy the limitations of the existing offline RL training and beat other strong baseline methods. We also analyze the generated pseudo data and the revealed characteristics may shed some light on offline RL training. | Accept | ## Summary
Offline reinforcement learning (RL) algorithms aim to learn policies purely from the states and actions covered in offline datasets, without interacting with an environment. However, in real-world datasets, the coverage can be insufficient to learn good policies. Thus it is an important research direction to improve the sample efficiency of those methods. This paper approaches offline RL from a sequence modeling perspective. The paper adopts a variant of trajectory transformers for data generation, and they investigate two of the main design decisions in those models:
* Sampling methods (autoregressive vs. teacher-forcing based).
* Reuse of model-generated data
The authors validate their idea on two D4RL tasks:
* adroit
* locomotion
## Decision
Overall the paper is well-written and easy to understand. The results and experiments are thorough, and the paper goes for more depth in the experiments rather than breadth. The ideas in the paper are not novel, but the paper does not overclaim its contributions, and results are interesting. As a result, I think both NeurIPS and the broader offline RL communities would benefit from the findings of this paper. I am nominating this paper for acceptance.
The reviewers were very positive about this paper during the rebuttal and discussion period. They all agreed that the paper is a valuable and interesting contribution to the community. The main criticism of this paper that came up during the discussion period was that the idea is just a straightforward combination of existing techniques. However, the idea presented in the paper is coherent, reasonable, and executed well.
The authors provided a very detailed rebuttal with clarifications to the points that reviewers raised. As a result of the rebuttal, some of the reviewers increased their scores. I would recommend that the authors incorporate some of those clarifications into the camera-ready paper version. Some of those are:
* The response to reviewer 9mfT’s question on the results with CQL on additional data generated by Boot is very interesting. I think the authors should include it in the camera-ready version of the paper.
* Additional experiments on other datasets from the D4RL gym environment as asked by reviewer 9mfT.
* The experiments requested by reviewer *hqzJ*.
| train | [
"VJFVUtjarpj",
"0IVBkqqCj4Z",
"Hey8doktY1n",
"oP2nu0ipiyG",
"9zHxQMNzjYj",
"Sw7uKZUwVd",
"28OjC7XGi0",
"2RKYvh_LZae",
"Q6UXf7tKAn2",
"LqeqD9C1jw",
"lXZhaOUF5lO",
"xDhNcEK0l__",
"8AA2A-lxVPyM",
"E07Hl5Fv69A",
"KaMWJ_s_PFF",
"QoAVQzU37ap",
"Vobo9qBR-3A"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > Compared to the previous perturbation-based method S4RL which simply adds random noise on the input states, our method applys perturbations on the whole trajectory data, including states, actions and rewards.\n\nThis perspective is interesting. Good to see that it is better than random perturbation, but current... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"9zHxQMNzjYj",
"E07Hl5Fv69A",
"nips_2022_ZFjPtJsQPOv",
"8AA2A-lxVPyM",
"Sw7uKZUwVd",
"lXZhaOUF5lO",
"Vobo9qBR-3A",
"Vobo9qBR-3A",
"QoAVQzU37ap",
"QoAVQzU37ap",
"QoAVQzU37ap",
"KaMWJ_s_PFF",
"KaMWJ_s_PFF",
"KaMWJ_s_PFF",
"nips_2022_ZFjPtJsQPOv",
"nips_2022_ZFjPtJsQPOv",
"nips_2022_ZFj... |
nips_2022_jv1bis_HYBL | Towards Skill and Population Curriculum for MARL | Recent advances in multi-agent reinforcement learning (MARL) allow agents to coordinate their behaviors in complex environments. However, common MARL algorithms still suffer from scalability and sparse reward issues. One promising approach to resolve them is automated curriculum learning (ACL), where a student (curriculum learner) trains on tasks of increasing difficulty controlled by a teacher (curriculum generator). Unfortunately, in spite of its success, ACL's applicability is restricted due to: (1) lack of a general student framework to deal with the varying number of agents across tasks and the sparse reward problem, and (2) the non-stationarity in the teacher's task due to the ever-changing student strategies. As a remedy for ACL, we introduce a novel automatic curriculum learning framework, Curriculum Oriented Skills and Tactics (COST), adapting curriculum learning to multi-agent coordination. To be specific, we endow the student with population-invariant communication and a hierarchical skill set. Thus, the student can learn cooperation and behavior skills from distinct tasks with a varying number of agents. In addition, we model the teacher as a contextual bandit conditioned by student policies. As a result, a team of agents can change its size while retaining previously acquired skills. We also analyze the inherent non-stationarity of this multi-agent automatic curriculum teaching problem, and provide a corresponding regret bound. Empirical results show that our method improves scalability, sample efficiency, and generalization in MPE and Google Research Football. The source code and the video can be found at https://sites.google.com/view/neurips2022-cost/. | Reject | This paper addresses some important problems in curriculum learning. Reviewers were generally happy with the approach. The main criticisms are around the evaluation.
There is a lot of prior and similar work in the literature, and reviewers felt that the paper lacked strong baselines against state-of-the-art curriculum learning methods (reviewers 1MvZ and mgEj). Reviewer Vwt6 also pointed out issues with evaluation. For example, it was unclear if the difference in reported performance was due to differences in algorithm or in neural net sizes. Including stronger evaluation, and more references to related methods from the literature, would make the paper much stronger. Our conclusion is that the paper is not yet ready for publication. | train | [
"nO2wyh6wfZt",
"OAKAZ7fmlN2",
"fXoH2H9rEUD",
"JJhrEeFuEtl",
"trXoqmfkrB",
"nchnuzdRwZw",
"CvMRXZYDt0",
"0EhnrHlDwnb",
"p5UDOVQt2tt",
"GlLj2aCFHM4",
"7hZk9Gq4Ger",
"9cvdP1GDeP",
"JavAE5erwhm",
"Hm0m-rRCHNZ",
"mGA-1768_iY",
"r8MaNf04EY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors’ valuable response. Some of my concerns are solved and explained but still with some weaknesses:\n1. Many recent works have applied the attention mechanism to MARL like REFIL[1] and UPDeT[2] to solve the varying number problem. I think there should be a paragraph reviewing these works in th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"GlLj2aCFHM4",
"0EhnrHlDwnb",
"trXoqmfkrB",
"p5UDOVQt2tt",
"nchnuzdRwZw",
"CvMRXZYDt0",
"7hZk9Gq4Ger",
"r8MaNf04EY",
"mGA-1768_iY",
"Hm0m-rRCHNZ",
"JavAE5erwhm",
"nips_2022_jv1bis_HYBL",
"nips_2022_jv1bis_HYBL",
"nips_2022_jv1bis_HYBL",
"nips_2022_jv1bis_HYBL",
"nips_2022_jv1bis_HYBL"
... |
nips_2022_RIArO3o_74Z | Learning from Distributed Users in Contextual Linear Bandits Without Sharing the Context | Contextual linear bandits is a rich and theoretically important model that has many practical applications. Recently, this setup gained a lot of interest in applications over wireless where communication constraints can be a performance bottleneck, especially when the contexts come from a large $d$-dimensional space. In this paper, we consider the distributed contextual linear bandit learning problem, where the agents who observe the contexts and take actions are geographically separated from the learner who performs the learning while not seeing the contexts. We assume that contexts are generated from a distribution and propose a method that uses $\approx 5d$ bits per context for the case of unknown context distribution and $0$ bits per context if the context distribution is known, while achieving nearly the same regret bound as if the contexts were directly observable. The former bound improves upon existing bounds by a $\log(T)$ factor, where $T$ is the length of the horizon, while the latter achieves information theoretical tightness. | Accept | The reviewers are overall positive about the theoretical contributions of the paper, for which I share the same (generally) positive evaluation.
Please make sure you address all the reviewers' comments and incorporate them (and any new experimental results, if applicable) in your camera-ready. | train | [
"GgO_haY7oo8",
"AgCta9iY3DO",
"Cv5VpZKXqSB",
"C23-VNxZUo5",
"BzYSYMK02h5",
"nvU33VgGvPL",
"j3zfT0kdni5",
"3qh959SQRb4",
"fYGTi2sdh4b"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for their valuable comments and time to review the paper. Please let us know if you have any additional concerns. \n\nBased on the reviewers comments, we uploaded a revision of the paper with the following edits:\n- Added a discussion on the cooperative agents (see line 81).\n- Discussions ... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2022_RIArO3o_74Z",
"fYGTi2sdh4b",
"3qh959SQRb4",
"3qh959SQRb4",
"fYGTi2sdh4b",
"j3zfT0kdni5",
"nips_2022_RIArO3o_74Z",
"nips_2022_RIArO3o_74Z",
"nips_2022_RIArO3o_74Z"
] |
nips_2022_8SY8ete3zu | Self-explaining deep models with logic rule reasoning | We present SELOR, a framework for integrating self-explaining capabilities into a given deep model to achieve both high prediction performance and human precision. By “human precision”, we refer to the degree to which humans agree with the reasons models provide for their predictions. Human precision affects user trust and allows users to collaborate closely with the model. We demonstrate that logic rule explanations naturally satisfy these requirements with the expressive power required for good predictive performance. We then illustrate how to enable a deep model to predict and explain with logic rules. Our method does not require predefined logic rule sets or human annotations and can be learned efficiently and easily with widely-used deep learning modules in a differentiable way. Extensive experiments show that our method gives explanations closer to human decision logic than other methods while maintaining the performance of the deep learning model. | Accept | The paper deals with the important topic of devising accurate predictive models that are able to distill explanations that can be easily understood by humans. In particular, authors propose a deep neural network that predicts logical rules over which a human-specified prior can be imposed.
The reviewers agreed that the scope of the contribution is relevant and the contribution is timely. At the same time, they highlighted some shortcomings concerning the experimental setting (e.g. some metrics or baselines missing), and the motivation and effect of certain assumptions over the explanations (e.g., are rules consistent).
During the rebuttal, the authors managed to address the aforementioned concerns in a satisfactory way which saw some scores improve (kudos!)
The paper is accepted conditioned on the inclusion of all the experimental material and discussion that emerged during the rebuttal. | train | [
"rFj7ztLelFD",
"TkkRCF42yio",
"UJ3EnJ97Mc",
"yR5lBcwdM9",
"I-A7xPUfiJD",
"VdF7gKUB4i",
"HHPUsemsNP",
"IqnmZVVq-8u",
"k5BXvYsEh_HL",
"kGGfiZHJxsE",
"o8vyev4lhX-",
"N1S8IOdxAFA",
"DPVHTet4lfQ",
"lsylbMC601n",
"zkoiMnF5kn2D",
"S96flafhKjo",
"Ib36ZM8YH-",
"GZgwCTOfj_l",
"tdg88w1P-5i"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for the clarifications. All my questions have been addressed and I'm happy to increase my score. Congratulations",
" We sincerely thank all reviewers for your deep understanding of our method, the expertise you possess, the constructive, considerate and insightful suggestions, and the patience you demons... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"yR5lBcwdM9",
"nips_2022_8SY8ete3zu",
"IqnmZVVq-8u",
"IqnmZVVq-8u",
"kGGfiZHJxsE",
"o8vyev4lhX-",
"N1S8IOdxAFA",
"Ib36ZM8YH-",
"v_UAt7ZKvtu",
"tnCO1FIQ_zY",
"20dywCaSo0MH",
"WleGsbm85iv",
"GZgwCTOfj_l",
"tdg88w1P-5i",
"tdg88w1P-5i",
"tdg88w1P-5i",
"tdg88w1P-5i",
"iSO3mNuXqOd",
"M... |
nips_2022_2gZccSOY04p | Action-modulated midbrain dopamine activity arises from distributed control policies | Animal behavior is driven by multiple brain regions working in parallel with distinct control policies. We present a biologically plausible model of off-policy reinforcement learning in the basal ganglia, which enables learning in such an architecture. The model accounts for action-related modulation of dopamine activity that is not captured by previous models that implement on-policy algorithms. In particular, the model predicts that dopamine activity signals a combination of reward prediction error (as in classic models) and "action surprise," a measure of how unexpected an action is relative to the basal ganglia's current policy. In the presence of the action surprise term, the model implements an approximate form of $Q$-learning. On benchmark navigation and reaching tasks, we show empirically that this model is capable of learning from data driven completely or in part by other policies (e.g. from other brain regions). By contrast, models without the action surprise term suffer in the presence of additional policies, and are incapable of learning at all from behavior that is completely externally driven. The model provides a computational account for numerous experimental findings about dopamine activity that cannot be explained by classic models of reinforcement learning in the basal ganglia. These include differing levels of action surprise signals in dorsal and ventral striatum, decreasing amounts of movement-modulated dopamine activity with practice, and representations of action initiation and kinematics in dopamine activity. It also provides further predictions that can be tested with recordings of striatal dopamine activity. | Accept | This paper presents an off-policy RL model for providing biologically plausible explanations of dopamine activity and basal ganglia recordings.
The reviews express a positive evaluation of the research question and framework and the biological plausibility within the context of the domain application. There are some concerns as to whether these models would compare to more well-established behaviour models. The authors have presented robust clarifications of the main concerns of the reviewers. | train | [
"jV-AzgIlphq",
"Ovp_0-dlX4G",
"UTSkDg85q4o",
"_g80HR4bQAq",
"ZzPl56p0gqZ",
"XpTNOtH1x9E",
"EIv9DIXG8d2",
"5HaVTkl_9Ij"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Reviewer QgmR brings up some good points, but as far as I can tell no fundamental criticisms of the paper. So I'm a bit perplexed why the rating is so low. I'd like to advocate that the rating be increased so that the review and rating are commensurate, particularly in light of the authors' detailed responses. Or... | [
-1,
-1,
-1,
-1,
-1,
6,
5,
9
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"EIv9DIXG8d2",
"5HaVTkl_9Ij",
"_g80HR4bQAq",
"EIv9DIXG8d2",
"XpTNOtH1x9E",
"nips_2022_2gZccSOY04p",
"nips_2022_2gZccSOY04p",
"nips_2022_2gZccSOY04p"
] |
nips_2022_-eHlU74N9E | Causal Inference with Non-IID Data using Linear Graphical Models | Traditional causal inference techniques assume data are independent and identically distributed (IID) and thus ignore interactions among units. However, a unit’s treatment may affect another unit's outcome (interference), a unit’s treatment may be correlated with another unit’s outcome, or a unit’s treatment and outcome may be spuriously correlated through another unit. To capture such nuances, we model the data generating process using causal graphs and conduct a systematic analysis of the bias caused by different types of interactions when computing causal effects. We derive theorems to detect and quantify the interaction bias, and derive conditions under which it is safe to ignore interactions. Put differently, we present conditions under which causal effects can be computed with negligible bias by assuming that samples are IID. Furthermore, we develop a method to eliminate bias in cases where blindly assuming IID is expected to yield a significantly biased estimate. Finally, we test the coverage and performance of our methods through simulations. | Accept | This manuscript offers some potentially important theoretical and practical contributions in the area of causal inference, particularly surrounding the assumption of iid data and how this assumption can be violated in the presence of interference. The manuscript describes a modeling framework for such interactions and derives some theoretical analysis regarding the detection of such interactions and when they may be ignored (and the data nonetheless treated as iid). For a field experiencing considerable growth and momentum, these developments are timely and have a strong potential for impact.
Although there was some disagreement among the reviewers, the balance of opinion was in favor of acceptance, especially after the insightful author rebuttals and ensuing discussion between authors and reviewers. I recommend that the authors take the fruits of these discussions into account when preparing an updated version of this manuscript. | train | [
"IuT2hOcaNJR",
"rtF63sxH5s7",
"gdcXNyc-boN",
"uYBFXWbwr4",
"AYVZQJxtysF",
"iZgY9d0hc89",
"SpI1P-gUbT7",
"IX_u8KXsIq",
"F7MRBSuDIKq",
"Ma92q6kiD7e",
"hWaO6wDGhxo",
"bicGO11SowT"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I will keep my score \"weak accept\". Hope the presentation can be further improved. ",
" Thank you for your reply.",
" Thank you for your reply. We agree and will add the paragraph to the revised paper.",
" The author does partially address my concern about linearity, so I would... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
2
] | [
"SpI1P-gUbT7",
"uYBFXWbwr4",
"AYVZQJxtysF",
"IX_u8KXsIq",
"iZgY9d0hc89",
"bicGO11SowT",
"hWaO6wDGhxo",
"Ma92q6kiD7e",
"nips_2022_-eHlU74N9E",
"nips_2022_-eHlU74N9E",
"nips_2022_-eHlU74N9E",
"nips_2022_-eHlU74N9E"
] |
nips_2022_Sffus7SolE | Off-Beat Multi-Agent Reinforcement Learning | We investigate model-free multi-agent reinforcement learning (MARL) in environments where off-beat actions are prevalent, i.e., all actions have pre-set execution durations. During execution durations, the environment changes are influenced by, but not synchronised with, action execution. Such a setting is ubiquitous in many real-world problems. However, most MARL methods assume actions are executed immediately after inference, which is often unrealistic and can lead to catastrophic failure for multi-agent coordination with off-beat actions. In order to fill this gap, we develop an algorithmic framework for MARL with off-beat actions. We then propose a novel episodic memory, LeGEM, for model-free MARL algorithms. LeGEM builds agents’ episodic memories by utilizing agents’ individual experiences. It boosts multi-agent learning by addressing the challenging temporal credit assignment problem raised by the off-beat actions via our novel reward redistribution scheme, alleviating the issue of non-Markovian reward. We evaluate LeGEM on various multi-agent scenarios with off-beat actions, including Stag-Hunter Game, Quarry Game, Afforestation Game, and StarCraft II micromanagement tasks. Empirical results show that LeGEM significantly boosts multi-agent coordination and achieves leading performance and improved sample efficiency. | Reject | The review scores for this paper were borderline, and while close to the bar for acceptance, I am unable to recommend acceptance at this time, due to the very competitive nature of NeurIPS submissions. Reviewers had mixed opinions on the applicability of the off-beat scenario presented, and concerns about missing baselines (especially using methods from single-agent RL that are better able to handle delayed effects, e.g. alternative memory architectures), clarity of presentation, and scalability. 
Personally, I find the problem setting interesting, but am most concerned about the aforementioned baselines. All of that said, I think this work is promising and encourage the authors to integrate reviewer feedback and resubmit. | train | [
"odiPKokkeE-",
"lhgEhoyOROY",
"z0ew550u6G",
"Be2hwhXCt4",
"m6qgH9JR3_",
"uvUlaanmgZQ",
"-aBnHuOBdyg",
"JdOuyvuWPVL",
"5Oi2S2jkOAk",
"IicceNkPICl",
"_LOamJsG-lj",
"jqJRhJLyDuX",
"yXZOmjbztw",
"OQLyWzvCd1",
"OvqglnNOMw4",
"NhSbO1LrpM9",
"ShbGrwMYbO0",
"FTRUKTRPbTG",
"HMwi9m_Pgvx",
... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"officia... | [
" Dear Reviewer dJd4,\n\nWe deeply appreciate your opinion about off-beat modeling in our paper. We thank you for raising the score. \n\nWe are grateful to have fruitful discussions with you! We introduced off-beat actions in Dec-POMDP, a very popular MARL modelling method. Many cooperative MARL methods, including ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"Be2hwhXCt4",
"yXZOmjbztw",
"m6qgH9JR3_",
"NhSbO1LrpM9",
"HMwi9m_Pgvx",
"fGhtpX0FEw8",
"5Oi2S2jkOAk",
"nips_2022_Sffus7SolE",
"TTzRxF0TgaC",
"oJL3lyeKd70",
"FugpZ_VAhsq",
"fGhtpX0FEw8",
"FTRUKTRPbTG",
"nips_2022_Sffus7SolE",
"nips_2022_Sffus7SolE",
"ShbGrwMYbO0",
"fGhtpX0FEw8",
"tk... |
nips_2022_uvE-fQHA4t_ | On the Importance of Gradient Norm in PAC-Bayesian Bounds | Generalization bounds which assess the difference between the true risk and the empirical risk have been studied extensively. However, to obtain bounds, current techniques use strict assumptions such as a uniformly bounded or a Lipschitz loss function. To avoid these assumptions, in this paper, we follow an alternative approach: we relax uniform bounds assumptions by using on-average bounded loss and on-average bounded gradient norm assumptions. Following this relaxation, we propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities. These inequalities add an additional loss-gradient norm term to the generalization bound, which is intuitively a surrogate of the model complexity. We apply the proposed bound on Bayesian deep nets and empirically analyze the effect of this new loss-gradient norm term on different neural architectures. | Accept | This paper presents a generalization bound that strengthens influential earlier bounds in a clear and meaningful way. They provide supporting experiments. The paper is well written.
| train | [
"P_fta5gAQ1D",
"SoP2MbsDuKV",
"sObEEd4cPe",
"NABpFP5vhp",
"27wJ4q7H0Ea",
"F6WT7ktUl1d",
"3zqb1R9AaXj",
"t0woiMN85_2",
"WmuyakmHoVt",
"a4J-AD7aK4O",
"vfga3xN-WDU",
"bhn03i4OxfA",
"iqZgWZoMNVe",
"3nQtVkLNmwh",
"AbFI8T6kqn",
"08DPyW_n0jx"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Please acknowledge the authors' response.",
" Thanks a lot for reading our rebuttal. If the reviewer would like to elaborate, we are happy to address any remaining concerns.",
" Thanks a lot for reading our rebuttal. We uploaded a revised version of the manuscript. Significant changes are marked in blue. We w... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"08DPyW_n0jx",
"27wJ4q7H0Ea",
"F6WT7ktUl1d",
"3zqb1R9AaXj",
"bhn03i4OxfA",
"a4J-AD7aK4O",
"vfga3xN-WDU",
"nips_2022_uvE-fQHA4t_",
"08DPyW_n0jx",
"AbFI8T6kqn",
"3nQtVkLNmwh",
"iqZgWZoMNVe",
"nips_2022_uvE-fQHA4t_",
"nips_2022_uvE-fQHA4t_",
"nips_2022_uvE-fQHA4t_",
"nips_2022_uvE-fQHA4t_... |
nips_2022_kvtVrzQPvgb | What are the best Systems? New Perspectives on NLP Benchmarking | In Machine Learning, a benchmark refers to an ensemble of datasets associated with one or multiple metrics together with a way to aggregate different systems performances. They are instrumental in {\it (i)} assessing the progress of new methods along different axes and {\it (ii)} selecting the best systems for practical use. This is particularly the case for NLP with the development of large pre-trained models (\textit{e.g.} GPT, BERT) that are expected to generalize well on a variety of tasks. While the community mainly focused on developing new datasets and metrics, there has been little interest in the aggregation procedure, which is often reduced to a simple average over various performance measures. However, this procedure can be problematic when the metrics are on a different scale, which may lead to spurious conclusions. This paper proposes a new procedure to rank systems based on their performance across different tasks. Motivated by the social choice theory, the final system ordering is obtained through aggregating the rankings induced by each task and is theoretically grounded. We conduct extensive numerical experiments (on over 270k scores) to assess the soundness of our approach both on synthetic and real scores (\textit{e.g.} GLUE, EXTREM, SEVAL, TAC, FLICKR). In particular, we show that our method yields different conclusions on state-of-the-art systems than the mean-aggregation procedure while being both more reliable and robust.
| Accept | The paper proposes to address system benchmarking across several tasks as a ranking optimization problem and proposes a solution essentially based on aggregating preference rankings using Borda count. This is validated on synthetic data and using a large amount of real benchmarking on multiple NLP tasks, illustrating the benefits of the proposed method over simple averaging. Reviewers' questions regarding how ties are resolved, how task difficulty is taken into account, and the intended use case were addressed convincingly by the authors. I hope they revise their manuscript so that it is easier to follow: the results section is packed with information, with too many graphs, to the point where finding information is difficult. I would strongly urge the authors to remove some analyses in the appendix. It would be good to also tell us how the various systems are being evaluated. | train | [
"kJllLlM1jC",
"8Iuqs1gwQkX",
"r9TIxEh93uO",
"ZDoIXhvUxlK",
"OJ6tuwTJ7vu",
"Ghun1FEUEg6"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to warmly thank reviewer qERs for carefully reading our manuscript and for their enthusiasm about our work. We indeed hope that our work will be widely adopted by the community as we firmly believe it provides a more robust way to evaluate NLP systems.\n\nHere are our responses to qERs concerns.\n* ... | [
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
4,
3,
3
] | [
"Ghun1FEUEg6",
"OJ6tuwTJ7vu",
"ZDoIXhvUxlK",
"nips_2022_kvtVrzQPvgb",
"nips_2022_kvtVrzQPvgb",
"nips_2022_kvtVrzQPvgb"
] |
nips_2022_dNyCj1AbOb | Autoinverse: Uncertainty Aware Inversion of Neural Networks | Neural networks are powerful surrogates for numerous forward processes.
The inversion of such surrogates is extremely valuable in science and engineering. The most important property of a successful neural inverse method is the performance of its solutions when deployed in the real world, i.e., on the native forward process (and not only the learned surrogate). We propose Autoinverse, a highly automated approach for inverting neural network surrogates. Our main insight is to seek inverse solutions in the vicinity of reliable data which have been sampled from the forward process and used for training the surrogate model. Autoinverse finds such solutions by taking into account the predictive uncertainty of the surrogate and minimizing it during the inversion. Apart from high accuracy, Autoinverse enforces the feasibility of solutions, comes with embedded regularization, and is initialization free. We verify our proposed method through addressing a set of real-world problems in control, fabrication, and design. | Accept | Thanks to the authors for submitting this super interesting work. The reviewer discussion reflected an overall satisfaction with the submission, author responses, and updated manuscript. The additional experiments directly addressed reviewer concerns, and contributed to increased scores and clarifications in the submission. Given the clear consensus and reviewer enthusiasm, I recommend acceptance. Well done! | train | [
"6-MEYP3pkuC",
"fcai1y23jEP",
"PXPjBcnvu1MM",
"N5fbY92nxp",
"O87VEb5nNCV",
"HF34owgnk6s",
"ydynhFpXOeBF",
"r7IQUI2beaw",
"8zk6kfKP2ro",
"hHWznU85TRI"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their comprehensive rebuttal. I believe a lot of my concerns were adequately addressed, and I thank them for running experiments that I understand are difficult to get running in a short time. I think these results make the paper stronger, and I am updating my score accordingly!",
" ### ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"HF34owgnk6s",
"PXPjBcnvu1MM",
"N5fbY92nxp",
"hHWznU85TRI",
"8zk6kfKP2ro",
"r7IQUI2beaw",
"nips_2022_dNyCj1AbOb",
"nips_2022_dNyCj1AbOb",
"nips_2022_dNyCj1AbOb",
"nips_2022_dNyCj1AbOb"
] |
nips_2022_Ql75oqz1npy | Factorized-FL: Personalized Federated Learning with Parameter Factorization & Similarity Matching | In real-world federated learning scenarios, participants could have their own personalized labels which are incompatible with those from other clients, due to using different label permutations or tackling completely different tasks or domains. However, most existing FL approaches cannot effectively tackle such extremely heterogeneous scenarios since they often assume that (1) all participants use a synchronized set of labels, and (2) they train on the same tasks from the same domain. In this work, to tackle these challenges, we introduce Factorized-FL, which makes it possible to effectively tackle label- and task-heterogeneous federated learning settings by factorizing the model parameters into a pair of rank-1 vectors, where one captures the common knowledge across different labels and tasks and the other captures knowledge specific to the task for each local model. Moreover, based on the distance in the client-specific vector space, Factorized-FL performs a selective aggregation scheme to utilize only the knowledge from the relevant participants for each client. We extensively validate our method on both label- and domain-heterogeneous settings, on which it outperforms the state-of-the-art personalized federated learning methods. | Accept | In this paper, the authors study an interesting problem where clients are heterogeneous in both labels and learning tasks/domains. This problem is different from most of the existing pFL settings, and has great potential to broaden the application of FL. To handle this case, the authors propose a novel method based on parameter factorization and similarity matching. Experimental results on both label and domain heterogeneous cases are promising. To this end, I recommend accepting this submission.
However, I do have some concerns regarding the experimental setting of the heterogeneous domain (as pointed out by reviewers BK7j and dsaH). Without proper dataset simulation, the experimental results are less convincing.
This submission can be further improved based on all the discussions between the reviewers and the authors. I hope they find the discussion useful and make this submission a better one.
| train | [
"GFHHyXrLVkt",
"92pgfDaNV3S",
"czkXdQtHwvd",
"ObaEmXve2u",
"Nq7iy1IIjXR",
"Sw_2rqI02zm",
"lZyqgnwqvjaY",
"8xGY2WEy609",
"r5OrJgGqxD",
"ebJjbNLFihI",
"JWgkqFL3JCd",
"KEFC2hd0sf",
"JhplJs591ai",
"y2a72xISaVu"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for getting back to us and raising your score. We responded to your remaining concerns below:\n\n---\n\n**Comment 1**: “The authors have re-emphasized their primary focus on the cross-silo scenario, but the current version is still somewhat unclear as to the positioning of the paper. As shown in respons... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"92pgfDaNV3S",
"y2a72xISaVu",
"nips_2022_Ql75oqz1npy",
"JhplJs591ai",
"KEFC2hd0sf",
"KEFC2hd0sf",
"JWgkqFL3JCd",
"y2a72xISaVu",
"y2a72xISaVu",
"y2a72xISaVu",
"nips_2022_Ql75oqz1npy",
"nips_2022_Ql75oqz1npy",
"nips_2022_Ql75oqz1npy",
"nips_2022_Ql75oqz1npy"
] |
nips_2022_lCGYC7pXWNQ | Learning a Condensed Frame for Memory-Efficient Video Class-Incremental Learning | Recent incremental learning for action recognition usually stores representative videos to mitigate catastrophic forgetting. However, only a few bulky videos can be stored due to the limited memory. To address this problem, we propose FrameMaker, a memory-efficient video class-incremental learning approach that learns to produce a condensed frame for each selected video. Specifically, FrameMaker is mainly composed of two crucial components: Frame Condensing and Instance-Specific Prompt. The former is to reduce the memory cost by preserving only one condensed frame instead of the whole video, while the latter aims to compensate for the lost spatio-temporal details in the Frame Condensing stage. By this means, FrameMaker enables a remarkable reduction in memory but keeps enough information that can be applied to following incremental tasks. Experimental results on multiple challenging benchmarks, i.e., HMDB51, UCF101 and Something-Something V2, demonstrate that FrameMaker can achieve better performance than recent advanced methods while consuming only 20% memory. Additionally, under the same memory consumption conditions, FrameMaker significantly outperforms existing state-of-the-art methods by a convincing margin. | Accept | The reviewers appreciated that the proposed idea is interesting and is well supported by sufficient empirical evidence. There were some concerns in the initial review and the rebuttal successfully addressed most of them. As a result, two reviewers upgraded their ratings. Overall, this paper tackles an important problem of incremental learning and the proposed approach is efficient (memory-wise) and effective (performance-wise). We are happy to recommend acceptance. | val | [
"rNAQifJPql",
"wqocR94qNcc",
"fRr1fU2MZBE",
"S3Qbr13rDYx",
"-24juEGR_0L",
"p-P1lU-Yxh",
"54xKfBv5655",
"zyyl3JOo3oM",
"Zl5eUKHy0",
"Jmpmz3MfyX",
"qg1LorThFsQ",
"ZRVI9pV_Ls",
"1m4WKBLVWRLa",
"FO8BqsRAPF4a",
"LcNpxJlYLsP",
"mERAlhJpGzx",
"gBXwUknaaCp",
"PQg1IAcVlN",
"H8qr0CnYfKa"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to express our deepest gratitude for your insightful and valuable comments, and we also appreciate your adjustment of the final rating. We will further improve our manuscript and explore the rationale behind the Instance-Specific Prompts according to the suggestions from all the reviewers.",
" Tha... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"p-P1lU-Yxh",
"54xKfBv5655",
"H8qr0CnYfKa",
"zyyl3JOo3oM",
"zyyl3JOo3oM",
"mERAlhJpGzx",
"Jmpmz3MfyX",
"Zl5eUKHy0",
"gBXwUknaaCp",
"PQg1IAcVlN",
"H8qr0CnYfKa",
"mERAlhJpGzx",
"mERAlhJpGzx",
"mERAlhJpGzx",
"mERAlhJpGzx",
"nips_2022_lCGYC7pXWNQ",
"nips_2022_lCGYC7pXWNQ",
"nips_2022_l... |
nips_2022_xDaoT2zlJ0r | FINDE: Neural Differential Equations for Finding and Preserving Invariant Quantities | Neural networks have shown promise for modeling dynamical systems from data. Recent models, such as Hamiltonian neural networks, have been designed to ensure known geometric structures of target systems and have shown excellent modeling accuracy. However, in most situations where neural networks learn unknown systems, their underlying structures are also unknown. Even in such cases, one can expect that target systems are associated with first integrals (a.k.a. invariant quantities), which are quantities remaining unchanged over time. First integrals come from the conservation laws of system energy, momentum, and mass, from constraints on states, and from other features of governing equations. By leveraging projection methods and discrete gradient methods, we propose first integral-preserving neural differential equations (FINDE). The proposed FINDE finds and preserves first integrals from data, even in the absence of prior knowledge about the underlying structures. Experimental results demonstrate that the proposed FINDE is able to predict future states of given systems much longer and find various quantities consistent with well-known first integrals of the systems in a unified manner. | Reject | Although all reviewers find an interesting point and the significance of this paper, some reviewers have several critical concerns in the paper such as the readability (very hard to follow, and lacking some important information) and the unconvincing empirical evaluations. Although it seems that the authors could answer parts of these concerns in their responses, those answers seem to require much modification from the original draft. 
Overall, although this paper could be a good paper after reflecting the reviewers' comments, my recommendation for the current form of this paper is rejection from a relative perspective, compared with papers accepted to NeurIPS. | test | [
"uCfmf3xDAAC",
"5MQ1qqX67LK",
"nTVZvr6ck8O",
"XWwowS14P4n",
"0G33Y97szsu",
"h-NuOomFlkt",
"iGNyHKoxbca",
"kHkS572fMD",
"j_ml4FTp4Xe",
"IUo9KR1PrFt",
"qRc9Er1NHg-",
"TCLS6rnLhAQ",
"FI-kuxazeKQ"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear AC,\n\nWe are very grateful to the reviewers for their comments, which have helped us to improve the manuscript significantly.\n\nHowever, we have not yet received a response from Reviewers f3su and kyoV.\nCould you please remind the Reviewers to join the discussion with us?\n\n**Reviewer f3su has a confiden... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
5,
4
] | [
"nips_2022_xDaoT2zlJ0r",
"nTVZvr6ck8O",
"kHkS572fMD",
"FI-kuxazeKQ",
"FI-kuxazeKQ",
"TCLS6rnLhAQ",
"TCLS6rnLhAQ",
"qRc9Er1NHg-",
"IUo9KR1PrFt",
"nips_2022_xDaoT2zlJ0r",
"nips_2022_xDaoT2zlJ0r",
"nips_2022_xDaoT2zlJ0r",
"nips_2022_xDaoT2zlJ0r"
] |
nips_2022_2nJdh_C-UWe | Towards Effective and Interpretable Human-AI Collaboration in MOBA Games | MOBA games, e.g., Dota2 and Honor of Kings, have been actively used as the testbed for the recent AI research on games, and various AI systems have been developed at the human level so far. However, these AI systems merely focus on how to compete with humans, exploring less how to collaborate with humans. To this end, this paper makes the first attempt to investigate human-AI collaboration in MOBA games. In this paper, we propose to enable humans and agents to collaborate through explicit communications by designing an efficient and interpretable Meta-Command Communication-based framework, dubbed MCC, for accomplishing effective human-AI collaboration in MOBA games. The MCC framework consists of two pivotal modules: 1) an interpretable communication protocol, i.e., the Meta-Command, to bridge the communication gap between humans and agents; 2) a meta-command value estimation model, i.e., the Meta-Command Selector, to select a valuable meta-command for each agent to achieve effective human-AI collaboration. Experimental results in Honor of Kings demonstrate that MCC agents can collaborate reasonably well with human teammates and even generalize to collaborate with different levels and numbers of human teammates. Videos are available at https://sites.google.com/view/mcc-demo. | Reject | The paper proposes a mechanism for human-AI collaborative play in the game Honor of Kings. The high-level approach is sensible: communication happens within a small space of "meta-commands", which can be converted to coherent chunks of agent behaviour. The complexity of the system is high, the experiments are done at large scale, and the empirical findings are impressive (in single-human collaboration at least). 
However, the reviewers also point out a large array of concerns to be resolved; of particular concern, and insufficiently addressed, are the ethical issues (informed consent) raised by multiple reviewers.
Overall, the reviewers find that the paper is not ready for publication yet, and I concur. I hope the authors will integrate the rich feedback from this reviewing process in a future (extensively rewritten) iteration. | val | [
"WUkCLmFfKVp",
"vLu0A8HzjF_",
"kut1wzXZxmk",
"Gw6KSRuHJhh",
"URpdevLQ4wn",
"lf4VFgIwBeg",
"jcnpXV_LDQc",
"wc1zVGCBufn",
"ATTx7I6Harh",
"0XsJCXmjNOS",
"w-61YWBMB7",
"tEp7jxozrEi",
"alc5QzpqJ7T",
"VPLK0un5dQ5",
"z7s2guLJxy",
"hMICWPuRQN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you to the authors for their reply to my comments. First, on the ethics issue, thank you to the authors for pointing to the additional methods information in the appendix. However, I don't think that \"a process similar to IRB\" is a sufficient amount of detail to provide on the review process. IRBs (and an... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
3
] | [
"0XsJCXmjNOS",
"URpdevLQ4wn",
"wc1zVGCBufn",
"nips_2022_2nJdh_C-UWe",
"lf4VFgIwBeg",
"hMICWPuRQN",
"z7s2guLJxy",
"ATTx7I6Harh",
"VPLK0un5dQ5",
"w-61YWBMB7",
"alc5QzpqJ7T",
"nips_2022_2nJdh_C-UWe",
"nips_2022_2nJdh_C-UWe",
"nips_2022_2nJdh_C-UWe",
"nips_2022_2nJdh_C-UWe",
"nips_2022_2nJ... |
nips_2022_ecNbEOOtqBU | OGC: Unsupervised 3D Object Segmentation from Rigid Dynamics of Point Clouds | In this paper, we study the problem of 3D object segmentation from raw point clouds. Unlike all existing methods which usually require a large amount of human annotations for full supervision, we propose the first unsupervised method, called OGC, to simultaneously identify multiple 3D objects in a single forward pass, without needing any type of human annotations. The key to our approach is to fully leverage the dynamic motion patterns over sequential point clouds as supervision signals to automatically discover rigid objects. Our method consists of three major components, 1) the object segmentation network to directly estimate multi-object masks from a single point cloud frame, 2) the auxiliary self-supervised scene flow estimator, and 3) our core object geometry consistency component. By carefully designing a series of loss functions, we effectively take into account the multi-object rigid consistency and the object shape invariance in both temporal and spatial scales. This allows our method to truly discover the object geometry even in the absence of annotations. We extensively evaluate our method on five datasets, demonstrating the superior performance for object part instance segmentation and general object segmentation in both indoor and the challenging outdoor scenarios. | Accept | This is an interesting paper that proposes a novel unsupervised approach for object segmentation from point clouds, for rigid objects. The strong results demonstrated can be impactful both for 3d as well as potentially 2d vision. After rebuttal, all 4 expert reviewers are convinced that the paper should be accepted, so the decision to accept the paper was easy. | train | [
"ck7lkT0L7fJ",
"dvRA0B5Beu",
"aNjoxP2kDx4",
"bvTzHyxDMA",
"MTCOVSdxXe8",
"mPH-V5tnQCg",
"_tYFfbaY3zU",
"XRwviiNIcQF",
"vTsWq9gVle",
"n56GUsJ51Vw",
"exefJcKyPQE",
"U3ohfC3Dk_1",
"WzLcLV9pVQm",
"OZce4svjOwx",
"Jr1XjfC81x4",
"Hso6iEZsXVm",
"UPO-QTUtmeT"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviwer 8Drm,\n\nWe really appreciate your very encouraging feedback. \n\nRegards,\nAuthors",
" Dear authors,\n\nThe reviewer is happy that the authors have addressed most of the initial concerns and improved the writing clarity. I do not have further questions at the current stage. I have increased the sc... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"dvRA0B5Beu",
"bvTzHyxDMA",
"UPO-QTUtmeT",
"Hso6iEZsXVm",
"nips_2022_ecNbEOOtqBU",
"_tYFfbaY3zU",
"U3ohfC3Dk_1",
"vTsWq9gVle",
"n56GUsJ51Vw",
"UPO-QTUtmeT",
"Hso6iEZsXVm",
"Jr1XjfC81x4",
"OZce4svjOwx",
"nips_2022_ecNbEOOtqBU",
"nips_2022_ecNbEOOtqBU",
"nips_2022_ecNbEOOtqBU",
"nips_2... |
nips_2022_zfQrX05HzBO | Grow and Merge: A Unified Framework for Continuous Categories Discovery | Although a number of studies are devoted to novel category discovery, most of them assume a static setting where both labeled and unlabeled data are given at once for finding new categories. In this work, we focus on the application scenarios where unlabeled data are continuously fed into the category discovery system. We refer to it as the {\bf Continuous Category Discovery} ({\bf CCD}) problem, which is significantly more challenging than the static setting. A common challenge faced by novel category discovery is that different sets of features are needed for classification and category discovery: class discriminative features are preferred for classification, while rich and diverse features are more suitable for new category mining. This challenge becomes more severe for dynamic setting as the system is asked to deliver good performance for known classes over time, and at the same time continuously discover new classes from unlabeled data. To address this challenge, we develop a framework of {\bf Grow and Merge} ({\bf GM}) that works by alternating between a growing phase and a merge phase: in the growing phase, it increases the diversity of features through a continuous self-supervised learning for effective category mining, and in the merging phase, it merges the grown model with a static one to ensure satisfying performance for known classes. Our extensive studies verify that the proposed GM framework is significantly more effective than the state-of-the-art approaches for continuous category discovery. | Accept | This paper proposes a method for continuous category discovery in novel category discovery tasks, assuming a dynamic setting in which unlabeled data are continuously fed to the model for category discovery. 
Specifically, it is a technique that balances the discovery of new categories against merging newly discovered information into the knowledge already known to the model, and its usefulness has been shown experimentally. These results contribute to the progress of research in this field. | train | [
"59agUGGVWQ",
"Z2TfHExZTHB",
"gj78ROpiN0P",
"UzRg0lHN7VB",
"by-97t7fjG",
"HKD_orQS1kx",
"PR1EVKHg4RI",
"Q3MhMlNbJu0",
"OY20rVLuqQD",
"7AmuyCRftbU",
"Tx0qWTJs597",
"NIq0g1RioPB",
"I8q-yY9HSln",
"vzIwlSnvgX7",
"lF6o51SY8bO",
"ou_KAw0JSHR",
"YPGlzaNpjyh",
"c9oKe-YWbli",
"wv1Tf1wo15K... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"officia... | [
" Dear Chairs and Reviewers, \n\nHope this message finds you well.\n\nWith the closing of the discussion period, we present a brief summary of our discussion with the reviewers as an overview for reference.\n\nFirst of all, we thank the reviewers for their careful reading and valuable feedback. We are encouraged t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"nips_2022_zfQrX05HzBO",
"by-97t7fjG",
"by-97t7fjG",
"by-97t7fjG",
"ou_KAw0JSHR",
"PR1EVKHg4RI",
"7AmuyCRftbU",
"I8q-yY9HSln",
"I8q-yY9HSln",
"I8q-yY9HSln",
"lYU0uqmKV6y",
"AKsWkHClrrR",
"c9oKe-YWbli",
"lYU0uqmKV6y",
"lYU0uqmKV6y",
"lYU0uqmKV6y",
"DyFdR-nYCL2",
"DyFdR-nYCL2",
"AK... |
nips_2022_LvyJX20Rll | Factuality Enhanced Language Models for Open-Ended Text Generation | Pretrained language models (LMs) are susceptible to generating text with nonfactual information. In this work, we measure and improve the factual accuracy of large-scale LMs for open-ended text generation. We design the FactualityPrompts test set and metrics to measure the factuality of LM generations. Based on that, we study the factual accuracy of LMs with parameter sizes ranging from 126M to 530B. Interestingly, we find that larger LMs are more factual than smaller ones, although a previous study suggests that larger LMs can be less truthful in terms of misconceptions. In addition, popular sampling algorithms (e.g., top-p) in open-ended text generation can harm the factuality due to the ``uniform randomness'' introduced at every sampling step. We propose the factual-nucleus sampling algorithm that dynamically adapts the randomness to improve the factuality of generation while maintaining quality. Furthermore, we analyze the inefficiencies of the standard training method in learning correct associations between entities from factual text corpus (e.g., Wikipedia). We propose a factuality-enhanced training method that uses TopicPrefix for better awareness of facts and sentence completion as the training objective, which can vastly reduce the factual errors. | Accept | This paper proposed a new dataset and a new benchmark for factuality of open-ended text generation. Based on an analysis of the factuality of language models with different sizes using this benchmark, this paper proposed a modified top-k sampling strategy and a modified training method to improve the factuality of text generation. The work is solid. Three of four reviewers give positive ratings while reviewer 4pXB gives a negative one. The author(s) addressed the concerns adequately but reviewer 4pXB did not reply. | train | [
"l8ma8r7zFF",
"agwxQpWAKIE",
"o2FKs0rJrZ",
"M9yuTIHC_hTx",
"sAJ6rtaw1dn",
"McYv8ejVBpx",
"LzR3qOWWqjDB",
"PW2FU6MCwzK",
"KGwLu5uNmXw",
"nKxhPaM4FJR",
"tBeuEcEZNI4",
"eOInRiR1iRn",
"kmV5vmrcbOS"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nMany thanks again for your review! \n\nWe hope our response could address your major concerns. In particular, per your nice suggestion, we have included the human evaluation results in Appendix A of the paper revision. For your convenience, we also introduce the results here.\n\nWe collect huma... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"eOInRiR1iRn",
"kmV5vmrcbOS",
"kmV5vmrcbOS",
"nKxhPaM4FJR",
"nKxhPaM4FJR",
"kmV5vmrcbOS",
"kmV5vmrcbOS",
"eOInRiR1iRn",
"tBeuEcEZNI4",
"nips_2022_LvyJX20Rll",
"nips_2022_LvyJX20Rll",
"nips_2022_LvyJX20Rll",
"nips_2022_LvyJX20Rll"
] |
nips_2022_37Rf7BTAtAM | Domain Generalization by Learning and Removing Domain-specific Features | Deep Neural Networks (DNNs) suffer from domain shift when the test dataset follows a distribution different from the training dataset. Domain generalization aims to tackle this issue by learning a model that can generalize to unseen domains. In this paper, we propose a new approach that aims to explicitly remove domain-specific features for domain generalization. Following this approach, we propose a novel framework called Learning and Removing Domain-specific features for Generalization (LRDG) that learns a domain-invariant model by tactically removing domain-specific features from the input images. Specifically, we design a classifier to effectively learn the domain-specific features for each source domain, respectively. We then develop an encoder-decoder network to map each input image into a new image space where the learned domain-specific features are removed. With the images output by the encoder-decoder network, another classifier is designed to learn the domain-invariant features to conduct image classification. Extensive experiments demonstrate that our framework achieves superior performance compared with state-of-the-art methods. | Accept | After the author-reviewer discussion, Reviewer 8K1L shows strong support for the paper, and Reviewer MBUN finds most concerns addressed and upgrades the score to Weak accept. Reviewer T9VV has some remaining concerns, but does agree that the proposed method seems to work empirically. After careful consideration, AC recommends accepting the paper. | train | [
"EInxcr50-_O",
"zs-9VqbGnFM",
"eyrbbeEhk0",
"MNQEO7txjX-",
"zdMY249mLh2",
"30Jj2FCsTL9",
"9TjSpMrMCRn",
"Kt7dOk1Ifks",
"0_5hZNVIWae",
"A9OylQZF4Yx",
"ionaUR66Tj"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The suggested assumption makes our framework and the whole paper more rigorous. We will surely incorporate it into our paper. Thanks again for all the comments.",
" I thank the authors for their detailed responses to both my concerns and other reviewers' as well. I believe this specific assumption should be mad... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"zs-9VqbGnFM",
"30Jj2FCsTL9",
"Kt7dOk1Ifks",
"zdMY249mLh2",
"9TjSpMrMCRn",
"ionaUR66Tj",
"A9OylQZF4Yx",
"0_5hZNVIWae",
"nips_2022_37Rf7BTAtAM",
"nips_2022_37Rf7BTAtAM",
"nips_2022_37Rf7BTAtAM"
] |
nips_2022_LGDfv0U7MJR | To update or not to update? Neurons at equilibrium in deep models | Recent advances in deep learning optimization showed that, with some a-posteriori information on fully-trained models, it is possible to match the same performance by simply training a subset of their parameters. Such a discovery has a broad impact from theory to applications, driving the research towards methods to identify the minimum subset of parameters to train without look-ahead information exploitation. However, the methods proposed do not match the state-of-the-art performance, and rely on unstructured sparsely connected models.
In this work we shift our focus from the single parameters to the behavior of the whole neuron, exploiting the concept of neuronal equilibrium (NEq). When a neuron is in a configuration at equilibrium (meaning that it has learned a specific input-output relationship), we can halt its update; on the contrary, when a neuron is at non-equilibrium, we let its state evolve towards an equilibrium state, updating its parameters. The proposed approach has been tested on different state-of-the-art learning strategies and tasks, validating NEq and observing that the neuronal equilibrium depends on the specific learning setup. | Accept | Despite there not being complete agreement on the novelty of the method presented in the paper, most Reviewers praised the idea of proposing an early stopping scheme that is based on reusing concepts from the pruning literature but with the innovation of shifting the focus from connections to units.
A major criticism raised in the initial reviews stemmed from doubts about the contribution of the work, and in particular its practical benefit in terms of training speedup. However, the rebuttals did a good job of dispelling these doubts.
Following the rebuttals, the paper garnered unanimously positive scores among Reviewers, owing to its significance for bridging the pruning literature with practical training speedup supported by convincing empirical evidence. | train | [
"gtdQUehsuUY",
"-yIQZA3uey6",
"nBnxXfhR5ux",
"pnpXDSnG_3m",
"X1SMZjBtJJI",
"NU3PLalJySCn",
"rGvQm-2ehrU",
"OKegYh7RtmN",
"JwTEKbTjXs",
"Ig2pYMCSamb",
"u9iqIaGNod4"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your kind feedback. It is very well appreciated, and your suggestions really improve the quality of the paper. We address below your additional comments.\n\n[**ADAM vs SGD**] Very interesting question! In the paper we perform, in Sec.4.1.1, the comparison between ADAM and SGD *with momentum* (as ind... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"nBnxXfhR5ux",
"rGvQm-2ehrU",
"X1SMZjBtJJI",
"Ig2pYMCSamb",
"u9iqIaGNod4",
"JwTEKbTjXs",
"OKegYh7RtmN",
"nips_2022_LGDfv0U7MJR",
"nips_2022_LGDfv0U7MJR",
"nips_2022_LGDfv0U7MJR",
"nips_2022_LGDfv0U7MJR"
] |
nips_2022_L7n7BPTVAr3 | Leveraging Inter-Layer Dependency for Post-Training Quantization | Prior works on Post-training Quantization (PTQ) typically separate a neural network into sub-nets and quantize them sequentially. This process pays little attention to the dependency across the sub-nets, hence is less optimal. In this paper, we propose a novel Network-Wise Quantization (NWQ) approach to fully leverage inter-layer dependency. NWQ faces a larger scale combinatorial optimization problem of discrete variables than in previous works, which raises two major challenges: over-fitting and a discrete optimization problem. NWQ alleviates over-fitting via an Activation Regularization (AR) technique, which better controls the activation distribution. To optimize discrete variables, NWQ introduces Annealing Softmax (ASoftmax) and Annealing Mixup (AMixup) to progressively transition quantized weights and activations from continuity to discretization, respectively. Extensive experiments demonstrate that NWQ outperforms previous state-of-the-art by a large margin: 20.24\% for the challenging configuration of MobileNetV2 with 2 bits on ImageNet, pushing extremely low-bit PTQ from feasibility to usability. In addition, NWQ is able to achieve competitive results with only 10\% computation cost of previous works. | Accept | This paper studies post-training quantization by proposing Network-Wise Quantization (NWQ), an end-to-end quantization approach that takes into account relationships between layers rather than treating layers independently. Using this approach, the paper demonstrates compelling empirical gains across a number of architectures and compression factors. Reviewers recognized the practical success of the approach as demonstrated by these empirical results and praised the clarity of the manuscript. However, there were concerns regarding the novelty of the approach and whether the proposed method is simply a composition of previous methods. 
While I understand these concerns, I think there is a significant delta between this work and previous approaches, especially when taking into account the markedly improved performance and the challenges of determining how to apply these lines of thinking to end-to-end training. The authors also expanded their discussion of these works in their updated manuscript, clarifying the differences. There were also concerns regarding the hyperparameter tuning, but the authors clarified in their response that the large majority of experiments used a constant set of hyperparameters, suggesting that these results are not simply the effect of tuning. Altogether, I think this paper makes an impactful contribution and will be a valuable addition to the conference. | train | [
"A3wyUPbvjH",
"bUMGq3TZbg7",
"UMuQAtcyQvR",
"WkCjojhK-c",
"uatSzqV9xDh",
"TLyuIjh8Dg0",
"G-2DRPgk7Xs",
"9KCCctOAx8x",
"GyDbwGKyhDD",
"5YgMh9DKuf8j",
"FGw_m5Ms4iE",
"HqLmHm7bZrO",
"c32t1OI2a2-",
"1SeAfwinrQs",
"gMP-QWJpBp4"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your promotion and your thoughtful advice.\n\nWe didn't include the mathematical support because we thought inter-layer dependency is one of the natures of CNNs. However, we will include the mathematical support in our final version to make it more theoretically convincing .\n\nIn fact, the mathematica... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
5
] | [
"bUMGq3TZbg7",
"GyDbwGKyhDD",
"c32t1OI2a2-",
"uatSzqV9xDh",
"5YgMh9DKuf8j",
"gMP-QWJpBp4",
"gMP-QWJpBp4",
"1SeAfwinrQs",
"c32t1OI2a2-",
"HqLmHm7bZrO",
"nips_2022_L7n7BPTVAr3",
"nips_2022_L7n7BPTVAr3",
"nips_2022_L7n7BPTVAr3",
"nips_2022_L7n7BPTVAr3",
"nips_2022_L7n7BPTVAr3"
] |
nips_2022_2ZNPedOfwB | Zeroth-Order Hard-Thresholding: Gradient Error vs. Expansivity | $\ell_0$ constrained optimization is prevalent in machine learning, particularly for high-dimensional problems, because it is a fundamental approach to achieve sparse learning. Hard-thresholding gradient descent is a dominant technique to solve this problem. However, first-order gradients of the objective function may be either unavailable or expensive to calculate in a lot of real-world problems, where zeroth-order (ZO) gradients could be a good surrogate. Unfortunately, whether ZO gradients can work with the hard-thresholding operator is still an unsolved problem.
To solve this puzzle, in this paper, we focus on the $\ell_0$ constrained black-box stochastic optimization problems, and propose a new stochastic zeroth-order gradient hard-thresholding (SZOHT) algorithm with a general ZO gradient estimator powered by a novel random support sampling. We provide the convergence analysis of SZOHT under standard assumptions. Importantly, we reveal a conflict between the deviation of ZO estimators and the expansivity of the hard-thresholding operator, and provide a theoretical minimal value of the number of random directions in ZO gradients. In addition, we find that the query complexity of SZOHT is independent or weakly dependent on the dimensionality under different settings. Finally, we illustrate the utility of our method on a portfolio optimization problem as well as black-box adversarial attacks. | Accept | The paper considers stochastic optimization in the presence of an L0 ball constraint. All reviewers agree that the theoretical derivations are solid and numerical experiments show promising performance. Although the algorithm might have to be finely tuned to perform well, it contains many novel and interesting elements that warrants its publication. | train | [
"gVaYJOWToUW",
"30kXMvWkBTU",
"ukfl7KrjnWI",
"0kyU8xY5J3G",
"Mtka0NTi9mh",
"j6SBjLXIPOf",
"Ivnss9Xa-rf",
"Nd19rJscaVV",
"iSxtQI4Cm0-",
"GaExOJqdUZ0",
"ZYfQfUxFl_b"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your feedback and improved score.",
" Thank you very much for your feedback and improved score.",
" Hi,\nYour rebuttal has addressed all of my main concerns!",
" Dear authors,\n\nThank you for the detailed response for my clarification questions. The response makes sense to me. I hav... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"0kyU8xY5J3G",
"ukfl7KrjnWI",
"GaExOJqdUZ0",
"Mtka0NTi9mh",
"ZYfQfUxFl_b",
"GaExOJqdUZ0",
"iSxtQI4Cm0-",
"nips_2022_2ZNPedOfwB",
"nips_2022_2ZNPedOfwB",
"nips_2022_2ZNPedOfwB",
"nips_2022_2ZNPedOfwB"
] |
nips_2022_ECQ-O1q0saD | Multi-view Subspace Clustering on Topological Manifold | Multi-view subspace clustering aims to exploit a common affinity representation by means of self-expression. Plenty of works have been presented to boost the clustering performance, yet they seldom consider the topological structure in data, which is crucial for clustering data on a manifold. Orthogonal to existing works, in this paper, we argue that it is beneficial to explore the implied data manifold by learning the topological relationship between data points. Our model seamlessly integrates multiple affinity graphs into a consensus one with the topological relevance considered. Meanwhile, we manipulate the consensus graph by a connectivity constraint such that the connected components precisely indicate different clusters. Hence our model is able to directly obtain the final clustering result without reliance on any label discretization strategy as previous methods do. Experimental results on several benchmark datasets illustrate the effectiveness of the proposed model compared to state-of-the-art competitors in clustering performance. | Accept | All reviewers agree that this paper is innovative and well-written, so I recommend acceptance. | train | [
"JiZKOi1Vcu1",
"S8DBwGyQ6eq",
"q1xl-8OkzSR",
"_ctsDT7NGpJ",
"HyTDiA6Lhcq",
"lqmnvUrGwHs",
"2ECC0nAk8YB",
"tcWvS4Mi1OM",
"UulMzd57XbK",
"u-2JvcMJrMd",
"su9q5sxLJRq"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 1etV:\n\nThanks a lot for reviewing our paper and giving us valuable comments. \n\nWe have tried our best to answer all the questions according to the comments. We sincerely hope that our responses can address all your concerns. Is there anything that needs us to further clarify the given concerns? ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"su9q5sxLJRq",
"q1xl-8OkzSR",
"tcWvS4Mi1OM",
"HyTDiA6Lhcq",
"2ECC0nAk8YB",
"su9q5sxLJRq",
"u-2JvcMJrMd",
"UulMzd57XbK",
"nips_2022_ECQ-O1q0saD",
"nips_2022_ECQ-O1q0saD",
"nips_2022_ECQ-O1q0saD"
] |
nips_2022_WuJfPCoj7pT | Globally Convergent Policy Search for Output Estimation | We introduce the first direct policy search algorithm which provably converges to the globally optimal dynamic filter for the classical problem of predicting the outputs of a linear dynamical system, given noisy, partial observations. Despite the ubiquity of partial observability in practice, theoretical guarantees for direct policy search algorithms, one of the backbones of modern reinforcement learning, have proven difficult to achieve. This is primarily due to the degeneracies which arise when optimizing over filters that maintain an internal state. In this paper, we provide a new perspective on this challenging problem based on the notion of informativity, which intuitively requires that all components of a filter’s internal state are representative of the true state of the underlying dynamical system. We show that informativity overcomes the aforementioned degeneracy. Specifically, we propose a regularizer which explicitly enforces informativity, and establish that gradient descent on this regularized objective – combined with a “reconditioning step” – converges to the globally optimal cost at an $O(1/T)$ rate. | Accept | The paper establishes the first global optimum convergence guarantee for solving the output estimation (OE) problem of a linear dynamical system through a model-free gradient descent algorithm. The contribution is novel and of interest to the community. All the reviewers are convinced by the authors' answers and agree that the paper is a solid contribution. | train | [
"jXcTvH68jJC",
"uzeHu_mF_m4",
"1vZLrSuGgiV",
"sRAZDQo7Bd7",
"DdEXrvDRZoP",
"jEy93sjBmM",
"x29muHdyaED",
"CIhPdQy3IhG",
"DMP-IE0Kdbx",
"CCN75zwEhQU",
"9cn9kCrmSvU",
"ujCc5IvgqDy",
"JPT2KEPYOb",
"h6odMIVzO4y",
"mY074lubxwI",
"mcgfkyV8Lw"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for this feedback. We are in complete agreement; it would be much better to introduce the connection between informativity and gradient dominance earlier in the paper. In the revised manuscript (cf. the last sentence of \"Our techniques\" in Section 1.1), when we talk about quantitative notion... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
2,
1
] | [
"sRAZDQo7Bd7",
"CIhPdQy3IhG",
"DMP-IE0Kdbx",
"9cn9kCrmSvU",
"jEy93sjBmM",
"mcgfkyV8Lw",
"mY074lubxwI",
"h6odMIVzO4y",
"JPT2KEPYOb",
"nips_2022_WuJfPCoj7pT",
"ujCc5IvgqDy",
"nips_2022_WuJfPCoj7pT",
"nips_2022_WuJfPCoj7pT",
"nips_2022_WuJfPCoj7pT",
"nips_2022_WuJfPCoj7pT",
"nips_2022_WuJ... |
nips_2022_hgAuik7LoTh | Learning Distributions Generated by Single-Layer ReLU Networks in the Presence of Arbitrary Outliers | We consider a set of data samples such that a fraction of the samples are arbitrary outliers, and the rest are the output samples of a single-layer neural network with rectified linear unit (ReLU) activation. Our goal is to estimate the parameters (weight matrix and bias vector) of the neural network, assuming the bias vector to be non-negative. We estimate the network parameters using the gradient descent algorithm combined with either the median- or trimmed mean-based filters to mitigate the effect of the arbitrary outliers. We then prove that $\tilde{O}\left( \frac{1}{p^2}+\frac{1}{\epsilon^2p}\right)$ samples and $\tilde{O}\left( \frac{d^2}{p^2}+ \frac{d^2}{\epsilon^2p}\right)$ time are sufficient for our algorithm to estimate the neural network parameters within an error of $\epsilon$ when the outlier probability is $1-p$, where $2/3<p \leq 1$, and the problem dimension is $d$ (with log factors being ignored here). Our theoretical and simulation results provide insights into the training complexity of ReLU neural networks in terms of the probability of outliers and problem dimension. | Accept | This paper learns the distributions created by single layer ReLU generative models. The paper extends previous work on this problem in the presence of outliers under Huber's contamination model.
The key innovation of the new algorithm is the use of robust estimation and its combination with the previous method to robustify it. The authors argued about the technical issues that arise in robustifying the work of Wu et al. and did a good job in addressing all the reviewer comments.
This paper is novel, well written and well motivated.
Unfortunately the problem setting is still narrow because the authors can only learn single layer generative models, but still this turns out to be non-trivial. The proposed solution involves solid technical contributions that were well explained.
| train | [
"NVeMIfgr-F",
"ZkmCRExSLSS",
"9mLiJagbTFM",
"_4AXil03Q1I7",
"4OOlJ4d6Bgf",
"r032d0GmwBj",
"VJzy38Q-0sD",
"igKWfiUgV3kU",
"ftlVl3YoiM9",
"QwO_FDl4gGj",
"LeghoLNSg3H",
"ZwMj-jrSBU"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response showing the novelty of the work. I will raise my score from a 5 to a 6.",
" >*UPDATE: After going through the rebuttal, I believe that the authors have responded convincingly. I still believe that this is somewhat incremental in the light of previous work Wu et al. 2019 and Daskalaki... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"r032d0GmwBj",
"9mLiJagbTFM",
"4OOlJ4d6Bgf",
"4OOlJ4d6Bgf",
"ZwMj-jrSBU",
"VJzy38Q-0sD",
"LeghoLNSg3H",
"ftlVl3YoiM9",
"QwO_FDl4gGj",
"nips_2022_hgAuik7LoTh",
"nips_2022_hgAuik7LoTh",
"nips_2022_hgAuik7LoTh"
] |
nips_2022_T1dhAPdS-- | Why do We Need Large Batchsizes in Contrastive Learning? A Gradient-Bias Perspective | Contrastive learning (CL) has been the de facto technique for self-supervised representation learning (SSL), with impressive empirical success such as multi-modal representation learning. However, traditional CL loss only considers negative samples from a minibatch, which could cause biased gradients due to the non-decomposibility of the loss. For the first time, we consider optimizing a more generalized contrastive loss, where each data sample is associated with an infinite number of negative samples. We show that directly using minibatch stochastic optimization could lead to gradient bias. To remedy this, we propose an efficient Bayesian data augmentation technique to augment the contrastive loss into a decomposable one, where standard stochastic optimization can be directly applied without gradient bias. Specifically, our augmented loss defines a joint distribution over the model parameters and the augmented parameters, which can be conveniently optimized by a proposed stochastic expectation-maximization algorithm. Our framework is more general and is related to several popular SSL algorithms. We verify our framework on both small scale models and several large foundation models, including SSL of ImageNet and SSL for vision-language representation learning. Experiment results indicate the existence of gradient bias in all cases, and demonstrate the effectiveness of the proposed method on improving previous state of the arts. Remarkably, our method can outperform the strong MoCo-v3 under the same hyper-parameter setting with only around half of the minibatch size; and also obtains strong results in the recent public benchmark ELEVATER for few-shot image classification. | Accept | This paper argues that using mini-batch updates for contrastive learning leads to a gradient bias problem. 
The authors take a probabilistic viewpoint and propose an efficient Bayesian data augmentation technique which leads to decomposing the loss function. They further come up with an efficient sEM algorithm for optimizing the new objective. Finally, their large-scale experiments indicate that their proposed method improves over existing contrastive learning techniques.
Reviewers are in agreement that the ideas presented in the paper are novel. Furthermore, the presentation and arguments provided in the paper are reasonable. The main concern is that based on the provided empirical results, the gap between the proposed method and other techniques such as SimCLR shrinks significantly as the batch-size increases. That being said, the interesting discussions about gradient bias, the novelty and the improvement over SimCLR in small-batch setting is more than enough reason for accepting this paper for publication at NeurIPS. | train | [
"2q3LCT3EER",
"9NCLyPwJS9A",
"7mwAdi8FN1XZ",
"SqvbvGhDvX",
"c833LqbX0hK",
"q49_l75zEvM",
"21E7fmzVded",
"MwnYYeO04m6",
"k1oCo0EaW5g",
"ohAV6AlNsJ5t",
"ZpDlHEIzbvs",
"ct54yojAgDM",
"k3AFfqsJ9pI",
"ib-m9zRZoGd",
"wayD3TSOfRc",
"dsabsu7IgT5",
"8bajuy9HZ95",
"CLPEy7zGtV",
"gZPpxR6dk2... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" Dear reviewers, thanks again for your helpful comments. As suggested by Reviewer ZEaA, we provide extra comparisons of our method with more baselines on CIFAR-10, including the popular contrastive learning methods (SimCLR, DCL, NNCLR, SwaV) and non-contrastive learning methods (BYOL, DINO, BarlowTwins). We used t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2022_T1dhAPdS--",
"SqvbvGhDvX",
"SjRqz4Jrhb",
"ib-m9zRZoGd",
"k1oCo0EaW5g",
"ohAV6AlNsJ5t",
"ZpDlHEIzbvs",
"ct54yojAgDM",
"k3AFfqsJ9pI",
"wayD3TSOfRc",
"dsabsu7IgT5",
"8bajuy9HZ95",
"ihXSLKbUFU-",
"vfOaXw3--Hm",
"dsabsu7IgT5",
"8bajuy9HZ95",
"SjRqz4Jrhb",
"gZPpxR6dk2o",
"ni... |
nips_2022_25XwID3wKsi | Follow-the-Perturbed-Leader for Adversarial Markov Decision Processes with Bandit Feedback | We consider regret minimization for Adversarial Markov Decision Processes (AMDPs), where the loss functions are changing over time and adversarially chosen, and the learner only observes the losses for the visited state-action pairs (i.e., bandit feedback). While there has been a surge of studies on this problem using Online-Mirror-Descent (OMD) methods, very little is known about the Follow-the-Perturbed-Leader (FTPL) methods, which are usually computationally more efficient and also easier to implement since it only requires solving an offline planning problem. Motivated by this, we take a closer look at FTPL for learning AMDPs, starting from the standard episodic finite-horizon setting. We find some unique and intriguing difficulties in the analysis and propose a workaround to eventually show that FTPL is also able to achieve near-optimal regret bounds in this case. More importantly, we then find two significant applications: First, the analysis of FTPL turns out to be readily generalizable to delayed bandit feedback with order-optimal regret, while OMD methods exhibit extra difficulties (Jin et al., 2022). Second, using FTPL, we also develop the first no-regret algorithm for learning communicating AMDPs in the infinite-horizon setting with bandit feedback and stochastic transitions. Our algorithm is efficient assuming access to an offline planning oracle, while even for the easier full-information setting, the only existing algorithm (Chandrasekaran and Tewari, 2021) is computationally inefficient. | Accept | This paper has received uniformly good reviews, and the reviewers were all happy with the author responses as well. Thus, the paper is clearly suitable for being published at NeurIPS 2022. I encourage the authors to execute all the small updates promised in the rebuttal period when preparing the final version of the paper. | train | [
"zEEB0P2qwVd",
"Eih69PKh2xS",
"iZq78bp0sBI",
"zGiSNS2-Bge",
"RgfnfNLWggs",
"wsxh26h02GG",
"0GUznCOXTtj",
"kCAZK7Jcre",
"1UUsR3Kqoy",
"mJV3e2RYtDG",
"d2kRCCnFnM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I will continue to think this paper deserves a 7.",
" My concerns regarding the novelty, presentation and some technical details are addressed. I would like to keep my score.",
" The response addresses my questions. I would like to thank the authors for explanation. ",
" Thank you f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"zGiSNS2-Bge",
"RgfnfNLWggs",
"wsxh26h02GG",
"d2kRCCnFnM",
"mJV3e2RYtDG",
"1UUsR3Kqoy",
"kCAZK7Jcre",
"nips_2022_25XwID3wKsi",
"nips_2022_25XwID3wKsi",
"nips_2022_25XwID3wKsi",
"nips_2022_25XwID3wKsi"
] |
nips_2022_kuJQ_NwJO8_ | Knowledge-Consistent Dialogue Generation with Knowledge Graphs | Pre-trained generative language models have achieved impressive performances on dialogue generation tasks. However, when generating responses for a conversation that requires complicated factual knowledge, they are far from perfect, due to the lack of mechanisms to retrieve, encode, and reflect the knowledge in the generated responses. Unlike the methods working with unstructured text that are inefficient in retrieving and encoding the knowledge, some of the knowledge-grounded dialogue generation methods tackle this problem by leveraging the structured knowledge from the Knowledge Graphs (KGs). However, existing methods do not guarantee that the language model utilizes a relevant piece of knowledge for the given dialogue, and that the model generates dialogues which are consistent with the knowledge, from the KG. To overcome this limitation, we propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating knowledge-consistent, context-relevant dialogues with a KG. Specifically, our method first retrieves the relevant subgraph from the given KG, and then enforces consistency across the facts by perturbing their word embeddings conditioned on the retrieved subgraph. Then, it learns the latent representation space using graph-text multi-modal contrastive learning which ensures that the generated texts have high similarity to the retrieved subgraphs. We validate the performance of our SURGE framework on the OpendialKG dataset and show that our method does generate high-quality dialogues that faithfully reflect the knowledge from the KG. | Reject | The paper presents a method for dialogue generation with knowledge graphs, where the goal is to increase faithfulness towards the provided knowledge graph. 
The research topic, while somewhat narrow, is well-motivated, and they propose a context-relevant subgraph retrieval method (with a specialized graph encoding method that preserves permutation and relation-inversion invariance, and a new loss function for generation that encourages consistency with the knowledge subgraph), which shows promising performance on the OpendialKG benchmark dataset.
While I do see some value and contributions in this paper, I am not convinced this merits a publication at NeurIPS — each component of the pipeline (contrastive learning, etc.) is not novel, and it addresses a fairly small domain. The case would be stronger if they showed the method can be applied to more diverse settings, for example, different datasets (I saw that results on KOMODIS are presented in the supplementary material, but they are not explained carefully and are not very convincing as is; I’d recommend integrating them into the main paper). I also suggest the novel metric (Section 4) should be more carefully verified before being used to evaluate systems — e.g., how well it aligns with human evaluation, etc.
It is unfortunate that two reviewers did not respond to the author's responses, the area chair examined the paper independently (and also read through the review/responses) to write this meta-review. | train | [
"jO4KgtRAAa",
"NL8jcH_P4Vy",
"rLW7wnLNzI",
"BV9ar46TXV7",
"A-kTUsxMKEa",
"QOj5LqchV3",
"UjYK7U8BjZ5",
"slLSbdz0u50",
"za8CcEuFlha",
"XpgpGmZ9YZ",
"fTILYY7vv1we",
"ZZz6Y3bY3DG",
"T584_mZJym",
"8rfhk3hB9yV",
"GloJyo0pxFa",
"O_a4WiIfOC",
"UZeM8lu1A6V",
"q0WJr8-8IQ_"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer JGjW,\n\nThank you for your valuable comments and suggestions again. We are looking forward to any further discussions that would help your re-assessment of our work.\n\nThanks, Authors",
" Dear Reviewer 75s5,\n\nThank you for your valuable comments and suggestions again. We are looking forward to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"q0WJr8-8IQ_",
"O_a4WiIfOC",
"q0WJr8-8IQ_",
"O_a4WiIfOC",
"QOj5LqchV3",
"UZeM8lu1A6V",
"UZeM8lu1A6V",
"q0WJr8-8IQ_",
"O_a4WiIfOC",
"nips_2022_kuJQ_NwJO8_",
"q0WJr8-8IQ_",
"q0WJr8-8IQ_",
"UZeM8lu1A6V",
"UZeM8lu1A6V",
"O_a4WiIfOC",
"nips_2022_kuJQ_NwJO8_",
"nips_2022_kuJQ_NwJO8_",
"n... |
nips_2022_xILbvAsHEV | Efficient Phi-Regret Minimization in Extensive-Form Games via Online Mirror Descent | A conceptually appealing approach for learning Extensive-Form Games (EFGs) is to convert them to Normal-Form Games (NFGs). This approach enables us to directly translate state-of-the-art techniques and analyses in NFGs to learning EFGs, but typically suffers from computational intractability due to the exponential blow-up of the game size introduced by the conversion. In this paper, we address this problem in natural and important setups for the \emph{$\Phi$-Hedge} algorithm---A generic algorithm capable of learning a large class of equilibria for NFGs. We show that $\Phi$-Hedge can be directly used to learn Nash Equilibria (zero-sum settings), Normal-Form Coarse Correlated Equilibria (NFCCE), and Extensive-Form Correlated Equilibria (EFCE) in EFGs. We prove that, in those settings, the \emph{$\Phi$-Hedge} algorithms are equivalent to standard Online Mirror Descent (OMD) algorithms for EFGs with suitable dilated regularizers, and run in polynomial time. This new connection further allows us to design and analyze a new class of OMD algorithms based on modifying its log-partition function. In particular, we design an improved algorithm with balancing techniques that achieves a sharp $\widetilde{\mathcal{O}}(\sqrt{XAT})$ EFCE-regret under bandit-feedback in an EFG with $X$ information sets, $A$ actions, and $T$ episodes. To our best knowledge, this is the first such rate and matches the information-theoretic lower bound. | Accept | All reviews are strongly positive for this paper. The reviewers appreciate the
many interesting contributions made by this paper. For my own part, I concur
with the reviewers: this is a very nice paper with several interesting results that deepen our understanding of several aspects of EFGs. | train | [
"mmSzwXb9j9t",
"Ui1LNfXZbuL",
"YtQUg4TDEOBd",
"OKYB_kSiuTH",
"QMXrH1MOXZI",
"QT8FlHw4vdD",
"oDK3CHR6soz"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I agree! Thanks for your response.",
" Thank you for your valuable reviews and positive feedback to our paper! We respond to the questions as follows.\n\n---“Model-free learning within bandit feedback… the decision space of each player (treeplex) needs to be discovered as part of the learning process… for insta... | [
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"Ui1LNfXZbuL",
"oDK3CHR6soz",
"QT8FlHw4vdD",
"QMXrH1MOXZI",
"nips_2022_xILbvAsHEV",
"nips_2022_xILbvAsHEV",
"nips_2022_xILbvAsHEV"
] |
nips_2022_DI3hGYPwfT | The Sample Complexity of One-Hidden-Layer Neural Networks | We study norm-based uniform convergence bounds for neural networks, aiming at a tight understanding of how these are affected by the architecture and type of norm constraint, for the simple class of scalar-valued one-hidden-layer networks, and inputs bounded in Euclidean norm. We begin by proving that in general, controlling the spectral norm of the hidden layer weight matrix is insufficient to get uniform convergence guarantees (independent of the network width), while a stronger Frobenius norm control is sufficient, extending and improving on previous work. Motivated by the proof constructions, we identify and analyze two important settings where (perhaps surprisingly) a mere spectral norm control turns out to be sufficient: First, when the network's activation functions are sufficiently smooth (with the result extending to deeper networks); and second, for certain types of convolutional networks. In the latter setting, we study how the sample complexity is additionally affected by parameters such as the amount of overlap between patches and the overall number of patches. | Accept | The paper proves a novel, tighter bound norm-based bound for the generalization error of two-layer networks. All the reviewers agree that this is an important theoretical result and should be accepted. | train | [
"OE6v3kyGdiE",
"fMK-gWK34WA",
"BWEO8L4Ni5f",
"pj6TBhAM9WY",
"ymhmehiyhPg",
"Fc1XriKKkU7",
"NKxgfY0WOIr",
"XBCtsUkSP5G",
"wgF1gFTG7W",
"z7sBLRRBbZD"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks! We will make sure to incorporate these good suggestions.\n\nRegarding your question (spectral norm is also necessary for smooth activation functions?): The answer is yes. One can see this already for the identity activation function, in which case the hypothesis class studied in theorems 4+5 is equivalent... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
3
] | [
"fMK-gWK34WA",
"Fc1XriKKkU7",
"z7sBLRRBbZD",
"wgF1gFTG7W",
"XBCtsUkSP5G",
"NKxgfY0WOIr",
"nips_2022_DI3hGYPwfT",
"nips_2022_DI3hGYPwfT",
"nips_2022_DI3hGYPwfT",
"nips_2022_DI3hGYPwfT"
] |
nips_2022_0um6VfuBfr | Functional Ensemble Distillation | Bayesian models have many desirable properties, most notable is their ability to generalize from limited data and to properly estimate the uncertainty in their predictions. However, these benefits come at a steep computational cost as Bayesian inference, in most cases, is computationally intractable. One popular approach to alleviate this problem is using a Monte-Carlo estimation with an ensemble of models sampled from the posterior. However, this approach still comes at a significant computational cost, as one needs to store and run multiple models at test time. In this work, we investigate how to best distill an ensemble's predictions using an efficient model. First, we argue that current approaches are limited as they are constrained to classification and the Dirichlet distribution. Second, in many limited data settings, all ensemble members achieve nearly zero training loss, namely, they produce near-identical predictions on the training set which results in sub-optimal distilled models. To address both problems, we propose a novel and general distillation approach, named Functional Ensemble Distillation (FED), and we investigate how to best distill an ensemble in this setting. We find that learning the distilled model via a simple augmentation scheme in the form of mixup augmentation significantly boosts the performance. We evaluated our method on several tasks and showed that it achieves superior results in both accuracy and uncertainty estimation compared to current approaches. | Accept | The paper proposes a method for ensemble distillation, motivated by the need for efficient Bayesian machine learning. The method seems novel and can potentially make a significant contribution to the literature of ensemble distillation and Bayesian machine learning.
The reviewers found the paper well-written and the empirical results compelling. Some concerns were raised about the fairness of the empirical evaluation in the first round of reviews, but these were mostly addressed during the discussion with the authors. Seeing as no major concerns remain, I'm happy to recommend acceptance.
| train | [
"KYJjsppmR7i",
"zmgdSFEeZ09",
"yThefUNMdHS",
"ZLiC2Ry33Ip",
"ZqOIhT2ynU",
"-SecvsymGph",
"UO-fpIjC8xd",
"MMMq-OOUeX4",
"olIMjLjrpVj",
"JwNw8ITnz-E",
"-VrLiTc2mUJ",
"RDC7ExHpntN",
"9iwzNN7ooBz",
"D_2fCVkIqOF",
"TpJUv4UE6aU"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the valuable feedback and score.\n",
" We thank the reviewer for reassessing the paper and raising the score based on our response.",
" We thank the reviewer for raising the score based on our response. In our experiments we noticed that cSG-MCMC generates correlated samples mainly w... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"ZqOIhT2ynU",
"ZLiC2Ry33Ip",
"-SecvsymGph",
"MMMq-OOUeX4",
"UO-fpIjC8xd",
"JwNw8ITnz-E",
"TpJUv4UE6aU",
"D_2fCVkIqOF",
"9iwzNN7ooBz",
"RDC7ExHpntN",
"nips_2022_0um6VfuBfr",
"nips_2022_0um6VfuBfr",
"nips_2022_0um6VfuBfr",
"nips_2022_0um6VfuBfr",
"nips_2022_0um6VfuBfr"
] |
nips_2022_SY-TRGQmrG | Provable Benefit of Multitask Representation Learning in Reinforcement Learning | As representation learning becomes a powerful technique to reduce sample complexity in reinforcement learning (RL) in practice, theoretical understanding of its advantage is still limited. In this paper, we theoretically characterize the benefit of representation learning under the low-rank Markov decision process (MDP) model. We first study multitask low-rank RL (as upstream training), where all tasks share a common representation, and propose a new multitask reward-free algorithm called REFUEL. REFUEL learns both the transition kernel and the near-optimal policy for each task, and outputs a well-learned representation for downstream tasks. Our result demonstrates that multitask representation learning is provably more sample-efficient than learning each task individually, as long as the total number of tasks is above a certain threshold. We then study the downstream RL in both online and offline settings, where the agent is assigned with a new task sharing the same representation as the upstream tasks. For both online and offline settings, we develop a sample-efficient algorithm, and show that it finds a near-optimal policy with the suboptimality gap bounded by the sum of the estimation error of the learned representation in upstream and a vanishing term as the number of downstream samples becomes large. Our downstream results of online and offline RL further capture the benefit of employing the learned representation from upstream as opposed to learning the representation of the low-rank model directly. To the best of our knowledge, this is the first theoretical study that characterizes the benefit of representation learning in exploration-based reward-free multitask RL for both upstream and downstream tasks. 
| Accept | The reviewers are largely in consensus that the questions posed in the paper are extremely relevant today and, while somewhat unsurprising, the paper provides solid value by establishing "provable benefit", including algorithms with novel components and numerous analysis tools that look attractive for re-use by future researchers in this emerging domain.
The AC is surprised that not even a toy set of experiments was deemed necessary to validate the predicted multi-task performance in practice. However, the reviewers have all believed in the value of the theory alone, and I appreciate the setup of a large enough multi-task set would require significant work, though I hope this is pursued soon and often in future work. | train | [
"oGM0z92iVHX",
"3BugVfHKluL",
"Uh1yq5ONJAF",
"i4-xVgRoIb6",
"cQji0Lnqi6i",
"9ZYifK2HL7",
"2tNsFsSawsI",
"7TXIFyRDfi6",
"xtlk02srB6E",
"TfpyC_yI3FN",
"yI641lUz33y"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the further feedback and for the positive comments about our setting and execution. Regarding the novelty of the algorithm, we want to bring to the reviewer's attention of our response to Reviewer hbyz's first question (Q1), where we explained the new components in our algorithm design c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
3
] | [
"3BugVfHKluL",
"2tNsFsSawsI",
"i4-xVgRoIb6",
"yI641lUz33y",
"9ZYifK2HL7",
"TfpyC_yI3FN",
"7TXIFyRDfi6",
"xtlk02srB6E",
"nips_2022_SY-TRGQmrG",
"nips_2022_SY-TRGQmrG",
"nips_2022_SY-TRGQmrG"
] |
nips_2022_1Xb3eVZdWp7 | GAGA: Deciphering Age-path of Generalized Self-paced Regularizer | Nowadays self-paced learning (SPL) is an important machine learning paradigm that mimics the cognitive process of humans and animals. The SPL regime involves a self-paced regularizer and a gradually increasing age parameter, which plays a key role in SPL but where to optimally terminate this process is still non-trivial to determine. A natural idea is to compute the solution path w.r.t. age parameter (i.e., age-path). However, current age-path algorithms are either limited to the simplest regularizer, or lack solid theoretical understanding as well as computational efficiency. To address this challenge, we propose a novel Generalized Age-path Algorithm (GAGA) for SPL with various self-paced regularizers based on ordinary differential equations (ODEs) and sets control, which can learn the entire solution spectrum w.r.t. a range of age parameters. To the best of our knowledge, GAGA is the first exact path-following algorithm tackling the age-path for general self-paced regularizer. Finally the algorithmic steps of classic SVM and Lasso are described in detail. We demonstrate the performance of GAGA on real-world datasets, and find considerable speedup of our algorithm over competing baselines. | Accept | The paper received 4 positive reviews after the rebuttal. The technical concerns raised by the reviewers were addressed properly. Overall this work introduces a challenging and realistic setting that can be of large interest to the community working on self-paced learning. | train | [
"RPdugYM4EXd",
"bUfw9vYfFz",
"zyJvXmUbRiV",
"KjHv5K5wZWe",
"ZrLQ4gACYME",
"uVxpJd-9bV",
"aAydPhUfkC",
"vSbJo4ZKDkMC",
"ha_WarnvGi8",
"_NWTo4reU6",
"hVOHd9mWvy",
"nDtdCwkm3DBY",
"AD42RmF4dN8",
"_kqSMRAOUY",
"vln9cMiBdH",
"Zd1SfQXioz0",
"Ho6hWcufzeQ",
"i3IiimeBRlP"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors' response addressed most of the concerns with the additional discussions and experimental comparisons with the related works, thus I raise my score from 4 to 6.",
" Thank you for the detailed response. Most of my concerns are well addressed, and I would increase the score from 5 to 6.",
" Dear Rev... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
2
] | [
"zyJvXmUbRiV",
"KjHv5K5wZWe",
"i3IiimeBRlP",
"Zd1SfQXioz0",
"vln9cMiBdH",
"i3IiimeBRlP",
"Zd1SfQXioz0",
"vln9cMiBdH",
"vln9cMiBdH",
"vln9cMiBdH",
"i3IiimeBRlP",
"Ho6hWcufzeQ",
"Zd1SfQXioz0",
"nips_2022_1Xb3eVZdWp7",
"nips_2022_1Xb3eVZdWp7",
"nips_2022_1Xb3eVZdWp7",
"nips_2022_1Xb3eVZ... |
nips_2022_SOqGrmufeRg | A High Performance and Low Latency Deep Spiking Neural Networks Conversion Framework | Spiking Neural Networks (SNN) are promised to be energy-efficient and achieve Artificial Neural Networks (ANN) comparable performance through conversion processes. However, a converted SNN relies on large timesteps to compensate for conversion errors, which as a result compromises its efficiency in practice. In this paper, we propose a novel framework to convert an ANN to its SNN counterpart losslessly with minimal timesteps. By studying the errors introduced by the whole conversion process, an overlooked inference error is revealed besides the coding error occurring during conversion. Inspired by quantization-aware training, a QReLU activation is introduced during training to eliminate the coding error theoretically. Furthermore, a buffered non-leaky-integrate-and-fire neuron that utilizes the same basic operations as in conventional neurons is designed to reduce the inference error. Experiments on classification and detection tasks show that our proposed method attains ANN-level performance using only $16$ timesteps. To the best of our knowledge, it is the first time converted SNNs with low latency demonstrate their capability to achieve high performance on nontrivial vision tasks. Source code will be released later.
| Reject | Spiking Neural Networks (SNNs) have some advantages, especially in terms of power consumption, over standard Artificial Neural Networks (ANNs). However, most trained networks are ANN and therefore this work presents a conversion scheme from SNNs to ANNs with some desired properties. Unfortunately, the reviewers found the contribution of this work insufficient in terms of novelty. The authors argued, in the rebuttal, that there are differences between prior art and the current paper. The reviewers acknowledged these differences but were not convinced that the differences are significant enough. The authors also noted that this work was sent to publication in Jan 2021 and was rejected while some of the relevant papers were published after this date. We understand the frustration that this situation is likely to generate. However, when reviewing this work the relevant date is the deadline for NeurIPS submission deadline and therefore these studies should be compared to in the paper. | val | [
"tDbt8fo8KO0",
"7VhhQwQ3Ih8",
"bSMj4bwGQP7",
"aVGwSeTCdT_",
"mffk909xmfu",
"KG3cYvJ5fsM",
"sKj77ck-NvKN",
"yTuAZueUXovi",
"4KjT70l9_EM",
"slG9-FbXru8J",
"znRtZqAc8_",
"QRdx8c_ZIUb",
"BSlrSzX3MZs",
"4GwtVo5QYb",
"i0qs6h-jrpy",
"EY5eroqewE"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This paper proposes methods that make it more efficient and fast to run computer vision models- including tasks such as object detection models, classification tasks etc.\n\nThis can make the adoption and upkeep of CV models for various tasks such as surveillance more attractive as well. The authors have removed ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"nips_2022_SOqGrmufeRg",
"bSMj4bwGQP7",
"mffk909xmfu",
"KG3cYvJ5fsM",
"znRtZqAc8_",
"4KjT70l9_EM",
"yTuAZueUXovi",
"EY5eroqewE",
"i0qs6h-jrpy",
"znRtZqAc8_",
"4GwtVo5QYb",
"nips_2022_SOqGrmufeRg",
"nips_2022_SOqGrmufeRg",
"nips_2022_SOqGrmufeRg",
"nips_2022_SOqGrmufeRg",
"nips_2022_SOq... |
nips_2022_R3JMyR4MvoU | Learn to Match with No Regret: Reinforcement Learning in Markov Matching Markets | We study a Markov matching market involving a planner and a set of strategic agents on the two sides of the market.
At each step, the agents are presented with a dynamical context, where the contexts determine the utilities.
The planner controls the transition of the contexts to maximize the cumulative social welfare, while the agents aim to find a myopic stable matching at each step. Such a setting captures a range of applications including ridesharing platforms. We formalize the problem by proposing a reinforcement learning framework that integrates optimistic value iteration with maximum weight matching.
The proposed algorithm addresses the coupled challenges of sequential exploration, matching stability, and function approximation. We prove that the algorithm achieves sublinear regret. | Accept | This paper extends work on learning equilibria in matching markets from bandit feedback to more general Markov structure. The paper brings to RL/MDP approaches to bear in this context. An algorithm is presented and sublinear regret bound established. This seems like a solid contribution and the reviewers are supportive of acceptance. | train | [
"UDemxHftn1V",
"34h05P-dQUi",
"IvMe4EkJdHS",
"5UP6f3HbWC5",
"5x2n6uN-p4e",
"WD8kAHMXU9Z",
"sy4pEkxPuNp",
"JCkevzzuGm7",
"eKCHFexAoBA",
"M3ILEXvgSIP",
"UCGVNWNK58x",
"3SvCTGOuUiF",
"FbhT9wbv09g",
"FqXfvNHIART",
"X9VCBFjshe8"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your detailed response. I have no further comments. ",
" I would like to thank the authors again for the feedback. I have no other concerns.",
" Thank you for your feedback! \n\n---\nThe reviewer is correct that we do not need an unbiased estimator for the planner’s reward since an opt... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
1,
4,
3
] | [
"eKCHFexAoBA",
"IvMe4EkJdHS",
"5x2n6uN-p4e",
"JCkevzzuGm7",
"sy4pEkxPuNp",
"UCGVNWNK58x",
"X9VCBFjshe8",
"FqXfvNHIART",
"M3ILEXvgSIP",
"FbhT9wbv09g",
"3SvCTGOuUiF",
"nips_2022_R3JMyR4MvoU",
"nips_2022_R3JMyR4MvoU",
"nips_2022_R3JMyR4MvoU",
"nips_2022_R3JMyR4MvoU"
] |
nips_2022_w1CF57sLstO | Provable Generalization of Overparameterized Meta-learning Trained with SGD | Despite the empirical success of deep meta-learning, theoretical understanding of overparameterized meta-learning is still limited. This paper studies the generalization of a widely used meta-learning approach, Model-Agnostic Meta-Learning (MAML), which aims to find a good initialization for fast adaptation to new tasks. Under a mixed linear regression model, we analyze the generalization properties of MAML trained with SGD in the overparameterized regime. We provide both upper and lower bounds for the excess risk of MAML, which captures how SGD dynamics affect these generalization bounds. With such sharp characterizations, we further explore how various learning parameters impact the generalization capability of overparameterized MAML, including explicitly identifying typical data and task distributions that can achieve diminishing generalization error with overparameterization, and characterizing the impact of adaptation learning rate on both excess risk and the early stopping time. Our theoretical findings are further validated by experiments. | Accept | This paper explores the generalization of SGD, as used in a MAML-style algorithm for meta learning in an overparametrized setting. The generative model considered is "mixed linear regression", in which tasks follows a linear + Gaussian noise data model (a different direction per task, with minimal assumptions on the distributions for the directions --- precisely, the mean and covariance of the distribution). Overparametrization means d >> T, where d is the dimension of the vector of parameters, T is the number of tasks. 
The main conceptual takeaway is that there is a "phase transition" effect depending on what the step size is, and whether the risk decays as $T \to \infty$ --- depending (in a fairly complicated way) on various quantities, perhaps most interestingly a notion of "task diversity" (captured through the covariance of the direction for the tasks). The proof techniques largely follow [Zou et al '21] which considers overparametrization for SGD in just the case of linear regression --- which involves a bias/variance decomposition and a series of concentration bounds to bound each respectively. Since data coming from different tasks has a different SVD, this makes the proofs non-trivial to extend.
| train | [
"5HYCQWwsWfk",
"UWOSt5IZxv",
"10rad7mqcNt",
"saehFo5oCn",
"mHMS1MqfF4",
"q5tsYJjzsbQ",
"CrXIvySTpsw",
"iioZ-psMjP-",
"v-9L4413LSV",
"z55QWAXR630",
"3FwvSNxUVlbX",
"AwnmNDyPNrz",
"zHxwY6_3hCJ",
"vdFYgs_PIc",
"yROT8V-84Vo"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We first truly thank all reviewers for their insightful and constructive suggestions. We specially thank Reviewer **T2R8**, whose questions suggested interesting possible extensions of our analysis to other settings. We note that it is sufficient that the reviewer’s evaluation is based on the original submission... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
2
] | [
"nips_2022_w1CF57sLstO",
"10rad7mqcNt",
"zHxwY6_3hCJ",
"zHxwY6_3hCJ",
"zHxwY6_3hCJ",
"iioZ-psMjP-",
"zHxwY6_3hCJ",
"yROT8V-84Vo",
"vdFYgs_PIc",
"3FwvSNxUVlbX",
"zHxwY6_3hCJ",
"nips_2022_w1CF57sLstO",
"nips_2022_w1CF57sLstO",
"nips_2022_w1CF57sLstO",
"nips_2022_w1CF57sLstO"
] |
nips_2022_jftNpltMgz | Accelerated Linearized Laplace Approximation for Bayesian Deep Learning | Laplace approximation (LA) and its linearized variant (LLA) enable effortless adaptation of pretrained deep neural networks to Bayesian neural networks. The generalized Gauss-Newton (GGN) approximation is typically introduced to improve their tractability. However, LA and LLA are still confronted with non-trivial inefficiency issues and should rely on Kronecker-factored, diagonal, or even last-layer approximate GGN matrices in practical use. These approximations are likely to harm the fidelity of learning outcomes. To tackle this issue, inspired by the connections between LLA and neural tangent kernels (NTKs), we develop a Nystrom approximation to NTKs to accelerate LLA. Our method benefits from the capability of popular deep learning libraries for forward mode automatic differentiation, and enjoys reassuring theoretical guarantees. Extensive studies reflect the merits of the proposed method in aspects of both scalability and performance. Our method can even scale up to architectures like vision transformers. We also offer valuable ablation studies to diagnose our method. Code is available at https://github.com/thudzj/ELLA. | Accept | This paper introduces an approach to accelerating linearized Laplace approximations to Bayesian neural network posteriors, particularly considering prediction tasks, by performing a Nyström approximation to the neural tangent kernel.
The reviewers all recommended acceptance, eventually — one reviewer was initially quite critical but, after a rather extensive discussion with the authors, revised to a borderline accept. In particular some additional experiments analyzing overfitting in these models, in general, were appreciated.
While this is perhaps borderline on the scores (5,6,7), given the overall quality of the work and the extent to which this was updated and improved during the rebuttal period, I would recommend acceptance. | train | [
"AZSjEnY2e9g",
"ec6cv3AMxfo",
"3OHA_S_JLYg",
"UiTMMU_186j",
"_ho277gDi36",
"M-joW7HY14L",
"rQkbQIMqhuJ",
"EExH-iAKfcM",
"v_cuSso2jtv",
"3Xjw2SQjNf3",
"HPwEFt0YzVP",
"hap6336JAuQ",
"u3CoIAmnraL",
"AO6d4uNbqZx",
"U2GFl55BP1xF",
"mFbvenLy8_r",
"HZZ-ylTXMEW",
"0phy38qoK_3",
"X8pYDnt3... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thank you again for the clarifications and additional experiments that answered most of my questions.\nI have raised the \"soundness\" and \"presentation\" score as well as the overall score and hope the authors address all of the promised changes (improvements in explanation + presentation; include additional ex... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"ec6cv3AMxfo",
"3Xjw2SQjNf3",
"UiTMMU_186j",
"U2GFl55BP1xF",
"M-joW7HY14L",
"HZZ-ylTXMEW",
"nips_2022_jftNpltMgz",
"v_cuSso2jtv",
"HPwEFt0YzVP",
"HPwEFt0YzVP",
"C0MAzEtEv0L",
"C0MAzEtEv0L",
"C0MAzEtEv0L",
"g3lUD2uUTKT",
"g3lUD2uUTKT",
"X8pYDnt3FMa",
"X8pYDnt3FMa",
"nips_2022_jftNpl... |
nips_2022_EwLChH1fJJK | Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid | Few-shot learning (FSL) targets at generalization of vision models towards unseen tasks without sufficient annotations. Despite the emergence of a number of few-shot learning methods, the sample selection bias problem, i.e., the sensitivity to the limited amount of support data, has not been well understood. In this paper, we find that this problem usually occurs when the positions of support samples are in the vicinity of task centroid—the mean of all class centroids in the task. This motivates us to propose an extremely simple feature transformation to alleviate this problem, dubbed Task Centroid Projection Removing (TCPR). TCPR is applied directly to all image features in a given task, aiming at removing the dimension of features along the direction of the task centroid. While the exact task centroid cannot be accurately obtained from limited data, we estimate it using base features that are each similar to one of the support features. Our method effectively prevents features from being too close to the task centroid. Extensive experiments over ten datasets from different domains show that TCPR can reliably improve classification accuracy across various feature extractors, training algorithms and datasets. The code has been made available at https://github.com/KikimorMay/FSL-TCBR. | Accept | The paper investigates the bias of support points in few shot learning when they are too close to the task centroid. This can cause a drop in accuracy. It proposes a method to mitigate this by projecting out the task centroid direction. Initial reviews were split on the paper with three reviewers having a positive opinion while the other three had concerns. However the author response has addressed many of the reviewer concerns (also acknowledged by one of the reviewers). 
The paper makes a solid contribution to the few shot learning problem and will be a good addition to the program. | train | [
"vVX9mDlcDoD",
"004ZZSUgfr-",
"nrn4QpGgw8",
"iIpYyYildP3",
"gT_l_Vio1BR",
"6obozm8a5W",
"NtZUcTeTdcJl",
"D5qjP3WZbb1",
"Xb4MWwrOt6P",
"-UO7plROynk",
"VoZdaFkI34m",
"_x55DuLJSso",
"jKWffFVvmACf",
"Zc6Aj2Ke-VM",
"GA4PgjiM0m",
"4zmMtNSeeH",
"eKqlzOLjrBp",
"uXdbV9KD6id",
"o-lKcYAsJbQ... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer, thanks again for your valued advice! Since most of your concerns have been clarified, could you please reconsider the score?",
" I thank the authors for the additional and insightful experiments and visualization. The authors have addressed most of my concerns, especially showing that in practice... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
4,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5,
4,
5
] | [
"004ZZSUgfr-",
"D5qjP3WZbb1",
"o-lKcYAsJbQ",
"4zmMtNSeeH",
"6obozm8a5W",
"o-lKcYAsJbQ",
"uXdbV9KD6id",
"Xb4MWwrOt6P",
"4zmMtNSeeH",
"VoZdaFkI34m",
"eKqlzOLjrBp",
"GA4PgjiM0m",
"Zc6Aj2Ke-VM",
"nips_2022_EwLChH1fJJK",
"nips_2022_EwLChH1fJJK",
"nips_2022_EwLChH1fJJK",
"nips_2022_EwLChH1... |
nips_2022_PrkarCHiUsg | Proximal Learning With Opponent-Learning Awareness | Learning With Opponent-Learning Awareness (LOLA) (Foerster et al. [2018a]) is a multi-agent reinforcement learning algorithm that typically learns reciprocity-based cooperation in partially competitive environments. However, LOLA often fails to learn such behaviour on more complex policy spaces parameterized by neural networks, partly because the update rule is sensitive to the policy parameterization. This problem is especially pronounced in the opponent modeling setting, where the opponent's policy is unknown and must be inferred from observations; in such settings, LOLA is ill-specified because behaviorally equivalent opponent policies can result in non-equivalent updates. To address this shortcoming, we reinterpret LOLA as approximating a proximal operator, and then derive a new algorithm, proximal LOLA (POLA), which uses the proximal formulation directly. Unlike LOLA, the POLA updates are parameterization invariant, in the sense that when the proximal objective has a unique optimum, behaviorally equivalent policies result in behaviorally equivalent updates. We then present practical approximations to the ideal POLA update, which we evaluate in several partially competitive environments with function approximation and opponent modeling. This empirically demonstrates that POLA achieves reciprocity-based cooperation more reliably than LOLA.
| Accept | I read the paper, review, and responses. I'm not an expert in this sub-area. I'm consider it an incremental work with borderline accept/reject evaluation based on the paper and reviews. OK with an accept. | train | [
"oSTZVD0GPJK",
"fOVfnqJl6J",
"IsvIl2bVtJ6",
"1DfhbyUCsX",
"V-SBONxweFY",
"KQSEFUKO17e",
"O_dKpT_dryO",
"UhoW5as8ts",
"YOdjhNtXkFo",
"d3EqX6nkKBz"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response and suggestions! We have added Section 1.1 to highlight our motivation and contributions. We also hope this makes the connections between subsequent sections more clear.\n\nWe understand that there is a lot of information in the paper, and despite our best efforts to be clear and under... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"fOVfnqJl6J",
"V-SBONxweFY",
"1DfhbyUCsX",
"O_dKpT_dryO",
"UhoW5as8ts",
"YOdjhNtXkFo",
"d3EqX6nkKBz",
"nips_2022_PrkarCHiUsg",
"nips_2022_PrkarCHiUsg",
"nips_2022_PrkarCHiUsg"
] |
nips_2022_nyBJcnhjAoy | Explaining a Reinforcement Learning Agent via Prototyping | While deep reinforcement learning has proven to be successful in solving control tasks, the ``black-box'' nature of an agent has received increasing concerns. We propose a prototype-based post-hoc \emph{policy explainer}, ProtoX, that explains a black-box agent by prototyping the agent's behaviors into scenarios, each represented by a prototypical state. When learning prototypes, ProtoX considers both visual similarity and scenario similarity. The latter is unique to the reinforcement learning context since it explains why the same action is taken in visually different states. To teach ProtoX about visual similarity, we pre-train an encoder using contrastive learning via self-supervised learning to recognize states as similar if they occur close together in time and receive the same action from the black-box agent. We then add an isometry layer to allow ProtoX to adapt scenario similarity to the downstream task. ProtoX is trained via imitation learning using behavior cloning, and thus requires no access to the environment or agent. In addition to explanation fidelity, we design different prototype shaping terms in the objective function to encourage better interpretability. We conduct various experiments to test ProtoX. Results show that ProtoX achieved high fidelity to the original black-box agent while providing meaningful and understandable explanations. | Accept | This paper proposes ProtoX, a method to identify input prototypes of reinforcement learning agents that are representationally similar to test-time inputs. This allows one to surface "relevant training examples" matching the action predictive behavior seen at test time. This is proposed as an "interpretability method to explain agent decisions". 
Two reviewers have advocated strongly for accepting the paper, while one reviewer has advocated for a strong reject, primarily on the grounds that the paper makes unsubstantiated claims that "ProtoX explains why an agent took a particular action".
After some deliberation with the reviewers, I will recommend accepting the paper. Here is my rationale:
1. there are several tools one can use to better understand how training data and inputs within an example affect predictions. One can take a "formal, axiomatic approach to attribution" (e.g. integrated gradients) or an "informal, non-axiomatic approach to attribution" (e.g. smoothgrad) when it comes to showing "how inputs/training data explain predictions". Both types of methods have their uses in the toolkit of every deep learning researcher. This paper proposes an *informal* tool to link training data with test time inputs. ProtoX is dataset-level attribution (via computation of prototypes), which is exciting because most explainability techniques focus on example-level attribution. Even though this paper does not prove implementation invariance of the ProtoX method, it is a useful debugging tool for RL practitioners.
2. defining "what does it mean for a neural network to be 'interpretable?" and "why did this neural net make a certain decision" is a broad question within the ML community (bordering on irreducible philosophy), and it's too high of a bar to expect any paper to have a definitive, one-size fits all solution to explainability. I suspect reviewer m9zL found the claims of the paper to imply that they were proposing a more axiomatic attribution method, whereas it is really intended as a tool to diagnose any RL agent.
3. Given that RL agents are very hard to get right, I could see this method (or perhaps an improved version of it with fewer moving pieces) as a useful tool.
If I were to point out a weakness of the paper, it is that it bears a lot of similarity to non-parametric imitation learning algorithms, e.g.
VINN paper (Surprising Effectiveness of Representation Learning for Visual Imitation Learning, by Pari et al. 2021 https://arxiv.org/pdf/2112.01511.pdf) where k-NN on training set essentially also surfaces attributable examples from a training set wrt test time images, by design. I have no connection to the VINN paper, but would appreciate it if the authors cited this paper or mentioned prior literature on non-parametric methods in learned embedding spaces as an existing tool already used by the RL community.
| train | [
"ROQAvPh3m0V",
"QIMzQyaUiOn",
"GBzxMz3XhTe",
"_yovYiPJCiA",
"KykFclCnc4i",
"pXVr_mP3HqF",
"J-6NWOUFpEF",
"qbn5SAONgp",
"5wHwRMbOwm6",
"fhGAfjKzGJ",
"J1wM6DW2EtE",
"AfcOqiHh5tA",
"Opj44kf_fM",
"M0jS6gEtFHh",
"QxnAR0u5GNG",
"tdvzun3yPz-",
"Wnbl7_oEyBD",
"sZOGWMUzCoWt",
"CZPlqwwn2DA... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"a... | [
" Thank you reviewer QULj! ",
" The authors have provided significant additional experiments during the rebuttal period, namely additional results for the sensitivity to flip points experiment which I requested, and the new results for generating importance maps that they describe above. I believe these new resul... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
2
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"QIMzQyaUiOn",
"CZPlqwwn2DA",
"_yovYiPJCiA",
"J-6NWOUFpEF",
"J-6NWOUFpEF",
"qbn5SAONgp",
"qbn5SAONgp",
"fhGAfjKzGJ",
"sZOGWMUzCoWt",
"AfcOqiHh5tA",
"M0jS6gEtFHh",
"pWF0H91Fdyg",
"tdvzun3yPz-",
"tdvzun3yPz-",
"EAL7i-5CON",
"Wnbl7_oEyBD",
"drMti5-fjhm",
"CZPlqwwn2DA",
"pWF0H91Fdyg"... |
nips_2022_fn0FXlXkzL | Secure Split Learning against Property Inference and Data Reconstruction Attacks | Split learning of deep neural networks (SplitNN) has provided a promising solution to learning jointly for the mutual interest of the guest and the host, which may come from different backgrounds, holding features partitioned vertically. However, SplitNN creates a new attack surface for the adversarial participant, holding back its practical use in the real world. By investigating the adversarial effects of two highly threatening attacks, i.e., property inference and data reconstruction, adapted from security studies of federated learning, we identify the underlying vulnerability of SplitNN. To prevent potential threats and ensure learning guarantees of SplitNN, we design a privacy-preserving tunnel for information exchange between the guest and the host. The intuition behind our design is to perturb the propagation of knowledge in each direction with a controllable unified solution. To this end, we propose a new activation function named $\text{R}^3$eLU, transferring private smashed data and partial loss into randomized responses in forward and backward propagations, respectively. Moreover, we give the first attempt to achieve a fine-grained privacy budget allocation scheme for SplitNN. The analysis of privacy loss proves that our privacy-preserving SplitNN solution requires a tight privacy budget, while the experimental result shows that our solution outperforms existing solutions in attack defense and model usability. | Reject | Though reviewers increased their scores, they maintained some skepticism regarding several issues in the paper. The authors are strongly encouraged to consider addressing these in a later submission.
- Privacy amplification by subsampling seems to require that specific inputs used in a batch (after subsampling) should not be known to the host. This condition does not seem to be readily satisfied in split-learning as the input-ids need to be communicated to the host during training.
- Gains remain modest compared to simple baselines.
- Steps within the pipeline remain heuristics, and are not formally justified. | train | [
"HvJqj7tB5l5",
"CIDDtaitrEX",
"qF7i5QwVQeR",
"WHN6vHJdU2F",
"u2O9uZXUyfI",
"HprzCgbLU5a",
"Ha8Bsva08Yl",
"CubS-sEap7y",
"Y9-_E-A9HYO",
"borggl_rbC",
"6dnHJUil9T",
"BVxRedNdDA"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification.",
" Thanks for your reply. First of all, we apologize that our inappropriate explanations caused misunderstanding. We realized there was a misunderstanding during the discussion. **The privacy leakage caused will not exceed the privacy budget in our solution, which has been form... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"CIDDtaitrEX",
"qF7i5QwVQeR",
"WHN6vHJdU2F",
"u2O9uZXUyfI",
"CubS-sEap7y",
"Y9-_E-A9HYO",
"BVxRedNdDA",
"6dnHJUil9T",
"borggl_rbC",
"nips_2022_fn0FXlXkzL",
"nips_2022_fn0FXlXkzL",
"nips_2022_fn0FXlXkzL"
] |
nips_2022_f-fVCElZ-G1 | ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers | How to efficiently serve ever-larger trained natural language models in practice has become exceptionally challenging even for powerful cloud servers due to their prohibitive memory/computation requirements.
In this work, we present an efficient and affordable post-training quantization approach to compress large Transformer-based models, termed as \OURS.
\OURS is an end-to-end quantization and inference pipeline with three main components:
(1) a fine-grained hardware-friendly quantization scheme for both weight and activations;
(2) a novel affordable layer-by-layer knowledge distillation algorithm (\lwd) even without the original training data access;
(3) a highly-optimized quantization system backend support to remove the quantization/dequantization overhead.
As such, we are able to show that:
(1) \OURS can reduce the precision for weight and activations to INT8 in a cost-free way for both \bert and \gpt-style
models with minimal accuracy impact, which leads to up to 5.19x/4.16x speedup on \bert/\gpt-style models compared to FP16 inference, respectively;
(2) \OURS plus \lwd can affordably quantize the weights in the fully-connected module to INT4 along with INT8 weights in the attention module and INT8 activations, resulting in 3x memory footprint reduction compared to the FP16 model;
(3) \OURS can be directly applied to two of the largest open-sourced language models, including \gptneox, for which our INT8 model achieves similar accuracy as the FP16 model but achieves 5.2x better efficiency.
Our code is open-sourced at~\cite{code_compression}. | Accept | This paper explores post-training quantization on large transformer-based models, using techniques such as group-wise weight quantization, token-wise activation quantization and layer-by-layer distillation. The paper also provides a system backend support that demonstrate speedup on commercial GPU devices.
Overall, this is a solid paper. The quantization methods used in the paper are not exactly novel: not only the group-wise and token-wise quantization pointed out by the reviewers, but also the layer-by-layer distillation using high-precision models has been reported recently (e.g., BRECQ [ICLR2021] and AdaQuant [ICML2021]). However, it is appreciated that the authors took these techniques further to 1) implement and optimize GPU kernels to demonstrate real speedup and 2) evaluate the techniques on large-scale models. With the open-source code (as promised), I think the research community and industry can all benefit from this work.
In addition, the paper is well written and easy to follow. The methods are evaluated thoroughly. During the rebuttal period, the authors provided very detailed responses to address the questions and concerns raised by the reviewers. Therefore, this paper is recommended for acceptance.
| test | [
"OVOg5AGSwzJ",
"0XDAsLctiFa",
"2ony3sriLRN",
"GTZiNXmgcMC",
"vlruAvjDdap",
"_gPCeN3678D",
"jb9ZpguqyI",
"6_vbHfd7cH",
"Q5YdbxVtu19",
"4C-zB-m9VsSc",
"kv5vaEJ7MYz",
"pUdIEPttrS",
"yl8WYHV_WoW",
"Ny2fhyMnxlP",
"Pv9HjNw4VZd"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The response has addressed my concerns, and after reading rebuttal, modifications, and other reviews, I maintain my score on positive side.",
" Thanks a lot for taking the time to review our work and providing your constructive feedback. Our detailed response is listed below.\n\n* Q1: Though it works great, the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
4
] | [
"0XDAsLctiFa",
"Pv9HjNw4VZd",
"GTZiNXmgcMC",
"Ny2fhyMnxlP",
"_gPCeN3678D",
"jb9ZpguqyI",
"6_vbHfd7cH",
"yl8WYHV_WoW",
"4C-zB-m9VsSc",
"kv5vaEJ7MYz",
"pUdIEPttrS",
"nips_2022_f-fVCElZ-G1",
"nips_2022_f-fVCElZ-G1",
"nips_2022_f-fVCElZ-G1",
"nips_2022_f-fVCElZ-G1"
] |
nips_2022_VarZY6BY12h | Distributional Reinforcement Learning via Sinkhorn Iterations | Distributional reinforcement learning~(RL) is a class of state-of-the-art algorithms that estimate the whole distribution of the total return rather than only its expectation. The representation manner of each return distribution and the choice of distribution divergence are pivotal for the empirical success of distributional RL. In this paper, we propose a new class of \textit{Sinkhorn distributional RL~(SinkhornDRL)} algorithm that learns a finite set of statistics, i.e., deterministic samples, from each return distribution and then leverages Sinkhorn iterations to evaluate the Sinkhorn distance between the current and target Bellman distributions. Remarkably, Sinkhorn divergence interpolates between the Wasserstein distance and Maximum Mean Discrepancy~(MMD). This allows our proposed SinkhornDRL algorithm to find a sweet spot leveraging the geometry of optimal transport-based distance and the unbiased gradient estimates of MMD. Finally, experiments on the suite of 55 Atari games reveal the competitive performance of the SinkhornDRL algorithm as opposed to existing state-of-the-art algorithms. | Reject | All reviewers acknowledged that this paper is an interesting contribution to the distributional RL literature. Some concerns brought up by reviewers in their initial reports included issues with the statement of Theorem 1, a non-standard evaluation protocol for the Atari experiments, and that some claims were overly strong, such as the robustness claim of Figure 3.
During the discussion, reviewers found that the rebuttal addressed some of these concerns, but that others still remained.
The claims of Theorem 1 have been revised significantly, and the reviewers appreciated this. However, the statement is still not clearly presented, and the reviewers could not verify the correctness of the proof in its current form. For instance, one reviewer pointed out that "T^pi is a closely non-expansive operator" is not a precise mathematical statement and is not clearly defined in the paper. Another point was that the proof could be clearer, and therefore it was difficult to verify its correctness, in addition to a number of typos (in Eqns (39) and (40), 1/Z_1 should be 1/Z_2; the equation below Line 518 is missing an implication sign; in Line 510, K(2U,2V) should be K(aU, aV)).
Several other reviewers were not convinced by the empirical results; they felt the paper would be stronger if (1) it used the same evaluation protocol as other distributional RL papers and (2) it reported results on the behavior of the algorithm when sweeping over key hyperparameters (epsilon, cost choice, L), which would considerably strengthen the paper by clarifying how the method works in practice.
Overall, reviewers found the contribution to be promising and a step in the right direction. After another revision addressing these issues, I think the paper will be a strong contribution to the distributional RL literature.
| train | [
"KEiVU8t_JsB",
"EipRvOoqGRl",
"kIt97XN_2n",
"8Wf6pkM88QG",
"Mz_N7SAhTB",
"1lbLjQ5jUNS",
"c1lVDGqlAwp",
"3RFRSA1G1b",
"kqppzaMIbZI",
"Gu4aG-Nj8B",
"98WovTGi9DF",
"IL8g36bJic",
"CHlPvmuWtT",
"8PU47QiEWZl",
"VXcvA9QwEf",
"MGTbDJxHETm",
"E7H89KlV5mu",
"FoI9EXKiswN"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nWe sincerely appreciate your further input and efforts to improve our paper. We revised our paper based on your suggestions.\n\n**An updated version about the expectation part.** To make it clearer and posit our original contribution, we remove the expectation part in Theorem 3 in the main text and only discuss... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5,
3
] | [
"1lbLjQ5jUNS",
"Mz_N7SAhTB",
"CHlPvmuWtT",
"98WovTGi9DF",
"c1lVDGqlAwp",
"3RFRSA1G1b",
"kqppzaMIbZI",
"kqppzaMIbZI",
"IL8g36bJic",
"nips_2022_VarZY6BY12h",
"FoI9EXKiswN",
"E7H89KlV5mu",
"MGTbDJxHETm",
"VXcvA9QwEf",
"nips_2022_VarZY6BY12h",
"nips_2022_VarZY6BY12h",
"nips_2022_VarZY6BY... |
nips_2022_rDT-n9xysO | Symbolic Distillation for Learned TCP Congestion Control | Recent advances in TCP congestion control (CC) have achieved tremendous success with deep reinforcement learning (RL) approaches, which use feedforward neural networks (NN) to learn complex environment conditions and make better decisions. However, such ``black-box'' policies lack interpretability and reliability, and often, they need to operate outside the traditional TCP datapath due to the use of complex NNs. This paper proposes a novel two-stage solution to achieve the best of both worlds: first to train a deep RL agent, then distill its (over-)parameterized NN policy into white-box, light-weight rules in the form of symbolic expressions that are much easier to understand and to implement in constrained environments. At the core of our proposal is a novel symbolic branching algorithm that enables the rule to be aware of the context in terms of various network conditions, eventually converting the NN policy into a symbolic tree. The distilled symbolic rules preserve and often improve performance over state-of-the-art NN policies while being faster and simpler than a standard neural network. We validate the performance of our distilled symbolic rules on both simulation and emulation environments. Our code is available at https://github.com/VITA-Group/SymbolicPCC. | Accept | I thank the authors for their submission and active participation in the discussions. The paper presents an RL method for TCP congestion control. While this application paper is borderline, all reviewers unanimously agree that this paper's strengths outweigh its weaknesses. In particular, reviewers remarked that the method is efficient [sAC3], and practical [aJ6X], evaluated well against baselines [Qv2f] with promising results [W18N]. Thus, I am recommending acceptance of the paper but highly encourage the authors to further improve their paper based on the reviewer feedback. | train | [
"7R477IidoF",
"mBhrVCBHiEk",
"ZCI9Iv8QTE6",
"uNzpkE9NlnK",
"BOmxj_RWZO",
"3d0h-yQXNDV",
"VMXQ83LP31C",
"5lYw92wHcoP",
"pLECxrnI0M",
"CbsVlIsFzW5W",
"-d_rXa3418O",
"bpevBNfQHTN",
"xsuDzeK36OK",
"JASPLzLF_K1",
"vClOi7WKM7",
"6fCz98ZeKbP",
"EABGZ6xRX1",
"yjhMCVQUFt6",
"pgNJizufxHM",... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_... | [
" Looks good!",
" Dear reviewer sAC3:\n\nWe have just modified the abstract, introduction page 2, the conclusion section, as well as a few other scattered places on the term usage. Please kindly have a check.\n\nBest,\n\nAuthors\n",
" The misleading claims are still present. For example, in the abstract the aut... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"mBhrVCBHiEk",
"ZCI9Iv8QTE6",
"BOmxj_RWZO",
"CbsVlIsFzW5W",
"5lYw92wHcoP",
"pLECxrnI0M",
"bpevBNfQHTN",
"XH5RqLvJu3G",
"pgNJizufxHM",
"xsuDzeK36OK",
"vClOi7WKM7",
"EABGZ6xRX1",
"yjhMCVQUFt6",
"XH5RqLvJu3G",
"m-uq0_xEia",
"Er6m2ZE7N9-",
"Er6m2ZE7N9-",
"U44WFvwEs4V",
"fuGd6yvVYh5",... |
nips_2022_Q5kXC6hCr1 | Accelerating Sparse Convolution with Column Vector-Wise Sparsity |
Weight sparsity is a promising approach to reducing the model size and computation cost of convolutional neural networks (CNNs). Nevertheless, non-zero weights often distribute randomly in sparse CNN models, introducing enormous difficulty in obtaining actual speedup on common hardware (e.g., GPU) over their dense counterparts. Existing acceleration solutions either require hardware modifications for irregular memory access support or rely on a partially structured sparsity pattern. Neither of these methods is capable of achieving fruitful speedup on convolution layers.
In this work, we propose an algorithm-software co-designed sparse convolution based on a novel out-vector-wise (OVW) sparse pattern.
Building on the insight that vertical vector integrity can preserve continuous memory access in IM2COL, the OVW pattern treats a $V\times1$ vector as an entirety. To reduce the error caused by sparsity, we propose an equivalent transformation process, i.e., clustering-based channel permutation, to gather similar rows together. Experimental evaluations demonstrate that our method achieves a $1.7\times$ and $3.2\times$ speedup over the SOTA solution and the dense convolution of ResNet50 on NVIDIA V100 at 75\% sparsity, respectively, with only negligible accuracy loss. Moreover, compared to the SOTA solution that achieves speedups only on data with 60\% sparsity or more, our method begins to obtain speedups on data with only 10\% sparsity. | Accept | While the reviewers had a difference of opinion in their scores of the paper, on balance I think the overall evaluation described in the text of the reviews leans strongly towards acceptance. This is especially the case because the only negative reviewer wrote "if these two changes are made (editing for language/grammar, and reporting confidence intervals in the main text) I would be willing to increase my score to accept" and I think that the authors have shown that they are going to do that in their updated version. The paper is well structured (if not so clearly written), and provides a novel approach to co-designed sparse convolution. It's really nice to see this sort of co-design, which is common in the ML systems space at the hardware-software level, being applied higher in the systems stack to achieve such significant speedups (even with not that much sparsity) for structured sparsity on existing commodity hardware. In general, when I see a technically strong paper where the weaknesses are presentation and spelling/grammar, I lean toward acceptance, with the understanding that the authors can incorporate the reviewers' feedback to improve the writing in the final version of their paper. 
Applying that reasoning here, I recommend acceptance. | train | [
"_7MowwXpIr",
"b5akcAjqCQT",
"TISRB02n-ay",
"gUNLnmm8Vs7",
"G1aA0YRyzEA",
"w550uayrZwq",
"51FG5kv9kA9",
"8jSLVo7jHb",
"YguWBg_289j",
"-sksaHLE2eu"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for your constructive feedback. We appreciate your efforts in helping us improve our paper.\n\n**Writing**\n\nWe have revised those incorrect or inappropriate phrases you addressed and fixed the missing header in Table 1. \nWe also checked and edited the rest of this paper carefully.\n\n**Experiment ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"b5akcAjqCQT",
"G1aA0YRyzEA",
"51FG5kv9kA9",
"8jSLVo7jHb",
"8jSLVo7jHb",
"YguWBg_289j",
"-sksaHLE2eu",
"nips_2022_Q5kXC6hCr1",
"nips_2022_Q5kXC6hCr1",
"nips_2022_Q5kXC6hCr1"
] |
nips_2022_msBC-W9Elaa | Generalization Bounds for Stochastic Gradient Descent via Localized $\varepsilon$-Covers | In this paper, we propose a new covering technique localized for the trajectories of SGD. This localization provides an algorithm-specific complexity measured by the covering number, which can have dimension-independent cardinality in contrast to standard uniform covering arguments that result in exponential dimension dependency. Based on this localized construction, we show that if the objective function is a finite perturbation of a piecewise strongly convex and smooth function with $P$ pieces, i.e., non-convex and non-smooth in general, the generalization error can be upper bounded by $O(\sqrt{(\log n\log(nP))/n})$, where $n$ is the number of data samples. In particular, this rate is independent of dimension and does not require early stopping and decaying step size. Finally, we employ these results in various contexts and derive generalization bounds for multi-index linear models, multi-class support vector machines, and $K$-means clustering for both hard and soft label setups, improving the previously known state-of-the-art rates. | Accept | This paper introduces a new framework for proving generalization bounds for SGD that is based on covering the space of trajectories. When the underlying function is smooth and strongly convex, the fact that gradient descent contracts the distance between points by a constant factor, can be used to construct a good cover. More interestingly, their framework can be applied to nonconvex functions too, by approximating them by a piecewise strongly convex function. In general, the number of pieces will grow exponentially with the dimension, but in some important applications, like multiclass SVMs and k-means clustering, they are able to use their framework to derive interesting new generalization bounds. The paper is well-written and there was almost uniform consensus that it ought to be accepted. | train | [
"lLl3YAiUGZt",
"2rMd1bzgVmT",
"8JJAoQusNxH",
"yFcnvhRPneA",
"H1iG6f0rLa",
"x3sGDGWtIYD",
"uhcwNFexvvG",
"julXeF-GBWN",
"soKXUTgsUyN",
"5-Ule_G-hdY",
"-U9oCj9MVRR",
"Yf3q8Mylv1o",
"cLuvYvb5Nl",
"tnPY6bQ9aqN",
"Wxhtb5qqlj"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their reply. I am keeping my score of acceptance.",
" We sincerely appreciate the reviewer's reevaluation. We reiterate that we would be happy to clarify any further questions that may come up during the discussion period.",
" Dear authors,\n\nThank you for clarifying my concerns. In... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
3,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3,
3
] | [
"julXeF-GBWN",
"8JJAoQusNxH",
"5-Ule_G-hdY",
"nips_2022_msBC-W9Elaa",
"-U9oCj9MVRR",
"uhcwNFexvvG",
"cLuvYvb5Nl",
"Yf3q8Mylv1o",
"tnPY6bQ9aqN",
"Wxhtb5qqlj",
"nips_2022_msBC-W9Elaa",
"nips_2022_msBC-W9Elaa",
"nips_2022_msBC-W9Elaa",
"nips_2022_msBC-W9Elaa",
"nips_2022_msBC-W9Elaa"
] |
nips_2022_iFJJevyrIEf | Pyramid Attention For Source Code Summarization | This paper presents a multi-granularity method for the task of source code summarization, which generates a concise functional description for the given code snippet. We notice that skilled programmers write and read source codes hierarchically and pay close attention to conceptual entities like statements, tokens, sub-tokens, and the mapping relations between them. The entities have specific emphasis according to their granularities, e.g., statements in coarse-granularity reveal the global logical semantics of code, and the sub-tokens in fine-granularity are more related to the textual semantics. Driven by this observation, we argue that a multi-granularity formulation incorporating these conceptual entities may benefit the code summarization task. Concretely, the source code is transformed into a pyramidal representation, and then a pyramid attention mechanism is applied for efficient feature aggregation among different hierarchies in it. We instantiate our multi-granularity method using the proposed pyramid attention and name it PA-former (Pyramid Attention transformer), which is evaluated on two source code summarization benchmarks where it surpasses the prior works and achieves new state-of-the-art results. Our code and data are available at https://github.com/leichainju/pa-former. | Accept | The paper presents a multi-granularity input representation and a pyramid attention mechanism for code summarization tasks. After extensive discussion, the reviewers still cannot agree on accepting or rejecting this paper. The key discussion points and my opinion are summarized below.
1. Performance improvement -- a few reviewers point out that the performance improvement is relatively small (about 1%) compared with the baseline. With the additional error bars provided by the authors, it seems to me the improvement is statistically significant. The authors also provide a sufficient ablation study to justify the improvement. Although it's arguable whether the proposed approach is substantial, the progress of AI is often driven by incremental improvements in performance. Therefore, I'm less concerned by this issue.
2. Comparison only on 1 language and 2 datasets. I partially agree with the authors and reviewer Xtyo that the paper already conducted extensive experiments and the merits of the proposed approach are justified. However, I disagree with the claim that the comparison on 1 language is sufficient given the recent progress of code summarization. As the proposed approach is mainly justified by empirical comparison, reporting results on a limited set of datasets raises the concern of whether the proposed approach is generalizable to other languages and datasets. It also makes it harder for future work to compare with this work. I especially disagree with the point that some earlier papers only compared on limited datasets: those papers were published before the CodeXGlue benchmark was released. As most recent baselines are compared on CodeXGlue, there is a need to justify the absence of results on this dataset. Besides, the arguments that the dataset is noisy and the performance is low do not seem rigorous or reasonable to me.
3. Human evaluation. The authors provide a preliminary human evaluation of the generated outputs. On the one hand, it shows the proposed approach indeed improves the quality of the summaries; on the other hand, the study requires a more rigorous design. I would suggest including the human evaluation in the main text rather than in the appendix.
4. Presentation. The paper is mostly well-written and provides nice intuition behind the proposed method. However, I also agree with Q2Zz that some statements might overclaim and require justification. The later part of the paper has a significant number of typos and requires a careful proofread. It's a pity that the authors did not take the opportunity during the rebuttal period to revise the paper.
Overall, I think the paper has sufficient merits but still has room to improve. | train | [
"CSGOe6W05pk",
"TR9LtScS-gQ",
"15-scGoHsZs",
"UopJzrJkNlL",
"IeD4_DDo_O_",
"mD38tiwHd9f",
"aAIYn7mFZuc",
"SBCAhPkDZ9D",
"sKBUVy_jW2g",
"GT7KuKZ37nF",
"WDmHUwPPwJ",
"WXQiE_-lqZ_",
"jnOKWIDZ8vi",
"tsVBH_DYdHe",
"gbHanZ0AOSn",
"uWGQ7nA49A0",
"Ct-rTZ6Suw4"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the appreciation of our work, it gives us a lot of confidence! And we will improve the quality of writing as soon as possible.",
" We would like to know if the supplementary reply (partially) addresses your other questions?",
" Thanks for the reply! \n\nWe do agree that _dependency probe_ could ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"SBCAhPkDZ9D",
"UopJzrJkNlL",
"UopJzrJkNlL",
"IeD4_DDo_O_",
"mD38tiwHd9f",
"jnOKWIDZ8vi",
"jnOKWIDZ8vi",
"GT7KuKZ37nF",
"WXQiE_-lqZ_",
"Ct-rTZ6Suw4",
"uWGQ7nA49A0",
"gbHanZ0AOSn",
"tsVBH_DYdHe",
"nips_2022_iFJJevyrIEf",
"nips_2022_iFJJevyrIEf",
"nips_2022_iFJJevyrIEf",
"nips_2022_iFJ... |
nips_2022_zXE8iFOZKw | When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning | Learning effective reinforcement learning (RL) policies to solve real-world complex tasks can be quite challenging without a high-fidelity simulation environment. In most cases, we are only given imperfect simulators with simplified dynamics, which inevitably lead to severe sim-to-real gaps in RL policy learning. The recently emerged field of offline RL provides another possibility to learn policies directly from pre-collected historical data. However, to achieve reasonable performance, existing offline RL algorithms need impractically large offline data with sufficient state-action space coverage for training. This brings up a new question: is it possible to combine learning from limited real data in offline RL and unrestricted exploration through imperfect simulators in online RL to address the drawbacks of both approaches? In this study, we propose the Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning (H2O) framework to provide an affirmative answer to this question. H2O introduces a dynamics-aware policy evaluation scheme, which adaptively penalizes the Q function learning on simulated state-action pairs with large dynamics gaps, while also simultaneously allowing learning from a fixed real-world dataset. Through extensive simulation and real-world tasks, as well as theoretical analysis, we demonstrate the superior performance of H2O against other cross-domain online and offline RL algorithms. H2O provides a brand new hybrid offline-and-online RL paradigm, which can potentially shed light on future RL algorithm design for solving practical real-world tasks. | Accept | The authors present a novel but realistic problem, where you want to learn from limited offline real-world data, and from unlimited simulation data, as is generally the case for sim2real with real-world finetuning. 
Experiments are performed on D4RL HalfCheetah with three artificially created dynamics gaps, as well as a real-world wheeled robot.
Thanks to the reviewers and authors for engaging in active discussions on experimental details, and I appreciate the authors updating the draft with additional experiments (TD3+BC, Random_HalfCheetah, etc.). I recommend the paper for acceptance, as this paper could encourage further research into this important problem setting.
| train | [
"YHCebPUEDN",
"3djyt8eJX2",
"XgxKInkVaMI",
"yN_HrMIJsX5",
"mXb5C_vj-rL",
"okJjcyCeWF",
"4NwfBx4io25",
"1bftHun2bBD",
"k3JHUxe9_0hF",
"lLwUifobJGP",
"RLpvnPllXw6",
"WhDIjZIiJAl",
"dxselCp9w6o",
"xLj3pwmt7Pp",
"_6a_y6y85vu",
"zBZVJv3iBGbb",
"lq1HNPjDn-",
"5lrLmOP7eDj",
"27knwFm1PZp... | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",... | [
" We have completed another set of experiments on HalfCheetah-Gravity task with D4RL random datasets, as well as additional comparative experiments with the hybrid TD3+BC baseline mentioned by the reviewer. We summarize the detailed results as follows.\n\n## Additional experiments to demonstrate the performance of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"yN_HrMIJsX5",
"1bftHun2bBD",
"1bftHun2bBD",
"1bftHun2bBD",
"4NwfBx4io25",
"RLpvnPllXw6",
"5lrLmOP7eDj",
"k3JHUxe9_0hF",
"FgGPXfewRnp",
"6zJFtyQhje",
"27knwFm1PZp",
"FgGPXfewRnp",
"FgGPXfewRnp",
"FgGPXfewRnp",
"FgGPXfewRnp",
"FgGPXfewRnp",
"Jdy1BQSXWfo",
"Jdy1BQSXWfo",
"COdbGkevc... |
nips_2022_xTYL1J6Xt-z | FasterRisk: Fast and Accurate Interpretable Risk Scores | Over the last century, risk scores have been the most popular form of predictive model used in healthcare and criminal justice. Risk scores are sparse linear models with integer coefficients; often these models can be memorized or placed on an index card. Typically, risk scores have been created either without data or by rounding logistic regression coefficients, but these methods do not reliably produce high-quality risk scores. Recent work used mathematical programming, which is computationally slow. We introduce an approach for efficiently producing a collection of high-quality risk scores learned from data. Specifically, our approach produces a pool of almost-optimal sparse continuous solutions, each with a different support set, using a beam-search algorithm. Each of these continuous solutions is transformed into a separate risk score through a "star ray" search, where a range of multipliers are considered before rounding the coefficients sequentially to maintain low logistic loss. Our algorithm returns all of these high-quality risk scores for the user to consider. This method completes within minutes and can be valuable in a broad variety of applications. | Accept | Thank you for submitting your paper to NeurIPS! This paper makes a valuable contribution to the scoring model literature, providing a fast and scalable algorithm to derive sparse risk scores. The reviewers uniformly appreciated the methodological approach (integrating beam search with logistic regression, diverse feature selection, and star search for choosing integer coefficients), and noted that the stand-alone Python implementation is also advantageous over competitors that rely on mathematical programming solvers. I am pleased to recommend acceptance of this practically relevant work. | train | [
"P7ujMZLqBH",
"QYkOr8haGm",
"1m7u2GpEG6b",
"15PuFVq0qSm",
"ZoARRwGhvpk",
"fnmXG5_uBu",
"tilqa8PSArB",
"hDYcngHdlgq",
"X3fZeJvDzA7",
"x2z82euYSrP",
"i-i0eHnRLJ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I believe this work is a valuable addition to the literature of scoring systems.\n\nOne minor suggestion: In the response, the authors mentioned that \"FasterRisk solves the optimization automatically and engages the user only in selecting the best model.\" While it is important to automate the learning process, ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"ZoARRwGhvpk",
"1m7u2GpEG6b",
"tilqa8PSArB",
"i-i0eHnRLJ",
"x2z82euYSrP",
"X3fZeJvDzA7",
"hDYcngHdlgq",
"nips_2022_xTYL1J6Xt-z",
"nips_2022_xTYL1J6Xt-z",
"nips_2022_xTYL1J6Xt-z",
"nips_2022_xTYL1J6Xt-z"
] |
nips_2022_8SilFGuXgmk | Taming Fat-Tailed (“Heavier-Tailed” with Potentially Infinite Variance) Noise in Federated Learning | In recent years, federated learning (FL) has emerged as an important distributed machine learning paradigm to collaboratively learn a global model with multiple clients, while keeping data local and private. However, a key assumption in most existing works on FL algorithms' convergence analysis is that the noise in stochastic first-order information has a finite variance. Although this assumption covers all light-tailed (i.e., sub-exponential) and some heavy-tailed noise distributions (e.g., log-normal, Weibull, and some Pareto distributions), it fails for many fat-tailed noise distributions (i.e., ``heavier-tailed'' with potentially infinite variance) that have been empirically observed in the FL literature. To date, it remains unclear whether one can design convergent algorithms for FL systems that experience fat-tailed noise. This motivates us to fill this gap in this paper by proposing an algorithmic framework called $\mathsf{FAT}$-$\mathsf{Clipping}~$ (\ul{f}ederated \ul{a}veraging with \ul{t}wo-sided learning rates and \ul{clipping}), which contains two variants: $\mathsf{FAT}$-$\mathsf{Clipping}~$ per-round ($\mathsf{FAT}$-$\mathsf{Clipping}$-$\mathsf{PR}$) and $\mathsf{FAT}$-$\mathsf{Clipping}~$ per-iteration ($\mathsf{FAT}$-$\mathsf{Clipping}$-$\mathsf{PI}$). Specifically, for the largest $\alpha \in (1,2]$ such that the fat-tailed noise in FL still has a bounded $\alpha$-moment, we show that both variants achieve $\mathcal{O}((mT)^{\frac{2-\alpha}{\alpha}})$ and $\mathcal{O}((mT)^{\frac{1-\alpha}{3\alpha-2}})$ convergence rates in the strongly-convex and general non-convex settings, respectively, where $m$ and $T$ are the numbers of clients and communication rounds. 
Moreover, at the expense of more clipping operations compared to $\mathsf{FAT}$-$\mathsf{Clipping}$-$\mathsf{PR}$, $\mathsf{FAT}$-$\mathsf{Clipping}$-$\mathsf{PI}~$ further enjoys a linear speedup effect with respect to the number of local updates at each client and being lower-bound-matching (i.e., order-optimal). Collectively, our results advance the understanding of designing efficient algorithms for FL systems that exhibit fat-tailed first-order oracle information. | Accept | This is a borderline paper, and we ended up soliciting two additional reviewers (4F3m and Eovh) for additional feedback after the main review/author discussion period.
The general sentiment shared by the reviewers (including the two additional ones) is that the paper studies an interesting/important problem and provides some interesting discussion and results -- albeit largely based on natural extensions of existing methods (e.g. in the centralized setting). Some of these results could also be improved (both in terms of better discussion on novelty and just stronger guarantees), but the reviewers generally appreciate that the authors have started discussion and work in this area.
Multiple reviewers think that the hypothesis relating non-iidness with fat tailed distributions is interesting, but are underwhelmed by the supporting analysis and commentary. It would be great if the authors could improve this discussion, either with more concrete analysis or with better empirical evidence.
The biggest criticism of this paper that is shared by most if not all of the reviewers is that the current experimental section is severely lacking. The authors mentioned that they would add in more experiments in the final draft of the paper (e.g. https://openreview.net/forum?id=8SilFGuXgmk&noteId=ShuBB-7e3B-). This response seemed satisfactory to the reviewers, so this paper will be accepted based on the premise that the authors will follow up on that promise. In particular, it is expected that the authors will go over the reviews and try to carefully address all comments about the experiments (e.g. replicating experiments on at least a few other datasets and models -- ideally at larger scale, using experiments to better verify the non-iid hypothesis, designing experiments that verify the theoretical guarantees). | val | [
"R2vooXfZ_E",
"m39glngmgtx",
"ryvAPhZhhzF",
"FfpdckA69L",
"E9S7kMW4_tH",
"aKWB1XKdX6O",
"HvmcixtQvD",
"sajK2c5W8Aat",
"ShuBB-7e3B-",
"RgpR4SIYfgX",
"qRXvILoAdm",
"MqrAVa15EVd",
"LkDvTEQozBI",
"h5K2sns2dcY",
"VKeEvslHVL",
"Rfbod8KGjYn",
"zotkS6iGiAd"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" (I have been added as a reviewer only very late in the review process. Even though I am adding a few questions below I am fully aware that the authors cannot respond appropriately. Nevertheless, I believe these remarks might still be useful for the internal discussion an AC).\n\nThis paper studies stochastic grad... | [
4,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
5
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"nips_2022_8SilFGuXgmk",
"nips_2022_8SilFGuXgmk",
"FfpdckA69L",
"RgpR4SIYfgX",
"aKWB1XKdX6O",
"HvmcixtQvD",
"zotkS6iGiAd",
"Rfbod8KGjYn",
"VKeEvslHVL",
"qRXvILoAdm",
"MqrAVa15EVd",
"h5K2sns2dcY",
"nips_2022_8SilFGuXgmk",
"nips_2022_8SilFGuXgmk",
"nips_2022_8SilFGuXgmk",
"nips_2022_8Sil... |
nips_2022_tWBMPooTayE | FreGAN: Exploiting Frequency Components for Training GANs under Limited Data | Training GANs under limited data often leads to discriminator overfitting and memorization issues, causing divergent training. Existing approaches mitigate the overfitting by employing data augmentations, model regularization, or attention mechanisms. However, they ignore the frequency bias of GANs and take poor consideration towards frequency information, especially high-frequency signals that contain rich details. To fully utilize the frequency information of limited data, this paper proposes FreGAN, which raises the model's frequency awareness and draws more attention to synthesising high-frequency signals, facilitating high-quality generation. In addition to exploiting both real and generated images' frequency information, we also involve the frequency signals of real images as a self-supervised constraint, which alleviates the GAN disequilibrium and encourages the generator to synthesis adequate rather than arbitrary frequency signals. Extensive results demonstrate the superiority and effectiveness of our FreGAN in ameliorating generation quality in the low-data regime (especially when training data is less than 100). Besides, FreGAN can be seamlessly applied to existing regularization and attention mechanism models to further boost the performance. | Accept | The majority of reviewers voted for accept, and we had a 7 among the mix, which I think cancels out the 4. This paper proposes using a wavelet method to bias the GAN towards generating the proper frequency distribution, which helps especially in the low-data regime. Overall I think low-data data modeling is important, and although this approach doesn't seem extremely novel, it seems useful, and on balance the reviewers voted to accept. I agree. | train | [
"nsR0dWDy1_v",
"mUfqsufFOMt",
"BrKe6HMysYY",
"gIyIyVcpl_W",
"lSu_3oO_PDj",
"LrvA6Qk5dXK",
"mTRFeERwqa2",
"Ei7HeVxN-gj",
"6JjAk1GJe7V",
"c7c_SBboF9W",
"acRNI2Mdqbh",
"_FMJ-BpNF20",
"M28lIMZALcD",
"k9p1ZDt7-bE",
"U5rIJREp0uH"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer Xdtv,\n\nIt is our pleasure to address your concerns and thanks again for your comments and constructive suggestions.\n\nBest, Authors.",
" Dear authors,\nthank you for clarifying this, it answers my remaining questions.",
" Dear Reviewer Xdtv,\n\nThank you a lot for the review and the response.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"mUfqsufFOMt",
"BrKe6HMysYY",
"gIyIyVcpl_W",
"Ei7HeVxN-gj",
"6JjAk1GJe7V",
"Ei7HeVxN-gj",
"Ei7HeVxN-gj",
"_FMJ-BpNF20",
"c7c_SBboF9W",
"U5rIJREp0uH",
"M28lIMZALcD",
"k9p1ZDt7-bE",
"nips_2022_tWBMPooTayE",
"nips_2022_tWBMPooTayE",
"nips_2022_tWBMPooTayE"
] |