paper_id            string (length 19–21)
paper_title         string (length 8–170)
paper_abstract      string (length 8–5.01k)
paper_acceptance    string (18 classes)
meta_review         string (length 29–10k)
label               string (3 classes)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
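A minimal sketch of validating one record against the schema above. Only the field names come from the preview; the helper name `validate_record` and the consistency checks (type per field, equal lengths across the parallel `review_*` lists) are illustrative assumptions, not part of the dataset itself.

```python
# Hypothetical validator for one row of the schema shown above.
# Field names are taken from the preview; everything else is illustrative.

REQUIRED_FIELDS = {
    "paper_id": str,
    "paper_title": str,
    "paper_abstract": str,
    "paper_acceptance": str,
    "meta_review": str,
    "label": str,
    "review_ids": list,
    "review_writers": list,
    "review_contents": list,
    "review_ratings": list,
    "review_confidences": list,
    "review_reply_tos": list,
}

def validate_record(record: dict) -> list:
    """Return a list of schema violations for one dataset row (empty if valid)."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            errors.append("missing field: " + field)
        elif not isinstance(record[field], expected_type):
            errors.append(field + ": expected " + expected_type.__name__)
    # The review_* columns are parallel lists, so they should all be the same length.
    lengths = {len(record[f]) for f in REQUIRED_FIELDS
               if f.startswith("review_") and isinstance(record.get(f), list)}
    if len(lengths) > 1:
        errors.append("review_* lists have mismatched lengths: " + str(sorted(lengths)))
    return errors
```

Note the sentinel convention visible in the rows below: `review_ratings` and `review_confidences` use `-1` for comments (author replies) that carry no score, so a stricter validator could additionally check that `-1` entries line up with non-reviewer writers.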
nips_2022_a3W4_OUIRgD
Queue Up Your Regrets: Achieving the Dynamic Capacity Region of Multiplayer Bandits
Consider $N$ cooperative agents such that for $T$ turns, each agent $n$ takes an action $a_{n}$ and receives a stochastic reward $r_{n}\left(a_{1},\ldots,a_{N}\right)$. Agents cannot observe the actions of other agents and do not even know their own reward function. The agents can communicate with their neighbors on a connected graph $G$ with diameter $d\left(G\right)$. We want each agent $n$ to achieve an expected average reward of at least $\lambda_{n}$ over time, for a given quality of service (QoS) vector $\boldsymbol{\lambda}$. A QoS vector $\boldsymbol{\lambda}$ is not necessarily achievable. By giving up on immediate reward, knowing that the other agents will compensate later, agents can improve their achievable capacity region. Our main observation is that the gap between $\lambda_{n}t$ and the accumulated reward of agent $n$, which we call the QoS regret, behaves like a queue. Inspired by this observation, we propose a distributed algorithm that aims to learn a max-weight matching of agents to actions. In each epoch, the algorithm employs a consensus phase where the agents agree on a certain weighted sum of rewards by communicating only $O\left(d\left(G\right)\right)$ numbers every turn. Then, the algorithm uses distributed successive elimination on a random subset of action profiles to approximately maximize this weighted sum of rewards. We prove a bound on the sum of expected QoS regrets of all agents, which holds if $\boldsymbol{\lambda}$ is a safety margin $\varepsilon_{T}$ away from the boundary of the capacity region, where $\varepsilon_{T}\rightarrow0$ as $T\rightarrow\infty$. This bound implies that, for large $T$, our algorithm can achieve any $\boldsymbol{\lambda}$ in the interior of the dynamic capacity region, while all agents are guaranteed an empirical average expected QoS regret of $\tilde{O}\left(1\right)$ over $t=1,\ldots,T$ which never exceeds $\tilde{O}\left(\sqrt{t}\right)$ for any $t$. We then extend our result to time-varying i.i.d. communication graphs.
Accept
Two of the three reviewers are positive about this work. One reviewer is negative, suggesting that stronger results can easily be derived from the existing literature on matrix games. The authors replied extensively, explaining why they believe this is not possible, and the same reviewer did not engage with their argument in detail. Consequently, I recommend accepting this paper, although the authors should explicitly discuss the shortcomings of the matrix-game approach in the main body of the paper.
train
[ "gv_0zLRMDm5", "NthWLzIAlmo", "xJIA9Yo_1tW", "iTKwB1jJA2w", "k1fPrZgEWI", "KfW6RlxJPqa", "wxLBoAAwHs", "pSKCksQjVMb", "ckwFi7Kzkhg", "ZGaDnFOki2t", "KkmjoiBa-Z", "wgHlxISSGxo" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " 1. With regards to the authors' response about \"Structural Originality,\" I'm afraid I remain unconvinced. Departure processes that are determined by other agents in the system are extremely common in random access mechanisms and are well-studied. The same goes for non integral queue lengths.\n2. I'm not convinc...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "xJIA9Yo_1tW", "nips_2022_a3W4_OUIRgD", "iTKwB1jJA2w", "k1fPrZgEWI", "ZGaDnFOki2t", "wxLBoAAwHs", "KkmjoiBa-Z", "ckwFi7Kzkhg", "wgHlxISSGxo", "nips_2022_a3W4_OUIRgD", "nips_2022_a3W4_OUIRgD", "nips_2022_a3W4_OUIRgD" ]
nips_2022_N7-EIciq3R
High-Order Pooling for Graph Neural Networks with Tensor Decomposition
Graph Neural Networks (GNNs) are attracting growing attention due to their effectiveness and flexibility in modeling a variety of graph-structured data. Existing GNN architectures usually adopt simple pooling operations~(\eg{} sum, average, max) when aggregating messages from a local neighborhood for updating node representation or pooling node representations from the entire graph to compute the graph representation. Though simple and effective, these linear operations do not model high-order non-linear interactions among nodes. We propose the Tensorized Graph Neural Network (tGNN), a highly expressive GNN architecture relying on tensor decomposition to model high-order non-linear node interactions. tGNN leverages the symmetric CP decomposition to efficiently parameterize permutation-invariant multilinear maps for modeling node interactions. Theoretical and empirical analysis on both node and graph classification tasks shows the superiority of tGNN over competitive baselines. In particular, tGNN achieves the most solid results on two OGB node classification datasets and one OGB graph classification dataset.
Accept
The reviewers initially disagreed on whether this paper sufficiently advances the research topic of graph pooling and had concerns about the missing comparison with Wang and Ji (2020), who also introduced higher-order pooling for graph neural networks (GNNs). After extensive discussions with the authors, the reviewers reached a consensus towards acceptance: while the performance improvement is marginal, and in some cases not statistically significant, the proposed tensor-decomposition-based pooling algorithm is new and provides a theoretically sound, valuable addition to advanced neighborhood aggregation methods for GNNs.
train
[ "HW1TPpzXp9", "Am_UT9KwDCt", "KIe9q3Zl_r6", "0WqnmpsW3fO", "PEQUSJGSbNA", "dfUM3XkmdI", "Y1wXwku_2sW", "tj99UvQcXFk", "yEUOdLReCOT", "XDavQ6f-rf-", "NUzB6fqzZVT", "A_mLItC2nkX", "Z1drtfQxED", "1YRxq5AEheN", "-cvPIi9pV8k", "CSmQjWiEYMj", "zxk0PKHTNMxE", "5T7eN-z2Cpz", "S1VazkgQnc"...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "...
[ " Thanks for the response. The authors have addressed all my concerns.", " Thanks for all your good suggestions which improve the paper quality and presentation. I have made a revision that includes our discussions. I manage to change some minor parts in the main paper (due to the page limitation) and write addit...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "yEUOdLReCOT", "nips_2022_N7-EIciq3R", "0WqnmpsW3fO", "1YRxq5AEheN", "Y1wXwku_2sW", "A_mLItC2nkX", "Z1drtfQxED", "GJZw971T7k7", "XDavQ6f-rf-", "ChHIfcQaJjN", "VmrDVRexkw9_", "CSmQjWiEYMj", "zxk0PKHTNMxE", "-cvPIi9pV8k", "lt_uc8C3yS7", "zud_h0jKoBO", "S1VazkgQnc", "AVJFNWGeXh_", "...
nips_2022_iqCO3jbPjYF
Imitating Past Successes can be Very Suboptimal
Prior work has proposed a simple strategy for reinforcement learning (RL): label experience with the outcomes achieved in that experience, and then imitate the relabeled experience. These outcome-conditioned imitation learning methods are appealing because of their simplicity, strong performance, and close ties with supervised learning. However, it remains unclear how these methods relate to the standard RL objective, reward maximization. In this paper, we prove that existing outcome-conditioned imitation learning methods do not necessarily improve the policy. However, we show that a simple modification results in a method that does guarantee policy improvement. Our aim is not to develop an entirely new method, but rather to explain how a variant of outcome-conditioned imitation learning can be used to maximize rewards.
Accept
The paper shows that outcome-conditioned behavior cloning (OCBC) is not guaranteed to maximize long-term reward in multi-task RL. A simple but effective variation of OCBC is proposed that does guarantee policy improvement. While the scope of the paper is rather specific (a certain form of behavior cloning in multi-task RL), this family of methods has gained some momentum recently, while there are still many theoretical questions around it. This paper addresses some of these in a clear way and proposes a specific improvement. A clear accept in my view.
test
[ "xmGPcl4gf6o", "FXTT1BE-GMi", "yMTyn0mugwW", "XoWAzQLXI8", "45BHj_sEMd1", "fc997-4kLs8", "8my9v6IKv39", "xxuQew2TyC8", "7EZsSeEIlp0", "0XdWcOM-WDB", "kZoxPqK0aCY", "B6LvQb6QRoL", "ACQALEy3tau" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nThanks for the update!\n\n> relation and impact of these results to multi-objective RL\n\nWe agree that fully understanding these connections is an interesting question. One potential impact of these results is that they might suggest how ideas from multi-objective RL might be used to build even...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "FXTT1BE-GMi", "XoWAzQLXI8", "45BHj_sEMd1", "45BHj_sEMd1", "0XdWcOM-WDB", "B6LvQb6QRoL", "ACQALEy3tau", "kZoxPqK0aCY", "0XdWcOM-WDB", "B6LvQb6QRoL", "nips_2022_iqCO3jbPjYF", "nips_2022_iqCO3jbPjYF", "nips_2022_iqCO3jbPjYF" ]
nips_2022_uRSvcqwOm0
Bivariate Causal Discovery for Categorical Data via Classification with Optimal Label Permutation
Causal discovery for quantitative data has been extensively studied but less is known for categorical data. We propose a novel causal model for categorical data based on a new classification model, termed classification with optimal label permutation (COLP). By design, COLP is a parsimonious classifier, which gives rise to a provably identifiable causal model. A simple learning algorithm via comparing likelihood functions of causal and anti-causal models suffices to learn the causal direction. Through experiments with synthetic and real data, we demonstrate the favorable performance of the proposed COLP-based causal model compared to state-of-the-art methods. We also make available an accompanying R package COLP, which contains the proposed causal discovery algorithm and a benchmark dataset of categorical cause-effect pairs.
Accept
All reviewers are convinced of the scientific soundness and evaluation results of this paper. The reviewers had some concerns regarding clarity and evaluation but in general liked various aspects of the paper. The authors did a good job of addressing the reviewers' concerns, so acceptance is recommended.
train
[ "pxE1jlxeGqF", "XorAoVikvMA", "IZ6wGqHV16j", "9J3r0n_5OtT", "5PxOAWyTfAl", "oeHiuOmG_wu", "PSfe6s5c0y2", "XNA-I2SSeq2", "tT_wHFwxTwn", "xaRfrOOM2LO", "mK2PmzwT3Bw", "jWO4U2_FYQs", "CsCr_Q98Y1", "Ob3uQK8sLz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you the response and for addressing my concerns.", " Thank the authors for your detailed response. My concerns about the correctness of this paper is well addressed.\nAs promised I will increase my score to 5. However I could not give any higher, because the assumption that \"not only a link function F ex...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 3 ]
[ "oeHiuOmG_wu", "PSfe6s5c0y2", "XNA-I2SSeq2", "5PxOAWyTfAl", "PSfe6s5c0y2", "mK2PmzwT3Bw", "Ob3uQK8sLz", "tT_wHFwxTwn", "CsCr_Q98Y1", "jWO4U2_FYQs", "nips_2022_uRSvcqwOm0", "nips_2022_uRSvcqwOm0", "nips_2022_uRSvcqwOm0", "nips_2022_uRSvcqwOm0" ]
nips_2022_LYfFj-Vk6lt
Joint Model-Policy Optimization of a Lower Bound for Model-Based RL
Many model-based reinforcement learning (RL) methods follow a similar template: fit a model to previously observed data, and then use data from that model for RL or planning. However, models that achieve better training performance (e.g., lower MSE) are not necessarily better for control: an RL agent may seek out the small fraction of states where an accurate model makes mistakes, or it might act in ways that do not expose the errors of an inaccurate model. As noted in prior work, there is an objective mismatch: models are useful if they yield good policies, but they are trained to maximize their accuracy, rather than the performance of the policies that result from them. In this work, we propose a single objective for jointly training the model and the policy, such that updates to either component increase a lower bound on expected return. To the best of our knowledge, this is the first lower bound for model-based RL that holds globally and can be efficiently estimated in continuous settings; it is the only lower bound that mends the objective mismatch problem. A version of this bound becomes tight under certain assumptions. Optimizing this bound resembles a GAN: a classifier distinguishes between real and fake transitions, the model is updated to produce transitions that look realistic, and the policy is updated to avoid states where the model predictions are unrealistic. Numerical simulations demonstrate that optimizing this bound yields reward maximizing policies and yields dynamics that (perhaps surprisingly) can aid in exploration. We also show that a deep RL algorithm loosely based on our lower bound can achieve performance competitive with prior model-based methods, and better performance on certain hard exploration tasks.
Accept
The paper addresses an important problem in model-based RL: the objective mismatch problem, in which the objective being optimized is different from the actual objective needed. The paper proposes a lower bound on the expected return. The training resembles a GAN approach. The paper also presents a numerical study to justify the method. The reviewers unanimously agree that the paper could be accepted to NeurIPS. The reviewers appreciate the good writing of the paper and also recognize the technical novelty. However, a major concern remains about the experiments: for instance, one reviewer believes there is a gap between the theory presented in the paper and the empirical experiments. It would be good to investigate more deeply why the classifier term hurts exploration.
train
[ "IojAdG1TUt", "C2teI4B4pE2", "yk04Id6soMn", "ErvWCbUE1YU", "g1YgdKFr3w", "bQCVBDsVc8l", "6tageGsoGlR", "2M84UIIfEUZ", "XfXU6jhfleB4", "HvgkESiYYT-", "jACgH-v521b", "rkvoGXLb3v", "QNRwqqsgu4Q", "SQ-iyBr349Y", "ZgId81Qr-fY", "XSTaMl0oGDe", "DJffJvPlq9v", "P7CyXFdJe1f", "vDDcDs6aviW...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " Dear Reviewer,\n\nThank you for the response. This discussion is helpful for us to clarify the contributions of the paper, including the capabilities and limitations of the proposed method.\n\n> difference between MnM and MnM-approx\n\nWe agree that this change affects the training of the policy; in response to R...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "C2teI4B4pE2", "vDDcDs6aviW", "ErvWCbUE1YU", "g1YgdKFr3w", "bQCVBDsVc8l", "SQ-iyBr349Y", "2M84UIIfEUZ", "hHfikU-yR05", "HvgkESiYYT-", "QNRwqqsgu4Q", "SQ-iyBr349Y", "ZgId81Qr-fY", "XSTaMl0oGDe", "qgwuYnxQO7C", "J82qdlAQns", "PmSHYtVZra8", "qgwuYnxQO7C", "vDDcDs6aviW", "J82qdlAQns"...
nips_2022_hHrO6-IfskR
TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets
The best neural architecture for a given machine learning problem depends on many factors: not only the complexity and structure of the dataset, but also on resource constraints including latency, compute, energy consumption, etc. Neural architecture search (NAS) for tabular datasets is an important but under-explored problem. Previous NAS algorithms designed for image search spaces incorporate resource constraints directly into the reinforcement learning (RL) rewards. However, for NAS on tabular datasets, this protocol often discovers suboptimal architectures. This paper develops TabNAS, a new and more effective approach to handle resource constraints in tabular NAS using an RL controller motivated by the idea of rejection sampling. TabNAS immediately discards any architecture that violates the resource constraints without training or learning from that architecture. TabNAS uses a Monte-Carlo-based correction to the RL policy gradient update to account for this extra filtering step. Results on several tabular datasets demonstrate the superiority of TabNAS over previous reward-shaping methods: it finds better models that obey the constraints.
Accept
The reviewers unanimously recommend acceptance. Many important clarification points were discussed in the author rebuttal. Please make sure to incorporate those changes into the final camera-ready version.
train
[ "D80W31FzJWa", "KcbTvSU0bPQ", "yZfLxO37h0G", "Fov8rxK3oC", "lp8BmfgidH", "j42QMjC59l", "-bMD5lXmTHC", "cug2HWC62l", "1bt5plvhiFOJ", "ugHzrL3szb", "tJetVtOr81", "KwPTU0xzOrp", "15C32DfYrDA", "J2ecS2H9NsS", "k8Ug6T7C66J", "bip1NVTjM7Z" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Your method is performing better than the baselines and the rejection sampling is a neat idea. I would like to increase my score.", " Thank you for reviewing and providing constructive feedbacks. We will for sure incorporate the response in the revised version. ", " I believe the paper has more merits than fl...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 2 ]
[ "cug2HWC62l", "yZfLxO37h0G", "lp8BmfgidH", "-bMD5lXmTHC", "j42QMjC59l", "ugHzrL3szb", "bip1NVTjM7Z", "k8Ug6T7C66J", "J2ecS2H9NsS", "J2ecS2H9NsS", "nips_2022_hHrO6-IfskR", "nips_2022_hHrO6-IfskR", "nips_2022_hHrO6-IfskR", "nips_2022_hHrO6-IfskR", "nips_2022_hHrO6-IfskR", "nips_2022_hHrO...
nips_2022_GiUpEVQmNx8
SAPD+: An Accelerated Stochastic Method for Nonconvex-Concave Minimax Problems
We propose a new stochastic method SAPD+ for solving nonconvex-concave minimax problems of the form $\min\max\mathcal{L}(x,y)=f(x)+\Phi(x,y)-g(y)$, where $f,g$ are closed convex and $\Phi(x,y)$ is a smooth function that is weakly convex in $x$, (strongly) concave in $y$. For both strongly concave and merely concave settings, SAPD+ achieves the best known oracle complexities of $\mathcal{O}(L\kappa_y\epsilon^{-4})$ and $\mathcal{O}(L^3\epsilon^{-6})$, respectively, without assuming compactness of the problem domain, where $\kappa_y$ is the condition number, and $L$ is the Lipschitz constant. We also propose SAPD+ with variance reduction, which enjoys the best known oracle complexity of $\mathcal{O}(L\kappa_y^2\epsilon^{-3})$ for weakly convex-strongly concave setting. We demonstrate the efficiency of SAPD+ on a distributionally robust learning problem with a nonconvex regularizer and also on a multi-class classification problem in deep learning.
Accept
The authors propose a new algorithm that attains the state-of-the-art rates for an important non-convex concave problem template, which includes distributionally robust learning problems as a special case. The main contribution of the work is the new proof technique that goes beyond [42] to obtain the new rates. However, the supplementary material (49 pages!) that contains the proof is quite messy and difficult to verify. On one hand, the appendix needs to be completely rewritten so that one can verify the full proof. On the other hand, during the rebuttal the authors provided a proof sketch that makes sense. The numerical demonstrations do support the authors' case. As a result, after discussions with the SAC, the AC decided to give the authors the benefit of the doubt. We advise the authors to completely clean up the proof in the appendix for the camera-ready version and make it accessible.
train
[ "nXp6BOQPJlu", "bpb2BHXhE3D", "uiqvOeKO4L", "9AQHJTmwXp8", "KdUVKpDBSt6", "dcLMrMrVmYa", "MpKxNtYNUsa", "tyqan038WNw", "jlh4pwYy23F" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the quick response, which basically clarifies all confusions. Authors can integrate the above sketch into the revision.\n\nAlso you mentioned this is the first analysis based on GNME in WCSC. I understand that Moreau envelope will smoothen the primal function in WCMC case. But in WCSC, the primal fu...
[ -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 1, 4 ]
[ "bpb2BHXhE3D", "uiqvOeKO4L", "dcLMrMrVmYa", "jlh4pwYy23F", "tyqan038WNw", "MpKxNtYNUsa", "nips_2022_GiUpEVQmNx8", "nips_2022_GiUpEVQmNx8", "nips_2022_GiUpEVQmNx8" ]
nips_2022_wJwHTgIoE0P
Procedural Image Programs for Representation Learning
Learning image representations using synthetic data allows training neural networks without some of the concerns associated with real images, such as privacy and bias. Existing work focuses on a handful of curated generative processes which require expert knowledge to design, making it hard to scale up. To overcome this, we propose training with a large dataset of twenty-one thousand programs, each one generating a diverse set of synthetic images. These programs are short code snippets, which are easy to modify and fast to execute using OpenGL. The proposed dataset can be used for both supervised and unsupervised representation learning, and reduces the gap between pre-training with real and procedurally generated images by 38%.
Accept
This work presents a method for training neural nets on synthetic data. This data is collected from a collection of thousands of OpenGL programs that render images, which are then used for representation learning. The big advantage of this approach is that it avoids a lot of the biases that are present in natural image datasets. The proposed method is competitive in supervised and unsupervised scenarios. I find the results, especially those with finetuning (as done during the rebuttal period), relatively compelling. While I agree with reviewer vdrw that, on the whole, the major contribution (an image collection) is not very strong, I still think this kind of approach will be widely interesting to the NeurIPS community, precisely because the work shows carefully (albeit only empirically) that procedurally generated datasets could be useful for representation learning, especially if you want to avoid the various pitfalls of natural image sets.
test
[ "xm8lST3bNYK", "ImqaVI9LeNz", "hrEYMqUiOa", "Jv4Av9DCnpH", "qs3yYXLr4-", "EAui49xWK5U", "lyh9sgUSxmL", "KgDxTq_QTT-", "XUOfK3gq5o", "bj1juJ4LBJ6s", "XMBmS3Zpx2j", "IC50ys2knWU" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Could the reviewer engage in the discussion of our rebuttal, specially the following points of their review, which we believe to be incorrect:\n- ***W1. only shows the performance of their approach on the ImageNet dataset*.** We perform experiments on 21 dataset-tasks.\n- ***W3. More experimental results: What ha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 4 ]
[ "bj1juJ4LBJ6s", "lyh9sgUSxmL", "Jv4Av9DCnpH", "qs3yYXLr4-", "EAui49xWK5U", "IC50ys2knWU", "XMBmS3Zpx2j", "nips_2022_wJwHTgIoE0P", "bj1juJ4LBJ6s", "nips_2022_wJwHTgIoE0P", "nips_2022_wJwHTgIoE0P", "nips_2022_wJwHTgIoE0P" ]
nips_2022_wk5zDkuSHq
Online Agnostic Multiclass Boosting
Boosting is a fundamental approach in machine learning that enjoys both strong theoretical and practical guarantees. At a high level, boosting algorithms cleverly aggregate weak learners to generate predictions with arbitrarily high accuracy. In this way, boosting algorithms convert weak learners into strong ones. Recently, Brukhim et al. [2020] extended boosting to the online agnostic binary classification setting. A key ingredient in their approach is a clean and simple reduction to online convex optimization, one that efficiently converts an arbitrary online convex optimizer to an agnostic online booster. In this work, we extend this reduction to multiclass problems and give the first boosting algorithm for online agnostic multiclass classification. Our reduction also enables the construction of algorithms for statistical agnostic, online realizable, and statistical realizable multiclass boosting.
Accept
The reviewers agree that this is a solid contribution. Please do revise the paper according to the reviewers' comments and the discussion.
train
[ "N_D1GyuF3I0", "vcAcZnGmFfd", "yV6Vgw1C5Y", "1rW6sQxFPu-", "tciBIKjJKEgF", "T2Cm6_DtCwI", "KivOQauLvBr", "n4Rl4ruJFX8", "yTcY_2TySh", "HGS-SVehL9J", "shnlQj-T8m", "bNpUZpGdx8y", "b-rZocpcoQW", "CT7u1TjJs5Z" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors, \n\nThanks for your response.\n \nI do not fully understand your answer that the weak learner does not observe the reference labels $\\ell_t^i$, because they seem to be a crucial part of measuring its performance in Definition 1, and they are also used in the example constructions of weak learners i...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 3 ]
[ "vcAcZnGmFfd", "CT7u1TjJs5Z", "CT7u1TjJs5Z", "b-rZocpcoQW", "b-rZocpcoQW", "b-rZocpcoQW", "bNpUZpGdx8y", "shnlQj-T8m", "shnlQj-T8m", "shnlQj-T8m", "nips_2022_wk5zDkuSHq", "nips_2022_wk5zDkuSHq", "nips_2022_wk5zDkuSHq", "nips_2022_wk5zDkuSHq" ]
nips_2022_x56v-UN7BjD
Momentum Aggregation for Private Non-convex ERM
We introduce new algorithms and convergence guarantees for privacy-preserving non-convex Empirical Risk Minimization (ERM) on smooth $d$-dimensional objectives. We develop an improved sensitivity analysis of stochastic gradient descent on smooth objectives that exploits the recurrence of examples in different epochs. By combining this new approach with recent analysis of momentum with private aggregation techniques, we provide an $(\epsilon,\delta)$-differential private algorithm that finds a gradient of norm $O\left(\frac{d^{1/3}}{(\epsilon N)^{2/3}}\right)$ in $O\left(\frac{N^{7/3}\epsilon^{4/3}}{d^{2/3}}\right)$ gradient evaluations, improving the previous best gradient bound of $\tilde O\left(\frac{d^{1/4}}{\sqrt{\epsilon N}}\right)$.
Accept
The authors obtain improved guarantees for differentially private smooth, non-convex empirical risk minimization. Following the discussion stage, there appears to be a consensus in favor of accepting this paper, due largely to the clear theoretical improvement and the fundamental nature of the problem. However, reviewer nfxU, who remains unenthusiastic about the paper, is still concerned about a lack of clarity regarding a number of algorithm design choices: the necessity of making the entire optimization trajectory DP, whether or not the magnitude of noise changes between steps, and whether shuffling the data is truly necessary. While these points do not cause direct concern over the correctness of the results, they are nevertheless important to clarify, and I urge the authors to do so thoroughly in the camera-ready revision.
train
[ "aphXjwDQtrP", "WrYWHYyPIYV", "DPU2hVS5Ldn", "0jA8ZaNZtVh", "XSVB8T8thli", "BLaGrEhX44A", "pJUa5WBQd3", "_gLSYJUHpH_", "Wy0o5g_AkBx" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " 1. Thank you. Putting the results in perspective would be nice. Proving lower bounds in this case is indeed difficult, and at least the paper should comment on this aspect.\n\n2. Thank you for the clarification.\n\n3. Thank you. I can see some notational improvements, yet I still have some major concerns about th...
[ -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "WrYWHYyPIYV", "DPU2hVS5Ldn", "0jA8ZaNZtVh", "Wy0o5g_AkBx", "_gLSYJUHpH_", "pJUa5WBQd3", "nips_2022_x56v-UN7BjD", "nips_2022_x56v-UN7BjD", "nips_2022_x56v-UN7BjD" ]
nips_2022_lzZstLVGVGW
Earthformer: Exploring Space-Time Transformers for Earth System Forecasting
Conventionally, Earth system (e.g., weather and climate) forecasting relies on numerical simulation with complex physical models and hence is both expensive in computation and demanding on domain expertise. With the explosive growth of spatiotemporal Earth observation data in the past decade, data-driven models that apply Deep Learning (DL) are demonstrating impressive potential for various Earth system forecasting tasks. The Transformer as an emerging DL architecture, despite its broad success in other domains, has limited adoption in this area. In this paper, we propose Earthformer, a space-time Transformer for Earth system forecasting. Earthformer is based on a generic, flexible and efficient space-time attention block, named Cuboid Attention. The idea is to decompose the data into cuboids and apply cuboid-level self-attention in parallel. These cuboids are further connected with a collection of global vectors. We conduct experiments on the MovingMNIST dataset and a newly proposed chaotic $N$-body MNIST dataset to verify the effectiveness of cuboid attention and figure out the best design of Earthformer. Experiments on two real-world benchmarks about precipitation nowcasting and El Niño/Southern Oscillation (ENSO) forecasting show that Earthformer achieves state-of-the-art performance.
Accept
This work proposes a Transformer-based architecture using 3D blocks for spatio-temporal prediction, designed specifically for Earth system forecasting applications. The authors show that this method considerably outperforms other approaches both on two climate/weather forecasting tasks and on unrelated synthetic tasks. The reviewers agree that this is a strong submission, and it addresses an important area of problems too often neglected by our community. While one reviewer held an opinion conflicting with those of the other three, this review was completed 20 days after the deadline, and no response was made by this reviewer to the author rebuttal despite my requests to the reviewer. I believe that their concerns have been adequately addressed by the authors. I accordingly recommend acceptance of this paper.
val
[ "ueTRkt1EsRP", "CITFt7HTbv5", "VigWZnGRxPV", "fN7CA0FXg43", "pOXuGm2nfat", "X0LVMcuOW-4L", "xwhbiYmHEX", "rdUQGEB1IRa", "gyXLNIMfA0eu", "3bWvKMwgyW", "uOgUmPwTYQU", "TXQq_mgEHYd", "GFf54gBKGIw", "sXF42U7z4S5", "mkdWTAiE3aU", "wRi8Y6z2ykb", "dAyrYaRFfo3" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. \n\nW1: The reviewer still holds the viewpoint that similar ideas have been pointed out (e.g. Coordination Among Neural Modules Through a Shared Global Workspace: https://arxiv.org/abs/2103.01197). Based on that idea, even if the architecture implemented is novel, it is still more lik...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4 ]
[ "wRi8Y6z2ykb", "sXF42U7z4S5", "sXF42U7z4S5", "sXF42U7z4S5", "sXF42U7z4S5", "sXF42U7z4S5", "dAyrYaRFfo3", "dAyrYaRFfo3", "wRi8Y6z2ykb", "wRi8Y6z2ykb", "mkdWTAiE3aU", "mkdWTAiE3aU", "nips_2022_lzZstLVGVGW", "nips_2022_lzZstLVGVGW", "nips_2022_lzZstLVGVGW", "nips_2022_lzZstLVGVGW", "nip...
nips_2022_igMc_C9pgYG
Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative
This paper targets improving the generalizability of hypergraph neural networks in the low-label regime, through applying the contrastive learning approach from images/graphs (we refer to it as HyperGCL). We focus on the following question: How to construct contrastive views for hypergraphs via augmentations? Our solution is two-fold. First, guided by domain knowledge, we fabricate two schemes to augment hyperedges with higher-order relations encoded, and adopt three vertex augmentation strategies from graph-structured data. Second, in search of more effective views in a data-driven manner, we for the first time propose a hypergraph generative model to generate augmented views, and then an end-to-end differentiable pipeline to jointly learn hypergraph augmentations and model parameters. Our technical innovations are reflected in designing both fabricated and generative augmentations of hypergraphs. The experimental findings include: (i) Among fabricated augmentations in HyperGCL, augmenting hyperedges provides the most numerical gains, implying that higher-order information in structures is usually more downstream-relevant; (ii) Generative augmentations do better in preserving higher-order information to further benefit generalizability; (iii) HyperGCL also boosts robustness and fairness in hypergraph representation learning. Codes are released at https://github.com/weitianxin/HyperGCL.
Accept
First of all, the majority of reviewers very clearly acknowledged the clarity of presentation and writing within the manuscript. There might not be a large degree of novelty in the paper, but the approach is well evaluated and exhibits good performance. After carefully reading the author responses to all reviewer comments, my impression is that the authors spent quite some time addressing all raised questions in a satisfactory manner. While the scores of the paper are certainly borderline, I think this paper warrants acceptance, which I do recommend at this point.
train
[ "KU_Exeg2AcP", "kWa5VvWVQFy", "uBsCJ7R5nf", "xzVYwrODUuL", "_1kBEJin6O", "6RnjfICIoKs", "-2Gv0z0_IgE", "l0AkRAOOD8W", "p82pPbCEdee", "1NvK35IOtNg", "AZTzsZaSe2C", "0QybOcXiqPj", "DXtg1Lpnr3W", "Jg3fC4lT1aL", "lw8--OYZ560", "dpjVjt-kZ41", "tHZfJWLAkc", "DVyow2KGQl0", "m_lwMHcMfWJ"...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Dear reviewer 7pvs,\n\nWe sincerely appreciate your follow-up discussion with us. We hope that you have found our clarification clear. We would be thrilled to discuss more to address your remaining concerns, or if you have already found our response satisfactory, we humbly remind you of a fitting update of the ra...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 4, 4, 2 ]
[ "DXtg1Lpnr3W", "xzVYwrODUuL", "_1kBEJin6O", "DVyow2KGQl0", "DXtg1Lpnr3W", "dpjVjt-kZ41", "l0AkRAOOD8W", "1NvK35IOtNg", "1NvK35IOtNg", "7SkKwmkjHf", "oN2k4DePaH5", "pz0BPCnKpbN", "m_lwMHcMfWJ", "DVyow2KGQl0", "tHZfJWLAkc", "nips_2022_igMc_C9pgYG", "nips_2022_igMc_C9pgYG", "nips_2022...
nips_2022_tTWCQrgjuM
Data Augmentation MCMC for Bayesian Inference from Privatized Data
Differentially private mechanisms protect privacy by introducing additional randomness into the data. Restricting access to only the privatized data makes it challenging to perform valid statistical inference on parameters underlying the confidential data. Specifically, the likelihood function of the privatized data requires integrating over the large space of confidential databases and is typically intractable. For Bayesian analysis, this results in a posterior distribution that is doubly intractable, rendering traditional MCMC techniques inapplicable. We propose an MCMC framework to perform Bayesian inference from the privatized data, which is applicable to a wide range of statistical models and privacy mechanisms. Our MCMC algorithm augments the model parameters with the unobserved confidential data, and alternately updates each one. For the potentially challenging step of updating the confidential data, we propose a generic approach that exploits the privacy guarantee of the mechanism to ensure efficiency. We give results on the computational complexity, acceptance rate, and mixing properties of our MCMC. We illustrate the efficacy and applicability of our methods on a naïve-Bayes log-linear model and on a linear regression model.
Accept
This paper presents a general MCMC algorithm for approximating the joint posterior distribution of model parameters and private data, given output from a differentially private algorithm. The paper provides new tools for Bayesian inference under privacy constraints, which can be useful for the differential privacy community. There are a few suggestions from the reviewers/discussion. First, even though the authors claimed their results are fully general, they should consider toning it down or at least clarifying upfront their "Record Additivity" assumption, which seems non-trivial. As one of the reviewers remarked, one of the limitations of this approach is that it will not scale well with high-dimensional data. The AC suggests the authors add this limitation in the discussion of the paper. Other comments: While not critical, the AC also has a question regarding the following part: - In line 105: the assumption is that the density of the mechanism's output distribution is known. This has nothing to do with what privacy variant you use to analyze the mechanism.
train
[ "Nilx9gZHbD", "9ITYpeLEDFL", "LR1COahJuzT", "tHAWkiP_-GJ", "3UwXF0S1cfq", "lNQCzrkshRg", "5jy0ZDjPFCz" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking the time to respond. I have read the comments carefully. This paper seems to have minor contributions and most of my concerns still stand. I thus keep my score. ", " We are very appreciative that Reviewer xd2f saw the merits of our work. We hope that the discussion below will help clarify o...
[ -1, -1, -1, -1, 3, 5, 7 ]
[ -1, -1, -1, -1, 5, 4, 4 ]
[ "tHAWkiP_-GJ", "5jy0ZDjPFCz", "lNQCzrkshRg", "3UwXF0S1cfq", "nips_2022_tTWCQrgjuM", "nips_2022_tTWCQrgjuM", "nips_2022_tTWCQrgjuM" ]
nips_2022_Ls0yzIkEk1
Improving Zero-Shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions
Reinforcement learning (RL) agents are widely used for solving complex sequential decision-making tasks, but still exhibit difficulty generalizing to scenarios not seen during training. While prior online approaches demonstrated that using additional signals beyond the reward function can lead to better generalization capabilities in RL agents, i.e. using self-supervised learning (SSL), they struggle in the offline RL setting, i.e. learning from a static dataset. We show that the performance of online algorithms for generalization in RL can be hindered in the offline setting due to poor estimation of similarity between observations. We propose a new theoretically-motivated framework called Generalized Similarity Functions (GSF), which uses contrastive learning to train an offline RL agent to aggregate observations based on the similarity of their expected future behavior, where we quantify this similarity using generalized value functions. We show that GSF is general enough to recover existing SSL objectives while improving zero-shot generalization performance on two complex pixel-based offline RL benchmarks.
Accept
This paper studies an interesting problem, and overall the reviewers agreed the exposition and validation are sufficient. We encourage the authors to consider the issues raised by the reviewers and further improve the work in the final version.
train
[ "oUQeSrEFFi5", "zfXoljpjhZH", "uJ0266TiiPb", "hne7eO3J-71", "621xUHbYSwj", "Bdf_Zy97oNs", "NFOmXsZAWo5", "lEmGg_cRbaQ", "5v6LEiCXry", "qx4SG2RFWa9", "LDMUJ7AEC9" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi, excellent response!\n\nThe toughest question that I raised above may be the generality of GVFs cumulant function design. But one paper can not solve all the problems. And the proposed generalization performance based on GVF similarity makes sense to me. I will raise my score a little bit.\n\nThanks.", " Th...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "NFOmXsZAWo5", "hne7eO3J-71", "Bdf_Zy97oNs", "621xUHbYSwj", "LDMUJ7AEC9", "qx4SG2RFWa9", "5v6LEiCXry", "nips_2022_Ls0yzIkEk1", "nips_2022_Ls0yzIkEk1", "nips_2022_Ls0yzIkEk1", "nips_2022_Ls0yzIkEk1" ]
nips_2022_4OHRr7gmhd4
Learning to Attack Federated Learning: A Model-based Reinforcement Learning Attack Framework
We propose a model-based reinforcement learning framework to derive untargeted poisoning attacks against federated learning (FL) systems. Our framework first approximates the distribution of the clients' aggregated data using model updates from the server. The learned distribution is then used to build a simulator of the FL environment, which is utilized to learn an adaptive attack policy through reinforcement learning. Our framework is capable of learning strong attacks automatically even when the server adopts a robust aggregation rule. We further derive an upper bound on the attacker's performance loss due to inaccurate distribution estimation. Experimental results on real-world datasets demonstrate that the proposed attack framework significantly outperforms state-of-the-art poisoning attacks. This indicates the importance of developing adaptive defenses for FL systems.
Accept
The recommendation is based on the reviewers' comments, the area chair's personal evaluation, and the post-rebuttal discussion. This paper proposed a model-based reinforcement learning framework for data poisoning attacks on federated learning. All reviewers find the results convincing and valuable. The authors' rebuttal has successfully addressed the reviewers' concerns. Given the unanimous agreement, I am recommending acceptance.
train
[ "LA5f0y-BVws", "wjG87uL8qd", "ViUHCiL0Cn", "3Avs7Rnjy28", "vFPWbGfwLIB", "osFFTprMx2", "ytJ9SfnG3H", "MQess1Yhi1", "dqhwQkZplQ", "_jumTjQ6und", "TMFsuftDOVq", "VkbqCpCJGy1", "NqtBFXii3xY", "MJsmXvztcHT", "aFwGnEuMQc" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We hope that we have addressed your major concerns. If there are still concerns or questions, we would be happy to hear and discuss them. We would like to highlight our contributions to federated learning security by developing novel poisoning attacks in challenging settings. Our RL-based attack not only achieves...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "TMFsuftDOVq", "osFFTprMx2", "3Avs7Rnjy28", "ytJ9SfnG3H", "dqhwQkZplQ", "_jumTjQ6und", "MQess1Yhi1", "aFwGnEuMQc", "MJsmXvztcHT", "NqtBFXii3xY", "VkbqCpCJGy1", "nips_2022_4OHRr7gmhd4", "nips_2022_4OHRr7gmhd4", "nips_2022_4OHRr7gmhd4", "nips_2022_4OHRr7gmhd4" ]
nips_2022_g-I_qqceH2n
Communication-efficient distributed eigenspace estimation with arbitrary node failures
We develop an eigenspace estimation algorithm for distributed environments with arbitrary node failures, where a subset of computing nodes can return structurally valid but otherwise arbitrarily chosen responses. Notably, this setting encompasses several important scenarios that arise in distributed computing and data-collection environments such as silent/soft errors, outliers or corrupted data at certain nodes, and adversarial responses. Our estimator builds upon and matches the performance of a recently proposed non-robust estimator up to an additive $\tilde{O}(\sigma \sqrt{\alpha})$ error, where $\sigma^2$ is the variance of the existing estimator and $\alpha$ is the fraction of corrupted nodes.
Accept
The scores on this paper are mostly positive, with only one being below the threshold. That reviewer was primarily concerned about novelty and significance, which is a rather subjective matter, and at least one reviewer defended the paper in this regard (and all 3 other reviews indicated it to be sufficient). Most other concerns appear to have been generally resolved during the rebuttal period. Even the reviewer that defended the novelty/significance did acknowledge that the presentation could be significantly improved, particularly for making clear what the paper offers compared to the existing literature. I strongly encourage the authors to carefully consider the presentation of terminology, notation, and discussion of contributions/novelty, so that readers are able to grasp these things as easily as possible. Overall, while the decision is not quite definite, this paper does appear to pass the bar.
train
[ "H7QCFhCfbeX", "VR7rfCgyQqS", "OV-sFVJRWcS", "7O-L5wR1xJ5", "l0Q3eFqEbRB", "zV-KtiU7bGB", "xlMh8AKVVqY", "pX78vqrafbx", "GCdOkh9qjci", "V6PqHCsDzx", "tKAb3VfQ1YW", "ZF4gZOZCxi_", "yDJooC-ony" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your time and the bump in score.\n\nGiven our discussion, we will certainly add a note on setting $p$ appropriately (or possibly just restate the theorem with the probability and error bound we describe in our original response to you) in the revised manuscript.\n\nWe are also happy to hea...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "OV-sFVJRWcS", "tKAb3VfQ1YW", "xlMh8AKVVqY", "yDJooC-ony", "ZF4gZOZCxi_", "tKAb3VfQ1YW", "pX78vqrafbx", "V6PqHCsDzx", "nips_2022_g-I_qqceH2n", "nips_2022_g-I_qqceH2n", "nips_2022_g-I_qqceH2n", "nips_2022_g-I_qqceH2n", "nips_2022_g-I_qqceH2n" ]
nips_2022_UQJoGBNRX4
Weighted Mutual Learning with Diversity-Driven Model Compression
Online distillation attracts attention from the community as it simplifies the traditional two-stage knowledge distillation process into a single stage. Online distillation collaboratively trains a group of peer models, which are treated as students, and all students gain extra knowledge from each other. However, memory consumption and diversity among peers are two key challenges to the scalability and quality of online distillation. To address the two challenges, this paper presents a framework called Weighted Mutual Learning with Diversity-Driven Model Compression (WML) for online distillation. First, based on a hierarchical structure where peers share different parts, we leverage structured network pruning to generate diversified peer models and reduce the memory requirements. Second, rather than taking the average of peers, this paper, for the first time, leverages a bi-level formulation to estimate the relative importance of peers in closed form, to further boost the effectiveness of the distillation from each other. Extensive experiments show the generalization of the proposed framework, which outperforms existing online distillation methods on a variety of deep neural networks. More interestingly, as a byproduct, WML produces a series of pruned models under different model sizes in a single run, which also achieves competitive results compared with existing channel pruning methods.
Accept
This paper proposes an online distillation technique: Weighted Mutual Learning with Diversity-Driven Model Compression (WML). It uses a novel bi-level formulation that estimates the importance of the peers, thus enabling optimized training of the entire network of peers. The proposed method achieves significant improvements over the state of the art in terms of accuracy. The proposed idea is novel and could be applied to many tasks.
train
[ "v5VKz59JeP", "dTZ_IrmIdSK", "zxdc8q2w53B", "M3dXginKbAb", "V5igj1UvDVC", "WulhsNLXUHr", "XC6Y1cMbLkY", "606IPGw-u0Y", "kW9nKKiwNBO", "BcP3Dqsv7yt", "3jlVbiChviV", "L6i02NTUJN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for the rebuttal. I think my concerns have been addressed. I have raised my rating.", " Thank you for your responses! The revised version address most of my concerns, and I would like to raise my rating.", " Dear Reviewer WYtc,\n\nAs the discussion period is close to the end and we have ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "zxdc8q2w53B", "XC6Y1cMbLkY", "3jlVbiChviV", "nips_2022_UQJoGBNRX4", "L6i02NTUJN", "3jlVbiChviV", "BcP3Dqsv7yt", "kW9nKKiwNBO", "nips_2022_UQJoGBNRX4", "nips_2022_UQJoGBNRX4", "nips_2022_UQJoGBNRX4", "nips_2022_UQJoGBNRX4" ]
nips_2022_RnjDFZmGqli
NeuForm: Adaptive Overfitting for Neural Shape Editing
Neural representations are popular for representing shapes as they can be used for data cleanup, model completion, shape editing, and shape synthesis. Current neural representations can be categorized as either overfitting to a single object instance, or representing a collection of objects. However, neither allows accurate editing of neural scene representations: on the one hand, methods that overfit objects achieve highly accurate reconstructions but do not support editing, as they do not generalize to unseen object configurations; on the other hand, methods that represent a family of objects with variations do generalize but produce approximate reconstructions. We propose NeuForm to combine the advantages of both overfitted and generalizable representations by adaptively overfitting a generalizable representation to regions where reliable data is available, while using the generalizable representation everywhere else. We achieve this with a carefully designed architecture and an approach that blends the network weights of the two representations. We demonstrate edits that successfully reconfigure parts of human-made shapes, such as chairs, tables, and lamps, while preserving the accuracy of an overfitted shape representation. We compare with two state-of-the-art competitors and demonstrate clear improvements in terms of plausibility and fidelity of the resultant edits.
Accept
All the reviewers appreciated this work, which combines the generalization capabilities of shape-space models with the fidelity achieved by overfitting/optimizing a single shape.
train
[ "TqjO4ax18L0", "1IovuTPNJjU", "cV7rqbdGkKp", "8wkxZQXDAk", "GE5-ryT7M4L", "wZYTZJq_sf" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the feedback and suggestions. We will correct all smaller issues; below we address the main concerns:\n\n### Long overfitting times\nWe used a conservatively long overfitting schedule that was shared by all shapes in our experiment, to make sure the overfitting results for each shape had fully converge...
[ -1, -1, -1, 7, 8, 7 ]
[ -1, -1, -1, 4, 4, 5 ]
[ "wZYTZJq_sf", "GE5-ryT7M4L", "8wkxZQXDAk", "nips_2022_RnjDFZmGqli", "nips_2022_RnjDFZmGqli", "nips_2022_RnjDFZmGqli" ]
nips_2022_OVb3ZY0fzMk
Non-stationary Bandits with Knapsacks
In this paper, we study the problem of bandits with knapsacks (BwK) in a non-stationary environment. The BwK problem generalizes the multi-arm bandit (MAB) problem to model the resource consumption associated with playing each arm. At each time, the decision maker/player chooses to play an arm, and s/he will receive a reward and consume a certain amount of resource from each of the multiple resource types. The objective is to maximize the cumulative reward over a finite horizon subject to some knapsack constraints on the resources. Existing works study the BwK problem under either a stochastic or adversarial environment. Our paper considers a non-stationary environment which continuously interpolates between these two extremes. We first show that the traditional notion of variation budget is insufficient to characterize the non-stationarity of the BwK problem for a sublinear regret due to the presence of the constraints, and then we propose a new notion of global non-stationarity measure. We employ both non-stationarity measures to derive upper and lower bounds for the problem. Our results are based on a primal-dual analysis of the underlying linear programs and highlight the interplay between the constraints and the non-stationarity. Finally, we also extend the non-stationarity measure to the problem of online convex optimization with constraints and obtain new regret bounds accordingly.
Accept
The paper provides a nontrivial extension of dynamic regret minimization to knapsack bandit problems in the mixed regime (obliviously adversarially chosen distributions), and identifies new measures of complexity for constrained decision-making problems. Its algorithmic novelty is rather limited, though, as it borrows the 'usual' ideas of windowed averaging with statistical upper/lower bounds for enabling 'forgetting', which is crucially required in nonstationary environments. The reviewers for the paper are largely appreciative of the paper's contributions in developing and arguing for new measures of nonstationarity, in terms of dependence on both the constraint and the reward side. Their concerns have been responded to in detail by the author(s), explaining both technical and motivation-related aspects on one hand, and connections to existing related work on the other. With the belief that this line of work may spur other creative directions in constrained decision-making / bandit problems, I am happy to recommend acceptance. I urge the author(s) to use the extra page of space to incorporate their responses to many of the reviewers' queries.
train
[ "Jc-o5u7p5IJ", "30xWuidGLlQ", "suASnFU6woR", "ZBsiDQdQ_Eh", "H-Je6Rgf7gY", "fdgCHTb_Ds0", "LCPVAiUrlQz", "Z21Xe48Iz61", "-oSbHFd1ypM", "A6f0fvKhnC" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks very much for your comments. We appreciate your suggestions on the exposition of our paper and we will improve accordingly in future versions. We'd also like to take this opportunity to thank all the reviewers for the helpful feedbacks in improving our paper.\n\nWe don’t want to bother you, other reviewers...
[ -1, -1, -1, -1, -1, -1, 5, 4, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "30xWuidGLlQ", "suASnFU6woR", "-oSbHFd1ypM", "A6f0fvKhnC", "Z21Xe48Iz61", "LCPVAiUrlQz", "nips_2022_OVb3ZY0fzMk", "nips_2022_OVb3ZY0fzMk", "nips_2022_OVb3ZY0fzMk", "nips_2022_OVb3ZY0fzMk" ]
nips_2022_v2es9YoukWO
SKFlow: Learning Optical Flow with Super Kernels
Optical flow estimation is a classical yet challenging task in computer vision. One of the essential factors in accurately predicting optical flow is to alleviate occlusions between frames. However, it is still a thorny problem for current top-performing optical flow estimation methods due to insufficient local evidence to model occluded areas. In this paper, we propose the Super Kernel Flow Network (SKFlow), a CNN architecture to ameliorate the impacts of occlusions on optical flow estimation. SKFlow benefits from the super kernels which bring enlarged receptive fields to complement the absent matching information and recover the occluded motions. We present efficient super kernel designs by utilizing conical connections and hybrid depth-wise convolutions. Extensive experiments demonstrate the effectiveness of SKFlow on multiple benchmarks, especially in the occluded areas. Without pre-trained backbones on ImageNet and with a modest increase in computation, SKFlow achieves compelling performance and ranks $\textbf{1st}$ among currently published methods on the Sintel benchmark. On the challenging Sintel clean and final passes (test), SKFlow surpasses the best-published result in the unmatched areas ($7.96$ and $12.50$) by $9.09\%$ and $7.92\%$. The code is available at https://github.com/littlespray/SKFlow.
Accept
The paper proposes an optical flow estimation network using very large convolution kernels. Three reviewers consider the paper above the bar, while one considers it below the bar. After consulting the paper, reviews, and rebuttal, the area chair decides to side with the positive reviewers due to the strong optical flow estimation results on the benchmark datasets. The AC believes the paper could inspire the field to look into large convolution kernels for other visual recognition and processing tasks.
train
[ "fUAdUuZ8mE", "e7IGGcxvG_g", "O5c9rv30Sei", "PubecOR7fqN", "DcdYamTFo68", "xAiz03bYmaj", "yzt3AGnAYSJ", "EyKZxH3euYGm", "X_bRz2gINNX", "byNM5wNpM-d", "ojcmbhzgqW1", "eMZgaCbctpr", "rbbhJQsyQrP", "8miE5slXDVc", "TQOXHKQKWJd" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your feedback. We will include this discussion in the supplementary material.", " We thank the reviewer for the response. We do not regard it as a fundamental issue. Firstly, we contend that the relatively poor performance of ```SKMoE + GMAGRU-LargeKernel``` makes sense because it directly increas...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 9, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 4 ]
[ "PubecOR7fqN", "DcdYamTFo68", "xAiz03bYmaj", "X_bRz2gINNX", "ojcmbhzgqW1", "yzt3AGnAYSJ", "EyKZxH3euYGm", "TQOXHKQKWJd", "8miE5slXDVc", "rbbhJQsyQrP", "eMZgaCbctpr", "nips_2022_v2es9YoukWO", "nips_2022_v2es9YoukWO", "nips_2022_v2es9YoukWO", "nips_2022_v2es9YoukWO" ]
nips_2022_6at6rB3IZm
Towards Understanding Grokking: An Effective Theory of Representation Learning
We aim to understand grokking, a phenomenon where models generalize long after overfitting their training set. We present both a microscopic analysis anchored by an effective theory and a macroscopic analysis of phase diagrams describing learning performance across hyperparameters. We find that generalization originates from structured representations, whose training dynamics and dependence on training set size can be predicted by our effective theory (in a toy setting). We observe empirically the presence of four learning phases: comprehension, grokking, memorization, and confusion. We find representation learning to occur only in a "Goldilocks zone" (including comprehension and grokking) between memorization and confusion. Compared to the comprehension phase, the grokking phase stays closer to the memorization phase, leading to delayed generalization. The Goldilocks phase is reminiscent of "intelligence from starvation" in Darwinian evolution, where resource limitations drive discovery of more efficient solutions. This study not only provides intuitive explanations of the origin of grokking, but also highlights the usefulness of physics-inspired tools, e.g., effective theories and phase diagrams, for understanding deep learning.
Accept
There was a consensus among reviewers that this paper should be accepted. In particular, reviewers felt that the contribution of studying the rate at which different parts of a neural net absorb information, and the effects this has on learning and generalization, is a worthwhile one that would enrich the literature, and that the paper is a solid contribution to NeurIPS.
train
[ "A3E1fKgvDl0", "rKv7Ni0Gx_", "nmaTED2oyMyj", "3AFCpr3ScFV", "R7MQQC2Txg", "Svd4Idfc5TD", "ZQfA_84m3NU", "5mydj7FB-cR", "dVKC_Lqp41C", "0lmtJJJUIL3", "15-slHkOBk5" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your extensive rebuttal. You have addressed a number of my questions and concerns, and I am increasing my score to a 6: weak accept. I appreciate the new appendices, they add extra depth and understanding to the paper. The one remaining concern is scale: results on large-scale systems would still ad...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "rKv7Ni0Gx_", "nmaTED2oyMyj", "3AFCpr3ScFV", "dVKC_Lqp41C", "0lmtJJJUIL3", "ZQfA_84m3NU", "15-slHkOBk5", "nips_2022_6at6rB3IZm", "nips_2022_6at6rB3IZm", "nips_2022_6at6rB3IZm", "nips_2022_6at6rB3IZm" ]
nips_2022_ZxOO5jfqSYw
Dynamic Sparse Network for Time Series Classification: Learning What to “See”
The receptive field (RF), which determines the region of time series to be “seen” and used, is critical to improve the performance for time series classification (TSC). However, the variation of signal scales across and within time series data makes it challenging to decide on proper RF sizes for TSC. In this paper, we propose a dynamic sparse network (DSN) with sparse connections for TSC, which can learn to cover various RF without cumbersome hyper-parameter tuning. The kernels in each sparse layer are sparse and can be explored under the constraint regions by dynamic sparse training, which makes it possible to reduce the resource cost. The experimental results show that the proposed DSN model can achieve state-of-the-art performance on both univariate and multivariate TSC datasets with less than 50\% computational cost compared with recent baseline methods, opening the path towards more accurate resource-aware methods for time series analyses. Our code is publicly available at: \url{https://github.com/QiaoXiao7282/DSN}.
Accept
The paper presents a sparse network model for time series classification which dynamically determines receptive fields through the use of sparse kernels. The method matches state-of-the-art performance while reducing cost. All the reviewers agreed that the paper solves an important problem and that the authors conducted extensive experiments. A question of novelty was raised by reviewer Ni8e, although the authors included, in their response, a detailed description of how their work differs from existing methods. The authors also provided clarifications to the issues raised by the other reviewers, even adding new baselines, which were found convincing by reviewers 3Hph and skdR.
train
[ "zG313UCtPar", "4gG1N8vX6Tn", "EBLUaOHM5R4", "V63arc_-WYY", "TQTQhqInkXc", "-RHmijZX_ba", "okFTOgXAcJ", "3G-b-ezjif3", "x5XmGhtm_fF", "7JgFqsf2YO", "_-1q0O-SIJY", "YxjbK0wjFka", "U6sCu5e0i9R", "gedBMBCsO97", "V-rovi5U0Lx" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Reviewer Ni8e,\n\nToday is the deadline to respond to the rebuttal of paper 4394.\nPlease read the response asap and check whether it addresses your concerns.\n\nBest regards,\n\nThe AC", " Thanks for the comments! We have tried our best to address all the concerns and updated new experiments to address your co...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 1 ]
[ "_-1q0O-SIJY", "U6sCu5e0i9R", "3G-b-ezjif3", "V-rovi5U0Lx", "U6sCu5e0i9R", "okFTOgXAcJ", "x5XmGhtm_fF", "V-rovi5U0Lx", "7JgFqsf2YO", "gedBMBCsO97", "YxjbK0wjFka", "U6sCu5e0i9R", "nips_2022_ZxOO5jfqSYw", "nips_2022_ZxOO5jfqSYw", "nips_2022_ZxOO5jfqSYw" ]
nips_2022_VdUeCoF-0tS
Smooth Fictitious Play in Stochastic Games with Perturbed Payoffs and Unknown Transitions
Recent extensions to dynamic games of the well known fictitious play learning procedure in static games were proved to globally converge to stationary Nash equilibria in two important classes of dynamic games (zero-sum and identical-interest discounted stochastic games). However, those decentralized algorithms need the players to know exactly the model (the transition probabilities and their payoffs at every stage). To overcome these strong assumptions, our paper introduces regularizations of the recent algorithms which are, moreover, model-free (players don't know the transitions and their payoffs are perturbed at every stage). Our novel procedures can be interpreted as extensions to stochastic games of the classical smooth fictitious play learning procedures in static games (where players' best responses are regularized, thanks to a smooth perturbation of their payoff functions). We prove the convergence of our family of procedures to stationary regularized Nash equilibria in the same classes of dynamic games (zero-sum and identical-interest discounted stochastic games). The proof uses the continuous smooth best-response dynamics counterparts, and stochastic approximation methods. In the case of an MDP (a one-player stochastic game), our procedures globally converge to the optimal stationary policy of the regularized problem. In that sense, they can be seen as an alternative to the well known Q-learning procedure.
Accept
This paper examines the convergence of stochastic fictitious play (SFP) in certain classes of discounted stochastic games - more specifically, two-player zero-sum and potential / common-interest games. The players are not assumed to know the game's reward functions and/or transition probabilities beforehand, and instead "learn" - or, rather, estimate - these aspects of the game as they go. These estimates are subsequently used as proxies for the players' continuation payoffs, in the spirit of previous work by Leslie et al (2020) and Sayin et al (2020). The authors' proofs rely on asynchronous stochastic approximation techniques, and they leverage recent results of Baudin and Laraki (2022) to derive (and study) the mean-field, continuous-time limit of the SFP process. The reviewers appreciated the paper's technical contributions, and the authors addressed the reviewers' concerns satisfactorily during the discussion phase; as a result, a consensus was quickly reached for an "accept" decision. I concur with this assessment but, at the same time, I would urge the authors to pay particular attention to the comments of Reviewer gpaa regarding the positioning of earlier work and some issues with the precision (and clarity) of the mathematical writing. With this proviso, I am happy to recommend acceptance as well.
train
[ "f8w6ebEIDBR", "ohTqbkYnaK3", "jIg36TuecFu", "5j7wYh3qN3", "Yy07CqZonU7", "o6u1xw4u9Ig", "KpbePEktxsw", "uNqm8Ik26C", "7ruBAgUNlTX", "7ff6spkd2yQ", "st4hvUsMsQ", "TyIwefjHPsc", "wyQUQNNeb-e", "kbiY5syS00r", "iLZ_AiRgsps", "i6njiXYJty" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'm very pleased with the revision of the paper. I happy to confirm my acceptance of the paper.", " The authors addressed my questions satisfactorily. I only have a few more suggestions for the final version:\n\n1) I believe that for the model-based version also Sayin et all. 2021 only requires estimates for st...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 2, 3 ]
[ "7ff6spkd2yQ", "st4hvUsMsQ", "o6u1xw4u9Ig", "Yy07CqZonU7", "7ruBAgUNlTX", "KpbePEktxsw", "i6njiXYJty", "iLZ_AiRgsps", "kbiY5syS00r", "wyQUQNNeb-e", "TyIwefjHPsc", "nips_2022_VdUeCoF-0tS", "nips_2022_VdUeCoF-0tS", "nips_2022_VdUeCoF-0tS", "nips_2022_VdUeCoF-0tS", "nips_2022_VdUeCoF-0tS"...
nips_2022_QoHSzxp7tSN
Uncertainty-Aware Reinforcement Learning for Risk-Sensitive Player Evaluation in Sports Game
A major task of sports analytics is player evaluation. Previous methods commonly measured the impact of players' actions on desirable outcomes (e.g., goals or winning) without considering the risk induced by stochastic game dynamics. In this paper, we design an uncertainty-aware Reinforcement Learning (RL) framework to learn a risk-sensitive player evaluation metric from stochastic game dynamics. To embed the risk of a player’s movements into the distribution of action-values, we model their 1) aleatoric uncertainty, which represents the intrinsic stochasticity in a sports game, and 2) epistemic uncertainty, which is due to a model's insufficient knowledge regarding Out-of-Distribution (OoD) samples. We demonstrate how a distributional Bellman operator and a feature-space density model can capture these uncertainties. Based on such uncertainty estimation, we propose a Risk-sensitive Game Impact Metric (RiGIM) that measures players' performance over a season by conditioning on a specific confidence level. Empirical evaluation, based on over 9M play-by-play ice hockey and soccer events, shows that RiGIM correlates highly with standard success measures and has a consistent risk sensitivity.
Accept
The reviews on this paper were mixed. One of the reviewers raised several concerns about the work; although they were valid concerns, our assessment is that the authors have addressed them to a satisfactory degree in their rebuttal. This is a solid paper describing a systematic, careful application of techniques from supervised and reinforcement learning to the problem of evaluating sports players while taking risk into account. In preparing their final manuscript, we suggest the authors make an effort to explain the application as well as possible to the NeurIPS audience, who may not be familiar with player evaluation in sports games. The interesting discussion between authors and reviewers that followed the initial reviews can be a good source of the type of questions the community might be interested in. Although the authors have already modified the paper in response to this discussion, we suggest they try to incorporate as much of it as possible into the final version of the paper.
train
[ "2XLDjoj_za8", "gJsUq6gM4TK", "-QDLeIZZLPh", "35sRtfHRlFZ", "uG58sCVTY7C3", "X0vDSiBmHtN", "bcvsCInwnZo", "HwwN9b2rgRY", "35saM_bXU6G", "jlikju8c-by", "tFcLeCz4rv", "itSkFSyo8PG", "42zaXLZDWsxt", "ai2JwnOL34N", "oeJbWsf88rc", "YugNU84IUAw", "DQLULXMQTh", "5rax539skI" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The paper describes a risk-aware player evaluation metric, leveraging RF-specific ideas to quantify aleotoric and epistemic uncertainty. The paper was originally flagged by reviewer mdqR because of the involvement on human subjects, aka players here. \n\nBecause the work deals primarily with metric creation and i...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "nips_2022_QoHSzxp7tSN", "uG58sCVTY7C3", "oeJbWsf88rc", "jlikju8c-by", "X0vDSiBmHtN", "35saM_bXU6G", "HwwN9b2rgRY", "ai2JwnOL34N", "itSkFSyo8PG", "tFcLeCz4rv", "YugNU84IUAw", "5rax539skI", "ai2JwnOL34N", "DQLULXMQTh", "nips_2022_QoHSzxp7tSN", "nips_2022_QoHSzxp7tSN", "nips_2022_QoHSz...
nips_2022_STQOCn4NqBd
Doubly Robust Counterfactual Classification
We study counterfactual classification as a new tool for decision-making under hypothetical (contrary to fact) scenarios. We propose a doubly-robust nonparametric estimator for a general counterfactual classifier, where we can incorporate flexible constraints by casting the classification problem as a nonlinear mathematical program involving counterfactuals. We go on to analyze the rates of convergence of the estimator and provide a closed-form expression for its asymptotic distribution. Our analysis shows that the proposed estimator is robust against nuisance model misspecification, and can attain fast $\sqrt{n}$ rates with tractable inference even when using nonparametric machine learning approaches. We study the empirical performance of our methods by simulation and apply them for recidivism risk prediction.
Accept
Reviewers agreed that the paper makes interesting theoretical contributions to the problem of counterfactual classification, with a novel and valuable result. The reviewers pointed out several aspects in which the empirical evaluation could be more thorough, though it is my view that it is definitely sufficient for validating the paper's main claims. Finally, the paper is well-written and presents the math crisply.
train
[ "P0T0YlBPBrs", "1Zoukib0hlJ", "JiIO2FpAA01", "kPKGrhOnrTn", "zwZrxUJl_qt", "BQtLoVTESt5", "hc9I6LrdInHY", "DdnB18QmJqt", "dEJPet6CAxq", "XfZhwAdDoS", "kk455ZnIPoT7", "wa1gm09ilgg", "-YWZWbRe9Ym", "Ft9OA6TDg6r", "U7S-yHxhJpc", "fTP0QrlC6X2", "5YzsStlbJoF", "o3UmyO5eeT3", "w4_0yT6P...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "...
[ " Thanks for your careful answers. Most of my concerns are addressed. But I think this paper still lacks more experimental discussions. Also, there remain some technical points to clarify. In general, this paper gives solid theoretical results. Therefore, I keep the original rating. ", " Thank you again for your ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 2 ]
[ "DdnB18QmJqt", "JiIO2FpAA01", "U7S-yHxhJpc", "BQtLoVTESt5", "XfZhwAdDoS", "hc9I6LrdInHY", "dEJPet6CAxq", "kk455ZnIPoT7", "Ft9OA6TDg6r", "wa1gm09ilgg", "-YWZWbRe9Ym", "w4_0yT6PYrr", "o3UmyO5eeT3", "5YzsStlbJoF", "fTP0QrlC6X2", "nips_2022_STQOCn4NqBd", "nips_2022_STQOCn4NqBd", "nips_...
nips_2022_ahAEhOtVif
Manifold Interpolating Optimal-Transport Flows for Trajectory Inference
We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow) that learns stochastic, continuous population dynamics from static snapshot samples taken at sporadic timepoints. MIOFlow combines dynamic models, manifold learning, and optimal transport by training neural ordinary differential equations (Neural ODE) to interpolate between static population snapshots as penalized by optimal transport with manifold ground distance. Further, we ensure that the flow follows the geometry by operating in the latent space of an autoencoder that we call a geodesic autoencoder (GAE). In GAE the latent space distance between points is regularized to match a novel multiscale geodesic distance on the data manifold that we define. We show that this method is superior to normalizing flows, Schr\"odinger bridges and other generative models that are designed to flow from noise to data in terms of interpolating between populations. Theoretically, we link these trajectories with dynamic optimal transport. We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment.
Accept
Even though several concerns have been raised by multiple reviewers, the reviewers largely agreed on the significance and novelty of the contributions. The authors provided quite detailed responses and given all the data, overall I believe the strengths of the paper outweigh its weaknesses. Hence I am recommending an acceptance.
train
[ "7hr7P_fG14W", "Sluj_MjEIos", "VbnW1qGcEzf", "VG8lb1UGtSZ", "d14z7B9Edm8", "9gbq4UZ0cS5", "cTMzDwlqOEs", "8aUmb2hh_a6", "0wBPUjDhTC_", "qgM15i9evxG", "fmr_dOvspyrl", "dURnywcc6t9", "wTZTmk3ZHEkN", "r-n_1wXzM2j", "aNdP1jf3NIo", "k6CVcEmHg7Q", "tuoueFhDYZ", "IHe0SWkyRgj", "O5oQuRkw...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " Thank you for your feedback.\n\n> demonstrate that without geodesic requirement (i.e., we use some standard AE), we get worse results (kind of an ablation study).\n\nWe note that we have included an ablation study showing that without the geodesic requirements we get worse results. This is shown qualitatively in ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 1, 2 ]
[ "Sluj_MjEIos", "VG8lb1UGtSZ", "9gbq4UZ0cS5", "d14z7B9Edm8", "cTMzDwlqOEs", "aNdP1jf3NIo", "tuoueFhDYZ", "fmr_dOvspyrl", "qgM15i9evxG", "k6CVcEmHg7Q", "nips_2022_ahAEhOtVif", "wTZTmk3ZHEkN", "r-n_1wXzM2j", "tuoueFhDYZ", "k6CVcEmHg7Q", "6PAC5vgVwgh", "IHe0SWkyRgj", "TywpUOghXFI", "...
nips_2022_zb-xfApk4ZK
Local-Global MCMC kernels: the best of both worlds
Recent works leveraging learning to enhance sampling have shown promising results, in particular by designing effective non-local moves and global proposals. However, learning accuracy is inevitably limited in regions where little data is available such as in the tails of distributions as well as in high-dimensional problems. In the present paper we study an Explore-Exploit Markov chain Monte Carlo strategy ($\operatorname{Ex^2MCMC}$) that combines local and global samplers showing that it enjoys the advantages of both approaches. We prove $V$-uniform geometric ergodicity of $\operatorname{Ex^2MCMC}$ without requiring a uniform adaptation of the global sampler to the target distribution. We also compute explicit bounds on the mixing rate of the Explore-Exploit strategy under realistic conditions. Moreover, we propose an adaptive version of the strategy ($\operatorname{FlEx^2MCMC}$) where a normalizing flow is trained while sampling to serve as a proposal for global moves. We illustrate the efficiency of $\operatorname{Ex^2MCMC}$ and its adaptive version on classical sampling benchmarks as well as in sampling high-dimensional distributions defined by Generative Adversarial Networks seen as Energy Based Models.
Accept
Three domain experts recommended acceptance for this paper. One reviewer in particular made a confident and strong recommendation. I find all three convincing. Reviewers agree that the paper is clear (both in presenting the motivation and the argument), and that the contribution to theory is substantial (proving desirable mixing properties and convergence guarantees, and developing an adaptive variant of the algorithm). Reviewers also agree on the whole that the empirical section adequately supports the paper's main point, even though (as Reviewer PqJ7 mentioned) larger-scale simulations would strengthen it further. There were some questions raised early on about related work. Reviewer PqJ7 pointed out specific related lines of work worth discussing in the paper, and a thread with Reviewer 9sXT drew some comparisons to another paper that combines local and global MCMC kernels. In both cases, concerns seemed adequately addressed during the discussion that ensued, and were covered in the draft update. Reviewer PqJ7 initially suggested computing additional metrics (namely FID) as part of the experiments, and the authors did this (adding an IS calculation as well). Discussion was productive – a case where the course of review strengthened the paper slightly further. Thanks to the reviewers for being thorough and responsive and to the authors for incorporating their feedback well. Altogether this is a nice and well-presented result.
train
[ "sScPUlZLpRi", "2royFBU6ogT", "KYoYQoa8Kq_", "bwn1WhhMXCb", "LmLfc4IqCAfi", "SROkEYuTbJd", "1Mufk2LVGYd", "8IGO62WpkzG", "ZNbTaouDl_x", "c3cuXQO5JM8", "8kZWdOezT8p5", "zCW2xZgskya", "N1Z22pLHDmM", "D5J6QTP2dt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their responses.\n\nI'm a bit confused by the reasons that the authors did not compare with [1]. Doesn't [1] already demonstrate the benefit from combining local and global kernels? The difference of the proposed method seems to be only the global kernel and the mentioned advantage also co...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "SROkEYuTbJd", "bwn1WhhMXCb", "8kZWdOezT8p5", "zCW2xZgskya", "N1Z22pLHDmM", "N1Z22pLHDmM", "N1Z22pLHDmM", "N1Z22pLHDmM", "N1Z22pLHDmM", "nips_2022_zb-xfApk4ZK", "D5J6QTP2dt", "nips_2022_zb-xfApk4ZK", "nips_2022_zb-xfApk4ZK", "nips_2022_zb-xfApk4ZK" ]
nips_2022_vsShetzoRG9
InsNet: An Efficient, Flexible, and Performant Insertion-based Text Generation Model
We propose InsNet, an expressive insertion-based text generator with efficient training and flexible decoding (parallel or sequential). Unlike most existing insertion-based text generation works that require re-encoding of the (decoding) context after each insertion operation and thus are inefficient to train, InsNet only requires one pass of context encoding for the entire insertion sequence during training by using a novel insertion-oriented position encoding to enable computation sharing. Furthermore, InsNet provides a controllable switch between parallel and sequential decoding, making it flexible to handle more parallelizable tasks such as machine translation to support efficient decoding, or less parallelizable tasks such as lexically constrained text generation to guarantee high-quality outputs. Experiments on two unsupervised lexically constrained text generation datasets and three machine translation datasets demonstrate InsNet’s advantages over previous insertion-based methods in terms of training speed, inference efficiency, and generation quality.
Accept
There is consensus among 3/4 reviewers regarding acceptance, and the review that suggests borderline rejection also admits the merit of the paper. Given the newly proposed method for insertion-based generation in transformers and the experimental results, I think the paper should be accepted to NeurIPS.
train
[ "fhP0CEwbU15", "BSZdhk1Xfk", "YCvSVfJkwba", "QIDQg7cVhAI", "0XeeluZuSIE", "mwhAt3J4oX2", "zPgfb_j5JWY", "Gn1T2EoRB4c", "1b_fGv7Z7y", "VRLLUapjHp", "6TX0HO77NV", "vKfn5vUXSTQ" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nWe just added an illustrative graph for the model overview with illustrations of each component (position encoding, context encoder, slot representation, and the adaptive parallelization algorithm) and some generation samples to the appendix. We hope they help demonstrate the model better and pr...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "6TX0HO77NV", "mwhAt3J4oX2", "nips_2022_vsShetzoRG9", "vKfn5vUXSTQ", "6TX0HO77NV", "VRLLUapjHp", "1b_fGv7Z7y", "nips_2022_vsShetzoRG9", "nips_2022_vsShetzoRG9", "nips_2022_vsShetzoRG9", "nips_2022_vsShetzoRG9", "nips_2022_vsShetzoRG9" ]
nips_2022_0Z0xltoU1q
Accelerated Projected Gradient Algorithms for Sparsity Constrained Optimization Problems
We consider the projected gradient algorithm for the nonconvex best subset selection problem that minimizes a given empirical loss function under an $\ell_0$-norm constraint. Through decomposing the feasible set of the given sparsity constraint as a finite union of linear subspaces, we present two acceleration schemes with global convergence guarantees, one by same-space extrapolation and the other by subspace identification. The former fully utilizes the problem structure to greatly accelerate the optimization speed with only negligible additional cost. The latter leads to a two-stage meta-algorithm that first uses classical projected gradient iterations to identify the correct subspace containing an optimal solution, and then switches to a highly-efficient smooth optimization method in the identified subspace to attain superlinear convergence. Experiments demonstrate that the proposed accelerated algorithms are magnitudes faster than their non-accelerated counterparts as well as the state of the art.
Accept
The paper studies sparsity-constrained optimization problems, in which the goal is to optimize a convex, Lipschitz function subject to an L0 constraint. It studies two algorithms: one based on extrapolation within the current support, and another which applies a second-order method within the current support. The paper's theoretical results include (i) global convergence results for projected gradient methods with a locally linear rate, and (ii) fast (linear, superlinear) rates for the proposed accelerated methods. Experiments show that the proposed methods indeed outperform projected gradient, and enjoy very fast convergence once the correct subspace is identified. Reviewers all note that the paper's theoretical results improve over the existing literature on convergence for sparsity-constrained problems. After interacting with the authors, the reviewers converged to a recommendation to accept the paper; the AC concurs.
train
[ "qR9e9on_i0", "un7mRKrsHdB", "YqVMIDITjVo", "UoWkKooT12B8", "_Tmits01UhL", "aHEE988o2Zb", "mdrJ_Kt37nv", "SUwmIu4nMAcN", "1YKTygLJarF", "YAf4onzPuq", "Q0FF9tJXmE", "AlLvj6NfUvd", "uMwi9tArwIb", "XJDiIR6j6Pf", "4stWBQVwe8t" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thanks the authors for the time and effort in addressing my comments and providing clarifications. I appreciate the authors' effort in discussing related approaches and providing additional experimental results.\n\n \n", " What we meant to express is that how the algorithm is executed by a user is irrelevant ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "aHEE988o2Zb", "YqVMIDITjVo", "UoWkKooT12B8", "uMwi9tArwIb", "AlLvj6NfUvd", "AlLvj6NfUvd", "uMwi9tArwIb", "XJDiIR6j6Pf", "XJDiIR6j6Pf", "4stWBQVwe8t", "nips_2022_0Z0xltoU1q", "nips_2022_0Z0xltoU1q", "nips_2022_0Z0xltoU1q", "nips_2022_0Z0xltoU1q", "nips_2022_0Z0xltoU1q" ]
nips_2022_67NpH8-_h94
Provably sample-efficient RL with side information about latent dynamics
We study reinforcement learning (RL) in settings where observations are high-dimensional, but where an RL agent has access to abstract knowledge about the structure of the state space, as is the case, for example, when a robot is tasked to go to a specific room in a building using observations from its own camera, while having access to the floor plan. We formalize this setting as transfer reinforcement learning from an "abstract simulator," which we assume is deterministic (such as a simple model of moving around the floor plan), but which is only required to capture the target domain's latent-state dynamics approximately up to unknown (bounded) perturbations (to account for environment stochasticity). Crucially, we assume no prior knowledge about the structure of observations in the target domain except that they can be used to identify the latent states (but the decoding map is unknown). Under these assumptions, we present an algorithm, called TASID, that learns a robust policy in the target domain, with sample complexity that is polynomial in the horizon, and independent of the number of states, which is not possible without access to some prior knowledge. In synthetic experiments, we verify various properties of our algorithm and show that it empirically outperforms transfer RL algorithms that require access to "full simulators" (i.e., those that also simulate observations).
Accept
This paper studies how to improve learning efficiency in settings where the agent has access to an abstract, simplified model of the world. All the reviewers voted to accept, and each pointed out useful clarifications and improvements to the paper. The main contributions of this work are the novel problem setting, the algorithms proposed, and the theoretical results. There are some proof-of-concept experiments included as well. The paper is not without flaws, though: - the intro dismisses two related works because those algorithms were "very different". This is either a communication problem or a bad excuse - the proof-of-concept experiments don't really match the motivation of the paper well. The toy environments are not representative of the targeted complexity (this is not to say they must be messy, high-dim real sensor data.) Most importantly, the paper has basic correctness issues with the experiments provided: - results are reported from five runs (seeds) in toy domains with standard deviation (which is likely not well estimated from 5 runs); instead, use a more conservative measure of confidence such as a student-t CI or bootstrap intervals - there is no clear discussion of how the baselines were tuned (they all use the same step-size in each experiment). With Adam the step-size is less sensitive, but it is very unlikely that the values of alpha used (and other hypers) were best. We need a clear description of the empirical methodology used here; researcher descent is a very biased process and not good enough --> This is particularly problematic because the abstract claims the new method outperforms the baselines. These empirical issues are common practice in the field but poor nonetheless. It is very likely all could be addressed and the main messages of the paper would continue to hold. The empirical write-up of this work significantly weakens an otherwise strong paper.
test
[ "lIGrpaCUwCr", "9Nh1T8oueRZ", "O61qhyniD0P", "JFwUXki8UVK", "CldAcUhfqg", "0dnFQNUl2Cn", "ciIKOJozt7D", "uaqLLmmf8kN", "IPH44H06Qzf", "u214rMrjhk", "a2nhrs5ROIH", "ciQruWiSBp", "h0yd5cwFlj9", "Kf1nJH4qu9M", "jG5sAt2dOgT" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers for their time and useful feedback. Based on the feedback, we have updated the paper to \n\n- Add a table of notations in the Appendix\n\n- Fix a typo and add some clarification in the proof (Reviewer 3Ver)\n \nWe will be happy to incorporate any additional suggestions to improve the manusc...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "nips_2022_67NpH8-_h94", "h0yd5cwFlj9", "IPH44H06Qzf", "a2nhrs5ROIH", "0dnFQNUl2Cn", "ciIKOJozt7D", "uaqLLmmf8kN", "jG5sAt2dOgT", "Kf1nJH4qu9M", "h0yd5cwFlj9", "ciQruWiSBp", "nips_2022_67NpH8-_h94", "nips_2022_67NpH8-_h94", "nips_2022_67NpH8-_h94", "nips_2022_67NpH8-_h94" ]
nips_2022_fLOU5jXlJZV
Differentially Private Online-to-batch for Smooth Losses
We develop a new reduction that converts any online convex optimization algorithm suffering $O(\sqrt{T})$ regret into an $\epsilon$-differentially private stochastic convex optimization algorithm with the optimal convergence rate $\tilde O(1/\sqrt{T} + 1/\epsilon T)$ on smooth losses in linear time, forming a direct analogy to the classical non-private ``online-to-batch'' conversion. By applying our techniques to more advanced adaptive online algorithms, we produce adaptive differentially private counterparts whose convergence rates depend on apriori unknown variances or parameter norms.
Accept
The paper makes original contributions to differentially private learning with convex and smooth losses by connecting to the vast parameter-free online learning literature. One of the reviewers read the paper in great detail and carefully checked the correctness of the results. The AC also took a close look and finds the results very nice. The reviewers and the AC further discussed the work and clarified some of the concerns raised, e.g., regarding computation; but the missing experiments make it hard to vouch for the practicality of the approach. Based on the theoretical contribution alone, we believe the paper is above the bar and happily recommend acceptance. The authors are encouraged to take into account the points below and consider adding benchmarking experiments. --------------- Some additional feedback / comments out of the discussion: 1. For DP-SCO, the optimal rates are known to be achievable by a linear-time algorithm (FKT'20). The proposed new algorithm thus does not improve over existing methods in either statistical or computational complexity. 2. If computation is not a concern, Noisy Gradient Descent with O(n^2) iterations (thus n^3 incremental gradient oracle calls) is known to provide an even stronger excess *empirical* risk bound that is optimal (without the additional log T in this paper). It is also very hard to beat in practice. 3. The main contribution of the proposed algorithm is then about new algorithmic techniques borrowed from adaptive online learning. This approach gives rise to a unified treatment of general convex and strongly convex problems, and the authors discuss how it helps to tap into other more adaptive / more parameter-free online learners. 4. Tree-aggregation for releasing gradient sequences by leveraging smoothness (and the stability implied by the anytime online-to-batch reduction) is cute. I think technically this is the most interesting idea. 5. I think the paper contains enough good results to be considered as a purely theoretical work for acceptance. That said, I do think the algorithm has the potential to be practical. It is a pity that the authors did not try. Though with the several layers of reductions and the binary-tree approach, it won't surprise me if the proposed method is not competitive against methods such as NoisyGD or NoisySGD. 6. Another positive aspect of this paper is that it is polished and the writing is pretty good. I particularly liked how the authors accurately describe their contribution in the title / abstract by clearly highlighting the key assumption of smoothness.
train
[ "VIbzSQVRmX", "l7_BTynXAM9", "RKJ_uGGZ3Q9", "6tbaiVTBqun", "r8OTJQoIcuY", "OKjhU3Qabx-", "yjfHwGvyYGM", "G-i8vfjAlE7" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your further comments. We respond to your comments regarding Q3 below:\n\nIn regular online-to-batch where $g_t = \\nabla\\ell(w_t,z_t)$, it’s true that $\\mathbb{E}[g_t | w_1,\\ldots, w_t] = \\nabla\\mathcal{L}(w_t)$. The text right after Theorem 1 is discussing this more typical and familiar setti...
[ -1, -1, -1, -1, -1, 4, 4, 8 ]
[ -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "l7_BTynXAM9", "RKJ_uGGZ3Q9", "G-i8vfjAlE7", "yjfHwGvyYGM", "OKjhU3Qabx-", "nips_2022_fLOU5jXlJZV", "nips_2022_fLOU5jXlJZV", "nips_2022_fLOU5jXlJZV" ]
nips_2022_ypXcTtbBsnZ
UniCLIP: Unified Framework for Contrastive Language-Image Pre-training
Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications. Some follow-up works have aimed to improve data efficiency by adding self-supervision terms, but the inter-domain (image-text) contrastive loss and intra-domain (image-image) contrastive loss are defined on individual spaces in those works, so many feasible combinations of supervision are overlooked. To overcome this issue, we propose UniCLIP, a Unified framework for Contrastive Language-Image Pre-training. UniCLIP integrates the contrastive loss of both inter-domain pairs and intra-domain pairs into a single universal space. The discrepancies that occur when integrating contrastive loss between different domains are resolved by the three key components of UniCLIP: (1) augmentation-aware feature embedding, (2) MP-NCE loss, and (3) domain-dependent similarity measure. UniCLIP outperforms previous vision-language pre-training methods on various single- and multi-modality downstream tasks. In our experiments, we show that each component that comprises UniCLIP contributes well to the final performance.
Accept
This paper proposes a framework for image-language pretraining that includes three techniques. The techniques are well motivated and developed to achieve better performance on zero-shot, linear probing, and retrieval tests. Ablation studies are provided to show the effectiveness of the three techniques. The writing of the paper is clear. The authors appropriately answered most of the reviewers' questions.
train
[ "AbYjfLjiN4F", "k_v2BjrvXf2", "Swm-hARwsQf", "oHXPniDRdwV", "udK1hn5Qso", "qgLc_48BTU", "vRuOj941vzD", "I_HjDDl1Mv", "dAMNu5Hg9Kf", "nQGv7DU1wBH", "GBLQvRu10y-", "O5hqPzGBvm1" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for providing detailed answers to my questions, limitations provided are very detailed and I believe the similarities will be useful for readers and the authors should consider adding them to the appendix. Also, I appreciate that the authors have conducted finetuned retrieval tasks during the discussion pe...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "udK1hn5Qso", "Swm-hARwsQf", "oHXPniDRdwV", "GBLQvRu10y-", "O5hqPzGBvm1", "GBLQvRu10y-", "nQGv7DU1wBH", "dAMNu5Hg9Kf", "nips_2022_ypXcTtbBsnZ", "nips_2022_ypXcTtbBsnZ", "nips_2022_ypXcTtbBsnZ", "nips_2022_ypXcTtbBsnZ" ]
nips_2022_ySB7IbdseGC
Structured Recognition for Generative Models with Explaining Away
A key goal of unsupervised learning is to go beyond density estimation and sample generation to reveal the structure inherent within observed data. Such structure can be expressed in the pattern of interactions between explanatory latent variables captured through a probabilistic graphical model. Although the learning of structured graphical models has a long history, much recent work in unsupervised modelling has instead emphasised flexible deep-network-based generation, either transforming independent latent generators to model complex data or assuming that distinct observed variables are derived from different latent nodes. Here, we extend amortised variational inference to incorporate structured factors over multiple variables, able to capture the observation-induced posterior dependence between latents that results from “explaining away” and thus allow complex observations to depend on multiple nodes of a structured graph. We show that appropriately parametrised factors can be combined efficiently with variational message passing in rich graphical structures. We instantiate the framework in nonlinear Gaussian Process Factor Analysis, evaluating the structured recognition framework using synthetic data from known generative processes. We fit the GPFA model to high-dimensional neural spike data from the hippocampus of freely moving rodents, where the model successfully identifies latent signals that correlate with behavioural covariates.
Accept
**Summary**: This paper proposes a specific form of variational approximation for amortized inference that is proportional to the product of the prior and amortized factor potentials. This is similar to the approach proposed in work on Structured Variational Autoencoders (Johnson et al, NeurIPS 2016), but differs in that each factor in the prior is paired with a corresponding amortized factor over the same set of variables, whereas the original work on SVAEs assumes mean-field factors over individual variables. The authors evaluate this approach in the context of SVAEs, but the main intended use case is an application to sparse variational Gaussian process factor analysis (svGPFA) models, resulting in a proposed AEA-svGPFA model. **Strengths**: Reviewer *L1xL* found that this submission proposes a good approach with clear motivation and theoretical justification, with a good presentation and discussion of related work. Reviewers *D7KE* and *9rkF* also appreciated the novel application of SVAE to the svGPFA model, allowing it to better capture posterior dependencies, whilst retaining the same asymptotic time complexity as SGPVAE, with good results for the AEA-svGPFA on neural spike population data. Reviewer *9rkF* further appreciated the analysis of the impact of modeling posterior correlations. **Weaknesses**: Reviewers were on the whole not fully convinced by the significance of the application to SVAEs [L1xL, D7KE, 9rKF], and each noted that they felt unqualified to judge the significance of the AEA-svGPFA results due to a lack of familiarity with this model class. **Reviewer Author Discussion**: The authors updated their submission to add an instantiation of the AEA on latent GMMs, evaluated on the pinwheel domain (Johnson et al. 2016), compared to SIN (Lin et al. 2018), a vanilla VAE, and a variational GMM. They also discussed why a tighter bound (Proposition 3.1) aids posterior inference (with additional discussion in Appendix D), and provided clarifications to reviewers *L1xL* and *D7KE*. In response to the author discussion, all 3 reviewers raised their score (5->6). **Reviewer AC Discussion**: While all reviewers indicated to the AC that they are leaning towards acceptance, they are also all in agreement that this is a borderline paper that could also be rejected. The main point of deliberation remains whether the paper would be stronger if it focused on the AEA-svGPFA and omitted the AEA-SVAE results (which remain somewhat unconvincing to the reviewers). The reviewers also reiterated that they find it difficult to judge the significance of the AEA-svGPFA results. **Formatting**: There are minor formatting violations; Table 1 and Figure 2 are about 1 inch too wide, but this is easily addressable. **Overall AC Recommendation**: The AC took a quick and very cursory look at the paper. The AC has no immediate concerns about soundness and clarity, but also finds it difficult to assess significance. Taking everything into account, this paper appears narrowly above the threshold for acceptance, but may have to be cut to make room for other submissions.
train
[ "rdnRBXzvr3", "ckqQJfsIyJc", "WDGtFJL0nnS", "Szzy-9nA-0", "hmryMeA9n2", "CZ5sc200tPj", "Gj_V55I6vtE", "sq6gcNu_Jtb", "a0v3CGpAMS_", "4S0rjV_qb4", "0IKCeByrduh", "GUMhq02oBdy", "ZGwU7C3Pk5j", "94FSXMzXWs", "3A_A8rK0O-" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We wish to thank all reviewers for their time and efforts spent during the review and rebuttal phase. We are glad to see that all reviewers have chosen to increase their scores accordingly, reflecting the reviewers' joint recognition in our replies and in the our (updated) paper. \n\nIf there is any additional qu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "nips_2022_ySB7IbdseGC", "Szzy-9nA-0", "hmryMeA9n2", "Gj_V55I6vtE", "CZ5sc200tPj", "ZGwU7C3Pk5j", "94FSXMzXWs", "nips_2022_ySB7IbdseGC", "3A_A8rK0O-", "0IKCeByrduh", "94FSXMzXWs", "ZGwU7C3Pk5j", "nips_2022_ySB7IbdseGC", "nips_2022_ySB7IbdseGC", "nips_2022_ySB7IbdseGC" ]
nips_2022_LYcuTyW6Vu
LiteTransformerSearch: Training-free Neural Architecture Search for Efficient Language Models
The Transformer architecture is ubiquitously used as the building block of largescale autoregressive language models. However, finding architectures with the optimal trade-off between task performance (perplexity) and hardware constraints like peak memory utilization and latency is non-trivial. This is exacerbated by the proliferation of various hardware. We leverage the somewhat surprising empirical observation that the number of decoder parameters in autoregressive Transformers has a high rank correlation with task performance, irrespective of the architecture topology. This observation organically induces a simple Neural Architecture Search (NAS) algorithm that uses decoder parameters as a proxy for perplexity without need for any model training. The search phase of our training-free algorithm, dubbed Lightweight Transformer Search (LTS), can be run directly on target devices since it does not require GPUs. Using on-target device measurements, LTS extracts the Pareto-frontier of perplexity versus any hardware performance cost. We evaluate LTS on diverse devices from ARM CPUs to NVIDIA GPUs and two popular autoregressive Transformer backbones: GPT-2 and Transformer-XL. Results show that the perplexity of 16-layer GPT-2 and Transformer-XL can be achieved with up to 1.5×, 2.5× faster runtime and 1.2×, 2.0× lower peak memory utilization. When evaluated in zero and one-shot settings, LTS Pareto-frontier models achieve higher average accuracy compared to the 350M parameter OPT across 14 tasks, with up to 1.6× lower latency. LTS extracts the Pareto-frontier in under 3 hours while running on a commodity laptop. We effectively remove the carbon footprint of hundreds of GPU hours of training during search, offering a strong simple baseline for future NAS methods in autoregressive language modeling.
Accept
Building on the novel observation that there is a strong correlation between model quality and the number of parameters in the decoder of autoregressive Transformers, this work proposes a training-free NAS algorithm for this class of models. The main concern raised by reviewers relates to the key observation the paper is built on (this correlation), which may not necessarily hold in all situations. However, the authors were able to clarify important points and provide additional motivation and empirical results, eventually leading to all reviewers leaning towards acceptance. Considering the popularity of this class of models, I believe this work should be of significant interest to researchers and practitioners in the field, and I am thus recommending acceptance.
test
[ "PU0cYbWcSf6", "cdCqyAsVXv", "xzg-S0UWxXr", "UkbwpK1Cq_v", "wfWCtBr8-xh", "2PteUAeRCrr", "GbIse7nKNmF", "fnVsPM3tzcs", "Dk7ciSxI2NB", "F3WYR_6ffdF", "CA9HSmb3sCW", "L92vfSVcsU", "ubX16es5O0f", "kzcpxa-SDsA", "hjwfw26dDv", "N_bzi5HlaST", "bIlWhbVgGn", "Yi4MDIw_fN6", "LUX9hG4yF1q",...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ " Thank you so much for reading our response and we are happy we were able to clarify any ambiguities.", " Thank you so much for your kind note and we are glad we were able to address your concerns.", " Thank you so much for your response and for engaging with us during the discussion period. To comply with the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "wfWCtBr8-xh", "2PteUAeRCrr", "UkbwpK1Cq_v", "fnVsPM3tzcs", "tXOSt2M8Sp", "N_bzi5HlaST", "nips_2022_LYcuTyW6Vu", "Dk7ciSxI2NB", "L92vfSVcsU", "CA9HSmb3sCW", "kzcpxa-SDsA", "ubX16es5O0f", "0bpNWbTG4CX", "hjwfw26dDv", "HHI_BFNCl0i", "bIlWhbVgGn", "dej9EATlypY", "tXOSt2M8Sp", "nips_...
nips_2022_F0wPem89q9y
Adversarial Reprogramming Revisited
Adversarial reprogramming, introduced by Elsayed, Goodfellow, and Sohl-Dickstein, seeks to repurpose a neural network to perform a different task, by manipulating its input without modifying its weights. We prove that two-layer ReLU neural networks with random weights can be adversarially reprogrammed to achieve arbitrarily high accuracy on Bernoulli data models over hypercube vertices, provided the network width is no greater than its input dimension. We also substantially strengthen a recent result of Phuong and Lampert on directional convergence of gradient flow, and obtain as a corollary that training two-layer ReLU neural networks on orthogonally separable datasets can cause their adversarial reprogramming to fail. We support these theoretical results by experiments that demonstrate that, as long as batch normalisation layers are suitably initialised, even untrained networks with random weights are susceptible to adversarial reprogramming. This is in contrast to observations in several recent works that suggested that adversarial reprogramming is not possible for untrained networks to any degree of reliability.
Accept
This paper develops several theoretical results around when adversarial reprogramming will, and will not, be possible. They support these with experiments. As this is the first meaningful theoretical analysis of adversarial reprogramming, the paper is potentially impactful. All reviewers support paper acceptance. Additionally, the paper appears to have been significantly improved over the course of the rebuttal period. I therefore recommend paper acceptance.
train
[ "eC9rX1eoXaQ", "L8mpRUoeuu3", "ptRF5A-EXE", "b3E0DZQ2bn", "B17fGP4yhA3", "5YVLdZ5gsdJ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We hope that you will find useful our responses to the bullet points in the review under \"Weaknesses\", and then to the point in the \"Limitations\" section of the review:\n\n---\n\nRegarding the assumption on the width in Section 2, we point out its source (namely our theoretical construction of adversarial pro...
[ -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, 3, 2, 4 ]
[ "5YVLdZ5gsdJ", "B17fGP4yhA3", "b3E0DZQ2bn", "nips_2022_F0wPem89q9y", "nips_2022_F0wPem89q9y", "nips_2022_F0wPem89q9y" ]
nips_2022_SQbrWcMOcPR
Scalable and Efficient Training of Large Convolutional Neural Networks with Differential Privacy
Large convolutional neural networks (CNN) can be difficult to train in the differentially private (DP) regime, since the optimization algorithms require a computationally expensive operation, known as the per-sample gradient clipping. We propose an efficient and scalable implementation of this clipping on convolutional layers, termed as the mixed ghost clipping, that significantly eases the private training in terms of both time and space complexities, without affecting the accuracy. The improvement in efficiency is rigorously studied through the first complexity analysis for the mixed ghost clipping and existing DP training algorithms. Extensive experiments on vision classification tasks, with large ResNet, VGG, and Vision Transformers (ViT), demonstrate that DP training with mixed ghost clipping adds $1\sim 10\%$ memory overhead and $<2\times$ slowdown to the standard non-private training. Specifically, when training VGG19 on CIFAR10, the mixed ghost clipping is $3\times$ faster than state-of-the-art Opacus library with $18\times$ larger maximum batch size. To emphasize the significance of efficient DP training on convolutional layers, we achieve 96.7\% accuracy on CIFAR10 and 83.0\% on CIFAR100 at $\epsilon=1$ using BEiT, while the previous best results are 94.8\% and 67.4\%, respectively. We open-source a privacy engine (\url{https://github.com/woodyx218/private_vision}) that implements DP training of CNN (including convolutional ViT) with a few lines of code.
Accept
The paper received reasonably positive reviews, and the rebuttal phase cleared most of the reviewers' concerns. I request that the authors incorporate the reviewers' suggestions, in particular a more detailed comparison to the paper [32].
train
[ "b-W3FrsmrKd", "f9z-HYmzEXG", "6Qa4nSvDjWK", "0-yP64GnYPVf", "PjTDxujpte", "vqQvO2KvCm", "CGOIxr024R8", "vG6sgzx-9K7", "ajdT6Wh9uWD" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are grateful for the reviewer's evaluation. Your suggestion is well-taken and we have added the discussion of [32] in a new section (Appendix F) in the revision.", " I think the authors have addressed my concern. I hope the author could add the discussion of [32] in their revision. ", " We thank all the re...
[ -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "f9z-HYmzEXG", "vqQvO2KvCm", "nips_2022_SQbrWcMOcPR", "ajdT6Wh9uWD", "vG6sgzx-9K7", "CGOIxr024R8", "nips_2022_SQbrWcMOcPR", "nips_2022_SQbrWcMOcPR", "nips_2022_SQbrWcMOcPR" ]
nips_2022_McjGUq1H-mm
Stochastic Second-Order Methods Improve Best-Known Sample Complexity of SGD for Gradient-Dominated Functions
We study the performance of Stochastic Cubic Regularized Newton (SCRN) on a class of functions satisfying gradient dominance property with $1\le\alpha\le2$ which holds in a wide range of applications in machine learning and signal processing. This condition ensures that any first-order stationary point is a global optimum. We prove that the total sample complexity of SCRN in achieving $\epsilon$-global optimum is $\mathcal{O}(\epsilon^{-7/(2\alpha)+1})$ for $1\le\alpha< 3/2$ and $\mathcal{\tilde{O}}(\epsilon^{-2/(\alpha)})$ for $3/2\le\alpha\le 2$. SCRN improves the best-known sample complexity of stochastic gradient descent. Even under a weak version of gradient dominance property, which is applicable to policy-based reinforcement learning (RL), SCRN achieves the same improvement over stochastic policy gradient methods. Additionally, we show that the average sample complexity of SCRN can be reduced to ${\mathcal{O}}(\epsilon^{-2/\alpha})$ for $1\le\alpha< 3/2$ using a variance reduction method with time-varying batch sizes. Experimental results in various RL settings showcase the remarkable performance of SCRN compared to first-order methods.
Accept
This paper presents improved convergence rates for SCRN on gradient-dominated functions. Reviewers all agree that the paper advances known results. Please take the comments about outperforming SGD into account in the final copy.
train
[ "Exs9vDjR-ii", "2UXLj_BrJh", "9KOX9n7Yijt", "AV6Jp5utk1K", "spuZAaoElnG", "DkjPwpF_Pa", "04wPQ_XUVbZ", "SI-lDs32KIB", "4XM44LwoK4H", "cv79I-HYukM", "hk8PFH6jjiz", "jb7TQ6QVXEv", "859x14ZS6n7", "nd2py4K6UWV", "RiffwpvVsKf", "rJQPIRqbxHY", "QOIoNv8433k", "z8dqDnRb19O", "53DkCo9M8Yb...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ " Thanks for the response!", " We thank the reviewer for reading the rebuttal. We should emphasize the following remarks regarding the differences between the RL tasks considered in this paper and deep learning (DL):\n* Structured nonconvexity and landscape - unlike DL, the nonconvex objective of RL satisfies the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "2UXLj_BrJh", "9KOX9n7Yijt", "RiffwpvVsKf", "spuZAaoElnG", "DkjPwpF_Pa", "04wPQ_XUVbZ", "cv79I-HYukM", "4XM44LwoK4H", "XbZZY64lgr0", "W7iSiCdFlj4", "0Sz8jb571m", "dRrMenqleja", "W7iSiCdFlj4", "0Sz8jb571m", "0Sz8jb571m", "dRrMenqleja", "0Sz8jb571m", "XbZZY64lgr0", "dRrMenqleja", ...
nips_2022_LKEYuYNOqx
GENIE: Higher-Order Denoising Diffusion Solvers
Denoising diffusion models (DDMs) have emerged as a powerful class of generative models. A forward diffusion process slowly perturbs the data, while a deep model learns to gradually denoise. Synthesis amounts to solving a differential equation (DE) defined by the learnt model. Solving the DE requires slow iterative solvers for high-quality generation. In this work, we propose Higher-Order Denoising Diffusion Solvers (GENIE): Based on truncated Taylor methods, we derive a novel higher-order solver that significantly accelerates synthesis. Our solver relies on higher-order gradients of the perturbed data distribution, that is, higher-order score functions. In practice, only Jacobian-vector products (JVPs) are required and we propose to extract them from the first-order score network via automatic differentiation. We then distill the JVPs into a separate neural network that allows us to efficiently compute the necessary higher-order terms for our novel sampler during synthesis. We only need to train a small additional head on top of the first-order score network. We validate GENIE on multiple image generation benchmarks and demonstrate that GENIE outperforms all previous solvers. Unlike recent methods that fundamentally alter the generation process in DDMs, our GENIE solves the true generative DE and still enables applications such as encoding and guided sampling. Project page and code: https://nv-tlabs.github.io/GENIE.
Accept
There is overall consensus that the paper has significant contributions, good experimental validation and a clear presentation. The authors have addressed the majority of concerns raised by the reviewers during the author-reviewer discussions, leading to several scores being raised. I would like to thank the authors and reviewers for actively engaging in discussions. The recommendation is to accept this paper.
train
[ "8G8ZOpM5wqz", "IyNQ4y57LVq", "jhikCEW2XLa", "_gyusBu0c9Sp", "J7fWGCOcQo3", "9I7MQo4P8_m", "iQhzXdm18a", "pZr33ySAnnx", "st4MlVSniUO", "-nq_PyjudgT", "ewBSvxhb1LH", "nlZ-OXAYRLv", "xQ0nEdndnvZ", "dZRI5tzYILx", "LIBezZuNW7l", "iYWT7NKIkQK", "Ug_Qv0TQ6vI", "GXWBhG8Y3uk", "lfZlkMnVO...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Your revision have properly addressed my concerns so I will rasie my rating accordingly.", " Thanks for the response, which clarifies my misunderstanding. I'm happy to increase my rating accordingly.", " - (4.) Given the previous discussion about [69], we believe that the reviewer is referring to Progressive ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 10, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "st4MlVSniUO", "iQhzXdm18a", "_gyusBu0c9Sp", "J7fWGCOcQo3", "9I7MQo4P8_m", "iQhzXdm18a", "nlZ-OXAYRLv", "nips_2022_LKEYuYNOqx", "dZRI5tzYILx", "ewBSvxhb1LH", "lfZlkMnVOal", "xQ0nEdndnvZ", "LIBezZuNW7l", "xA_ilZcYmTM", "GXWBhG8Y3uk", "Ug_Qv0TQ6vI", "nips_2022_LKEYuYNOqx", "nips_2022...
nips_2022_plu6AK3qs5T
Automatic Clipping: Differentially Private Deep Learning Made Easy and Stronger
Per-example gradient clipping is a key algorithmic step that enables practical differential private (DP) training for deep learning models. The choice of clipping norm $R$, however, is shown to be vital for achieving high accuracy under DP. We propose an easy-to-use replacement, called AutoClipping, that eliminates the need to tune $R$ for any DP optimizers, including DP-SGD, DP-Adam, DP-LAMB and many others. The automatic variants are as private and computationally efficient as existing DP optimizers, but require no DP-specific hyperparameters and thus make DP training as amenable as the standard non-private training. We give a rigorous convergence analysis of automatic DP-SGD in the non-convex setting, which shows that it can enjoy an asymptotic convergence rate that matches the standard SGD, under a symmetric noise assumption of the per-sample gradients. We also demonstrate on various language and vision tasks that automatic clipping outperforms or matches the state-of-the-art, and can be easily employed with minimal changes to existing codebases.
Reject
This paper introduces an alternative to clipping for preprocessing gradients to limit sensitivity prior to adding noise for differentially private optimization, with the objective of reducing the amount of tuning required by eliminating the need to tune a clipping threshold. A key theoretical assumption in the paper is that of symmetric noise, which engendered a significant discussion among reviewers. It remains unclear how valid this assumption is, even in relatively simple linear regression settings.
train
[ "Wcs2AHHDod-", "k-pf3JgeoDo", "CnGK0cqjzHe", "9k7zVUyFqOa", "4kbnCiTRbd3", "1zZvgQTj5h", "r6chpCCKeUc", "JYEBbwVaqBg", "NF7dAwlMT84", "C_KXHk19whl", "ql49xLiEYCE", "e6vdoIXKNRr", "Pt7T-FIsZDg", "fmIWLPCdvyQ", "53KOfxqC-4_", "Bo1lU_vr4DY", "XyszsmCRAxq", "wiTmAwe8S8", "0_VxTqSZETH...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank again the reviewer for participating in the discussion! We totally agree that the distortion of automatic clipping (and Abadi's clipping) needs more study. We would like to mention two facts that may alleviate your concern about the bias:\n\n1. Bias is not always fatal for the convergence. As shown in ou...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "k-pf3JgeoDo", "fmIWLPCdvyQ", "9k7zVUyFqOa", "ql49xLiEYCE", "Bo1lU_vr4DY", "XyszsmCRAxq", "wiTmAwe8S8", "0_VxTqSZETH", "nips_2022_plu6AK3qs5T", "0_VxTqSZETH", "0_VxTqSZETH", "XyszsmCRAxq", "XyszsmCRAxq", "wiTmAwe8S8", "Bo1lU_vr4DY", "nips_2022_plu6AK3qs5T", "nips_2022_plu6AK3qs5T", ...
nips_2022_zJNqte0b-xn
First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces
From optimal transport to robust dimensionality reduction, many machine learning applications can be cast as min-max optimization problems over Riemannian manifolds. Though many min-max algorithms have been analyzed in the Euclidean setting, it has been elusive how these results translate to the Riemannian case. Zhang et al. (2022) have recently identified that geodesic convex-concave Riemannian problems always admit Sion's saddle point solutions. An important question that immediately arises is whether a performance gap between the Riemannian and the optimal Euclidean-space convex-concave algorithms is necessary. Our work is the first to answer the question in the negative: we prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate in the geodesically strongly convex-concave case, matching the Euclidean one. Our results also extend to the stochastic or non-smooth case, where RCEG and Riemannian gradient ascent descent (RGDA) achieve, respectively, near-optimal convergence rates up to factors depending on the curvature of the manifold. Finally, we empirically demonstrate the effectiveness of RCEG in solving robust PCA.
Accept
The paper analyzes the performance of several recent algorithms for min-max optimization over manifolds. The analysis is extensive and interesting in its own right. In the final version, please address the reviewers' comments.
train
[ "5siZXfB71-", "TSl117yMGtx", "gQANuG-IUFFY", "rDjDVWxe9ZT", "He8vWZ63gs1", "YWNIx-lb8RM", "ihG5ESMWqr", "6Cfk-TdiLuhB", "38XbX9ObCLN", "dEWinZ-Xn2", "V2bCMbVboHi", "4YBZgNfbnGk", "fiyGoS26krL", "fa_FFZG3ILv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the good response, I revisited my scores accordingly. I feel that these points should be discussed more thoroughly in the main text and would invite the authors to do so if the paper gets accepted. For instance would be nice to know what $\\delta$ should be in the case of rPCA, i.e. how close to the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3, 3 ]
[ "rDjDVWxe9ZT", "YWNIx-lb8RM", "He8vWZ63gs1", "fa_FFZG3ILv", "fiyGoS26krL", "4YBZgNfbnGk", "V2bCMbVboHi", "dEWinZ-Xn2", "dEWinZ-Xn2", "nips_2022_zJNqte0b-xn", "nips_2022_zJNqte0b-xn", "nips_2022_zJNqte0b-xn", "nips_2022_zJNqte0b-xn", "nips_2022_zJNqte0b-xn" ]
nips_2022_zrAUoI2JA2
u-HuBERT: Unified Mixed-Modal Speech Pretraining And Zero-Shot Transfer to Unlabeled Modality
While audio-visual speech models can yield superior performance and robustness compared to audio-only models, their development and adoption are hindered by the lack of labeled and unlabeled audio-visual data and the cost to deploy one model per modality. In this paper, we present u-HuBERT, a self-supervised pre-training framework that can leverage both multimodal and unimodal speech with a unified masked cluster prediction objective. By utilizing modality dropout during pre-training, we demonstrate that a single fine-tuned model can achieve performance on par or better than the state-of-the-art modality-specific models. Moreover, our model fine-tuned only on audio can perform well with audio-visual and visual speech input, achieving zero-shot modality generalization for multiple speech processing tasks. In particular, our single model yields 1.2%/1.4%/27.2% speech recognition word error rate on LRS3 with audio-visual/audio/visual input.
Accept
This paper enjoyed a reasonable discussion between the authors and the reviewers, and the authors have improved the paper in a number of areas, including (1) more experiments clarifying the role of the modality dropout rates on model performance, (2) a baseline for the Librispeech transfer experiment, and (3) more details in the comparisons to other state-of-the-art models. While the reviewers generally agree that the zero-shot results are novel, the experimental results are strong, and the analysis of the role of modality dropout in learning modality-agnostic representations is a good contribution, they split on their final recommendation on the paper, with three recommending acceptance (at least to some degree) and one sticking to a borderline reject recommendation after discussion with the authors, arguing that the novelty with respect to the AV-HuBERT paper is insufficient. I am recommending that the paper be accepted, but I think the discussion between the authors and reviewer BiuF includes important points that should be made more strongly in the paper. Specifically, they need to emphasize more strongly the use of targets from multiple modalities (their answer to Q1 in "Response to Reviewer BiUF's Followup Comments"), and they need to explain *in the paper* that "the u-HuBERT from the fourth-to-the-last row to the second-to-the-last row are pre-trained without unimodal data, and are effectively AV-HuBERT" (from their answer to Q2 in "Response to Reviewer BiUF's Followup Comments") to help readers understand the relationship between u-HuBERT and AV-HuBERT. I would also suggest that they add experiments where modality dropout is applied during cluster label generation as an additional contrast.
train
[ "Fa4KHm-6iST", "CTpteyjU2JR", "4Bm3f1xRrHXJ", "bGqzhjmkVTs", "0W2wleLkgXZ", "uwM3gDDgnQ", "4ebcPyZbzwp", "bF7k9vVk1M", "ktE6ws81YxY", "yaKCzIwo9f", "9rV9ZHQvkNJ", "BAr0QB-vGti", "typvR9lAnBb", "9qM64z-0neP", "_4OK7Xc2Zyt", "izRY6jqm4g" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors have responded to questions that you raised:\n- yjaa asked about tasks where framewise alignment between modalities is not present, whether the modality dropout is tuned and how sensitive results are to this hyperparameter, why modality dropout improves AV-WER, and cautions the authors against overcla...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "nips_2022_zrAUoI2JA2", "4Bm3f1xRrHXJ", "bF7k9vVk1M", "0W2wleLkgXZ", "uwM3gDDgnQ", "izRY6jqm4g", "_4OK7Xc2Zyt", "ktE6ws81YxY", "yaKCzIwo9f", "9qM64z-0neP", "BAr0QB-vGti", "typvR9lAnBb", "nips_2022_zrAUoI2JA2", "nips_2022_zrAUoI2JA2", "nips_2022_zrAUoI2JA2", "nips_2022_zrAUoI2JA2" ]
nips_2022_ODkBI1d3phW
Efficient and Effective Augmentation Strategy for Adversarial Training
Adversarial training of Deep Neural Networks is known to be significantly more data-hungry when compared to standard training. Furthermore, complex data augmentations such as AutoAugment, which have led to substantial gains in standard training of image classifiers, have not been successful with Adversarial Training. We first explain this contrasting behavior by viewing augmentation during training as a problem of domain generalization, and further propose Diverse Augmentation-based Joint Adversarial Training (DAJAT) to use data augmentations effectively in adversarial training. We aim to handle the conflicting goals of enhancing the diversity of the training dataset and training with data that is close to the test distribution by using a combination of simple and complex augmentations with separate batch normalization layers during training. We further utilize the popular Jensen-Shannon divergence loss to encourage the \emph{joint} learning of the \emph{diverse augmentations}, thereby allowing simple augmentations to guide the learning of complex ones. Lastly, to improve the computational efficiency of the proposed method, we propose and utilize a two-step defense, Ascending Constraint Adversarial Training (ACAT), that uses an increasing epsilon schedule and weight-space smoothing to prevent gradient masking. The proposed method DAJAT achieves substantially better robustness-accuracy trade-off when compared to existing methods on the RobustBench Leaderboard on ResNet-18 and WideResNet-34-10. The code for implementing DAJAT is available here: https://github.com/val-iisc/DAJAT
Accept
The reviewers found the paper well written and were satisfied with the experimental setting, which shows clear improvements. The authors made a thorough rebuttal and carefully answered the reviewers' questions, and I recommend acceptance as I believe this will be useful to the community. I recommend the authors to carefully go over the reviewers’ comments and incorporate them into the final manuscript, along with the additional experiments from the rebuttal.
train
[ "QV6NJ-8B28", "jqWQ2sN3krX", "td4nijt1Bts", "cMHpNSoeBis", "vRtlMFV6xX", "JsiEUsMuYZ4", "fTXJnsicwpA", "lujt8SqlEtJ", "F2RwWLR7ZPSa", "3rlhd0oZAPS", "6xVPikEz67h", "ZICVX3dMR4", "8r69JOXQJaD", "2Ak6lWdpHF", "N3XUwYIzfau", "Yn5N_Yvofds", "JwBADWk3mJ", "7mJPyMglLFY", "nhQbzdpnxPZ",...
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official...
[ " We thank the reviewer for the reply and the valuable feedback.", " We thank the reviewer for reconsidering the contributions of our work and updating the score. \n\nWe will update the final version to highlight the various other contributions of our work (listed in our earlier replies) in addition to the traini...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 5 ]
[ "td4nijt1Bts", "JsiEUsMuYZ4", "7mJPyMglLFY", "_c5j2v6m8VD", "pIFlarnsNMN", "F2RwWLR7ZPSa", "lujt8SqlEtJ", "F2RwWLR7ZPSa", "8r69JOXQJaD", "6xVPikEz67h", "ZICVX3dMR4", "_c5j2v6m8VD", "5PB2u_paMqC", "5PB2u_paMqC", "Yn5N_Yvofds", "JwBADWk3mJ", "pIFlarnsNMN", "nhQbzdpnxPZ", "0LxUfpjiM...
nips_2022_MRpRKU8haea
Efficient Sampling on Riemannian Manifolds via Langevin MCMC
We study the task of efficiently sampling from a Gibbs distribution $d \pi^* = e^{-h} d {\text{vol}}_g$ over a Riemannian manifold $M$ via (geometric) Langevin MCMC; this algorithm involves computing exponential maps in random Gaussian directions and is efficiently implementable in practice. The key to our analysis of Langevin MCMC is a bound on the discretization error of the geometric Euler-Maruyama scheme, assuming $\nabla h$ is Lipschitz and $M$ has bounded sectional curvature. Our error bound matches the error of Euclidean Euler-Maruyama in terms of its stepsize dependence. Combined with a contraction guarantee for the geometric Langevin Diffusion under Kendall-Cranston coupling, we prove that the Langevin MCMC iterates lie within $\epsilon$-Wasserstein distance of $\pi^*$ after $\tilde{O}(\epsilon^{-2})$ steps, which matches the iteration complexity for Euclidean Langevin MCMC. Our results apply in general settings where $h$ can be nonconvex and $M$ can have negative Ricci curvature. Under additional assumptions that the Riemannian curvature tensor has bounded derivatives, and that $\pi^*$ satisfies a $CD(\cdot,\infty)$ condition, we analyze the stochastic gradient version of Langevin MCMC, and bound its iteration complexity by $\tilde{O}(\epsilon^{-2})$ as well.
Accept
This paper focuses on the problem of sampling from a distribution on a large class of Riemannian manifolds. The authors study an Euler-type discretization of the Langevin diffusion in this context. The paper establishes a control over the one-step discretization error, which is then iterated to obtain a sampling guarantee for the discrete algorithm. Several extensions are also discussed, e.g. the stochastic gradient setting. Sampling on Riemannian manifolds is a challenging and active area of research. Given that the paper can cover a wide range of manifolds, I also agree with the reviewers that the contributions of this paper are solid. I strongly recommend accepting this paper.
train
[ "6TuA3Gd85LN", "aS68ydcTtf4", "30JozI9V2sd", "JNxitSuYtFK", "IO5vVXNrffu", "1sy8W3UP9HB", "j3E2gEln-_Z", "SdbXSjOdop", "If1c0yrKfoc", "Xj4Ix4Ljy9-", "dC_CynJmZW", "4GmB2NmLtWr", "QZCfiPUnVVy", "7C4LWPiXRSt", "-mtTSty4Hps" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed response. I have raised my score to 7.", " Thank you for your reconsideration, as well as for clarifying your question and for your suggestions on presentation. We will keep that in mind when preparing the final draft.\n\nYou are correct in your description of the relation between $x^0(...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "1sy8W3UP9HB", "JNxitSuYtFK", "IO5vVXNrffu", "SdbXSjOdop", "dC_CynJmZW", "j3E2gEln-_Z", "-mtTSty4Hps", "If1c0yrKfoc", "7C4LWPiXRSt", "QZCfiPUnVVy", "4GmB2NmLtWr", "nips_2022_MRpRKU8haea", "nips_2022_MRpRKU8haea", "nips_2022_MRpRKU8haea", "nips_2022_MRpRKU8haea" ]
nips_2022_92leLHqlcvv
A Direct Approximation of AIXI Using Logical State Abstractions
We propose a practical integration of logical state abstraction with AIXI, a Bayesian optimality notion for reinforcement learning agents, to significantly expand the model class that AIXI agents can be approximated over to complex history-dependent and structured environments. The state representation and reasoning framework is based on higher-order logic, which can be used to define and enumerate complex features on non-Markovian and structured environments. We address the problem of selecting the right subset of features to form state abstractions by adapting the $\Phi$-MDP optimisation criterion from state abstraction theory. Exact Bayesian model learning is then achieved using a suitable generalisation of Context Tree Weighting over abstract state sequences. The resultant architecture can be integrated with different planning algorithms. Experimental results on controlling epidemics on large-scale contact networks validate the agent's performance.
Accept
The paper proposes an interesting approach and analysis for extending the AIXI agent to handle non-Markovian structured decision processes. Technically, the paper appears to be fine, and its evaluation has significantly improved after the addition of baselines. At the same time, this work could be strengthened by motivating its own significance more explicitly: despite being conceptually appealing, the practical importance of extending the AIXI agent to non-Markovian and structured environments is not entirely obvious.
train
[ "lUQblNj-7B", "hjrYXywWKX2", "kcu0LUJI7ko", "0_1ds_a_Ew", "aehXo3m-LDP", "PqxtMr3f8k", "7jkID6OfWC", "8m7gK-NrJER", "317B164umEe", "uj7DIU1hXU5" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your additional comments and newly considered recommendation. In regards to your remaining questions:\n\n* Can you clarify what \"highest weighting\" means here? Also, why not use the same feature selection method with the same threshold that's used by RF-BDD?\n\nRandom Forest provides a variable i...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 1, 3, 2 ]
[ "kcu0LUJI7ko", "7jkID6OfWC", "0_1ds_a_Ew", "aehXo3m-LDP", "uj7DIU1hXU5", "317B164umEe", "8m7gK-NrJER", "nips_2022_92leLHqlcvv", "nips_2022_92leLHqlcvv", "nips_2022_92leLHqlcvv" ]
nips_2022_1-F7HbLInPy
Instance-based Learning for Knowledge Base Completion
In this paper, we propose a new method for knowledge base completion (KBC): instance-based learning (IBL). For example, to answer (Jill Biden, lived city, ?), instead of going directly to Washington D.C., our goal is to find Joe Biden, who has the same lived city as Jill Biden. Through prototype entities, IBL provides interpretability. We develop theories for modeling prototypes and combining IBL with translational models. Experiments on various tasks confirmed the IBL model's effectiveness and interpretability. In addition, IBL sheds light on the mechanism of rule-based KBC models. Previous research has generally agreed that rule-based models provide rules with semantically compatible premises and hypotheses. We challenge this view. We begin by demonstrating that some logical rules represent {\it instance-based equivalence} (i.e. prototypes) rather than semantic compatibility. These are denoted as {\it IBL rules}. Surprisingly, despite occupying only a small portion of the rule space, IBL rules outperform non-IBL rules in all four benchmarks. We use a variety of experiments to demonstrate that rule-based models work because they have the ability to represent instance-based equivalence via IBL rules. The findings provide new insights into how rule-based models work and how to interpret their rules.
Accept
Develops a new paradigm for knowledge base completion, based around instance-based learning. The paper has motivation, some supporting theory and good empirical work. Reviewers ZosV and DVJo mention a relationship with GNN/GCNs which should be further discussed in the paper. The paper has some simple grammar and spelling errors that should be fixed up, though no reviewers mentioned this. The authors have some enlightening discussion with reviewers, for instance on definitions, that should be included in the paper.
train
[ "m1s62dMhXx", "emmiSPoxjmq", "clqsKDJNYrs", "rjhvcOlgRAW", "P7axP-fGGxS", "niXqLS6l12i", "piJsD-IHxiU", "KIAqaAGYMIg", "YgXTLZW8XA", "PjwYkOHCy3Y", "k2Ayi9wsoeS", "OSmFS0saO_5", "TjZOZH4RQY5", "9I0TzPeJ7Z3", "FFWaJanLeTN", "OdobsS_Deqm", "7Sfr7jPJh9b", "n-VrrW-Bmr", "hHczRVkSoT" ...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are excited to hear that our responses answered your questions. Thank you.", " Encouraged by Reviewer DVJo, we employ translational models to provide a theoretical proof for the effectiveness of IBL rules. We will show that IBL rules always hold true under the assumption of translational models. We take Tran...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "clqsKDJNYrs", "FFWaJanLeTN", "9I0TzPeJ7Z3", "P7axP-fGGxS", "OSmFS0saO_5", "hHczRVkSoT", "n-VrrW-Bmr", "7Sfr7jPJh9b", "OdobsS_Deqm", "OSmFS0saO_5", "FFWaJanLeTN", "hHczRVkSoT", "n-VrrW-Bmr", "7Sfr7jPJh9b", "OdobsS_Deqm", "nips_2022_1-F7HbLInPy", "nips_2022_1-F7HbLInPy", "nips_2022_...
nips_2022_Q7kdFAVPdu
ATD: Augmenting CP Tensor Decomposition by Self Supervision
Tensor decompositions are powerful tools for dimensionality reduction and feature interpretation of multidimensional data such as signals. Existing tensor decomposition objectives (e.g., Frobenius norm) are designed for fitting raw data under statistical assumptions, which may not align with downstream classification tasks. In practice, the raw input tensor can contain irrelevant information, while data augmentation techniques may be used to smooth out class-irrelevant noise in samples. This paper addresses the above challenges by proposing augmented tensor decomposition (ATD), which effectively incorporates data augmentations and self-supervised learning (SSL) to boost downstream classification. To address the non-convexity of the new augmented objective, we develop an iterative method that enables the optimization to follow an alternating least squares (ALS) fashion. We evaluate our proposed ATD on multiple datasets. It can achieve 0.8%~2.5% accuracy gain over tensor-based baselines. Also, our ATD model shows comparable or better performance (e.g., up to 15% in accuracy) over self-supervised and autoencoder baselines while using less than 5% of learnable parameters of these baseline models.
Accept
The authors propose augmented tensor decomposition (ATD) to adapt data reconstruction towards the downstream tasks, e.g., appropriate feature clustering. It leverages data augmentation and self-supervised learning, with optimization accomplished akin to alternating least squares (ALS). Significantly superior performance is obtained in experiment, compared with existing CP methods and ALS. All the reviewers, including myself, find the paper a solid contribution to the methodology and analysis. There were a few concerns and clarification requests, and the rebuttal did a good job addressing them. These additional results and insights can be included in the final version of the paper.
train
[ "7U3g3NRcaFI", "wqI0QDmUMr0", "oejPLo7e_HSm", "J_v14DSjz7w", "vWkywPjkGXqk", "GyxvDYZPMms", "g0a5IX7WHWK", "o6_uM0SkyJG", "sQy-K26XhXo", "DEFrBTkmMZM", "dphfe9BzKF4", "IfQanMj4NO" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed response. The authors have solved my concerns by adding the results and explanations.", " I thank the authors for the thorough, thoughtful response to my initial review, and their revisions to the submission. I maintain my initial rating to recommend acceptance.", " **Q4...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "g0a5IX7WHWK", "GyxvDYZPMms", "IfQanMj4NO", "IfQanMj4NO", "DEFrBTkmMZM", "DEFrBTkmMZM", "sQy-K26XhXo", "dphfe9BzKF4", "nips_2022_Q7kdFAVPdu", "nips_2022_Q7kdFAVPdu", "nips_2022_Q7kdFAVPdu", "nips_2022_Q7kdFAVPdu" ]
nips_2022_LfHwpvDPGpx
AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars
Although 2D generative models have made great progress in face image generation and animation, they often suffer from undesirable artifacts such as 3D inconsistency when rendering images from different camera viewpoints. This prevents them from synthesizing video animations indistinguishable from real ones. Recently, 3D-aware GANs extend 2D GANs for explicit disentanglement of camera pose by leveraging 3D scene representations. These methods can well preserve the 3D consistency of the generated images across different views, yet they cannot achieve fine-grained control over other attributes, among which facial expression control is arguably the most useful and desirable for face animation. In this paper, we propose an animatable 3D-aware GAN for multiview consistent face animation generation. The key idea is to decompose the 3D representation of the 3D-aware GAN into a template field and a deformation field, where the former represents different identities with a canonical expression, and the latter characterizes expression variations of each identity. To achieve meaningful control over facial expressions via deformation, we propose a 3D-level imitative learning scheme between the generator and a parametric 3D face model during adversarial training of the 3D-aware GAN. This helps our method achieve high-quality animatable face image generation with strong visual 3D consistency, even though trained with only unstructured 2D images. Extensive experiments demonstrate our superior performance over prior works. Project page: \url{https://yuewuhkust.github.io/AniFaceGAN/}
Accept
The paper addresses an interesting topic and advances the state of the art. The authors have responded sufficiently to the criticisms of the reviewers including the one reviewer that was recommending rejection. The authors are encouraged to incorporate the clarifications and additional results in the final version.
train
[ "TBRc3moI4-B", "NjDlGedUIzH", "XS7XZdwD3_f", "FYTDQ2bwPlq", "wauTQsdzhsy", "IpuIS99uMsy", "fy5jtgkrDn1", "sYUJwag6ziS", "tlTAOz_EU7o", "zkt1sEI6NC", "Zwfc3JQE-tl", "_9-GHftP86U", "9YJf18i-yE6", "bx-aKE_KqpR", "Qd4d1waUc7m", "eCBcPxsuudK" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for the reviews and comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work. We would...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "NjDlGedUIzH", "tlTAOz_EU7o", "wauTQsdzhsy", "IpuIS99uMsy", "Zwfc3JQE-tl", "zkt1sEI6NC", "eCBcPxsuudK", "eCBcPxsuudK", "Qd4d1waUc7m", "bx-aKE_KqpR", "bx-aKE_KqpR", "9YJf18i-yE6", "nips_2022_LfHwpvDPGpx", "nips_2022_LfHwpvDPGpx", "nips_2022_LfHwpvDPGpx", "nips_2022_LfHwpvDPGpx" ]
nips_2022_sL7XH6-V21e
Depth is More Powerful than Width with Prediction Concatenation in Deep Forest
Random Forest (RF) is an ensemble learning algorithm proposed by \citet{breiman2001random} that constructs a large number of randomized decision trees individually and aggregates their predictions by naive averaging. \citet{zhou2019deep} further propose the Deep Forest (DF) algorithm with multi-layer feature transformation, which significantly outperforms random forest in various application fields. The prediction concatenation (PreConc) operation is crucial for the multi-layer feature transformation in deep forest, though little is known about its theoretical properties. In this paper, we analyze the influence of PreConc on the consistency of deep forest. Especially when the individual tree is inconsistent (as in practice, the individual tree is often set to be fully grown, i.e., there is only one sample at each leaf node), we find that the convergence rate of two-layer DF \textit{w.r.t.} the number of trees $M$ can reach $\mathcal{O}(1/M^2)$ under some mild conditions, while the convergence rate of RF is $\mathcal{O}(1/M)$. Therefore, with the help of PreConc, DF with deeper layers will be more powerful than shallower ones. Experiments confirm the theoretical advantages.
Accept
This paper establishes a set of theoretical results for characterizing the behavior of Deep Forest, an important model in deep learning. The analysis approach is novel and the results are of significant importance. The majority of the reviewers appreciate the authors’ contribution and all reviewers recommend acceptance. Thus I would strongly recommend accepting the paper.
test
[ "vzZ757JZT0", "7BSygd-4fNA", "LUyZg029hZ7", "oracB4dMw8v", "ZdzrI6A7-t-", "iJzFbj6ISJF", "liCrF0Bc4qS", "xCblxPSs9Hh", "kxpq4caSTG5", "i9so4MXKHfc", "JJ-UUSToo6E" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for answering my question. I think this part is a valuable addition to the paper and makes it somewhat easier to understand. ", " --- after rebuttal ---\n\nDear reviewer QMfU,\n\nThank you for reading our response and increasing the score. We noticed that you have raised some new questions, and we hop...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "liCrF0Bc4qS", "i9so4MXKHfc", "i9so4MXKHfc", "i9so4MXKHfc", "JJ-UUSToo6E", "i9so4MXKHfc", "kxpq4caSTG5", "nips_2022_sL7XH6-V21e", "nips_2022_sL7XH6-V21e", "nips_2022_sL7XH6-V21e", "nips_2022_sL7XH6-V21e" ]
nips_2022_pOEN7dDC0d
On the Word Boundaries of Emergent Languages Based on Harris's Articulation Scheme
The purpose of this paper is to investigate whether Harris's articulation scheme (HAS) also holds in emergent languages. HAS is thought to be a universal property of natural languages: articulatory boundaries can be obtained from statistical information about phonemes alone, without reference to word meanings. Emergent languages are artificial communication protocols that arise between agents in a simulated environment and have been attracting attention in recent years. Studying the structure of emergent languages and their similarity to natural languages is considered important. In this paper, we employ HAS as an unsupervised word segmentation method and verify whether emergent languages arising from signaling games have meaningful boundaries. Our experiments showed that the emergent languages arising from signaling games satisfy some preconditions for HAS. However, it was also suggested that the HAS-based segmentation boundaries are not necessarily semantically valid.
Reject
Harris's hypothesis (or articulation scheme, "HAS"; Harris, 1955) suggests that the linguistic boundaries of words can be detected by a count that is unrelated to meaning. In this paper, the authors explore whether emergent languages between agents share this property with natural languages. They test the predictions of HAS on the messages produced by converged agents trained in a Lewis signaling game, where a speaker agent is trained by reinforcement learning and a listener agent is trained by supervised learning. The results are mixed (they find only weak partial support for these predictions in the resulting emergent languages) but the idea is novel and interesting. All reviewers suggest acceptance, but the paper is IMO just below the decision boundary. It is likely to be of interest to only a small number of people and it is unclear whether it will spark any follow-on work since Harris's hypothesis doesn't seem to fully hold.
train
[ "AFvkv8ujgPD", "WMHocDJkodz", "p_OfFml1H_q", "9Ze2ZP0jOTQk", "rBVMYN1Dr8V", "7P9C498jrii", "OtUPhwoPKCk", "bRbC4h8JCUY", "NTMM85uhkZj" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for reading our response! First, let us address your second comment on the follow-up experiment and then your first comment on the fixed-length.\n\n> I am reminded of the concern as I see that this assumption isn't addressed in the synthetic experiment: for a given game with fixed $n_{att}$, you fix seg...
[ -1, -1, -1, -1, -1, -1, 5, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "WMHocDJkodz", "rBVMYN1Dr8V", "NTMM85uhkZj", "bRbC4h8JCUY", "7P9C498jrii", "OtUPhwoPKCk", "nips_2022_pOEN7dDC0d", "nips_2022_pOEN7dDC0d", "nips_2022_pOEN7dDC0d" ]
nips_2022_-yiZR4_Xhh
Dance of SNN and ANN: Solving binding problem by combining spike timing and reconstructive attention
The binding problem is one of the fundamental challenges that prevent artificial neural networks (ANNs) from a compositional understanding of the world like human perception, because disentangled and distributed representations of generative factors can interfere and lead to ambiguity when complex data with multiple objects are presented. In this paper, we propose a brain-inspired unsupervised hybrid neural network (HNN) that introduces temporal binding theory, originating from neuroscience, into ANNs by integrating spike timing dynamics (via spiking neural networks, SNNs) with reconstructive attention (by ANNs). Spike timing provides an additional dimension for grouping, while reconstructive feedback coordinates the spikes into temporally coherent states. Through iterative interaction of ANN and SNN, the model continuously binds multiple objects at alternative synchronous firing times in the SNN coding space. The effectiveness of the model is evaluated on five artificially generated datasets of binary images. By visualization and analysis, we demonstrate that the binding is explainable, soft, flexible, and hierarchical. Notably, the model is trained on single-object datasets without explicit supervision on grouping, but can successfully bind multiple objects on test datasets, showing its compositional generalization capability. Further results show its binding ability in dynamic situations.
Accept
This paper presents a solution to the binding problem that uses a combination of spiking and reconstructive attention. The authors show how spike timing can be used to group sensory inputs into objects and then have attention alter the synchrony of firing to bind multiple objects. They demonstrate this on a variety of visual datasets including basic shapes and MNIST. The reviews for this paper were borderline. The reviewers felt that the paper was novel, technically sound, and well-written, but they were not completely convinced by the paper; in particular there were worries that the experiments did not provide adequate comparisons to other techniques to make the significance of this approach clear, nor did they test appropriately complex situations. Nonetheless, the authors were able to partially alleviate the reviewers' concerns, and the final scores were 7,6,5,5. Given these scores, and the general agreement that the paper was technically sound and novel, an 'accept' decision was made.
train
[ "6DDKKPxmnOb", "58ujZkISMMw", "78TK6-IQX-m", "RBiJXe-onW8", "4vWiwheWmHs", "3zwLWb-fo2b", "UArF8lAjB3a", "p4WQvkY66uV", "gPnsmot5O59", "iAGLiy-nO6uV", "xUnYGs-FcOg", "WjTeauiKmK", "dcu55O5DUyi", "9t6beX4_moM", "I46hnSWhbM", "kFqvZ_dFjTe", "drE7lLw4ACz", "Y-QIxIza85S", "9yB_oOeX8s...
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " Thanks for the reviewer's constructive comments. We agree with the reviewer and will work on more elaborative comparative analysis.", " We thank the reviewer for appreciating the novelty of the proposed architecture. Meanwhile, we can understand the the reviewer's concerns on more complex tasks.\n\nAs pointed o...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 2 ]
[ "78TK6-IQX-m", "UArF8lAjB3a", "3zwLWb-fo2b", "p4WQvkY66uV", "iAGLiy-nO6uV", "gPnsmot5O59", "I46hnSWhbM", "dcu55O5DUyi", "WjTeauiKmK", "9t6beX4_moM", "9yB_oOeX8s", "9yB_oOeX8s", "drE7lLw4ACz", "Y-QIxIza85S", "kFqvZ_dFjTe", "nips_2022_-yiZR4_Xhh", "nips_2022_-yiZR4_Xhh", "nips_2022_-...
nips_2022_oNnv9XjClGK
Direct Advantage Estimation
The predominant approach in reinforcement learning is to assign credit to actions based on the expected return. However, we show that the return may depend on the policy in a way which could lead to excessive variance in value estimation and slow down learning. Instead, we show that the advantage function can be interpreted as causal effects and shares similar properties with causal representations. Based on this insight, we propose Direct Advantage Estimation (DAE), a novel method that can model the advantage function and estimate it directly from on-policy data while simultaneously minimizing the variance of the return without requiring the (action-)value function. We also relate our method to Temporal Difference methods by showing how value functions can be seamlessly integrated into DAE. The proposed method is easy to implement and can be readily adapted by modern actor-critic methods. We evaluate DAE empirically on three discrete control domains and show that it can outperform generalized advantage estimation (GAE), a strong baseline for advantage estimation, on a majority of the environments when applied to policy optimization.
Accept
Overall, the proposed approach is well-motivated, simple to implement and appears to work well empirically. The connection between the advantage function and causal concepts is appreciated by some of the reviewers, but a point of confusion and viewed as insufficiently motivated by others. I recommend acceptance for this paper, but for the causal connection, I want to ask the authors to either provide a more in-depth explanation or to de-emphasize this connection.
train
[ "ua7HJqArd6j", "Vju9WmsFuWk", "lWreI4HF8ei", "Ww2P4weNfh", "IuP-3807B6B", "0LPpVYp8D8z", "LbU6SH_RVCe", "FX2ABIemkMwe", "AK9Pd_mJFhX", "hgzJa4oWiaK", "JWVcIbxKoM", "jYX_QVTsAY5", "s_vQQMNpanu", "rfXoth4A9CN", "EEbqPi-aPqk", "c42tCOFCRDK", "bd2BxJEX1o8", "w-pjrOuDEDp", "CVDjwzQQQb...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "...
[ " Thank you for the clarifications and additions to the paper. Your response about understanding the need for the constraint was helpful. \nI'm happy to still recommend acceptance.", " Thanks! Bumped up my score.", " Thank you for the suggestion, we've expanded the discussion on the softer condition (line 160-1...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "EEbqPi-aPqk", "lWreI4HF8ei", "LbU6SH_RVCe", "0LPpVYp8D8z", "jYX_QVTsAY5", "s_vQQMNpanu", "FX2ABIemkMwe", "AK9Pd_mJFhX", "JWVcIbxKoM", "nips_2022_oNnv9XjClGK", "CVDjwzQQQb0", "w-pjrOuDEDp", "rfXoth4A9CN", "bd2BxJEX1o8", "c42tCOFCRDK", "nips_2022_oNnv9XjClGK", "nips_2022_oNnv9XjClGK",...
nips_2022_ijzm0EhAY_w
Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations
Most video-and-language representation learning approaches employ contrastive learning, e.g., CLIP, to project the video and text features into a common latent space according to the semantic similarities of text-video pairs. However, such learned shared latent spaces are often not optimal, and the modality gap between visual and textual representations cannot be fully eliminated. In this paper, we propose Expectation-Maximization Contrastive Learning (EMCL) to learn compact video-and-language representations. Specifically, we use the Expectation-Maximization algorithm to find a compact set of bases for the latent space, where the features can be concisely represented as linear combinations of these bases. Such feature decomposition of video-and-language representations reduces the rank of the latent space, resulting in increased representing power for the semantics. Extensive experiments on three benchmark text-video retrieval datasets demonstrate that our EMCL can learn more discriminative video-and-language representations than previous methods and significantly outperforms previous state-of-the-art methods across all metrics. More encouragingly, the proposed method can be applied to boost the performance of existing approaches, either as a jointly trained layer or as an out-of-the-box inference module with no extra training, making it easy to incorporate into existing methods.
Accept
This paper proposes Expectation-Maximization Contrastive Learning (EMCL) to learn compact video-and-language representations, toward the general goal of projecting video and text features into a common latent space. Reviewers agreed that the modality gap is an important problem, that representing the modalities in a shared latent space of learned basis vectors is a useful approach, and that the improvements are good. Some reviewers also expressed concerns that the approach is somewhat complicated and that simpler baselines should be compared against (done in the rebuttal); that discussion of several related works on distillation, dimensionality reduction, and transfer learning/domain adaptation is missing; that the motivation of closing the representation gap is somewhat confusing; and that more tasks, such as QA and captioning, should be added.
test
[ "V4OXIfIE_S", "0NOxN1Cjf0", "TdB5P3OkA-", "RozmZgyfo4e", "FaoSIwFTpcV", "-lYMUgJft1", "JNOAnZlNyyS", "frU6FlolWbq", "rllE8Zv9KOL", "6xvUvxNbHCd", "14_y-Kgv3_Nz", "ccS4b75ZIjZ", "c6xhoH2W7K", "XAtgMmjvCN", "WuDrI-nePsG", "GChm--Hc3TX", "Mtz-1dg7HX" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your valuable suggestions and comments. We really enjoy communicating with you and appreciate your efforts.", " Thanks for reading our response and raising further advice. \n\n> **Q1**: Did you optimize the number of parameters in the diagonal vs full covariance case? The model may need fewer d...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "RozmZgyfo4e", "TdB5P3OkA-", "JNOAnZlNyyS", "FaoSIwFTpcV", "-lYMUgJft1", "rllE8Zv9KOL", "frU6FlolWbq", "Mtz-1dg7HX", "6xvUvxNbHCd", "GChm--Hc3TX", "WuDrI-nePsG", "c6xhoH2W7K", "XAtgMmjvCN", "nips_2022_ijzm0EhAY_w", "nips_2022_ijzm0EhAY_w", "nips_2022_ijzm0EhAY_w", "nips_2022_ijzm0EhA...
nips_2022_pFqgUJxXXz
Adversarial Task Up-sampling for Meta-learning
The success of meta-learning on existing benchmarks is predicated on the assumption that the distribution of meta-training tasks covers the meta-testing tasks. Frequent violation of this assumption in applications with either insufficient tasks or a very narrow meta-training task distribution leads to memorization or learner overfitting. Recent solutions have pursued augmentation of meta-training tasks, but it remains an open question how to generate tasks that are both correct and sufficiently imaginative. In this paper, we seek an approach that up-samples meta-training tasks from the task representation via a task up-sampling network. Moreover, the resulting approach, named Adversarial Task Up-sampling (ATU), generates tasks that maximally contribute to the latest meta-learner by maximizing an adversarial loss. On few-shot sine regression and image classification datasets, we empirically validate the marked improvement of ATU over state-of-the-art task augmentation strategies in meta-testing performance as well as in the quality of the up-sampled tasks.
Accept
This paper proposes a task augmentation method for meta-learning that generates new tasks which match the true task distribution and are also challenging for the current meta-learner. This is done by training a task up-sampling network with an adversarial loss as well as an EMD loss between the adversarially generated and ground-truth tasks. The authors provide a theoretical analysis showing that the tasks generated by their adaptive task up-sampling framework are indeed task-aware, i.e., comply with the true task distribution, and validate the method on both regression and classification tasks. The results show that the proposed task up-sampling method outperforms existing regularization methods and task augmentation methods. The paper initially received split reviews. Reviewers were generally positive about the introduction of desirable properties for augmented tasks in meta-learning as a meaningful contribution. They also found the proposed method with adversarial task-aware up-sampling novel and interesting, and the experimental validation adequate in showing the effectiveness of the proposed method as well as each of its components. Another advantage, not mentioned by the reviewers, is that it is applicable to both regression and classification tasks. However, reviewers were also concerned with an unclear theoretical analysis somewhat disconnected from the actual framework, marginal improvements over state-of-the-art baselines such as MLTI, the unclear effectiveness of the adversarial loss, and missing results on larger few-shot classification datasets. Most of these points were addressed in the author response, which resulted in some of the reviewers raising their scores, and all reviewers leaned toward acceptance after the interactive discussion period. In sum, this is a well-written paper that introduces meaningful insights about task augmentation in meta-learning, as well as a novel method, and may be of interest to researchers working on the topic.
However, method-wise, its relatively weak improvement over existing, simpler task augmentation methods may diminish the potential practical impact of the work. Yet, the advantages outweigh the drawbacks, and the work is worth sharing at NeurIPS 2022.
test
[ "RuBJ-aoz8n9", "0Isa8pPUR7", "VFQ0qDtVsFV", "zAytCGk-to", "xhtJn0ssULe", "6SU95tH3k-e", "w_cqMCTZmuX", "SlKe-O1fjrt", "vv5-kihLIYq", "h2-rSy-Och", "r0HwIYSHmoo", "FUXfKlecuH", "JKSiKNNIMMUk", "19AC2GiWzI", "ZSHoaTVARV5", "KLGTG81SckE", "YL8T6-0D2Rc", "LiKRakV_dl", "OOiUK2EEoj", ...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Dear reviewers, \n\nThe interactive author-reviewer discussion period is now over, and we need to have an internal discussion to decide whether to accept or reject the paper. It seems that all of you are leaning toward acceptance. However, some of you seem to have not yet read authors' responses, so please go ove...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "nips_2022_pFqgUJxXXz", "VFQ0qDtVsFV", "19AC2GiWzI", "OOiUK2EEoj", "SlKe-O1fjrt", "FUXfKlecuH", "YL8T6-0D2Rc", "OOiUK2EEoj", "OOiUK2EEoj", "mqCPnq11ZgX", "mqCPnq11ZgX", "mqCPnq11ZgX", "OOiUK2EEoj", "LiKRakV_dl", "YL8T6-0D2Rc", "YL8T6-0D2Rc", "nips_2022_pFqgUJxXXz", "nips_2022_pFqgU...
nips_2022_DhHqObn2UW
A Unifying Framework for Online Optimization with Long-Term Constraints
We study online learning problems in which a decision maker has to take a sequence of decisions subject to $m$ long-term constraints. The goal of the decision maker is to maximize their total reward, while at the same time achieving small cumulative constraints violations across the $T$ rounds. We present the first best-of-both-world type algorithm for this general class of problems, with no-regret guarantees both in the case in which rewards and constraints are selected according to an unknown stochastic model, and in the case in which they are selected at each round by an adversary. Our algorithm is the first to provide guarantees in the adversarial setting with respect to the optimal fixed strategy that satisfies the long-term constraints. In particular, it guarantees a $\rho/(1+\rho)$ fraction of the optimal utility and sublinear regret, where $\rho$ is a feasibility parameter related to the existence of strictly feasible solutions. Our framework employs traditional regret minimizers as black-box components. Therefore, by instantiating it with an appropriate choice of regret minimizers it can handle both the full-feedback as well as the bandit-feedback setting. Moreover, it allows the decision maker to seamlessly handle scenarios with non-convex reward and constraints. We show how our framework may be applied in the context of budget-management mechanisms for repeated auctions in order to guarantee long-term constraints which are not packing (e.g., ROI constraints).
Accept
This paper studies the problem of online optimization with long-term constraints and proposes an interesting black-box type of algorithm that works well for both the stochastic setting and the adversarial setting. One issue the reviewers brought up for the adversarial case is whether competing with only a \rho/(1+\rho) fraction of the optimal utility is indeed meaningful or necessary. In the rebuttal, the authors mentioned that this is likely optimal in light of its connection to the literature on the budget-constrained case, but leave the formal reduction between the two as a future direction. We believe that the paper would be much more complete if this reduction were properly spelled out, and thus encourage the authors to do so in the final version.
test
[ "3GNKN9Tt7HQ", "KyV4Lu4xYUo", "3UIwpd7cpd", "3TTKhD5Ftxq", "t-7z4tmuEXq", "sj40NStfE2", "broTJ7ulfm", "WDKPFkWiABr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the through replies to my questions. I agree that despite the fact that no prior rate is explicitly \"beaten\" there are new implications for different scenarios that may be of interest. I have raised my contribution score from 2 to 3. I also appreciate the clear explanation of the challenges of app...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "3UIwpd7cpd", "t-7z4tmuEXq", "WDKPFkWiABr", "broTJ7ulfm", "sj40NStfE2", "nips_2022_DhHqObn2UW", "nips_2022_DhHqObn2UW", "nips_2022_DhHqObn2UW" ]
nips_2022_Hbvlb4D1aFC
Masked Prediction: A Parameter Identifiability View
The vast majority of work in self-supervised learning has focused on assessing recovered features by a chosen set of downstream tasks. While there are several commonly used benchmark datasets, this lens of feature learning requires assumptions on the downstream tasks which are not inherent to the data distribution itself. In this paper, we present an alternative lens, one of parameter identifiability: assuming data comes from a parametric probabilistic model, we train a self-supervised learning predictor with a suitable parametric form, and ask whether the parameters of the optimal predictor can be used to extract the parameters of the ground truth generative model. Specifically, we focus on latent-variable models capturing sequential structures, namely Hidden Markov Models with both discrete and conditionally Gaussian observations. We focus on masked prediction as the self-supervised learning task and study the optimal masked predictor. We show that parameter identifiability is governed by the task difficulty, which is determined by the choice of data model and the amount of tokens to predict. Technique-wise, we uncover close connections with the uniqueness of tensor rank decompositions, a widely used tool in studying identifiability through the lens of the method of moments.
Accept
This work investigates a theoretical way to analyze self-supervised learning. The identifiability of latent stochastic generative parameters in the context of masked self-supervised prediction tasks is investigated, where HMM and Gaussian HMM models are assumed for latent space sequences. The three main claims are proved by theorems. The main paper does not provide experimental results on how well the findings hold in practice, but experimental simulations are provided in the appendix. I suggest accepting the paper, as it deals with an important topic in understanding SSL and is of significant and timely interest to the community, presenting novel results on the identifiability of masked prediction tasks in discrete and Gaussian HMMs. Although the setup is a little narrow and strongly simplified, the paper is original enough. Further, the paper is well written and provides thorough comparisons to related literature. In future work, addressing practical impact would be a nice addition.
test
[ "D1bCBu5SdFF", "WguT3P9ygta", "qOTVF59GGOf", "3BgvAzUNCZf", "EfQvK_oLBKc", "R3PFHqkRykD", "gnOZ8oduEJ0", "wmpw2iycNYN", "t7WUHoyLfcW", "BoIvl_tpOhB", "fQzn3qBJTHD" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much again for the insightful comments! Please let us know if our response has addressed your concerns, and we are happy to discuss more if there are further questions.", " Thank you very much again for the constructive feedback! Please let us know if our response has addressed your concerns, and...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "gnOZ8oduEJ0", "R3PFHqkRykD", "3BgvAzUNCZf", "BoIvl_tpOhB", "fQzn3qBJTHD", "t7WUHoyLfcW", "wmpw2iycNYN", "nips_2022_Hbvlb4D1aFC", "nips_2022_Hbvlb4D1aFC", "nips_2022_Hbvlb4D1aFC", "nips_2022_Hbvlb4D1aFC" ]
nips_2022_Eccx2-_vZS4
Tabular data imputation: quality over quantity
Tabular data imputation algorithms make it possible to estimate missing values and use incomplete numerical datasets. Current imputation methods minimize the error between the unobserved ground truth and the imputed values. We show that this strategy has major drawbacks in the presence of multimodal distributions, and we propose to use a qualitative approach rather than the current quantitative one. We introduce the kNNxKDE algorithm: a hybrid method using chosen neighbors ($k$NN) for conditional density estimation (KDE), tailored for data imputation. We show qualitatively and quantitatively that our method preserves the original data structure when performing imputation. This work advocates for a careful and reasonable use of statistics and machine learning models by data practitioners.
Reject
This paper makes an interesting observation that imputation error metrics may work poorly for multimodal data, and proposes a creative solution. However, empirical evaluation of the idea on real datasets, and comparison to other proposals for imputation with multimodal data, are lacking. I would encourage the authors to take the reviewers' suggestions seriously and resubmit to a future conference.
test
[ "L8VzOxJSS3S", "hopAQpihuCv", "UwtGfGGNsNC", "2BVmOXsEgqy", "lFVhT2VWfFR", "5s7v5HfSHqE", "TKhtERNYw1K" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their response, and wish them luck on future iterations of the paper.", " Thank you for your time and valuable insight.\n\nThe presentation of the kNNxKDE indeed does not make use of real-world datasets. The initial motivation behind this work is a multi-dimensional astrophysics dataset ...
[ -1, -1, -1, -1, 3, 3, 4 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "2BVmOXsEgqy", "TKhtERNYw1K", "5s7v5HfSHqE", "lFVhT2VWfFR", "nips_2022_Eccx2-_vZS4", "nips_2022_Eccx2-_vZS4", "nips_2022_Eccx2-_vZS4" ]
nips_2022_uRTW_PgXvc7
ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning
Capitalizing on large pre-trained models for various downstream tasks of interest has recently emerged as a promising strategy. Due to the ever-growing model size, the standard full fine-tuning based task adaptation strategy becomes prohibitively costly in terms of model training and storage. This has led to a new research direction in parameter-efficient transfer learning. However, existing attempts typically focus on downstream tasks from the same modality (e.g., image understanding) as the pre-trained model. This creates a limit because in some specific modalities (e.g., video understanding), such a strong pre-trained model with sufficient knowledge is scarce or unavailable. In this work, we investigate such a novel cross-modality transfer learning setting, namely parameter-efficient image-to-video transfer learning. To solve this problem, we propose a new Spatio-Temporal Adapter (ST-Adapter) for parameter-efficient fine-tuning per video task. With a built-in spatio-temporal reasoning capability in a compact design, ST-Adapter enables a pre-trained image model without temporal knowledge to reason about dynamic video content at a small ~8% per-task parameter cost, requiring approximately 20 times fewer updated parameters compared to previous work. Extensive experiments on video action recognition tasks show that our ST-Adapter can match or even outperform the strong full fine-tuning strategy and state-of-the-art video models, whilst enjoying the advantage of parameter efficiency.
Accept
This paper proposes a new spatio-temporal adapter for parameter-efficient fine-tuning per video task, transferred from a pre-trained image model. After the discussion phase, the requested comparisons on full fine-tuning, fine-tuning only the temporal attention modules, and more backbones were added, and the reviewers were satisfied with the rebuttal. Given the all-positive scores from the reviewers, the meta-reviewers recommend accepting this paper.
train
[ "vrasYBu7Unt", "G47r0yztxEx", "zNzhC_7I3bW", "o4TF6AOAfi4", "JIH-nbdWfH", "A1u_wkE3CjP", "MWJGP8EJIaz", "Ev-PxEooOB1", "Z-71q4n-np", "LvjhLWUixCI", "R1noi-77cWF", "pAu11FdEibf", "xjkFZsa49R", "00dUfFcq0EC", "afClvXCEhD2" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers for constructive feedback. We appreciate the positive comments that (1) Our results are strong despite fine-tuning only a small fraction of parameters (ZMxj, Y5Pa, HAPR, 5ouG); (2) Comprehensive ablations (HARP, 5ouG); (3) The paper is well written and easy to follow (Y5Pa, HAPR, 5ouG). We ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 4 ]
[ "nips_2022_uRTW_PgXvc7", "o4TF6AOAfi4", "R1noi-77cWF", "MWJGP8EJIaz", "A1u_wkE3CjP", "Z-71q4n-np", "LvjhLWUixCI", "afClvXCEhD2", "00dUfFcq0EC", "xjkFZsa49R", "pAu11FdEibf", "nips_2022_uRTW_PgXvc7", "nips_2022_uRTW_PgXvc7", "nips_2022_uRTW_PgXvc7", "nips_2022_uRTW_PgXvc7" ]
nips_2022_X6bp8ri8dV
Exact Solutions of a Deep Linear Network
This work finds the analytical expression of the global minima of a deep linear network with weight decay and stochastic neurons, a fundamental model for understanding the landscape of neural networks. Our result implies that zero is a special point in deep neural network architecture. We show that weight decay strongly interacts with the model architecture and can create bad minima at zero in a network with more than $1$ hidden layer, qualitatively different from a network with only $1$ hidden layer. Practically, our result implies that common deep learning initialization methods are insufficient to ease the optimization of neural networks in general.
Accept
There is a clear consensus amongst the reviewers that the manuscript advances the theory of deep linear networks to a degree warranting acceptance at NeurIPS. The authors responded well to the issues raised by the reviewers, resulting in increased reviewer support for acceptance. The inclusion of weight decay, stochasticity, and architectures beyond feed-forward networks makes this a valuable addition to the theory of deep linear networks.
train
[ "a27sFbFSRT", "ukgaVarM4y", "nVWvA4tlOx_", "vYUbDYd6PgR", "mm7tX_OcFHx", "ZUjkGKmR0KK", "E_akB5lb88Q", "vgtC-gegQNO", "ekTY9f8YgWd", "NJOlXcf0fqd", "MTx5qJIqXl", "bj8mj0uybUX" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for their detailed response and for providing much additional material in the updated draft. I have read all other reviews and the authors' responses, and I decided to increase my score by 2. I have updated the review to reflect the changes. As mentioned in the updated review, there are two main...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "vYUbDYd6PgR", "ZUjkGKmR0KK", "bj8mj0uybUX", "MTx5qJIqXl", "MTx5qJIqXl", "NJOlXcf0fqd", "ekTY9f8YgWd", "nips_2022_X6bp8ri8dV", "nips_2022_X6bp8ri8dV", "nips_2022_X6bp8ri8dV", "nips_2022_X6bp8ri8dV", "nips_2022_X6bp8ri8dV" ]
nips_2022_TVpZaWNczF6
Constrained Predictive Coding as a Biologically Plausible Model of the Cortical Hierarchy
Predictive coding (PC) has emerged as an influential normative model of neural computation, with numerous extensions and applications. As such, much effort has been put into mapping PC faithfully onto the cortex, but there are issues that remain unresolved or controversial. In particular, current implementations often involve separate value and error neurons and require symmetric forward and backward weights across different brain regions. These features have not been experimentally confirmed. In this work, we show that the PC framework in the linear regime can be modified to map faithfully onto the cortical hierarchy in a manner compatible with empirical observations. By employing a disentangling-inspired constraint on hidden-layer neural activities, we derive an upper bound for the PC objective. Optimization of this upper bound leads to an algorithm that shows the same performance as the original objective and maps onto a biologically plausible network. The units of this network can be interpreted as multi-compartmental neurons with non-Hebbian learning rules, with a remarkable resemblance to recent experimental findings. There exist prior models which also capture these features, but they are phenomenological, while our work is a normative derivation. Notably, the network we derive does not involve one-to-one connectivity or signal multiplexing, which the phenomenological models required, indicating that these features are not necessary for learning in the cortex. The normative nature of our algorithm in the simplified linear case allows us to prove interesting properties of the framework and analytically understand the computational role of our network's components. The parameters of our network have natural interpretations as physiological quantities in a multi-compartmental model of pyramidal neurons, providing a concrete link between PC and experimental measurements carried out in the cortex.
Accept
A biologically more plausible extension of predictive coding that incorporates physiological detail is presented in this manuscript. I appreciate all discussions and extra experiments and improvements made to the paper. I agree with one of the reviewers who noted that it's a "potentially important variant of predictive coding that could be of interest to neuroscientists and ML scientists." Considering the theoretical contributions and new formulation to a subfield with significant growing interest, I recommend its acceptance to NeurIPS.
train
[ "rOTQakOWNvQ", "GlPVwSSjx-2", "rIwpua3bO-", "Us2USXr3jnZ", "GK8jMaQeesX", "bXX36PHNiHC", "rt9RuVIOKiZ", "KyusFPj5OSp", "j189KN3E1d", "zj-jPd7gjuS", "wer3paeS9fU", "nExAaC7iL0y" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " * We first want to thank the reviewer for pointing out that we are over 9 pages. Adding the extra explanations pushed us over the page limit but we have now corrected the issue. \n\n* On the new additions: we would like to point out that the new experimental results are just further verification of our algorithm ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "GlPVwSSjx-2", "GK8jMaQeesX", "rt9RuVIOKiZ", "KyusFPj5OSp", "nExAaC7iL0y", "wer3paeS9fU", "wer3paeS9fU", "zj-jPd7gjuS", "nips_2022_TVpZaWNczF6", "nips_2022_TVpZaWNczF6", "nips_2022_TVpZaWNczF6", "nips_2022_TVpZaWNczF6" ]
nips_2022_1fKJLRTUdo
SCL-WC: Cross-Slide Contrastive Learning for Weakly-Supervised Whole-Slide Image Classification
Weakly-supervised whole-slide image (WSI) classification (WSWC) is a challenging task where a large number of unlabeled patches (instances) exist within each WSI (bag) while only a slide label is given. Despite recent progress for the multiple instance learning (MIL)-based WSI analysis, the major limitation is that it usually focuses on the easy-to-distinguish diagnosis-positive regions while ignoring positives that occupy a small ratio in the entire WSI. To obtain more discriminative features, we propose a novel weakly-supervised classification method based on cross-slide contrastive learning (called SCL-WC), which depends on task-agnostic self-supervised feature pre-extraction and task-specific weakly-supervised feature refinement and aggregation for WSI-level prediction. To enable both intra-WSI and inter-WSI information interaction, we propose a positive-negative-aware module (PNM) and a weakly-supervised cross-slide contrastive learning (WSCL) module, respectively. The WSCL aims to pull WSIs with the same disease types closer and push different WSIs away. The PNM aims to facilitate the separation of tumor-like patches and normal ones within each WSI. Extensive experiments demonstrate state-of-the-art performance of our method in three different classification tasks (e.g., over 2% of AUC in Camelyon16, 5% of F1 score in BRACS, and 3% of AUC in DiagSet). Our method also shows superior flexibility and scalability in weakly-supervised localization and semi-supervised classification experiments (e.g., first place in the BRIGHT challenge). Our code will be available at https://github.com/Xiyue-Wang/SCL-WC.
Accept
This work presents a method to obtain slide-level representations in computational pathology. The specific contributions of this work are a positive-negative-aware module (PNM), a weakly-supervised cross-slide contrastive learning (WSCL) module, and a loss to encourage intra-WSI local patch separation and inter-WSI global feature contrast. The idea is to use the attention weights in a MIL framework as patch-level pseudo labels. These are used to compute "weights" for positive and negative patches (the latter of which is assumed to be significantly larger in number for a given WSI). By using these weights in a contrastive manner to push the representations of positive patches away from those of negative patches, the model learns more effectively, since it is less susceptible to the noise from the negative patches. The reviewers found these contributions novel. During the review, the largest source of concern was about the choices made in the empirical evaluation. Specifically, the manuscript in its current form lacked empirical backing for several of the choices made regarding the neural architecture, the algorithm for self-supervised learning, etc. In response, the authors conducted several different kinds of ablation studies (which were incorporated into the supplement) during the rebuttal process, which the other reviewers found convincing as a potential explanation of the outcomes. Overall, I found the main contribution of this work (leveraging patch-level attention weights as pseudo labels) to be an interesting use case for computational pathology, better focusing the learning signal on positive patches. My additional comment is that the additional ablation experiments (and references) are an important part of the contributions of this work and should be incorporated into the main paper rather than the appendix (several equations can be compressed to make space in the manuscript, in addition to the additional page).
train
[ "MPh8pdrY53", "QVyuHxeN8FU", "OlaOb4awoYk", "pqz2Ac418BU", "x_I3tAEQ5q-", "upWXmr5u5g5", "xPssydgzCc", "XTZiUBROrD", "hypkQeq4vqa", "Oik4Ae1G7c_", "LC_jTjMMSWu", "H6E7VKaqzlh", "LdT01EnaHFm", "RzBgtajyihJ", "mDRtBVVUn6A" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We want to thank you again for your constructive comments to our original manuscript, and we have studied them thoroughly and made corresponding responses and revisions. With the end of the discussion period approaching, we would like to summarize our responses again. We would appreciate it very much if you could...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 4 ]
[ "RzBgtajyihJ", "nips_2022_1fKJLRTUdo", "pqz2Ac418BU", "x_I3tAEQ5q-", "upWXmr5u5g5", "xPssydgzCc", "RzBgtajyihJ", "hypkQeq4vqa", "H6E7VKaqzlh", "LdT01EnaHFm", "mDRtBVVUn6A", "nips_2022_1fKJLRTUdo", "nips_2022_1fKJLRTUdo", "nips_2022_1fKJLRTUdo", "nips_2022_1fKJLRTUdo" ]
nips_2022_NjKAm5wMbo2
VRL3: A Data-Driven Framework for Visual Deep Reinforcement Learning
We propose VRL3, a powerful data-driven framework with a simple design for solving challenging visual deep reinforcement learning (DRL) tasks. We analyze a number of major obstacles in taking a data-driven approach, and present a suite of design principles, novel findings, and critical insights about data-driven visual DRL. Our framework has three stages: in stage 1, we leverage non-RL datasets (e.g. ImageNet) to learn task-agnostic visual representations; in stage 2, we use offline RL data (e.g. a limited number of expert demonstrations) to convert the task-agnostic representations into more powerful task-specific representations; in stage 3, we fine-tune the agent with online RL. On a set of challenging hand manipulation tasks with sparse reward and realistic visual inputs, compared to the previous SOTA, VRL3 achieves an average of 780% better sample efficiency. And on the hardest task, VRL3 is 1220% more sample efficient (2440% when using a wider encoder) and solves the task with only 10% of the computation. These significant results clearly demonstrate the great potential of data-driven deep reinforcement learning.
Accept
This paper introduced a simple paradigm for improving sample efficiency of training deep reinforcement learning policies for vision-based control tasks. The idea is to use a 3-stage pipeline: 1) pre-training visual representations on large-scale image datasets, 2) policy training with offline RL, and 3) fine-tuning the policy with online RL. This work received mixed reviews from four reviewers, with one Reject, one Weak Reject, and two Weak Accepts. The reviewers appreciated the demonstrated effectiveness of the proposed approach despite the simplicity of the approach. Meanwhile, they expressed major concerns regarding the limited novelty concerning the burgeoning body of literature on visual pre-training and limited evaluations and ablation studies. The authors drafted very detailed responses to the reviewers' comments, which clarified many technical issues brought up in the initial reviews. At the end of the discussion period, Reviewer i5GH (who did not engage in the discussions) and Reviewer 1wyj maintained their negative ratings of this paper, while the other two voted Weak Accept. The AC read the paper, the reviews, and the authors' responses carefully. Reviewer i5GH's main criticisms are 1) the novelty of VRL3 and missing citations and discussions of prior work (MVP, PVR, R3M, etc.) and 2) insufficient comparisons and ablations of key model designs. The AC checked the publication/release dates of the mentioned works and believed they should be considered *concurrent* with this submission. Thus, the technical merit of this work should not be penalized by the existence of these related works. Meanwhile, the authors added the citations and discussions about these works in the revised draft, which addressed the reviewer's comment. In addition, the authors also provided additional ablation studies and clarifications which addressed the second point raised by Reviewer i5GH. 
Reviewer 1wyj expressed concerns about the heavy revision during rebuttal and the overclaiming and over-selling of the approach. The AC agreed with this reviewer that some language, such as "minimalist", should be toned down in the next revision of this manuscript. Taking all these into account, the AC found that the rebuttal has addressed the major issues raised in the reviews. Even though this work does not generate revolutionary ideas, it has shown convincing evidence of a practical approach that improves the learning efficiency of deep reinforcement learning in challenging vision-based control tasks. This work may pave the road for future work to develop more advanced methods. Therefore, the AC thinks that this work has passed the bar of acceptance at NeurIPS, despite the mixed final ratings.
train
[ "3Mo5388DsG", "FKPW0W57A1W", "3jLderQSNrQ", "WG3zQZ6SIBI", "tDANdYQ9oN8", "z1qCJzHhjk_Q", "PS3E-gGc6S18", "SYxkK9ed-0S", "x1_9RgBFvB", "lNQ0ypWOwqQ", "RbXOn90K_JD", "oEPl_hB_Czr", "wOf60PJ4K4_", "LEuS53OCAKZj", "6-yYNI3JMc", "TJEOzLOpyz0", "lMeUV80ypk_", "2roubcqL4dG", "FY_cZpofh...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official...
[ " Dear reviewers, could you please check the authors' responses and see if they sway your overall assessments of this paper? Either way, we would greatly appreciate it if you acknowledged that you have read the responses (thanks to 5Wwb who have already done so) and discussed any follow-up questions with the author...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "nips_2022_NjKAm5wMbo2", "z1qCJzHhjk_Q", "z1qCJzHhjk_Q", "FY_cZpofhe", "z1qCJzHhjk_Q", "PS3E-gGc6S18", "hmu_hDPixAF", "l1cJRGS0DgD", "nips_2022_NjKAm5wMbo2", "l1cJRGS0DgD", "l1cJRGS0DgD", "l1cJRGS0DgD", "hmu_hDPixAF", "hmu_hDPixAF", "hmu_hDPixAF", "E_peXAqNjdS", "1l75npr4EvQ", "1l7...
nips_2022__B5Y2hvZKpS
Renyi Differential Privacy of Propose-Test-Release and Applications to Private and Robust Machine Learning
Propose-Test-Release (PTR) is a differential privacy framework that works with local sensitivity of functions, instead of their global sensitivity. This framework is typically used for releasing robust statistics such as the median or trimmed mean in a differentially private manner. While PTR is a common framework introduced over a decade ago, using it in applications such as robust SGD, where we need many adaptive robust queries, is challenging. This is mainly due to the lack of a Rényi Differential Privacy (RDP) analysis, an essential ingredient underlying the moments accountant approach for differentially private deep learning. In this work, we generalize the standard PTR and derive the first RDP bound for it. We show that our RDP bound for PTR yields tighter DP guarantees than the directly analyzed $(\varepsilon, \delta)$-DP. We also derive the algorithm-specific privacy amplification bound of PTR under subsampling. We show that our bound is much tighter than the general upper bound and close to the lower bound. Our RDP bounds enable tighter privacy loss calculation for the composition of many adaptive runs of PTR. As an application of our analysis, we show that PTR and our theoretical results can be used to design differentially private variants of byzantine robust training algorithms that use robust statistics for gradient aggregation. We conduct experiments on the settings of label, feature, and gradient corruption across different datasets and architectures. We show that the PTR-based private and robust training algorithm significantly improves the utility compared with the baseline.
Accept
This paper studies the RDP bound of the Propose-Test-Release (PTR) algorithm in differential privacy. In particular, it shows an RDP bound for subsampled PTR and demonstrates how it is useful for the composition of robust SGD. Given the textbook importance of PTR, we recommend accepting. However, we encourage the authors to incorporate the comments from the reviewers, make sure that all the details of the proofs are made available in the final version, and clarify any remaining issues the reviewers raised.
train
[ "P8aKEUKKlD", "GwNokIJT8yH", "x9pMmqNz9vD", "Z1wGzFd53Jr", "yWEqdVoTG2Q", "R2EKntUToC4z", "ugTpcNOWDcv", "bDH0QMROY4L", "fq7xs7gX4lQ", "6CAy89k8mYye", "Zoo_825raDC", "rVaeyqTY5Tf", "BAFgYEPCem", "SDnRL7IeuXD" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the rebuttal and the revision! The rebuttal solved my concerns about the presentation. I have raised my score for \"presentation\", but will keep the over score.", " Dear Reviewer Tbda,\n\nWe want to thank you for the positive comments about our paper. We’d also like to express our gratitude for your...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 5 ]
[ "ugTpcNOWDcv", "SDnRL7IeuXD", "BAFgYEPCem", "rVaeyqTY5Tf", "Zoo_825raDC", "SDnRL7IeuXD", "BAFgYEPCem", "Zoo_825raDC", "rVaeyqTY5Tf", "nips_2022__B5Y2hvZKpS", "nips_2022__B5Y2hvZKpS", "nips_2022__B5Y2hvZKpS", "nips_2022__B5Y2hvZKpS", "nips_2022__B5Y2hvZKpS" ]
nips_2022_IXoHxXIGpyV
Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks
One-shot generative domain adaption aims to transfer a pre-trained generator on one domain to a new domain using one reference image only. However, it remains very challenging for the adapted generator (i) to generate diverse images inherited from the pre-trained generator while (ii) faithfully acquiring the domain-specific attributes and styles of the reference image. In this paper, we present a novel one-shot generative domain adaption method, i.e., DiFa, for diverse generation and faithful adaptation. For global-level adaptation, we leverage the difference between the CLIP embedding of the reference image and the mean embedding of source images to constrain the target generator. For local-level adaptation, we introduce an attentive style loss which aligns each intermediate token of an adapted image with its corresponding token of the reference image. To facilitate diverse generation, selective cross-domain consistency is introduced to select and retain domain-sharing attributes in the editing latent $\mathcal{W}+$ space to inherit the diversity of the pre-trained generator. Extensive experiments show that our method outperforms the state of the art both quantitatively and qualitatively, especially in cases of large domain gap. Moreover, our DiFa can easily be extended to zero-shot generative domain adaption with appealing results.
Accept
The paper focuses on one-shot generative domain adaption and proposes a novel method to obtain faithful adaptation. It is well written and easy to follow. The visualization results and quantitative evaluations demonstrate that the proposed method can generate high-quality results. In all, the meta-reviewer considers the contribution of this paper significant and worthy of publication. The authors need to incorporate the ethics reviews when preparing the camera-ready version.
train
[ "ZhPGvOwjIk3", "LkFo0r2Q4H", "oLzmIEmTORf", "fGhWFOoG5Z", "Hir1T38n6Zi", "LkRoEwcbKU", "ora7_ZdBgCS", "JRHiwGERZvB", "GO5Xl9NG2xX", "WFGFXcnARKP", "ZBQTtBG4Ke7", "2dDlB4SF56X", "-2xZSDiqg_H", "VdXnhXHc48W", "mEhuCodoVGl", "Fpu6CxoupQy", "an03vx5K69e" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Reviewer 4nF5 flagged this paper for ethics review. It's unclear from their review the precise reasons. Nevertheless, I agree that this paper deserves more attention.\n\nThe authors begin their discussion of potential negative societal implications in the conclusion of Section `5`:\n\n>Meanwhile, our DiFa also ma...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 3 ]
[ "nips_2022_IXoHxXIGpyV", "nips_2022_IXoHxXIGpyV", "2dDlB4SF56X", "Hir1T38n6Zi", "2dDlB4SF56X", "ora7_ZdBgCS", "JRHiwGERZvB", "Fpu6CxoupQy", "Fpu6CxoupQy", "mEhuCodoVGl", "Fpu6CxoupQy", "an03vx5K69e", "VdXnhXHc48W", "nips_2022_IXoHxXIGpyV", "nips_2022_IXoHxXIGpyV", "nips_2022_IXoHxXIGpy...
nips_2022_i-8uqlurj1f
Does Momentum Change the Implicit Regularization on Separable Data?
The momentum acceleration technique is widely adopted in many optimization algorithms. However, there is no theoretical answer on how the momentum affects the generalization performance of the optimization algorithms. This paper studies this problem by analyzing the implicit regularization of momentum-based optimization. We prove that on the linear classification problem with separable data and exponential-tailed loss, gradient descent with momentum (GDM) converges to the $L^2$ max-margin solution, which is the same as vanilla gradient descent. That means gradient descent with momentum acceleration still converges to a low-complexity model, which guarantees their generalization. We then analyze the stochastic and adaptive variants of GDM (i.e., SGDM and deterministic Adam) and show they also converge to the $L^2$ max-margin solution. Technically, the implicit regularization of SGDM is established based on a novel convergence analysis of SGDM under a general noise condition called affine noise variance condition. To the best of our knowledge, we are the first to derive SGDM’s convergence under such an assumption. Numerical experiments are conducted to support our theoretical results.
Accept
The reviewers agree that the paper provides a nice technical contribution to understanding the effects of momentum on the generalization error of optimization methods. The results of the paper are solid and interesting, while the presentation is clear and easy to follow.
train
[ "Ku0sxXUgDys", "ry14kfDZTnP", "DieavM-DDPa8", "apKfP24DhUl", "3sSXbLVegQX", "dmRsCtxjs2g", "cvwoQ8va6pG", "PGtXlLBuc2p", "gmJ8vhBfk2", "4tt5cfyPaj", "xQriWgjTtY-", "8opZc9Lb7QY" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am generally satisfied with the authors' comments and explanations regarding my questions. I would like to keep my evaluation as \"weak accept\".", " I thank the reviewers for their response and for adding changes in their updated version of the paper. Even though I am still a bit concerned about the novelty ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "PGtXlLBuc2p", "3sSXbLVegQX", "dmRsCtxjs2g", "8opZc9Lb7QY", "8opZc9Lb7QY", "xQriWgjTtY-", "4tt5cfyPaj", "gmJ8vhBfk2", "nips_2022_i-8uqlurj1f", "nips_2022_i-8uqlurj1f", "nips_2022_i-8uqlurj1f", "nips_2022_i-8uqlurj1f" ]
nips_2022_Tsy9WCO_fK1
Can Push-forward Generative Models Fit Multimodal Distributions?
Many generative models synthesize data by transforming a standard Gaussian random variable using a deterministic neural network. Among these models are the Variational Autoencoders and the Generative Adversarial Networks. In this work, we call them "push-forward" models and study their expressivity. We formally demonstrate that the Lipschitz constant of these generative networks has to be large in order to fit multimodal distributions. More precisely, we show that the total variation distance and the Kullback-Leibler divergence between the generated and the data distribution are bounded from below by a constant depending on the mode separation and the Lipschitz constant. Since constraining the Lipschitz constants of neural networks is a common way to stabilize generative models, there is a provable trade-off between the ability of push-forward models to approximate multimodal distributions and the stability of their training. We validate our findings on one-dimensional and image datasets and empirically show that the recently introduced diffusion models do not suffer from such a limitation.
Accept
This paper proposes a theory on the Lipschitz continuity of the pushforward mapping, which transforms a Gaussian distribution to a multimodal distribution. This is a borderline paper. The reviewers are generally positive about the paper and leaning towards acceptance, though there is still some notable gap between the proposed theory and empirical results. The authors are expected to further revise their paper based on the reviewers' suggestions.
val
[ "UiX6MBZ7sYY", "CZDeC2-ecu4", "nUBiiqQGYjb", "sG1qgs-omfj", "CRoY6RRVkhk", "pgv1Moyk7ND", "8uvR9n_fDGr", "8Ih5Qv5O7bt", "Z-aU1JoOMiM", "yVG2tWCQe9", "pw5vNyDVk-C", "-6t1Xe6bGmw", "nzOOJsUDVko", "HlEDpXLnv3r", "MaXTGQQCDYk", "tyBvLIwmc8u", "MBf6-2YU8Gm", "e5vJrTzG8E", "HdQbQI19kR3...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "...
[ " Dear AC, \n\nI have read the rebuttal of the authors, and I would like to keep my initial score due to the reasons I have pointed out in my discussions with the authors. \n\n", " \n\nDear Reviewers,\n\nWe are entering the discussion phase, where the authors will be not involved in the discussion.\n\nI would lik...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3, 4 ]
[ "CZDeC2-ecu4", "nips_2022_Tsy9WCO_fK1", "sG1qgs-omfj", "nzOOJsUDVko", "pgv1Moyk7ND", "8uvR9n_fDGr", "Z-aU1JoOMiM", "HdQbQI19kR3", "HdQbQI19kR3", "e5vJrTzG8E", "MBf6-2YU8Gm", "tyBvLIwmc8u", "MaXTGQQCDYk", "nips_2022_Tsy9WCO_fK1", "nips_2022_Tsy9WCO_fK1", "nips_2022_Tsy9WCO_fK1", "nips...
nips_2022_QnajmHkhegH
DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited Annotations
Solving multi-label recognition (MLR) for images in the low-label regime is a challenging task with many real-world applications. Recent work learns an alignment between textual and visual spaces to compensate for insufficient image labels, but loses accuracy because of the limited amount of available MLR annotations. In this work, we utilize the strong alignment of textual and visual features pretrained with millions of auxiliary image-text pairs and propose \textit{Dual Context Optimization} (DualCoOp) as a unified framework for partial-label MLR and zero-shot MLR. DualCoOp encodes positive and negative contexts with class names as part of the linguistic input (i.e. prompts). Since DualCoOp only introduces a very light learnable overhead upon the pretrained vision-language framework, it can quickly adapt to multi-label recognition tasks that have limited annotations and even unseen classes. Experiments on standard multi-label recognition benchmarks across two challenging low-label settings demonstrate the advantages of our approach over state-of-the-art methods. Our code will be publicly available. Project page: https://cs-people.bu.edu/sunxm/DualCoOp/project.html
Accept
This paper’s DualCoOp extends the previous CoOp prompt learning framework to multi-label and multi-label zero-shot recognition. Reviewers were broadly positive, appreciating the writing, good results, and overall idea of exploiting pre-trained CLIP models for MLR. Questions focused on comparison to vanilla CoOp, evaluation in the fully supervised regime, inference cost and impact of fine-tuning. These were all generally resolved during author feedback phase. Since all reviewers are positive and questions are resolved, I recommend accept.
test
[ "2GjFPadw2QG", "BEUIJ3uB1Vp", "V77zpOUWytC", "OLqsDVf3c5p_", "tgH_Yh9r64q", "4ur4Fd68dOd", "vtOhwUuN9_td", "msEGw5ShFDQ", "-ywg8Db-FT1", "nndspUNFv3E", "HromKQGjl0s" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for reading our rebuttal. We will update our paper with these experiments in the next version.", " Thank you for your rebuttal and for the extra experiments! The rebuttal answered all my questions!\n\n I would suggest to include the extra experiments in the revised version of the paper (at least in th...
[ -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "BEUIJ3uB1Vp", "V77zpOUWytC", "HromKQGjl0s", "nndspUNFv3E", "-ywg8Db-FT1", "msEGw5ShFDQ", "nips_2022_QnajmHkhegH", "nips_2022_QnajmHkhegH", "nips_2022_QnajmHkhegH", "nips_2022_QnajmHkhegH", "nips_2022_QnajmHkhegH" ]
nips_2022_ZV9WAe-Q0J
When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
Vision Transformers (ViTs) have recently achieved competitive performance in broad vision tasks. Unfortunately, under popular threat models, naturally trained ViTs are shown to provide no more adversarial robustness than convolutional neural networks (CNNs). Adversarial training is still required for ViTs to defend against such adversarial attacks. In this paper, we provide the first comprehensive study of the adversarial training recipe of ViTs via extensive evaluation of various training techniques across benchmark datasets. We find that pre-training and the SGD optimizer are necessary for ViTs' adversarial training. Further considering ViT as a new type of model architecture, we investigate its adversarial robustness from the perspective of its unique architectural components. We find that, when randomly masking gradients from some attention blocks or masking perturbations on some patches during adversarial training, the adversarial robustness of ViTs can be remarkably improved, which may potentially open up a line of work to explore the architectural information inside newly designed models like ViTs. Our code is available at https://github.com/mo666666/When-Adversarial-Training-Meets-Vision-Transformers.
Accept
The paper studies how to properly conduct adversarial training on ViTs to obtain adversarially robust models. It conducts a comprehensive study of ViT adversarial training and identifies several important training heuristics that can improve the robustness of ViTs. Among the four reviewers, three consider this a borderline paper and one strongly supports this work. After several discussions, we think this paper will build a solid foundation for future adversarial robustness studies on ViT models, so we decided to accept the paper. However, we do hope the authors carefully revise their paper based on the reviewers' suggestions. More specifically, the main concern from the reviewers is the correctness issue pointed out by Reviewer 63WY. We hope the authors can carefully check and explain those issues in their final version.
test
[ "6RclkV0jutm", "JsQGHljHJ6a", "6LMS0zc2e3", "CYjW0TXFN9P", "0njRXfaTwM4", "6bE0NGiZbLh", "1VaH7myhXrq", "rH8zLYfj6P7", "kIB056k-Pu2", "Wg-MqrED7NM", "C3XHLINNZkX", "xi9TDcE70in", "6ZpDkc3zmix", "iEDSzb3Noik", "htlJ1J_RMdp", "vXYT3wmANuCU", "wxZ0_qhGtSS", "WF8uwTbD0WL", "qAa-TZmKD...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " Dear Reviewer 63WY,\n\nThanks for your reply! We will definitely update the results. \n\nWe want to emphasize the results on CIFAR again. Please note this is **adversarial training**. As you have noted, **in Table 3 of [1], ViT-B achieved 49.2% robustness under AA, while in our Table 2, ViT-B achieved 49.06%**. T...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 9, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 4 ]
[ "JsQGHljHJ6a", "6LMS0zc2e3", "CYjW0TXFN9P", "xi9TDcE70in", "qAa-TZmKDAQ", "1VaH7myhXrq", "iEDSzb3Noik", "qAa-TZmKDAQ", "415yUK1kCD", "WF8uwTbD0WL", "wxZ0_qhGtSS", "6ZpDkc3zmix", "WF8uwTbD0WL", "415yUK1kCD", "qAa-TZmKDAQ", "wxZ0_qhGtSS", "nips_2022_ZV9WAe-Q0J", "nips_2022_ZV9WAe-Q0J...
nips_2022_QeRAyn4igEA
Block-wise Separable Convolutions: An Alternative Way to Factorize Standard Convolutions
Convolutional neural networks (CNNs) have demonstrated great capability of solving various computer vision tasks with good prediction performance. Nevertheless, the higher accuracy often comes with an increasing number of model parameters and large computational cost. This raises challenges in deploying them on resource-limited devices. In this paper, we introduce block-wise separable convolutions (BlkSConv) to replace the standard convolutions in order to compress deep CNN models. First, BlkSConv expresses the standard convolutional kernel as an ordered set of block vectors, each of which is a linear combination of fixed basis block vectors. Then it eliminates most basis block vectors and their corresponding coefficients to obtain an approximated convolutional kernel. Moreover, the proposed BlkSConv operation can be efficiently realized via a combination of pointwise and group-wise convolutions. Thus the constructed networks have smaller model size and fewer multiply-add operations while keeping comparable prediction accuracy. However, it is unclear how to find a suitable hyperparameter setting for the block depth and the number of basis block vectors. To address this problem, we develop a hyperparameter search framework based on principal component analysis (PCA) to help determine these two hyperparameters such that the corresponding network achieves good prediction performance while simultaneously satisfying the constraints on model size and model efficiency. Experimental results demonstrate the prediction performance of the constructed BlkSConv-based CNNs, where several convolutional layers are replaced by BlkSConv layers suggested by the proposed PCA-based hyperparameter search algorithm. Our results show that BlkSConv-based CNNs achieve competitive performance compared with the standard convolutional models on datasets including ImageNet, CIFAR-10/100, Stanford Dogs, and Oxford Flowers.
Reject
This paper proposes a factorized convolution operation, which can be implemented with a combination of group-wise convolution and point-wise convolution. The reviewers in general agreed that this is a valid idea, and found the PCA-based hyper-parameter search method interesting. However, several concerns were raised on the experimental side, including the lack of fair comparison with other efficient convolution baselines, as well as degraded latency and memory cost compared to simple ResNets. Given the practical nature of the work, the AC agrees that this work needs substantial improvements in its empirical evaluations to pass the bar for publication.
train
[ "DLIvPEtZ9x", "TuUiAbwbS1o", "dsIzWrlduKk", "oZ_-Q_fHvOc", "sEDDVQHF9z_", "1VRNKZecTcJ", "mxDNA14Tp5I", "JXE6SsxVmfo", "oYleYuPF-S_" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for clarifying that M is not necessary to be 2^k!\nI agree that FLOPs is not necessary correlated to latency, but if the proposed method makes the original ResNet18 slower (and also lower accuracy), then the contribution of reducing FLOPs/Prams is limited in practice.", " 1. The latency results are also ...
[ -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "TuUiAbwbS1o", "dsIzWrlduKk", "sEDDVQHF9z_", "oYleYuPF-S_", "JXE6SsxVmfo", "mxDNA14Tp5I", "nips_2022_QeRAyn4igEA", "nips_2022_QeRAyn4igEA", "nips_2022_QeRAyn4igEA" ]
nips_2022_xL7B5axplIe
Conservative Dual Policy Optimization for Efficient Model-Based Reinforcement Learning
Provably efficient Model-Based Reinforcement Learning (MBRL) based on optimism or posterior sampling (PSRL) is guaranteed to attain global optimality asymptotically by introducing the complexity measure of the model. However, the complexity might grow exponentially for the simplest nonlinear models, where global convergence is impossible within finite iterations. When the model suffers a large generalization error, which is quantitatively measured by the model complexity, the uncertainty can be large. The sampled model that the current policy is greedily optimized upon will thus be unsettled, resulting in aggressive policy updates and over-exploration. In this work, we propose Conservative Dual Policy Optimization (CDPO), which involves a Referential Update and a Conservative Update. The policy is first optimized under a reference model, which imitates the mechanism of PSRL while offering more stability. A conservative range of randomness is guaranteed by maximizing the expectation of the model value. Without harmful sampling procedures, CDPO can still achieve the same regret as PSRL. More importantly, CDPO enjoys monotonic policy improvement and global optimality simultaneously. Empirical results also validate the exploration efficiency of CDPO.
Accept
Reviewers all appreciated the authors' effort in adding additional experiments and revising the draft during the rebuttal phase, and the reviewers are in general satisfied with the authors' response. The reviewers agree that the paper makes a solid contribution to MBRL.
train
[ "X05ixGzxMOt", "s2LB3dHKCtv", "u9-qct8rdZV", "jGbVzlDtysr", "oLQ1dsPc7CF", "sIM4ciuFVqf", "IFodpWsRSX8", "tjSTLxY6e3v", "N9fJHp8vSue", "HODDKtQB50P", "8Rwauj1RjII", "q-09DOpYpAr", "n8HYL0vtv3q", "6j_k3T9cvi", "G4muzAuWmFD", "ynv4qISpq1t", "QGnhSdbwqBb", "GhU9QdSj3Yh" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am pleased with your detailed response.", " Thank you for helping us improve the paper and for updating the score! We really appreciate your suggestions and will continue polishing our paper.", " Thank you for your hard working! My concerns have been addressed. While I think the baseline comparison should i...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "6j_k3T9cvi", "u9-qct8rdZV", "HODDKtQB50P", "nips_2022_xL7B5axplIe", "G4muzAuWmFD", "IFodpWsRSX8", "tjSTLxY6e3v", "q-09DOpYpAr", "nips_2022_xL7B5axplIe", "8Rwauj1RjII", "G4muzAuWmFD", "ynv4qISpq1t", "QGnhSdbwqBb", "GhU9QdSj3Yh", "nips_2022_xL7B5axplIe", "nips_2022_xL7B5axplIe", "nips...
nips_2022_OlDEMIbCvTl
Efficient Submodular Optimization under Noise: Local Search is Robust
The problem of monotone submodular maximization has been studied extensively due to its wide range of applications. However, there are cases where one can only access the objective function in a distorted or noisy form because of the uncertain nature or the errors involved in the evaluation. This paper considers the problem of constrained monotone submodular maximization with noisy oracles introduced by Hassidim and Singer (2017). For a cardinality constraint, we propose an algorithm achieving a near-optimal (1-1/e-O(epsilon))-approximation guarantee (for arbitrary epsilon > 0) with only a polynomial number of queries to the noisy value oracle, which improves the exponential query complexity of Singer and Hassidim (2018). For general matroid constraints, we show the first constant approximation algorithm in the presence of noise. Our main approaches are to design a novel local search framework that can handle the effect of noise and to construct certain smoothing surrogate functions for noise reduction.
Accept
Overall, this paper studies an important problem and obtains strong results. There were some concerns raised by one reviewer regarding the novelty of the techniques and the lack of discussion regarding the noise model. Please make sure to add the discussion about the noise model from the rebuttal in the camera-ready version of the paper.
train
[ "f9_oR39rPE", "3K5vr3fSQV", "Za1aE3_hYEi", "nJPuGf06puG", "Z--tItT9Ekv", "QiPBkN5Ad5c" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comments that give us the chance to clarify our novelty and contributions. We hope that our reply addresses your concerns and that you will increase your support for the paper.\n\n> ''The adaptation of the FW algorithm is fairly straightforward, and as such, the paper does not contribute much in t...
[ -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, 4, 3, 3 ]
[ "nJPuGf06puG", "Z--tItT9Ekv", "QiPBkN5Ad5c", "nips_2022_OlDEMIbCvTl", "nips_2022_OlDEMIbCvTl", "nips_2022_OlDEMIbCvTl" ]
nips_2022_nE8_DvxAqAB
Egocentric Video-Language Pretraining
Video-Language Pretraining (VLP), which aims to learn transferable representation to advance a wide range of video-text downstream tasks, has recently received increasing attention. Best performing works rely on large-scale, 3rd-person video-text datasets, such as HowTo100M. In this work, we exploit the recently released Ego4D dataset to pioneer Egocentric VLP along three directions. (i) We create EgoClip, a 1st-person video-text pretraining dataset comprising 3.8M clip-text pairs well-chosen from Ego4D, covering a large variety of human daily activities. (ii) We propose a novel pretraining objective, dubbed EgoNCE, which adapts video-text contrastive learning to the egocentric domain by mining egocentric-aware positive and negative samples. (iii) We introduce EgoMCQ, a development benchmark that is close to EgoClip and hence can support effective validation and fast exploration of our design decisions in EgoClip and EgoNCE. Furthermore, we demonstrate strong performance on five egocentric downstream tasks across three datasets: video-text retrieval on EPIC-KITCHENS-100; action recognition on Charades-Ego; natural language query, moment query, and object state change classification on Ego4D challenge benchmarks. The dataset and code are available at https://github.com/showlab/EgoVLP.
Accept
For egocentric video-language pretraining, this paper creates a 1st-person video-text pretraining dataset, proposes a new contrastive loss EgoNCE, and builds a new benchmark EgoMCQ. Although the contribution of this work is somewhat incremental, its motivation, experimentation, and organization are good. Moreover, all reviewers agree that egocentric video-language pretraining is an important topic for the community. I hence suggest accepting it.
train
[ "g-jzZKxKBNT", "EfkXn2VhnB", "d_jyuWTqFz-", "60p8xT-EFtY", "60_oVp8wJo", "fXSYynuNlvG", "vHcFeUnSI5", "Fo-VjSNjUrx", "beDKgDUKsFu", "X3s-GaUn5LP", "_MWILqsvg6B", "IsWVk9BCIua", "PohF5UoBOfO", "4KROUiTXx8w", "flMAcj9Wshh", "Z6Hy9Dcgh9" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your feedback, we have posted a new response in https://openreview.net/forum?id=nE8_DvxAqAB&noteId=EfkXn2VhnB to recap our contribution (against Ego4D and Frozen model)\n\n- Our work is motivated by Ego4D but we note that **there is a long way to pave from the Ego4D dataset to Egocentric V...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "d_jyuWTqFz-", "nips_2022_nE8_DvxAqAB", "beDKgDUKsFu", "60_oVp8wJo", "vHcFeUnSI5", "Z6Hy9Dcgh9", "Z6Hy9Dcgh9", "4KROUiTXx8w", "4KROUiTXx8w", "flMAcj9Wshh", "PohF5UoBOfO", "nips_2022_nE8_DvxAqAB", "nips_2022_nE8_DvxAqAB", "nips_2022_nE8_DvxAqAB", "nips_2022_nE8_DvxAqAB", "nips_2022_nE8_...
nips_2022_2fD1Ux9InIW
Generalised Implicit Neural Representations
We consider the problem of learning implicit neural representations (INRs) for signals on non-Euclidean domains. In the Euclidean case, INRs are trained on a discrete sampling of a signal over a regular lattice. Here, we assume that the continuous signal exists on some unknown topological space from which we sample a discrete graph. In the absence of a coordinate system to identify the sampled nodes, we propose approximating their location with a spectral embedding of the graph. This allows us to train INRs without knowing the underlying continuous domain, which is the case for most graph signals in nature, while also making the INRs independent of any choice of coordinate system. We show experiments with our method on various real-world signals on non-Euclidean domains.
Accept
There is a clear consensus amongst the reviewers that the results are interesting, the manuscript is easy to read, and would be interesting to the NeurIPS community. One of the main criticisms was the lack of baselines to compare the authors results against, and this was resolved by the authors in revision which encouraged one reviewer to advance their recommendation from weak accept to accept; this can be expected to also have been appreciated by other reviewers.
train
[ "sXB-36neq__", "jJbMCdjzBdo", "_tWhW0pNLA0", "tUBmiTB2a0", "vo7dOKSkgwA", "iLbPIMDs_Vq", "g9WPuglgcqj", "UwBQQmNxV6", "PWJM9m8vSFK", "Uzd70WSUtag", "onKdQ7iuvPr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for reply and sorry for the late response. Authors' reply has resolved all my questions. I'll keep my score and advocate acceptance.", " We thank the authors for their quick and thorough reply. The clarification about intrinsic symmetries, comparison to related work, sign choice in super-resolution exper...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "PWJM9m8vSFK", "tUBmiTB2a0", "nips_2022_2fD1Ux9InIW", "vo7dOKSkgwA", "onKdQ7iuvPr", "Uzd70WSUtag", "UwBQQmNxV6", "PWJM9m8vSFK", "nips_2022_2fD1Ux9InIW", "nips_2022_2fD1Ux9InIW", "nips_2022_2fD1Ux9InIW" ]
nips_2022_Z5SE9PiAO4t
Few-shot Image Generation via Adaptation-Aware Kernel Modulation
Few-shot image generation (FSIG) aims to learn to generate new and diverse samples given an extremely limited number of samples from a domain, e.g., 10 training samples. Recent work has addressed the problem using a transfer learning approach, leveraging a GAN pretrained on a large-scale source domain dataset and adapting that model to the target domain based on very limited target domain samples. Central to recent FSIG methods are knowledge preserving criteria, which aim to select a subset of the source model's knowledge to be preserved into the adapted model. However, a major limitation of existing methods is that their knowledge preserving criteria consider only the source domain/source task, and they fail to consider the target domain/adaptation task in selecting the source model's knowledge, casting doubt on their suitability for setups of different proximity between source and target domain. Our work makes two contributions. As our first contribution, we re-visit recent FSIG works and their experiments. Our important finding is that, under setups in which the assumption of close proximity between source and target domains is relaxed, existing state-of-the-art (SOTA) methods which consider only the source domain/source task in knowledge preserving perform no better than a baseline fine-tuning method. To address the limitation of existing methods, as our second contribution, we propose Adaptation-Aware kernel Modulation (AdAM) for general FSIG of different source-target domain proximity. Extensive experimental results show that the proposed method consistently achieves SOTA performance across source/target domains of different proximity, including challenging setups when source and target domains are more apart. Project Page: https://yunqing-me.github.io/AdAM/
Accept
This paper focuses on the few-shot image generation task, which is an interesting and challenging problem. It is well-organized and easy to follow. All the reviewers acknowledge that experimental results demonstrate the effectiveness of the proposed adaptation-aware kernel modulation method. Overall, the meta-reviewer recommends acceptance of the paper.
train
[ "IyT8S01rWSH", "iLM0B-xc35r", "lnCT_6AC874", "fiTc4YQiwwF", "K877lyq4WSF", "tJnrzUJ5fCl", "BsyxuSB0p4v", "bPP4qMR81GG", "Of0y8T4p-yk", "ONSaVQ1y4hi", "ZAvL1xDIH5k", "Yzga-c1Tqs", "UO0_aQ8vhWX", "TF0zvIFU37w", "EXdt8CEiyYT", "3Ik7euhHSw2", "e6HZc3bkjy3", "en2tQYyCMZq", "TMaxD1ioSw...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear ACs and Reviewers:\n\nWe sincerely thank you for your positive comments and recommendations. We appreciate our Reviewers’ involvement during the discussion stage allowing us to accurately communicate additional details of our work. \n\n-------------------------------------\n\nWe highly appreciate the Review...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "nips_2022_Z5SE9PiAO4t", "lnCT_6AC874", "e6HZc3bkjy3", "en2tQYyCMZq", "en2tQYyCMZq", "e6HZc3bkjy3", "UO0_aQ8vhWX", "nips_2022_Z5SE9PiAO4t", "e6HZc3bkjy3", "e6HZc3bkjy3", "en2tQYyCMZq", "en2tQYyCMZq", "TMaxD1ioSwE", "TMaxD1ioSwE", "TMaxD1ioSwE", "nips_2022_Z5SE9PiAO4t", "nips_2022_Z5S...
nips_2022_-AxpnEv1f1
Look Around and Refer: 2D Synthetic Semantics Knowledge Distillation for 3D Visual Grounding
3D visual grounding task has been explored with visual and language streams to comprehend referential language for identifying targeted objects in 3D scenes. However, most existing methods devote the visual stream to capture the 3D visual clues using off-the-shelf point clouds encoders. The main question we address is “can we consolidate the 3D visual stream by 2D clues and efficiently utilize them in both training and testing phases?”. The main idea is to assist the 3D encoder by incorporating rich 2D object representations without requiring extra 2D inputs. To this end, we leverage 2D clues, synthetically generated from 3D point clouds, that empirically show their aptitude to boost the quality of the learned visual representations. We validate our approach through comprehensive experiments on Nr3D, Sr3D, and ScanRefer datasets. Our experiments show consistent performance gains against counterparts, where our proposed module, dubbed as LAR, significantly outperforms state-of-the-art 3D visual grounding techniques on three benchmarks. Our code will be made publicly available.
Accept
After the rebuttal and discussion, the paper received one weak accept and three borderline ratings (2 borderline accept, 1 borderline reject). The concerns of the only reviewer leaning towards rejection are well addressed; as such, the AC sees no reason to reject this paper.
train
[ "qv3WAGizAKi", "fvIwdkHFMrW", "F8sBPElt-f6", "vk0wNJeWEUH", "mvjYIHfBe1E", "jwt6YIb3pyw", "wE8Ak-Q_buB", "qeb04p0CF7", "Z5BTwQgSQrjj", "5yPHLzSy_eZW", "Ef63Y02xy_w", "IgbsN8EZG4n", "bhsagyYpbD9", "EeE1WkA6Fk", "uQkGDV7IcED", "O3ghwFbVVY", "Iqk7hy7sMC6" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " # ---------------------------------------------------------\n**10: The paper focuses on indoor scenes by how it selects the viewing directions of the virtual cameras. How would this work in more open-world settings where the camera does not always point outwards? Would this approach still work?**\n\nOur SIG modul...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "O3ghwFbVVY", "Iqk7hy7sMC6", "uQkGDV7IcED", "EeE1WkA6Fk", "Iqk7hy7sMC6", "Iqk7hy7sMC6", "O3ghwFbVVY", "O3ghwFbVVY", "O3ghwFbVVY", "uQkGDV7IcED", "EeE1WkA6Fk", "EeE1WkA6Fk", "nips_2022_-AxpnEv1f1", "nips_2022_-AxpnEv1f1", "nips_2022_-AxpnEv1f1", "nips_2022_-AxpnEv1f1", "nips_2022_-Axp...
nips_2022_xZmjH3Pm2BK
Wavelet Score-Based Generative Modeling
Score-based generative models (SGMs) synthesize new data samples from Gaussian white noise by running a time-reversed Stochastic Differential Equation (SDE) whose drift coefficient depends on some probabilistic score. The discretization of such SDEs typically requires a large number of time steps and hence a high computational cost. This is because of ill-conditioning properties of the score that we analyze mathematically. Previous approaches have relied on multiscale generation to considerably accelerate SGMs. We explain how this acceleration results from an implicit factorization of the data distribution into a product of conditional probabilities of wavelet coefficients across scales. The resulting Wavelet Score-based Generative Model (WSGM) synthesizes wavelet coefficients with the same number of time steps at all scales, and its time complexity therefore grows linearly with the image size. This is proved mathematically for Gaussian distributions, and shown numerically for physical processes at phase transition and natural image datasets.
Accept
Summary: This paper introduces Wavelet Score-Based Generative Modeling, a multi-scale diffusion model that allows for considerably faster synthesis by working in the wavelet basis. The topic is timely, given the community's growing interest in diffusion models and the high computational cost of sampling from such models. The reviewers think that this is a technically solid paper that introduces important insight, for example that the number of required time steps increases with the condition number. The reviewers found the empirical comparison to a baseline diffusion model satisfactory. The reviewers had some relatively minor concerns, some of which were addressed in a healthy discussion. For example, the submitted paper lacked an introductory explanation of the wavelet transform, but this was addressed by the authors by including an additional appendix. The reviewers agree that the theoretical and empirical contributions are sufficient for publication. The reviewers do note that experiments on larger-scale and/or more diverse datasets (e.g., 1024x1024) would be illuminating. There was no discussion among reviewers. Recommendation: I recommend to accept this paper.
train
[ "moFa21-pcq", "H_tP0_DszYMI", "vPxczjMFvLr", "jfvh9bmOW7v", "b99Y34GktEaL", "SZCL_M6WAn4i", "hO7_aggLwB", "1zK9o0zUTu5", "tU6x0BYNORB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the rebuttal. I acknowledge your rebuttal and the other reviewers.\nI am keeping my score as 6 during the author-reviewer rebuttal, because I still consider it the missed opportunity to see the high-resolution image experiments at submission.\nStill, I am supportive on this work, and I would like to...
[ -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "b99Y34GktEaL", "jfvh9bmOW7v", "SZCL_M6WAn4i", "tU6x0BYNORB", "1zK9o0zUTu5", "hO7_aggLwB", "nips_2022_xZmjH3Pm2BK", "nips_2022_xZmjH3Pm2BK", "nips_2022_xZmjH3Pm2BK" ]
nips_2022_fBU4qsM6Fkf
Self-supervised Heterogeneous Graph Pre-training Based on Structural Clustering
Recent self-supervised pre-training methods on Heterogeneous Information Networks (HINs) have shown promising competitiveness over traditional semi-supervised Heterogeneous Graph Neural Networks (HGNNs). Unfortunately, their performance heavily depends on careful customization of various strategies for generating high-quality positive examples and negative examples, which notably limits their flexibility and generalization ability. In this work, we present SHGP, a novel Self-supervised Heterogeneous Graph Pre-training approach, which does not need to generate any positive examples or negative examples. It consists of two modules that share the same attention-aggregation scheme. In each iteration, the Att-LPA module produces pseudo-labels through structural clustering, which serve as the self-supervision signals to guide the Att-HGNN module to learn object embeddings and attention coefficients. The two modules can effectively utilize and enhance each other, promoting the model to learn discriminative embeddings. Extensive experiments on four real-world datasets demonstrate the superior effectiveness of SHGP against state-of-the-art unsupervised baselines and even semi-supervised baselines. We release our source code at: https://github.com/kepsail/SHGP.
Accept
I recommend accepting this paper. In this paper, the authors propose SHGP, a technique that leverages structural clustering for pre-training heterogeneous graph neural networks in a self-supervised manner. After the rebuttal, all the reviewers are positive about accepting this paper. I would encourage the authors to revise the paper based on the suggestions from the reviewers.
train
[ "dZ3eHyodri", "nDoZ61sL1w1", "C2f2KN9rNR", "Vsw5HB1xqnS", "iS8vgsxmrH", "bWCIiW2enl8", "33mJX3Q2aW", "J5LBK-_jX9Q", "JU8CSaJQItT", "4EYXylLPC9", "iCa0URxzgGq", "-TDCnzNFh5L", "yP3URi0cRa4", "1fSvyVhunc4", "yK8OrDsTTxo" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We really appreciate your acknowledgment of our work and your valuable comments which help improve the quality of our manuscript.", " Thanks for addressing my question. Based on the authors' response, I increased my score from 4 to 5 accordingly.", " Dear Reviewer cQLi,\n\nWe sincerely appreciate your comment...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 3 ]
[ "nDoZ61sL1w1", "C2f2KN9rNR", "-TDCnzNFh5L", "JU8CSaJQItT", "nips_2022_fBU4qsM6Fkf", "yK8OrDsTTxo", "yK8OrDsTTxo", "1fSvyVhunc4", "yP3URi0cRa4", "yP3URi0cRa4", "-TDCnzNFh5L", "nips_2022_fBU4qsM6Fkf", "nips_2022_fBU4qsM6Fkf", "nips_2022_fBU4qsM6Fkf", "nips_2022_fBU4qsM6Fkf" ]
nips_2022_QRp6viwPRaX
Improving 3D-aware Image Synthesis with A Geometry-aware Discriminator
3D-aware image synthesis aims at learning a generative model that can render photo-realistic 2D images while capturing decent underlying 3D shapes. A popular solution is to adopt the generative adversarial network (GAN) and replace the generator with a 3D renderer, where volume rendering with neural radiance field (NeRF) is commonly used. Despite the advancement of synthesis quality, existing methods fail to obtain moderate 3D shapes. We argue that, considering the two-player game in the formulation of GANs, only making the generator 3D-aware is not enough. In other words, displacing the generative mechanism only offers the capability, but not the guarantee, of producing 3D-aware images, because the supervision of the generator primarily comes from the discriminator. To address this issue, we propose GeoD through learning a geometry-aware discriminator to improve 3D-aware GANs. Concretely, besides differentiating real and fake samples from the 2D image space, the discriminator is additionally asked to derive the geometry information from the inputs, which is then applied as the guidance of the generator. Such a simple yet effective design facilitates learning substantially more accurate 3D shapes. Extensive experiments on various generator architectures and training datasets verify the superiority of GeoD over state-of-the-art alternatives. Moreover, our approach is registered as a general framework such that a more capable discriminator (i.e., with a third task of novel view synthesis beyond domain classification and geometry extraction) can further assist the generator with a better multi-view consistency. Project page can be found at https://vivianszf.github.io/geod.
Accept
This paper presents a new 3d-aware image generative model. Compared to previous methods such as pi-GAN, the paper introduces a new geometry-aware discriminator, which helps the model learn better 3D shapes. Many reviewers found the paper well-written, the idea novel/interesting, and the results convincing. They also raised several concerns regarding the evaluation, baselines, and high-res synthesis. The rebuttal has addressed most of the concerns. The AC agreed with most of the reviewers and recommended accepting the paper.
train
[ "4kD-CTPLRtn", "9bfv-1ETlOJ", "sIMoh1YvZbU", "w10_0RVD5V7", "1VpFXXKT8d", "RUkJymEiSTA", "pWuCkUulW-", "F-zuqaBurCw", "eFjEoawJWJm", "cyaH66oHZsRR", "BuLNrFzXkgd", "hBPjgRE1Sde", "_lS4PrM9MFy", "yjqaVNWgTMQ", "A1oroLG-HCtm", "-Lnr4B7W4Qo", "_0jcKxfzCr", "HeHizU0Krw_", "5cy1huCfBM...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Thanks for your reply.\n\nFirst of all, we would like to reaffirm that **our motivation is to make the discriminator in 3D GANs 3D-aware**, which is **orthogonal** to existing studies on improving the generator. We do not claim regularizing the discriminator is better than regularizing the generator, but want to ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 3 ]
[ "9bfv-1ETlOJ", "sIMoh1YvZbU", "w10_0RVD5V7", "1VpFXXKT8d", "RUkJymEiSTA", "-Lnr4B7W4Qo", "nips_2022_QRp6viwPRaX", "eFjEoawJWJm", "HeHizU0Krw_", "-Lnr4B7W4Qo", "yjqaVNWgTMQ", "_0jcKxfzCr", "HeHizU0Krw_", "A1oroLG-HCtm", "WcOBvNsoBtK", "n7lekSOxBtw", "4nlDilxdmC8", "5cy1huCfBM", "n...
nips_2022_7k_J2kkIy3U
Estimating graphical models for count data with applications to single-cell gene network
Graphical models such as Gaussian graphical models have been widely applied for direct interaction inference in many different areas. In many modern applications, such as single-cell RNA sequencing (scRNA-seq) studies, the observed data are counts and often contain many small counts. Traditional graphical models for continuous data are inappropriate for network inference of count data. We consider the Poisson log-normal (PLN) graphical model for count data and the precision matrix of the latent normal distribution represents the network. We propose a two-step method PLNet to estimate the precision matrix. PLNet first estimates the latent covariance matrix using the maximum marginal likelihood estimator (MMLE) and then estimates the precision matrix by minimizing the lasso-penalized D-trace loss function. We establish the convergence rate of the MMLE of the covariance matrix and further establish the convergence rate and the sign consistency of the proposed PLNet estimator of the precision matrix in the high dimensional setting. Importantly, although the PLN model is not sub-Gaussian, we show that the PLNet estimator is consistent even if the model dimension goes to infinity exponentially as the sample size increases. The performance of PLNet is evaluated and compared with available methods using simulation and gene regulatory network analysis of real scRNA-seq data.
Accept
This paper addresses the problem of identifying a network of interacting entities with applications to genetic regulatory networks. The approach makes use of a Poisson log-normal graphical model structure for count data. The paper shows that the approach has desirable statistical properties such as consistency of the precision matrix estimate. The paper compares the performance to some other methods on various graph structures and presents results on a single-cell RNA sequencing data set. The reviewers generally found merit in the approach. The clarity of the paper was judged by the reviewers to be good. The suggestion from one reviewer that prior information about the gene regulatory network may be incorporated was welcomed by the authors, though this addition may have an impact on the consistency results. The rate of convergence was found to be root-n and while the practical applicability of that rate for relatively expensive samples such as scRNA-seq is not clear, the method is applicable to a wide range of problems where a sample size that approaches the numbers in the simulation examples is more feasible. I would strongly encourage the authors to discuss potential limitations including: (1) the interaction between the expense of scRNA-seq data collection and the asymptotics of the consistency results, and (2) the degree to which model misspecification affects scientific conclusions based on parametric inference.
test
[ "l_2qlROqxzz", "rTtrkRXYduX", "bxo1oThbTI-", "WFy1BpBPDOe", "Yg9HRXQ4idG", "IU2MvaT4WI", "xwPWfbye_Pg" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the comprehensive reviews and thoughtful comments. We are delighted that all reviewers appreciated the novelty and the significance of the paper and gave very positive comments.\n\nWe are excited about the recognition that “the authors' proposed algorithm (as well as the theoretical analysis of it) ...
[ -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "nips_2022_7k_J2kkIy3U", "xwPWfbye_Pg", "IU2MvaT4WI", "Yg9HRXQ4idG", "nips_2022_7k_J2kkIy3U", "nips_2022_7k_J2kkIy3U", "nips_2022_7k_J2kkIy3U" ]
nips_2022_5VHK0q6Oo4M
Policy Gradient With Serial Markov Chain Reasoning
We introduce a new framework that performs decision-making in reinforcement learning (RL) as an iterative reasoning process. We model agent behavior as the steady-state distribution of a parameterized reasoning Markov chain (RMC), optimized with a new tractable estimate of the policy gradient. We perform action selection by simulating the RMC for enough reasoning steps to approach its steady-state distribution. We show our framework has several useful properties that are inherently missing from traditional RL. For instance, it allows agent behavior to approximate any continuous distribution over actions by parameterizing the RMC with a simple Gaussian transition function. Moreover, the number of reasoning steps to reach convergence can scale adaptively with the difficulty of each action selection decision and can be accelerated by re-using past solutions. Our resulting algorithm achieves state-of-the-art performance in popular Mujoco and DeepMind Control benchmarks, both for proprioceptive and pixel-based tasks.
Accept
The paper makes a solid algorithmic contribution to RL that is extensively evaluated empirically and is a certain accept.
train
[ "hu7Eamnl36z", "mppFwAaI-DJ", "rpL37MhpkL", "wv8ViyLQV1u", "baufvAiGceM", "rufT2LQyJz", "EIKi5Jpe5J8", "pm-xD-yzmTq", "Qtb7riRSkQ_", "C30NxwNL-DQ", "tlGXe32IjSn", "3VYo5BJudgf", "pxf8LI9a9D0", "_YcnJGePv1A", "p8ay7abbLQ", "dT3qTPug2Xs", "P7kg3qsXkOz", "hFCvLvNAmLs", "Yn25URGOoi0"...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to very much thank reviewer NxbF for providing constructive criticism and suggesting relevant new comparisons.", " Thank you for your very thorough response to my key points of confusion. I feel that many of my main concerns have now been addressed and I have raised my score accordingly. ", " We...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "mppFwAaI-DJ", "_YcnJGePv1A", "rufT2LQyJz", "baufvAiGceM", "tlGXe32IjSn", "pm-xD-yzmTq", "nips_2022_5VHK0q6Oo4M", "Qtb7riRSkQ_", "C30NxwNL-DQ", "LfBwxdNKKKJ", "3VYo5BJudgf", "Yn25URGOoi0", "hFCvLvNAmLs", "p8ay7abbLQ", "dT3qTPug2Xs", "P7kg3qsXkOz", "nips_2022_5VHK0q6Oo4M", "nips_202...
nips_2022_rQAJmrLmGC6
Towards Effective Multi-Modal Interchanges in Zero-Resource Sounding Object Localization
Aiming to locate the object that emits a specified sound in complex scenes, the task of sounding object localization bridges two perception-oriented modalities of vision and acoustics, and brings enormous research value to the comprehensive perceptual understanding of machine intelligence. Although massive amounts of training data have been collected in this field, few of them contain accurate bounding box annotations, hindering the learning process and further application of proposed models. In order to address this problem, we try to explore an effective multi-modal knowledge transfer strategy to obtain precise knowledge from other similar tasks and transfer it through well-aligned multi-modal data to deal with this task in a zero-resource manner. Concretely, we design and propose a novel \textit{Two-stream Universal Referring localization Network} (TURN), which is composed of a localization stream and an alignment stream to carry out different functions. The former is utilized to extract the knowledge related to referring object localization from the image grounding task, while the latter is devised to learn a universal semantic space shared between texts and audios. Moreover, we further develop an adaptive sampling strategy to automatically identify the overlap between different data domains, thus boosting the performance and stability of our model. The extensive experiments on various publicly-available benchmarks demonstrate that TURN can achieve competitive performance compared with the state-of-the-art approaches without using any data in this field, which verifies the feasibility of our proposed mechanisms and strategies.
Accept
This paper presents an effective transfer-based two-stream architecture for zero-resource sounding object localization, which is an interesting paper that will benefit research in zero-resource tasks in multimodal settings. The experiment results are pretty impressive, both quantitatively and qualitatively. The rebuttal successfully addressed some of the major concerns and, in the end, there is a general consensus about accepting the paper.
train
[ "zTbca_Je4iQ", "XHd5W0jv2UG", "rtgPR6448fT", "n9yB0a0VMfd", "5gFhKnFFfe", "pD0i8TG6ps", "bX8o0Rp_Kd", "dABVuIy1oO", "oPYEhKmNiyd", "A09kEdItabe", "aol1hI8Cl7N", "qqfo9BwV_jB", "Z4ThsIEG2NR", "L-k3gtFyBIrb", "odkq83oS-bG", "MzjZ-qEDzup", "WJB21zT7Zq", "o88oHIoP_W" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " If individual boxes are needed for each sound component in the scenario of mixed sounds, the source domain should be designed as a one-to-many (one query & multiple targets) localization task, or extra sound separation modules will be needed in our architecture. Because from the aspect of task definition, both im...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 5 ]
[ "XHd5W0jv2UG", "qqfo9BwV_jB", "WJB21zT7Zq", "o88oHIoP_W", "dABVuIy1oO", "bX8o0Rp_Kd", "aol1hI8Cl7N", "Z4ThsIEG2NR", "nips_2022_rQAJmrLmGC6", "WJB21zT7Zq", "MzjZ-qEDzup", "o88oHIoP_W", "odkq83oS-bG", "nips_2022_rQAJmrLmGC6", "nips_2022_rQAJmrLmGC6", "nips_2022_rQAJmrLmGC6", "nips_2022...
nips_2022_rTTh1RIn6E
Out-of-Distribution Detection via Conditional Kernel Independence Model
Recently, various methods have been introduced to address the OOD detection problem with training outlier exposure. These methods usually rely on a discriminative softmax metric or an energy-based method to screen OOD samples. In this paper, we probe an alternative hypothesis on OOD detection by constructing a novel latent variable model based on independent component analysis (ICA) techniques. This novel method named Conditional-i builds upon the probabilistic formulation, and applies the Hilbert-Schmidt Independence Criteria that offers a convenient solution for optimizing variable dependencies. Conditional-i exclusively encodes the useful class condition into the probabilistic model, which provides the desired convenience in delivering theoretical support for the OOD detection task. To facilitate the implementation of the Conditional-i model, we construct unique memory bank architectures that allow for convenient end-to-end training within a tractable budget. Empirical results demonstrate an evident performance boost on benchmarks against SOTA methods. We also provide valuable theoretical justifications that our training strategy is guaranteed to bound the error in the context of OOD detection. Code is available at: https://github.com/OODHSIC/conditional-i.
Accept
This paper proposes an out-of-distribution (OOD) detection method, where the features of in-distribution (ID) samples and those of the OOD samples are decorrelated in a class-wise manner by using HSIC. A theoretical guarantee for decorrelation is provided. The proposed method, Conditional-i, is evaluated on OOD detection benchmarks in image classification and NLP, and shows SOTA results. Reviewers had many major concerns, e.g., novelty from HOOD, missing baselines with OOD exposure, hyperparameter tuning, and insufficient theory to support the proposed method, which the authors addressed well, and most of the reviewers have been convinced. The paper is well-written and the mathematical notation seems fine for machine learners. However, the relation between Theorem 1 and Assumption 1 is weird. Any theorem must always hold (otherwise it's not a theorem), and the sentence "Theorem 1 holds if Assumption 1 holds" doesn't make sense. If the claim in Theorem 1 requires Assumption 1, Assumption 1 should appear first, and Theorem 1 should state that "under Assumption 1, it holds that ...".
val
[ "LHmOv_x70OS", "lH3Z4rlG55N", "jiVKmMKNe6j", "DG4vKBHX1V-", "cjN5-uaTrgk", "Y-lpRgysS07", "JJ1Geas1Oo", "7tEObLPex9Z", "YpBL9Bhvb6C", "pJA7oT1oEWX", "ZMo8aJxeU7", "T8iwifAf-_f", "D9qCylaFcDy", "eV0QmAjlJ4s", "B48eU7Wh-yt", "jCMNuzCUPgJ", "LwV3Fzv0hvF", "vreYMDBW4kZ", "tB_TM-84E-c...
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author"...
[ " Dear Reviewer 62WK,\n\nWe are glad that our answers were helpful! We really appreciate your kind engagement during this rebuttal process. \n\nBest Regards\n", " Thank the authors for the nice response. I am now convinced that conditional-i can be very different from HOOD as evidenced by the additional examples...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "lH3Z4rlG55N", "cjN5-uaTrgk", "Y-lpRgysS07", "JJ1Geas1Oo", "JJ1Geas1Oo", "pLgxH_O3bS", "YiIWrYJYAGg", "nips_2022_rTTh1RIn6E", "pJA7oT1oEWX", "D9qCylaFcDy", "ybbKnwzsOxZ", "ybbKnwzsOxZ", "ybbKnwzsOxZ", "Fuwn1yD8Pl3", "Fuwn1yD8Pl3", "Fuwn1yD8Pl3", "Fuwn1yD8Pl3", "HrQ318OFFM8", "HrQ...
nips_2022_oDRQGo8I7P
Riemannian Score-Based Generative Modelling
Score-based generative models (SGMs) are a powerful class of generative models that exhibit remarkable empirical performance. Score-based generative modelling (SGM) consists of a ``noising'' stage, whereby a diffusion is used to gradually add Gaussian noise to data, and a generative model, which entails a ``denoising'' process defined by approximating the time-reversal of the diffusion. Existing SGMs assume that data is supported on a Euclidean space, i.e. a manifold with flat geometry. In many domains such as robotics, geoscience or protein modelling, data is often naturally described by distributions living on Riemannian manifolds and current SGM techniques are not appropriate. We introduce here \emph{Riemannian Score-based Generative Models} (RSGMs), a class of generative models extending SGMs to Riemannian manifolds. We demonstrate our approach on a variety of compact manifolds, and in particular with earth and climate science spherical data.
Accept
The manuscript generalizes score-based generative models to compact Riemannian manifolds. All referees agree that the method is novel, technically sound, and gives nice results. I recommend the paper to be accepted to the conference.
train
[ "9iJveTzXcj", "eQkTrqO752A", "b-2wtp8pgZE", "Ry6Ru-g-hkL", "Bv0PB9lugVo", "XIvsLjKbyXT", "D51DEtOojT", "uNDq64qE7Gr", "KW4gmLkaYYaT", "V5PKEMqsdZt", "0xajEfylCGW", "r44uVP1qNKx", "Rk8TUcNLHtR" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear author, I appreciate the clarification and I understand your justification for the baseline model comparison. I think your paper is very strong and promising, thus I will raise my score.", " Dear Reviewer,\nAs the discussion period is closing we would like to check whether you had a chance to look at our r...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 9, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 2 ]
[ "eQkTrqO752A", "D51DEtOojT", "Ry6Ru-g-hkL", "Rk8TUcNLHtR", "XIvsLjKbyXT", "r44uVP1qNKx", "0xajEfylCGW", "V5PKEMqsdZt", "nips_2022_oDRQGo8I7P", "nips_2022_oDRQGo8I7P", "nips_2022_oDRQGo8I7P", "nips_2022_oDRQGo8I7P", "nips_2022_oDRQGo8I7P" ]
nips_2022_-Qp-3L-5ZdI
Gradient Descent: The Ultimate Optimizer
Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as its step size. Recent work has shown how the step size can itself be optimized alongside the model parameters by manually deriving expressions for "hypergradients" ahead of time. We show how to *automatically* compute hypergradients with a simple and elegant modification to backpropagation. This allows us to easily apply the method to other optimizers and hyperparameters (e.g. momentum coefficients). We can even recursively apply the method to its own *hyper*-hyperparameters, and so on ad infinitum. As these towers of optimizers grow taller, they become less sensitive to the initial choice of hyperparameters. We present experiments validating this for MLPs, CNNs, and RNNs. Finally, we provide a simple PyTorch implementation of this algorithm (see http://people.csail.mit.edu/kach/gradient-descent-the-ultimate-optimizer).
Accept
All reviewers recommend accepting the paper. Many years ago I experimented with this type of approach, for a single layer of hyper-parameters, and came to the conclusion that setting the hyper-hyper-parameters was just as difficult as setting the parameters of the optimization algorithm. I am shocked that this changes as you increase the number of layers, and I think others would be too. If this approach genuinely reduces hyper-parameter tuning in such a simple way, it should be a spotlight or oral presentation.
train
[ "vk8Dfu2piD-", "jKY8YtwjDXch", "Hs1mHGOh19", "GzO6BeyW5p5", "HbrBh-_o-Yd", "IuYodweoG--", "KQhhNelvS-N", "Dmgv4K5VgJf", "y1G0IDrGdcf" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors addressed some of my concerns and promised to address others. I am also impressed by the amount of effort in the author's response. I updated my review and increase the score accordingly.", " Thank you for the detailed response and effort. ", " Thank you for your thoughtful review of our work. We ...
[ -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "Hs1mHGOh19", "HbrBh-_o-Yd", "y1G0IDrGdcf", "Dmgv4K5VgJf", "KQhhNelvS-N", "nips_2022_-Qp-3L-5ZdI", "nips_2022_-Qp-3L-5ZdI", "nips_2022_-Qp-3L-5ZdI", "nips_2022_-Qp-3L-5ZdI" ]
nips_2022_Sxf5k90HnvM
Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification
Recent years have seen a growth in user-centric applications that require effective knowledge transfer across tasks in the low-data regime. An example is personalization, where a pretrained system is adapted by learning on small amounts of labeled data belonging to a specific user. This setting requires high accuracy under low computational complexity, therefore the Pareto frontier of accuracy vs. adaptation cost plays a crucial role. In this paper we push this Pareto frontier in the few-shot image classification setting with a key contribution: a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance with a single forward pass of the user data (context). We use meta-trained CaSE blocks to conditionally adapt the body of a network and a fine-tuning routine to adapt a linear head, defining a method called UpperCaSE. UpperCaSE achieves a new state-of-the-art accuracy relative to meta-learners on the 26 datasets of VTAB+MD and on a challenging real-world personalization benchmark (ORBIT), narrowing the gap with leading fine-tuning methods with the benefit of orders of magnitude lower adaptation cost.
Accept
The reviewers consider the work technically solid, but were concerned about the contextualization of this work in the literature. Post-rebuttal, some of the reviewers' concerns were resolved.
train
[ "fIG_5id-MiN", "snbPIYn7bzv", "VUJYEbtp7td", "4jjlVNVtGKC", "mrZsp5DdnAi", "ZnTNOKJoqm4", "xuZyliR7uS", "qxcBwhmBNkR", "XALQeHchYT", "DxhCn_0pTKN", "wmY4oAzJJE", "-Au4nQKfcKi", "Kv0v-qWhXhf", "tzdObIFRHzG", "ab9yx6Xji3G", "qmRlpr1r2vA", "R6vohlIxzg7", "nK8_AIBbX1", "SuYESltz5Dj",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ " We thank the reviewer for engaging in the rebuttal session and for the useful feedback. We are glad to know that most of the questions have been solved. We have different views on some aspects but overall the confrontation has improved the quality and clarity of the paper.\n\n", " Actually the claim is not so c...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "snbPIYn7bzv", "xuZyliR7uS", "4jjlVNVtGKC", "ZnTNOKJoqm4", "nips_2022_Sxf5k90HnvM", "DxhCn_0pTKN", "qxcBwhmBNkR", "XALQeHchYT", "-Au4nQKfcKi", "SuYESltz5Dj", "nips_2022_Sxf5k90HnvM", "c6fqIOKGfrq", "7lNVTUig2h1", "ab9yx6Xji3G", "qmRlpr1r2vA", "SuYESltz5Dj", "nips_2022_Sxf5k90HnvM", ...
nips_2022_-Xdts90bWZ3
Perturbation Learning Based Anomaly Detection
This paper presents a simple yet effective method for anomaly detection. The main idea is to learn small perturbations to perturb normal data and learn a classifier to classify the normal data and the perturbed data into two different classes. The perturbator and classifier are jointly learned using deep neural networks. Importantly, the perturbations should be as small as possible but the classifier is still able to recognize the perturbed data from unperturbed data. Therefore, the perturbed data are regarded as abnormal data and the classifier provides a decision boundary between the normal data and abnormal data, although the training data do not include any abnormal data. Compared with the state-of-the-art of anomaly detection, our method does not require any assumption about the shape (e.g. hypersphere) of the decision boundary and has fewer hyper-parameters to determine. Empirical studies on benchmark datasets verify the effectiveness and superiority of our method.
Accept
This paper proposes an anomaly detection approach that learns perturbations directly from the normal training data. Although the reviewers are concerned about the novelty and depth of the approach, they appreciate the comprehensive experimental results. Therefore, I recommend acceptance for the paper.
train
[ "RZ4qHpHj0Rc", "t-FBah0Jc2", "sJN4LLa1tic", "-FaD8Tpbhdk", "71nxCdB_7o-", "Meu6H7gdMl1", "ozVpLyHl9mP", "n84J0px7NY", "Gmj985Zrpj", "xq9myggJgS9" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi Reviewer,\n\nAs the author-reviewer discussion phase closes today, we'd like to know whether we have addressed your concerns or not. If yes, could you please modify the rating appropriately? If not, please let us know the issues and we will take advantages of the remaining hours to provide more explanations. T...
[ -1, -1, -1, -1, -1, -1, -1, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "n84J0px7NY", "71nxCdB_7o-", "-FaD8Tpbhdk", "xq9myggJgS9", "Gmj985Zrpj", "ozVpLyHl9mP", "n84J0px7NY", "nips_2022_-Xdts90bWZ3", "nips_2022_-Xdts90bWZ3", "nips_2022_-Xdts90bWZ3" ]
nips_2022_apC354ZsGwK
Incorporating Bias-aware Margins into Contrastive Loss for Collaborative Filtering
Collaborative filtering (CF) models easily suffer from popularity bias, which makes recommendation deviate from users’ actual preferences. However, most current debiasing strategies are prone to playing a trade-off game between head and tail performance, thus inevitably degrading the overall recommendation accuracy. To reduce the negative impact of popularity bias on CF models, we incorporate Bias-aware margins into Contrastive loss and propose a simple yet effective BC Loss, where the margin tailors quantitatively to the bias degree of each user-item interaction. We investigate the geometric interpretation of BC loss, then further visualize and theoretically prove that it simultaneously learns better head and tail representations by encouraging the compactness of similar users/items and enlarging the dispersion of dissimilar users/items. Over six benchmark datasets, we use BC loss to optimize two high-performing CF models. In various evaluation settings (i.e., imbalanced/balanced, temporal split, fully-observed unbiased, tail/head test evaluations), BC loss outperforms the state-of-the-art debiasing and non-debiasing methods with remarkable improvements. Considering the theoretical guarantee and empirical success of BC loss, we advocate using it not just as a debiasing strategy, but also as a standard loss in recommender models. Codes are available at https://github.com/anzhang314/BC-Loss.
Accept
This paper studies the popularity bias of collaborative filtering-based recommendation systems. Specifically, this paper introduces a bias-aware margin into the contrastive loss, resulting in a modified BC loss to remedy the problem. Geometric interpretation and experimental results are provided to validate the effectiveness of the proposed method. This paper received mixed review comments. One reviewer raised concerns about the technical novelty and contribution of the proposed method, mainly due to the simplicity of the proposed solution. Considering that the simple solution is well supported by theoretical justification and empirical results, I lean toward accepting this paper.
test
[ "JJxxnBI9-Zx", "MASanOMWx2s", "eM2hXBwvJrb", "FEWkp_D1Dzq", "EkcfuzuIWYK", "Rl6gKCf0-2w", "0EhV1gAV-ui", "biHN42sQYJ7", "WMuDEnGf4kj", "i_VmiiQpJ2e", "T7G1FzN3vA_x", "x0I_C8OOg-F", "mJNHKkp3F1h", "3bieGULTxqW", "hYDY2UlK2wnJ", "xIcA8n-pXLx", "MHr7zsPg84z", "UjOtjI8oyL", "QmsBJonI...
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer again for your comments. This is a gentle reminder for the end of the discussion session. We have updated our submission and posted point-to-point responses to your comments. We would be grateful if you could confirm whether our responses have addressed your concerns. We hope that the rebutt...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "QmsBJonInY5", "FEWkp_D1Dzq", "FEWkp_D1Dzq", "3bieGULTxqW", "Rl6gKCf0-2w", "0EhV1gAV-ui", "mJNHKkp3F1h", "nips_2022_apC354ZsGwK", "vWNgPhYjoWq", "QmsBJonInY5", "nips_2022_apC354ZsGwK", "mJNHKkp3F1h", "xRDCFts1Mpz", "hYDY2UlK2wnJ", "vWNgPhYjoWq", "MHr7zsPg84z", "UjOtjI8oyL", "QmsBJo...
nips_2022_x-i37an3uym
Module-Aware Optimization for Auxiliary Learning
Auxiliary learning is a widely adopted practice in deep learning, which aims to improve the model performance on the primary task by exploiting the beneficial information in the auxiliary loss. Existing auxiliary learning methods only focus on balancing the auxiliary loss and the primary loss, ignoring the module-level auxiliary influence, i.e., an auxiliary loss will be beneficial for optimizing specific modules within the model but harmful to others, failing to make full use of auxiliary information. To tackle the problem, we propose a Module-Aware Optimization approach for Auxiliary Learning (MAOAL). The proposed approach considers the module-level influence through the learnable module-level auxiliary importance, i.e., the importance of each auxiliary loss to each module. Specifically, the proposed approach jointly optimizes the module-level auxiliary importance and the model parameters in a bi-level manner. In the lower optimization, the model parameters are optimized with the importance parameterized gradient, while in the upper optimization, the module-level auxiliary importance is updated with the implicit gradient from a small developing dataset. Extensive experiments show that our proposed MAOAL method consistently outperforms state-of-the-art baselines for different auxiliary losses on various datasets, demonstrating that our method can serve as a powerful generic tool for auxiliary learning.
Accept
The paper investigates module-aware optimization for auxiliary learning, which adjusts the impact of the auxiliary tasks (through their losses) at the module level instead of treating the model as a whole. The motivation is based on the observation that a certain auxiliary loss may be beneficial for optimizing specific modules in a model but harmful to others at the same time. A bi-level formulation is employed as a natural solution, where the inner loop optimizes the model parameters and the outer loop optimizes the module-level auxiliary importance. Experiments conducted using different auxiliary losses on diverse datasets show the advantage of the proposed approach over existing baselines. Overall, the paper is well written and the problem is clearly motivated. The solution is technically sound and the experimental results show the effectiveness of the proposed module-aware optimization approach. The authors have conducted additional experiments based on the reviewers' suggestions, which further enhanced the evaluation side of the paper. On the other hand, the overall novelty does not appear to be very strong. Treating the importance of each auxiliary loss to each module as hyper-parameters and then using a bi-level optimization to optimize both the model parameters and the importance parameters is a very straightforward idea. Reviewers also pointed out some important related works on learning adaptive weights over auxiliary tasks via bi-level optimization, adapting auxiliary task weights on a per-module basis, and layer-wise weight adaptation. The authors are encouraged to cite these works and highlight the core technical differences.
train
[ "wZwyqfZZlVt", "AZrAoD_uFsR", "xzHuBHJXEJm", "KbJsRcZiUMg", "Y42O3hHdJ2N", "AXzH3FbMcu", "5WMqRAgrC1p", "MdR3Rww_TXM", "QK3uC5e7FtU", "LbSDKG_hLn", "LyfkALpBQA", "tgKy6vXssED", "g1QgAH1Aq7d", "Wjv9UKjwrNy", "v0hD55mbEPd", "ZPFe8_SafK", "2aUwXDNUNJy", "s6NxO-o7Rg_" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the responses. I believe that those address my concerns raised in the review. Nice work!", " Thank the authors for their rebuttal.\n\nMost of my concerns have now been resolved. I would like to keep my original rating as my final rating.", " Dear Reviewer 94wP,\n\nAs the author-reviewer discussi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 4 ]
[ "g1QgAH1Aq7d", "xzHuBHJXEJm", "2aUwXDNUNJy", "ZPFe8_SafK", "v0hD55mbEPd", "5WMqRAgrC1p", "LyfkALpBQA", "nips_2022_x-i37an3uym", "s6NxO-o7Rg_", "s6NxO-o7Rg_", "s6NxO-o7Rg_", "2aUwXDNUNJy", "ZPFe8_SafK", "v0hD55mbEPd", "nips_2022_x-i37an3uym", "nips_2022_x-i37an3uym", "nips_2022_x-i37a...
nips_2022_Av8b0vxN7MX
A Classification of $G$-invariant Shallow Neural Networks
When trying to fit a deep neural network (DNN) to a $G$-invariant target function with $G$ a group, it only makes sense to constrain the DNN to be $G$-invariant as well. However, there can be many different ways to do this, thus raising the problem of ``$G$-invariant neural architecture design'': What is the optimal $G$-invariant architecture for a given problem? Before we can consider the optimization problem itself, we must understand the search space, the architectures in it, and how they relate to one another. In this paper, we take a first step towards this goal; we prove a theorem that gives a classification of all $G$-invariant single-hidden-layer or ``shallow'' neural network ($G$-SNN) architectures with ReLU activation for any finite orthogonal group $G$, and we prove a second theorem that characterizes the inclusion maps or ``network morphisms'' between the architectures that can be leveraged during neural architecture search (NAS). The proof is based on a correspondence of every $G$-SNN to a signed permutation representation of $G$ acting on the hidden neurons; the classification is equivalently given in terms of the first cohomology classes of $G$, thus admitting a topological interpretation. The $G$-SNN architectures corresponding to nontrivial cohomology classes have, to our knowledge, never been explicitly identified in the literature previously. Using a code implementation, we enumerate the $G$-SNN architectures for some example groups $G$ and visualize their structure. Finally, we prove that architectures corresponding to inequivalent cohomology classes coincide in function space only when their weight matrices are zero, and we discuss the implications of this for NAS.
Accept
Three reviewers gave quite positive ratings and comments on the paper. Another reviewer gave lower ratings. The AC thinks the reviewer with a low rating raised quite reasonable questions, but found that, overall, the pros outweigh the cons (which seem to mostly stem from unclarity in the writing). The AC recommends acceptance but also encourages the authors to incorporate the reviews and discussions into the final version.
val
[ "eoEWiZpENUY", "tB4BR7fTWSy", "JaPlOmf3bnn", "Zo3AXsnwpwP", "raasblaBRgj", "lJ7LwL-isSl", "KiCk9BzPoV", "9JsdaxnRfY", "YwwFLaaiBTt", "PBcauxbsLLn", "YOavZncPb4", "3xpBo8bhDPa", "tS3tF3HpkUN", "eVG4a7Wo3BH", "_nS6CHP89PR", "ybsQUc9hYgM", "s61yb-QiPE2", "a5Op7-ejwiO", "lgquMrzck27"...
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_...
[ " **3)** In the end, it is simply taking the action on the feature space and splitting it to orbits. I think you could get this result in a much simpler fashion.\n\n**Response:** The question is, what reps can we use for this action? The answer is not just perm-reps but **signed** perm-reps, which is one result of ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 2 ]
[ "tB4BR7fTWSy", "JaPlOmf3bnn", "Zo3AXsnwpwP", "raasblaBRgj", "lJ7LwL-isSl", "KiCk9BzPoV", "9JsdaxnRfY", "YwwFLaaiBTt", "PBcauxbsLLn", "YOavZncPb4", "3zBrETKKCl", "tS3tF3HpkUN", "i6KdeA98NDt", "_nS6CHP89PR", "lgquMrzck27", "s61yb-QiPE2", "a5Op7-ejwiO", "nips_2022_Av8b0vxN7MX", "nip...
nips_2022_XFCirHGr4Cs
Improved Utility Analysis of Private CountSketch
Sketching is an important tool for dealing with high-dimensional vectors that are sparse (or well-approximated by a sparse vector), especially useful in distributed, parallel, and streaming settings. It is known that sketches can be made differentially private by adding noise according to the sensitivity of the sketch, and this has been used in private analytics and federated learning settings. The post-processing property of differential privacy implies that \emph{all} estimates computed from the sketch can be released within the given privacy budget. In this paper we consider the classical CountSketch, made differentially private with the Gaussian mechanism, and give an improved analysis of its estimation error. Perhaps surprisingly, the privacy-utility trade-off is essentially the best one could hope for, independent of the number of repetitions in CountSketch: The error is almost identical to the error from non-private CountSketch plus the noise needed to make the vector private in the original, high-dimensional domain.
Accept
The paper presents an improved analysis for a differentially private variant of CountSketch, leveraging concentration bounds regarding the median of symmetric variables. The reviewers found the new analysis interesting and the new results significant.
train
[ "-27taLFxag", "P2Ovspfr8m", "JnBl305YqTk", "RR1tYOnPk-m5", "JXydNvUIzK2", "9qvzMHMyXJu", "zld5GqRwTia", "gCfvyXMJcPv", "1HsSJpPlsV", "WpDNV29ceS0" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer (CnGL),\n\nJust want to add that our main contribution is to discover the \"perfect\" match between Gaussian noise and the median of the symmetric estimators as in CountSketch, e.g., previous authors had looked at privacy for CountMinSketch, and even those who considered privacy of CountSketch had n...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "P2Ovspfr8m", "JnBl305YqTk", "RR1tYOnPk-m5", "9qvzMHMyXJu", "WpDNV29ceS0", "1HsSJpPlsV", "gCfvyXMJcPv", "nips_2022_XFCirHGr4Cs", "nips_2022_XFCirHGr4Cs", "nips_2022_XFCirHGr4Cs" ]
nips_2022_x3JsaghSj0v
Hierarchical Graph Transformer with Adaptive Node Sampling
The Transformer architecture has achieved remarkable success in a number of domains including natural language processing and computer vision. However, when it comes to graph-structured data, transformers have not achieved competitive performance, especially on large graphs. In this paper, we identify the main deficiencies of current graph transformers: (1) Existing node sampling strategies in Graph Transformers are agnostic to the graph characteristics and the training process. (2) Most sampling strategies only focus on local neighbors and neglect the long-range dependencies in the graph. We conduct experimental investigations on synthetic datasets to show that existing sampling strategies are sub-optimal. To tackle the aforementioned problems, we formulate the optimization strategies of node sampling in Graph Transformer as an adversary bandit problem, where the rewards are related to the attention weights and can vary in the training procedure. Meanwhile, we propose a hierarchical attention scheme with graph coarsening to capture the long-range interactions while reducing computational complexity. Finally, we conduct extensive experiments on real-world datasets to demonstrate the superiority of our method over existing graph transformers and popular GNNs.
Accept
The paper proposes to account for graph properties when performing node sampling. The proposed method learns effective sampling by adjusting the weights of a set of sampling schemes, based on an adversary bandit formulation. Besides, a hierarchical attention scheme for Graph Transformer is presented with graph coarsening algorithms. Overall the algorithm is sound and effective, as shown empirically on a range of benchmarks. The paper is well-written. The reviewers asked for comparisons with other methods on more datasets, which the authors have partially addressed in their responses.
train
[ "YbLVadwznR", "yhYGa8N5u7", "vm5CiYq8-MI", "ddnGQ6z0rZ", "yFmOm4OLwTy", "hrbGh-8SQC", "T7YIh4UvuEI", "8ylsEyyOF4w", "BlQEs0LrLVM", "iJTDmBNNyLk", "hA89PVYiNi", "NaTrOWLTYLf", "JMxUFYRsYl9", "64PVFIp90h" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers, thanks for your appreciation and valuable suggestions. We are happy to know that most of your concerns have been addressed. We have included the answers and new results in our newly revised paper. We will keep polishing our paper to make the final version more comprehensive and solid.", " Thanks...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 3 ]
[ "nips_2022_x3JsaghSj0v", "iJTDmBNNyLk", "hrbGh-8SQC", "T7YIh4UvuEI", "nips_2022_x3JsaghSj0v", "64PVFIp90h", "8ylsEyyOF4w", "JMxUFYRsYl9", "NaTrOWLTYLf", "hA89PVYiNi", "nips_2022_x3JsaghSj0v", "nips_2022_x3JsaghSj0v", "nips_2022_x3JsaghSj0v", "nips_2022_x3JsaghSj0v" ]
nips_2022_5MgZAu2NR7X
Self-Supervised Learning with an Information Maximization Criterion
Self-supervised learning allows AI systems to learn effective representations from large amounts of data using tasks that do not require costly labeling. Mode collapse, i.e., the model producing identical representations for all inputs, is a central problem to many self-supervised learning approaches, making self-supervised tasks, such as matching distorted variants of the inputs, ineffective. In this article, we argue that a straightforward application of information maximization among alternative latent representations of the same input naturally solves the collapse problem and achieves competitive empirical results. We propose a self-supervised learning method, CorInfoMax, that uses a second-order statistics-based mutual information measure that reflects the level of correlation among its arguments. Maximizing this correlative information measure between alternative representations of the same input serves two purposes: (1) it avoids the collapse problem by generating feature vectors with non-degenerate covariances; (2) it establishes relevance among alternative representations by increasing the linear dependence among them. An approximation of the proposed information maximization objective simplifies to a Euclidean distance-based objective function regularized by the log-determinant of the feature covariance matrix. The regularization term acts as a natural barrier against feature space degeneracy. Consequently, beyond avoiding complete output collapse to a single point, the proposed approach also prevents dimensional collapse by encouraging the spread of information across the whole feature space. Numerical experiments demonstrate that CorInfoMax achieves better or competitive performance results relative to the state-of-the-art SSL approaches.
Accept
The paper describes a self-supervised learning method based on an information maximization criterion that naturally prevents dimensional collapse. The authors consider the Shannon mutual information under the assumption that the data is Gaussian. A first-order approximation to the log-determinant of the sum of two matrices is used to simplify the final objective. Experiments on 4 image datasets show that the proposed approach gives better results than contrastive and non-contrastive methods. Strengths: 1 - The paper is well written and easy to follow. 2 - The paper is theoretically grounded on a correlative information measure of representation. 3 - Strong results on some downstream classification problems. 4 - Initially the experiments included only one downstream classification task, but the paper has been updated to also include results for object segmentation and detection tasks. 5 - Novel and well motivated. 6 - State-of-the-art SSL performance. Weaknesses: - Some weaknesses are pointed out by reviewer GZwK, but these are not well justified. Decision: A majority of reviewers vote for acceptance. The only reviewer voting slightly towards rejection is GZwK, whose reasoning is not well justified. For example, the main criticisms mentioned by reviewer GZwK - that the paper directly generalizes the earlier proposed log-determinant mutual information to the field of self-supervised learning, and that the paper does not give a deep-going analysis of why second-order statistics can play an important role in self-supervised learning - are not mentioned by any of the other reviewers. Because of this, I have decided to accept the paper.
train
[ "5nZ9-MXSLe3", "nJoFXpTor4", "u3HfQ-4JVZi", "uFj2jAbzq7J", "4-btV-SqwwW", "cwjM04auOId", "2FP-axD4ad", "aUhLIJUB9kb", "bMnRcBlNlIS", "Jp4BapYy0jF", "-8khOpBCghh", "53W8-h_CfZX", "iMwiVf56Gny", "yIjOd57EBaf", "zGduO2pn7hY", "TErFwzgFeMt" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank you for your useful feedback and constructive suggestions.", " I appreciate the efforts the authors have put in the rebuttal. They have added explanations, clarifications and new experiments, which I believe to improve the overall understandability and the main contribution of the paper. ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "nJoFXpTor4", "Jp4BapYy0jF", "nips_2022_5MgZAu2NR7X", "nips_2022_5MgZAu2NR7X", "TErFwzgFeMt", "TErFwzgFeMt", "TErFwzgFeMt", "TErFwzgFeMt", "yIjOd57EBaf", "yIjOd57EBaf", "zGduO2pn7hY", "iMwiVf56Gny", "nips_2022_5MgZAu2NR7X", "nips_2022_5MgZAu2NR7X", "nips_2022_5MgZAu2NR7X", "nips_2022_5...
nips_2022_fyIjM5CEdYW
Transcormer: Transformer for Sentence Scoring with Sliding Language Modeling
Sentence scoring aims at measuring the likelihood score of a sentence and is widely used in many natural language processing scenarios, like reranking, which is to select the best sentence from multiple candidates. Previous works on sentence scoring mainly adopted either causal language modeling (CLM) like GPT or masked language modeling (MLM) like BERT, which have some limitations: 1) CLM only utilizes unidirectional information for the probability estimation of a sentence without considering bidirectional context, which affects the scoring quality; 2) MLM can only estimate the probability of partial tokens at a time and thus requires multiple forward passes to estimate the probability of the whole sentence, which incurs large computation and time cost. In this paper, we propose \textit{Transcormer} -- a Transformer model with a novel \textit{sliding language modeling} (SLM) for sentence scoring. Specifically, our SLM adopts a triple-stream self-attention mechanism to estimate the probability of all tokens in a sentence with bidirectional context and only requires a single forward pass. SLM can avoid the limitations of CLM (only unidirectional context) and MLM (multiple forward passes) and inherit their advantages, and thus achieve high effectiveness and efficiency in scoring. Experimental results on multiple tasks demonstrate that our method achieves better performance than other language modelings.
Accept
This paper proposes a transformer architectural variant. The motivation of this variant is that it contains a sliding language modeling objective for sentence scoring, that enables it to look at all the tokens in a sentence using bidirectional context in a single pass. This solves the issue of using standard causal language modeling or masked language models, which respectively have limitations of using only unidirectional context and requiring multiple forward passes. The empirical gains in terms of quality are modest, but the speed-ups are quite impressive. It would have been nice to see evaluations on a few more tasks (like SuperGLUE). It is a bit unclear why such results are not presented, but on balance the paper is probably still above the cut-off.
train
[ "FA0xsDSyLWD", "DfCyKDnKU_4", "68Y2gmMxk8O", "6gEXQAmVUDF", "0mt5b7kgVgH", "a8dXNNhQlpq", "fUuCrVRsHX", "n-AGej95xY", "_yqoTgP83M", "3u885-nbmw", "uDpp4-pxXgD", "RpVLlXV3cjP", "qjs5QCoba1v" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the author for the improvements and further evaluations.\n\n**I raise my score from 4/10 to 6/10.** The work is thorough and introduces an architectural innovation to more richly model $P(y_s|y_{\\backslash s})$ in one pass. Unlike the paper originally claimed, it is not the first such model (T-TA and E...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "a8dXNNhQlpq", "n-AGej95xY", "nips_2022_fyIjM5CEdYW", "qjs5QCoba1v", "qjs5QCoba1v", "qjs5QCoba1v", "uDpp4-pxXgD", "uDpp4-pxXgD", "RpVLlXV3cjP", "nips_2022_fyIjM5CEdYW", "nips_2022_fyIjM5CEdYW", "nips_2022_fyIjM5CEdYW", "nips_2022_fyIjM5CEdYW" ]
nips_2022_skgJy0CjAO
LogiGAN: Learning Logical Reasoning via Adversarial Pre-training
We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models. Upon automatic identification of logical reasoning phenomena in massive text corpus via detection heuristics, we train language models to predict the masked-out logical statements. Inspired by the facilitation effect of reflective thinking in human learning, we analogically simulate the learning-thinking process with an adversarial Generator-Verifier architecture to assist logic learning. LogiGAN implements a novel sequential GAN approach that (a) circumvents the non-differentiable challenge of the sequential GAN by leveraging the Generator as a sentence-level generative likelihood scorer with a learning objective of reaching scoring consensus with the Verifier; (b) is computationally feasible for large-scale pre-training with arbitrary target length. Both base and large size language models pre-trained with LogiGAN demonstrate obvious performance improvement on 12 datasets requiring general reasoning abilities, revealing the fundamental role of logic in broad reasoning, as well as the effectiveness of LogiGAN. Ablation studies on LogiGAN components reveal the relative orthogonality between linguistic and logic abilities and suggest that reflective thinking's facilitation effect might also generalize to machine learning.
Accept
This paper presents a pretraining approach for integrating logical reasoning into language models. The authors develop an adversarial training method where they minimize the difference between the verifier scores and generator scores for logically inconsistent statements, thereby making the model less likely to generate sequences the verifier identifies as logically inconsistent. All the reviewers were positive (to varying degrees) about the contributions of this paper, specifically highlighting the clarity of the method, as well as the empirical rigor and results. Weaknesses pointed out by reviewers included the underspecification of the different types of logical relations and the weakness of the pseudo-statement sampling procedure, which doesn't seem to explicitly retrieve logically inconsistent statements as negative examples. Despite these questions, the method is well motivated and clear, and the empirical results speak for themselves. This paper represents a solid piece of work and I'm inclined to see it accepted.
val
[ "aDqNFomK5u", "Qj_T_hF0pg", "4YtUCNqd1y5", "r8kFL1BtssPy", "OiNZl4hkw76", "mxE8nUTGUqF", "2xUN_fm0qds", "v25wZ5dJ88_", "pyinWqZ72hC", "zripqZxaxL2", "HWCprmqgHNd" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarifications from the authors.\n\nWeakness #1 (novelty): The authors clarify the novelty of logic-oriented MLM and adversarial mechanism. But I think it's not contradictory to my original review. Since task-oriented MLM and GANs have been deeply studied in the NLP community, the authors successfu...
[ -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 1, 4 ]
[ "r8kFL1BtssPy", "4YtUCNqd1y5", "OiNZl4hkw76", "v25wZ5dJ88_", "HWCprmqgHNd", "pyinWqZ72hC", "zripqZxaxL2", "nips_2022_skgJy0CjAO", "nips_2022_skgJy0CjAO", "nips_2022_skgJy0CjAO", "nips_2022_skgJy0CjAO" ]
nips_2022_byMcacS8GYZ
Deep Combinatorial Aggregation
Neural networks are known to produce poor uncertainty estimations, and a variety of approaches have been proposed to remedy this issue. This includes deep ensemble, a simple and effective method that achieves state-of-the-art results for uncertainty-aware learning tasks. In this work, we explore a combinatorial generalization of deep ensemble called deep combinatorial aggregation (DCA). DCA creates multiple instances of network components and aggregates their combinations to produce diversified model proposals and predictions. DCA components can be defined at different levels of granularity. And we discovered that coarse-grain DCAs can outperform deep ensemble for uncertainty-aware learning both in terms of predictive performance and uncertainty estimation. For fine-grain DCAs, we discover that an average parameterization approach named deep combinatorial weight averaging (DCWA) can improve the baseline training. It is on par with stochastic weight averaging (SWA) but does not require any custom training schedule or adaptation of BatchNorm layers. Furthermore, we propose a consistency enforcing loss that helps the training of DCWA and modelwise DCA. We experiment on in-domain, distributional shift, and out-of-distribution image classification tasks, and empirically confirm the effectiveness of DCWA and DCA approaches.
Accept
The work proposes an ensembling technique for improved uncertainty and OOD predictive performance. The idea is to use multiple permutations of a certain component (e.g., layers, residual blocks) for both training and testing. Like efficient ensembles such as BatchEnsemble, most parameters are shared across ensemble members and a component is used multiple times depending on its selection for a given permutation in the combinatorial space. Overall, this results in an ensemble over a wider variety of networks, with a higher diversity than if one used completely independent networks for each ensemble member. There are further variants for efficiency such as "Deep combinatorial weight averaging". The idea is conceptually simple and only requires the addition of a training objective to enforce consistency. The experiments are also quite thorough. I agree with two of the three reviewers, disagreeing with Reviewer tdmd. One of the main concerns is that while the method outperforms baselines like MC Dropout, there's a lack of a theoretical explanation. I believe this can be helpful but is not a necessity, particularly when theory for efficient ensembles of neural networks is quite a difficult setting to study formally without sweeping assumptions. Other concerns like efficiency/complexity compared to more efficient methods like MC Dropout, SWA, and BatchEnsemble; and training time and convergence difficulties are adequately addressed in the rebuttal. In related work, I would recommend discussing recent works that have also examined improving the diversity of ensembles for improved uncertainty/robustness, particularly those that have proposed a diversity penalty. To name a few: [1], [2], [3]. [1] A Diversity-Penalizing Ensemble Training Method for Deep Learning. Wentao Zhang, Jiawei Jiang, Yingxia Shao, Bin Cui https://arxiv.org/abs/2112.13316 [2] Hyperparameter Ensembles for Robustness and Uncertainty Quantification. 
Florian Wenzel, Jasper Snoek, Dustin Tran, Rodolphe Jenatton https://arxiv.org/abs/2006.13570 [3] Learning under Model Misspecification: Applications to Variational and Ensemble methods. Andres R. Masegosa https://arxiv.org/abs/1912.08335
train
[ "vW7PkSZDOm", "qSIOOWI-QCA", "z6Kj0a-rMA", "1l4uIbyTgpx", "hkVnZAXrPy", "iFd8VtMlW0D", "G3jFuxeceWu", "qioOLMFaBr", "1xJcaEgvUXl", "2ywVxqcqTd", "58Ctg7ZzHZz", "XFqeQD2AkoR", "31OYc9RRMf3" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response, I would like to know what are the advantages of the proposed method compared to Deep Ensemble or other ensemble methods?", " > 1. In principle, this method is very similar to MCDropout. It can be seen as dropping a part of the network in a larger network. MCdropout can be seen as dro...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "qSIOOWI-QCA", "z6Kj0a-rMA", "qioOLMFaBr", "hkVnZAXrPy", "iFd8VtMlW0D", "G3jFuxeceWu", "1xJcaEgvUXl", "31OYc9RRMf3", "XFqeQD2AkoR", "58Ctg7ZzHZz", "nips_2022_byMcacS8GYZ", "nips_2022_byMcacS8GYZ", "nips_2022_byMcacS8GYZ" ]
nips_2022_W-_4hgRkwb
Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering
Deploying models on target domain data subject to distribution shift requires adaptation. Test-time training (TTT) emerges as a solution to this adaptation under a realistic scenario where access to full source domain data is not available and instant inference on target domain is required. Despite many efforts into TTT, there is a confusion over the experimental settings, thus leading to unfair comparisons. In this work, we first revisit TTT assumptions and categorize TTT protocols by two key factors. Among the multiple protocols, we adopt a realistic sequential test-time training (sTTT) protocol, under which we further develop a test-time anchored clustering (TTAC) approach to enable stronger test-time feature learning. TTAC discovers clusters in both source and target domain and match the target clusters to the source ones to improve generalization. Pseudo label filtering and iterative updating are developed to improve the effectiveness and efficiency of anchored clustering. We demonstrate that under all TTT protocols TTAC consistently outperforms the state-of-the-art methods on six TTT datasets. We hope this work will provide a fair benchmarking of TTT methods and future research should be compared within respective protocols. A demo code is available at https://github.com/Gorilla-Lab-SCUT/TTAC.
Accept
The paper proposes a clustering-based method for sequential test-time training, where the test data under distribution shifts arrives in a streaming manner. The authors also explore a newer setting by emphasizing certain dimensions of the test-time adaptation problem: they distinguish experimental setups from the perspective of whether one-pass or multi-pass adaptation is used and whether the training phase needs to be altered. The distinction is important as prior works often compared under different setups, leading to unfair comparisons. The authors did a good job of addressing the reviewers' concerns in the author-reviewer discussion phase, and in the end, all reviewers unanimously support acceptance. AC also did not find a particular weakness warranting rejection. As test-time adaptation has emerged as a realistic way to make ML models more robust in deployment, AC thinks that the contribution of this paper is of interest to a broad range of the NeurIPS audience and would guide future research on the topic. Overall, AC is happy to recommend acceptance.
test
[ "7FwGzO1bQ0_", "DySz-XKGbW", "ERIIOgYaJwU", "nuPdRB77Bng", "baYwwEY7-gT", "fXtMIrC5hs0", "B4BVljhyymp", "3cumlSEJbeZ", "b4Yciv6TJFT", "uyFGTJeEcDn", "KFMr2YgHWm_", "JUgvJr09JJA", "Un_w-weQP6", "nUeJrmeR5o-", "Qn_IRh3q8IN", "JgOo7VsUfzL", "vQVSK7su7iE" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for supporting this work. In the revised manuscript, we shall explicitly address the hyperparameter tuning, categorization of SHOT and point out that more theoretical analysis into KL-Divergence as alignment objective is necessary as future work. Thanks again for your firm supp...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 5 ]
[ "baYwwEY7-gT", "ERIIOgYaJwU", "Un_w-weQP6", "nips_2022_W-_4hgRkwb", "B4BVljhyymp", "nips_2022_W-_4hgRkwb", "vQVSK7su7iE", "JgOo7VsUfzL", "Qn_IRh3q8IN", "nUeJrmeR5o-", "nUeJrmeR5o-", "nUeJrmeR5o-", "nUeJrmeR5o-", "nips_2022_W-_4hgRkwb", "nips_2022_W-_4hgRkwb", "nips_2022_W-_4hgRkwb", ...
nips_2022_GFiqdZOm-Ei
Museformer: Transformer with Fine- and Coarse-Grained Attention for Music Generation
Symbolic music generation aims to generate music scores automatically. A recent trend is to use Transformer or its variants in music generation, which is, however, suboptimal, because the full attention cannot efficiently model the typically long music sequences (e.g., over 10,000 tokens), and the existing models have shortcomings in generating musical repetition structures. In this paper, we propose Museformer, a Transformer with a novel fine- and coarse-grained attention for music generation. Specifically, with the fine-grained attention, a token of a specific bar directly attends to all the tokens of the bars that are most relevant to music structures (e.g., the previous 1st, 2nd, 4th and 8th bars, selected via similarity statistics); with the coarse-grained attention, a token only attends to the summarization of the other bars rather than each token of them so as to reduce the computational cost. The advantages are two-fold. First, it can capture both music structure-related correlations via the fine-grained attention, and other contextual information via the coarse-grained attention. Second, it is efficient and can model over 3X longer music sequences compared to its full-attention counterpart. Both objective and subjective experimental results demonstrate its ability to generate long music sequences with high quality and better structures.
Accept
This paper presents a novel attention selection scheme for modeling the long-term structure of symbolic music. Modeling multi-track music with repetitive structure is a challenging task, especially due to the quadratic memory problem of the transformer. The authors propose measure-limited fine-grained attention and coarse-grained attention that attends to summarizations of measures, not individual tokens. The experiments show that the proposed method can achieve better perplexity compared to the other transformer variants in multi-track symbolic music generation. One of the main questions raised by the reviewers was the musical quality of the generated examples, such as how the repetitive structure was generated or how the method would work on other types of datasets. The authors responded with many additional results that answer those questions. Though one of the reviewers was not fully satisfied with the evaluation, which is valuable criticism that can improve this paper, the majority of reviewers agreed that the paper has a clear contribution worth introducing at NeurIPS. ## Strengths - The paper proposes a nicely working solution for the widely-known but critical limitation of long-sequence modeling with a transformer. The idea is clear, and the implementation is sound. - The experiment results and the presented generation examples show that the proposed attention strategy can help model long symbolic music sequences. ## Weaknesses - The evaluation lacks musical analysis of the generated results. It is difficult to evaluate the quality of generated music samples using only the NLL and listening tests with limited examples and subjects. After the reviewers' feedback, the authors presented additional results, including repeat statistics and self-similarity matrices of each track.
Since the criticized weaknesses can be largely addressed by these additional results, I hope the authors can find a good way to include this content in the revised paper. - Even though it can be applied to many kinds of music, the selection of structure-related bars (1, 2, 4, 8, 16) is a strong assumption. It is a reasonable choice if one has to select it in a rule-based manner, but there is a clear limitation to this approach, as the authors also agreed. ## Recommendation - BP-Transformer for natural language has a similar concept to the proposed coarse-grained attention in that each token attends to the summary of far-away tokens. - Ye, Zihao, et al. "BP-Transformer: Modelling long-range context via binary partitioning." arXiv preprint arXiv:1911.04070 (2019) - As one of the reviewers criticized, a detailed analysis of generated examples is necessary. The authors might consider asking music experts to do a qualitative analysis of the generated examples. The current subjective evaluation has many limitations, as the musical background of the subjects can be very important for evaluating the quality of music structure. Even though it would be difficult to include those detailed analyses in the main paper, it would be beneficial for the authors and readers to truly understand the characteristics of Museformer. The following paper is a good example of analyzing machine-generated music. - Sturm, Bob L., and Oded Ben-Tal. "Taking the models back to music practice: Evaluating generative transcription models built using deep learning." Journal of Creative Music Systems 2 (2017): 32-60. - The results also suggest the possibility that coarse-grained bar-level summarization is more important than fine-grained attention with structure-related bars. It is not clear which part is more essential for modeling long-term structures.
Therefore, I recommend the authors do an ablation study not just with the perplexity score but also with a detailed analysis of the actual generated examples from two different models (coarse-grained only and fine-grained only).
train
[ "SPRMjohK0T2", "tEczURzj8mh", "zj3M8nH41iN", "2941dHrxESU", "_papYDW7Xk", "P49T8P9ZpG", "Pu9B16Ovs7no", "J_Zg0_cqLzd", "bpEcDmzjwrC", "rxJPoGurR6Gj", "5XvrzBxB22", "9E8nXqs3En", "DbVZxv2ZG2P", "qiy_zPTevyC", "27f1v7K0v-z" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Due to the character limits, our response is divided into two comments. This is the second half.\n\n---\n\n**Q3, W2: Are \"Museformer w/o coarse-grained attention\" and Music Transformer almost the same? If so, why PPLs are so different? Should we conclude \"bar summary\" is a more fundamental improvement compare...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 3 ]
[ "Pu9B16Ovs7no", "Pu9B16Ovs7no", "2941dHrxESU", "qiy_zPTevyC", "qiy_zPTevyC", "27f1v7K0v-z", "J_Zg0_cqLzd", "9E8nXqs3En", "DbVZxv2ZG2P", "qiy_zPTevyC", "27f1v7K0v-z", "nips_2022_GFiqdZOm-Ei", "nips_2022_GFiqdZOm-Ei", "nips_2022_GFiqdZOm-Ei", "nips_2022_GFiqdZOm-Ei" ]
nips_2022_I0CiI7Oyp1E
Theoretically Provable Spiking Neural Networks
Spiking neural networks have attracted increasing attention in recent years due to their potential of handling time-dependent data. Many algorithms and techniques have been developed; however, theoretical understandings of many aspects of spiking neural networks are far from clear. A recent work [Zhang and Zhou, 2021] disclosed that typical spiking neural networks could hardly work on spatio-temporal data due to their bifurcation dynamics and suggested that the self-connection structure has to be added. In this paper, we theoretically investigate the approximation ability and computational efficiency of spiking neural networks with self connections, and show that the self-connection structure enables spiking neural networks to approximate continuous dynamical systems using a polynomial number of parameters within polynomial time complexities. Our theoretical results may shed some insight for the future studies of spiking neural networks.
Accept
This paper shows that spiking neural networks with self-connections can approximate continuous dynamical systems with polynomial parameter and time complexities. All reviewers agreed that the contributions of this paper were clearly above the acceptance threshold.
train
[ "1aqxa6A4_Vj", "O65Lc12cy_g", "a0_Zx1I2K1Bh", "7Y92rd18qW5", "mVev44kqsTz", "3ZEu7iNzxIg", "9m7e2lRGLRx", "67rlOTIqjk" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your reponse.\nIndeed the self-connection aspect is now clearer, though recurrence often (indeed unless explicitly excluded) includes self-connections.\nThere are both spiking and non-spiking versions of reservoir computing, termed liquid state machines (from the Maass group) and echo state networks (f...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 1, 3, 3 ]
[ "a0_Zx1I2K1Bh", "mVev44kqsTz", "3ZEu7iNzxIg", "9m7e2lRGLRx", "67rlOTIqjk", "nips_2022_I0CiI7Oyp1E", "nips_2022_I0CiI7Oyp1E", "nips_2022_I0CiI7Oyp1E" ]
nips_2022_2tfv0K8Vbtf
Trading Off Resource Budgets For Improved Regret Bounds
In this work we consider a variant of adversarial online learning where in each round one picks $B$ out of $N$ arms and incurs cost equal to the $\textit{minimum}$ of the costs of each arm chosen. We propose an algorithm called Follow the Perturbed Multiple Leaders (FPML) for this problem, which we show (by adapting the techniques of Kalai and Vempala [2005]) achieves expected regret $\mathcal{O}(T^{\frac{1}{B+1}}\ln(N)^{\frac{B}{B+1}})$ over time horizon $T$ relative to the $\textit{single}$ best arm in hindsight. This introduces a trade-off between the budget $B$ and the single-best-arm regret, and we proceed to investigate several applications of this trade-off. First, we observe that algorithms which use standard regret minimizers as subroutines can sometimes be adapted by replacing these subroutines with FPML, and we use this to generalize existing algorithms for Online Submodular Function Maximization [Streeter and Golovin, 2008] in both the full feedback and semi-bandit feedback settings. Next, we empirically evaluate our new algorithms on an online black-box hyperparameter optimization problem. Finally, we show how FPML can lead to new algorithms for Linear Programming which require stronger oracles at the benefit of fewer oracle calls.
Accept
The paper considers the experts problem when the online player is allowed to choose B arms instead of 1 and is rewarded according to the best arm within the set it chose. The comparator considered in the first result of the paper is the standard single best arm. The paper shows that the regret in this setting scales as ~ T^(1/(B+1)), recovering and extending the standard result. The paper then uses this result to obtain another result when the set-wise function is max. Here the existing result compares with sets of the fixed maximum allowed size B, whereas they present a result that allows comparison with any budget B'. They also play a ln(T) factor more arms in each round, which allows them to compare with OPT as opposed to (1 - 1/e) OPT. The paper, according to the reviews, is well written, but I found Section 3 hard to read, especially the notations B' and tilde{B}. I also think the regret statement in that theorem is not well defined; it should clearly state that it is with respect to B'-sized sets. There are other shortcomings of the paper, like the bandit results not matching the optimal in the case of B=1 (as discussed with a reviewer), which points to suboptimality of the presented bound. The lower bounds presented are also unfortunately weaker. These are not shortcomings per se but desiderata that would make the paper very strong. I also find that the fact that they play a ln(T) factor more arms in OSFM is not stressed appropriately. Overall the results in the paper are good as presented, but the shortcomings listed above put the paper right on the borderline. There is general appreciation of the results by the reviewers, which in my view puts the paper slightly above the borderline. I recommend a marginal accept for the paper. I would like to urge the authors to look at the paper critically irrespective of the outcome, and at a minimum look to improve the presentation. Other lingering questions, if answered, will be a great bonus.
test
[ "DrHfqbhGG9", "MO00X1YpXym", "n270INVsfHq", "4qtjgqVrEvuq", "ajoV19Uhlrr5", "16iX3V2ItYR", "pElKkgLL8uh", "yZSYssf6gbp", "e5ESkgcEa3a", "V9fIVu1S3M0" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank for your response and very clear explanation. We agree with you that in the $B=1$ case the $K = \\mathcal{O}(N)$ condition is in fact not likely, and that the geometric resampling estimators do lead to a $\\mathcal{O}(T^{2/3})$ bound using our analysis when $B=1$. Interestingly, in this case our algorithm i...
[ -1, -1, -1, -1, -1, -1, 4, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "MO00X1YpXym", "4qtjgqVrEvuq", "pElKkgLL8uh", "e5ESkgcEa3a", "yZSYssf6gbp", "V9fIVu1S3M0", "nips_2022_2tfv0K8Vbtf", "nips_2022_2tfv0K8Vbtf", "nips_2022_2tfv0K8Vbtf", "nips_2022_2tfv0K8Vbtf" ]
nips_2022_w_jvWzNXd6n
Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost
To overcome the quadratic cost of self-attention, recent works have proposed various sparse attention modules, most of which fall under one of two groups: 1) sparse attention under a hand-crafted patterns and 2) full attention followed by a sparse variant of softmax such as $\alpha$-entmax. Unfortunately, the first group lacks adaptability to data while the second still requires quadratic cost in training. In this work, we propose SBM-Transformer, a model that resolves both problems by endowing each attention head with a mixed-membership Stochastic Block Model (SBM). Then, each attention head data-adaptively samples a bipartite graph, the adjacency of which is used as an attention mask for each input. During backpropagation, a straight-through estimator is used to flow gradients beyond the discrete sampling step and adjust the probabilities of sampled edges based on the predictive loss. The forward and backward cost are thus linear to the number of edges, which each attention head can also choose flexibly based on the input. By assessing the distribution of graphs, we theoretically show that SBM-Transformer is a universal approximator for arbitrary sequence-to-sequence functions in expectation. Empirical evaluations under the LRA and GLUE benchmarks demonstrate that our model outperforms previous efficient variants as well as the original Transformer with full attention. Our implementation can be found in https://github.com/sc782/SBM-Transformer.
Accept
This paper focuses on sparse attention modules for improving the computational cost of the transformer. The authors propose to adaptively learn the sparsity of the attention module through modeling the mask matrix as a bipartite graph generated by the stochastic block model (SBM). While there are some concerns, such as the method not being well suited for large GPU clusters, overall, all the reviewers find the proposed method interesting and novel, so I recommend accept. But the authors are advised to revise the presentation of the SBM to improve the accessibility and readability and incorporate other reviewers' suggestions.
train
[ "5WbF8Pv77EM", "Gl3snW4uvw-", "MDxYBxoyoG2", "yAFeC_JgTA6", "Uuof9FX2mSI", "Awf4sEV-VT", "G4FDUAiyvG6", "hu4MPxWQjO-", "kWNIUbu-fAM", "BLxQ85BQNJcN", "yaWG43jHG7v", "V-iKKvIh-rw", "FWNDkyTH-sC", "RMm7QaWiTwe", "Al9sNHsFmFU", "rAvBA17vVs", "ObqUtvGlBU7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. Just as a friendly reminder, we have also uploaded a revision of our manuscript to address your comments and concerns. Please feel free to take a look and let us know in case of any further questions. Thank you!", " I have read the response and the other reviews I have a better under...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "Gl3snW4uvw-", "Al9sNHsFmFU", "nips_2022_w_jvWzNXd6n", "nips_2022_w_jvWzNXd6n", "Awf4sEV-VT", "V-iKKvIh-rw", "nips_2022_w_jvWzNXd6n", "kWNIUbu-fAM", "ObqUtvGlBU7", "yaWG43jHG7v", "V-iKKvIh-rw", "rAvBA17vVs", "RMm7QaWiTwe", "Al9sNHsFmFU", "nips_2022_w_jvWzNXd6n", "nips_2022_w_jvWzNXd6n"...
nips_2022_C6Iin6nXJy
Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling
As deep learning blooms with growing demand for computation and data resources, outsourcing model training to a powerful cloud server becomes an attractive alternative to training at a low-power and cost-effective end device. Traditional outsourcing requires uploading device data to the cloud server, which can be infeasible in many real-world applications due to the often sensitive nature of the collected data and the limited communication bandwidth. To tackle these challenges, we propose to leverage widely available open-source data, i.e., massive datasets collected from public and heterogeneous sources (e.g., Internet images). We develop a novel strategy called Efficient Collaborative Open-source Sampling (ECOS) to construct a proximal proxy dataset from open-source data for cloud training, in lieu of client data. ECOS probes open-source data on the cloud server to sense the distribution of client data via a communication- and computation-efficient sampling process, which only communicates a few compressed public features and client scalar responses. Extensive empirical studies show that the proposed ECOS improves the quality of automated client labeling, model compression, and label outsourcing when applied in various learning scenarios. Source code will be released.
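The ECOS abstract above describes a server that compresses public features, a client that replies only with scalar responses, and a server that then samples a proxy dataset matching the client distribution. A toy sketch of that three-step exchange, on synthetic 2-D features, might look as follows; this is not the paper's actual ECOS algorithm, and all names and the tiny k-means compressor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: "public" open-source features form two clusters,
# while the client's private features lie near only one of them.
public_feats = np.vstack([
    rng.normal(0.0, 0.3, size=(50, 2)),   # in-domain cluster (indices 0-49)
    rng.normal(5.0, 0.3, size=(50, 2)),   # out-of-domain cluster (indices 50-99)
])
client_feats = rng.normal(0.0, 0.3, size=(30, 2))

def server_centroids(feats, R, iters=10):
    """Server compresses public data into R centroids (a tiny Lloyd's k-means)."""
    C = feats[np.linspace(0, len(feats) - 1, R).astype(int)].copy()
    for _ in range(iters):
        assign = np.argmin(((feats[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for r in range(R):
            members = feats[assign == r]
            if len(members):
                C[r] = members.mean(0)
    return C, assign

def client_responses(feats, C):
    """Client replies with one scalar per centroid: how many private samples fall nearest it."""
    nearest = np.argmin(((feats[:, None] - C[None]) ** 2).sum(-1), axis=1)
    return np.bincount(nearest, minlength=len(C))

C, assign = server_centroids(public_feats, R=2)
counts = client_responses(client_feats, C)

# Server samples a proxy dataset from the public pool, weighted by the client's responses,
# so out-of-domain public samples (zero-count centroids) are never selected.
probs = counts[assign].astype(float)
probs /= probs.sum()
proxy_idx = rng.choice(len(public_feats), size=20, replace=False, p=probs)
```

Note how little the client transmits: only `R` scalars (here, two counts), consistent with the communication-efficiency claim; a differentially private variant would add noise to these counts before sending.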
Accept
This paper presents a novel framework for outsourcing model training to cloud servers without the need for the clients to upload their data to the cloud. Unlike federated learning, the clients don't even need to perform any local training. In particular, the server relies on a large amount of open-source data to perform model training. To reduce the negative impact of out-of-domain (OOD) open-source data, the server performs **efficient collaborative open-source sampling** (ECOS) with the help of the client. The authors show that ECOS can be performed with a small amount of communication from client to server. Moreover, this communication can be made differentially private, as shown in the paper. Overall, the proposed framework is novel and interesting. Its efficacy is demonstrated on two vision tasks of digit recognition and object recognition, where the proposed framework outperforms a couple of natural baselines. Most of the reviewers are positive about this work. Some of the reviewers asked for clarifications, which the authors adequately provided during the rebuttal. Notably, the authors provided a generalization bound, results on another baseline (where clients train the model locally), and accuracy vs. differential privacy cost experiments. The authors are encouraged to include all these results in the revised paper. Even though the presentation of the paper has significantly improved after the author-reviewer discussion, there is still scope for some improvements: 1) The description of the sampling objective in Section 3.2 lacks flow. It's not clear why $\hat{S}$ is introduced; $\hat{S}$ does not even feature in the description of Algorithm 1. Please address this. 2) $\hat{D}^q$ is used to denote the $R$ centroids of the public data, whereas $\hat{D}^p$ (line 4, Algorithm 1) is used to denote the features of client data. Please use consistent notations.
Even though the authors have mentioned that their framework can potentially work for language models with minimal modifications (e.g., using appropriate representations), it needs to be verified via detailed experiments. The authors should either include results on this or add this as a potential avenue for future work.
train
[ "Lthrjw5S-q9", "aBBKf4u_5p8", "hPExeti87t", "pC9vd8DmX_M", "BuskoC7grRk", "Y3LrxtKeh1N", "mtJulN1ylBb", "6Fc-o8thnvh", "4IXYbNhGiOo", "jcX3wQJ1ICq0", "l44sTBFqpMrO", "lV1aSxCIWPu", "TVRnVNhdwOI", "CByxKUwO5pI", "blv8feGZWhN", "9sp8DUa_086", "tZJ6TCynCrJ", "457cVo-Nrii", "kmoGC7mf...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_...
[ " Thanks for the positive responses. We will add the convergence analysis in our final version.\n\nIn brief, based on [A], the convergence bound on the client loss can be informally derived as \n\n$L(f_{\\theta_T}, \\mathcal{D}_t) \\le (1 - \\frac{\\mu}{m})^T \\epsilon(\\theta_0, D_s) + {1\\over 2} \\Delta_H (\\ma...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3, 4 ]
[ "pC9vd8DmX_M", "hPExeti87t", "9sp8DUa_086", "CByxKUwO5pI", "tZJ6TCynCrJ", "6SKx5sWU8XF", "pIpl29nZxq", "457cVo-Nrii", "tZJ6TCynCrJ", "6SKx5sWU8XF", "pIpl29nZxq", "pIpl29nZxq", "kmoGC7mfTPm", "457cVo-Nrii", "tZJ6TCynCrJ", "tZJ6TCynCrJ", "nips_2022_C6Iin6nXJy", "nips_2022_C6Iin6nXJy"...