paper_id: stringlengths (19–21)
paper_title: stringlengths (8–170)
paper_abstract: stringlengths (8–5.01k)
paper_acceptance: stringclasses (18 values)
meta_review: stringlengths (29–10k)
label: stringclasses (3 values)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
nips_2022_eE-S1U5GG94
Principle Components Analysis based frameworks for efficient missing data imputation algorithms
Missing data is a commonly occurring problem in practice. Many imputation methods have been developed to fill in the missing entries. However, not all of them can scale to high-dimensional data, especially the multiple imputation techniques. Meanwhile, data nowadays tends to be high-dimensional. Therefore, in this work, we propose \textit{Principal Component Analysis Imputation} (PCAI), a simple but versatile framework based on Principal Component Analysis (PCA) to speed up the imputation process and alleviate memory issues of many available imputation techniques, without sacrificing the imputation quality in terms of MSE. In addition, the framework can be used even when some or all of the missing features are categorical, or when the number of missing features is large. Next, we introduce \textit{PCA Imputation - Classification} (PIC), an application of PCAI for classification problems with some adjustments. We validate our approach by experiments on various scenarios, which show that PCAI and PIC can work with various imputation algorithms, including the state-of-the-art ones, and improve the imputation speed significantly while achieving competitive mean square error/classification accuracy compared to direct imputation (i.e., imputing directly on the missing data).
Reject
This paper proposes a framework based on principal component analysis (PCA) to speed up missing data imputation. It divides the feature set into two partitions -- the fully observed one and the one that contains missing values. The proposed method applies PCA to the fully observed partition for dimensionality reduction, followed by existing imputation methods. The authors further propose to apply PCA to the imputed data to speed up the downstream classification task. The major weakness is that the methodological contribution is quite limited. Projecting data into lower-dimensional spaces to speed up downstream tasks is not new. In particular, the main assumption of random missingness was already considered 10-20 years ago, and the more challenging setting of non-random missingness is not considered. Overall, the reviewers mostly agree that the contribution is limited.
train
[ "yVgfx9-v3bk", "AEvB4mQSey", "vuZpUYq6yl3", "I9Q_8Srr-lg", "o1btXSJ6GDd", "jZAtrZ6XWNu", "1rIrXm2HXYw", "gNa6Cj4eMki", "u6PT6qezVi", "NQ-iWW6tb6", "FednkHVaIEs", "_S-LzJ-iwA", "S1KV49qfUP4" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your comments. We do believe that it would be interesting to explore such settings and to explore the use of SVD and LDA as alternatives to PCA, since LDA is a popular supervised dimension reduction technique that makes use of the information from the labels, and SVD is another commonly u...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "vuZpUYq6yl3", "I9Q_8Srr-lg", "1rIrXm2HXYw", "u6PT6qezVi", "gNa6Cj4eMki", "S1KV49qfUP4", "_S-LzJ-iwA", "FednkHVaIEs", "NQ-iWW6tb6", "nips_2022_eE-S1U5GG94", "nips_2022_eE-S1U5GG94", "nips_2022_eE-S1U5GG94", "nips_2022_eE-S1U5GG94" ]
nips_2022_CflSnSkH--
Sequential Information Design: Learning to Persuade in the Dark
We study a repeated information design problem faced by an informed sender who tries to influence the behavior of a self-interested receiver. We consider settings where the receiver faces a sequential decision making (SDM) problem. At each round, the sender observes the realizations of random events in the SDM problem. This begets the challenge of how to incrementally disclose such information to the receiver to persuade them to follow (desirable) action recommendations. We study the case in which the sender does not know the probabilities of the random events, and, thus, they have to gradually learn them while persuading the receiver. Our goal is to design online learning algorithms that are no-regret for the sender, while at the same time being persuasive for the receiver. We start by providing a non-trivial polytopal approximation of the set of the sender's persuasive information structures. This is crucial to design efficient learning algorithms. Next, we prove a negative result: no learning algorithm can be persuasive. Thus, we relax persuasiveness requirements by focusing on algorithms that guarantee that the receiver's regret in following recommendations grows sub-linearly. In the full-feedback setting---where the sender observes all random event realizations---, we provide an algorithm with $\tilde{O}(\sqrt{T})$ regret for both the sender and the receiver. Instead, in the bandit-feedback setting---where the sender only observes the realizations of random events actually occurring in the SDM problem---, we design an algorithm that, given an $\alpha \in [1/2, 1]$ as input, ensures $\tilde{O}({T^\alpha})$ and $\tilde{O}( T^{\max \{ \alpha, 1-\frac{\alpha}{2} \} })$ regrets for the sender and the receiver, respectively. This result is complemented by a lower bound showing that such a regret trade-off is essentially tight.
Accept
The reviews are all positive and reviewers agree that the paper studies a novel setting with solid technical contributions.
train
[ "LqSWk-xR-9T", "yl1wbdJKN93", "sbI8XjqwrQD", "h6Mzg4xV4MI", "QxrQAvZwi0f", "ImsQQobgdi0", "MR3CHxLMRPz", "3-pAHCGJt75", "bZAz91IIyGB", "efY37RwTJ97", "k8zwiV66-Fz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed response, and the clarifications on the questions. ", " Thanks a lot for the response! Regarding i) and re-looking at it carefully, I agree that it would seem hard to explain the main theorem without the SDM language. It'd be great if you had a bit of space to give some intuitive explana...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 3 ]
[ "ImsQQobgdi0", "QxrQAvZwi0f", "h6Mzg4xV4MI", "k8zwiV66-Fz", "efY37RwTJ97", "bZAz91IIyGB", "3-pAHCGJt75", "nips_2022_CflSnSkH--", "nips_2022_CflSnSkH--", "nips_2022_CflSnSkH--", "nips_2022_CflSnSkH--" ]
nips_2022_6PpLxPPTPd
On Optimal Learning Under Targeted Data Poisoning
Consider the task of learning a hypothesis class $\mathcal{H}$ in the presence of an adversary that can replace up to an $\eta$ fraction of the examples in the training set with arbitrary adversarial examples. The adversary aims to fail the learner on a particular target test point $x$ which is \emph{known} to the adversary but not to the learner. In this work we aim to characterize the smallest achievable error $\epsilon=\epsilon(\eta)$ by the learner in the presence of such an adversary in both realizable and agnostic settings. We fully achieve this in the realizable setting, proving that $\epsilon=\Theta(\mathtt{VC}(\mathcal{H})\cdot \eta)$, where $\mathtt{VC}(\mathcal{H})$ is the VC dimension of $\mathcal{H}$. Remarkably, we show that the upper bound can be attained by a deterministic learner. In the agnostic setting we reveal a more elaborate landscape: we devise a deterministic learner with a multiplicative regret guarantee of $\epsilon \leq C\cdot\mathtt{OPT} + O(\mathtt{VC}(\mathcal{H})\cdot \eta)$, where $C > 1$ is a universal numerical constant. We complement this by showing that for any deterministic learner there is an attack which worsens its error to at least $2\cdot \mathtt{OPT}$. This implies that a multiplicative deterioration in the regret is unavoidable in this case. Finally, the algorithms we develop for achieving the optimal rates are inherently improper. Nevertheless, we show that for a variety of natural concept classes, such as linear classifiers, it is possible to retain the dependence $\epsilon=\Theta_{\mathcal{H}}(\eta)$ by a proper algorithm in the realizable setting. Here $\Theta_{\mathcal{H}}$ conceals a polynomial dependence on $\mathtt{VC}(\mathcal{H})$.
Accept
All reviewers agree that this paper comprehensively studies a fundamental question of PAC learning under instance-targeted poisoning, including the study of realizable, agnostic, and deterministic learning settings; overall this paper makes a nice contribution to the field of robust machine learning. The authors are strongly encouraged to incorporate the comments by the reviewers, including revising the motivating examples, terminologies, etc.
train
[ "A1GwFnhR2L", "DQQixAGt4Nc", "eoy4KEvNTfB", "pf4wu6TliGZ", "vZAiWyzzOE", "QOobkrg_Hon", "NDN2fhSVrYs", "lWyKrldj9h", "lnfgGLGJkUw", "i0XR07Pnoxa", "ZrGzdLGDIS", "zcerpN-aPxk", "P9-GZvtafFB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " There was no additional discussion during the interactive period. With the feedback window soon to close, I have increased my score, and I believe this paper would be a valuable contribution to NeurIPS.", " Thank you for your detailed response. I have read it along with your responses to the other reviewers. I ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "DQQixAGt4Nc", "QOobkrg_Hon", "vZAiWyzzOE", "NDN2fhSVrYs", "P9-GZvtafFB", "zcerpN-aPxk", "ZrGzdLGDIS", "i0XR07Pnoxa", "nips_2022_6PpLxPPTPd", "nips_2022_6PpLxPPTPd", "nips_2022_6PpLxPPTPd", "nips_2022_6PpLxPPTPd", "nips_2022_6PpLxPPTPd" ]
nips_2022_OkLee4SfLKh
Distilled Gradient Aggregation: Purify Features for Input Attribution in the Deep Neural Network
Measuring the attribution of input features toward the model output is one of the popular post-hoc explanations of Deep Neural Networks (DNNs). Among various approaches to computing the attribution, gradient-based methods are widely used to generate attributions because of their ease of implementation and model-agnostic characteristic. However, existing gradient integration methods such as Integrated Gradients (IG) suffer from (1) noisy attributions, which cause the unreliability of the explanation, and (2) the selection of the integration path, which determines the quality of the explanations. FullGrad (FG) is another approach that constructs reliable attributions by focusing on the locality of the piece-wise linear network with the bias gradient. Although FG has shown reasonable performance for the given input, due to its lack of the global property, FG is vulnerable to small perturbations, while IG, which includes exploration over the input space, is robust. In this work, we design a new input attribution method that adopts the strengths of both local and global attributions. In particular, we propose a novel approach to distill input features using weak and extremely positive contributor masks. We aggregate the intermediate local attributions obtained from the distillation sequence to provide a reliable attribution. We perform a quantitative evaluation against various attribution methods and show that our method outperforms the others. We also provide qualitative results showing that our method obtains object-aligned and sharp attribution heatmaps.
Accept
This paper proposes a new gradient-based attribution method Distilled Gradient Aggregation (DGA), which combines the strengths of both local and global attribution methods. The reviewers and meta-reviewer found the method novel, and supported by promising results both qualitatively and quantitatively. During the rebuttal phase, the authors made a _thorough_ effort in response to each reviewer's comments. As recognized by two reviewers (hnze, cvDy), the newly added results and discussions have significantly strengthened the contribution. The AC recommends acceptance given the paper tackles a critical problem, and presented an effective and convincing method that advances the field of explainable AI. Authors are strongly recommended to include these new results and writing changes suggested by the reviewers for the final version of the manuscript.
train
[ "FK_m5V_slK5", "AG2OJaPrg0C", "GzCfcpK4bus", "oXB5qz6T1WU", "wIX8L6m4vpw", "FeCtK3Hi0Gr", "BNOtj8buyPp", "8oTKwJHsRyL", "KIStdeq9Iox", "WKJwteLdeO-", "rK86vmbJOf0", "DYG_HDYKUCa", "pBjfofAYoW_", "kewOsFA5TDfW", "tLxRN9zAU7V", "ZiDPmvnpub", "8zykOn-Bz0", "3BMEnPcx80k", "bwygH2dtQ-...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Sorry for my late response. I appreciate the updated version of the submission, which helps to explain the motivations of the work. I extremely appreciate the experiment of sensitivity-n and hope that is included in the revision. \n\n**The response does not address my concerns about the motivations.**\n- For the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "LI6EdOavsnW", "GzCfcpK4bus", "tLxRN9zAU7V", "wIX8L6m4vpw", "DYG_HDYKUCa", "nips_2022_OkLee4SfLKh", "8oTKwJHsRyL", "3BMEnPcx80k", "LI6EdOavsnW", "LI6EdOavsnW", "LI6EdOavsnW", "QjpOEUy82yC", "QjpOEUy82yC", "QjpOEUy82yC", "YEWXzi6AgWA", "XK0PlAjnTsz", "XK0PlAjnTsz", "XK0PlAjnTsz", ...
nips_2022_3wg-rYuo5AN
Okapi: Generalising Better by Making Statistical Matches Match
We propose Okapi, a simple, efficient, and general method for robust semi-supervised learning based on online statistical matching. Our method uses a nearest-neighbours-based matching procedure to generate cross-domain views for a consistency loss, while eliminating statistical outliers. In order to perform the online matching in a runtime- and memory-efficient way, we draw upon the self-supervised literature and combine a memory bank with a slow-moving momentum encoder. The consistency loss is applied within the feature space, rather than on the predictive distribution, making the method agnostic to both the modality and the task in question. We experiment on the WILDS 2.0 datasets (Sagawa et al.), which significantly expand the range of modalities, applications, and shifts available for studying and benchmarking real-world unsupervised adaptation. Contrary to Sagawa et al., we show that it is in fact possible to leverage additional unlabelled data to improve upon empirical risk minimisation (ERM) results with the right method. Our method outperforms the baseline methods in terms of out-of-distribution (OOD) generalisation on the iWildCam (a multi-class classification task) and PovertyMap (a regression task) image datasets as well as the CivilComments (a binary classification task) text dataset. Furthermore, from a qualitative perspective, we show the matches obtained from the learned encoder are strongly semantically related. Code for our paper is publicly available at https://github.com/wearepal/okapi/.
Accept
In this paper, the authors propose an algorithm called Okapi that improves out-of-domain generalization by ensuring representation similarity between in-domain (labelled) examples and out-of-domain (unlabelled) examples. Despite the algorithm's high complexity, which initially concerned the reviewers, the empirical results (both in the original submission and the response) as well as the authors' successful rebuttal convinced the reviewers that this manuscript is worth publication. Therefore, I recommend acceptance. Authors, please take into account the comments from the reviews when preparing your camera-ready version.
train
[ "5LzB9R-7Aqi", "yooPkVeCPg", "wj6HNQxA5py", "RU26Xpv3zRU", "C73RgWcooEt", "TIWQOK0Jf-", "S_8f26_gzr2", "Poj62XDzGNZ", "k4-d8bQa9q", "OXcuDt69A0-", "IR7wEj4zCI", "4KEtanmvN5z", "isz5KyEEpBd", "bueIMREcVS9", "xKkbOOFmhUj", "pmkEBWVg0Hg", "rd8Mbqhv1NR" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking the time to read our responses and those of the other reviewers; we greatly appreciate your feedback and your providing the reference (despite it being behind a paywall). Said reference (Wang et al.) is indeed similar in the respects that it leverages k-NN and a memory bank for semi-supervise...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 5 ]
[ "yooPkVeCPg", "k4-d8bQa9q", "TIWQOK0Jf-", "C73RgWcooEt", "OXcuDt69A0-", "Poj62XDzGNZ", "xKkbOOFmhUj", "rd8Mbqhv1NR", "pmkEBWVg0Hg", "bueIMREcVS9", "bueIMREcVS9", "bueIMREcVS9", "nips_2022_3wg-rYuo5AN", "nips_2022_3wg-rYuo5AN", "nips_2022_3wg-rYuo5AN", "nips_2022_3wg-rYuo5AN", "nips_2...
nips_2022_FncDhRcRYiN
Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling
In this paper we study the convex-concave saddle-point problem $\min_x \max_y f(x) + y^\top\mathbf{A}x - g(y)$, where $f(x)$ and $g(y)$ are smooth and convex functions. We propose an Accelerated Primal-Dual Gradient Method (APDG) for solving this problem, achieving (i) an optimal linear convergence rate in the strongly-convex-strongly-concave regime, matching the lower complexity bound (Zhang et al., 2021), and (ii) an accelerated linear convergence rate in the case when only one of the functions $f(x)$ and $g(y)$ is strongly convex or even none of them are. Finally, we obtain a linearly convergent algorithm for the general smooth and convex-concave saddle point problem $\min_x \max_y F(x,y)$ without the requirement of strong convexity or strong concavity.
Accept
All reviewers liked the paper and the overall impression is very positive - clear accept.
train
[ "Ghruxx52Sa", "6h-AEYJ5A_A", "43y2LiHh46d", "eSmijB1MkdE", "O3NjbqDfRM", "aMorBuhjsj0", "DwUXMgCC4Ky", "qnbN3NijDL", "vcLOnlLkMaz", "PBUmz0Cejwo", "kmJCXU4YCT4", "TyBT_37q4ZE", "G9O6QdsOogI", "Z6bDeSerx1e", "6fZNknGB7Sj", "Xy4QQQUmezP", "kNWVp4VjeS", "4akZLXMF2s" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks, and thanks for letting us know!", " Hi, thanks for pointing out the improvement in the bound. I did ignore this in my first attempt of understanding. Review has been updated.", " Thanks for the response. The response eases some of my concerns and I'm happy to raise my score from 5 to 6. My major conce...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 9, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 3 ]
[ "6h-AEYJ5A_A", "Z6bDeSerx1e", "kmJCXU4YCT4", "O3NjbqDfRM", "DwUXMgCC4Ky", "nips_2022_FncDhRcRYiN", "4akZLXMF2s", "nips_2022_FncDhRcRYiN", "4akZLXMF2s", "kNWVp4VjeS", "kNWVp4VjeS", "Xy4QQQUmezP", "6fZNknGB7Sj", "6fZNknGB7Sj", "nips_2022_FncDhRcRYiN", "nips_2022_FncDhRcRYiN", "nips_202...
nips_2022_e62ZssObZp
Accelerating SGD for Highly Ill-Conditioned Huge-Scale Online Matrix Completion
The matrix completion problem seeks to recover a $d\times d$ ground truth matrix of low rank $r\ll d$ from observations of its individual elements. Real-world matrix completion is often a huge-scale optimization problem, with $d$ so large that even the simplest full-dimension vector operations with $O(d)$ time complexity become prohibitively expensive. Stochastic gradient descent (SGD) is one of the few algorithms capable of solving matrix completion on a huge scale, and can also naturally handle streaming data over an evolving ground truth. Unfortunately, SGD experiences a dramatic slow-down when the underlying ground truth is ill-conditioned; it requires at least $O(\kappa\log(1/\epsilon))$ iterations to get $\epsilon$-close to a ground truth matrix with condition number $\kappa$. In this paper, we propose a preconditioned version of SGD that preserves all the favorable practical qualities of SGD for huge-scale online optimization while also making it agnostic to $\kappa$. For a symmetric ground truth and the Root Mean Square Error (RMSE) loss, we prove that the preconditioned SGD converges to $\epsilon$-accuracy in $O(\log(1/\epsilon))$ iterations, with a rapid linear convergence rate as if the ground truth were perfectly conditioned with $\kappa=1$. In our numerical experiments, we observe a similar acceleration for ill-conditioned matrix completion under the RMSE loss, Euclidean distance matrix (EDM) completion under pairwise square loss, and collaborative filtering under the Bayesian Personalized Ranking (BPR) loss.
Accept
An SGD version for this matrix completion problem was missing, and this work fills that missing piece. The reviewers' concerns seem to be resolved. Please try to add real-data experiments to the final version, perhaps instead of the synthetic data.
train
[ "TJbUFLonFi", "a8tzVyYczd9", "as0Xh8BzFo7", "KoafEPIY_IT", "3382j4kaPn", "jaG0XPsbbcA", "S8x_zOR_LK", "115plOJ-EPY", "TSw5Vwe7fA4", "eQtWX8LsdYr", "T8PT5pJlOC3" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The reviewer asks whether it is still \"easy\" to compute an initial\npoint within an $1/(\\kappa\\cdot d)$ radius, in view of the extra\nfactor of $1/d$. If we define \"easy\" in terms of theoretical\ncomplexity in dimension $d$ while ignoring the condition number $\\kappa$,\nthen the answer is \"yes, it is easy...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "a8tzVyYczd9", "S8x_zOR_LK", "jaG0XPsbbcA", "T8PT5pJlOC3", "T8PT5pJlOC3", "eQtWX8LsdYr", "TSw5Vwe7fA4", "nips_2022_e62ZssObZp", "nips_2022_e62ZssObZp", "nips_2022_e62ZssObZp", "nips_2022_e62ZssObZp" ]
nips_2022_ZK6lzx0jqdZ
ShuffleMixer: An Efficient ConvNet for Image Super-Resolution
Being lightweight and efficient is a critical driver for the practical application of image super-resolution (SR) algorithms. We propose a simple and effective approach, ShuffleMixer, for lightweight image super-resolution that explores large convolutions and channel split-shuffle operations. In contrast to previous SR models that simply stack multiple small-kernel convolutions or complex operators to learn representations, we explore a large-kernel ConvNet for mobile-friendly SR design. Specifically, we develop a large depth-wise convolution and two projection layers based on channel splitting and shuffling as the basic component to mix features efficiently. Since the contexts of natural images are strongly locally correlated, using large depth-wise convolutions alone is insufficient to reconstruct fine details. To overcome this problem while maintaining the efficiency of the proposed module, we introduce Fused-MBConvs into the proposed network to model the local connectivity of different features. Experimental results demonstrate that the proposed ShuffleMixer is about $3 \times$ smaller than state-of-the-art efficient SR methods, e.g., CARN, in terms of model parameters and FLOPs, while achieving competitive performance.
Accept
The paper received divergent reviews (with two reviewers leaning to reject and two leaning to accept). The key strengths of the paper are simple ideas and strong results on efficient and accurate image super-resolution. The raised weaknesses include: - "might fall out of scope for the conference" [Reviewer gjyf] The AC disagrees with this statement. As the authors' responses pointed out, CNN designs for computer vision applications are welcome at NeurIPS. - "comparison with the top winners of the challenges" [Reviewer gjyf] The authors' rebuttal includes direct comparisons with previous winners. - "how the performance is if the kernel size" [Reviewer gjyf] The authors provide the additional ablation. There were no follow-up questions and no rebuttal acknowledgement from Reviewer gjyf. From the responses alone, the AC thinks that the concerns have been sufficiently addressed. Another major weakness is - "The novelty of this paper may be limited." [Reviewer YLsE]. It is true that basic components like depthwise convolution and channel shuffling have been explored in efficient CNN design. But as the authors' rebuttal described, the adoption and modification of DW Conv or channel shuffle is non-trivial in the context of image SR. In sum, the AC read the reviews and rebuttal. Weighing both the strengths and the weaknesses of the work, the AC decides to side with reviewers X1ut and xXRb and recommends acceptance.
train
[ "hjkbiBv_YK1", "hZ0A0XHdUrR", "WqhcuH9Whhp", "-wQwSsXS6jtf", "8pZ9QVwIWBL4", "r6lS_hy_8I", "JRs5492uCTA", "Fknk2ydJxvK", "Dao58IGKHTz", "uKmTwDZ3u8U", "cd1Lu-0V1gE", "qoZO7SFao1e", "8IFwCcfJk3r", "FZ3MmhWByT" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\nthank you for your clarification, now I can see the strength of the proposal. However, I find more doubts. As you are presenting channel splitting, shuffling strategy, and Fused-MBConv, how they can work with different kernel size? In the last table, the model D reaches 28.86, what is the behavior...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 7, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 5 ]
[ "8pZ9QVwIWBL4", "nips_2022_ZK6lzx0jqdZ", "JRs5492uCTA", "Fknk2ydJxvK", "Dao58IGKHTz", "uKmTwDZ3u8U", "FZ3MmhWByT", "8IFwCcfJk3r", "qoZO7SFao1e", "cd1Lu-0V1gE", "nips_2022_ZK6lzx0jqdZ", "nips_2022_ZK6lzx0jqdZ", "nips_2022_ZK6lzx0jqdZ", "nips_2022_ZK6lzx0jqdZ" ]
nips_2022_rlN6fO3OrP
BadPrompt: Backdoor Attacks on Continuous Prompts
The prompt-based learning paradigm has gained much research attention recently. It has achieved state-of-the-art performance on several NLP tasks, especially in few-shot scenarios. While steering the downstream tasks, few works have investigated the security problems of prompt-based models. In this paper, we conduct the first study of the vulnerability of continuous prompt learning algorithms to backdoor attacks. We observe that few-shot scenarios pose a great challenge to backdoor attacks on prompt-based models, limiting the usability of existing NLP backdoor methods. To address this challenge, we propose BadPrompt, a lightweight and task-adaptive algorithm for backdoor attacks on continuous prompts. Specifically, BadPrompt first generates candidate triggers that are indicative of the targeted label and dissimilar to the samples of the non-targeted labels. Then, it automatically selects the most effective and invisible trigger for each sample with an adaptive trigger optimization algorithm. We evaluate the performance of BadPrompt on five datasets and two continuous prompt models. The results exhibit the ability of BadPrompt to effectively attack continuous prompts while maintaining high performance on the clean test sets, outperforming the baseline models by a large margin. The source code of BadPrompt is publicly available.
Accept
This paper conducts a study on the vulnerability of the continuous prompt learning algorithm to backdoor attacks. The authors have made a few interesting observations, such as that the few-shot scenario poses a challenge to backdoor attacks. The authors then propose BadPrompt for backdoor attacks on continuous prompts. Overall the paper is well-written, and the perspective and insights provided in the paper are interesting and could be valuable to the community.
train
[ "jFo9CO9NcUz", "9FF8kddtkt", "tQ30ilEEm97", "ZCWCnjEmiev", "c3swe-2sDy", "46UoaS4_G-", "LvHFGvm9ok-", "IgB_HfXOJZR", "qc8tw-DOyN", "Dr4frTtIjj", "lv21Op2azPq", "kmaRlD6xqHB", "qkwyNJ0ogfa", "kybLym-qxts" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " One reviewer flagged the paper for potential ethics issues related to potential real-world vulnerabilities, and inadequate discussion of limitations, and a lack of discussion about potential defences.\n\n The authors have added text to the paper to address the various concerns raised by the reviewer. The addition...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "nips_2022_rlN6fO3OrP", "IgB_HfXOJZR", "LvHFGvm9ok-", "46UoaS4_G-", "qc8tw-DOyN", "kybLym-qxts", "qkwyNJ0ogfa", "kmaRlD6xqHB", "lv21Op2azPq", "nips_2022_rlN6fO3OrP", "nips_2022_rlN6fO3OrP", "nips_2022_rlN6fO3OrP", "nips_2022_rlN6fO3OrP", "nips_2022_rlN6fO3OrP" ]
nips_2022_B7Q2mbIFa6Q
A Characterization of Semi-Supervised Adversarially Robust PAC Learnability
We study the problem of learning an adversarially robust predictor to test time attacks in the semi-supervised PAC model. We address the question of how many labeled and unlabeled examples are required to ensure learning. We show that having enough unlabeled data (the size of a labeled sample that a fully-supervised method would require), the labeled sample complexity can be arbitrarily smaller compared to previous works, and is sharply characterized by a different complexity measure. We prove nearly matching upper and lower bounds on this sample complexity. This shows that there is a significant benefit in semi-supervised robust learning even in the worst-case distribution-free model, and establishes a gap between supervised and semi-supervised label complexities which is known not to hold in standard non-robust PAC learning.
Accept
This submission provides novel results on the benefits of unlabelled data (through a semi-supervised learning framework) for adversarial robust PAC learning. The results are correct, novel and will be of substantial interest to the corresponding sub-community of NeurIPS. I therefore recommend acceptance. Side note: the following publication studies SSL (PAC-type) sample complexity (upper and lower) bounds with the same algorithmic idea, and I believe should be acknowledged: Ruth Urner, Shai Shalev-Shwartz, Shai Ben-David: Access to Unlabeled Data can Speed up Prediction Time. ICML 2011: 641-648
train
[ "0g1MruByjpj", "ud_Jz1xa-I", "0qjtCqaTNEY", "4yxfEixSg5G", "4RR6XJkL-aI", "0LteekPH8tj", "YvHYqlmYW0", "I-lFO2Q1mTI" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the thoughtful and constructive feedback and for acknowledging that a simple algorithm and clean analysis are actually an advantage.\nWe appreciate the suggestion about the related work section, see the comments below.\n \n$\\textbf{Weaknesses}$:\n\n- \"The work of M Darnstädt, HU Simon ...
[ -1, -1, -1, -1, 5, 7, 6, 8 ]
[ -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "I-lFO2Q1mTI", "YvHYqlmYW0", "0LteekPH8tj", "4RR6XJkL-aI", "nips_2022_B7Q2mbIFa6Q", "nips_2022_B7Q2mbIFa6Q", "nips_2022_B7Q2mbIFa6Q", "nips_2022_B7Q2mbIFa6Q" ]
nips_2022_YgmiL2Ur01P
The First Optimal Acceleration of High-Order Methods in Smooth Convex Optimization
In this paper, we study the fundamental open question of finding the optimal high-order algorithm for solving smooth convex minimization problems. Arjevani et al. (2019) established the lower bound $\Omega\left(\epsilon^{-2/(3p+1)}\right)$ on the number of the $p$-th order oracle calls required by an algorithm to find an $\epsilon$-accurate solution to the problem, where the $p$-th order oracle stands for the computation of the objective function value and the derivatives up to the order $p$. However, the existing state-of-the-art high-order methods of Gasnikov et al. (2019b); Bubeck et al. (2019); Jiang et al. (2019) achieve the oracle complexity $\mathcal{O}\left(\epsilon^{-2/(3p+1)} \log (1/\epsilon)\right)$, which does not match the lower bound. The reason for this is that these algorithms require performing a complex binary search procedure, which makes them neither optimal nor practical. We fix this fundamental issue by providing the first algorithm with $\mathcal{O}\left(\epsilon^{-2/(3p+1)}\right)$ $p$-th order oracle complexity.
Accept
The paper is interesting and the reviewers on average recommend (weak-ish) acceptance. I agree with that assessment.
val
[ "qNIfrrGIg8", "JYVNTCxWbtp", "yaTpHbgf4JU", "o4wSW1-7_5j", "_WQmPBI-SOc", "cr_BY7zRtFC", "UjLm5VsuRKb", "tRIeCtFSUl9", "UMtYCa9p9De", "GZfeXAbcbJI", "eBN0wDnBMj", "vect_WFjut9", "1JmCX7ZnYk", "zeCIvIQa-j2", "PjGKUluBO2z", "_stHhzHbo9C", "nKUf02inT5", "2ab6sMCNT4t", "NWXCtGorE6D",...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " - The next example is finding an optimal primal algorithm for solving decentralized distributed optimization problems over fixed networks. We can mention at least five independent works that tried to achieve the optimal convergence rate but, for some reasons, failed and ended up with extra logarithmic factors:\n ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3 ]
[ "_WQmPBI-SOc", "_WQmPBI-SOc", "o4wSW1-7_5j", "vect_WFjut9", "zeCIvIQa-j2", "nKUf02inT5", "_stHhzHbo9C", "PjGKUluBO2z", "-6WDLgLzubB", "-6WDLgLzubB", "8BIDaT4_pd", "hh5TZ--J9IF", "nips_2022_YgmiL2Ur01P", "NWXCtGorE6D", "NWXCtGorE6D", "NWXCtGorE6D", "NWXCtGorE6D", "NWXCtGorE6D", "n...
nips_2022_L7AV_pDUVCK
A Multilabel Classification Framework for Approximate Nearest Neighbor Search
Both supervised and unsupervised machine learning algorithms have been used to learn partition-based index structures for approximate nearest neighbor (ANN) search. Existing supervised algorithms formulate the learning task as finding a partition in which the nearest neighbors of a training set point belong to the same partition element as the point itself, so that the nearest neighbor candidates can be retrieved by naive lookup or backtracking search. We formulate candidate set selection in ANN search directly as a multilabel classification problem where the labels correspond to the nearest neighbors of the query point, and interpret the partitions as partitioning classifiers for solving this task. Empirical results suggest that the natural classifier based on this interpretation leads to strictly improved performance when combined with any unsupervised or supervised partitioning strategy. We also prove a sufficient condition for consistency of a partitioning classifier for ANN search, and illustrate the result by verifying this condition for chronological $k$-d trees.
Accept
In this paper, the authors propose a novel multi-label classification framework for spatial-partitioning-based approximate nearest neighbor (ANN) search. A space-partitioning-based ANN search algorithm outputs a list of nearest neighbor candidates based on its index structure. This paper formulates candidate set selection as a multi-label classification problem, interpreting the retrieval of candidates as labeling query points. The paper is theoretically solid, and the authors gave precise answers to the reviewers' questions during the discussion period. In particular, the main point of this paper is that existing search-based algorithms can be viewed as specific sub-optimal methods within the multi-label classification framework. The idea is original and the evaluation is appropriate.
train
[ "Nd8mgQcPYzT", "axX4Y3Pjuw7", "8WzlvGS3b1", "zUwBJbfMyGP", "f26Y65UkQEN", "8BfmHquOoA", "W7RLCB3pcxB", "QhCnIKvJv3" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We uploaded a revised version of the article where we addressed the weaknesses pointed out by the reviewers.\n\nThe most important revisions are as follows:\n- We addressed Weakness 3 and Question 2 by Reviewer efJe by discussing the effect of the \"curse of dimensionality\" for the nearest neighbour search on th...
[ -1, -1, -1, -1, -1, 4, 8, 3 ]
[ -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "nips_2022_L7AV_pDUVCK", "QhCnIKvJv3", "W7RLCB3pcxB", "8BfmHquOoA", "nips_2022_L7AV_pDUVCK", "nips_2022_L7AV_pDUVCK", "nips_2022_L7AV_pDUVCK", "nips_2022_L7AV_pDUVCK" ]
nips_2022_XNjCGDr8N-W
Power and limitations of single-qubit native quantum neural networks
Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization. While the applications of QNN have been widely investigated, its theoretical foundation remains less understood. In this paper, we formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks that consist of interleaved encoding circuit blocks and trainable circuit blocks. First, we prove that single-qubit quantum neural networks can approximate any univariate function by mapping the model to a partial Fourier series. We in particular establish the exact correlations between the parameters of the trainable gates and the Fourier coefficients, resolving an open problem on the universal approximation property of QNN. Second, we discuss the limitations of single-qubit native QNNs on approximating multivariate functions by analyzing the frequency spectrum and the flexibility of Fourier coefficients. We further demonstrate the expressivity and limitations of single-qubit native QNNs via numerical experiments. We believe these results would improve our understanding of QNNs and provide a helpful guideline for designing powerful QNNs for machine learning tasks.
Accept
This submission is borderline. Reviewers generally agreed that its theoretical contribution is sound and non-trivial, but at the same time pointed to a major limitation, which is that the theory applies only to single-qubit QNNs and thus disregards entanglement, one of the most important factors in quantum computational systems. This point is clearly acknowledged by the text (including the title), so I do not think much more can be done on the authors' part. I recommend that the paper be accepted, while highly encouraging the authors to at least provide some directions as to how their theory might be relevant to the multi-qubit case.
train
[ "W-_90IGuaN2", "RHlMGtHgH9e", "RW5zYoCQ2wI", "S_BkjiM8cB", "TDXhwzSGQmb", "KT3NXhlMqPd", "DOKhmcWwpho", "aCUCotJQ9jS", "jxfPJmndGQL", "qqxgY6W4SwX", "7RAMC2U2UiJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the responses which answer my questions to some extents. I would like to keep my score. ", " I thank the authors for adressing my comments. \nWith those comments and the surrounding discussion on this platform I can now better understand and appreciate the significance of the authors fi...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 2 ]
[ "KT3NXhlMqPd", "S_BkjiM8cB", "TDXhwzSGQmb", "7RAMC2U2UiJ", "qqxgY6W4SwX", "jxfPJmndGQL", "aCUCotJQ9jS", "nips_2022_XNjCGDr8N-W", "nips_2022_XNjCGDr8N-W", "nips_2022_XNjCGDr8N-W", "nips_2022_XNjCGDr8N-W" ]
nips_2022_nOw2HiKmvk1
Learning Debiased Classifier with Biased Committee
Neural networks are prone to be biased towards spurious correlations between classes and latent attributes exhibited in a major portion of training data, which ruins their generalization capability. We propose a new method for training debiased classifiers with no spurious attribute label. The key idea is to employ a committee of classifiers as an auxiliary module that identifies bias-conflicting data, i.e., data without spurious correlation, and assigns large weights to them when training the main classifier. The committee is learned as a bootstrapped ensemble so that a majority of its classifiers are biased as well as being diverse, and intentionally fail to predict classes of bias-conflicting data accordingly. The consensus within the committee on prediction difficulty thus provides a reliable cue for identifying and weighting bias-conflicting data. Moreover, the committee is also trained with knowledge transferred from the main classifier so that it gradually becomes debiased along with the main classifier and emphasizes more difficult data as training progresses. On five real-world datasets, our method outperforms prior arts using no spurious attribute label like ours and even surpasses those relying on bias labels occasionally. Our code is available at https://github.com/nayeong-v-kim/LWBC.
Accept
The manuscript considers the problem of learning to classify from spuriously correlated data when annotations for the spurious attributes are absent. The general idea is to learn an ensemble of intentionally biased models which will help to identify minority (bias-conflicting) examples in the training data and increase their importance / weight while training a final de-biased model. Additional details of the method include (1) pre-training representations with self-supervised learning and learning shallow models on top of these representations, and (2) alternately updating the biased ensemble and the de-biased classifier, where knowledge is transferred from the classifier to the ensemble. The approach shows strong performance on five datasets: CelebA, ImageNet-9, ImageNet-A, BAR and NICO. Qualitative results are additionally illustrated for the CelebA dataset to provide insight into the sample weights and whether they align with the biases in the dataset. Reviewers acknowledged several positive aspects of the manuscript, including the proposed idea of distilling the knowledge of difficult samples from the main classifier to train committee classifiers. Performing this with a self-supervised backbone was also carefully considered. The manuscript is based on a combination of known techniques (bootstrapped ensemble, using an auxiliary module to identify biases, self-supervised learning, knowledge distillation); however, the proposed approach shows strong performance in practice. The discussion phase addressed many concerns related to the experimental evaluation, including hyperparameter tuning, the importance of the self-supervised backbone, and the importance of bootstrapping.
train
[ "UBu-UPsiKE", "__txsSZ54nd", "kjvz3bmqpUM", "-3iL1H3XExP", "Hr0P3vA-6yD", "G9y6GyaBcN", "sFH0-HCBQDM", "Ps10jOV4xsW", "JwIelK3TZAd", "Byk5QUbWdxK", "0R1uxeUfDM", "NQAsyOVLv1", "yDhet_7WhyV", "yHkWrirg-Lr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed answers to my questions and additional experiments! I hope the authors will include the ablation experiments results into the next revision of the paper. \n\nA few minor comments:\n\nTable R3. For clean comparison I would encourage the authors to fix the architecture while varying the m...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "kjvz3bmqpUM", "Ps10jOV4xsW", "-3iL1H3XExP", "Hr0P3vA-6yD", "0R1uxeUfDM", "sFH0-HCBQDM", "NQAsyOVLv1", "yDhet_7WhyV", "Byk5QUbWdxK", "yHkWrirg-Lr", "nips_2022_nOw2HiKmvk1", "nips_2022_nOw2HiKmvk1", "nips_2022_nOw2HiKmvk1", "nips_2022_nOw2HiKmvk1" ]
nips_2022_eow_ZGaw24j
Effectiveness of Vision Transformer for Fast and Accurate Single-Stage Pedestrian Detection
Vision transformers have demonstrated remarkable performance on a variety of computer vision tasks. In this paper, we illustrate the effectiveness of the deformable vision transformer for single-stage pedestrian detection and propose a spatial and multi-scale feature enhancement module, which aims to achieve the optimal balance between speed and accuracy. Performance improvement with the use of vision transformers on various commonly used single-stage structures is demonstrated. The design of the proposed architecture is investigated in depth. Comprehensive comparisons with state-of-the-art single- and two-stage detectors on various pedestrian datasets are performed. The proposed detector achieves leading performance on Caltech and Citypersons datasets among single- and two-stage methods using fewer parameters than the baseline. The log-average miss rates for Reasonable and Heavy are decreased to 2.6% and 28.0% on the Caltech test set, and 10.9% and 38.6% on the Citypersons validation set respectively. The proposed method outperforms SOTA two-stage detectors in the Heavy subset on the Citypersons validation set with considerably faster inference speed.
Accept
This paper improves single-stage pedestrian detectors with deformable vision transformers. The paper initially received mixed reviews, *i.e.* two weak accept and two borderline reject recommendations. The reviewers' main concerns were essentially related to the novelty and contributions of the submission and to the assessment of the proposed approach's performance. The rebuttal provided elements to clarify the scope and contributions of the paper, but R6hn5 still challenges the novelty. The AC's own reading of the submission leads to the following analysis: - The AC agrees that the novelty can be challenged, in the sense that the submission uses known components, namely deformable transformers and Center and Scale Prediction (CSP), in the context of pedestrian detection. Although the paper is overall well written, a more coarse-to-fine presentation of the approach, starting from the description of the overall pipeline in Figure 3 and then describing the proposed multi-scale deformable attention (MSDA) and CSP, would have helped the reading flow in the AC's opinion. - On the other hand, the paper is clearly motivated, and neither component had previously been successfully applied to pedestrian detection. The approach shows that specific architecture design, *e.g.* with a smaller number of encoders, is required to make the approach successful. The combination of MSDA and CSP in this context is also meaningful and experimentally validated in ablation studies. Finally, the absolute performance reached by the approach is convincing, reducing the gap between single-stage approaches and two-stage methods, and even outperforming state-of-the-art performance in several cases. Therefore, the AC recommends paper acceptance. It highly encourages the authors to take into account the reviewers' and AC's remarks to improve the final paper.
train
[ "1xa5A6e1na", "o1xUJgq7XDh", "Hcj_e2sox_k", "tGu5omXMull", "KZhFWom2B0", "HmgvHJ5Bs2", "Abrh9GWEoEV", "2NyGCdTbd6V", "78rIPkzQbLA", "N1IcKwIu12_" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " - The key contribution is showing the effectiveness of the deformable vision transformer to address a significant issue in single-stage detectors which is the lack of accuracy. \n- Our motivation is to address the lack of spatially adaptive features in single-stage pedestrian detectors. It is proved that this ca...
[ -1, -1, -1, -1, -1, -1, 6, 6, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 3, 4 ]
[ "o1xUJgq7XDh", "tGu5omXMull", "N1IcKwIu12_", "78rIPkzQbLA", "2NyGCdTbd6V", "Abrh9GWEoEV", "nips_2022_eow_ZGaw24j", "nips_2022_eow_ZGaw24j", "nips_2022_eow_ZGaw24j", "nips_2022_eow_ZGaw24j" ]
nips_2022__cXUMAnWJJj
Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study
Neural tangent kernel (NTK) is a powerful tool to analyze training dynamics of neural networks and their generalization bounds. The study on NTK has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp), e.g., StyleGAN and polynomial neural networks (PNNs). In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks. We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK. Based on our results, we elucidate the separation of PNNs over standard neural networks with respect to extrapolation and spectral bias. Our two key insights are that when compared to standard neural networks, PNNs can fit more complicated functions in the extrapolation regime and admit a slower eigenvalue decay of the respective NTK, leading to a faster learning towards high-frequency functions. Besides, our theoretical results can be extended to other types of NNs-Hp, which expand the scope of our work. Our empirical results validate the separations in broader classes of NNs-Hp, which provide a good justification for a deeper understanding of neural architectures.
Accept
This paper derives the NTK of Hadamard-product neural networks and shows the different behavior from the standard neural networks in terms of spectral bias in the extrapolation regime. After the author response and author-reviewer discussion, all the reviewers are in support of accepting this paper. Therefore, I recommend acceptance. Regarding the NTK-based optimization analysis, it seems that the following paper is not mentioned, which is a concurrent work with [Allen-Zhu et al.,2019, Chizat et al., 2019, Du et al., 2019a, 2018]. [*] Zou et al. Gradient Descent Optimizes Over-parameterized Deep ReLU Networks, Machine Learning, 2020. Please address the missing reference and prepare the camera ready by incorporating the author response.
train
[ "zwBVdAP9Ij", "VjOjqz1MFQX", "5uGRai95jR", "3ynt-CxHULW", "KEYX4T5uFPu", "kMmS-vl2ZQz", "56MG5FARl00", "qzDw66J39-9", "gC1Qd8WhZGf", "0Rv2uEcJmhO", "8bxEocBQRsR", "RKcQHMSHdei", "Y74m_GQmM1z", "JtK0uIS3QdA", "1mNU-MpEyA8", "8dbWapvzmEQ", "b6aetRYo_i8", "z-9ldjKEsR2", "D8dKGdNZCEJ...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " Dear reviewers and AC,\n\n**Tl-dr**: We are truly thankful for all your insightful comments. Those constructive comments have enabled us to make a number of updates in the paper that we believe have further clarified our contributions, strengthened our theoretical contributions (e.g. new proofs in NNs-HP), and pr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 4 ]
[ "nips_2022__cXUMAnWJJj", "5uGRai95jR", "kMmS-vl2ZQz", "KEYX4T5uFPu", "RKcQHMSHdei", "JtK0uIS3QdA", "Y74m_GQmM1z", "gC1Qd8WhZGf", "0Rv2uEcJmhO", "8bxEocBQRsR", "1mNU-MpEyA8", "a4wYlrbAU3", "p0MD640yi6S", "D8dKGdNZCEJ", "z-9ldjKEsR2", "z-9ldjKEsR2", "nips_2022__cXUMAnWJJj", "nips_202...
nips_2022_-3cHWtrbLYq
Local Identifiability of Deep ReLU Neural Networks: the Theory
Is a sample rich enough to determine, at least locally, the parameters of a neural network? To answer this question, we introduce a new local parameterization of a given deep ReLU neural network by fixing the values of some of its weights. This allows us to define local lifting operators whose inverses are charts of a smooth manifold of a high dimensional space. The function implemented by the deep ReLU neural network composes the local lifting with a linear operator which depends on the sample. We derive from this convenient representation a geometrical necessary and sufficient condition of local identifiability. Looking at tangent spaces, the geometrical condition provides: 1/ a sharp and testable necessary condition of identifiability and 2/ a sharp and testable sufficient condition of local identifiability. The validity of the conditions can be tested numerically using backpropagation and matrix rank computations.
Accept
Ratings: 6/5/5. Confidence: 3/4/3. Discussion among reviewers: No. This paper provides results on local identifiability of deep ReLU networks. Identifiability of neural networks is an important theoretical topic, with practical implications such as reproducibility, and we think the NeurIPS community would find the material interesting. Although the result holds only under fairly specific assumptions that typically don't hold in practice, and concerns only local identifiability, the reviewers generally agree that the material is well presented. I think the result could serve as a stepping stone towards more general results, and I think that the results are worthy of presentation at NeurIPS. My recommendation is to accept, assuming the list of promised updates to the paper, as detailed by the authors, is executed.
test
[ "2n0W701kkFW", "Ui1H8nz1bG", "wKIJ1saIiy", "j1ezFspCAB", "Lppdl7OYQG5", "4vlSUQnpDLf", "MXbL0vgljCR", "A0OxkKJwsk4", "1kB7TBwcYEJ", "CdEXOsYpsE", "iRH2-AU2go", "NFQcW69NZAT", "k7QWkeTfcJA", "Swn9SHg1-L", "3-EFVnSoJB1", "eNpoc5JPxWC", "Jf8bwTigWq6", "h7EGY8F9S0c", "UCzyWl3dwOh", ...
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ " Thank you for increasing your score. We appreciate it.\n\n**On changes in the manuscript**\n\nIndeed, we will add a simplified version of our responses on related works and paper motivation to the Introduction, especially the discussion in Response 1.b. We have posted a comment entitled \"*Summary of changes in t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "4vlSUQnpDLf", "A0OxkKJwsk4", "1kB7TBwcYEJ", "MXbL0vgljCR", "CdEXOsYpsE", "3-EFVnSoJB1", "h7EGY8F9S0c", "1kB7TBwcYEJ", "eNpoc5JPxWC", "nips_2022_-3cHWtrbLYq", "aAUfpnWHq5D", "k7QWkeTfcJA", "Swn9SHg1-L", "3-EFVnSoJB1", "rKC0QFrHNUn", "Jf8bwTigWq6", "UCzyWl3dwOh", "nips_2022_-3cHWtrb...
nips_2022_pD5Pl5hen_g
The First Optimal Algorithm for Smooth and Strongly-Convex-Strongly-Concave Minimax Optimization
In this paper, we revisit the smooth and strongly-convex-strongly-concave minimax optimization problem. Zhang et al. (2021) and Ibrahim et al. (2020) established the lower bound $\Omega\left(\sqrt{\kappa_x\kappa_y} \log \frac{1}{\epsilon}\right)$ on the number of gradient evaluations required to find an $\epsilon$-accurate solution, where $\kappa_x$ and $\kappa_y$ are condition numbers for the strong convexity and strong concavity assumptions. However, the existing state-of-the-art methods do not match this lower bound: algorithms of Lin et al. (2020) and Wang and Li (2020) have gradient evaluation complexity $\mathcal{O}\left(\sqrt{\kappa_x\kappa_y} \log^3 \frac{1}{\epsilon}\right)$ and $\mathcal{O}\left( \sqrt{\kappa_x\kappa_y}\log^3 (\kappa_x\kappa_y)\log\frac{1}{\epsilon}\right)$, respectively. We fix this fundamental issue by providing the first algorithm with $\mathcal{O}\left(\sqrt{\kappa_x\kappa_y} \log \frac{1}{\epsilon}\right)$ gradient evaluation complexity. We design our algorithm in three steps: (i) we reformulate the original problem as a minimization problem via the pointwise conjugate function; (ii) we apply a specific variant of the proximal point algorithm to the reformulated problem; (iii) we compute the proximal operator inexactly using the optimal algorithm for operator norm reduction in monotone inclusions.
Accept
The reviewers all agree that the paper considers an important problem, and the results are novel and interesting. Congratulations!
train
[ "BCAzFZyPCKe", "qPTh0yEUlLa", "eVfkvl8_1d", "xxkyddPp_Oj", "NOZx93amb-4", "ulrf3tW26S0", "_tP-tx42UM8", "ybTPpPqGq_8", "ZN91XEjdbqK", "dgCyT0f3Nzp", "cSBHyRLcN2", "_aenWJJ-S6W", "LltHMg2Et2S", "2Vux6mFlKkr", "Wh4jqDBxBHj", "S-QdGZmdK_", "EqmuneCihZh" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Indeed, both papers (Thekumparampil et al., 2022) and (Jin et al., 2022) use Fenchel conjugate functions $p^*(x)$ and $q^*(y)$ to solve the minimax optimization problem with bilinear coupling\n$$\n\\min_x \\max_y p(x) + x^\\top A y - q(y).\n$$\nHowever, the ideas used in these papers are substantially different f...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 2 ]
[ "eVfkvl8_1d", "eVfkvl8_1d", "LltHMg2Et2S", "NOZx93amb-4", "dgCyT0f3Nzp", "_tP-tx42UM8", "ZN91XEjdbqK", "EqmuneCihZh", "S-QdGZmdK_", "Wh4jqDBxBHj", "nips_2022_pD5Pl5hen_g", "2Vux6mFlKkr", "2Vux6mFlKkr", "nips_2022_pD5Pl5hen_g", "nips_2022_pD5Pl5hen_g", "nips_2022_pD5Pl5hen_g", "nips_2...
nips_2022_-8tU21J6BcB
On the Robustness of Graph Neural Diffusion to Topology Perturbations
Neural diffusion on graphs is a novel class of graph neural networks that has attracted increasing attention recently. The capability of graph neural partial differential equations (PDEs) in addressing common hurdles of graph neural networks (GNNs), such as the problems of over-smoothing and bottlenecks, has been investigated but not their robustness to adversarial attacks. In this work, we explore the robustness properties of graph neural PDEs. We empirically demonstrate that graph neural PDEs are intrinsically more robust against topology perturbation as compared to other GNNs. We provide insights into this phenomenon by exploiting the stability of the heat semigroup under graph topology perturbations. We discuss various graph diffusion operators and relate them to existing graph neural PDEs. Furthermore, we propose a general graph neural PDE framework based on which a new class of robust GNNs can be defined. We verify that the new model achieves comparable state-of-the-art performance on several benchmark datasets.
Accept
This paper studies the robustness properties of graph neural partial differential equations (PDEs) and empirically demonstrates that graph neural PDEs are intrinsically more robust against topology perturbations compared to other graph neural networks. The reviewers found the experiments extensive and convincing. The authors provided an extensive and detailed discussion in the rebuttal phase and answered the questions raised by the reviewers. Overall, the reviewers believe that the paper discusses an interesting and important topic, but also provided some comments for improvement, such as comparing the PGD attacks with other GNNs / defense methods.
train
[ "gJfQp2qaDa3", "yxTW0CKe84d", "D9WIbtrOHC6", "yu6SM-H7VMQ", "8Hbo9x_vUVN", "23qJYOEuHu", "eENXL32_DAj", "v2cRfmEh6ln", "KzpjWFyXXTC", "cFTDBd9ZNV4", "Q-iBFYFg_hi", "EZ2THhbb54C", "oe5CbE4Ytrv", "SxxhRKu8-og", "6tzW_XL199Q", "PyKHbBDdzHN", "bJpwWrdmbS0M", "Vj40SwgFOS", "NO7MII2V05...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author"...
[ " Thank you sincerely again for your enormous efforts and time spent in the reviewing process. Your comments indeed help us improve our work a lot! ", " I thank the authors for their detailed response. In particular, the second paragraph describes the non-trivial connection between geodesic distance perturbation...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "yxTW0CKe84d", "KzpjWFyXXTC", "yu6SM-H7VMQ", "23qJYOEuHu", "KoNRnIesfi4", "R41zapmdEKS", "SdTqjmJuDL", "Q-iBFYFg_hi", "cFTDBd9ZNV4", "ae-O0v3_XJA", "qQtMzf1jt5", "nips_2022_-8tU21J6BcB", "nips_2022_-8tU21J6BcB", "6tzW_XL199Q", "tCFApKyJ84Z", "Qs8P1LxEQQ-", "Vj40SwgFOS", "R41zapmdEK...
nips_2022_BWa5IUE3L4
Graph Neural Network Bandits
We consider the bandit optimization problem with the reward function defined over graph-structured data. This problem has important applications in molecule design and drug discovery, where the reward is naturally invariant to graph permutations. The key challenges in this setting are scaling to large domains, and to graphs with many nodes. We resolve these challenges by embedding the permutation invariance into our model. In particular, we show that graph neural networks (GNNs) can be used to estimate the reward function, assuming it resides in the Reproducing Kernel Hilbert Space of a permutation-invariant additive kernel. By establishing a novel connection between such kernels and the graph neural tangent kernel (GNTK), we introduce the first GNN confidence bound and use it to design a phased-elimination algorithm with sublinear regret. Our regret bound depends on the GNTK's maximum information gain, which we also provide a bound for. Perhaps surprisingly, even though the reward function depends on all $N$ node features, our guarantees are independent of the number of graph nodes $N$. Empirically, our approach exhibits competitive performance and scales well on graph-structured domains.
Accept
This paper studies a bandit optimization problem where the rewards are smooth on a graph. The authors show that GNNs can be used to estimate the reward functions and confidence bounds. This is then used to design a phased elimination algorithm. The reviewers agree that the paper is interesting and makes a significant contribution to the area -- I recommend its acceptance.
train
[ "rg_fTx4M1M", "c4sCF9dBT-n", "qMXGwU-oJu", "5mjA_4QzEZG", "AgTDHx35HFX", "DGfvyA9j72t", "k9BNtGJPLL", "jgHD3R9W4m", "E6zokZlFnm0", "-8TuxAg5aCB", "hCJihWQ5B_T" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for reconsidering your assessment. \nCould you please also update the score in your review to ``border-line accept”?\n\nWe agree that as future work, one could study the implications of our results towards a better understanding of GNNs. ", " Based on the satisfactory answers of the authors I upgrade ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "c4sCF9dBT-n", "DGfvyA9j72t", "AgTDHx35HFX", "hCJihWQ5B_T", "k9BNtGJPLL", "hCJihWQ5B_T", "-8TuxAg5aCB", "E6zokZlFnm0", "nips_2022_BWa5IUE3L4", "nips_2022_BWa5IUE3L4", "nips_2022_BWa5IUE3L4" ]
nips_2022_cIpU8OzGSCU
SelecMix: Debiased Learning by Contradicting-pair Sampling
Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended decision rules, in particular when their training data is biased, i.e., when training labels are strongly correlated with undesirable features. To prevent a network from learning such features, recent methods augment training data such that examples displaying spurious correlations (i.e., bias-aligned examples) become a minority, whereas the other, bias-conflicting examples become prevalent. However, these approaches are sometimes difficult to train and scale to real-world data because they rely on generative models or disentangled representations. We propose an alternative based on mixup, a popular augmentation that creates convex combinations of training examples. Our method, coined SelecMix, applies mixup to contradicting pairs of examples, defined as showing either (i) the same label but dissimilar biased features, or (ii) different labels but similar biased features. Identifying such pairs requires comparing examples with respect to unknown biased features. For this, we utilize an auxiliary contrastive model with the popular heuristic that biased features are learned preferentially during training. Experiments on standard benchmarks demonstrate the effectiveness of the method, in particular when label noise complicates the identification of bias-conflicting examples.
Accept
This paper proposes to handle so-called spurious / undesirable signals that are correlated with but do not entail the label (and where it is considered undesirable that a classifier should rely on these signals). The authors propose a variation of the mixup training heuristic where for each example, one selects a bias-conflicting pair. Because the bias-conflicting pairs are rare, they are oversampled to form the mixup pairs and intuitively, this makes the biased signal less predictive. The authors compare against other methods in the case where the "spurious feature" is known and propose a further heuristic for automatically providing pseudo "bias labels" based on the intuition that spurious features of concern are often "easy to learn" and thus examples tend to be grouped together by their spurious features earlier in training. This seems to work well on some toy datasets but the degree to which guesses are piled upon guesses here is of concern. Overall, this is a borderline paper, with 2 of 3 reviewers liking it and one championing it for acceptance.
train
[ "F69uNyW7HoD", "edK-Wnv5UAR", "9DIEzhJO565", "0cn9o6v9ekd", "qp6JNr8nQ9S", "ojfg2rHJgGV", "EjPQyn8amSM", "UNAhzt5NFT", "FAltFZdPHDP" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We really appreciate the reviewers’ time and effort. We followed all recommendations and included the suggested discussions, as well as additional experiments in Appendix A.8-A.10. We also improved clarity and rigor throughout the writing. Updates are highlighted in blue.", " I appreciate the authors for addres...
[ -1, -1, -1, -1, -1, -1, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 1, 5 ]
[ "nips_2022_cIpU8OzGSCU", "qp6JNr8nQ9S", "0cn9o6v9ekd", "EjPQyn8amSM", "FAltFZdPHDP", "UNAhzt5NFT", "nips_2022_cIpU8OzGSCU", "nips_2022_cIpU8OzGSCU", "nips_2022_cIpU8OzGSCU" ]
nips_2022_COAcbu3_k4U
Maximum Common Subgraph Guided Graph Retrieval: Late and Early Interaction Networks
The graph retrieval problem is to search in a large corpus of graphs for ones that are most similar to a query graph. A common consideration for scoring similarity is the maximum common subgraph (MCS) between the query and corpus graphs, usually counting the number of common edges (i.e., MCES). In some applications, it is also desirable that the common subgraph be connected, i.e., the maximum common connected subgraph (MCCS). Finding exact MCES and MCCS is intractable, but may be unnecessary if ranking corpus graphs by relevance is the goal. We design fast and trainable neural functions that approximate MCES and MCCS well. Late interaction methods compute dense representations for the query and corpus graph separately, and compare these representations using simple similarity functions at the last stage, leading to highly scalable systems. Early interaction methods combine information from both graphs right from the input stages, are usually considerably more accurate, but slower. We propose both late and early interaction neural MCES and MCCS formulations. They are both based on a continuous relaxation of a node alignment matrix between query and corpus nodes. For MCCS, we propose a novel differentiable network for estimating the size of the largest connected common subgraph. Extensive experiments with seven data sets show that our proposals are superior among late interaction models in terms of both accuracy and speed. Our early interaction models provide accuracy competitive with the state of the art, at substantially greater speeds.
Accept
This paper presents a neural method for the graph retrieval problem. Reviewers agree on the technical contribution of this paper, its empirical soundness, and the well-written presentation. Discussions in the rebuttal and additional experiments provided useful information and made this paper stronger. We suggest the authors update their next version according to reviewers’ suggestions and rebuttal discussions.
test
[ "khFUovFiaws", "_AE8ZpFTzbN", "CZfME-m0FlQ", "j7Omrtnb3tj", "08YMS1vYRv", "5yN5v4RbMI8", "RDK1n2uHl2", "ioInBa0uxFW" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\nThank you for your comments and feedback to different issues pointed by all reviewers. I would like to retain the original score I had assigned to this paper. ", " > * Does different GNN architecture affect the results?\n\nOur empirical analysis revealed that our graph encoder module provides in...
[ -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "08YMS1vYRv", "ioInBa0uxFW", "j7Omrtnb3tj", "RDK1n2uHl2", "5yN5v4RbMI8", "nips_2022_COAcbu3_k4U", "nips_2022_COAcbu3_k4U", "nips_2022_COAcbu3_k4U" ]
nips_2022_4iEoOIQ7nL
Proximal Point Imitation Learning
This work develops new algorithms with rigorous efficiency guarantees for infinite horizon imitation learning (IL) with linear function approximation without restrictive coherence assumptions. We begin with the minimax formulation of the problem and then outline how to leverage classical tools from optimization, in particular, the proximal-point method (PPM) and dual smoothing, for online and offline IL, respectively. Thanks to PPM, we avoid nested policy evaluation and cost updates for online IL appearing in the prior literature. In particular, we do away with the conventional alternating updates by the optimization of a single convex and smooth objective over both cost and $Q$-functions. When solved inexactly, we relate the optimization errors to the suboptimality of the recovered policy. As an added bonus, by re-interpreting PPM as dual smoothing with the expert policy as a center point, we also obtain an offline IL algorithm enjoying theoretical guarantees in terms of required expert trajectories. Finally, we achieve convincing empirical performance for both linear and neural network function approximation.
Accept
It was agreed among the reviewers and AC that the paper should be accepted. Hope the authors will address the remaining comments from the reviews in preparing the final version of the paper.
val
[ "s1fRmHDHhU8", "LYcTWbm8Akw", "ZZc3nRO1SDY", "DQINpIVOMAN", "RNWAJzQ-rXa", "0k579KeUPbA", "HrbNEukhwleA", "xDZ55lSwVlx", "pSaBtpWReXR", "9YtIc8KbQ_e", "3vq3jP3E_Vw", "CaUtQ_98Ap", "n0-pzjRZzPt", "MsncHtyura", "1TQyxu4uzW5", "7Hror2oTCXH", "-YNXs9_zLY4", "RA0eq_7IxH", "yUqOpWv1Oiw...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nThank you for your answer. We are happy that we addressed some of your points. Could you please tell us which are your remaining concerns?\n\nThanks again.\n\nBest,\nAuthors \n\n", " Thanks to the authors for their responses.\n\nSome questions have been addressed and therefore, I will keep the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "LYcTWbm8Akw", "3vq3jP3E_Vw", "nips_2022_4iEoOIQ7nL", "RNWAJzQ-rXa", "0k579KeUPbA", "nips_2022_4iEoOIQ7nL", "nips_2022_4iEoOIQ7nL", "pSaBtpWReXR", "9YtIc8KbQ_e", "yUqOpWv1Oiw", "CaUtQ_98Ap", "n0-pzjRZzPt", "RA0eq_7IxH", "1TQyxu4uzW5", "7Hror2oTCXH", "-YNXs9_zLY4", "U0dO87XcxJQ", "n...
nips_2022_pF8btdPVTL_
Benign Overfitting in Two-layer Convolutional Neural Networks
Modern neural networks often have great expressive power and can be trained to overfit the training data, while still achieving a good test performance. This phenomenon is referred to as “benign overfitting”. Recently, a line of work has emerged studying “benign overfitting” from the theoretical perspective. However, these works are limited to linear models or kernel/random feature models, and there is still a lack of theoretical understanding about when and how benign overfitting occurs in neural networks. In this paper, we study the benign overfitting phenomenon in training a two-layer convolutional neural network (CNN). We show that when the signal-to-noise ratio satisfies a certain condition, a two-layer CNN trained by gradient descent can achieve arbitrarily small training and test loss. On the other hand, when this condition does not hold, overfitting becomes harmful and the obtained CNN can only achieve a constant level test loss. These together demonstrate a sharp phase transition between benign overfitting and harmful overfitting, driven by the signal-to-noise ratio. To the best of our knowledge, this is the first work that precisely characterizes the conditions under which benign overfitting can occur in training convolutional neural networks.
Accept
This paper studies the benign overfitting phenomenon in training a two-layer convolutional neural network (CNN). Most importantly, this paper introduces the SNR ratio in the analysis of benign overfitting. Under Condition 4.2, this paper provides the phase transition regime between "benign overfitting" and "harmful overfitting" in terms of SNR. A common concern from reviewers is the lack of connection to practical settings. Still, all the reviewers agree to accept. I also think the theoretical result of this paper is interesting to the community, and I recommend acceptance.
train
[ "pJCpPLZNW8", "flVUdH3y07J", "UnfH6KA35jQ", "-oBjW8PkLdT", "2xotQHM6NEe", "T9X_RbIEkkQ", "boOPAnW_q6n", "JV4nGE_8YQ", "0EiCYi_uOGe", "aQJnbHKbdv9" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your follow-up questions. \n\nOur analysis can cover the case that the signal appears in multiple patches. For example, for the case you mentioned where half of the signal is in one patch and the other half is in another, it can be covered by our analysis with slight modification. This is because ou...
[ -1, -1, -1, -1, -1, -1, 7, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "flVUdH3y07J", "2xotQHM6NEe", "aQJnbHKbdv9", "0EiCYi_uOGe", "JV4nGE_8YQ", "boOPAnW_q6n", "nips_2022_pF8btdPVTL_", "nips_2022_pF8btdPVTL_", "nips_2022_pF8btdPVTL_", "nips_2022_pF8btdPVTL_" ]
nips_2022_4R5x8no2Ts-
Joint Entropy Search For Maximally-Informed Bayesian Optimization
Information-theoretic Bayesian optimization techniques have become popular for optimizing expensive-to-evaluate black-box functions due to their non-myopic qualities. Entropy Search and Predictive Entropy Search both consider the entropy over the optimum in the input space, while the recent Max-value Entropy Search considers the entropy over the optimal value in the output space. We propose Joint Entropy Search (JES), a novel information-theoretic acquisition function that considers an entirely new quantity, namely the entropy over the joint optimal probability density over both input and output space. To incorporate this information, we consider the reduction in entropy from conditioning on fantasized optimal input/output pairs. The resulting approach primarily relies on standard GP machinery and removes complex approximations typically associated with information-theoretic methods. With minimal computational overhead, JES shows superior decision-making, and yields state-of-the-art performance for information-theoretic approaches across a wide suite of tasks. As a light-weight approach with superior results, JES provides a new go-to acquisition function for Bayesian optimization.
Accept
The authors propose a Bayesian optimization method for black-box optimization problems using a new entropy search-type acquisition function that combines the advantages of predictive entropy search and max-value entropy search. The proposed method, joint entropy search (JES), uses the mutual information between the optimal solution/optimal value pair (x^*, f^*) and the candidate query points (x, y) as the acquisition function. In the experiments, the authors first explained the difference in behavior between JES and the baselines PES or MES through optimization of the benchmark function, and showed experimental results suggesting that JES is effective. The practical performance of JES compared to PES and MES was also evaluated by the MLP hyperparameter optimization task. Strengths: 1 - The intuition behind the basic idea is simple. 2 - A variety of empirical evaluations are provided. 3 - The inverse gamma greedy method is proposed to mitigate the influence of model misspecification, which is a property of all entropy search type acquisition functions including PES and MES. 4 - Experiments suggest that JES is more robust to observation noise than PES or MES. Weaknesses: 1 - While the method is very interesting, it also feels like an incremental step from PES and MES. 2 - As pointed out by reviewer 7H23, the authors incorrectly state that conditioning on f* yields a truncated Gaussian predictive distribution. Reviewer 7H23 correctly states that "it is known that a marginal distribution of multivariate truncated Gaussian is not a truncated Gaussian". The authors should revise the paper to indicate that they make the additional approximation that conditioning on f* yields a truncated Gaussian predictive distribution. Decision: A majority of reviewers are strongly positive about the paper. The only negative reviewer, 7H23, points out minor limitations, with the exception of the mistake made by the authors, as described in point 2 in the list of weaknesses above. 
The authors need to update the paper to clarify that they make the additional approximation that conditioning on f* yields a truncated Gaussian predictive distribution.
train
[ "MfkcsJlWNa", "4n6BGJG2K7O", "RPXASx9CyL8", "sqH7IK2yXwk", "6-5N4qDeN-E", "OPvL_iLV0vc", "s6N7Jl_OaTrI", "Jxr3hDlxp7J8", "m9IiBrFtJua", "KryJdw-o-OK", "_pZvJ3N3ft", "4zcOvPSUXH", "JgGhjd0iZ_Z", "xtnmqjgzCCV", "VaClraP9_A", "wkAUx5mukIC", "-gOVF2NijlH", "HIdKlMzkOpA", "QGa9VM2QXoJ...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " Thank you for your careful responses to my concerns. \nThe comments I received helped me understand the inverse gamma greedy method.\nBased on the responses I have received and the comments of other reviewers, I would like to raise my score by one.", " We once again thank the reviewer for the initial feedback. ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "RPXASx9CyL8", "6-5N4qDeN-E", "-gOVF2NijlH", "HIdKlMzkOpA", "xtnmqjgzCCV", "s6N7Jl_OaTrI", "Jxr3hDlxp7J8", "VaClraP9_A", "J4dwAzFf4o3", "HIdKlMzkOpA", "HIdKlMzkOpA", "HIdKlMzkOpA", "HIdKlMzkOpA", "QGa9VM2QXoJ", "J4dwAzFf4o3", "-gOVF2NijlH", "nips_2022_4R5x8no2Ts-", "nips_2022_4R5x8...
nips_2022_QXiYW3TrgXj
On the Learning Mechanisms in Physical Reasoning
Is dynamics prediction indispensable for physical reasoning? If so, what kind of roles do the dynamics prediction modules play during the physical reasoning process? Most studies focus on designing dynamics prediction networks and treating physical reasoning as a downstream task without investigating the questions above, taking for granted that the designed dynamics prediction would undoubtedly help the reasoning process. In this work, we take a closer look at this assumption, exploring this fundamental hypothesis by comparing two learning mechanisms: Learning from Dynamics (LfD) and Learning from Intuition (LfI). In the first experiment, we directly examine and compare these two mechanisms. Results show a surprising finding: Simple LfI is better than or on par with state-of-the-art LfD. This observation leads to the second experiment with Ground-truth Dynamics (GD), the ideal case of LfD wherein dynamics are obtained directly from a simulator. Results show that dynamics, if directly given instead of approximated, would achieve much higher performance than LfI alone on physical reasoning; this essentially serves as the performance upper bound. Yet practically, the LfD mechanism can only predict Approximate Dynamics (AD) using dynamics learning modules that mimic the physical laws, making the following downstream physical reasoning modules degenerate into the LfI paradigm; see the third experiment. We note that this issue is hard to mitigate, as dynamics prediction errors inevitably accumulate over the long horizon. Finally, in the fourth experiment, we note that LfI, the far simpler strategy when done right, is more effective in learning to solve physical reasoning problems. Taken together, the results on the challenging benchmark of PHYRE show that LfI is, if not better, as good as LfD with bells and whistles for dynamics prediction. However, the potential improvement from LfD, though challenging, remains lucrative.
Accept
This paper investigates a simple and important question: does learning to predict physical dynamics help an agent perform better physical reasoning? While most prior work automatically treats this as a given, the paper provides interesting findings that intuition-based learning is better than dynamics-based learning, especially when the dynamics model is approximate. All the reviewers appreciated the clear writing and thorough experiments performed. I believe this will be an insightful and impactful paper.
train
[ "iIha0Jca7b", "-zKH0ZMu_a0", "ijRrmN3YmDZ", "GGAEIdrJDju", "w2xhalTkgA", "pPhBW3fA2N6", "JI4IcBxLhK", "nDBAZuPLzH_", "gB06ggiYv2U", "3e0gYVLzKkh" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed response in answering my questions and concerns. After reading the other reviews and author responses, I will keep my current score! ", " We are grateful for your positive feedback on our experimental design and paper presentation, as well as for providing us with a potential LfD mod...
[ -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "GGAEIdrJDju", "3e0gYVLzKkh", "gB06ggiYv2U", "w2xhalTkgA", "nDBAZuPLzH_", "JI4IcBxLhK", "nips_2022_QXiYW3TrgXj", "nips_2022_QXiYW3TrgXj", "nips_2022_QXiYW3TrgXj", "nips_2022_QXiYW3TrgXj" ]
nips_2022_am86qcwErJm
Towards a Standardised Performance Evaluation Protocol for Cooperative MARL
Multi-agent reinforcement learning (MARL) has emerged as a useful approach to solving decentralised decision-making problems at scale. Research in the field has been growing steadily with many breakthrough algorithms proposed in recent years. In this work, we take a closer look at this rapid development with a focus on evaluation methodologies employed across a large body of research in cooperative MARL. By conducting a detailed meta-analysis of prior work, spanning 75 papers accepted for publication from 2016 to 2022, we bring to light worrying trends that put into question the true rate of progress. We further consider these trends in a wider context and take inspiration from single-agent RL literature on similar issues with recommendations that remain applicable to MARL. Combining these recommendations, with novel insights from our analysis, we propose a standardised performance evaluation protocol for cooperative MARL. We argue that such a standard protocol, if widely adopted, would greatly improve the validity and credibility of future research, make replication and reproducibility easier, as well as improve the ability of the field to accurately gauge the rate of progress over time by being able to make sound comparisons across different works. Finally, we release our meta-analysis data publicly on our project website for future research on evaluation accompanied by our open-source evaluation tools repository.
Accept
This paper performs a meta-research on cooperative MARL research, identifies three main issues, and proposes a standard evaluation protocol. All the reviewers agree that the paper is well written, provides interesting perspectives, and recommends reasonable solutions. A common concern that was shared by all reviewers is that the discussed issues were already highlighted in prior works on single-agent RL. The "Reply to all reviewers" in the rebuttal clearly addressed the above concern. After the discussion, all reviewers agree that publishing this study, including both the meta-analyses and the proposed evaluation protocol, would be beneficial for the MARL community. Thus, I recommend accepting this paper.
train
[ "RkGoT1fil_", "fiGHa18Trq0", "MzzMTYOuX7I", "XkFiGMLPv8O", "AAqC7HFXGI-", "SUu9cYhe2tV", "a-8UCQlRr9-", "F6c8lOt6az8", "gt8Dm6wzHiz", "6ITv6s3TZFI", "YwaNFgPdX0c", "CtSoCqWAua", "5Se4RXhiJHs", "t36P4KgVgYz" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Firstly, we would like to express our gratitude to the reviewer for taking the time to go through our reply. \n\n“If this is the case, the values in the plot are better associated with the metrics, otherwise it could be an unfair comparison for different reported values.”\n\nWe appreciate the reviewer's comments ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "MzzMTYOuX7I", "XkFiGMLPv8O", "a-8UCQlRr9-", "F6c8lOt6az8", "t36P4KgVgYz", "5Se4RXhiJHs", "CtSoCqWAua", "YwaNFgPdX0c", "6ITv6s3TZFI", "nips_2022_am86qcwErJm", "nips_2022_am86qcwErJm", "nips_2022_am86qcwErJm", "nips_2022_am86qcwErJm", "nips_2022_am86qcwErJm" ]
nips_2022_HaZuqj0Gvp2
Inverse Design for Fluid-Structure Interactions using Graph Network Simulators
Designing physical artifacts that serve a purpose---such as tools and other functional structures---is central to engineering as well as everyday human behavior. Though automating design using machine learning has tremendous promise, existing methods are often limited by the task-dependent distributions they were exposed to during training. Here we showcase a task-agnostic approach to inverse design, by combining general-purpose graph network simulators with gradient-based design optimization. This constitutes a simple, fast, and reusable approach that solves high-dimensional problems with complex physical dynamics, including designing surfaces and tools to manipulate fluid flows and optimizing the shape of an airfoil to minimize drag. This framework produces high-quality designs by propagating gradients through trajectories of hundreds of steps, even when using models that were pre-trained for single-step predictions on data substantially different from the design tasks. In our fluid manipulation tasks, the resulting designs outperformed those found by sampling-based optimization techniques. In airfoil design, they matched the quality of those obtained with a specialized solver. Our results suggest that despite some remaining challenges, machine learning-based simulators are maturing to the point where they can support general-purpose design optimization across a variety of fluid-structure interaction domains.
Accept
This work proposes to use GNNs as good simulators of fluid dynamics. It is a solid example of using neural networks as differentiable simulators that can then be used to solve for the design of components in fluid manipulation tasks. Overall, this work has been well received and should inspire related work in inverse design. The authors are encouraged to take the detailed reviews into account for the camera-ready version and especially make sure to make high-quality, reproducible code available to the community. There has also been discussion about the naming of the paper to reflect its proper scope, and that should be considered as well.
val
[ "-4MKh2j52R-", "A0aN_BRlRmp", "ckdivwEj03", "nG4UPO7E8H5", "sSRAckHxqVIr", "qJfq5LFOZQ3", "mDqr8LO-iYA", "dg_Ybmr2z7O", "xAxAui7IXFL", "U50IwNtMlUi", "GvYl0vn2q9_", "tKEs_VmURPd", "gsdhpVYUOV9", "pyFFw0I2-Qz", "zQn0NxRqM1p", "O5jGh7myxZg", "RQuE3Y4YunJ", "01_ELzHxJB1", "UlDo10bfT...
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Thanks for your clarification. \n\n> We disagree that the paper would be better if it introduced a new model architecture or algorithm. As a community, we produce an enormous number of model variations but very few papers studying them thoroughly. This holds back progress\n\nSure, it is totally fine with me that ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "qJfq5LFOZQ3", "xAxAui7IXFL", "qJfq5LFOZQ3", "mDqr8LO-iYA", "dg_Ybmr2z7O", "pyFFw0I2-Qz", "RQuE3Y4YunJ", "01_ELzHxJB1", "tKEs_VmURPd", "GvYl0vn2q9_", "UlDo10bfTb0", "gsdhpVYUOV9", "VU2Kjdd0EcO", "zQn0NxRqM1p", "F9PlzmCeh16", "RQuE3Y4YunJ", "01_ELzHxJB1", "LQnPJFYVjzx", "4rHy2FiIL...
nips_2022_1wz-ksUupt2
Optimal Query Complexities for Dynamic Trace Estimation
We consider the problem of minimizing the number of matrix-vector queries needed for accurate trace estimation in the dynamic setting where our underlying matrix is changing slowly, such as during an optimization process. Specifically, for any $m$ matrices $\mathbf{A}_1,...,\mathbf{A}_m$ with consecutive differences bounded in Schatten-$1$ norm by $\alpha$, we provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability with an optimal query complexity of $\widetilde{O}(m \alpha\sqrt{\log(1/\delta)}/\epsilon + m\log(1/\delta))$, improving the dependence on both $\alpha$ and $\delta$ from Dharangutte and Musco (NeurIPS, 2021). Our procedure works without additional norm bounds on $\mathbf{A}_i$ and can be generalized to a bound for the $p$-th Schatten norm for $p \in [1,2]$, giving a complexity of $\widetilde{O}(m \alpha(\sqrt{\log(1/\delta)}/\epsilon)^p +m \log(1/\delta))$. By using novel reductions to communication complexity and information-theoretic analyses of Gaussian matrices, we provide matching lower bounds for static and dynamic trace estimation in all relevant parameters, including the failure probability. Our lower bounds (1) give the first tight bounds for Hutchinson's estimator in the matrix-vector product model with Frobenius norm error {\it even in the static setting}, and (2) are the first unconditional lower bounds for dynamic trace estimation, resolving open questions of prior work.
Accept
The paper provides a novel algorithm with improved complexity bounds for dynamic trace estimation via matrix-vector product queries, assuming bounded differences in Schatten-p norms between consecutive matrices. The paper provides lower bounds proving the optimality of the proposed methods, which are new even in the static setting. All reviewers appreciated the novelty and technical depth of both the upper and lower bounds parts of this work, and expect the paper to have significant influence on future research on numerical linear algebra. Consequently, I recommend acceptance, possibly as a spotlight presentation.
train
[ "nR0giJvd8kA", "12V0RlyHrS", "XXv5W3shy8", "WYLR1aPz5R", "-ZnuvJ_usZ", "tJfza98Gd6g", "3qU5a3OCzVw", "iFAR-RsSA3w", "YEAlLRZEG0E", "gXoE5wIgIgW" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for all the clarifications. The unconditionality of the lower bound makes sense now. And I understand that there is no log(m) missing in Thm 3.1. \n\nAs for the experiments, it might be nice to include the information you included here in the final version. \n\nApart from this, I have no other comments...
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "tJfza98Gd6g", "3qU5a3OCzVw", "WYLR1aPz5R", "-ZnuvJ_usZ", "gXoE5wIgIgW", "YEAlLRZEG0E", "iFAR-RsSA3w", "nips_2022_1wz-ksUupt2", "nips_2022_1wz-ksUupt2", "nips_2022_1wz-ksUupt2" ]
nips_2022_tvDRmAxGIjw
Towards Efficient Post-training Quantization of Pre-trained Language Models
Network quantization has gained increasing attention with the rapid growth of large pre-trained language models~(PLMs). However, most existing quantization methods for PLMs follow quantization-aware training~(QAT) that requires end-to-end training with full access to the entire dataset. Therefore, they suffer from slow training, large memory overhead, and data accessibility issues. In this paper, we study post-training quantization~(PTQ) of PLMs, and propose module-wise quantization error minimization~(MREM), an efficient solution to mitigate these issues. By partitioning the PLM into multiple modules, we minimize the reconstruction error incurred by quantization for each module. In addition, we design a new model parallel training strategy such that each module can be trained locally on separate computing devices without waiting for preceding modules, which brings nearly the theoretical training speed-up (e.g., $4\times$ on $4$ GPUs). Experiments on GLUE and SQuAD benchmarks show that our proposed PTQ solution not only performs close to QAT, but also enjoys significant reductions in training time, memory overhead, and data consumption.
Accept
In this paper the authors propose a practical method for post-training quantization (PTQ) of language models that divides the parameters and quantizes each separately (following BRECQ) in parallel, with asynchronous updates and a teacher forcing method to reduce error propagation. They show improvements on GLUE and SQuAD benchmarks. Reviewers agreed that the paper represents a solid practical contribution with convincing results, advancing the LM PTQ literature. The authors did a good job of addressing concerns and providing further analysis in the rebuttal.
train
[ "gvRILABrA9", "LIO0T9TyzTH", "qn0XQu3A3S", "TiPIhEUHtYP", "Oy02bBwGZh9", "SrcOtJaAJe", "h_NyxCZ8A8c", "2WA6-QNgIMf", "ypKA_Ap5JVX", "_RYblHMoB9", "yYCnmIsKgA-", "g9fAa7cecn7", "7QO8s428wiX", "I8XcRed3I3", "eGCOJSJnmP5", "NBk6qL-gZ9", "3-8Cny8pgO0" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We really appreciate the reviewer's response. The reconstruction error of the last transformer layer is already shown in the A1 to Q1.\n\nTo further clarify this point, we also show the layer-wise reconstruction error of QAT, REM, MREM-S and MREM-P below. The results are based on the W2-E2-A8 quantized BERT-base ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "LIO0T9TyzTH", "ypKA_Ap5JVX", "TiPIhEUHtYP", "eGCOJSJnmP5", "SrcOtJaAJe", "h_NyxCZ8A8c", "3-8Cny8pgO0", "NBk6qL-gZ9", "_RYblHMoB9", "yYCnmIsKgA-", "eGCOJSJnmP5", "I8XcRed3I3", "nips_2022_tvDRmAxGIjw", "nips_2022_tvDRmAxGIjw", "nips_2022_tvDRmAxGIjw", "nips_2022_tvDRmAxGIjw", "nips_20...
nips_2022_8B66-1c5AW
On A Mallows-type Model For (Ranked) Choices
We consider a preference learning setting where every participant chooses an ordered list of $k$ most preferred items among a displayed set of candidates. (The set can be different for every participant.) We identify a distance-based ranking model for the population's preferences and their (ranked) choice behavior. The ranking model resembles the Mallows model but uses a new distance function called Reverse Major Index (RMJ). We find that despite the need to sum over all permutations, the RMJ-based ranking distribution aggregates into (ranked) choice probabilities with simple closed-form expression. We develop effective methods to estimate the model parameters and showcase their generalization power using real data, especially when there is a limited variety of display sets.
Accept
This paper makes a contribution to probabilistic models for ranking data. The authors propose a new distribution similar to the Mallows model but with the so-called reverse major index instead of Kendall as distance function. They address the problem of ML estimation for ranking and choice and show formal consistency properties of the estimate. Simulation studies are also included. The paper has been well-received by all reviewers. Although a few critical comments were raised in the original reviews, these could be dispelled in the rebuttal phase. That said, a critical discussion came up in the final phase of decision making (regrettably too late to enquire of the authors on this point). Here, it was noticed that the new distance function proposed by the authors seems to have rather doubtful properties. In particular, neither symmetry nor the triangle inequality holds. For example, the distance between (1,2,...,n-1,n) and (2,3,...,n,1) is 1, but the distance between (2,3,...,n,1) and (1,2,...,n) is n-1. Somewhat counterintuitive behaviour of the distance can also be observed in other cases. For example, swapping the last two items has the same effect as moving the top item to the bottom: d(1, 2, ..., n, n-1) = d(2, 3, ..., n, 1) = 1. Or, swapping the second and third items leads to a much higher distance than moving the top item to the bottom: d(2, 3, ..., n, 1) = 1 < d(1, 3, 2, 4, ..., n) = n-2. Maybe such properties could be explained or defended, but they should at least be addressed in the paper. Comparing Kendall and the new distance, the authors write: "It is difficult to tell which kernel is “better” from an axiomatic approach as both distance functions satisfy the basic axioms for ranking distances". This is at least highly misleading, to put it mildly. Note that the axiomatic approach to ranking distances goes back to Kemeny, who required quite a number of properties and showed that the Kendall distance is the UNIQUE distance satisfying all of them.
So comparing Kendall with a distance that violates almost all of these properties, it should be easy to say which one is better from an axiomatic perspective ...
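The counterexamples quoted in this meta-review can be checked mechanically. The sketch below uses one candidate formalization chosen by us to reproduce the quoted values: d(pi, sigma) = rmj(sigma ∘ pi^{-1}), where rmj sums (n - i) over descent positions i. This definition is an assumption for illustration; the paper's exact RMJ distance may differ.

```python
# Hypothetical RMJ-style distance, chosen only to reproduce the example
# values quoted above; the paper's definition may differ.

def rmj(sigma):
    """Reverse major index: sum of (n - i) over 1-based descent positions i."""
    n = len(sigma)
    return sum(n - i for i in range(1, n) if sigma[i - 1] > sigma[i])

def compose(a, b):
    """(a ∘ b)(i) = a(b(i)); permutations as 1-based tuples."""
    return tuple(a[b[i] - 1] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, v in enumerate(a, start=1):
        inv[v - 1] = i
    return tuple(inv)

def d(pi, sigma):
    """Candidate distance d(pi, sigma) = rmj(sigma ∘ pi^{-1})."""
    return rmj(compose(sigma, inverse(pi)))

n = 6
identity = tuple(range(1, n + 1))
cycle = tuple(range(2, n + 1)) + (1,)          # (2, 3, ..., n, 1)
swap_last = identity[:-2] + (n, n - 1)         # (1, 2, ..., n, n-1)
swap_23 = (1, 3, 2) + tuple(range(4, n + 1))   # (1, 3, 2, 4, ..., n)

print(d(identity, cycle))     # 1
print(d(cycle, identity))     # n - 1 = 5  -> symmetry fails
print(d(identity, swap_last)) # 1
print(d(identity, swap_23))   # n - 2 = 4
```

Under this formalization all four quoted values are reproduced, and the asymmetry d(identity, cycle) ≠ d(cycle, identity) is immediate.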
train
[ "RFKwxOf6iUq", "jHXa1wVQOt", "5cJGGXPOKS7", "OsuZDU2oWdh", "NvmvToLvgAl", "8MLhN5cVAqG", "kxDRj79dpFL", "ALGo6J41Ie2", "vfUJkKkoJT6", "jFGi5mxOX8", "3qhlTsBRZe37", "F9qkE20XitN", "zfBaDIDJscr", "ceeN2UtWszt", "G8Fsuh0YL4_", "kawygiQpaSY", "YwE2O-rxufp", "vwABUvxrR6", "iXLH6coOg7R...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_r...
[ " Once again, we sincerely thank you for your efforts and help in making our paper stronger (with multiple aspects ranging from theory to numerical studies and to writing). That is truly valuable to us!", " We thank you for your additional efforts in reading the proof of Theorem 2 and your help in clarifying the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "OsuZDU2oWdh", "8MLhN5cVAqG", "NvmvToLvgAl", "kxDRj79dpFL", "vfUJkKkoJT6", "jFGi5mxOX8", "G8Fsuh0YL4_", "vwABUvxrR6", "3qhlTsBRZe37", "F9qkE20XitN", "G8Fsuh0YL4_", "G8Fsuh0YL4_", "ceeN2UtWszt", "kawygiQpaSY", "q-LXnrGZAYu", "dswq5dUgbpk", "ytYUljXnwj", "iXLH6coOg7R", "nips_2022_8...
nips_2022_-Lm0B9UYMy6
Not too little, not too much: a theoretical analysis of graph (over)smoothing
We analyze graph smoothing with mean aggregation, where each node successively receives the average of the features of its neighbors. Indeed, it has quickly been observed that Graph Neural Networks (GNNs), which generally follow some variant of Message-Passing (MP) with repeated aggregation, may be subject to the oversmoothing phenomenon: by performing too many rounds of MP, the node features tend to converge to a non-informative limit. In the case of mean aggregation, for connected graphs, the node features become constant across the whole graph. At the other end of the spectrum, it is intuitively obvious that some MP rounds are necessary, but existing analyses do not exhibit both phenomena at once: beneficial ``finite'' smoothing and oversmoothing in the limit. In this paper, we consider simplified linear GNNs, and rigorously analyze two examples for which a finite number of mean aggregation steps provably improves the learning performance, before oversmoothing kicks in. We consider a latent space random graph model, where node features are partial observations of the latent variables and the graph contains pairwise relationships between them. We show that graph smoothing restores some of the lost information, up to a certain point, by two phenomena: graph smoothing shrinks non-principal directions in the data faster than principal ones, which is useful for regression, and shrinks nodes within communities faster than they collapse together, which improves classification.
Accept
The majority of reviewers are in favor of accepting *Not too little, not too much: a theoretical analysis of graph (over)smoothing*. The reviewers were impressed in general by this theoretical analysis of finite-step mean aggregation smoothing in linear GNNs, which is well supported by the authors' simulation results. The paper's model gives evidence of the value of smoothing while demonstrating that a threshold for oversmoothing exists. In general, the novelty of the analysis of a phenomenon relevant to the community and the quality of the presentation lead me to recommend that this paper be accepted.
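The oversmoothing limit described in the abstract — node features becoming constant on a connected graph under repeated mean aggregation — is easy to reproduce on a toy example. The graph and feature values below are our own illustration, not the paper's latent space random graph model.

```python
# Toy sketch of oversmoothing: repeated mean aggregation on a small connected
# graph drives node features toward a constant vector. Self-loops make the
# row-stochastic averaging operator aperiodic.
import numpy as np

n = 5
A = np.eye(n)                          # self-loops
for i in range(n - 1):                 # path graph on 5 nodes
    A[i, i + 1] = A[i + 1, i] = 1.0

W = A / A.sum(axis=1, keepdims=True)   # mean-aggregation operator

rng = np.random.default_rng(0)
x = rng.normal(size=n)                 # one scalar feature per node

spread = [x.max() - x.min()]
for _ in range(300):
    x = W @ x                          # one round of mean aggregation
    spread.append(x.max() - x.min())

print(spread[0], spread[-1])           # spread shrinks toward 0
```

After a few hundred rounds the max-min spread is essentially zero: the non-informative limit the paper analyzes, reached only after a finite number of beneficial smoothing steps.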
train
[ "UkqueJxgJgE", "VOAla1Jn3F", "m_JbCWOqg6B", "aW1o9asfbsW", "DFXFuO60kvc", "B04iuQAX9Ct", "3-nwCJ1EuGg", "L7zE4PKaXtW", "8NjjITjAdf7", "J5oSff7kvAa" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the additional comment. We agree with all limitations pointed out, which we find to be excellent inspiration for the future. Naturally we still believe the proposed theoretical study and proposed interpretations to be sufficient for a conference proceedings, and that further experiments, or novel pr...
[ -1, -1, -1, -1, -1, -1, 8, 6, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 2, 4 ]
[ "VOAla1Jn3F", "m_JbCWOqg6B", "J5oSff7kvAa", "8NjjITjAdf7", "L7zE4PKaXtW", "3-nwCJ1EuGg", "nips_2022_-Lm0B9UYMy6", "nips_2022_-Lm0B9UYMy6", "nips_2022_-Lm0B9UYMy6", "nips_2022_-Lm0B9UYMy6" ]
nips_2022_nxl-IjnDCRo
On Analyzing Generative and Denoising Capabilities of Diffusion-based Deep Generative Models
Diffusion-based Deep Generative Models (DDGMs) offer state-of-the-art performance in generative modeling. Their main strength comes from their unique setup in which a model (the backward diffusion process) is trained to reverse the forward diffusion process, which gradually adds noise to the input signal. Although DDGMs are well studied, it is still unclear how the small amount of noise is transformed during the backward diffusion process. Here, we focus on analyzing this problem to gain more insight into the behavior of DDGMs and their denoising and generative capabilities. We observe a fluid transition point that changes the functionality of the backward diffusion process from generating a (corrupted) image from noise to denoising the corrupted image to the final sample. Based on this observation, we postulate to divide a DDGM into two parts: a denoiser and a generator. The denoiser could be parameterized by a denoising auto-encoder, while the generator is a diffusion-based model with its own set of parameters. We experimentally validate our proposition, showing its pros and cons.
Accept
The paper analyzes diffusion-based deep generative models (DDGMs). The paper postulates that a DDGM can be divided into two parts, a denoiser and a generator. After the rebuttal and discussion period, three out of four reviewers supported acceptance of the paper. The reviewers aLmZ, QKfo, and 4jY9 all find the interpretation of a diffusion-based deep generative model as a decomposition into a generator and a denoising part interesting. Those reviewers also note that this interpretation is useful, as parts of the diffusion steps can potentially be replaced with a VAE, which would make the synthesis more efficient. The reviewers aLmZ and QKfo also note that one comparison is perhaps slightly unfair in that the number of parameters of the two models considered was different; the authors cleared this up with new simulations showing that matching the number of parameters (as should be done for a better comparison) does not substantially change the conclusions of the experiment. Finally, reviewer s9uB finds the approach and observations not to be novel and states that similar observations have been made in a recent CVPR paper by Benny and Wolf. While both the paper by Benny and Wolf and the paper under review study denoising diffusion models, the approaches taken by the two papers are substantially different, and both provide value to the community. I also find the interpretation of DDGMs as a denoiser and a generator to be interesting and useful and therefore recommend acceptance of the paper.
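The denoiser/generator split can be related to how much signal the forward process has destroyed by step t. As a hedged illustration — the linear schedule below (beta from 1e-4 to 0.02 over 1000 steps) is a common DDPM default and our assumption, not a detail taken from the paper — the cumulative signal coefficient separates a near-identity denoising regime from a pure-noise generation regime:

```python
# Signal-retention coefficient alpha_bar_t of a standard DDPM-style linear
# noise schedule (assumed values, not from the paper under review).
# alpha_bar_t near 1: the step barely corrupts the image (denoising regime).
# alpha_bar_t near 0: the sample is essentially pure noise (generation regime).
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

print(alpha_bar[49])   # after 50 steps: most of the signal retained
print(alpha_bar[-1])   # after 1000 steps: essentially none
```

The last few backward steps operate where alpha_bar is close to 1, which is consistent with the paper's proposal to hand them to a separate denoiser.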
val
[ "2EFfrFG472", "4czSgVkJY_", "EIjFRQn3AWL", "ktX4su5YWg", "dTVCMHgLGB0", "PDpLuTkAXnI", "lr6BTVPdLWv", "NQZ7bq_bbHl", "QHL8onegX0g", "gZnd2wm3K-P", "VZAzmfJ1YOh", "US0ptCFvsRxV", "33qhKntFw_O", "JqTRaHS43ww", "1oQk4U1-v2R", "FF3WrOb73vK", "ck3fxnJDe-O", "elRL0Ck8moC", "FDONmpdCWvg...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "...
[ " In our experiments, we did not evaluate how the size of the diffusion model affects the final performance, and unfortunately, we won't be able to provide any meaningful experimental study at the time of this discussion. Nevertheless, we evaluate in detail how combining the different amount of last diffusion steps...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 5 ]
[ "EIjFRQn3AWL", "US0ptCFvsRxV", "dTVCMHgLGB0", "QHL8onegX0g", "lr6BTVPdLWv", "1oQk4U1-v2R", "VZAzmfJ1YOh", "nips_2022_nxl-IjnDCRo", "JqTRaHS43ww", "nips_2022_nxl-IjnDCRo", "ck3fxnJDe-O", "FF3WrOb73vK", "FDONmpdCWvg", "FDONmpdCWvg", "elRL0Ck8moC", "nips_2022_nxl-IjnDCRo", "nips_2022_nx...
nips_2022_icGMu0iPonB
A Robust Phased Elimination Algorithm for Corruption-Tolerant Gaussian Process Bandits
We consider the sequential optimization of an unknown, continuous, and expensive to evaluate reward function, from noisy and adversarially corrupted observed rewards. When the corruption attacks are subject to a suitable budget $C$ and the function lives in a Reproducing Kernel Hilbert Space (RKHS), the problem can be posed as {\em corrupted Gaussian process (GP) bandit optimization}. We propose a novel robust elimination-type algorithm that runs in epochs, combines exploration with infrequent switching to select a small subset of actions, and plays each action for multiple time instants. Our algorithm, {\em Robust GP Phased Elimination (RGP-PE)}, successfully balances robustness to corruptions with exploration and exploitation such that its performance degrades minimally in the presence (or absence) of adversarial corruptions. When $T$ is the number of samples and $\gamma_T$ is the maximal information gain, the corruption-dependent term in our regret bound is $O(C \gamma_T^{3/2})$, which is significantly tighter than the existing $O(C \sqrt{T \gamma_T})$ for several commonly-considered kernels. We perform the first empirical study of robustness in the corrupted GP bandit setting, and show that our algorithm is robust against a variety of adversarial attacks.
Accept
The paper studies Gaussian process bandit optimization in the adversarial corruption model. This setting was considered in the work of [7], where regret bounds were presented in which the adversarial term contains both the time horizon T and the corruption level C, which is not ideal. The current paper presents an improved algorithm that, for certain kernels such as SE, removes the dependence on T in the adversarial term, and for other settings such as the linear kernel recovers the current best results. The reviewers liked the contributions of the paper. One of the reviewers (objk) raised an objection about one of the assumptions mentioned in the paper, and the authors clarified that the assumption is just a condition that can always be satisfied by appropriate choices of parameters. While the reviewer was not convinced, based on my own reading of the paper, I side with the authors. This is a decent theoretical contribution. The one weakness of the work is that it assumes the corruption level (C) is known in advance (or an upper bound on it). Prior works on GP bandit optimization, and also in multi-armed bandit settings, handle unknown corruption levels. As a result this is a borderline paper, but I'm slightly leaning towards acceptance at this time.
train
[ "jof5Fl-5vHz", "rckYG2aTmef", "wKONA_1enT", "RfP_cg9Imj8", "3U_aXMgudR", "_HZNQRFKz5T", "AKnnPWdahY_r", "QH5SCnpsfmq", "GNcZVol4bJb", "EdOZTJNL3i", "3jlYj7RyOu3", "Pltp2ez3fMw", "NKXI5OUYQo" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are grateful for your feedback. We kindly request your acknowledgment of our responses, and that you let us know if there are any issues that you still find problematic, and/or check that your score is in agreement with your updated understanding of our work. ", " We thank the reviewer for acknowledging the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "QH5SCnpsfmq", "wKONA_1enT", "_HZNQRFKz5T", "nips_2022_icGMu0iPonB", "NKXI5OUYQo", "Pltp2ez3fMw", "3jlYj7RyOu3", "EdOZTJNL3i", "nips_2022_icGMu0iPonB", "nips_2022_icGMu0iPonB", "nips_2022_icGMu0iPonB", "nips_2022_icGMu0iPonB", "nips_2022_icGMu0iPonB" ]
nips_2022_ksVGCOlOEba
Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning
We consider the problem of model compression for deep neural networks (DNNs) in the challenging one-shot/post-training setting, in which we are given an accurate trained model, and must compress it without any retraining, based only on a small amount of calibration input data. This problem has become popular in view of the emerging software and hardware support for executing models compressed via pruning and/or quantization with speedup, and well-performing solutions have been proposed independently for both compression approaches. In this paper, we introduce a new compression framework which covers both weight pruning and quantization in a unified setting, is time- and space-efficient, and considerably improves upon the practical performance of existing post-training methods. At the technical level, our approach is based on an exact and efficient realization of the classical Optimal Brain Surgeon (OBS) framework of [LeCun, Denker, and Solla, 1990] extended to also cover weight quantization at the scale of modern DNNs. From the practical perspective, our experimental results show that it can improve significantly upon the compression-accuracy trade-offs of existing post-training methods, and that it can enable the accurate compound application of both pruning and quantization in a post-training setting.
Accept
The reviewers have reached a consensus in favor of accepting this paper, and I agree with this consensus. This is a technically solid paper that makes a good contribution to the field of post-training compression. The issues brought up in the reviews were adequately addressed by the author response, and I expect the final version of the paper will clarify some of the confusion regarding comparison with retraining.
train
[ "w6nmun2Sn-G", "JtPLd6ynUTW", "MLkscbq2wN5", "xAjqRCIcNgC", "aO7TNAHujbS", "PljAkdyFdLo", "IcMb6yc-ocf", "VlVP-ewFxjc", "2cgNyxCCouf", "aQ7H3GQYvnc", "ewe2chbZVad", "Re5IAKgh04I" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response, we are glad that we managed to address most of your concerns!\n\nThank you also for your improvement suggestions, which we would like to briefly address: \n\n> the proposed work is more for a data insufficient scenario, whereas for the majority of the practical use cases, including th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "MLkscbq2wN5", "VlVP-ewFxjc", "PljAkdyFdLo", "nips_2022_ksVGCOlOEba", "nips_2022_ksVGCOlOEba", "IcMb6yc-ocf", "Re5IAKgh04I", "ewe2chbZVad", "aQ7H3GQYvnc", "nips_2022_ksVGCOlOEba", "nips_2022_ksVGCOlOEba", "nips_2022_ksVGCOlOEba" ]
nips_2022_F-L7BxiE_V
Movement Penalized Bayesian Optimization with Application to Wind Energy Systems
Contextual Bayesian optimization (CBO) is a powerful framework for sequential decision-making given side information, with important applications, e.g., in wind energy systems. In this setting, the learner receives context (e.g., weather conditions) at each round, and has to choose an action (e.g., turbine parameters). Standard algorithms assume no cost for switching their decisions at every round. However, in many practical applications, there is a cost associated with such changes, which should be minimized. We introduce the episodic CBO with movement costs problem and, based on the online learning approach for metrical task systems of Coester and Lee (2019), propose a novel randomized mirror descent algorithm that makes use of Gaussian Process confidence bounds. We compare its performance with the offline optimal sequence for each episode and provide rigorous regret guarantees. We further demonstrate our approach on the important real-world application of altitude optimization for Airborne Wind Energy Systems. In the presence of substantial movement costs, our algorithm consistently outperforms standard CBO algorithms.
Accept
This paper proposes a contextual Bayesian optimization algorithm with penalization for movement cost, which is motivated by the problem of tuning the altitude of wind turbines to maximize energy output while minimizing the altitude adjustment. That is, a movement cost is incurred which is larger if the difference between the actions selected in the current and the previous steps is larger. The proposed algorithm is based on the problem of metrical task systems, and combines lower confidence bounds from Bayesian optimization with online mirror descent. The regret of the proposed algorithm is analyzed, and the algorithm achieves competitive empirical performance in a real-world wind energy systems experiment. All reviewers agree that this is an important and novel problem for contextual Bayesian optimization. The experiments were a nice demonstration of the practicality of the approach. All four reviewers were on the positive side for acceptance.
train
[ "kIUI5pnVhS8", "hZaoyl5rgsJ", "IuvKsXZE0a", "pC9LN2XaG3g", "JXT1Ano6T-M", "rk-tl6dsYUf", "L-sU1wghZdas", "WfewgSPN178", "YtudzYctys", "Mo38lr1ab8u", "UCtdSdt6zhI", "K_KdY3qLg3l", "JZwsK572unt", "2HHdu2IKeR" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for going through our rebuttal in detail, acknowledging all our responses, and adapting the score. As suggested by the reviewer, we will add the analysis for non-episodic/single episode setting with all the intricate details.", " We thank the reviewer for the response. We will try to furth...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 3 ]
[ "pC9LN2XaG3g", "IuvKsXZE0a", "WfewgSPN178", "L-sU1wghZdas", "nips_2022_F-L7BxiE_V", "Mo38lr1ab8u", "K_KdY3qLg3l", "2HHdu2IKeR", "JZwsK572unt", "UCtdSdt6zhI", "nips_2022_F-L7BxiE_V", "nips_2022_F-L7BxiE_V", "nips_2022_F-L7BxiE_V", "nips_2022_F-L7BxiE_V" ]
nips_2022_XtxG6dBOpAQ
A Regret-Variance Trade-Off in Online Learning
We consider prediction with expert advice for strongly convex and bounded losses, and investigate trade-offs between regret and ``variance'' (i.e., squared difference of learner's predictions and best expert predictions). With $K$ experts, the Exponentially Weighted Average (EWA) algorithm is known to achieve $O(\log K)$ regret. We prove that a variant of EWA either achieves a \textsl{negative} regret (i.e., the algorithm outperforms the best expert), or guarantees a $O(\log K)$ bound on \textsl{both} variance and regret. Building on this result, we show several examples of how variance of predictions can be exploited in learning. In the online to batch analysis, we show that a large empirical variance allows to stop the online to batch conversion early and outperform the risk of the best predictor in the class. We also recover the optimal rate of model selection aggregation when we do not consider early stopping. In online prediction with corrupted losses, we show that the effect of corruption on the regret can be compensated by a large variance. In online selective sampling, we design an algorithm that samples less when the variance is large, while guaranteeing the optimal regret bound in expectation. In online learning with abstention, we use a similar term as the variance to derive the first high-probability $O(\log K)$ regret bound in this setting. Finally, we extend our results to the setting of online linear regression.
Accept
The paper introduces the valuable idea of exploiting strong convexity of losses in online learning, together with variance-based regret bounds for contemporary algorithms like Squint and Metagrad, to introduce negative terms in cumulative regret bounds and make the algorithms useful in many applications such as early stopping in online-to-batch conversion and other settings. A dominant concern from the reviewers' side was about the amount of (technical) material packed into the paper, which was alleviated by the detailed author response. As a result, the reviewers largely agree that the paper deserves to be accepted -- an opinion which I share. PS. I request the author(s) to please resolve the incomplete [TODO]s in the paper checklist appropriately for the final version.
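For reference, the baseline Exponentially Weighted Average (EWA) forecaster that the paper's variant builds on can be sketched in a few lines. The per-round losses below are hypothetical, and the sketch deliberately omits the paper's variance-dependent modifications.

```python
# Minimal sketch of the standard EWA forecaster over K experts (the baseline,
# not the paper's variant). Weights are proportional to
# exp(-eta * cumulative loss), so the mass concentrates on the best expert.
import math

K, T, eta = 4, 100, 0.5
losses = [0.1, 0.4, 0.6, 0.9]   # hypothetical fixed per-round expert losses

cum = [0.0] * K
for _ in range(T):
    weights = [math.exp(-eta * c) for c in cum]
    Z = sum(weights)
    p = [w / Z for w in weights]   # EWA's mixture over experts this round
    for i in range(K):
        cum[i] += losses[i]

print(p)   # nearly all mass on expert 0, the best expert
```

With distinct cumulative losses, the exponential weighting makes the mixture converge geometrically to the best expert, which is why EWA attains O(log K) regret in the strongly convex setting the paper considers.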
train
[ "WOlFM3wo5ii", "sqNhB0djmfV", "fVwulb2ru2u", "B7huNm1gOs0", "bXJe_QQG2bc", "NloRZLx6ApT", "fLg7N4Xs3-o" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed response, I don't have any further questions.", " Thank you for your careful reading of the paper. \n\nWe do not know if the strong convexity assumption is necessary, but it does play a critical role in our results. The key challenge for relaxing the curvature assumption lies in derivin...
[ -1, -1, -1, -1, 7, 4, 7 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "sqNhB0djmfV", "fLg7N4Xs3-o", "NloRZLx6ApT", "bXJe_QQG2bc", "nips_2022_XtxG6dBOpAQ", "nips_2022_XtxG6dBOpAQ", "nips_2022_XtxG6dBOpAQ" ]
nips_2022_PpP9TiUZLoF
An $\alpha$-regret analysis of Adversarial Bilateral Trade
We study sequential bilateral trade where sellers' and buyers' valuations are completely arbitrary ({\sl i.e.}, determined by an adversary). Sellers and buyers are strategic agents with private valuations for the good, and the goal is to design a mechanism that maximizes efficiency (or gain from trade) while being incentive compatible, individually rational and budget balanced. In this paper we consider gain from trade, which is harder to approximate than social welfare. We consider a variety of feedback scenarios and distinguish the cases where the mechanism posts one price and where it can post different prices for buyer and seller. We show several surprising results about the separation between the different scenarios. In particular we show that (a) it is impossible to achieve sublinear $\alpha$-regret for any $\alpha<2$, (b) but with full feedback sublinear $2$-regret is achievable, (c) with a single price and partial feedback one cannot get sublinear $\alpha$-regret for any constant $\alpha$, (d) nevertheless, posting two prices even with one-bit feedback achieves sublinear $2$-regret, and (e) there is a provable separation in the $2$-regret bounds between full and partial feedback.
Accept
This is an interesting paper on bandits with non-trivial structure and algorithms. All the reviews are positive, as is my own opinion. I quite happily suggest acceptance of this paper.
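To make the two-price setting of results (d)–(e) concrete, here is a toy sketch — our own construction, not the paper's mechanism: the seller is offered price p and the buyer price q ≥ p (so the mechanism never loses money), and the trade indicator is exactly the one-bit feedback.

```python
# Toy two-price bilateral trade round (illustrative only, not the paper's
# mechanism): seller with valuation s is offered p, buyer with valuation b
# is offered q >= p. Trade happens iff s <= p and b >= q; the realized gain
# from trade is b - s.
def trade_outcome(s, b, p, q):
    assert q >= p, "buyer price below seller price would lose money"
    traded = (s <= p) and (b >= q)
    gain_from_trade = (b - s) if traded else 0.0
    one_bit_feedback = traded   # all the learner observes in setting (d)
    return traded, gain_from_trade, one_bit_feedback

print(trade_outcome(s=0.25, b=0.75, p=0.4, q=0.6))  # (True, 0.5, True)
print(trade_outcome(s=0.7, b=0.9, p=0.4, q=0.6))    # (False, 0.0, False)
```

The second call shows why partial feedback is hard: the learner sees only that no trade occurred, not whether the seller's or the buyer's price was the obstacle.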
val
[ "cV6gweHQNJo", "4NSUM2ATcQ", "lE7hypekjBg", "rgRCA_ZyEV", "LPiXj-GBc7_", "or5r3nbHS15", "byiHxgQCwnm", "nGXoi_CaVRd", "4S4EpElSfY", "4uaURTQBw-O", "FRcP8QZcS4D", "k0Lmlj6009I", "DgvhMjDgdJG", "VXW7l3vatZ", "x27Fdi1tA1p", "jKARc-3-TW", "QlGd6HLFa_B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors, \n\nThanks for the detailed reply and that answered my questions.", " From the example, the reviewer really appreciates the responses during the rebuttal period and understands the importance of gain-from-trade, and how it's different from the social welfare benchmark. The reviewer has changed the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3, 3 ]
[ "4uaURTQBw-O", "LPiXj-GBc7_", "FRcP8QZcS4D", "byiHxgQCwnm", "or5r3nbHS15", "nGXoi_CaVRd", "jKARc-3-TW", "x27Fdi1tA1p", "x27Fdi1tA1p", "VXW7l3vatZ", "QlGd6HLFa_B", "nips_2022_PpP9TiUZLoF", "nips_2022_PpP9TiUZLoF", "nips_2022_PpP9TiUZLoF", "nips_2022_PpP9TiUZLoF", "nips_2022_PpP9TiUZLoF"...
nips_2022_RTan64GlCLV
Unified Optimal Transport Framework for Universal Domain Adaptation
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a source domain to a target domain without any constraints on label sets. Since both domains may hold private classes, identifying target common samples for domain alignment is an essential issue in UniDA. Most existing methods require manually specified or hand-tuned threshold values to detect common samples, and are thus hard to extend to more realistic UniDA because of the diverse ratios of common classes. Moreover, they cannot recognize different categories among target-private samples, as these private samples are treated as a whole. In this paper, we propose to use Optimal Transport (OT) to handle these issues under a unified framework, namely UniOT. First, an OT-based partial alignment with adaptive filling is designed to detect common classes without any predefined threshold values for realistic UniDA. It can automatically discover the intrinsic difference between common and private classes based on the statistical information of the assignment matrix obtained from OT. Second, we propose an OT-based target representation learning that encourages both global discrimination and local consistency of samples to avoid over-reliance on the source. Notably, UniOT is the first method with the capability to automatically discover and recognize private categories in the target domain for UniDA. Accordingly, we introduce a new metric, the H^3-score, to evaluate performance in terms of both the accuracy of common samples and the clustering performance of private ones. Extensive experiments clearly demonstrate the advantages of UniOT over a wide range of state-of-the-art methods in UniDA.
Accept
This work tackles universal domain adaptation, a challenging problem that is usually encountered in real practice. The proposed UniOT algorithm utilizes Optimal Transport to enable common class detection and private class discovery. It is interesting to see that OT can be extended to simultaneously solve these two problems. Reviewers were at the borderline in their preliminary opinions, but after rebuttal and reconsideration, most reviewers acknowledged that their concerns were addressed, and improved their final rating substantially. AC considered the paper itself as well as all reviewing threads, and concluded that the paper has put forward a nice Unified OT framework for the challenging universal domain adaptation problem, yielding promising empirical performance while introducing some nice technical benefits such as the removal of the tedious weight thresholding for outlier class discovery. Thus the paper is recommended for acceptance.
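The basic machinery behind OT-based alignment is the computation of an assignment (transport) matrix between two sets of marginals. The sketch below is generic entropic OT via Sinkhorn iterations, not UniOT's partial variant with adaptive filling; the marginals, cost matrix, and regularization strength are our own toy choices.

```python
# Generic Sinkhorn iterations for entropic optimal transport (illustrative;
# UniOT uses a partial/adaptive variant on top of this machinery).
import numpy as np

def sinkhorn(a, b, C, eps=0.5, iters=200):
    """Entropic OT plan between histograms a, b with cost matrix C.
    A fairly large eps keeps the iterations stable and fast-converging."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)   # match column marginals
        u = a / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
a = np.full(5, 1 / 5)            # uniform source marginal
b = np.full(7, 1 / 7)            # uniform target marginal
C = rng.random((5, 7))           # toy cost matrix
P = sinkhorn(a, b, C)

print(P.sum(axis=1))             # ≈ a
print(P.sum(axis=0))             # ≈ b
```

The resulting matrix P is the "assignment matrix" whose row/column statistics UniOT-style methods inspect to separate common from private classes.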
train
[ "mnZc13r9IBz", "BajLAcRPlUq", "DRI1C0sfuC0", "gxBO5En4ihT", "BzmCfezJKVdK", "6jNJ5uIthxI", "lC9kKAwKhqo", "xwbReBrMxs5", "FpZPojnP6FP", "P-3Yzt8gB8O", "umLbOuXaCID", "ZRKbY0QN2QE" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 4ds8, \nMany thanks for your valuable comments and suggestions. As per your suggestions, we have added some qualitative illustrations in our revised supplementary material (Fig.S4, S5 and S6). Would you please let us know if our qualitative figures (Fig.S4, S5 and S6) in the supplementary is suffici...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 3 ]
[ "umLbOuXaCID", "nips_2022_RTan64GlCLV", "gxBO5En4ihT", "BzmCfezJKVdK", "ZRKbY0QN2QE", "umLbOuXaCID", "P-3Yzt8gB8O", "FpZPojnP6FP", "nips_2022_RTan64GlCLV", "nips_2022_RTan64GlCLV", "nips_2022_RTan64GlCLV", "nips_2022_RTan64GlCLV" ]
nips_2022_61UwgeIotn
Efficient Meta Reinforcement Learning for Preference-based Fast Adaptation
Learning new task-specific skills from a few trials is a fundamental challenge for artificial intelligence. Meta reinforcement learning (meta-RL) tackles this problem by learning transferable policies that support few-shot adaptation to unseen tasks. Despite recent advances in meta-RL, most existing methods require access to the environmental reward function of new tasks to infer the task objective, which is not realistic in many practical applications. To bridge this gap, we study the problem of few-shot adaptation in the context of human-in-the-loop reinforcement learning. We develop a meta-RL algorithm that enables fast policy adaptation with preference-based feedback. The agent can adapt to new tasks by querying a human's preference between behavior trajectories instead of using per-step numeric rewards. By extending techniques from information theory, our approach can design query sequences to maximize the information gain from human interactions while tolerating the inherent error of a non-expert human oracle. In experiments, we extensively evaluate our method, Adaptation with Noisy OracLE (ANOLE), on a variety of meta-RL benchmark tasks and demonstrate substantial improvement over baseline algorithms in terms of both feedback efficiency and error tolerance.
Accept
**Strengths**: This paper introduces an interesting new problem setting (meta-RL for preference-based adaptation) that is of practical relevance, along with a sensible new approach that makes progress on addressing the problem.

**Weaknesses**: After the author discussion period, there are two remaining concerns:
* Additional experiments, particularly human experiments and more complex tasks.
* Further discussion / clarification / support for when preference-based feedback is preferable to other forms of supervision, e.g., sparse rewards.

Overall, the reviewers and AC agree that this paper makes a worthy contribution to NeurIPS, despite the weaknesses. Nonetheless, we expect that human experiments in particular would help with both of the weaknesses and increase the impact of the paper, so we especially encourage the authors to work on such experiments before the camera ready version.
val
[ "Fmplt0uG8S4", "ADPPI1ASTRx", "eknxETfBbi", "gp-JGV3yuo", "GOx9CJ1zOF", "Dq7um06kFLhM", "e3ZsPoJOSCN", "FQr9i8RMKRu", "nqP5_XL81GI", "iwutnw7l4M", "Z3QPWV_cD_a", "3IAE8BssanX", "xX9VFbl26g8", "gla1QL6vXyS", "Ztes6r2wNR" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your responsive reply and for the inspiring review that helps us to improve our work. We will continually work on human-based experiments and incorporate rebuttal discussions into the next revision.", " I thank the authors for their response. The new experiments with human feedback and the metaworld ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "ADPPI1ASTRx", "Z3QPWV_cD_a", "GOx9CJ1zOF", "Dq7um06kFLhM", "FQr9i8RMKRu", "e3ZsPoJOSCN", "Ztes6r2wNR", "nqP5_XL81GI", "gla1QL6vXyS", "xX9VFbl26g8", "3IAE8BssanX", "nips_2022_61UwgeIotn", "nips_2022_61UwgeIotn", "nips_2022_61UwgeIotn", "nips_2022_61UwgeIotn" ]
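The abstract above describes adapting via trajectory preferences from a noisy oracle. As a toy illustration of that ingredient only (our own construction, not the authors' ANOLE algorithm; all function names and parameters here are illustrative assumptions), the sketch below models oracle noise with a Bradley-Terry likelihood and performs a Bayesian update over a discrete set of reward hypotheses after one preference query.

```python
import numpy as np

def preference_prob(ret_a, ret_b, beta=1.0):
    """Bradley-Terry model: probability a noisy oracle prefers trajectory A,
    given the two trajectories' returns under each reward hypothesis."""
    return 1.0 / (1.0 + np.exp(-beta * (ret_a - ret_b)))

def bayes_update(prior, returns_a, returns_b, oracle_prefers_a, beta=1.0):
    """Posterior over discrete reward hypotheses after one preference query.
    returns_a[i], returns_b[i]: returns of trajectories A and B under hypothesis i."""
    p = preference_prob(np.asarray(returns_a, float), np.asarray(returns_b, float), beta)
    likelihood = p if oracle_prefers_a else 1.0 - p
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Two reward hypotheses; trajectory A scores higher under hypothesis 0.
prior = np.array([0.5, 0.5])
posterior = bayes_update(prior, returns_a=[1.0, 0.0], returns_b=[0.0, 1.0],
                         oracle_prefers_a=True)
```

A query for which the two hypotheses disagree most about the preference (as here) is the most informative one; the paper's information-theoretic query design formalizes that intuition.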
nips_2022_0zHXmOXwkIf
Paraphrasing Is All You Need for Novel Object Captioning
Novel object captioning (NOC) aims to describe images containing objects without observing their ground truth captions during training. Due to the absence of caption annotation, captioning models cannot be directly optimized via sequence-to-sequence training or CIDEr optimization. As a result, we present Paraphrasing-to-Captioning (P2C), a two-stage learning framework for NOC, which heuristically optimizes the output captions via paraphrasing. With P2C, the captioning model first learns paraphrasing from a language model pre-trained on a text-only corpus, allowing expansion of the word bank for improving linguistic fluency. To further enforce that the output caption sufficiently describes the visual content of the input image, we perform self-paraphrasing for the captioning model with fidelity and adequacy objectives introduced. Since no ground truth captions are available for novel object images during training, our P2C leverages cross-modality (image-text) association modules to ensure the above caption characteristics can be properly preserved. In the experiments, we not only show that our P2C achieves state-of-the-art performance on the nocaps and COCO Caption datasets, but also verify the effectiveness and flexibility of our learning framework by replacing the language and cross-modality association models for NOC. Implementation details and code are available in the supplementary materials.
Accept
All reviewers appreciated this paper's simple and intuitive ideas on promoting fluency, fidelity, and adequacy in the novel-object captioning (NOC) task via paraphrasing modules. They also appreciated the good results on multiple benchmarks, including a human evaluation, and the good writing. The authors also gave very detailed and useful responses in the rebuttal period. Suggestions for the authors to incorporate included clearer ablation tables, a discussion of out-of-domain tasks, and more self-contained, clearer notations and formulations.
train
[ "7ctKC0MlftA", "h812PdY83X", "gIz1CbhpId", "FuWnzZr_J2r9", "giYpaIQB83d", "vaAR0OA2Hl9", "ssOGhmhnVDT", "bgfY8CXLhn5", "oAQnC3568IH", "2ONWStRDzPr", "KMU2b7W54ys", "VJYi75pSfxS", "yd_569Pw752", "PSSoJX3Rw22", "vk0W8G_NZV6", "KO2QZ392Igf", "dx3NEaKIjl" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I wish the next version of the paper will be much more self-contained.", " **Updated A3**: We apologize for the misunderstanding. For P and A used in Tables 1, 2, and 3, BERT_large is selected as our paraphrase model P, and CLIP is utilized as the association model A. We will clarify this information in L194.",...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "h812PdY83X", "gIz1CbhpId", "vaAR0OA2Hl9", "2ONWStRDzPr", "dx3NEaKIjl", "ssOGhmhnVDT", "bgfY8CXLhn5", "oAQnC3568IH", "KO2QZ392Igf", "vk0W8G_NZV6", "VJYi75pSfxS", "yd_569Pw752", "PSSoJX3Rw22", "nips_2022_0zHXmOXwkIf", "nips_2022_0zHXmOXwkIf", "nips_2022_0zHXmOXwkIf", "nips_2022_0zHXmO...
nips_2022_FHgpw2Cn__
Consistency of Constrained Spectral Clustering under Graph Induced Fair Planted Partitions
Spectral clustering is popular among practitioners and theoreticians alike. While performance guarantees for spectral clustering are well understood, recent studies have focused on enforcing ``fairness'' in clusters, requiring them to be ``balanced'' with respect to a categorical sensitive node attribute (e.g. the race distribution in clusters must match the race distribution in the population). In this paper, we consider a setting where sensitive attributes indirectly manifest in an auxiliary \textit{representation graph} rather than being directly observed. This graph specifies node pairs that can represent each other with respect to sensitive attributes and is observed in addition to the usual \textit{similarity graph}. Our goal is to find clusters in the similarity graph while respecting a new individual-level fairness constraint encoded by the representation graph. We develop variants of unnormalized and normalized spectral clustering for this task and analyze their performance under a \emph{fair} planted partition model induced by the representation graph. This model uses both the cluster membership of the nodes and the structure of the representation graph to generate random similarity graphs. To the best of our knowledge, these are the first consistency results for constrained spectral clustering under an individual-level fairness constraint. Numerical results corroborate our theoretical findings.
Accept
The paper presents a new, interesting spectral generalization of fair clustering. The new notion captures many of the previously introduced notions and unifies a few of them. The authors also present an algorithm for the new notion. Overall, the paper presents an interesting idea and is nicely written, so it would be a nice contribution to the NeurIPS program. The suggestion of the AC is to accept the paper as a poster.
train
[ "6CvFvRl96G", "bRcSuNiVWyZ", "JMqMAMsbOFf", "QKfJrjDQO-", "EMyIUHdSMBj", "BgLt4vj2fJk", "RBybnNt49rc", "y9tk0ipnwzW", "WjQhU2TjR3K", "pxeLxoqra0jl", "1EOZaR4pS8", "FPv9eMdL0OA", "WBCTd1HLx6y", "5PO1DgG8KWV", "5-kPGDuTyfW" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have given detailed responses to several important questions raised by Reviewer N1aR. We request the reviewer to let us know if they have any further questions. We will be happy to provide answers. This will really help us to improve the paper. Thanks again.\n", " We have given detailed responses to several ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "y9tk0ipnwzW", "pxeLxoqra0jl", "QKfJrjDQO-", "EMyIUHdSMBj", "WjQhU2TjR3K", "1EOZaR4pS8", "nips_2022_FHgpw2Cn__", "5-kPGDuTyfW", "5PO1DgG8KWV", "WBCTd1HLx6y", "FPv9eMdL0OA", "nips_2022_FHgpw2Cn__", "nips_2022_FHgpw2Cn__", "nips_2022_FHgpw2Cn__", "nips_2022_FHgpw2Cn__" ]
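The constrained spectral clustering described in the abstract above restricts the spectral embedding to vectors satisfying a fairness constraint. As a hedged sketch of the simpler group-fairness special case (not the paper's representation-graph variant; the constraint matrix `F` and all names here are our own illustrative choices), one can project the Laplacian onto the null space of the constraints and take the smallest eigenvectors there:

```python
import numpy as np

def fair_spectral_embedding(L, F, k):
    """k-dimensional spectral embedding restricted to the null space of F^T.
    This is the group-fairness special case (F encodes balance conditions);
    the paper's representation-graph constraint generalizes it.
    L: n x n graph Laplacian, F: n x c constraint matrix."""
    _, s, Vt = np.linalg.svd(F.T)        # full SVD of the constraint matrix
    rank = int(np.sum(s > 1e-10))
    Z = Vt[rank:].T                      # orthonormal basis of {x : F^T x = 0}
    w, Y = np.linalg.eigh(Z.T @ L @ Z)   # spectrum of the constrained Laplacian
    return Z @ Y[:, :k]                  # lift the k smallest eigenvectors back

# Path graph on 4 nodes; one balance constraint between groups {0,1} and {2,3}.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(A.sum(axis=1)) - A
F = np.array([[1.0], [1.0], [-1.0], [-1.0]])
H = fair_spectral_embedding(L, F, k=2)
```

Every column of `H` is orthogonal to the constraint, so any clustering obtained from `H` (e.g., by k-means on its rows) respects the balance condition by construction.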
nips_2022_EaRoPGzxRkO
Causal Discovery in Probabilistic Networks with an Identifiable Causal Effect
Causal identification is at the core of the causal inference literature, where complete algorithms have been proposed to identify causal queries of interest. The validity of these algorithms hinges on the restrictive assumption of having access to a correctly specified causal structure. In this work, we study the setting where a probabilistic model of the causal structure is available. Specifically, the edges in a causal graph are assigned probabilities which may, for example, represent degree of belief from domain experts. Alternatively, the uncertainty about an edge may reflect the confidence of a particular statistical test. The question that naturally arises in this setting is: Given such a probabilistic graph and a specific causal effect of interest, what is the subgraph which has the highest plausibility and for which the causal effect is identifiable? We show that answering this question reduces to solving an NP-hard combinatorial optimization problem which we call the edge ID problem. We propose efficient algorithms to approximate this problem, and evaluate our proposed algorithms against real-world networks and randomly generated graphs.
Reject
This paper studies the problem of causal identifiability in probabilistic causal models, where each edge is associated with a probability value that indicates the uncertainty about the existence of the edge, and asks whether a given causal effect is identifiable. Two technical problems are considered: 1) finding the most probable graph that renders a desired causal query identifiable, and 2) finding the graph with the highest aggregate probability over its edge-induced subgraphs that renders a desired causal query identifiable. A reasonable amount of discussion took place between the authors and the reviewers, and among the reviewers themselves. At the end, we get four confident (4) reviews with ratings 5, 6, 6, and 7: The reviewers appreciate the novel problem setting, interesting complexity results, reasonable algorithms, and clear presentation. However, there is also concern that since the paper solves a surrogate problem rather than the one it sets out to solve, there needs to be more acknowledgement and up-front discussion of this gap in the paper, along with discussion of potential ways to bridge it. In general, a broader discussion of the practical utility of the work developed here, rather than just the ideal question, is expected in the final revision, along with other reviewer feedback.
train
[ "_ydRQ8P5EbV", "qsMKOrp0o06", "iFmzDegvHpU", "h9Ib1ZQkXo", "pbLkbhSBvps", "iSDmpOA0Ex4", "roVRTdRDUY0", "x9zJthlkffE", "G0qUfWWWlqC", "BTLFQ8vldt3", "JqAd-OU-TZS", "T9-uYrgfIMM", "ymFNJBTDosW" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their detailed rebuttal. The authors answer most of my questions/concerns and I would like to increase my score.\n\n- Re edge probabilities over bidirected edges: I can see from the example how such edge probabilities could potentially be obtained with interventional data. Ho...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "x9zJthlkffE", "h9Ib1ZQkXo", "G0qUfWWWlqC", "iSDmpOA0Ex4", "JqAd-OU-TZS", "T9-uYrgfIMM", "x9zJthlkffE", "ymFNJBTDosW", "BTLFQ8vldt3", "nips_2022_EaRoPGzxRkO", "nips_2022_EaRoPGzxRkO", "nips_2022_EaRoPGzxRkO", "nips_2022_EaRoPGzxRkO" ]
nips_2022_UmaiVbwN1v
A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models
Despite the remarkable success of pre-trained language models (PLMs), they still face two challenges: First, large-scale PLMs are inefficient in terms of memory footprint and computation. Second, on the downstream tasks, PLMs tend to rely on the dataset bias and struggle to generalize to out-of-distribution (OOD) data. In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting the performance. Such subnetworks can be found in three scenarios: 1) the fine-tuned PLMs, 2) the raw PLMs and then fine-tuned in isolation, and even inside 3) PLMs without any parameter fine-tuning. However, these results are only obtained in the in-distribution (ID) setting. In this paper, we extend the study on PLMs subnetworks to the OOD setting, investigating whether sparsity and robustness to dataset bias can be achieved simultaneously. To this end, we conduct extensive experiments with the pre-trained BERT model on three natural language understanding (NLU) tasks. Our results demonstrate that \textbf{sparse and robust subnetworks (SRNets) can consistently be found in BERT}, across the aforementioned three scenarios, using different training and compression methods. Furthermore, we explore the upper bound of SRNets using the OOD information and show that \textbf{there exist sparse and almost unbiased BERT subnetworks}. Finally, we present 1) an analytical study that provides insights on how to promote the efficiency of the SRNets searching process and 2) a solution to improve subnetworks' performance at high sparsity. The code is available at \url{https://github.com/llyx97/sparse-and-robust-PLM}.
Accept
This paper proposes methods to find sub-networks in BERT that would lead to good performance out of distribution. It considers different settings of when to search for sub-networks in the pre-train/fine-tune paradigm. This paper has received borderline reviews, three mildly positive and one mildly negative. The strengths in this work seem to be: * interesting findings * thorough evaluation * important problem * clarity I agree with the importance of the problem, the very comprehensive evaluation in multiple settings, and the interesting findings, both from the perspective of interpretability and of robustness. The reviewers noted several weaknesses, which I'll discuss below. But I think the strengths outweigh the weaknesses and the paper would make interesting contributions to the community. Weaknesses: * generalizability to arbitrary tasks and to other models * how to trade off iid and ood performance * how to choose which earlier steps to start from? There was some discussion with the authors that has led reviewers to update their reviews. Of the weaknesses, I find the choice of tasks reasonable as they are common and well studied in OOD generalization settings. I agree with the need to experiment with other models besides BERT. The authors have added experiments with RoBERTa in their revision, for one task and setting. This makes me more confident of the applicability of the approach, but I'd suggest including similar experiments with the other tasks and settings. The question of trading off iid and ood performance is inherent to the field, and there's no perfect solution. The authors have made a reasonable effort by not using OOD data for model selection. One concern that I still have is the use of only product-of-experts for bias mitigation, which is known to be sub-par, especially in terms of trade-off, compared to Confidence Regularization. Experiments with confidence regularization would have been great to add, but I also know that the Utama et al. results are sometimes difficult to replicate. Still, consider trying it out and reporting your results. On when to start pruning: I agree this is a major limitation that should be clearly stated and discussed. The author response gave some thoughts; please include a thoughtful discussion in the next revision.
train
[ "kDoCj3vBjsn", "xYFoNE2GQLC", "qILno7gRX1E", "ua3f_v6OREI", "U1dkVDCETjUT", "KyRyhOye2VcR", "aGj6QlwbT5B", "HIqsrOwvEXD", "tIhbXAlm0TG", "wKhh_uXDIQ", "D5ZM1BWd6TI", "T31fv2nzSck", "yNDSOZx3ZZ-", "oqmxQOSHEah", "QV34CVhpNqi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed responses! I'm convinced that studying these two problems jointly can be useful and important. However, in my opinion, the contriubtions of this paper still do not guarantee it to be accepted to a top-tier conference. I have reassessed the paper and updated my score. ", " Thank you Auth...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "wKhh_uXDIQ", "D5ZM1BWd6TI", "oqmxQOSHEah", "oqmxQOSHEah", "T31fv2nzSck", "aGj6QlwbT5B", "QV34CVhpNqi", "T31fv2nzSck", "yNDSOZx3ZZ-", "oqmxQOSHEah", "QV34CVhpNqi", "nips_2022_UmaiVbwN1v", "nips_2022_UmaiVbwN1v", "nips_2022_UmaiVbwN1v", "nips_2022_UmaiVbwN1v" ]
nips_2022_Wtg9TUL0d81
What Makes Graph Neural Networks Miscalibrated?
Given the importance of getting calibrated predictions and reliable uncertainty estimations, various post-hoc calibration methods have been developed for neural networks on standard multi-class classification tasks. However, these methods are not well suited for calibrating graph neural networks (GNNs), which presents unique challenges such as accounting for the graph structure and the graph-induced correlations between the nodes. In this work, we conduct a systematic study on the calibration qualities of GNN node predictions. In particular, we identify five factors which influence the calibration of GNNs: general under-confident tendency, diversity of nodewise predictive distributions, distance to training nodes, relative confidence level, and neighborhood similarity. Furthermore, based on the insights from this study, we design a novel calibration method named Graph Attention Temperature Scaling (GATS), which is tailored for calibrating graph neural networks. GATS incorporates designs that address all the identified influential factors and produces nodewise temperature scaling using an attention-based architecture. GATS is accuracy-preserving, data-efficient, and expressive at the same time. Our experiments empirically verify the effectiveness of GATS, demonstrating that it can consistently achieve state-of-the-art calibration results on various graph datasets for different GNN backbones.
Accept
This paper studies the calibration problem for GNN node classification. It identifies five factors contributing to miscalibration: general under-confidence, diversity of distribution, distance to training, relative confidence level, and neighborhood similarity. A temperature scaling method is proposed where each node is assigned a different temperature. All reviewers vote for acceptance. However, multiple reviewers have raised concerns about the validity of the five factors. I encourage the authors to thoroughly address them in the revised version.
test
[ "Z5Q5P8MwaWk", "ktHF8xUlyR-", "mm6VjPNFDpq", "GNH5otJf5Lz", "2xVLiTxKfb4", "y88R7UwkC72", "ql7mN6PKTs5", "bQsthlKF5v", "8B_BcQmNQ2w", "7Qtk2rUvI9" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for their clarification. Most of my questions have been answered. I still think the paper is interesting and have good contributions, but I also agree with other reviewers that it has limitations in the novelty of some factors and the correlation of the factors. I will keep my score. ", " Than...
[ -1, -1, -1, -1, -1, -1, 6, 6, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, 2, 4, 4, 3 ]
[ "mm6VjPNFDpq", "y88R7UwkC72", "ql7mN6PKTs5", "bQsthlKF5v", "8B_BcQmNQ2w", "7Qtk2rUvI9", "nips_2022_Wtg9TUL0d81", "nips_2022_Wtg9TUL0d81", "nips_2022_Wtg9TUL0d81", "nips_2022_Wtg9TUL0d81" ]
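The GATS abstract above builds on nodewise temperature scaling. As a minimal sketch of just the scaling step (GATS itself produces the temperatures with an attention network over the graph, which is omitted here; all names are ours), dividing each node's logits by its own temperature before the softmax softens or sharpens that node's predictive distribution without changing its predicted class:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nodewise_temperature_scale(logits, temperatures):
    """Calibrate by dividing each node's logits by its own temperature.
    T > 1 softens the distribution, T < 1 sharpens it, T = 1 leaves it as-is."""
    return softmax(logits / temperatures[:, None])

logits = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 3.0]])
T = np.array([2.0, 1.0, 3.0])  # one temperature per node
probs = nodewise_temperature_scale(logits, T)
```

Because scaling by a positive temperature preserves the argmax of each row, the method is accuracy-preserving, which matches the property claimed in the abstract.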
nips_2022_0dt8wdYIAV
Sequence-to-Set Generative Models
In this paper, we propose a sequence-to-set method that can transform any sequence generative model based on maximum likelihood to a set generative model where we can evaluate the utility/probability of any set. An efficient importance sampling algorithm is devised to tackle the computational challenge of learning our sequence-to-set model. We present GRU2Set, which is an instance of our sequence-to-set method and employs the famous GRU model as the sequence generative model. To further obtain permutation invariant representation of sets, we devise the SetNN model which is also an instance of the sequence-to-set model. A direct application of our models is to learn an order/set distribution from a collection of e-commerce orders, which is an essential step in many important operational decisions such as inventory arrangement for fast delivery. Based on the intuition that small-sized sets are usually easier to learn than large sets, we propose a size-bias trick that can help learn better set distributions with respect to the $\ell_1$-distance evaluation metric. Two e-commerce order datasets, TMALL and HKTVMALL, are used to conduct extensive experiments to show the effectiveness of our models. The experimental results demonstrate that our models can learn better set/order distributions from order data than the baselines. Moreover, no matter what model we use, applying the size-bias trick can always improve the quality of the set distribution learned from data.
Accept
Reviewers found the problem to be well-motivated and the results convincing. They objected to the absence of transformer-based models, to the difficulty of outperforming the strong histogram baseline, and to the presentation of the key size-bias trick. The authors addressed some of the issues raised, and all reviewers rated weak accept.
train
[ "JGlAKraRpet", "6jiw1zM7hy6", "_Zw4Hr7xH64", "OI508G66B-6", "HFH_9n_2XHq", "5oif1Rb3lfM", "rAJEQcEzgy5", "hSu22E9K6Sc", "YY8xmje4XU", "smNx1OvhdK1", "g1SSXcpjz6h", "sc7muComNvL", "DkJaA13t5Et", "M_7VhfyvAqR", "_AM_2bQxsf_", "eu82e41x_iW", "di1-o85LbNF", "8PUNnLnldSC" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. Unfortunately, we received an email saying that we are not allowed to answer reviewers' questions now. We would be happy to reply to your new comment if the PC chairs, the senior AC, and the AC allow us to do so.", " I would like to thank the authors for their response to my review ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 1 ]
[ "6jiw1zM7hy6", "rAJEQcEzgy5", "eu82e41x_iW", "nips_2022_0dt8wdYIAV", "8PUNnLnldSC", "di1-o85LbNF", "eu82e41x_iW", "di1-o85LbNF", "8PUNnLnldSC", "8PUNnLnldSC", "8PUNnLnldSC", "di1-o85LbNF", "eu82e41x_iW", "eu82e41x_iW", "eu82e41x_iW", "nips_2022_0dt8wdYIAV", "nips_2022_0dt8wdYIAV", ...
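The sequence-to-set idea in the abstract above assigns a set the total probability of all orderings under a sequence model. As a hedged toy sketch (our own construction with a trivial chain model standing in for the GRU; the brute-force sum below is exactly what the paper's importance-sampling estimator avoids for larger sets):

```python
import itertools
import numpy as np

def seq_prob(seq, start, trans, stop):
    """Probability that a toy chain model generates `seq` and then stops."""
    p = start[seq[0]]
    for a, b in zip(seq, seq[1:]):
        p *= trans[a, b]
    return p * stop[seq[-1]]

def set_prob(items, start, trans, stop):
    """P(set) = sum of P(sequence) over all orderings of the set.  This brute
    force is factorial in the set size; the paper's importance-sampling
    estimator makes the sum tractable in practice."""
    return sum(seq_prob(perm, start, trans, stop)
               for perm in itertools.permutations(items))

# Uniform toy model over 3 items.
start = np.full(3, 1 / 3)
trans = np.full((3, 3), 1 / 3)
stop = np.full(3, 0.5)
p = set_prob((0, 1), start, trans, stop)  # two orderings, each (1/3)(1/3)(1/2)
```

Swapping the chain model for any sequence generative model trained by maximum likelihood (e.g., a GRU, as in GRU2Set) leaves `set_prob` unchanged, which is the sense in which the construction is model-agnostic.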
nips_2022_TPOJzwv2pc
Active Exploration for Inverse Reinforcement Learning
Inverse Reinforcement Learning (IRL) is a powerful paradigm for inferring a reward function from expert demonstrations. Many IRL algorithms require a known transition model and sometimes even a known expert policy, or they at least require access to a generative model. However, these assumptions are too strong for many real-world applications, where the environment can be accessed only through sequential interaction. We propose a novel IRL algorithm: Active exploration for Inverse Reinforcement Learning (AceIRL), which actively explores an unknown environment and expert policy to quickly learn the expert’s reward function and identify a good policy. AceIRL uses previous observations to construct confidence intervals that capture plausible reward functions and find exploration policies that focus on the most informative regions of the environment. AceIRL is the first approach to active IRL with sample-complexity bounds that does not require a generative model of the environment. AceIRL matches the sample complexity of active IRL with a generative model in the worst case. Additionally, we establish a problem-dependent bound that relates the sample complexity of AceIRL to the suboptimality gap of a given IRL problem. We empirically evaluate AceIRL in simulations and find that it significantly outperforms more naive exploration strategies.
Accept
This submission is solid as a work that introduces a novel IRL approach and analyzes its theoretical underpinnings. Its evaluation is limited to toy problems, though, and the metareviewer suggests reinforcing it with more complicated benchmarks, e.g., the low-dimensional MuJoCo-based ones used in D4RL, to demonstrate that it works well even in continuous-state/-action settings.
train
[ "GM0ecTHqXb4", "yX4VPIBtjx", "LyxgbdKk6ag", "xkS66Lj-2lL", "FVaNbQ8XLVM", "RhcunMnS85f", "760FjGu33Uz", "EpwMLPX1UO", "FraCFVPvZss" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and clarifications. The paper would benefit from including this discussion on how Active IRL fits with standard IRL. The author's responses have addressed both of my stated weaknesses therefore I raise my score.", " We thank the reviewer for the comments. We answer to the “Weakness” b...
[ -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 2, 4, 3, 3 ]
[ "FVaNbQ8XLVM", "FraCFVPvZss", "EpwMLPX1UO", "760FjGu33Uz", "RhcunMnS85f", "nips_2022_TPOJzwv2pc", "nips_2022_TPOJzwv2pc", "nips_2022_TPOJzwv2pc", "nips_2022_TPOJzwv2pc" ]
nips_2022_a7-YO5NJGyp
A Universal Error Measure for Input Predictions Applied to Online Graph Problems
We introduce a novel measure for quantifying the error in input predictions. The error is based on a minimum-cost hyperedge cover in a suitably defined hypergraph and provides a general template which we apply to online graph problems. The measure captures errors due to absent predicted requests as well as unpredicted actual requests; hence, predicted and actual inputs can be of arbitrary size. We achieve refined performance guarantees for previously studied network design problems in the online-list model, such as Steiner tree and facility location. Further, we initiate the study of learning-augmented algorithms for online routing problems, such as the online traveling salesperson problem and the online dial-a-ride problem, where (transportation) requests arrive over time (online-time model). We provide a general algorithmic framework and we give error-dependent performance bounds that improve upon known worst-case barriers, when given accurate predictions, at the cost of slightly increased worst-case bounds when given predictions of arbitrary quality.
Accept
"Algorithms via (ML-based) predictions"---especially for online problems---is a young, fast-growing, important area. Of course, the predictions will usually not be perfect and will involve some sort of error. As this area is nascent, it is vital to develop and analyze different forms of error and for various fundamental problems, which this paper does well. In particular, this work develops a new notion of error for two types of "metric" problems in the above genre: online TSP and Dial-a-Ride, and online Steiner tree/forest. The first type has arrivals over continuous time, while the second has an online request-sequence (as is typical in the algorithmic study of online problems). The error measure addresses some shortcomings of previous measures, and compares to recent works from Xu et al. (AAAI '22) and Azar et al. (SODA '22). Xu et al. use set-differences to characterize error; the non-erroneous part of the prediction has to exactly match the input locations. Azar et al. relax this by allowing the common set between predictions and actual input to not match exactly and use the cost of a min-cost matching between these points to quantify the extent of the match. The gap addressed in the present paper is that these two types of works force the cardinality of the common parts of the predicted and actual sets to be equal. The present paper's error measure essentially replaces the matching with hyperedges e that have one vertex on one side (predicted or actual) and multiple vertices on the other side; the cost of e is defined based on the problem. This paper develops the following results parametrized by this error: for online TSP and Dial-a-Ride---online algorithms that degrade gracefully with the error; for Steiner tree/forest---showing that the algorithm of Azar et al. does indeed degrade gracefully with this error. The paper was generally appreciated by the reviewers; the authors are encouraged to take the review comments into account.
train
[ "dW1gfgT6Xs", "vuqzS5o5lpM", "HEHRHIDl5Jw", "LtLDEMduB_b", "PATud6IcfaJ", "K-9uUDhUYzi", "kIi1vYUzj80", "hrSvAKdpZxm", "YLeq30aaXyB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response! I agree with the big-picture that this is an interesting notion of error that addresses some gaps in the existing literature. I'm not entirely convinced that the cover error is the \"right\" notion of error in general but am sympathetic to the fact that this is very subjective and think ...
[ -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "LtLDEMduB_b", "YLeq30aaXyB", "hrSvAKdpZxm", "kIi1vYUzj80", "K-9uUDhUYzi", "nips_2022_a7-YO5NJGyp", "nips_2022_a7-YO5NJGyp", "nips_2022_a7-YO5NJGyp", "nips_2022_a7-YO5NJGyp" ]
nips_2022_xpdaDM_B4D
FeLMi : Few shot Learning with hard Mixup
Learning from a few examples is a challenging computer vision task. Traditionally, meta-learning-based methods have shown promise towards solving this problem. Recent approaches show benefits by learning a feature extractor on the abundant base examples and transferring these to the fewer novel examples. However, the finetuning stage is often prone to overfitting due to the small size of the novel dataset. To this end, we propose Few shot Learning with hard Mixup (FeLMi), which uses manifold mixup to synthetically generate samples that help mitigate the data scarcity issue. Different from a naïve mixup, our approach selects the hard mixup samples using an uncertainty-based criterion. To the best of our knowledge, we are the first to use hard-mixup for the few-shot learning problem. Our approach allows better use of the pseudo-labeled base examples through base-novel mixup and entropy-based filtering. We evaluate our approach on several common few-shot benchmarks - FC-100, CIFAR-FS, miniImageNet and tieredImageNet - and obtain improvements in both 1-shot and 5-shot settings. Additionally, we experiment on the cross-domain few-shot setting (miniImageNet → CUB) and obtain significant improvements.
Accept
The submission introduces an approach to few-shot learning called Few-Shot Learning with Hard Mixup (FeLMi) which, as its name suggests, applies hard manifold mixup as an augmentation strategy for adapting a pre-trained model to a small training set of downstream examples. The model is first trained on the base classes using a combination of supervised learning and Invariant and Equivariant Representation learning (IER), then a linear classifier is trained on top of the frozen backbone using the novel classes' support set and pseudolabels are generated for the entire base class dataset. Base class examples are filtered to exclude ones with low pseudolabel entropy (using a thresholding hyperparameter). Feature-level mixup is applied to base-novel and novel-novel example pairs, and the resulting examples are subsampled to the N hardest ones based on the difference in top-2 probabilities. The model is then fine-tuned on the pseudolabeled base examples, novel examples, and hard-mixup examples. Results are presented on two CIFAR100-based few-shot classification benchmarks (CIFAR-FS, FC-100) and mini-ImageNet in the 5-way, 1-shot and 5-way, 5-shot settings. FeLMi is shown to outperform competing approaches. Ablation analyses are also presented to assess the contribution of various components on performance improvements. Reviewers highlight the submission's writing quality and clarity (7gPu, 2Qz6, WHpk). Opinions are split on how straightforward the proposed approach is, with Reviewers 3CtS and WHpk noting its simplicity, and Reviewer 2Qz6 expressing concerns over its many moving parts. Opinions are also split on the significance of the performance improvements; Reviewer 7gPu finds FeLMi's performance competitive with competing approaches, and Reviewers 3CtS and WHpk are concerned that the improvements are modest. The authors respond by emphasizing that FeLMi is simple and effective, but Reviewer 3CtS remains eager to see a clearer performance gap. Reviewer 3CtS is also concerned that the approach is not source-free, to which the authors respond that the unlabeled data could also come from another source than the upstream training dataset. Following the discussions, opinions remain divided among reviewers, although the majority is either leaning towards or strongly recommending acceptance. Reviewer 3CtS still recommends rejection, but is open to an acceptance recommendation. I therefore recommend acceptance.
train
[ "n1Rqh3xqime", "Ic6UkgvwEqS", "jkn41iki864", "YddhhbMj3Xr", "NA24IrD-Os", "pWyQxaRScE2", "S56ZldchXEm", "ITMwLAnQfXF", "tneoCN6IQI-", "YcZBvlvr4Q", "cXTWiToLdTS", "zvttJiHskcT5", "XTB-8jP9a-b", "8e2K0OOTBH", "5EiiKQZ3QNY", "_5-EPmcSl2r", "X_r2wgmRnVV", "NG6hSuBVbj" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for appreciating our effort. We would like to clarify some more concerns that the reviewer might have.\n\n**Q1. [Complexity of the model]**\n\nAns: We agree that our proposed technique has several components. However, we want to point out that the components e.g., entropy filtering, hard mix...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "Ic6UkgvwEqS", "tneoCN6IQI-", "YddhhbMj3Xr", "XTB-8jP9a-b", "pWyQxaRScE2", "S56ZldchXEm", "NG6hSuBVbj", "NG6hSuBVbj", "YcZBvlvr4Q", "X_r2wgmRnVV", "zvttJiHskcT5", "_5-EPmcSl2r", "5EiiKQZ3QNY", "nips_2022_xpdaDM_B4D", "nips_2022_xpdaDM_B4D", "nips_2022_xpdaDM_B4D", "nips_2022_xpdaDM_B...
nips_2022_--fdtqo-iKM
Reinforcement Learning in a Birth and Death Process: Breaking the Dependence on the State Space
In this paper, we revisit the regret of undiscounted reinforcement learning in MDPs with a birth and death structure. Specifically, we consider a controlled queue with impatient jobs and the main objective is to optimize a trade-off between energy consumption and user-perceived performance. Within this setting, the diameter $D$ of the MDP is $\Omega(S^S)$, where $S$ is the number of states. Therefore, the existing lower and upper bounds on the regret at time $T$, of order $O (\sqrt{DSAT})$ for MDPs with $S$ states and $A$ actions, may suggest that reinforcement learning is inefficient here. In our main result however, we exploit the structure of our MDPs to show that the regret of a slightly-tweaked version of the classical learning algorithm UCRL2 is in fact upper bounded by $\tilde{\mathcal{O}} (\sqrt{E_2AT})$ where $E_2$ is a weighted second moment of the stationary measure of a reference policy. Importantly, $E_2$ is bounded independently of $S$. Thus, our bound is asymptotically independent of the number of states and of the diameter. This result is based on a careful study of the number of visits performed by the learning algorithm to the states of the MDP, which is highly non-uniform.
Accept
This paper studies reinforcement learning in a restrictive set of MDPs. It showed that a tweaked version of the classic UCRL2 algorithm achieves an upper bound independent of the diameter and state size. This bound is obtained by carefully analyzing the stationary measure for this restricted MDP class and exploiting its high non-uniformity. It is an important problem to identify structure in the MDP that enables efficient learning.
train
[ "Ph-3SoLGLun8", "MYukyWCwtAy", "uCVGWzhDh-", "KpfZkFeCOEm", "XJODEtgHRqH", "H6RUNPsEVnZ", "1BXGoqiMcG", "PvvaLmINn5d", "KBkTxotWPBq", "zafrLo7W4q", "3s0aze7khEX" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your reply! After reading other reviews and replies, I would like to keep my score. ", " Thanks for your reply. Now I get that the main contribution is to provide a new problem-dependent measure, at least for the Birth and Death process. I have increased my score from 3 to 5. ", " Although this was...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "KpfZkFeCOEm", "XJODEtgHRqH", "zafrLo7W4q", "zafrLo7W4q", "3s0aze7khEX", "KBkTxotWPBq", "PvvaLmINn5d", "nips_2022_--fdtqo-iKM", "nips_2022_--fdtqo-iKM", "nips_2022_--fdtqo-iKM", "nips_2022_--fdtqo-iKM" ]
nips_2022_FCNMbF_TsKm
Meta-Learning with Self-Improving Momentum Target
The idea of using a separately trained target model (or teacher) to improve the performance of the student model has been increasingly popular in various machine learning domains, and meta-learning is no exception; a recent discovery shows that utilizing task-wise target models can significantly boost the generalization performance. However, obtaining a target model for each task can be highly expensive, especially when the number of tasks for meta-learning is large. To tackle this issue, we propose a simple yet effective method, coined Self-improving Momentum Target (SiMT). SiMT generates the target model by adapting from the temporal ensemble of the meta-learner, i.e., the momentum network. This momentum network and its task-specific adaptations enjoy a favorable generalization performance, enabling self-improving of the meta-learner through knowledge distillation. Moreover, we found that perturbing parameters of the meta-learner, e.g., dropout, further stabilize this self-improving process by preventing fast convergence of the distillation loss during meta-training. Our experimental results demonstrate that SiMT brings a significant performance gain when combined with a wide range of meta-learning methods under various applications, including few-shot regression, few-shot classification, and meta-reinforcement learning. Code is available at https://github.com/jihoontack/SiMT.
Accept
This submission proposes a strategy to improve meta-learning that can be applied to many different base meta-learning methods. The base meta-learning method is used to independently adapt both the online network whose parameters are being optimized and a momentum network constructed by taking an exponential moving average of the online network's weights, and a distillation loss is used to encourage the adapted online network (with dropout on its parameters) to match the adapted momentum network. Extensive experiments demonstrate that the proposed method improves performance of several base meta-learning methods, and that each component of the method is necessary to attain optimal performance. Reviewers initially praised the idea and empirical evaluation, but noted that the results obtained were far from state-of-the-art, and asked for additional experiments with better-performing base meta-learning methods. The authors provided these experiments during the response period, and the reviewers now unanimously recommend acceptance. The AC agrees with the reviewers' assessment.
train
[ "1bGaI7zb9lN", "HJUTfrlKEg", "_Q04ZAQjTn", "N-HNdd4KKkt", "gkWsZ5lJrqL9", "OpHgkLII4Vh", "P7124_vdjGU", "YhBToDWIdO", "CPLYzldY8Jtw", "pOp-qxinnIT", "vudIhTNNcTv", "TL5r3fd1DDK", "auiuvIWU7m4", "VwXvp7Dy1tC", "yo1v5LICjom", "nozw_xmlIG", "XQoQUiLodSa", "Kg4n0DHxy5", "rdNnH9f8LQQ"...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for letting us know! We are happy to hear that our rebuttal addressed your concerns well.\n\nThank you very much,\\\nAuthors", " The additional results have addressed my concerns. I have increased my rating.\n", " Dear reviewers,\n\nThank you for your time and efforts again in reviewing our paper.\n...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "HJUTfrlKEg", "CPLYzldY8Jtw", "nips_2022_FCNMbF_TsKm", "gkWsZ5lJrqL9", "yo1v5LICjom", "nips_2022_FCNMbF_TsKm", "nozw_xmlIG", "nozw_xmlIG", "XQoQUiLodSa", "XQoQUiLodSa", "XQoQUiLodSa", "Kg4n0DHxy5", "Kg4n0DHxy5", "Kg4n0DHxy5", "rdNnH9f8LQQ", "nips_2022_FCNMbF_TsKm", "nips_2022_FCNMbF_...
nips_2022_p9lC_i9WeFE
Generalization Analysis of Message Passing Neural Networks on Large Random Graphs
Message passing neural networks (MPNN) have seen a steep rise in popularity since their introduction as generalizations of convolutional neural networks to graph-structured data, and are now considered state-of-the-art tools for solving a large variety of graph-focused problems. We study the generalization error of MPNNs in graph classification and regression. We assume that graphs of different classes are sampled from different random graph models. We show that, when training a MPNN on a dataset sampled from such a distribution, the generalization gap increases in the complexity of the MPNN, and decreases, not only with respect to the number of training samples, but also with the average number of nodes in the graphs. This shows how a MPNN with high complexity can generalize from a small dataset of graphs, as long as the graphs are large. The generalization bound is derived from a uniform convergence result, that shows that any MPNN, applied on a graph, approximates the MPNN applied on the geometric model that the graph discretizes.
Accept
This paper gives a new approximation and generalization error bound for a class of MPNN (Message Passing Neural Networks) in a setting where the underlying graph is randomly generated. First, a discretization error from the continuous limit is given, and second the generalization error on a finite training set is given. Some numerical experiments are also given as an empirical evaluation of the theory. Overall, this is a solid theoretical work with sufficient novelty. The rate of convergence is new and the community would benefit from the analysis. The presentation is also good. The readers can grasp the overall contribution rather easily and the theoretical results are also clearly described. The major weakness of this paper is the numerical experiments. However, combined with the theoretical contribution, this paper has enough value. The authors properly responded to the reviewers' questions. An additional experimental result is also given. I recommend that the authors properly include the additional experiment to enhance the empirical evaluation. In summary, this paper gives a good contribution and can be accepted. # minor point: (1) The citation style is not of NeurIPS standard. Please look at the author instructions. (2) The abstract is presented in the Italic style. However, the standard format is the Roman style. I recommend fixing it.
train
[ "rc3gRXIg2z_", "ddY5UOFhORP", "t6kri-i5T8F", "I0YdkSZwdQB", "3bl7eMUKBA9", "1msDD1JuCl2", "mg53JSQrMFk", "g41OWECqGwl", "5NOlbdVfwjT", "czjRsq4wufM", "p9hGjQHMY2S", "TfcXzoDUE1a" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed replies! The explanation for the importance of different assumptions helps me understand the paper better. My concerns have been addressed. And I will raise my score accordingly. ", " Dear Reviewer 6gwZ ,\n\nWe would like to kindly encourage you to read and comment about our rebuttal. W...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 3 ]
[ "mg53JSQrMFk", "p9hGjQHMY2S", "1msDD1JuCl2", "g41OWECqGwl", "1msDD1JuCl2", "TfcXzoDUE1a", "p9hGjQHMY2S", "czjRsq4wufM", "nips_2022_p9lC_i9WeFE", "nips_2022_p9lC_i9WeFE", "nips_2022_p9lC_i9WeFE", "nips_2022_p9lC_i9WeFE" ]
nips_2022_uyEYNg2HHFQ
Hyper-Representations as Generative Models: Sampling Unseen Neural Network Weights
Learning representations of neural network weights given a model zoo is an emerging and challenging area with many potential applications from model inspection, to neural architecture search or knowledge distillation. Recently, an autoencoder trained on a model zoo was able to learn a hyper-representation, which captures intrinsic and extrinsic properties of the models in the zoo. In this work, we extend hyper-representations for generative use to sample new model weights. We propose layer-wise loss normalization which we demonstrate is key to generate high-performing models and several sampling methods based on the topology of hyper-representations. The models generated using our methods are diverse, performant and capable of outperforming strong baselines as evaluated on several downstream tasks: initialization, ensemble sampling and transfer learning. Our results indicate the potential of knowledge aggregation from model zoos to new models via hyper-representations thereby paving the avenue for novel research directions.
Accept
This paper learns hyper-representations to generate parameters of neural networks. The authors propose layer-wise loss normalization in the generation process. The proposed method is demonstrated on several tasks. The paper received two positive reviews and one negative review. Reviewer afuw was not convinced by the usefulness of the proposed method, and initially recommended rejection. The authors did address some key concerns of this reviewer in the rebuttal with added experiments. This reviewer raised the rating to borderline reject after reading the rebuttal, but remained concerned with relatively simple architectures and relatively small datasets. The authors further explained that this is currently the norm in this line of research. The authors addressed most of the concerns of the other two reviewers in their rebuttals. Overall, I feel this is an interesting exploration. Meanwhile I do share the concern of Reviewer afuw.
train
[ "h9OlAjdBpo0", "hSf4pdCfUs", "nQAR2oQJhZV", "05aNgi8H_fc", "_bAlvGAgo2r", "bP4MREqLmyW", "6qNLQ1dpYDV", "KDswL8-zo4b", "UZzXg2xIm36", "2d3WS_vkm6_", "I1jidie4sqR", "Tviry-8UQsa" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank reviewer “afuw” for the feedback, we highly appreciate the response and score upgrade as well as the acknowledgement of the additional experiments. We would like to clarify the remaining three points. \n\n\n>1. “but those million parameter networks are what people actually use in practice.” \n\nWe unde...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "05aNgi8H_fc", "nQAR2oQJhZV", "_bAlvGAgo2r", "6qNLQ1dpYDV", "Tviry-8UQsa", "I1jidie4sqR", "2d3WS_vkm6_", "UZzXg2xIm36", "nips_2022_uyEYNg2HHFQ", "nips_2022_uyEYNg2HHFQ", "nips_2022_uyEYNg2HHFQ", "nips_2022_uyEYNg2HHFQ" ]
nips_2022_HOG-G4arLnU
Isometric 3D Adversarial Examples in the Physical World
Recently, several attempts have demonstrated that 3D deep learning models are as vulnerable to adversarial example attacks as 2D models. However, these methods are still far from stealthy and suffer from severe performance degradation in the physical world. Although 3D data is highly structured, it is difficult to bound the perturbations with simple metrics in the Euclidean space. In this paper, we propose a novel $\epsilon$-isometric ($\epsilon$-ISO) attack method to generate natural and robust 3D adversarial examples in the physical world by considering the geometric properties of 3D objects and the invariance to physical transformations. For naturalness, we constrain the adversarial example and the original one to be $\epsilon$-isometric by adopting the Gaussian curvature as the surrogate metric under a theoretical analysis. For robustness under physical transformations, we propose a maxima over transformation (MaxOT) method to actively search for the most difficult transformations rather than random ones to make the generated adversarial example more robust in the physical world. Extensive experiments on typical point cloud recognition models validate that our approach can improve the attack success rate and naturalness of the generated 3D adversarial examples than the state-of-the-art attack methods.
Accept
The authors propose a new method for constructing adversarial distortions of 3d objects. The method centers on a new distance metric between 3D objects which the authors argue is better suited for natural looking perturbations relative to the Euclidean metric. Reviewers overall felt the paper was strong, the results convincing and the method new and interesting. The primary concerns were raised by reviewer WZCq, who questioned whether or not the method can be expected to have practical impact. The AC agrees with these concerns, particularly given that most documented examples of attacks on real systems do not involve any notion of subtleness, nor do attackers seem motivated to constrain adversarial inputs to small perturbations of clean inputs [1]. However, given the strong technical contribution of the work and the overall strong reviews, the AC recommends accepting the work but encourages authors to consider revising the text noting that epsilon (or small, or subtle) perturbations is in no means a strict constraint on the attacker action space in real world settings. 1. Gilmer et. al. "Motivating the rules of the game for adversarial example research", 2018.
train
[ "KAs_NhBsoV6", "dxNR18e7c_", "LQ-lmnKnp4E", "igCfHnDIGUY", "KKP-BMoK4PD", "vRJLnXVe0Zt", "oNajGEYNl41", "Q4tYjdAPlZF", "Q8PX0HvIhBq", "9spcPBXydmV", "X-KUdPOlUB8", "z5WH6sG7m0t", "M-9aYjY2I1Y", "MZoLgOuSGlI" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Appreciate for spending considerable time on our paper. We would also like to remind you that you have not updated your rating in the OpenReview system. Thanks!", " Dear Authors,\n\nThanks for your response, it addressed my concerns. I will update my rating to reflect this.", " Dear Reviewer mBRv,\n\nThanks a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "dxNR18e7c_", "X-KUdPOlUB8", "z5WH6sG7m0t", "vRJLnXVe0Zt", "nips_2022_HOG-G4arLnU", "Q4tYjdAPlZF", "MZoLgOuSGlI", "M-9aYjY2I1Y", "z5WH6sG7m0t", "z5WH6sG7m0t", "z5WH6sG7m0t", "nips_2022_HOG-G4arLnU", "nips_2022_HOG-G4arLnU", "nips_2022_HOG-G4arLnU" ]
nips_2022_RJemsN3V_kt
Uncovering the Structural Fairness in Graph Contrastive Learning
Recent studies show that graph convolutional network (GCN) often performs worse for low-degree nodes, exhibiting the so-called structural unfairness for graphs with long-tailed degree distributions prevalent in the real world. Graph contrastive learning (GCL), which marries the power of GCN and contrastive learning, has emerged as a promising self-supervised approach for learning node representations. How does GCL behave in terms of structural fairness? Surprisingly, we find that representations obtained by GCL methods are already fairer to degree bias than those learned by GCN. We theoretically show that this fairness stems from intra-community concentration and inter-community scatter properties of GCL, resulting in a much clear community structure to drive low-degree nodes away from the community boundary. Based on our theoretical analysis, we further devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes. Extensive experiments on various benchmarks and evaluation protocols validate the effectiveness of the proposed method.
Accept
This paper identifies a fairness problem in graph contrastive learning (GCL), i.e., GCN often performs poorly for low-degree nodes. The key to solving this problem is the observation that GCL can offer fairer representations for both low- and high-degree nodes. Authors also support their claims with theoretical analysis. All reviewers appreciate the contributions made by this submission. It is suggested to simplify the notation and make the theorems self-contained in the final version.
val
[ "w7K7Rvd6OjC", "jxW3FpJn6OM", "Rr3SfkG3_iK", "M1huepnxs9B", "v0UYkY9idC", "cGYW1_fTJau", "sfOPK4IxGqH", "FaUYrGGcNoq", "BdM4Al8kNyC", "F4tHSMDyppI", "6nbwisTc_Si" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I think most of my concerns are addressed. For my first question, it would be good if the authors can provide more evidence on the correlation to make it more convincing. Also, it makes more convincing and clear to explicitly write out the underlying assumption of homophily (or include...
[ -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "cGYW1_fTJau", "v0UYkY9idC", "6nbwisTc_Si", "F4tHSMDyppI", "BdM4Al8kNyC", "sfOPK4IxGqH", "FaUYrGGcNoq", "nips_2022_RJemsN3V_kt", "nips_2022_RJemsN3V_kt", "nips_2022_RJemsN3V_kt", "nips_2022_RJemsN3V_kt" ]
nips_2022_w0QoqmUT9vJ
Ordered Subgraph Aggregation Networks
Numerous subgraph-enhanced graph neural networks (GNNs) have emerged recently, provably boosting the expressive power of standard (message-passing) GNNs. However, there is a limited understanding of how these approaches relate to each other and to the Weisfeiler-Leman hierarchy. Moreover, current approaches either use all subgraphs of a given size, sample them uniformly at random, or use hand-crafted heuristics instead of learning to select subgraphs in a data-driven manner. Here, we offer a unified way to study such architectures by introducing a theoretical framework and extending the known expressivity results of subgraph-enhanced GNNs. Concretely, we show that increasing subgraph size always increases the expressive power and develop a better understanding of their limitations by relating them to the established $k\mathsf{\text{-}WL}$ hierarchy. In addition, we explore different approaches for learning to sample subgraphs using recent methods for backpropagating through complex discrete probability distributions. Empirically, we study the predictive performance of different subgraph-enhanced GNNs, showing that our data-driven architectures increase prediction accuracy on standard benchmark datasets compared to non-data-driven subgraph-enhanced graph neural networks while reducing computation time.
Accept
The paper considered subgraph-enhanced graph neural networks that provably boost the expressive power of standard message-passing graph neural networks. It proposed a theoretical framework for studying the expressive power of these GNNs and the relation to the Weisfeiler-Leman hierarchy and extended the existing results. It then addressed the limitation of subgraph sampling in existing methods, exploring different sampling approaches using state-of-the-art data-driven methods. Empirical results showed data-driven architectures increase prediction accuracy and reduce computation time. The work has made novel and solid contributions: theoretical analysis of the expressive power of subgraph-enhanced GNNs, data-driven approaches for sampling subgraphs, and strong performance of the proposed approaches. The authors have addressed well the comments by the reviewers during the response period and strengthened the work.
train
[ "f52x-52xf6s", "glwNiAQFdQ", "zGw_ZqntOYD", "Tfv4rOGZd02", "hXH6yT8HQdq", "gHfKsyDzoe7", "c4udqYND4R7", "nbLNV52EVH7", "N9esVbG4fE", "mCfKeXMCD3L", "5cZZ9F0FYU5", "VqyDr-nq3n", "MGyn448WRs", "IeUt_UuAILE", "M8pyOUd6iy9" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the authors for the quick and thorough response to the reviews. These discussed clarifications will make the paper more clear. Also, the added experiments are a plus as ablations and for understanding the properties of various proposed aspects of the models. I think this work is a great contribution, and...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 3 ]
[ "N9esVbG4fE", "5cZZ9F0FYU5", "Tfv4rOGZd02", "hXH6yT8HQdq", "nips_2022_w0QoqmUT9vJ", "c4udqYND4R7", "nbLNV52EVH7", "MGyn448WRs", "M8pyOUd6iy9", "IeUt_UuAILE", "VqyDr-nq3n", "nips_2022_w0QoqmUT9vJ", "nips_2022_w0QoqmUT9vJ", "nips_2022_w0QoqmUT9vJ", "nips_2022_w0QoqmUT9vJ" ]
nips_2022__P4JCoz83Mb
Distilling Representations from GAN Generator via Squeeze and Span
In recent years, generative adversarial networks (GANs) have been an actively studied topic and shown to successfully produce high-quality realistic images in various domains. The controllable synthesis ability of GAN generators suggests that they maintain informative, disentangled, and explainable image representations, but leveraging and transferring their representations to downstream tasks is largely unexplored. In this paper, we propose to distill knowledge from GAN generators by squeezing and spanning their representations. We \emph{squeeze} the generator features into representations that are invariant to semantic-preserving transformations through a network before they are distilled into the student network. We \emph{span} the distilled representation of the synthetic domain to the real domain by also using real training data to remedy the mode collapse of GANs and boost the student network performance in a real domain. Experiments justify the efficacy of our method and reveal its great significance in self-supervised representation learning. Code is available at https://github.com/yangyu12/squeeze-and-span.
Accept
While there were initial concerns about the tasks used for evaluation, the authors conducted extensive extra experiments and addressed many of these concerns.
train
[ "dA-AP-e4l85", "arbowjm05T", "xLYvgSDsh_", "ba7p2hXGo6X", "Al3O438rXA", "hNdmqMXV3Sy", "N1ye9ef0J-v", "7_-8MIxzSkrk", "cIU9Ov8Dc9m", "yQqiRLiKA37", "eosmnAZi85d", "2su8U9EqP0K", "CskyedSm2pC", "-cuVQdTzAQd" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Please check the further response to reviewer pYVQ and our revised submission for additional ImageNet results. We hope these results could address your concern.", " **1. Experiments with different GAN architectures (further response to question 3)**\n> 3: Studying the generator features of different GAN archite...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "7_-8MIxzSkrk", "hNdmqMXV3Sy", "hNdmqMXV3Sy", "hNdmqMXV3Sy", "N1ye9ef0J-v", "yQqiRLiKA37", "nips_2022__P4JCoz83Mb", "-cuVQdTzAQd", "CskyedSm2pC", "CskyedSm2pC", "2su8U9EqP0K", "nips_2022__P4JCoz83Mb", "nips_2022__P4JCoz83Mb", "nips_2022__P4JCoz83Mb" ]
nips_2022_MwSXgQSxL5s
Provably expressive temporal graph networks
Temporal graph networks (TGNs) have gained prominence as models for embedding dynamic interactions, but little is known about their theoretical underpinnings. We establish fundamental results about the representational power and limits of the two main categories of TGNs: those that aggregate temporal walks (WA-TGNs), and those that augment local message passing with recurrent memory modules (MP-TGNs). Specifically, novel constructions reveal the inadequacy of MP-TGNs and WA-TGNs, proving that neither category subsumes the other. We extend the 1-WL (Weisfeiler-Leman) test to temporal graphs, and show that the most powerful MP-TGNs should use injective updates, as in this case they become as expressive as the temporal WL. Also, we show that sufficiently deep MP-TGNs cannot benefit from memory, and MP/WA-TGNs fail to compute graph properties such as girth. These theoretical insights lead us to PINT --- a novel architecture that leverages injective temporal message passing and relative positional features. Importantly, PINT is provably more expressive than both MP-TGNs and WA-TGNs. PINT significantly outperforms existing TGNs on several real-world benchmarks.
Accept
This paper considered temporal graph networks (TGNs). It first analyzed the representational power and limits of the two main categories of TGNs (WA-TGN and MP-TGN), proving neither category subsumes the other. It extended the 1-WL (Weisfeiler-Leman) test to TGNs and showed when TGNs become as expressive as the temporal WL. It also showed that sufficiently deep MP-TGNs cannot benefit from memory, and MP-TGNs fail to compute graph properties such as girth. Based on the theoretical results, it proposed a provably more expressive TGN called PINT and showed that it outperforms existing methods on several real-world benchmarks. The work has made solid and novel contributions, including theoretical studies of the expressive power of TGNs, a new framework, and strong empirical performance of the proposed framework. The authors also addressed well the comments from the reviewers and further strengthened the work during the response period.
train
[ "TKmV_cXOBN0", "AdAFBVJplqk", "Y4RsL0iWVB", "VQLz13dFjXt", "2SCsSL88BUV", "eHhC_2wU3EK", "jDtNLZnipBh", "2y37Bal81zG", "gxY0OBfw3Sz", "w6K4skwhdsm", "613F34oqPPp", "41067V_HmuV", "CpJBBktqkoV", "pLaOUAAJ1-", "K1dQAzDk186", "CgmhO0mU4qP", "peh2Di2leLd", "GLN7KolKIJN", "6pzKsEOJ1Cv...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks! We appreciate your reply and support for our work.", " Thanks again for your detailed feedback. We have now uploaded a revised version of the manuscript to reflect our steps to address all your concerns. In addition, we conducted experiments to assess the performance of TGN-CAW [1] in our setting (same ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "Y4RsL0iWVB", "w6K4skwhdsm", "CpJBBktqkoV", "2SCsSL88BUV", "K1dQAzDk186", "nips_2022_MwSXgQSxL5s", "2y37Bal81zG", "pLaOUAAJ1-", "w6K4skwhdsm", "41067V_HmuV", "6pzKsEOJ1Cv", "6pzKsEOJ1Cv", "GLN7KolKIJN", "peh2Di2leLd", "CgmhO0mU4qP", "nips_2022_MwSXgQSxL5s", "nips_2022_MwSXgQSxL5s", ...
nips_2022_TkJIkNrzpNJ
Exploitability Minimization in Games and Beyond
Pseudo-games are a natural and well-known generalization of normal-form games, in which the actions taken by each player affect not only the other players' payoffs, as in games, but also the other players' strategy sets. The solution concept par excellence for pseudo-games is the generalized Nash equilibrium (GNE), i.e., a strategy profile at which each player's strategy is feasible and no player can improve their payoffs by unilaterally deviating to another strategy in the strategy set determined by the other players' strategies. The computation of GNE in pseudo-games has long been a problem of interest, due to applications in a wide variety of fields, from environmental protection to logistics to telecommunications. Although computing GNE is PPAD-hard in general, it is still of interest to try to compute them in restricted classes of pseudo-games. One approach is to search for a strategy profile that minimizes exploitability, i.e., the sum of the regrets across all players. As exploitability is nondifferentiable in general, developing efficient first-order methods that minimize it might not seem possible at first glance. We observe, however, that the exploitability-minimization problem can be recast as a min-max optimization problem, and thereby obtain polynomial-time first-order methods to compute a refinement of GNE, namely the variational equilibria (VE), in convex-concave cumulative regret pseudo-games with jointly convex constraints. More generally, we also show that our methods find the stationary points of the exploitability in polynomial time in Lipschitz-smooth pseudo-games with jointly convex constraints. Finally, we demonstrate in experiments that our methods not only outperform known algorithms, but that even in pseudo-games where they are not guaranteed to converge to a GNE, they may do so nonetheless, with proper initialization.
Accept
On the one hand, this paper does not suffer from major criticisms. On the other hand, the paper does not have a champion, as all the reviewers set their scores between Borderline and Weak Accept. The main weakness concerns the need to improve the presentation and clarify the contributions. I believe this issue can be addressed in a minor revision which does not require a further round of review. Therefore, I don't see crucial reasons to reject the paper.
train
[ "jBTdcXMGsInS", "L3_zpGaARC", "MK3p6dLZDVB", "z-3iETYrak7a", "fmSvCREriJC", "NdHVcgJ3eul", "hbFTXKVG_D5Q", "RD-HPDCK2ty", "mrpnucKqje", "HKTXCjjp4s", "-RsGxWrwlQF", "PYLKPphX04W" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the clarifications. Now the contributions are more clear. I think in that these are not well presented in the current version of the paper.\nHowever I'll raise my score accordingly.\nNonetheless, I think the paper needs to work on the exposition in order to make clearer such contributions,...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 2, 3 ]
[ "fmSvCREriJC", "z-3iETYrak7a", "NdHVcgJ3eul", "PYLKPphX04W", "-RsGxWrwlQF", "HKTXCjjp4s", "mrpnucKqje", "nips_2022_TkJIkNrzpNJ", "nips_2022_TkJIkNrzpNJ", "nips_2022_TkJIkNrzpNJ", "nips_2022_TkJIkNrzpNJ", "nips_2022_TkJIkNrzpNJ" ]
nips_2022_XBXEfw6OxRh
Black-box coreset variational inference
Recent advances in coreset methods have shown that a selection of representative datapoints can replace massive volumes of data for Bayesian inference, preserving the relevant statistical information and significantly accelerating subsequent downstream tasks. Existing variational coreset constructions rely on either selecting subsets of the observed datapoints, or jointly performing approximate inference and optimizing pseudodata in the observed space akin to inducing points methods in Gaussian Processes. So far, both approaches are limited by complexities in evaluating their objectives for general purpose models, and require generating samples from a typically intractable posterior over the coreset throughout inference and testing. In this work, we present a black-box variational inference framework for coresets that overcomes these constraints and enables principled application of variational coresets to intractable models, such as Bayesian neural networks. We apply our techniques to supervised learning problems, and compare them with existing approaches in the literature for data summarization and inference.
Accept
This paper bridges black-box probabilistic modeling to the use of variational 'pseudo coresets'. The reviewers seem to have reached a positive consensus and have engaged in a fruitful discussion during the rebuttal period. Overall, the contributions are solid and of interest to the community, but I recommend the authors take into consideration the remaining concerns raised by reviewer vKEG in preparing a thoughtful revision.
train
[ "6HBvDEjwIVP", "Fr3vIBu3qz", "9wIcxQD_BT", "PnThlJB-3h8", "w1v_1aEbZGz", "KOESR7w3r_Y", "Vigj1Kyrq_E", "3KfYTZpF8NO", "rSEiX_RLE0uB", "HMLFyzqQRF", "h3kFespFGA", "3qAm0cIIf2W", "D7d-TKv3O74", "QXTQiXw2A7y", "xj4T_RG4EK0", "PdJP6Q3nrxX", "8AYp0n82vOp" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response as well as your succinct summary of the contribution and the opportunities for future work in our paper. We are pleased to see that we have been able to address your core concerns with our additional experiments and that you would like to see our work featured at the conference. We wil...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "Fr3vIBu3qz", "h3kFespFGA", "PnThlJB-3h8", "w1v_1aEbZGz", "KOESR7w3r_Y", "HMLFyzqQRF", "3KfYTZpF8NO", "rSEiX_RLE0uB", "8AYp0n82vOp", "PdJP6Q3nrxX", "xj4T_RG4EK0", "QXTQiXw2A7y", "QXTQiXw2A7y", "nips_2022_XBXEfw6OxRh", "nips_2022_XBXEfw6OxRh", "nips_2022_XBXEfw6OxRh", "nips_2022_XBXEf...
nips_2022_yZcPRIZEwOG
Policy Optimization with Linear Temporal Logic Constraints
We study the problem of policy optimization (PO) with linear temporal logic (LTL) constraints. The language of LTL allows flexible description of tasks that may be unnatural to encode as a scalar cost function. We consider LTL-constrained PO as a systematic framework, decoupling task specification from policy selection, and an alternative to the standard of cost shaping. With access to a generative model, we develop a model-based approach that enjoys a sample complexity analysis for guaranteeing both task satisfaction and cost optimality (through a reduction to a reachability problem). Empirically, our algorithm can achieve strong performance even in low sample regimes.
Accept
We have three reviews with high scores but not high confidence (confidence 2,3,3). However, the reviews by pGRX and Kr29 seem fairly thorough with strong author responses. I was particularly satisfied with the author responses, as were these two reviewers. I have a lingering concern about whether the baselines for the Pacman and Mountain Car are strong enough. But the conceptual and theoretical contribution seems to warrant publication.
train
[ "vOs2M-W5bcG", "fg5K3Ad22Mo", "yX6XB_hZEii", "zO0XODQyPV", "fHTBgdV8K5s", "2EJN24IjSHR", "kqik836Sc4o", "pjpyfLBPhyF", "nJXxHLlAz3n", "U4F36hEg-Tx" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response, we appreciate your feedback.\n\nIndeed, one of the works you cite is related to full LTL and not dependent on reward machines, however that work is conceptually similar to LCRL in that it redefines reward as 1 if the LTL is solved and 0 otherwise. They do some reward shaping in the sam...
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "zO0XODQyPV", "yX6XB_hZEii", "2EJN24IjSHR", "fHTBgdV8K5s", "U4F36hEg-Tx", "nJXxHLlAz3n", "pjpyfLBPhyF", "nips_2022_yZcPRIZEwOG", "nips_2022_yZcPRIZEwOG", "nips_2022_yZcPRIZEwOG" ]
nips_2022_4d_tnQ_agHI
An Analytical Theory of Curriculum Learning in Teacher-Student Networks
In animals and humans, curriculum learning---presenting data in a curated order---is critical to rapid learning and effective pedagogy. A long history of experiments has demonstrated the impact of curricula in a variety of animals but, despite its ubiquitous presence, a theoretical understanding of the phenomenon is still lacking. Surprisingly, in contrast to animal learning, curricula strategies are not widely used in machine learning and recent simulation studies reach the conclusion that curricula are moderately effective or ineffective in most cases. This stark difference in the importance of curriculum raises a fundamental theoretical question: when and why does curriculum learning help? In this work, we analyse a prototypical neural network model of curriculum learning in the high-dimensional limit, employing statistical physics methods. We study a task in which a sparse set of informative features are embedded amidst a large set of noisy features. We analytically derive average learning trajectories for simple neural networks on this task, which establish a clear speed benefit for curriculum learning in the online setting. However, when training experiences can be stored and replayed (for instance, during sleep), the advantage of curriculum in standard neural networks disappears, in line with observations from the deep learning literature. Inspired by synaptic consolidation techniques developed to combat catastrophic forgetting, we investigate whether consolidating synapses at curriculum change points can boost the benefits of curricula. We derive generalisation performance as a function of consolidation strength (implemented as a Gaussian prior connecting learning phases), and show that this consolidation mechanism can yield a large improvement in test performance. 
Our reduced analytical descriptions help reconcile apparently conflicting empirical results, trace regimes where curriculum learning yields the largest gains, and provide experimentally-accessible predictions for the impact of task parameters on curriculum benefits. More broadly, our results suggest that fully exploiting a curriculum may require explicit consolidation at curriculum boundaries.
Accept
In this paper, the authors provide an analytical theory of curriculum learning for an online teacher-student setting where a subset of features are relevant and are used by the teacher while a student might get distracted and use irrelevant features. In such a setting, the difficulty of examples can be captured by the variance in the irrelevant features. The insights from this analysis help explain existing empirical observations reported in prior work for effectiveness of curriculum learning as opposed to random-ordering and anti-curriculum learning in the compute-limited regime. Further interesting connections to the literature on cognitive science are discussed in the paper. Given the lack of theoretical basis for curriculum learning, reviewers are in agreement that this paper is timely and impactful for the ML community and they all recommended accepting the paper. However, during the discussion period it became clear that this recommendation was based on the authors' promise to revise the paper, which never happened during the rebuttal period. My final recommendation is to accept the paper based on the authors' promise that the final version will include all changes promised in their response to reviewers. Although there is no formal conditional acceptance mechanism, the authors should view this as a conditional acceptance based on taking their word that they will revise accordingly (see the list of changes below). I will check the camera-ready version and call out anything less than that. **List of changes promised by authors (quoted from their response)** **Authors' response to reviewer Cgtb:** 1- "Unfortunately, implicit curricula cannot be directly studied within our framework, since we always assume the hardness information is completely disclosed. Considering a pseudo-labeling step before the actual learning stage would likely make the (already involved) computation unfeasible. Empirically this procedure has been investigated in [e.g. 
https://arxiv.org/abs/1812.05159, https://arxiv.org/abs/2012.03107] and, limiting our interest in the generalization aspect, this heuristic does not seem to induce a sizeable improvement. We will include this discussion in the revision." 2- "We mean an algorithm that explicitly depends on the curriculum, for instance by changing its objective function when example difficulty changes. That is, a curriculum aware algorithm would adapt the learning process in order to account for different levels of difficulties in the data. A simple way of implementing this is to modify the training loss, as proposed in this paper. Other approaches may involve adapting the optimization algorithm as proposed in https://proceedings.mlr.press/v139/ruiz-garcia21a.html, or possibly modifying the architecture https://www.sciencedirect.com/science/article/pii/0010027793900584. A key message emerging from our work is that standard algorithms do not dramatically benefit from curriculum, and we believe curriculum-aware algorithms may be the way forward. We will include a clear definition and discussion in the revision." **Authors' response to reviewer S1ip:** 1- "The first point raised by the reviewer is the object of current investigations (see answer below). We will add more details on the CIFAR experiments in the revised version." 2- "We agree that this appears counter-intuitive. We also have hoped for greater intuition on this point, but we do not have a simple explanation for this phenomenon. This result is what appears from solving the equations and it was checked in the numerical simulations. A possible intuition could be that, in some settings, the large amount of noise contained in the hard data will always be too disruptive for effective learning. Thus, leaving the “clean” data for last could allow the model to better exploit the easy data. We will add this possibility to the revision. 
Even without a clear intuition, our contribution here is to show, in an identical setting and without finite size effects, that both anticurriculum and curriculum can indeed outperform the baseline." **Authors' response to reviewer S1ip:** 1- "Overall, the analytical solution is between 2 and 6 orders of magnitude faster. We will add this approximate speed-up factor and discussion to the supplement (or if space allows, the revision)." 2- "The large input limit means that the input size N and the dataset size M go to infinity with finite ratio \alpha=M/N. This is an important point that must have gotten lost during the iterations, thank you for catching this. We will reintroduce it in the revised version." 3- "Figure 1 is based on notations and definitions introduced in section 3 and is not easily understood at the point it is presented. Reply: We will replace the image with explicit notation." 4- "Line 143: "starting from a large initialisation", I assume it means the random initialization scale of the student or teacher network. It is not entirely clear. Please include more details about this initialization. Does it use a normal random distribution? Reply: Yes, we use a normal distribution with a fixed variance. When we refer to large/small initialization we mean large/small initial variance. We will clarify this in the revised version." 5- "Figure 1: "The curriculum boundary lies at α = 1/2". What is the curriculum boundary? It is never defined. The abstract talks about "curriculum boundary consolidation", but it is not further elaborated. Reply: We mean the switching point between the two levels of difficulty. We will clarify this." 6- "Half-way through the paper section numbers stop being used. What I assume are sections 5 and 6 do not have numbers. And I am not sure if the subsections of Section 4 belong there. Reply: We will add 5 and 6 to the last two sections." 7- "Figure 3c "Accuracy hard samples." is not an accurate title. 
It is accuracy of all samples, easy and hard. It's just that anti-curriculum performs best, which is influenced by hard samples. Reply: We will replace it with just accuracy." 8- "The figure captions are inconsistent in the uses of (a), (b), (left, center, right), and (top, bottom). For example Figure 3 says (a), (b) and then "the right panel". Reply: We will use only the a,b,c notation." 9- "As stated in the paper, the answer does depend on the setup and in principle we would expect different behaviours for non-convex models. In particular, the presence of different basins of attraction towards different minima would suggest that initializing close to a good one would produce a performance improvement. However, note that empirical results in the ML field are not showing clear signals in this direction. A possible explanation is that relying on memory effects in the learning dynamics would require one to hit a sweet spot in the learning rate value and in the number of training epochs, and this seems hard to be achieved consistently. For this reason, we speculate that explicitly enforcing this memory by altering the loss with the curriculum information could be useful even in these settings. We will further emphasize that this statement about memorylessness applies only to our setting." 10- "We thank the reviewer for pointing out several elements that were missing in this version: we will make sure to include them in the revised version of this paper."
test
[ "n6iAgYOON2N", "X81W0T_Fus0", "JTl5o-ysY9hB", "3kkidHk8esJ", "b9DNjX_U6ya", "YyjuPK93PmE", "IHW7tXRnpnJ", "MlflM40pWY", "KGVuFaI7DLN", "px9QJD64VZL", "2Qlm_J20ChP", "kPrFAs98poA" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for answering my questions.\n\nI find the ideas presented in this paper valuable to the community, and beneficial for future research in curriculum learning.\n\nHowever, I find the quality of presentation at the time of submission inappropriate for a publication.\n\nAt this point, I will rai...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "X81W0T_Fus0", "b9DNjX_U6ya", "MlflM40pWY", "KGVuFaI7DLN", "YyjuPK93PmE", "kPrFAs98poA", "kPrFAs98poA", "2Qlm_J20ChP", "px9QJD64VZL", "nips_2022_4d_tnQ_agHI", "nips_2022_4d_tnQ_agHI", "nips_2022_4d_tnQ_agHI" ]
nips_2022_m2JJO3iEe_5
Smoothed Embeddings for Certified Few-Shot Learning
Randomized smoothing is considered to be the state-of-the-art provable defense against adversarial perturbations. However, it heavily exploits the fact that classifiers map input objects to class probabilities and do not focus on the ones that learn a metric space in which classification is performed by computing distances to embeddings of class prototypes. In this work, we extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings. We provide analysis of the Lipschitz continuity of such models and derive a robustness certificate against $\ell_2$-bounded perturbations that may be useful in few-shot learning scenarios. Our theoretical results are confirmed by experiments on different datasets.
Accept
This paper proposes a certified robustness method for few-shot learning classification based on randomized smoothing. The reviewers found the theoretical results and empirical evaluations successful in demonstrating the robustness of the method, providing a practical algorithm for robust few-shot learning problem. There were some concerns about the lack of comparison against other methods from the literature. But the authors addressed the issue in the rebuttal by running some additional experiments. The reviewers suggested that the authors motivate the use of FGSM and evaluate their method against other attacks as well.
train
[ "v_Z-BTjdDLg", "K1DyYs0TNxY", "dy3P1s6Npu7", "VqAfZiDyhps", "nuiplRGQQxMX", "eXiDpgdj3dn", "lKqjcIDN8pa", "sy0K6Ivhnhx1", "iFT5vVFhj-p", "U4NhPKf5Pw_", "nii5mXD2OJ", "UsM-MDr5dUG", "S0U064Zlse", "tamOJz0fIqL", "C8QYX2mVZc", "sOnBthxE3CZ", "0SbFfMtBYco", "4_a6CLF7pIp", "KLONcUwNvP...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", ...
[ " Thank you for your response and for your suggestion. We have conducted additional experiments where we compare FGSM attack and its multi-step version, PGD attack (see Appendix B.2.1). Due to the limited time, we ran experiments for 1-shot settings only.\n\nIn all experiments, we run PGD attack for $s=20$ iteratio...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "eXiDpgdj3dn", "nuiplRGQQxMX", "VqAfZiDyhps", "sy0K6Ivhnhx1", "KLONcUwNvPh", "iFT5vVFhj-p", "mfiWo3ppK5", "hOkVT2Vtdin", "nii5mXD2OJ", "UsM-MDr5dUG", "4_a6CLF7pIp", "S0U064Zlse", "tamOJz0fIqL", "C8QYX2mVZc", "sOnBthxE3CZ", "0SbFfMtBYco", "6sxtJQrsK4n", "AKEKssFPk7", "mfiWo3ppK5",...
nips_2022_rP9xfRSF4F
When to Intervene: Learning Optimal Intervention Policies for Critical Events
Providing a timely intervention before the onset of a critical event, such as a system failure, is of importance in many industrial settings. Before the onset of the critical event, systems typically exhibit behavioral changes which often manifest as stochastic co-variate observations which may be leveraged to trigger intervention. In this paper, for the first time, we formulate the problem of finding an optimally timed intervention (OTI) policy as minimizing the expected residual time to event, subject to a constraint on the probability of missing the event. Existing machine learning approaches to intervention on critical events focus on predicting event occurrence within a pre-defined window (a classification problem) or predicting time-to-event (a regression problem). Interventions are then triggered by setting model thresholds. These are heuristic-driven, lacking guarantees regarding optimality. To model the evolution of system behavior, we introduce the concept of a hazard rate process. We show that the OTI problem is equivalent to an optimal stopping problem on the associated hazard rate process. This key link has not been explored in literature. Under Markovian assumptions on the hazard rate process, we show that an OTI policy at any time can be analytically determined from the conditional hazard rate function at that time. Further, we show that our theory includes, as a special case, the important class of neural hazard rate processes generated by recurrent neural networks (RNNs). To model such processes, we propose a dynamic deep recurrent survival analysis (DDRSA) architecture, introducing an RNN encoder into the static DRSA setting. Finally, we demonstrate RNN-based OTI policies with experiments and show that they outperform popular intervention methods
Accept
The reviewers all appreciate the direction of this work, and while the merits and significance of the work have limitations -- as reflected in the weak scores -- all reviewers were positive and found no reason to reject. I agree with this and recommend acceptance on the basis that the merits of the contribution and having this as part of the program outweigh the concerns regarding significance. That said, I strongly encourage the authors to use the detailed feedback from the reviewers to improve their paper.
test
[ "WtAuP5sKwtZ", "MbPtm1BF_4R", "hV2ha67YWJ", "oaMY77dXNx", "gvjp411tfh3", "wNM61ZZvkQr", "Vw5Lt2nq6oO", "QY1YDzeKBg4", "rzGwfcLZBwo", "cKzwNUjdCJ6", "Mq6d3o2zaXxn", "uTl9RX22KI2", "Hb-Ms24nXyt", "zFWQPj2LE_W", "g0oO-dO9CKE", "YZ5XHQPuG1", "4DSWyneuPO", "DKKWB8Q7k2M", "oYIajDHlGAi"...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer",...
[ " >\"At any time the relationship that $L$ has to the covariates ${\\mathcal X}_j$ is encoded by the hazard rate function\"\n\n>So presumably this function is quite complex if it is allowed to depend on the entire history. I'm skeptical then about the start of Section 4 where it's claimed the RNN process is Marko...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 2 ]
[ "gvjp411tfh3", "hV2ha67YWJ", "rzGwfcLZBwo", "mu-i8yjdDbe", "oYIajDHlGAi", "WL0wY-40m0S", "5bSvhKYnjmM", "-v3w12nlfBg", "5bSvhKYnjmM", "5bSvhKYnjmM", "uTl9RX22KI2", "YZ5XHQPuG1", "5bSvhKYnjmM", "5bSvhKYnjmM", "5bSvhKYnjmM", "5bSvhKYnjmM", "nips_2022_rP9xfRSF4F", "OclXuDTKBwN", "WL...
nips_2022_KieCChVB6mN
Sparse Probabilistic Circuits via Pruning and Growing
Probabilistic circuits (PCs) are a tractable representation of probability distributions allowing for exact and efficient computation of likelihoods and marginals. There has been significant recent progress on improving the scale and expressiveness of PCs. However, PC training performance plateaus as model size increases. We discover that most capacity in existing large PC structures is wasted: fully-connected parameter layers are only sparsely used. We propose two operations: pruning and growing, that exploit the sparsity of PC structures. Specifically, the pruning operation removes unimportant sub-networks of the PC for model compression and comes with theoretical guarantees. The growing operation increases model capacity by increasing the dimensions of latent states. By alternatingly applying pruning and growing, we increase the capacity that is meaningfully used, allowing us to significantly scale up PC learning. Empirically, our learner achieves state-of-the-art likelihoods on MNIST-family image datasets and the Penn Tree Bank language dataset compared to other PC learners and less tractable deep generative models such as flow-based models and variational autoencoders (VAEs).
Accept
The paper introduces sparseness-inducing techniques for probabilistic circuits (PCs), leading to novel structure learning approaches for PCs. The reviewers were very positive about this paper, found it well-written and to be improving state-of-the-art. The techniques are novel and shown to be effective on generative modeling tasks.
train
[ "l9S9g2JbSTNx", "CtSENwwZUlc", "sYcdWSudBg1", "08LWb8OUJBe", "ByW2ByFgWJ9u", "HSvJQQ5ZWeR", "SjOJq-C5YHV", "mIBIXY6eHG3", "3exYL06wxkQ" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for following up on my questions. This is good work and I hope to see the research community take it forward. ", " We thank the reviewer for their time spent reading and reviewing our paper.\n\n> this may introduce a bias/overfitting for the training data\n\nIndeed this may introduce overfitting for t...
[ -1, -1, -1, -1, -1, 8, 7, 9, 7 ]
[ -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "CtSENwwZUlc", "3exYL06wxkQ", "mIBIXY6eHG3", "SjOJq-C5YHV", "HSvJQQ5ZWeR", "nips_2022_KieCChVB6mN", "nips_2022_KieCChVB6mN", "nips_2022_KieCChVB6mN", "nips_2022_KieCChVB6mN" ]
nips_2022_uCBx_6Hc7cu
On the relationship between variational inference and auto-associative memory
In this article, we propose a variational inference formulation of auto-associative memories, allowing us to combine perceptual inference and memory retrieval into the same mathematical framework. In this formulation, the prior probability distribution onto latent representations is made memory dependent, thus pulling the inference process towards previously stored representations. We then study how different neural network approaches to variational inference can be applied in this framework. We compare methods relying on amortized inference such as Variational Auto Encoders and methods relying on iterative inference such as Predictive Coding and suggest combining both approaches to design new auto-associative memory models. We evaluate the obtained algorithms on the CIFAR10 and CLEVR image datasets and compare them with other associative memory models such as Hopfield Networks, End-to-End Memory Networks and Neural Turing Machines.
Accept
This submission provides a link between auto-associative memory and variational inference by making priors of representations being memory dependent. This view provides new important insights, as well as new algorithms. The reviewers raised concerns mostly about 1) the clarity of the presentation, and 2) connections to previous work. The authors addressed both of these points during the rebuttal stage. Still some questions remain about performance evaluation, but the AC is of the opinion that these concerns are minor compared to the conceptual advances the paper makes.
train
[ "cCIgxBY3Bx3", "S-BbDTMcUie", "Uao35f8FGrQ", "7U63rK7qbdu", "NuxQGJOmZB", "TeG7soCZCla", "l-lsadQDCn-", "E--_JnlK4-", "W-GV6YipOZ3", "CjnST0uvmbr", "tMODp-LQKsl", "a0hahGjRIyV", "tnTfVuY2zk", "i0JfLd5M_b1", "B7gPG1sOZ02" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " If by any chance this was a mistake from your part, please note that the score of 5 you gave us corresponds to the mention \"borderline accept\". If you meant to recommend \"weak accept\" as said in your comments, this should correspond to a score of 6.", " Thank you for your answer and clarification. I am stil...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "S-BbDTMcUie", "l-lsadQDCn-", "7U63rK7qbdu", "TeG7soCZCla", "nips_2022_uCBx_6Hc7cu", "E--_JnlK4-", "B7gPG1sOZ02", "W-GV6YipOZ3", "i0JfLd5M_b1", "tnTfVuY2zk", "a0hahGjRIyV", "nips_2022_uCBx_6Hc7cu", "nips_2022_uCBx_6Hc7cu", "nips_2022_uCBx_6Hc7cu", "nips_2022_uCBx_6Hc7cu" ]
nips_2022_36-xl1wdyu
Neural network architecture beyond width and depth
This paper proposes a new neural network architecture by introducing an additional dimension called height beyond width and depth. Neural network architectures with height, width, and depth as hyper-parameters are called three-dimensional architectures. It is shown that neural networks with three-dimensional architectures are significantly more expressive than the ones with two-dimensional architectures (those with only width and depth as hyper-parameters), e.g., standard fully connected networks. The new network architecture is constructed recursively via a nested structure, and hence we call a network with the new architecture nested network (NestNet). A NestNet of height $s$ is built with each hidden neuron activated by a NestNet of height $\le s-1$. When $s=1$, a NestNet degenerates to a standard network with a two-dimensional architecture. It is proved by construction that height-$s$ ReLU NestNets with $\mathcal{O}(n)$ parameters can approximate $1$-Lipschitz continuous functions on $[0,1]^d$ with an error $\mathcal{O}(n^{-(s+1)/d})$, while the optimal approximation error of standard ReLU networks with $\mathcal{O}(n)$ parameters is $\mathcal{O}(n^{-2/d})$. Furthermore, such a result is extended to generic continuous functions on $[0,1]^d$ with the approximation error characterized by the modulus of continuity. Finally, we use numerical experimentation to show the advantages of the super-approximation power of ReLU NestNets.
Accept
The authors propose a new architecture which has superior approximation rates for a given number of parameters; this is a very interesting notion that is shown on a simple example to be quite effective. The reviewers are supportive of the paper, with their main concerns being the added computational cost and the lack of any examples on real-world data sets, even small ones such as MNIST. The authors should include a small experimental section showing the test accuracy for MNIST or similar, along with the computational time for both training and applying the network.
train
[ "GDYM8mEOLl", "dq_HUt7p72", "ICW5bhsUEra", "WejLyRBeHPU", "uAGSNKhC41N", "fXcMaK7Vc0v", "rBqTj7MUVLe", "cb_lpt_xZiB", "efkq0obhlCt", "pBjei96nLSJ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the further comment. We agree that adding non-synthetic experiments would improve our paper. We are trying a Fashion-MNIST experiment that compares the performances of a simple NestNet and a standard NN of almost the same size. The preliminary experimental results imply that the NestNet outperforms ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 2 ]
[ "dq_HUt7p72", "fXcMaK7Vc0v", "WejLyRBeHPU", "uAGSNKhC41N", "pBjei96nLSJ", "efkq0obhlCt", "cb_lpt_xZiB", "nips_2022_36-xl1wdyu", "nips_2022_36-xl1wdyu", "nips_2022_36-xl1wdyu" ]
nips_2022_v1bxRZJ9c8V
Learning interacting dynamical systems with latent Gaussian process ODEs
We study uncertainty-aware modeling of continuous-time dynamics of interacting objects. We introduce a new model that decomposes independent dynamics of single objects accurately from their interactions. By employing latent Gaussian process ordinary differential equations, our model infers both independent dynamics and their interactions with reliable uncertainty estimates. In our formulation, each object is represented as a graph node and interactions are modeled by accumulating the messages coming from neighboring objects. We show that efficient inference of such a complex network of variables is possible with modern variational sparse Gaussian process inference techniques. We empirically demonstrate that our model improves the reliability of long-term predictions over neural network based alternatives and it successfully handles missing dynamic or static information. Furthermore, we observe that only our model can successfully encapsulate independent dynamics and interaction information in distinct functions and show the benefit from this disentanglement in extrapolation scenarios.
Accept
The paper proposes a Gaussian-process approach to modelling nonlinear dynamical systems with multiple interacting objects. The main strength of the paper is the empirical performance in terms of uncertainty quantification, backed up by clear writing and logical experimentation. Some reviewers were concerned about novelty, but I consider the additional structure proposed here to be a significant and interesting improvement on the GPODE. One reviewer raised a question about computational complexity which was covered in the discussion. Please add this discussion to the manuscript. I agree with reviewer gdCd that "The claim in the first line in the abstract that "for the first time uncertainty-aware modelling of continuous-time dynamics of interacting objects" is false"; please modify the manuscript appropriately. Other than those edits, this paper seems to have excited the reviewers and I'm in agreement that this presents an interesting approach that I expect others to pick up and build on.
train
[ "OwyAh-HsmjB", "8qHD77vWgg8", "78YrV18nV6t", "AN1ckLCwiPf", "5_VCHtk10Q3", "FjyXh73H_BL", "GRJgB7-h7KZ", "BGGIONseYwE", "8Tn8GIcFCBV", "LEjfFJoco5f", "2o3WSfVMhoi", "cGB_xGTiVkW", "LVz3X0IjQfk", "2f9RWgV0zM", "63ZNtkH4YA7", "67oOdUEyjcj" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the concrete suggestion and increasing their score. We will highlight the differences with GPODE more rigorously in the camera-ready version.", " I thank the authors for their response. I would encourage the authors to highlight the differences between GP-ODE and their approach...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 9, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "8qHD77vWgg8", "LEjfFJoco5f", "FjyXh73H_BL", "8Tn8GIcFCBV", "BGGIONseYwE", "GRJgB7-h7KZ", "67oOdUEyjcj", "63ZNtkH4YA7", "2f9RWgV0zM", "LVz3X0IjQfk", "cGB_xGTiVkW", "nips_2022_v1bxRZJ9c8V", "nips_2022_v1bxRZJ9c8V", "nips_2022_v1bxRZJ9c8V", "nips_2022_v1bxRZJ9c8V", "nips_2022_v1bxRZJ9c8V...
nips_2022_xI5660uFUr
Selective compression learning of latent representations for variable-rate image compression
Recently, many neural network-based image compression methods have shown promising results superior to the existing tool-based conventional codecs. However, most of them are often trained as separate models for different target bit rates, thus increasing the model complexity. Therefore, several studies have been conducted for learned compression that supports variable rates with single models, but they require additional network modules, layers, or inputs that often lead to complexity overhead, or do not provide sufficient coding efficiency. In this paper, we are the first to propose a selective compression method that partially encodes the latent representations in a fully generalized manner for deep learning-based variable-rate image compression. The proposed method adaptively determines essential representation elements for compression of different target quality levels. For this, we first generate a 3D importance map based on the nature of the input content to represent the underlying importance of the representation elements. The 3D importance map is then adjusted for different target quality levels using importance adjustment curves. The adjusted 3D importance map is finally converted into a 3D binary mask to determine the essential representation elements for compression. The proposed method can be easily integrated with the existing compression models with a negligible amount of overhead increase. Our method can also enable continuously variable-rate compression via simple interpolation of the importance adjustment curves among different quality levels. The extensive experimental results show that the proposed method can achieve compression efficiency comparable to that of the separately trained reference compression models and can reduce decoding time owing to the selective compression.
Accept
Thanks for your submission to NeurIPS. Initially, this paper was leaning reject, with three negative reviewers who had various concerns. The rebuttal really helped a lot, and two of the negative reviewers raised their scores based on the rebuttal, leading to 3 of 4 accept scores. The final reviewer also mentioned in a comment that they would raise their score to a borderline accept, but never updated the official score. I also took a look at the reviews and rebuttal, and it seems that the major concerns have indeed been addressed. Given all of this, I am happy to recommend acceptance of the paper at this point.
train
[ "xDiWu_Pfjvh", "CHD6MKYgPYn", "Sg-o5yvG9c", "nyyZ_RFu7J", "AJ2xwG9ln7K", "QEQIOdMXmr8", "_dLJwyyrXpr", "JLrV365dGx", "BCSziBuootf", "xpFF8Oy-gkD", "Pdb4pHWzca", "eH3FEIRKcFS", "w3sP12Azifu", "PuQ3nbJP28", "ADGjPq3UYYS", "GoIaC0DPOZL", "iB0usLaT70o", "a3Tc3mxETFJ", "_msRCT48w5J", ...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official...
[ " Thanks for your specific and comprehensive understanding of our rebuttals. We authors also appreciate your final judgment on our work.", " I think authors have given a great response. Many concerns are resolved. I incline to recommend the acceptance of this work.", " We appreciate your careful comments and th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 4 ]
[ "CHD6MKYgPYn", "eH3FEIRKcFS", "nyyZ_RFu7J", "_dLJwyyrXpr", "3a85ycPfna9", "D3Azdsrnciq", "UVJroSX5iG7", "3a85ycPfna9", "3a85ycPfna9", "3a85ycPfna9", "D3Azdsrnciq", "D3Azdsrnciq", "UVJroSX5iG7", "UVJroSX5iG7", "UVJroSX5iG7", "_msRCT48w5J", "_msRCT48w5J", "_msRCT48w5J", "nips_2022_...
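The selection mechanism in the abstract above can be caricatured as thresholding a 3D importance map into a per-quality binary mask that zeroes the unimportant latent elements. The paper's learned importance map and adjustment curves are replaced here by random arrays and a single hypothetical threshold, so this sketch only illustrates the masking idea, not the actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((8, 4, 4))   # toy 3D latent representation
importance = rng.random((8, 4, 4))        # toy 3D importance map in [0, 1]

def select(latent, importance, quality):
    """Keep only elements whose importance exceeds 1 - quality;
    a higher target quality keeps more of the representation."""
    mask = importance >= (1.0 - quality)
    return latent * mask, mask

coded_lo, m_lo = select(latent, importance, quality=0.25)
coded_hi, m_hi = select(latent, importance, quality=0.75)
```

By construction the low-quality mask is a subset of the high-quality mask, which mirrors the monotone "keep more elements at higher rates" behavior a variable-rate coder needs.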
nips_2022_fRbvozXEGTb
Bring Your Own Algorithm for Optimal Differentially Private Stochastic Minimax Optimization
We study differentially private (DP) algorithms for smooth stochastic minimax optimization, with stochastic minimization as a byproduct. The holy grail of these settings is to guarantee the optimal trade-off between the privacy and the excess population loss, using an algorithm with a linear time-complexity in the number of training samples. We provide a general framework for solving differentially private stochastic minimax optimization (DP-SMO) problems, which enables practitioners to bring their own base optimization algorithm and use it as a black-box to obtain the near-optimal privacy-loss trade-off. Our framework is inspired by the recently proposed Phased-ERM method [22] for nonsmooth differentially private stochastic convex optimization (DP-SCO), which exploits the stability of the empirical risk minimization (ERM) for the privacy guarantee. The flexibility of our approach enables us to sidestep the requirement that the base algorithm needs to have bounded sensitivity, and allows the use of sophisticated variance-reduced accelerated methods to achieve near-linear time-complexity. To the best of our knowledge, these are the first near-linear time algorithms with near-optimal guarantees on the population duality gap for smooth DP-SMO, when the objective is (strongly-)convex--(strongly-)concave. Additionally, based on our flexible framework, we enrich the family of near-linear time algorithms for smooth DP-SCO with the near-optimal privacy-loss trade-off.
Accept
This work studies minimax optimization for convex-concave objectives. It studies the population-loss version of this question and gives linear-time differentially private algorithms for this problem that achieve the optimal privacy-utility trade-off. The algorithm is based on the phased-ERM approach. The reviewers were in agreement that this problem is of interest and that the paper makes a significant enough improvement on previous work to be interesting. I would recommend acceptance.
test
[ "HxZAz6EDdop", "jl9uhUifWrfb", "Mfd6oHnqNC", "ZLmrjAEuhUa", "s0-UwoZDJAu", "udR8nFjixQ", "qnZGTaTEQ6A", "omhnQOtzghm", "Urz6pqz9hyP", "JNdn03833Ip", "FroStP_x764", "FtTU31HvhqZ", "LpA5I-VYCnU", "TWYaVlqVUCO" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks for the follow-up question. We provide here two examples such that SC lower-bound can be regarded as a trivial lower-bound of SC-SC functions.\n\n1. Given an SC-SC function $F(x,y)$ defined on $\\mathcal{X}\\times\\mathcal{Y}$, we restrict the **domain** $\\mathcal{Y}$ to be a **singleton**, i.e., $\\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 1, 4 ]
[ "jl9uhUifWrfb", "omhnQOtzghm", "JNdn03833Ip", "Urz6pqz9hyP", "nips_2022_fRbvozXEGTb", "FroStP_x764", "FtTU31HvhqZ", "FtTU31HvhqZ", "LpA5I-VYCnU", "TWYaVlqVUCO", "nips_2022_fRbvozXEGTb", "nips_2022_fRbvozXEGTb", "nips_2022_fRbvozXEGTb", "nips_2022_fRbvozXEGTb" ]
nips_2022_7TleYo6Tmlo
Zero-Sum Stochastic Stackelberg Games
Zero-sum stochastic games have found important applications in a variety of fields, from machine learning to economics. Work on this model has primarily focused on the computation of Nash equilibrium due to its effectiveness in solving adversarial board and video games. Unfortunately, a Nash equilibrium is not guaranteed to exist in zero-sum stochastic games when the payoffs at each state are not convex-concave in the players' actions. A Stackelberg equilibrium, however, is guaranteed to exist. Consequently, in this paper, we study zero-sum stochastic Stackelberg games. Going beyond known existence results for (non-stationary) Stackelberg equilibria, we prove the existence of recursive (i.e., Markov perfect) Stackelberg equilibria (recSE) in these games, provide necessary and sufficient conditions for a policy profile to be a recSE, and show that recSE can be computed in (weakly) polynomial time via value iteration. Finally, we show that zero-sum stochastic Stackelberg games can model the problem of pricing and allocating goods across agents and time. More specifically, we propose a zero-sum stochastic Stackelberg game whose recSE correspond to the recursive competitive equilibria of a large class of stochastic Fisher markets. We close with a series of experiments that showcase how our methodology can be used to solve the consumption-savings problem in stochastic Fisher markets.
Accept
There was a positive consensus amongst the reviewers about this paper. It examines an interesting setting and makes worthwhile contributions. I believe that it would make a nice contribution to NeurIPS and that it is likely to lead to more follow-up work. At the same time, the authors should take care to improve the discussion of the related work and improve readability at the points that the reviewers have identified.
train
[ "FTt1AzDO_TI", "ZQM6r3y9xkg", "J2YyVh_WTQH", "YkDyvXdx6bG", "tNU_RGUS-qp", "Pdar6ir33Wk", "903CKE8mqLz", "7JWzN3eS4mba", "KywiZC_wFkh", "8MEWSlQMFXy", "4b5soW7IV3i" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Vorobeychik and Singh consider a model which is both more general than ours, and more specific. It is more general in that they consider a general-sum stochastic Stackelberg game setting; it is more specific, in that the actions of the leader do not affect the feasible strategies of the follower. As a result, our...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "ZQM6r3y9xkg", "903CKE8mqLz", "YkDyvXdx6bG", "4b5soW7IV3i", "Pdar6ir33Wk", "8MEWSlQMFXy", "7JWzN3eS4mba", "KywiZC_wFkh", "nips_2022_7TleYo6Tmlo", "nips_2022_7TleYo6Tmlo", "nips_2022_7TleYo6Tmlo" ]
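In the zero-sum case with finite pure actions, the recursive Stackelberg value at each state reduces to a maximin over the stage payoff plus discounted continuation values, which makes the value-iteration idea in the abstract easy to sketch. This toy version deliberately ignores the paper's key feature that the leader's action constrains the follower's feasible strategies, and the instance below is hypothetical.

```python
import numpy as np

def stackelberg_value(U):
    """Leader (maximizer) commits first; in the zero-sum case the follower
    best-responds by minimizing, so the stage value is the maximin."""
    return max(min(row) for row in U)

def value_iteration(R, P, gamma=0.9, iters=500):
    """R[s] is the stage payoff matrix at state s (leader x follower actions);
    P[s][a_l][a_f] is the next-state distribution."""
    n = len(R)
    V = np.zeros(n)
    for _ in range(iters):
        V_new = np.empty(n)
        for s in range(n):
            # Q combines immediate payoff with discounted continuation value
            Q = R[s] + gamma * np.tensordot(P[s], V, axes=([2], [0]))
            V_new[s] = stackelberg_value(Q)
        V = V_new
    return V

# Single self-looping state: maximin of the stage game is 1, so the
# discounted value solves v = 1 + 0.9 v, i.e. v = 10.
R = np.array([[[1.0, 2.0], [0.0, 3.0]]])
P = np.ones((1, 2, 2, 1))
V = value_iteration(R, P)
```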
nips_2022_2TdPjch_ogV
Learnable Graph Convolutional Attention Networks
Existing Graph Neural Networks (GNNs) compute the message exchange between nodes by either aggregating uniformly (convolving) the features of all the neighboring nodes, or by applying a non-uniform score (attending) to the features. Recent works have shown the strengths and weaknesses of the resulting GNN architectures, respectively, GCNs and GATs. In this work, we aim at exploiting the strengths of both approaches to their full extent. To that end, we first introduce a graph convolutional attention layer (CAT), which relies on convolutions to compute the attention scores. Unfortunately, as in the case of GCNs and GATs, we then show that there exists no clear winner between the three—neither theoretically nor in practice—since their performance directly depends on the nature of the data (i.e., of the graph and features). This result brings us to the main contribution of this work, the learnable graph convolutional attention network (L-CAT): a GNN architecture that allows us to automatically interpolate between GCN, GAT and CAT in each layer, by only introducing two additional (scalar) parameters. Our results demonstrate that L-CAT is able to efficiently combine different GNN layers across the network, outperforming competing methods in a wide range of datasets, and resulting in a more robust model that needs less cross-validation.
Reject
The paper explores the advantages of both GCN and GAT by proposing a learnable network that can interpolate between GCN, GAT and CAT for each layer automatically. The proposed research idea is novel and discovers an interesting perspective by combining and interpolating between convolution and attention networks. The paper is theoretically sound, and extensive experiments are conducted over 15 datasets with a comprehensive analysis. However, all the reviewers consistently raise concerns regarding incremental improvement compared with baselines, and another common concern is that the authors do not extend the proposed method with more advanced convolutional and attention networks. The authors argue that their intuition is to design a more robust replacement for GCN/GAT, not a SOTA. However, the authors should be aware that the fact that L-CAT can be extended to other networks does not guarantee that it will work, and it is possible that the convolution and attention networks may conflict during training. Since the proposed method is a novel and general paradigm, solid experiments are needed to thoroughly evaluate its performance. The motivation is promising, but more experiments should be conducted to sufficiently prove the superiority of the proposed method.
test
[ "9tkFloGGPn", "9AGRoCgXc7", "inKdcpDSKFJ", "MfocQ2WXvbS", "UUe75PJ3gkr", "-9UXf9VuN8xZ", "wzrWnNis-zX", "s5af2wEMY5C", "hHTJVXZFbZW", "1NtVW5l67_t", "K_o_32Cfw-f", "VufhW6ZFcQW" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors' reply and sorry for the late response. All my questions have been well resolved. The most interesting point I found in this work is the theoretical extension to [1] under the \"easy regime\". Even though the result may look incremental, this paper successfully finds a scenario where the lin...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 5 ]
[ "-9UXf9VuN8xZ", "inKdcpDSKFJ", "MfocQ2WXvbS", "K_o_32Cfw-f", "VufhW6ZFcQW", "1NtVW5l67_t", "hHTJVXZFbZW", "nips_2022_2TdPjch_ogV", "nips_2022_2TdPjch_ogV", "nips_2022_2TdPjch_ogV", "nips_2022_2TdPjch_ogV", "nips_2022_2TdPjch_ogV" ]
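One simple way to see how a single learnable scalar per layer can interpolate between convolution-style (uniform) aggregation and attention is to scale the attention logits before the softmax: lam = 0 recovers uniform GCN-style neighbor weights and lam = 1 recovers GAT-style attention. This is an illustrative reading of the interpolation idea only, not L-CAT's exact two-parameter scheme.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def interpolated_attention(scores, lam):
    """Aggregation weights over a node's neighbors: lam = 0 gives uniform
    (convolution-like) weights, lam = 1 gives softmax attention weights."""
    return softmax(lam * scores)

scores = np.array([2.0, 0.0, -1.0])            # toy attention logits
w_gcn = interpolated_attention(scores, 0.0)    # uniform weights
w_gat = interpolated_attention(scores, 1.0)    # standard softmax attention
```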
nips_2022_IpBjWtJp40j
The Hessian Screening Rule
Predictor screening rules, which discard predictors before fitting a model, have had considerable impact on the speed with which sparse regression problems, such as the lasso, can be solved. In this paper we present a new screening rule for solving the lasso path: the Hessian Screening Rule. The rule uses second-order information from the model to provide both effective screening, particularly in the case of high correlation, as well as accurate warm starts. The proposed rule outperforms all alternatives we study on simulated data sets with both low and high correlation for \(\ell_1\)-regularized least-squares (the lasso) and logistic regression. It also performs best in general on the real data sets that we examine.
Accept
This paper proposes a Hessian screening rule for the lasso and its generalized linear model extension to logistic regression. The proposed screening rules are demonstrated to be effective on both simulated and real datasets. The idea is novel and the evaluation is convincing. The authors mention that extensions to MCP and SCAD may also be possible, even though the objective may not be convex; a brief discussion would be helpful.
train
[ "vbD24gxFb3R", "I5ZlFpxM3k", "z4s2PGU1o4", "LvEcLVv8XNx", "Cs_J64Sa1MZ", "CAQ-OAom32", "bAD4PfB49i8", "PcJdoluj_1Z", "1-QVmNj0Pw", "QhnQxYORgxQ", "WfPywEuhYms", "nX_yw5077HM", "UqAYrWEeODX", "CBsU3E-Tdiov", "e15zwWDReZl", "9q1ahq_JQMk", "Lj7Pm_uKE82", "_sUje9-rjZH" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Since several reviewers have now brought up discussions regarding theoretical guarantees for convergence, we recognize that this could be clarified in the paper. We will therefore add a theorem or proposition on the convergence of our method to the final revision, showing that our method converges towards the tru...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "PcJdoluj_1Z", "1-QVmNj0Pw", "LvEcLVv8XNx", "QhnQxYORgxQ", "_sUje9-rjZH", "PcJdoluj_1Z", "1-QVmNj0Pw", "WfPywEuhYms", "nX_yw5077HM", "_sUje9-rjZH", "Lj7Pm_uKE82", "9q1ahq_JQMk", "e15zwWDReZl", "nips_2022_IpBjWtJp40j", "nips_2022_IpBjWtJp40j", "nips_2022_IpBjWtJp40j", "nips_2022_IpBjW...
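As background, the classical sequential strong rule of Tibshirani et al. (2012), the baseline that Hessian-type screening rules are compared against, is a one-liner: with correlations c_j = |x_j' r| evaluated at the previous path point lambda_prev, predictor j is kept only if c_j >= 2*lambda_next - lambda_prev. The Hessian rule itself additionally exploits second-order information, which this sketch does not reproduce.

```python
import numpy as np

def strong_rule_keep(X, residual, lam_prev, lam_next):
    """Sequential strong rule for the lasso path: returns a boolean mask of
    predictors to keep when moving from lam_prev down to lam_next."""
    c = np.abs(X.T @ residual)          # correlations at the previous solution
    return c >= 2 * lam_next - lam_prev

# Toy example: with identity design, the correlations are just |residual|,
# and the threshold is 2 * 0.9 - 1.0 = 0.8.
X = np.eye(3)
keep = strong_rule_keep(X, np.array([1.0, 0.5, 0.9]), lam_prev=1.0, lam_next=0.9)
```

Discarded predictors are simply excluded from the next fit, with a KKT check afterwards to catch any violations, which is how such rules can safely speed up the path computation.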
nips_2022_1PRnYiuJkQx
A gradient estimator via L1-randomization for online zero-order optimization with two point feedback
This work studies online zero-order optimization of convex and Lipschitz functions. We present a novel gradient estimator based on two function evaluations and randomization on the $\ell_1$-sphere. Considering different geometries of feasible sets and Lipschitz assumptions, we analyse the online dual averaging algorithm with our estimator in place of the usual gradient. We consider two types of assumptions on the noise of the zero-order oracle: canceling noise and adversarial noise. We provide an anytime and completely data-driven algorithm, which is adaptive to all parameters of the problem. In the case of canceling noise that was previously studied in the literature, our guarantees are either comparable or better than state-of-the-art bounds obtained by~\citet{duchi2015} and \citet{Shamir17} for non-adaptive algorithms. Our analysis is based on deriving a new weighted Poincaré type inequality for the uniform measure on the $\ell_1$-sphere with explicit constants, which may be of independent interest.
Accept
All reviewers were positive about this paper and the overall impression is very good - accept.
train
[ "ggwjbbsu90k", "LeCsP_hIGHe", "1VBAKf14CSI", "qAUmIF1AWq", "uKpwxTbGPoA", "Xx28xHIIG0A", "wUWMG_QyZx", "MZorJoa_zCv" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for their reply and valuable feedback. We will take into account your suggestions and provide a discussion on point 3.", " I'd like to thank the authors for the great reply to my review. And I am glad one of my confusions turned out to be on an important typo in the paper.\n\...
[ -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "LeCsP_hIGHe", "uKpwxTbGPoA", "MZorJoa_zCv", "wUWMG_QyZx", "Xx28xHIIG0A", "nips_2022_1PRnYiuJkQx", "nips_2022_1PRnYiuJkQx", "nips_2022_1PRnYiuJkQx" ]
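The estimator described in the abstract admits a compact sketch: sample a direction uniformly from the $\ell_1$-sphere (normalized exponentials with random signs), take two function evaluations, and rescale by the sign vector. The $d/(2h)$ scaling below follows the standard two-point form and should be read as illustrative of the construction rather than as the paper's exact algorithm.

```python
import numpy as np

def sample_l1_sphere(d, rng):
    """Uniform sample from the l1-sphere {z : ||z||_1 = 1}: normalized
    exponentials with independent random signs."""
    e = rng.exponential(size=d)
    s = rng.choice([-1.0, 1.0], size=d)
    return s * e / e.sum()

def l1_two_point_gradient(f, x, h, rng):
    """Two-point zero-order gradient estimate with l1-sphere randomization."""
    d = len(x)
    z = sample_l1_sphere(d, rng)
    return (d / (2 * h)) * (f(x + h * z) - f(x - h * z)) * np.sign(z)

# Sanity check: for a linear f(x) = a.x the estimator is exactly unbiased,
# so a Monte Carlo average should recover a.
rng = np.random.default_rng(1)
a = np.array([1.0, -2.0, 0.5])
f = lambda x: a @ x
g = np.mean([l1_two_point_gradient(f, np.zeros(3), 0.1, rng)
             for _ in range(50000)], axis=0)
```

The unbiasedness on linear functions follows from E|z_j| = 1/d and the independence of the random signs; for general smooth functions the estimator targets a smoothed gradient instead.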
nips_2022_Fjw_7Hv-mwB
Shielding Federated Learning: Aligned Dual Gradient Pruning Against Gradient Leakage
Federated learning (FL) is a distributed learning framework that claims to protect user privacy. However, gradient inversion attacks (GIAs) reveal severe privacy threats to FL, which can recover the users' training data from outsourced gradients. Existing defense methods adopt different techniques, e.g., differential privacy, cryptography, and gradient perturbation, to defend against GIAs. Nevertheless, all current state-of-the-art defense methods suffer from a trade-off between privacy, utility, and efficiency in FL. To address the weaknesses of existing solutions, we propose a novel defense method, Aligned Dual Gradient Pruning (ADGP), based on gradient sparsification, which can improve communication efficiency while preserving the utility and privacy of the federated training. Specifically, ADGP slightly modifies gradient sparsification to provide a stronger privacy guarantee. Through primary gradient parameter selection strategies during training, ADGP can also significantly improve communication efficiency, and we provide a theoretical analysis of its convergence and generalization. Our extensive experiments show that ADGP can effectively defend against the most powerful GIAs and significantly reduce the communication overhead without sacrificing the model's utility.
Reject
This paper proposes a method for defending against gradient inversion attacks. A gradient inversion attack attempts to reconstruct the training data from the model and its gradient. Gradient inversion attacks have been performed in practice, which demonstrates that sharing gradients rather than raw data provides limited privacy protection. The proposed method, "Aligned Dual Gradient Pruning (ADGP)," perturbs the gradients by zeroing out a large subset of the coordinates including both small and large values. The key claim is that this prevents reconstruction of the training data. The paper provides both theoretical and experimental results to support this claim. However, these results rely on implicit assumptions about the form of the gradient inversion attack. Specifically, they assume a vanilla gradient inversion attack that does not compensate for the ADGP defense. In particular, ADGP creates sparse gradients and it is assumed that the attacker attempts to reconstruct an input whose unperturbed gradient is sparse, even if the true unperturbed gradient is not sparse. This assumption is a form of "security through obscurity." We should assume that the attacker is aware of the defense and tailors the reconstruction to the defense. Thus theoretical/empirical evaluation should consider attacks that are designed specifically for ADGP. Overall, the key claim of the paper is not adequately supported (and, in my opinion, it seems likely that the proposed defense is not effective). Thus this paper should not be published.
train
[ "F9yK6nJUGY1", "DSXJU769itl", "46zyYOjksE", "NRm8sfk-QV6", "WFUBHUPM545", "lyd92TcNo7", "02FlNIbfAS_", "CZYBifIuqxi", "iW9AbGjx14P", "f9Z1Md42nen", "3WvXFvfFz9y", "bQ8dGI3E4_w", "QL7egHBry8W", "RTCwnuI3JH", "Lz4CSdrLJLq", "RHDHZOTgbrHQ", "H0tYPT3RGvN", "aOxhUTrvVjV", "M8viZb7b5F5...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Thanks for your response. Here is the further explanation. \nWe cannot provide a specific theoretical analysis based on this analytical attack because, as we discussed in the previous response, it is impossible to list the exact attack equations. But the attack you are concerned about does exist [22, 40], and we...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 7, 4, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 5, 4, 3 ]
[ "NRm8sfk-QV6", "lyd92TcNo7", "WFUBHUPM545", "CZYBifIuqxi", "02FlNIbfAS_", "RHDHZOTgbrHQ", "3WvXFvfFz9y", "iW9AbGjx14P", "f9Z1Md42nen", "bQ8dGI3E4_w", "0IuunwSeco2", "Bro8KxoTpE6", "OkMluRQXfx-", "83WPnhlnI4P", "RHDHZOTgbrHQ", "H0tYPT3RGvN", "XwCDANEK7U5", "wjYVhJuxpg5", "0IuunwSe...
nips_2022_uzn0WLCfuC_
Expected Improvement for Contextual Bandits
The expected improvement (EI) is a popular technique to handle the tradeoff between exploration and exploitation under uncertainty. This technique has been widely used in Bayesian optimization but it is not applicable to the contextual bandit problem, which is a generalization of the standard bandit and Bayesian optimization. In this paper, we initiate and study the EI technique for contextual bandits from both theoretical and practical perspectives. We propose two novel EI-based algorithms, one when the reward function is assumed to be linear and the other for more general reward functions. With linear reward functions, we demonstrate that our algorithm achieves a near-optimal regret. Notably, our regret improves that of LinTS \cite{agrawal13} by a factor of $\sqrt{d}$ while avoiding solving an NP-hard problem at each iteration as in LinUCB \cite{Abbasi11}. For more general reward functions which are modeled by deep neural networks, we prove that our algorithm achieves a $\tilde{\mathcal O} (\tilde{d}\sqrt{T})$ regret, where $\tilde{d}$ is the effective dimension of a neural tangent kernel (NTK) matrix, and $T$ is the number of iterations. Our experiments on various benchmark datasets show that both proposed algorithms work well and consistently outperform existing approaches, especially in high dimensions.
Accept
This paper proposes and analyzes algorithms based on expected improvement for the contextual bandit setting, and proves that the resulting algorithm can attain $O(d \sqrt{T})$ regret in the linear bandit setting (the result improves over Linear TS). All the reviewers agree that the modified LinEI algorithm and its analysis are novel and important to the community. I agree with the reviewers, and recommend accepting the paper. For the final version of the paper, it would be helpful to add more details on why the pure EI strategy does not work and add the scaling with $d$ experiments to the main paper (the response to Rev. rpMu). If there is space, it would also be helpful to add a proof sketch for the LinEI algorithm and distinguish it from the LinTS analysis which is more standard and known in the community (response to Rev. KPAe).
train
[ "8e98tZZQ2v", "M7QjN9zs-t", "RWyvLJTNoaN", "ZA3CFIb-axL", "E2Gm-e_qYv", "mNDV4QtMSbU", "oCMZVN1EToB", "tUVsni8MdGAp", "MWvhJ6sk4lg", "X9wBq1kNoy", "1v4cpfPX6Lt", "fqvLL9TFIlx", "YOskcQplxP5", "GHeJz2z3c-Y", "xqCuBv-KvKx", "vFKu5uurCCK", "2jPMOSg79S7", "36IStlUJPZ3" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your constructive comments on our paper. We will keep this table in the main text and will push several experiments to the appendix with a quick pointer in the main text. We are happy that you enjoyed our responses during the rebuttal. ", " Table looks great. Keep it in the main text. The casua...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "M7QjN9zs-t", "RWyvLJTNoaN", "ZA3CFIb-axL", "mNDV4QtMSbU", "oCMZVN1EToB", "X9wBq1kNoy", "tUVsni8MdGAp", "X9wBq1kNoy", "xqCuBv-KvKx", "1v4cpfPX6Lt", "2jPMOSg79S7", "YOskcQplxP5", "vFKu5uurCCK", "36IStlUJPZ3", "nips_2022_uzn0WLCfuC_", "nips_2022_uzn0WLCfuC_", "nips_2022_uzn0WLCfuC_", ...
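For readers unfamiliar with the acquisition function involved, the closed-form expected improvement of a Gaussian posterior is a few lines of code. This is the generic Bayesian-optimization formula, not the paper's LinEI/NeuralEI algorithms themselves.

```python
import math

def expected_improvement(mu, sigma, best):
    """EI of a Gaussian N(mu, sigma^2) posterior over the current best value
    (maximization convention): EI = (mu - best) * Phi(z) + sigma * phi(z)."""
    if sigma <= 0:
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # Gaussian pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))          # Gaussian cdf
    return (mu - best) * Phi + sigma * phi
```

With mu == best, EI reduces to sigma * phi(0) ≈ 0.399 * sigma, so higher posterior uncertainty yields a higher score; this is the exploration term that the exploitation term (mu - best) * Phi(z) is traded off against.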
nips_2022_lUyAaz-iA4u
Dynamics of SGD with Stochastic Polyak Stepsizes: Truly Adaptive Variants and Convergence to Exact Solution
Recently, Loizou et al. (2021) proposed and analyzed stochastic gradient descent (SGD) with stochastic Polyak stepsize (SPS). The proposed SPS comes with strong convergence guarantees and competitive performance; however, it has two main drawbacks when it is used in non-over-parameterized regimes: (i) It requires a priori knowledge of the optimal mini-batch losses, which are not available when the interpolation condition is not satisfied (e.g., regularized objectives), and (ii) it guarantees convergence only to a neighborhood of the solution. In this work, we study the dynamics and the convergence properties of SGD equipped with new variants of the stochastic Polyak stepsize and provide solutions to both drawbacks of the original SPS. We first show that a simple modification of the original SPS that uses lower bounds instead of the optimal function values can directly solve issue (i). On the other hand, solving issue (ii) turns out to be more challenging and leads us to valuable insights into the method's behavior. We show that if interpolation is not satisfied, the correlation between SPS and stochastic gradients introduces a bias, which effectively distorts the expectation of the gradient signal near minimizers, leading to non-convergence - even if the stepsize is scaled down during training. To fix this issue, we propose DecSPS, a novel modification of SPS, which guarantees convergence to the exact minimizer - without a priori knowledge of the problem parameters. For strongly-convex optimization problems, DecSPS is the first stochastic adaptive optimization method that converges to the exact solution without restrictive assumptions like bounded iterates/gradients.
Accept
As the reviewers have pointed out, it is a well-written paper with a solid contribution.
train
[ "s5tk0Pd00l", "-F1pU3T4-H4", "eJXMlHCzyrv", "NUvh0ZAI7iG", "qnqkCJB2aa", "6DcABZ_mb24", "YbXeVWIHdjN", "gs4kjP65Z9G", "cyaNNGU33mL", "EX-arqbbon3", "i2jAWfLfFHB", "pgy1K5j9aPJ", "2Zl5FzFEjBO", "MchbsCNI4q6", "G6rgLLFS8id", "eimEDeL9G0U", "cSeYpQxImaF" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comment and for the positive evaluation of our paper! Since you believe it should be accepted, may you please consider increasing your score - this would totally boost our chances!\nThank you again.", " Thank you for your response. I believe that this paper makes useful and necessary contribu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "-F1pU3T4-H4", "pgy1K5j9aPJ", "NUvh0ZAI7iG", "i2jAWfLfFHB", "cyaNNGU33mL", "pgy1K5j9aPJ", "gs4kjP65Z9G", "EX-arqbbon3", "eimEDeL9G0U", "cSeYpQxImaF", "G6rgLLFS8id", "MchbsCNI4q6", "nips_2022_lUyAaz-iA4u", "nips_2022_lUyAaz-iA4u", "nips_2022_lUyAaz-iA4u", "nips_2022_lUyAaz-iA4u", "nip...
nips_2022_DylWBluOgqN
Collaborative Decision Making Using Action Suggestions
The level of autonomy is increasing in systems spanning multiple domains, but these systems still experience failures. One way to mitigate the risk of failures is to integrate human oversight of the autonomous systems and rely on the human to take control when the autonomy fails. In this work, we formulate a method of collaborative decision making through action suggestions that improves action selection without taking control of the system. Our approach uses each suggestion efficiently by incorporating the implicit information shared through suggestions to modify the agent's belief and achieves better performance with fewer suggestions than naively following the suggested actions. We assume collaborative agents share the same objective and communicate through valid actions. By assuming the suggested action is dependent only on the state, we can incorporate the suggested action as an independent observation of the environment. The assumption of a collaborative environment enables us to use the agent's policy to estimate the distribution over action suggestions. We propose two methods that use suggested actions and demonstrate the approach through simulated experiments. The proposed methodology results in increased performance while also being robust to suboptimal suggestions.
Accept
The paper considers a collaborative decision-making setting in which one agent can suggest actions for a listening agent to execute. The listening agent is not bound to take these suggested actions. The paper models the listening agent's decision-making process as a POMDP, treating the suggestions provided by other agents as observations (i.e., dependent only on the state) that are used to update the belief state. The paper describes two representations of the suggested actions and presents empirical results on simulated domains that demonstrate that the listening agent's performance improves when following the suggestions of a perfect oracle, while it degrades given imperfect suggestions in proportion to the level of noise. The paper was reviewed by three researchers who read the author response and discussed the paper together with the AC. The reviewers largely agree that the collaborative decision-making problem as formulated is interesting. The reviewers find that the idea of formulating suggestion-following as a POMDP with suggestions modeled as observations is clever, and that the description of the proposed formulation is clear and easy to follow. The two formulations of the observation function are reasonable, albeit very simple initial models, though the paper does not provide much insight into when one should be used over the other. A primary concern with the paper is that it only provides empirical results. The work would have significantly benefited from theoretical results, which would help to understand how the method would generalize to other domains.
test
[ "UMWKrDlepZW", "TFTJ_sni3S", "fv3v9wiJoFs", "2-TTgu_PKdl", "PPbwpmSDVjL", "T5NOhp3fAEW", "L3XtOOJBiMo", "VIshSDTxWAv1", "ov-24rxwM2R", "uyY80OncFBh", "CB-mom4fYi", "kl72fzRK-Qk", "stXpxuF_L3z", "K-mn_uJhoN-", "ndbURZ3kU3", "Qwf6WegmPF" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification and elaboration! I appreciate it.", " Thank you for the time and effort of the review and the response!\n\nBased on the provided feedback, we have uploaded our updated paper with supplemental material and other minor changes. We have included experimental results of a suggester wit...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "2-TTgu_PKdl", "T5NOhp3fAEW", "L3XtOOJBiMo", "PPbwpmSDVjL", "stXpxuF_L3z", "CB-mom4fYi", "kl72fzRK-Qk", "Qwf6WegmPF", "ndbURZ3kU3", "K-mn_uJhoN-", "Qwf6WegmPF", "ndbURZ3kU3", "K-mn_uJhoN-", "nips_2022_DylWBluOgqN", "nips_2022_DylWBluOgqN", "nips_2022_DylWBluOgqN" ]
nips_2022__FMJmDEPLzs
BinauralGrad: A Two-Stage Conditional Diffusion Probabilistic Model for Binaural Audio Synthesis
Binaural audio plays a significant role in constructing immersive augmented and virtual realities. As it is expensive to record binaural audio from the real world, synthesizing it from mono audio has attracted increasing attention. This synthesis process involves not only the basic physical warping of the mono audio, but also room reverberations and head/ear related filtration, which, however, are difficult to accurately simulate in traditional digital signal processing. In this paper, we formulate the synthesis process from a different perspective by decomposing the binaural audio into a common part that is shared by the left and right channels as well as a specific part that differs in each channel. Accordingly, we propose BinauralGrad, a novel two-stage framework equipped with diffusion models to synthesize them respectively. Specifically, in the first stage, the common information of the binaural audio is generated with a single-channel diffusion model conditioned on the mono audio, based on which the binaural audio is generated by a two-channel diffusion model in the second stage. Combining this novel perspective of two-stage synthesis with advanced generative models (i.e., the diffusion models), the proposed BinauralGrad is able to generate accurate and high-fidelity binaural audio samples. Experiment results show that on a benchmark dataset, BinauralGrad outperforms the existing baselines by a large margin in terms of both objective and subjective evaluation metrics (Wave L2: $0.128$ vs. $0.157$, MOS: $3.80$ vs. $3.61$). The generated audio samples\footnote{\url{https://speechresearch.github.io/binauralgrad}} and code\footnote{\url{https://github.com/microsoft/NeuralSpeech/tree/master/BinauralGrad}} are available online.
Accept
The use of diffusion models for binaural audio synthesis is interesting and the two-stage design is novel and well motivated. The authors also addressed the reviewers' concerns.
train
[ "DvbIaWx6pE", "i_Hks1Gigwl", "CX-Ltx7XdaZ", "0_Ejno4hgFw", "W2zAtAolJZA", "w14dCDrWs9w" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank for your time and effort on reviewing our paper. We are grateful for your positive feedback. Our paper benefits a lot from your valuable and constructive comments. We response to your questions as follow.\n\n**Q1: About the standard deviations of MOS test.**\n\nSince binaural audio provides str...
[ -1, -1, -1, 8, 5, 7 ]
[ -1, -1, -1, 4, 4, 5 ]
[ "w14dCDrWs9w", "W2zAtAolJZA", "0_Ejno4hgFw", "nips_2022__FMJmDEPLzs", "nips_2022__FMJmDEPLzs", "nips_2022__FMJmDEPLzs" ]
nips_2022_YG4Dg7xtETg
MAtt: A Manifold Attention Network for EEG Decoding
Recognition of electroencephalographic (EEG) signals highly affects the efficiency of non-invasive brain-computer interfaces (BCIs). While recent advances of deep-learning (DL)-based EEG decoders offer improved performances, the development of geometric learning (GL) has attracted much attention for offering exceptional robustness in decoding noisy EEG data. However, there is a lack of studies on the merged use of deep neural networks (DNNs) and geometric learning for EEG decoding. We herein propose a manifold attention network (mAtt), a novel geometric deep learning (GDL)-based model, featuring a manifold attention mechanism that characterizes spatiotemporal representations of EEG data fully on a Riemannian symmetric positive definite (SPD) manifold. The evaluation of the proposed mAtt on both time-synchronous and -asynchronous EEG datasets suggests its superiority over other leading DL methods for general EEG decoding. Furthermore, analysis of model interpretation reveals the capability of mAtt in capturing informative EEG features and handling the non-stationarity of brain dynamics.
Accept
The paper proposes a manifold attention network (mAtt), that leverages the properties of the SPD manifold in order to perform spatiotemporal analysis on EEG signals. The results showed significant improvement over the SOTA methods on two typical datasets (motor imagery and SSVEP). The paper is clearly written with adequate analysis. The idea of representing EEG signals as covariance matrices and then projecting them on the SPD manifold has been widely used. The novelty of the approach lies mainly in the application of the attention mechanism on the SPD manifold. The authors' responses have successfully addressed some reviewers' concerns and provided additional (supportive) experimental results, which convinced reviewers to update their evaluations. Some minor concerns remain about why geometric deep learning is necessary for EEG classification, and the clarity of the paper could be further improved. The authors are encouraged to take reviewers' detailed comments into account in the final version.
train
[ "__xePhF5fye", "KBxxRxScqDj", "BbMX3c7gQf", "omuJ8-xR1u", "xjbNfdS4pSt", "vUmHX8rzz63", "pci5Nq4z1nQ", "3bwcw-aEB6O", "fWcbiE0HrdV6", "cxypEhGLRn8", "7UriHc2zh91", "r32Cy3-a5IX", "G1fbWKfXHXu", "OqbWtDK68uR", "TUw1NL8SAHA", "72sIF3o1auj", "ByIFF5Wk-BP" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We greatly appreciate the reviewer's positive response to our revision and are delighted to see the score changed.", " We thank for the reviewer's feedback regarding our revision. It is unfortunately that our answers for Q1 and Q3 were not satisfying enough. In our work, the properties of manifold such as the a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 6, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 3, 4 ]
[ "omuJ8-xR1u", "xjbNfdS4pSt", "vUmHX8rzz63", "7UriHc2zh91", "fWcbiE0HrdV6", "cxypEhGLRn8", "72sIF3o1auj", "G1fbWKfXHXu", "OqbWtDK68uR", "ByIFF5Wk-BP", "TUw1NL8SAHA", "nips_2022_YG4Dg7xtETg", "nips_2022_YG4Dg7xtETg", "nips_2022_YG4Dg7xtETg", "nips_2022_YG4Dg7xtETg", "nips_2022_YG4Dg7xtET...
nips_2022_kK200QKfvjB
Feature Learning in $L_2$-regularized DNNs: Attraction/Repulsion and Sparsity
We study the loss surface of DNNs with $L_{2}$ regularization. We show that the loss in terms of the parameters can be reformulated into a loss in terms of the layerwise activations $Z_{\ell}$ of the training set. This reformulation reveals the dynamics behind feature learning: each hidden representation $Z_{\ell}$ is optimal w.r.t. an attraction/repulsion problem and interpolates between the input and output representations, keeping as little information from the input as necessary to construct the activation of the next layer. For positively homogeneous non-linearities, the loss can be further reformulated in terms of the covariances of the hidden representations, which takes the form of a partially convex optimization over a convex cone. This second reformulation allows us to prove a sparsity result for homogeneous DNNs: any local minimum of the $L_{2}$-regularized loss can be achieved with at most $N(N+1)$ neurons in each hidden layer (where $N$ is the size of the training set). We show that this bound is tight by giving an example of a local minimum that requires $N^{2}/4$ hidden neurons. But we also observe numerically that in more traditional settings much less than $N^{2}$ neurons are required to reach the minima.
Accept
This paper studies l_2 regularization on the norms of fully connected networks and discusses how that influences feature learning. It gives two reformulations of the loss (from parameters to the activations) which give intuition on attraction/repulsion effects and an "effective" number of neurons (that the minimum can always be achieved by a network of size N(N+1)). Overall the reviewers find the reformulations novel and interesting; while there are some concerns, most are addressed in the author response.
train
[ "bbEjor41kn5", "s4bgf5KBgI2", "AcUsOCVG5bu", "FIMKEaH7AbN", "WYWk9_4Lhe8", "-yAV4zfenok", "JyJQtBvBdOJ", "cpD1l4jBEIv", "WtkW3FoLH0w", "igqho2chdu", "FyyRvj78aNX" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the discussions and for updating your score.\n\nRegarding your questions:\n- We do not assume that $n_\\ell \\geq N$ in Proposition 1. Where in the proof do you think $Z_\\ell^\\sigma (Z_\\ell^\\sigma)^+=I$ is needed?\n- We already did that since your last response but we couldn't upload a new version....
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "s4bgf5KBgI2", "FIMKEaH7AbN", "JyJQtBvBdOJ", "WYWk9_4Lhe8", "cpD1l4jBEIv", "FyyRvj78aNX", "igqho2chdu", "WtkW3FoLH0w", "nips_2022_kK200QKfvjB", "nips_2022_kK200QKfvjB", "nips_2022_kK200QKfvjB" ]
nips_2022_rAVqc7KSGDa
Revisiting Active Sets for Gaussian Process Decoders
Decoders built on Gaussian processes (GPs) are enticing due to the marginalisation over the non-linear function space. Such models (also known as GP-LVMs) are often expensive and notoriously difficult to train in practice, but can be scaled using variational inference and inducing points. In this paper, we revisit active set approximations. We develop a new stochastic estimate of the log-marginal likelihood based on recently discovered links to cross-validation, and we propose a computationally efficient approximation thereof. We demonstrate that the resulting stochastic active sets (SAS) approximation significantly improves the robustness of GP decoder training, while reducing computational cost. The SAS-GP obtains more structure in the latent space, scales to many datapoints, and learns better representations than variational autoencoders, which is rarely the case for GP decoders.
Accept
This paper proposes a stochastic algorithm to make inference in Gaussian process latent variables models more efficient. The key idea is to exploit an equivalence between the marginal likelihood and the leave-R-out cross validation score (due to Fong and Holmes) to reduce the original cubic complexity in the full dimensional space to cubic complexity on subsampled data plus the (linear) complexity of combining these estimates. Some reviewers felt the fundamental idea of the paper might be extendable to other GP models, and there were concerns about the clarity in several parts. (Though all agreed this was improved in the revision, further improvement remains possible.) There were also some concerns that the proposed idea presented a relatively small methodological leap. However, in the end the consensus was that the paper provides a valuable contribution linking intuitions about subsampling to formal guarantees and for this reason makes a valuable contribution.
train
[ "nrfIzpMjq4z", "iS6ttAuRYEE", "zQhknFxmgcX", "YD41lbGgF8", "uEGQOVKNIec", "LQ6qyhPs0al", "HiFUAh9AkYu", "0L0YM9I4GOM", "oChZzBzxllX", "1c2MUIZ9HcC", "i8fZdShKECl6", "HFtWij7uoAE", "b2yOgm0GeKL", "1UpMoByTHZ", "gqJ_purY0vt", "AbnpioNX_O", "PuA92F0Q1D", "Jff8P1P0Vu", "YR8CmX0DX1M" ...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the acknowledgement on our response and for the positive consideration of our work. We are also glad to hear that all concerns are now clarified.\n\nAs a final comment, we wanted to remark that the connection between the GP-LVM and amortization has been considered before in the literatur...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iS6ttAuRYEE", "1c2MUIZ9HcC", "YD41lbGgF8", "uEGQOVKNIec", "AbnpioNX_O", "HiFUAh9AkYu", "b2yOgm0GeKL", "nips_2022_rAVqc7KSGDa", "YR8CmX0DX1M", "YR8CmX0DX1M", "Jff8P1P0Vu", "Jff8P1P0Vu", "Jff8P1P0Vu", "PuA92F0Q1D", "PuA92F0Q1D", "PuA92F0Q1D", "nips_2022_rAVqc7KSGDa", "nips_2022_rAVq...
nips_2022_Ixp6pznZgv7
Matching in Multi-arm Bandit with Collision
In this paper, we consider the matching of multi-agent multi-armed bandit problem, i.e., while agents prefer arms with higher expected reward, arms also have preferences on agents. In such cases, agents pulling the same arm may encounter collisions, which lead to a reward of zero. For this problem, we design a specific communication protocol which uses deliberate collision to transmit information among agents, and propose a layer-based algorithm that helps establish optimal stable matching between agents and arms. With this subtle communication protocol, our algorithm achieves a state-of-the-art $O(\log T)$ regret in the decentralized matching market, and outperforms existing baselines in experimental results.
Accept
This paper is concerned with multi-player multi-armed bandit with arm preferences. It improves on the existing literature by tightening the regret upper bound from log^{1+eps}(T) to log(T). The main ingredient is, as usual in the literature, forced collision. The issue in the model is that since arms have preferences, some communication between players might be impossible (if all arms always prefer a single player to another one). The trick is then to add a phase to "discover" what kind of communication is possible (via collision) and then this issue can somehow be circumvented. This idea is quite elegant, and the paper quite clear (even though the lack of space makes it quite difficult for the reviewers to understand the protocols involved). The major concern was the incrementality of this paper with respect to the existing literature: the trick is nice, but maybe not breathtaking. I hesitated quite a lot, discussed with colleagues, and finally decided that competing/matching bandits is a nice and intriguing setting that deserves to be investigated more. Hence I recommend acceptance.
train
[ "RBIR6dYAqMU", "pjpd_tJuej", "48Sx-DUhiX", "xl9ntFu4fGb", "xy4lfbdBUqi", "Dn1uruVr2wO", "eVVvqj5KZr", "F9oGYkNqRmo", "8nmM7n7QoIE", "Xtqu0ExkfLZ", "sFracJFdkC1", "sVgE1MLL6pe", "qN-lQXqiY0V", "qZIzjffJ0_", "_TlyTp2Zj1u", "rnIeRE-ewn" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your helpful advice and comments, we will include the comparison of the dependence on the number of agents in our final version.", " Thanks for your helpful advice and comments, we will improve our presentation and incorporate some diagrams for the process of communication and matching. We will also...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 5, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4, 3, 3 ]
[ "48Sx-DUhiX", "xl9ntFu4fGb", "sFracJFdkC1", "F9oGYkNqRmo", "Dn1uruVr2wO", "eVVvqj5KZr", "qN-lQXqiY0V", "rnIeRE-ewn", "_TlyTp2Zj1u", "qZIzjffJ0_", "sVgE1MLL6pe", "nips_2022_Ixp6pznZgv7", "nips_2022_Ixp6pznZgv7", "nips_2022_Ixp6pznZgv7", "nips_2022_Ixp6pznZgv7", "nips_2022_Ixp6pznZgv7" ]
nips_2022_gvwDosudtyA
Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees
We consider reinforcement learning in an environment modeled by an episodic, tabular, step-dependent Markov decision process of horizon $H$ with $S$ states, and $A$ actions. The performance of an agent is measured by the regret after interacting with the environment for $T$ episodes. We propose an optimistic posterior sampling algorithm for reinforcement learning (OPSRL), a simple variant of posterior sampling that only needs a number of posterior samples logarithmic in $H$, $S$, $A$, and $T$ per state-action pair. For OPSRL we guarantee a high-probability regret bound of order at most $O(\sqrt{H^3SAT})$ ignoring $\text{poly}\log(HSAT)$ terms. The key novel technical ingredient is a new sharp anti-concentration inequality for linear forms of a Dirichlet random vector which may be of independent interest. Specifically, we extend the normal approximation-based lower bound for Beta distributions by Alfers and Dinges (1984) to Dirichlet distributions. Our bound matches the lower bound of order $\Omega(\sqrt{H^3SAT})$, thereby answering the open problems raised by Agrawal and Jia (2017) for the episodic setting.
Accept
The paper proposes a variant of posterior sampling RL, namely OPSRL, and provides a minimax regret analysis for this algorithm. This settles an open question in RL regret theory. The key technical novelty is a new anti-concentration inequality, which can be used to improve the analysis by Agrawal and Jia (2017) and close their gap. Reviewers appreciate this theoretical contribution. Some reviewers questioned the applicability of the method and lack of numerical experiments. The authors supplied experiments after rebuttal, which largely addressed some of these questions. One reviewer raised his score from reject to borderline acceptance. Overall, this is a solid paper with meaningful theoretical contribution to posterior sampling and to RL theory.
train
[ "o0CNUsLYn1M", "Kbno-7Gs-w", "MJKBBa88ap", "jGM3iHLWgT8n", "cSSDAKQr6JaX", "VRMZ5mWaIK9", "gNqEuXCpOyJ", "SBmP9_mczVr", "ClyUxa2k0W8", "1nrmog2FHMY", "HNK9O9dcqM", "K74tTn9A4Qk", "kBwaUkl0Y50" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. The preliminary experiments are very interesting, and from the experiments it seems like OPSRL with a constant number of samples is performing well. It would be great to extend the current analysis to that direction.", " We thank Reviewer Qhqb for reading the rebuttal.\n\nThe very re...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "ClyUxa2k0W8", "MJKBBa88ap", "VRMZ5mWaIK9", "1nrmog2FHMY", "nips_2022_gvwDosudtyA", "gNqEuXCpOyJ", "SBmP9_mczVr", "kBwaUkl0Y50", "K74tTn9A4Qk", "HNK9O9dcqM", "nips_2022_gvwDosudtyA", "nips_2022_gvwDosudtyA", "nips_2022_gvwDosudtyA" ]
nips_2022_UMdY6-r7yRu
Measuring Data Reconstruction Defenses in Collaborative Inference Systems
The collaborative inference systems are designed to speed up the prediction processes in edge-cloud scenarios, where the local devices and the cloud system work together to run a complex deep-learning model. However, those edge-cloud collaborative inference systems are vulnerable to emerging Reconstruction Attacks, where malicious cloud service providers are able to recover the edge-side users’ private data. To defend against such attacks, several countermeasures have been recently introduced. Unfortunately, little is known about the robustness of those defense countermeasures. In this paper, we take the first step towards measuring the robustness of those state-of-the-art defenses with respect to reconstruction attacks. Specifically, we show that latent privacy features are still retained in the obfuscated representations. Motivated by such an observation, we propose a novel technology called Sensitive Feature Distillation (SFD) to restore sensitive information from the protected feature representations. Our experiments show that SFD can break through defense mechanisms in model partitioning scenarios, demonstrating the inadequacy of existing defense mechanisms as a privacy-preserving technique against reconstruction attacks. We hope our findings inspire further work in improving the robustness of defense mechanisms against reconstruction attacks for collaborative inference systems.
Accept
The paper presents a new model inversion attack for edge-cloud learning which is shown to overcome existing defenses against model inversion in such scenario. The attack is based on Sensitive Feature Distillation (SFD) which simultaneously uses two existing techniques, shadow model and feature-based knowledge distillation. The authors claim that the proposed attack method can effectively extract the purified feature map from the intentionally obfuscated one and recover the private image for popular image datasets such as MNIST, CIFAR10, and CelebA. The proposed method is quite novel and its evaluation is solid. The method is based on certain assumptions and utilizes several techniques from related work in order to meet these assumptions, which makes the contribution rather incremental. The paper is well-written and the relationship to existing work is clearly presented and discussed.
train
[ "32Stbz38cu", "uboHEqxObsl", "-G4nDr7Pj7l", "LEgB_zIFwqZ", "T8s67H1H6mA", "R49nm3RZwqn", "xg8UF_iVGIV", "DrUOjgPTOSm", "HycqZwgJ4v", "GF6IYvB5Uyd", "-o32IWK0XYk" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your follow-up comments. We would like to further clarify our major contributions. \n\nAs indicated by our title, \"Measuring Model Inversion Defences in Edge–Cloud Collaborative Inference Systems\", the major goal of this work is to take the first step toward **measuring** the robustness ...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, 4, 7 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, 5, 4 ]
[ "uboHEqxObsl", "xg8UF_iVGIV", "R49nm3RZwqn", "R49nm3RZwqn", "R49nm3RZwqn", "nips_2022_UMdY6-r7yRu", "GF6IYvB5Uyd", "GF6IYvB5Uyd", "-o32IWK0XYk", "nips_2022_UMdY6-r7yRu", "nips_2022_UMdY6-r7yRu" ]
nips_2022_U07d1Y-x2E
Memory Efficient Continual Learning with Transformers
In many real-world scenarios, data to train machine learning models becomes available over time. Unfortunately, these models struggle to continually learn new concepts without forgetting what has been learnt in the past. This phenomenon is known as catastrophic forgetting and it is difficult to prevent due to practical constraints. For instance, the amount of data that can be stored or the computational resources that can be used might be limited. Moreover, applications increasingly rely on large pre-trained neural networks, such as pre-trained Transformers, since compute or data might not be available in sufficiently large quantities to practitioners to train from scratch. In this paper, we devise a method to incrementally train a model on a sequence of tasks using pre-trained Transformers and extending them with Adapters. Unlike existing approaches, our method is able to scale to a large number of tasks without significant overhead and allows sharing information across tasks. On both image and text classification tasks, we empirically demonstrate that our method maintains a good predictive performance without retraining the model or increasing the number of model parameters over time. The resulting model is also significantly faster at inference time compared to Adapter-based state-of-the-art methods.
Accept
Some of the reviewers had concerns about the lack of experiments comparing the authors' method with not distilling (i.e., the high-memory-usage setting in which each task gets its own adapter). I am willing to recommend acceptance, but I agree that some sort of experiment like this would be useful to contextualize what is traded off for a smaller memory footprint. I would ask the authors to include such an experiment for the camera ready.
train
[ "2Ow9LyA4qv1", "UUs5_tcQkQe", "whrdW_K8Gy", "5icY9A49tj5", "ATZcfyYuh7j", "Kh03YBOy4s3", "un7k2ayKEFW", "6VJIe4dAlK", "zhCpuJcXIOd", "XeyMDOGvTq9", "HeH11cqbl50", "ygLMDe-wLVo", "1mxVh2OIabE", "dOZfhR_ueZx", "QlipujQcsaW", "UWpBgeE7uI1", "M4rxaSSNbSj", "uP5A20rFyR1", "tbSfxS1Nscg...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "...
[ " Dear Reviewer s76Y,\nThe experiment was not in the original paper since it was requested during the rebuttal phase and we did our best to share the results in a timely manner. We will add the experiment to the next version of the paper.\nWe are glad you appreciated our efforts to provide additional evidence and w...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "UUs5_tcQkQe", "whrdW_K8Gy", "5icY9A49tj5", "Kh03YBOy4s3", "Kh03YBOy4s3", "un7k2ayKEFW", "6VJIe4dAlK", "XeyMDOGvTq9", "1mxVh2OIabE", "dOZfhR_ueZx", "ygLMDe-wLVo", "dOZfhR_ueZx", "QlipujQcsaW", "UWpBgeE7uI1", "NwsFN2wJpEx", "gv60ctmRV-m", "tbSfxS1Nscg", "w_aTe56AlM", "fECVfrW0GBm"...
nips_2022_JVtoIJrSxuO
Oracle Inequalities for Model Selection in Offline Reinforcement Learning
In offline reinforcement learning (RL), a learner leverages prior logged data to learn a good policy without interacting with the environment. A major challenge in applying such methods in practice is the lack of both theoretically principled and practical tools for model selection and evaluation. To address this, we study the problem of model selection in offline RL with value function approximation. The learner is given a nested sequence of model classes to minimize squared Bellman error and must select among these to achieve a balance between approximation and estimation error of the classes. We propose the first model selection algorithm for offline RL that achieves minimax rate-optimal oracle inequalities up to logarithmic factors. The algorithm, ModBE, takes as input a collection of candidate model classes and a generic base offline RL algorithm. By successively eliminating model classes using a novel one-sided generalization test, ModBE returns a policy with regret scaling with the complexity of the minimally complete model class. In addition to its theoretical guarantees, it is conceptually simple and computationally efficient, amounting to solving a series of square loss regression problems and then comparing relative square loss between classes. We conclude with several numerical simulations showing it is capable of reliably selecting a good model class.
Accept
This paper studies the model selection problem in the offline RL setting. The paper focuses on a theoretical study where a sequence of nested models are provided and the algorithm ought to output a class that nearly matches the optimal one (the so-called oracle inequality). It is surprising that the proposed algorithm is simple (QVI + generalization testing) but can achieve the oracle inequality. Both the meta-reviewer and some of the reviewers believe that the paper has a solid theoretical contribution and is qualified for publication in NeurIPS. However, in addition to the issues mentioned by the reviewers, the meta-reviewer finds that the presentation could be further improved. For instance, in the current form of the paper, it is hard to understand the general idea by just reading the first 8 pages -- e.g., one might simply think to estimate each model separately and then choose the best one; why does this simpler algorithm not work? It would be beneficial if the authors could provide a technical overview in the main text rather than in the appendix. Other minor issues include the notation. In Def. 1, the meaning of w is not provided; w was first mentioned, then in the equations what was used is w_{n, \delta}; also, it is said w is a function -- but a function of what? n, delta, and F_k? With that being said, the recommendation of this paper is an accept. The meta-reviewer encourages the authors to incorporate the reviewers' comments and further improve the presentation.
train
[ "AeggEinEZ7_", "tNihJml_BTS", "250ixrvnFX", "w7thD0ahUc8", "G0DVv7ZwYyg", "Qzyo1uOhG90", "ELF1TttoJG4", "_n0FoIQeRW", "ND0ehMapnz", "xD2opczUnqK", "b9B9yGaRvYo", "W8iyrAAl2X" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. The focus of this paper is primarily the theoretical aspects and the contribution is to advance our understanding of these previously unknown topics. We understand your concerns about the approximation error, but this is also an issue dealt with by many offline RL algorithms in theory. T...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "tNihJml_BTS", "xD2opczUnqK", "Qzyo1uOhG90", "b9B9yGaRvYo", "xD2opczUnqK", "ELF1TttoJG4", "W8iyrAAl2X", "b9B9yGaRvYo", "xD2opczUnqK", "nips_2022_JVtoIJrSxuO", "nips_2022_JVtoIJrSxuO", "nips_2022_JVtoIJrSxuO" ]
nips_2022_aaar9y7qjfw
Laplacian Autoencoders for Learning Stochastic Representations
Established methods for unsupervised representation learning such as variational autoencoders produce no or poorly calibrated uncertainty estimates, making it difficult to evaluate if learned representations are stable and reliable. In this work, we present a Bayesian autoencoder for unsupervised representation learning, which is trained using a novel variational lower-bound of the autoencoder evidence. This is maximized using Monte Carlo EM with a variational distribution that takes the shape of a Laplace approximation. We develop a new Hessian approximation that scales linearly with data size, allowing us to model high-dimensional data. Empirically, we show that our Laplacian autoencoder estimates well-calibrated uncertainties in both latent and output space. We demonstrate that this results in improved performance across a multitude of downstream tasks.
Accept
Thanks to the authors for this submission. The reviewers were all quite positive — one reviewer noted that the “author response was really thorough and insightful”. The reviewer-author discussion appeared to be fruitful, and new experimental results were presented to address reviewer questions. The reviewers agreed that the presentation was clear, the experiments thorough, and that the problem addressed will be of broader interest to members of the ML community.
train
[ "yqRzZ6utvyu", "OHGpQxhX8_m", "7y5IUCli-rE", "qFzMwHFe7ah", "uGmvliTvJmg", "nJlUicJ85eE", "_4cN40pGTJS", "PNq6g2v7aoM", "ZovWpxcuX1", "wbjrVGTbrTv", "SGTBqfw2ah1", "Dn2sI962xY", "77kAF_Soiry", "W97JZ6uVO5E", "CH7aWkS6KL" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Intuition on optimization of variance:** We believe that the confusion of the sentence: \n\n_If we momentarily assume that $p(x | z) = \\mathcal{N}(x | \\mu(z), \\sigma^2(z))$, we see that optimally $\\sigma^2(z)$ should be as large as possible away from training data in order to increase $p(x)$ on the training...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "ZovWpxcuX1", "ZovWpxcuX1", "CH7aWkS6KL", "SGTBqfw2ah1", "SGTBqfw2ah1", "ZovWpxcuX1", "SGTBqfw2ah1", "SGTBqfw2ah1", "W97JZ6uVO5E", "SGTBqfw2ah1", "77kAF_Soiry", "nips_2022_aaar9y7qjfw", "nips_2022_aaar9y7qjfw", "nips_2022_aaar9y7qjfw", "nips_2022_aaar9y7qjfw" ]
nips_2022_k3MX8EK6Zf
Assistive Teaching of Motor Control Tasks to Humans
Recent works on shared autonomy and assistive-AI technologies, such as assistive robotic teleoperation, seek to model and help human users with limited ability in a fixed task. However, these approaches often fail to account for humans' ability to adapt and eventually learn how to execute a control task themselves. Furthermore, in applications where it may be desirable for a human to intervene, these methods may inhibit their ability to learn how to succeed with full self-control. In this paper, we focus on the problem of assistive teaching of motor control tasks such as parking a car or landing an aircraft. Despite their ubiquitous role in humans' daily activities and occupations, motor tasks are rarely taught in a uniform way due to their high complexity and variance. We propose an AI-assisted teaching algorithm that leverages skill discovery methods from the reinforcement learning (RL) literature to (i) break down any motor control task into teachable skills, (ii) construct novel drill sequences, and (iii) individualize curricula for students with different capabilities. Through an extensive mix of synthetic and user studies on two motor control tasks - parking a car with a joystick and writing characters from the Balinese alphabet - we show that assisted teaching with skills improves student performance by around 40% compared to practicing full trajectories without skills, and practicing with individualized drills can result in up to 25% further improvement.
Accept
The reviewers appreciated the extensive replies and the paper updates. Those managed to clear up most of the concerns of the reviewers, who now all rate the paper cautiously positively. On the downside, a lot of important material and discussion has now been moved to the appendix, while ideally the paper should be self-contained (i.e., not requiring the reader to also read the appendix). The paper proposes a very interesting approach with a solid contribution and solid experiments. It is an important first step, while - as the authors point out - there are still quite a few limitations before this can be used practically, which will require separate papers to address.
train
[ "SlhcTVFi94b", "dU-u3iFoBvQ", "w6jhu5ycNx8", "yTETqpC4LV", "qCSxKS_SpjP", "yQWu2gNzpax", "kqWD_ncrqmK2", "x_BivTxA9rG", "nrVFCz6NOu0", "_Sjf9TJg1C3", "1vpVX7uthHt", "duHqT2PuVP", "zio09_DWiDc", "CNLlijLrigV", "58sZP5yVI7r", "7U0dR9TPOy4z", "fN6PFx-TZo-", "NN8yMGG1c-T", "SdeU0UZqH...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author...
[ " Thank you so much for the detailed investigation!", " The authors' response has addressed most of my questions. The additional discussions on the current limitations of the algorithm is greatly appreciated and would help readers better understand the scope of the work. Therefore I would maintain my current acce...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "tUmq8l9ewg", "kqWD_ncrqmK2", "yTETqpC4LV", "6fiUkWEir1W", "e30RzdoGpqN", "PajdJkYSqK4", "WA_WVraOb2", "97M-RaGtGw5", "nips_2022_k3MX8EK6Zf", "WA_WVraOb2", "WA_WVraOb2", "WA_WVraOb2", "WA_WVraOb2", "WA_WVraOb2", "WA_WVraOb2", "WA_WVraOb2", "97M-RaGtGw5", "97M-RaGtGw5", "97M-RaGtG...
nips_2022_gsdHDI-p6NI
Sound and Complete Verification of Polynomial Networks
Polynomial Networks (PNs) have demonstrated promising performance on face and image recognition recently. However, the robustness of PNs is unclear, and thus obtaining certificates becomes imperative for enabling their adoption in real-world applications. Existing verification algorithms for ReLU neural networks (NNs) based on classical branch and bound (BaB) techniques cannot be trivially applied to PN verification. In this work, we devise a new bounding method, equipped with BaB for global convergence guarantees, called Verification of Polynomial Networks, or VPN for short. One key insight is that we obtain much tighter bounds than the interval bound propagation (IBP) and DeepT-Fast [Bonaert et al., 2021] baselines. This enables sound and complete PN verification with empirical validation on the MNIST, CIFAR10 and STL10 datasets. We believe our method is of independent interest for NN verification. The source code is publicly available at https://github.com/megaelius/PNVerification.
Accept
The authors propose a novel verification technique for polynomial neural networks. They compare their approach against competitive baselines and demonstrates improvements in the quality of bounds obtained and their ability to verify input-output properties. The reviewers agreed that the paper contains novel and interesting ideas and all concerns brought up by the reviewers were thoroughly addressed in the rebuttal phase. Hence, I recommend acceptance.
test
[ "zKtpWAAKH5k", "raa9WzU1Se0", "K_RyHgz93gO", "AwNXImLrZI", "bc4aY1XRMsv", "QkpLYuFhAA2", "_Q4nVIwvxi", "_cWA2ESTZZB", "HkbZNmSoRku", "fQHvot6cDhk", "Gu0gZueeRZL", "gk2NKiyByOd_", "lpfY4tMV1t3x", "HpWOZg6Px5M", "IkPaYACYA1I", "uNmN2eH8Bw1", "SaQzHV47K9_" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer sZV9,\n\nWe are grateful for your constructive feedback and your acknowledgement of our updates.", " Dear reviewer rPvn,\n\nWe are grateful for your insightful comments and your acknowledgement of our updates. We also hope that our ideas will be useful to the verification community.", " We thank...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "AwNXImLrZI", "bc4aY1XRMsv", "QkpLYuFhAA2", "HkbZNmSoRku", "_cWA2ESTZZB", "Gu0gZueeRZL", "HkbZNmSoRku", "gk2NKiyByOd_", "fQHvot6cDhk", "SaQzHV47K9_", "uNmN2eH8Bw1", "IkPaYACYA1I", "HpWOZg6Px5M", "nips_2022_gsdHDI-p6NI", "nips_2022_gsdHDI-p6NI", "nips_2022_gsdHDI-p6NI", "nips_2022_gsd...
nips_2022_zFW48MVzCKC
Multiagent Q-learning with Sub-Team Coordination
In many real-world cooperative multiagent reinforcement learning (MARL) tasks, teams of agents can rehearse together before deployment, but then communication constraints may force individual agents to execute independently when deployed. Centralized training and decentralized execution (CTDE) is increasingly popular in recent years, focusing mainly on this setting. In the value-based MARL branch, credit assignment mechanism is typically used to factorize the team reward into each individual’s reward — individual-global-max (IGM) is a condition on the factorization ensuring that agents’ action choices coincide with team’s optimal joint action. However, current architectures fail to consider local coordination within sub-teams that should be exploited for more effective factorization, leading to faster learning. We propose a novel value factorization framework, called multiagent Q-learning with sub-team coordination (QSCAN), to flexibly represent sub-team coordination while honoring the IGM condition. QSCAN encompasses the full spectrum of sub-team coordination according to sub-team size, ranging from the monotonic value function class to the entire IGM function class, with familiar methods such as QMIX and QPLEX located at the respective extremes of the spectrum. Experimental results show that QSCAN’s performance dominates state-of-the-art methods in matrix games, predator-prey tasks, the Switch challenge in MA-Gym. Additionally, QSCAN achieves comparable performances to those methods in a selection of StarCraft II micro-management tasks.
Accept
This paper presents QSCAN, a hierarchical approach to value function decomposition where the agents are grouped into sub-teams to solve different sub-tasks via local coordination. QSCAN extends the monotonic mixing network in QPLEX in order to represent sub-team coordination, and is shown to outperform QPLEX and QMIX in a number of benchmark tasks. Reviewers were all in favor of accepting the paper based on the sound contribution of the proposed approach. However, the figures are illegible due to small font sizes. Please fix the figures in the final version of the paper.
train
[ "S4d6_gMdf5g", "DU2JKpOt3Ga", "hPpwU_l7Z6Q", "P6CslnOoxta", "z8f5lAomW60", "UllJ0hzEY-E", "I7px1ehKYM", "UnW8lBvlFW", "zq48GNbH0gT", "tOg4CkfJuT", "Y-olQpwwEWq", "0bJRbqsLnQ-", "dh2E_SjSM2_", "RMc66WOekwM", "1XGhR5XgNZf", "Zymlq9s53D", "NXY8ZlxtkNp" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have just finished experiments of VAST($\\eta=0.5$) and VAST($\\eta=0.25$) (as **Sec. 6.2** in VAST paper [1]) on SMAC [2] tasks `3m`, `8m`, `1c3s5z`, `3s5z`, `5m_vs_6m`. `3m` and `8m` are two easy tasks in SMAC, and we use them to examine whether VAST can solve SMAC-style tasks. Meanwhile, we use the results ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "z8f5lAomW60", "P6CslnOoxta", "UllJ0hzEY-E", "I7px1ehKYM", "nips_2022_zFW48MVzCKC", "dh2E_SjSM2_", "zq48GNbH0gT", "RMc66WOekwM", "RMc66WOekwM", "RMc66WOekwM", "1XGhR5XgNZf", "Zymlq9s53D", "NXY8ZlxtkNp", "nips_2022_zFW48MVzCKC", "nips_2022_zFW48MVzCKC", "nips_2022_zFW48MVzCKC", "nips_...
nips_2022_GJGU6FgB7mg
Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL
The cooperative Multi-Agent Reinforcement Learning (MARL) with permutation invariant agents framework has achieved tremendous empirical successes in real-world applications. Unfortunately, the theoretical understanding of this MARL problem is lacking due to the curse of many agents and the limited exploration of the relational reasoning in existing works. In this paper, we verify that the transformer implements complex relational reasoning, and we propose and analyze model-free and model-based offline MARL algorithms with the transformer approximators. We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of and logarithmic in the number of agents respectively, which mitigates the curse of many agents. These results are consequences of a novel generalization error bound of the transformer and a novel analysis of the Maximum Likelihood Estimate (MLE) of the system dynamics with the transformer. Our model-based algorithm is the first provably efficient MARL algorithm that explicitly exploits the permutation invariance of the agents. Our improved generalization bound may be of independent interest and is applicable to other regression problems related to the transformer beyond MARL.
Accept
The paper presents theoretical results justifying the use of transformers in cooperative multi-agent RL. The authors demonstrate that, with this choice of architecture, sub-optimality gaps grow independently of the number of agents. The theoretical contribution seems strong, with a more limited experimental evaluation. The paper is dense mathematically and was hard to assess. The theorems were nevertheless deemed strong enough to justify acceptance, however please do address comments from reviewer gWc2 in the final version.
train
[ "kO3n604yEy", "YvM_KoX9CYG", "kWsfJJ-LzdX", "7Fl4ATy58PV", "DvPMmJ52Bh", "sgjW2b8Urf", "Np-WaCvUF5", "xZicvVnOijn", "ceoBIxihTei", "vejFPdvaDep", "WfHpQcNqAqs", "mmldMv-i5FE", "0L8achOwmf", "KGdsaryiTb4", "D2QzKqBJaOX" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for raising these questions concerning experimental validations. \nLet us reiterate the description and implications of this experiment here. We simulate the proposed algorithms on a cooperative navigation task in the Multiple Particle Environment (MPE), where $N$ agents move cooperatively...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3 ]
[ "YvM_KoX9CYG", "ceoBIxihTei", "0L8achOwmf", "DvPMmJ52Bh", "Np-WaCvUF5", "Np-WaCvUF5", "D2QzKqBJaOX", "KGdsaryiTb4", "vejFPdvaDep", "WfHpQcNqAqs", "mmldMv-i5FE", "0L8achOwmf", "nips_2022_GJGU6FgB7mg", "nips_2022_GJGU6FgB7mg", "nips_2022_GJGU6FgB7mg" ]
nips_2022_MhpB7Rxyyr
HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks
The domain adaptation framework of GANs has achieved great progress in recent years as a main successful approach to training contemporary GANs in the case of very limited training data. In this work, we significantly improve this framework by proposing an extremely compact parameter space for fine-tuning the generator. We introduce a novel domain-modulation technique that allows optimizing only a 6-thousand-dimensional vector instead of the 30 million weights of StyleGAN2 to adapt to a target domain. We apply this parameterization to state-of-the-art domain adaptation methods and show that it has almost the same expressiveness as the full parameter space. Additionally, we propose a new regularization loss that considerably enhances the diversity of the fine-tuned generator. Inspired by the reduction in the size of the optimized parameter space, we consider the problem of multi-domain adaptation of GANs, i.e., the setting where the same model can adapt to several domains depending on the input query. We propose HyperDomainNet, a hypernetwork that predicts our parameterization given the target domain. We empirically confirm that it can successfully learn a number of domains at once and may even generalize to unseen domains. Source code can be found at https://github.com/MACderRu/HyperDomainNet
Accept
This paper presents a domain adaptation technique to fine-tune a GAN's generator using a small number of domain modulation parameters. This makes fine-tuning a pre-trained GAN to different domains very efficient, while not sacrificing generation quality. The paper received generally positive reviews. There were some concerns about missing experimental results, and the authors provided those during the rebuttal phase. There were also some concerns about missing quantitative results for text-based domain adaptation and multi-domain adaptation. This aspect was also satisfactorily discussed in the rebuttal. Based on the reviews, the author response and discussion, and my own reading of the paper, I vote for acceptance. However, the authors are advised to take into account the reviewers' feedback and suggestions to improve the camera-ready version.
train
[ "DiIle0Zydw", "_Pu_-w0ZPjK", "xtNBQYY77T1", "8SbgKKiNiK", "nVbKENjqlNa", "wh9vp5DKZQZ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer M94K,\n\nThank you for your thoughtful feedback and constructive comments. We will address your concerns regarding a more clear and detailed description of our method in further revisions of our paper. Below we address your questions about target-target directional loss.\n\n> In the supplementary, e...
[ -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, 3, 3, 4 ]
[ "nVbKENjqlNa", "8SbgKKiNiK", "wh9vp5DKZQZ", "nips_2022_MhpB7Rxyyr", "nips_2022_MhpB7Rxyyr", "nips_2022_MhpB7Rxyyr" ]
nips_2022_13S0tUMqynI
Challenging Common Assumptions in Convex Reinforcement Learning
The classic Reinforcement Learning (RL) formulation concerns the maximization of a scalar reward function. More recently, convex RL has been introduced to extend the RL formulation to all the objectives that are convex functions of the state distribution induced by a policy. Notably, convex RL covers several relevant applications that do not fall into the scalar formulation, including imitation learning, risk-averse RL, and pure exploration. In classic RL, it is common to optimize an infinite trials objective, which accounts for the state distribution instead of the empirical state visitation frequencies, even though the actual number of trajectories is always finite in practice. This is theoretically sound since the infinite trials and finite trials objectives are equivalent and thus lead to the same optimal policy. In this paper, we show that this hidden assumption does not hold in convex RL. In particular, we prove that erroneously optimizing the infinite trials objective in place of the actual finite trials one, as it is usually done, can lead to a significant approximation error. Since the finite trials setting is the default in both simulated and real-world RL, we believe shedding light on this issue will lead to better approaches and methodologies for convex RL, impacting relevant research areas such as imitation learning, risk-averse RL, and pure exploration among others.
Accept
This paper provides new insights for convex reinforcement learning using finite vs. infinite trial objectives. Many of the reviewers felt this was a significant theoretical contribution that was presented clearly with the potential for significant theoretical impact. However, there were also concerns about the practicality of these insights: whether existing "real-world" applications of RL and IL were susceptible to the identified theoretical weaknesses of infinite trial objectives, and whether the experiments in this paper were sufficiently complex to demonstrate the advantages of the insights. The latter of these concerns was more widely shared among the reviewers. While I hope the authors might better address the practicality of their paper when revising their paper, I believe the theoretical contributions of the paper are sufficient for publication in NeurIPS and recommend acceptance.
train
[ "bqNimwsu33A", "eJZGo8NYSNe", "joOX-sk6ZA_", "cXg06NtZghr", "EObLnQo7C9", "Qz1t_JuzNUL", "3WzXgIwCq7f", "RBCOujC3guH", "XzW4qDyhvS8", "jfehJZfN-6K", "Q5Y4OuFUWOi", "gyoac-_3xH", "TDc9N5V45nKP", "bkDSnUIksqi", "9GnCORyD1B", "KX6IjJ84VF7", "bdXfLpvSh_Q", "NTTy5yJ1vX", "r-ObRToh97C"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Thanks for the thorough responses to my questions. As is stated below, my primary concern is exactly what I am supposed to take from this work as a practitioner that uses RL/IRL/IL. The results given here seem like a misunderstanding amongst the theory community as to what is done, or can actually be done in prac...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "jfehJZfN-6K", "Q5Y4OuFUWOi", "gyoac-_3xH", "TDc9N5V45nKP", "bkDSnUIksqi", "KX6IjJ84VF7", "NTTy5yJ1vX", "9GnCORyD1B", "nips_2022_13S0tUMqynI", "p7hov2Zzkur", "gyoac-_3xH", "TDc9N5V45nKP", "bkDSnUIksqi", "p7hov2Zzkur", "M05P4xg6EbV", "bdXfLpvSh_Q", "JjiKdKF9POK", "sLCp-wJ0Wju", "n...
nips_2022_ch5Uth1IGj_
Neural-Symbolic Entangled Framework for Complex Query Answering
Answering complex queries over knowledge graphs (KG) is an important yet challenging task because of the KG incompleteness issue and cascading errors during reasoning. Recent query embedding (QE) approaches embed the entities and relations in a KG and the first-order logic (FOL) queries into a low-dimensional space, so that queries can be answered by dense similarity search. However, previous works mainly concentrate on the target answers, ignoring the usefulness of intermediate entities, which is essential for relieving the cascading error problem in logical query answering. In addition, these methods are usually designed with their own geometric or distributional embeddings to handle logical operators like union, intersection, and negation, sacrificing the accuracy of the basic operator -- projection -- and they cannot absorb other embedding methods into their models. In this work, we propose a Neural and Symbolic Entangled framework (ENeSy) for complex query answering, which enables neural and symbolic reasoning to enhance each other to alleviate cascading errors and KG incompleteness. The projection operator in ENeSy can be any embedding method with the capability of link prediction, and the other FOL operators are handled without parameters. ENeSy answers queries by ensembling both the neural and symbolic reasoning results. We evaluate ENeSy on complex query answering benchmarks, and ENeSy achieves state-of-the-art performance, especially in the setting of training the model only on the link prediction task.
Accept
This paper proposes a neural-symbolic approach (ENeSy) for answering FOL-based queries over KGs. The main promise of the proposed approach lies in its ability to alleviate the problem of cascading error as it existed in previous approaches and also the incompleteness of KGs. The experiments are performed on two datasets (FB15K-237 and NELL-995) which establish the merits of the proposed ENeSy over the baselines. The reviewers are generally happy with the clarity and substance of the work. There are some suggested baselines in "3t6R"'s review which (to my understanding) are addressed in the authors' response. The experiments on computation speed (in response to "7Bea") are also informative. I hope all of these will be reflected in their revised draft. Lastly, I hope the authors release the scripts necessary for reproducing their experiments.
train
[ "-A4xOjami6", "YELMH6zonu", "gcHbBw_70rw", "99O4imFwzCD", "ph034eSATuT", "tBVZkcZXL7f", "2Ek3z2_BMz", "L181BnFYFmI" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I don't have further questions at the moment.", " Dear reviewer, we appreciate any suggestions from you, and we welcome any more questions about our response. Looking forward to hearing from you.", " We gratefully thanks for the precious time you spent and your positive feedback. ...
[ -1, -1, -1, -1, -1, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "99O4imFwzCD", "2Ek3z2_BMz", "tBVZkcZXL7f", "L181BnFYFmI", "2Ek3z2_BMz", "nips_2022_ch5Uth1IGj_", "nips_2022_ch5Uth1IGj_", "nips_2022_ch5Uth1IGj_" ]
nips_2022_KTf5SGYZQvt
Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs
In probably approximately correct (PAC) reinforcement learning (RL), an agent is required to identify an $\epsilon$-optimal policy with probability $1-\delta$. While minimax optimal algorithms exist for this problem, its instance-dependent complexity remains elusive in episodic Markov decision processes (MDPs). In this paper, we propose the first nearly matching (up to a horizon squared factor and logarithmic terms) upper and lower bounds on the sample complexity of PAC RL in deterministic episodic MDPs with finite state and action spaces. In particular, our bounds feature a new notion of sub-optimality gap for state-action pairs that we call the deterministic return gap. While our instance-dependent lower bound is written as a linear program, our algorithms are very simple and do not require solving such an optimization problem during learning. Their design and analyses employ novel ideas, including graph-theoretical concepts (minimum flows) and a new maximum-coverage exploration strategy.
Accept
The paper studies PAC reinforcement learning in tabular episodic MDPs with deterministic transitions and provides upper and lower bounds on the sample complexity that match up to horizon and log factors. Overall, all reviewers rate this paper positively (after the authors' responses and discussion). They view the contribution of fine-grained instance-dependent guarantees in this setting as significant and particularly appreciated the novel insights, e.g., relating the MaxCoverage function in Algorithm 1 to the StaticMaxCoverage in Algorithm 3, or the inclusion of graph-theoretical concepts in the lower bound analysis. There were also several limitations raised, in particular the deterministic transition assumption and the reward range assumption used in the lower bound. However, some of these can be addressed by clarification and more detailed discussion in the camera ready. All in all, this is a solid paper and is recommended to be accepted.
train
[ "9lZk4AD_r7", "8XtSs-iHlnE", "CjgMVEmpTK7", "PL-UoX3d7c", "uA7xCKr2WD_", "sSRurBiuOvZ", "yA2iQi5CBEw", "gMZP-a-Muu", "ZGvpZWsQn1R", "SM6aokfvpGk", "v_WNEP3YwUw", "elQelNznvmj", "Wl_zb_b4WsC", "A_FFWBpL6Zs", "K9vM1WdefQH" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the quick answer.\n\nIn the final version, we will certainly include a precise discussion on the settings where lower and upper bound match (i.e., Gaussian rewards with no restriction on their mean) and also give some hints on how to refine our results if we assume mean rewards to be bou...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3, 4 ]
[ "8XtSs-iHlnE", "CjgMVEmpTK7", "uA7xCKr2WD_", "ZGvpZWsQn1R", "sSRurBiuOvZ", "yA2iQi5CBEw", "gMZP-a-Muu", "K9vM1WdefQH", "A_FFWBpL6Zs", "Wl_zb_b4WsC", "elQelNznvmj", "nips_2022_KTf5SGYZQvt", "nips_2022_KTf5SGYZQvt", "nips_2022_KTf5SGYZQvt", "nips_2022_KTf5SGYZQvt" ]
nips_2022_QZDmftWNAMJ
S2P: State-conditioned Image Synthesis for Data Augmentation in Offline Reinforcement Learning
Offline reinforcement learning (Offline RL) suffers from an innate distributional shift as it cannot interact with the physical environment during training. To alleviate this limitation, state-based offline RL leverages a dynamics model learned from the logged experience and augments the predicted state transitions to extend the data distribution. To exploit this benefit also in image-based RL, we first propose a generative model, S2P (State2Pixel), which synthesizes the raw pixels of the agent from its corresponding state. It enables bridging the gap between the state and the image domain in RL algorithms, and virtually exploring unseen image distributions via model-based transitions in the state space. Through experiments, we confirm that our S2P-based image synthesis not only improves image-based offline RL performance but also shows powerful generalization capability on unseen tasks.
Accept
## Summary The performance of the offline RL methods can be limited by the amount of coverage in the dataset. Most real-world problems have limited coverage in the offline RL datasets and the sample efficiency of the pixel-based offline RL methods in general is often poor. Thus, it is an important direction of research to improve the sample-efficiency those continuous control pixel-based offline RL algorithms. This paper proposes a method called S2P which generates pixel based observations from the states by using a generative model. The paper shows improved results on offline DeepMind control datasets. ## Decision The paper in general is well-written and clear. The idea is simple and seems to be effective compared to other data augmentation approaches. The reviewers were in general positive about this paper. I think the NeurIPS and offline RL community *would benefit from the findings of this paper*. However, I think a few clarifications in the final version of the paper would make the contributions of this paper more clear. 1. I found the improvements shown in the paper very encouraging. However, I found the choice of the dataset odd and confusing. In particular, I am curious why the authors did not decide to use the standards datasets published in RL Unplugged benchmark for offline RL. I think the authors should justify why they did not use those datasets in the camera-ready version of the paper, provide results on those datasets, and release the datasets that they used in this paper (perhaps contacting the RL Unplugged authors to see if it is possible to release them under RL Unplugged benchmark.) As it stands out, this paper only compares against baselines that the authors themselves implemented in the paper. Nevertheless, running experiments on the RL Unplugged would enable us to be able to compare S2P against other published offline RL baselines. 2. 
The authors should include the standard deviations in the camera-ready version of the paper, as requested by *reviewer i7hg*. 3. In general, I think the authors did a good job during the rebuttal, and many of the reviewers raised their scores as a result of the additional results and experiments that the authors provided. The authors should include those results in the camera-ready version of the paper, along with clarifications about the questions that the reviewers asked in the rebuttal. 4. Currently, the links and references in the supplementary material are all broken. The authors should fix these in the camera-ready version of the paper. **With the above points addressed in the camera-ready version of the paper, I think this paper would be ready for publication.**
train
[ "ojho0BbmpL", "f-tv5-Ubl2J", "ELoC3zQtAl2", "D6in6ieInKR", "SPyo3bvC2Y", "IivnoIbRwJI", "hQwT58qXZ6Y", "qNuAy_YmvH", "iJoP3RVWGuY", "2FZkYl7ZhLm", "p3Hxjmvya1q", "oesSEWXR6Na", "_L5yueNmGu8", "Q-zMuS30tUA", "QhQ0LVhBHc", "Trw4Dp-3Vb", "e0Qa6hg0hDa", "kKl_f3LmLZC", "Dn48CgkZG7U", ...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Thanks a lot for the comments\n\nQ2. Thanks for the detailed explanation. I have no further questions on the amount of data used. Thanks also for the demonstration on mixing different amounts of augmented data. I hope you are able to include this in the camera-ready version.\n\nQ4. The additional ablation looks g...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "oesSEWXR6Na", "iJoP3RVWGuY", "2FZkYl7ZhLm", "GwUqkcQI2yW", "Dn48CgkZG7U", "hQwT58qXZ6Y", "qNuAy_YmvH", "_L5yueNmGu8", "GwUqkcQI2yW", "GwUqkcQI2yW", "Dn48CgkZG7U", "Dn48CgkZG7U", "kKl_f3LmLZC", "kKl_f3LmLZC", "e0Qa6hg0hDa", "nips_2022_QZDmftWNAMJ", "nips_2022_QZDmftWNAMJ", "nips_20...
nips_2022_KAIyxWrP9-
A Differentially Private Linear-Time fPTAS for the Minimum Enclosing Ball Problem
The Minimum Enclosing Ball (MEB) problem is one of the most fundamental problems in clustering, with applications in operations research, statistics and computational geometry. In this work, we give the first differentially private (DP) fPTAS for the Minimum Enclosing Ball problem, improving both on the runtime and the utility bound of the best known DP-PTAS for the problem, of Ghazi et al (2020). Given $n$ points in $\mathbb{R}^d$ that are covered by the ball $B(\theta_{opt},r_{opt})$, our simple iterative DP-algorithm returns a ball $B(\theta,r)$ where $r\leq (1+\gamma)r_{opt}$ and which leaves at most $\tilde O(\frac{\sqrt d}{\gamma\epsilon})$ points uncovered in $\tilde O(n/\gamma^2)$-time. We also give a local-model version of our algorithm, which leaves at most $\tilde O(\frac{\sqrt {nd}}{\gamma\epsilon})$ points uncovered, improving on the $n^{0.67}$-bound of Nissim and Stemmer (2018) (at the expense of other parameters). In addition, we test our algorithm empirically and discuss future open problems.
Accept
The paper improves the state-of-the-art for the minimum enclosing ball problem under differential privacy constraints. Moreover, the algorithm and its analysis are simple and intuitive. The reviewers agreed that the paper is a concrete advance in the area, and the ideas may lead to a practical implementation. The authors carefully responded to all the issues raised by the reviewers, clearing the way to acceptance.
train
[ "rhh5b1QAzKf", "oJau_7NomXg", "5dig1_ofor", "izLKeLJlkQh", "o6U_r2ghxOK", "FFxrB1z1EZy", "WeFXfrBk5l", "w1anQiDA1yn", "iQBi2huyr8_", "jpFgQWSNRjz" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for answering my questions. Based on your responses, I am happy to raise my score.\n\nI would recommend reading (also adding) the reference [1]. Paper [1] shows how the coreset construction for MEB and many other problems are simply special instances of Frank-Wolfe algorithm and using a simple farthest ...
[ -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "izLKeLJlkQh", "5dig1_ofor", "jpFgQWSNRjz", "iQBi2huyr8_", "w1anQiDA1yn", "WeFXfrBk5l", "nips_2022_KAIyxWrP9-", "nips_2022_KAIyxWrP9-", "nips_2022_KAIyxWrP9-", "nips_2022_KAIyxWrP9-" ]
nips_2022_B26CPuYw9VA
Debiased Causal Tree: Heterogeneous Treatment Effects Estimation with Unmeasured Confounding
Unmeasured confounding poses a significant threat to the validity of causal inference. Although various ad hoc methods have been developed to remove confounding effects, they are subject to fairly strong assumptions. In this work, we consider the estimation of conditional causal effects in the presence of unmeasured confounding using observational data and historical controls. Under an interpretable transportability condition, we prove the partial identifiability of the conditional average treatment effect on the treated group (CATT). For tree-based models, a new notion, \emph{confounding entropy}, is proposed to measure the discrepancy introduced by unobserved confounders between the conditional outcome distributions of the treated and control groups. The confounding entropy generalizes conventional confounding bias and can be estimated effectively using historical controls. We develop a new method, debiased causal tree, whose splitting rule minimizes the empirical risk regularized by the confounding entropy. Notably, our method integrates current observational data (for the empirical risk) and their historical controls (for the confounding entropy) harmoniously. We highlight that the debiased causal tree not only estimates CATT well in the presence of unmeasured confounding, but is also a robust estimator of the conditional average treatment effect (CATE) against imbalance between the treated and control populations when all confounders are observed. An extension that combines multiple debiased causal trees to further reduce biases by gradient boosting is considered. The computational feasibility and statistical power of our method are evidenced by simulations and a study of a credit card balance dataset.
Accept
The authors propose a novel way of dealing with unobserved confounding when one has access to historical untreated data on the treated subjects, using transportability ideas to find partitions of the covariate space that minimize the confounding bias via the historical untreated data. They introduce an interesting notion of confounding entropy and use it successfully in practice within a gradient boosted forest framework. Despite initial reviewer concerns, the authors' rebuttal has addressed the main concerns.
train
[ "dFnwd1A368", "3B976qPHzH", "UjxK6-P2Kod", "umT_htVuni", "00ArkJ0zc3f", "cQ176TuwRwv", "xa2qRiXuLt_", "wPJU3HyK-Dz", "3CM_U4vXHr5", "MyeMd1-1ThmH", "FHXbgBsQl2e", "iQLnpqXdwk4", "bpHTu5RhC9s", "B_bLb7gDB_g", "JI-1nk6D3Ll", "gcOcvAslWEw", "0Xh32fwOVdD", "z4q4UlQftW", "G5bMJNZtuFj"...
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " Thanks for your insightful and constructive suggestions which helped a lot to polish this article. We also feel encouraged about your positive and precise comments on the strength of our paper. We notice that your positive evaluation of our paper is altered by specific modeling as well as implementation choices,...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 9, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "umT_htVuni", "taXM0wHqrMk", "nips_2022_B26CPuYw9VA", "B_bLb7gDB_g", "cS1zdgtuxHE", "taXM0wHqrMk", "wPJU3HyK-Dz", "3CM_U4vXHr5", "MyeMd1-1ThmH", "FHXbgBsQl2e", "iQLnpqXdwk4", "bpHTu5RhC9s", "0Xh32fwOVdD", "JI-1nk6D3Ll", "gcOcvAslWEw", "cS1zdgtuxHE", "z4q4UlQftW", "G5bMJNZtuFj", "...
nips_2022_zLVLB-OncUY
Optimal Positive Generation via Latent Transformation for Contrastive Learning
Contrastive learning, which learns to contrast positive with negative pairs of samples, has been popular for self-supervised visual representation learning. Although great effort has been made to design proper positive pairs through data augmentation, few works attempt to generate optimal positives for each instance. Inspired by the semantic consistency and computational advantages of the latent space of pretrained generative models, this paper proposes to learn instance-specific latent transformations to generate Contrastive Optimal Positives (COP-Gen) for self-supervised contrastive learning. Specifically, we formulate COP-Gen as an instance-specific latent-space navigator which minimizes the mutual information between the generated positive pair subject to a semantic consistency constraint. Theoretically, the learned latent transformation creates optimal positives for contrastive learning, which remove as much nuisance information as possible while preserving the semantics. Empirically, using positives generated by COP-Gen consistently outperforms other latent transformation methods and even real-image-based methods in self-supervised contrastive learning.
Accept
This is a theory-oriented paper on contrastive representation learning. It extends GenRep by replacing the fixed latent transformation with a learnable transform. The results on ImageNet-1K image classification linear probing and VOC object detection demonstrate the effectiveness of the proposed method. The paper receives unanimous accept recommendations from all reviewers, leading to an ``Accept'' decision. However, the impact of this paper could be higher (by attracting more attention) if it compared with the absolute SoTA methods in the leaderboard [*] from the computer vision community. [*] https://paperswithcode.com/sota/self-supervised-image-classification-on
train
[ "w76mpTS7XS", "GVojEIixITf", "-7boy7I64F", "vLfk0f_RPLa", "6lqPDZIi2FV", "QQS6Sqr7EKA", "HzgbnPVR9_8", "0zAxIcqAPgW", "SQcgsAytOrT", "PILMZ2QSILu", "nqlxz5HSPP2", "veTyHWJd4Xj", "N0kUe79ABc-", "N04u6Yto5ln" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for addressing the comments. I have no further questions. ", " Many thanks to the author's responses. My concerns have been addressed.", " We thank all reviewers for the time and effort in reviewing our paper and the insightful feedback. We are encouraged that reviewers find our paper is well-motiva...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 3 ]
[ "HzgbnPVR9_8", "PILMZ2QSILu", "nips_2022_zLVLB-OncUY", "nqlxz5HSPP2", "nqlxz5HSPP2", "veTyHWJd4Xj", "veTyHWJd4Xj", "N0kUe79ABc-", "N0kUe79ABc-", "N04u6Yto5ln", "nips_2022_zLVLB-OncUY", "nips_2022_zLVLB-OncUY", "nips_2022_zLVLB-OncUY", "nips_2022_zLVLB-OncUY" ]