Dataset schema (one record per paper):
paper_id: stringlengths 19–21
paper_title: stringlengths 8–170
paper_abstract: stringlengths 8–5.01k
paper_acceptance: stringclasses (18 values)
meta_review: stringlengths 29–10k
label: stringclasses (3 values)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
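The schema above describes one record per paper: six scalar fields plus six index-aligned per-post lists. As a quick orientation, here is a minimal sketch of loading and inspecting a record; it assumes the dataset is distributed as JSON Lines with these field names, and the path "reviews.jsonl" is a hypothetical placeholder, since the preview does not specify the distribution format.

```python
import json

# Minimal sketch: load the dataset and inspect one record.
# "reviews.jsonl" is a hypothetical placeholder path.
with open("reviews.jsonl") as f:
    records = [json.loads(line) for line in f]

rec = records[0]
print(rec["paper_id"])          # e.g. "nips_2021_xN3XX6pKSD5"
print(rec["paper_acceptance"])  # decision string, e.g. "accept"
print(rec["label"])             # split tag seen in the previews: "train", "val", or "test"

# The six review_* fields are parallel lists, one entry per forum post.
assert len(rec["review_ids"]) == len(rec["review_writers"])
```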
nips_2021_xN3XX6pKSD5
Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis
We introduce DMTet, a deep 3D conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels. It marries the merits of implicit and explicit 3D representations by leveraging a novel hybrid 3D representation. Compared to the current implicit approaches, which are trained to regress the signed distance values, DMTet directly optimizes for the reconstructed surface, which enables us to synthesize finer geometric details with fewer artifacts. Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology. The core of DMTet includes a deformable tetrahedral grid that encodes a discretized signed distance function and a differentiable marching tetrahedra layer that converts the implicit signed distance representation to the explicit surface mesh representation. This combination allows joint optimization of the surface geometry and topology as well as generation of the hierarchy of subdivisions using reconstruction and adversarial losses defined explicitly on the surface mesh. Our approach significantly outperforms existing work on conditional shape synthesis from coarse voxel inputs, trained on a dataset of complex 3D animal shapes. Project page: https://nv-tlabs.github.io/DMTet/.
accept
This paper received mostly positive reviews, with the exception of reviewer f9sZ. Taking into account feedback from the other reviewers as well as the message to the area chair, the AC tends to agree that f9sZ's concerns are not sufficient to hold back publishing this work. The final revision of this paper should clarify issues of contribution/novelty brought up in several reviews and incorporate the additional experiments/fixes mentioned in the rebuttal posts.
train
[ "SzSwg279RTR", "t5XwHi501Or", "FgXzmzspu-n", "Eu6FegtONhL", "MrmZ2ZcORj", "cSo2lX2eW_m", "GLmu1LWjlL0", "s4tbGuBGdUH", "2rwXu1FcK__", "zF4my9j13ce", "QMPwWWtYuG6", "y2VAU-64mC", "fBg_UEqyoN_", "9y9RJWE2bRn" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I think we look at the issues from different angles. I look at these issues from the perspectives of guarantees or at least theoretical insights, while the authors argue from the empirical perspectives. Ultimately, it is the call of area chairs.\n\nFor example, I can understand that for simple shapes, self inters...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "Eu6FegtONhL", "FgXzmzspu-n", "MrmZ2ZcORj", "MrmZ2ZcORj", "cSo2lX2eW_m", "zF4my9j13ce", "9y9RJWE2bRn", "QMPwWWtYuG6", "fBg_UEqyoN_", "y2VAU-64mC", "nips_2021_xN3XX6pKSD5", "nips_2021_xN3XX6pKSD5", "nips_2021_xN3XX6pKSD5", "nips_2021_xN3XX6pKSD5" ]
nips_2021_4PK-St2iVZn
Learning to Combine Per-Example Solutions for Neural Program Synthesis
The goal of program synthesis from examples is to find a computer program that is consistent with a given set of input-output examples. Most learning-based approaches try to find a program that satisfies all examples at once. Our work, by contrast, considers an approach that breaks the problem into two stages: (a) find programs that satisfy only one example, and (b) leverage these per-example solutions to yield a program that satisfies all examples. We introduce the Cross Aggregator neural network module based on a multi-head attention mechanism that learns to combine the cues present in these per-example solutions to synthesize a global solution. Evaluation across programs of different lengths and under two different experimental settings reveals that when given the same time budget, our technique significantly improves the success rate over PCCoder [Zohar et al. 2018] and other ablation baselines.
accept
The paper presents a new technique for synthesizing straight-line programs from input-output examples. The core novelty in the technique is that rather than trying to synthesize a program that works correctly for all available examples, the algorithm first finds separate programs that work for each individual example and then uses a cross aggregator to merge the individually working programs into a program that works correctly on all inputs. As the authors point out in their related work section, there is precedent for this idea in the program synthesis community, but this has never been done before in a neural setting. The authors addressed many of the questions and concerns raised in the reviews and provided a very thorough rebuttal. I think the main open question is the generality of the technique, since it is only evaluated on an important but limited domain, and it is not clear how to generalize it to more complex settings. That said, this paper presents an interesting and novel contribution and should be accepted.
train
[ "raiL0EKAfWn", "9L5zeBhx49T", "kNtaRAWfsn", "B4Idjn1PiE", "aZBpukxKgt2", "xSka6KU6jdW", "gVcobwzOyX7", "h3aUdMqrTho", "L87Y_vk9VxC", "XtBlYY8OPPN", "Y7kDzjVPwds", "2A3dqxWYrin", "rOPw22YBU4D", "_uFN29aPT72", "axAm2QF1xHw", "2JykeEwvG3", "gfptED817Em", "KjF9efMMEP", "4KL-zFNyp0d",...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ " I see. Thanks for the explanation (and sorry for the delayed response. I actually thought I did reply earlier).", "This paper proposes a new approach for neural program synthesis by first searching for per-example solutions and then learn to combine them into a global solution. The paper uses PCCoder for synthe...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "2A3dqxWYrin", "nips_2021_4PK-St2iVZn", "B4Idjn1PiE", "h3aUdMqrTho", "KjF9efMMEP", "L87Y_vk9VxC", "nips_2021_4PK-St2iVZn", "Y7kDzjVPwds", "XtBlYY8OPPN", "axAm2QF1xHw", "rOPw22YBU4D", "_uFN29aPT72", "gfptED817Em", "2JykeEwvG3", "gVcobwzOyX7", "DDyfC-jsMo", "9L5zeBhx49T", "4KL-zFNyp0...
nips_2021_LVWcGZr-8h
On Success and Simplicity: A Second Look at Transferable Targeted Attacks
Achieving transferability of targeted attacks is reputed to be remarkably difficult. The current state of the art has resorted to resource-intensive solutions that necessitate training model(s) for each target class with additional data. In our investigation, we find, however, that simple transferable attacks which require neither model training nor additional data can achieve surprisingly strong targeted transferability. This insight has been overlooked until now, mainly because the widespread practice of attacking with only a few iterations has largely limited the attack convergence to optimal targeted transferability. In particular, we, for the first time, identify that a very simple logit loss can largely surpass the commonly adopted cross-entropy loss, and yield even better results than the resource-intensive state of the art. Our analysis spans a variety of transfer scenarios, especially including three new, realistic scenarios: an ensemble transfer scenario with little model similarity, a worst-case scenario with low-ranked target classes, and also a real-world attack on the Google Cloud Vision API. Results in these new transfer scenarios demonstrate that the commonly adopted, easy scenarios cannot fully reveal the actual strength of different attacks and may cause misleading comparative results. We also show the usefulness of the simple logit loss for generating targeted universal adversarial perturbations in a data-free manner. Overall, the aim of our analysis is to inspire a more meaningful evaluation on targeted transferability. Code is available at https://github.com/ZhengyuZhao/Targeted-Tansfer.
accept
This paper studies targeted attacks, where one of the main findings is showcasing that the simple logit loss can lead to very good transferability of attacks. The reviewers liked the contributions of the paper, partially because the logit loss and associated approach was simple but effective. The reviewers also appreciated the experiments in this paper and found them interesting and well executed. In light of all of this, I recommend acceptance. The reviewers do also point out a couple of related works, such as [1] below, and I encourage the authors to discuss their findings in the context of this work and other prior work. The authors have also made some promises, such as to open source their code, and I would encourage them to follow through on any updates that have been pointed out in the reviews. [1] Chaoning Zhang, Philipp Benz, Tooba Imtiaz, and In So Kweon. Understanding adversarial examples from the mutual influence of images and perturbations. In CVPR, 2020.
train
[ "mevt4zQdZzL", "ZhsSV17lTDf", "ijthon5t9p0", "fhBI0-xa3zK", "jmXQkf8r0oo", "xFtiTJDOW8V", "P-Cv9lQLN9y", "SwYhkSHj6j", "NXXGXzDLqSX" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Q1:** Theoretical analysis to explain the superiority of the logit attack.\n\n**A1:** Please see our response to Q1 of the Reviewer yhEq as shown above.\n\n**Q2:** Some experimental settings about C&W and source models for ensemble transfer are not clear.\n\n**A2:** As can be seen from the following table, the ...
[ -1, -1, -1, -1, -1, 6, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 3, 2 ]
[ "NXXGXzDLqSX", "xFtiTJDOW8V", "SwYhkSHj6j", "P-Cv9lQLN9y", "nips_2021_LVWcGZr-8h", "nips_2021_LVWcGZr-8h", "nips_2021_LVWcGZr-8h", "nips_2021_LVWcGZr-8h", "nips_2021_LVWcGZr-8h" ]
nips_2021_9UjRw5bqURS
Provably efficient, succinct, and precise explanations
Guy Blanc, Jane Lange, Li-Yang Tan
accept
This paper addresses the problem of interpreting the output of a black-box machine learning classifier at any point x by releasing an (exponentially small) fragment of a decision tree -- the conjunction labeling a path from root to leaf activated at x. This is natural if one views an "explanation" as a "certificate" for the value of a function at a given point (and ties in neatly to certificate complexity studied in computational complexity theory). To construct such explanations efficiently, the paper introduces and gives an algorithm for the problem of implicitly learning decision trees based on some recent developments in the area. Interpreting machine learning classifiers is an important goal and one that could indeed benefit from fresh theoretical approaches. This innovative paper introduces one approach that could fit the bill. We recommend acceptance.
train
[ "Qd8IzRwys1", "n1m8BjM-NnB", "nt5ixKueQm", "snGW00NRDm", "QWDlBxzCHhe", "gx2dAb6_5Ye" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This work proposes (1) a new theoretical model for algorithms to explain the\npredictions of black-box classifiers and (2) an algorithm in this model that\nproduces such explanations with polynomial guarantees on its running time for\nproducing explanations of a given fidelity.\n\nThe model is an analogue of learn...
[ 7, 6, 7, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1 ]
[ "nips_2021_9UjRw5bqURS", "nips_2021_9UjRw5bqURS", "nips_2021_9UjRw5bqURS", "nt5ixKueQm", "n1m8BjM-NnB", "Qd8IzRwys1" ]
nips_2021__tQns0wUl_3
Refined Learning Bounds for Kernel and Approximate $k$-Means
Yong Liu
accept
The work presents nearly optimal bounds for the excess clustering risk of kernel k-means, improving the rate of convergence from k/sqrt(n) to sqrt(k/n), and also shows that approximate kernel k-means (Nystrom approximation with uniform sampling of sqrt(nk) landmark points) has the same order of excess clustering risk. This theoretical result is a strong contribution and of great relevance to the community.
train
[ "1L7esSbtlLj", "Ba2zla59sxR", "lyCCuh6OD-S", "wjSHN5LD_LI", "E0iuNWzAP-z", "RkoAOP7Z5J", "zpvcQQWIqSV", "MbGzMrIrJOY", "qcm2NkosC4p", "QqOESn7jTku", "nnRCZdXgPsQ" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for your recognition of this manuscript. As there are some additional results added and also for better anonymous review, a different title of this manuscript is adopted. According to the policy of NeurIPS 2021 on Preprints, that is \"Authors may submit anonymized work to NeurIPS that is already available ...
[ -1, -1, 6, 7, -1, -1, -1, -1, -1, 7, 8 ]
[ -1, -1, 1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "Ba2zla59sxR", "zpvcQQWIqSV", "nips_2021__tQns0wUl_3", "nips_2021__tQns0wUl_3", "nnRCZdXgPsQ", "lyCCuh6OD-S", "QqOESn7jTku", "nips_2021__tQns0wUl_3", "wjSHN5LD_LI", "nips_2021__tQns0wUl_3", "nips_2021__tQns0wUl_3" ]
nips_2021_-msETI57gCH
Learning Causal Semantic Representation for Out-of-Distribution Prediction
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output. To address the problem, we propose a Causal Semantic Generative model (CSG) based on causal reasoning so that the two factors are modeled separately, and develop methods for OOD prediction from a single training domain, which is common and challenging. The methods are based on the causal invariance principle, with a novel design in variational Bayes for both efficient learning and easy prediction. Theoretically, we prove that under certain conditions, CSG can identify the semantic factor by fitting training data, and this semantic-identification guarantees the boundedness of OOD generalization error and the success of adaptation. Empirical study shows improved OOD performance over prevailing baselines.
accept
This paper attempts to robustify models learned on a particular dataset to OOD examples which could plausibly be addressed if the model learns based on the semantics of training observations, rather than spurious factors. The proposed solution is—inspired by recent work in causality—to introduce a pair of latent variables modelling the semantic relationship between inputs and labels, and the natural spurious variations/noise in the input, respectively. Neural variational inference is used to jointly learn this generative model. Experiments show it works on some relatively toy data, which is arguably complex enough to prove the concept. I personally like the idea, and would like to have seen a richer discussion here. I have read all the reviews and rebuttals carefully; it seems like the main issues had to do with clarity, the strength of some of the claims in the paper, and the appropriateness of comparisons and datasets used. I feel in the rebuttals the authors have responded to the latter two points to my satisfaction, and if the concerned reviewers are not satisfied with those rebuttals, they have not made this known to me. The authors have committed to reviewing the strength of some of the assertions about the approach in their rebuttal, and I count on them to make good on this in the preparation of their camera ready. Regarding clarity, I did find section 4 a bit hard to read due to the density of mathematical expressions. My suggestion would be to summarise the findings of section 5 in the main paper, move the rest of section 5 to the appendix, and expand section 4 to give a little bit of visual breathing room and more detail where needed. I don't think this sort of change warrants asking for further reviews, and believe this paper will be of interest to NeurIPS attendees. I recommend acceptance.
train
[ "npWWwzhXGc5", "9rGYva4kLEM", "qHUuOomNGC", "bg8_hPzgItY", "lyqt8WZOWL-", "5WlDQ9W8h2v", "ikweQTxwaO", "xyGSc6OzQwp", "-IdX84HbccO", "rdjPgSMLWfE", "0LiyxT7j5zF" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for acknowledging our novelty, and technical and theoretical contributions. We’ll further improve the paper according to your valuable feedback.\n\n* Clarity.\n\n We paid dedicated effort to make the paper easy to follow (noted by Reviewer SkAw), nevertheless it is still possible that it does not please a...
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 2 ]
[ "0LiyxT7j5zF", "0LiyxT7j5zF", "rdjPgSMLWfE", "nips_2021_-msETI57gCH", "rdjPgSMLWfE", "xyGSc6OzQwp", "-IdX84HbccO", "nips_2021_-msETI57gCH", "nips_2021_-msETI57gCH", "nips_2021_-msETI57gCH", "nips_2021_-msETI57gCH" ]
nips_2021_DtXBYsSOxCD
A first-order primal-dual method with adaptivity to local smoothness
Maria-Luiza Vladarean, Yura Malitsky, Volkan Cevher
accept
The paper proposes an adaptive variant of the Condat-Vu algorithm for composite bilinear saddle-point problems, providing theoretical guarantees and reporting some promising experimental results. After fairly extensive discussion, the reviewers have converged to a positive consensus on the paper, and I consequently recommend its acceptance. Based on the discussion, the authors should carefully revise their manuscript, paying close attention to the following points: * Comparison with the related works mentioned by the reviewers. * Clearing-up of the derivation of the strongly-convex case, aiming for a concise bound easily comparable with the best known non-adaptive bound. * Inclusion of additional results with more extensive tuning of the non-adaptive baseline.
train
[ "uEv6z4CMuX8", "9PoBBMHfVLK", "9zeB_mnOCSB", "IC3gkfAiGd0", "93hbjisuhy5", "OY2QgN5GB3V", "qNuPj1wHqXt", "rNOBfccQSbO", "l4PFtIi4hNS", "xv9Sk_h_SuD", "_VjPbi6x_vg", "PueJ4G0CMmq", "_yAxWaR7bg1", "v-ofeTRMw_f", "r8t3azO8Dvw" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Authors have responded positively to my comments as well as sufficiently addressed concerns of other reviewers. They have engaged in fruitful discussion. I am satisfied with overall paper provided that they make the suggested changes in the final version. I keep my score.", " **Regarding the condition number d...
[ -1, -1, -1, 7, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, -1, 5, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "_yAxWaR7bg1", "9zeB_mnOCSB", "rNOBfccQSbO", "nips_2021_DtXBYsSOxCD", "nips_2021_DtXBYsSOxCD", "rNOBfccQSbO", "_VjPbi6x_vg", "xv9Sk_h_SuD", "nips_2021_DtXBYsSOxCD", "PueJ4G0CMmq", "l4PFtIi4hNS", "93hbjisuhy5", "IC3gkfAiGd0", "r8t3azO8Dvw", "nips_2021_DtXBYsSOxCD" ]
nips_2021_P84bifNCpFQ
A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning
For an image query, unsupervised contrastive learning labels crops of the same image as positives, and other image crops as negatives. Although intuitive, such a naive label assignment strategy cannot reveal the underlying semantic similarity between a query and its positives and negatives, and impairs performance, since some negatives are semantically similar to the query or even share the same semantic class as the query. In this work, we first prove that for contrastive learning, inaccurate label assignment heavily impairs its generalization for semantic instance discrimination, while accurate labels benefit its generalization. Inspired by this theory, we propose a novel self-labeling refinement approach for contrastive learning. It improves the label quality via two complementary modules: (i) self-labeling refinery (SLR) to generate accurate labels and (ii) momentum mixup (MM) to enhance similarity between query and its positive. SLR uses a positive of a query to estimate semantic similarity between a query and its positive and negatives, and combines estimated similarity with vanilla label assignment in contrastive learning to iteratively generate more accurate and informative soft labels. We theoretically show that our SLR can exactly recover the true semantic labels of label-corrupted data, and supervises networks to achieve zero prediction error on classification tasks. MM randomly combines queries and positives to increase semantic similarity between the generated virtual queries and their positives so as to improve label accuracy. Experimental results on CIFAR10, ImageNet, VOC and COCO show the effectiveness of our method.
accept
The reviewers are unanimously positive about the paper. Relevant question, good analysis and well-justified remedy.
val
[ "mB3perjsuD7", "ZAkEDdFF-bM", "KUdgRsNMFLM", "K7gSz0CLoGj", "zAIeC3kslu", "ExRiHXmj9h", "F2fZqKC_hAH", "G9uN4dGWCHG", "XGr33FP6l91", "borQO4QiyX7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " We are sincerely happy that our response helps you better understand and addresses your main concerns. Many thanks to you and your insightful comments again!!", "This paper discusses why inaccurate one-hot label in contrastive learning cannot reveal well semantic similarities between samples, and how these inac...
[ -1, 6, -1, -1, -1, 6, -1, -1, -1, 7 ]
[ -1, 3, -1, -1, -1, 4, -1, -1, -1, 4 ]
[ "KUdgRsNMFLM", "nips_2021_P84bifNCpFQ", "F2fZqKC_hAH", "zAIeC3kslu", "XGr33FP6l91", "nips_2021_P84bifNCpFQ", "ZAkEDdFF-bM", "borQO4QiyX7", "ExRiHXmj9h", "nips_2021_P84bifNCpFQ" ]
nips_2021_e5RK939Zz1S
Adversarial Robustness with Semi-Infinite Constrained Learning
Despite strong performance in numerous applications, the fragility of deep learning to input perturbations has raised serious questions about its use in safety-critical domains. While adversarial training can mitigate this issue in practice, state-of-the-art methods are increasingly application-dependent, heuristic in nature, and suffer from fundamental trade-offs between nominal performance and robustness. Moreover, the problem of finding worst-case perturbations is non-convex and underparameterized, both of which engender an unfavorable optimization landscape. Thus, there is a gap between the theory and practice of robust learning, particularly with respect to when and why adversarial training works. In this paper, we take a constrained learning approach to address these questions and to provide a theoretical foundation for robust learning. In particular, we leverage semi-infinite optimization and non-convex duality theory to show that adversarial training is equivalent to a statistical problem over perturbation distributions. Notably, we show that a myriad of previous robust training techniques can be recovered for particular, sub-optimal choices of these distributions. Using these insights, we then propose a hybrid Langevin Markov Chain Monte Carlo approach for which several common algorithms (e.g., PGD) are special cases. Finally, we show that our approach can mitigate the trade-off between nominal and robust performance, yielding state-of-the-art results on MNIST and CIFAR-10. Our code is available at: https://github.com/arobey1/advbench.
accept
The paper proposes to solve the constrained robust learning problem via a dual formulation of the problem in which the inner maximization can be written as optimizing over a set of distributions over perturbations (for a given (x,y)). The main result of the paper is a characterization of the optimal distribution. Based on this, the authors propose a sampling approach, where for each (x,y) the perturbation is sampled from an approximation to the optimal perturbation distribution. The paper presents a practical algorithm based on the theory that achieves competitive results on benchmark datasets. The reviewers raised concerns about some of the theoretical portions of the paper that follow from standard tools. Given that the empirical results are competitive with algorithms such as TRADES (that are also based on theoretical insights), it was unclear whether the main theoretical result (proposition 3.2) is novel enough for NeurIPS. It was also pointed out by the reviewers that the empirical comparisons were not conducted with respect to proper benchmarks such as RobustBench (this was addressed by the authors in the response via additional experimental results). Overall this is a borderline paper.
train
[ "R8RvC7Bf1ac", "YkwegbbbOvK", "MArqdWqCfy4", "t813gWhFpj-", "HajBGERi7M", "MGN2_1fD0uw", "0AJ-yMjrfsn", "Qp3mqeJaILV", "8KcauJ_3wG1", "dK-hPLEereq", "z3yTpe0H9iN", "8fU9WedZtKP", "Ej6D8bhrb-j", "i0vrVkkmTjo", "9JnMtpUZRq", "wQTvHiorRZg", "PZAdpUKKxq", "-VTMPt1kEZW", "-AlHDwQxJro"...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_re...
[ " ## Regarding our contribution & convergence analysis \n\nIn this response, we will do our best to answer your questions. However, before we do that, we want to emphasize that\n\n> **Analyzing the convergence of Algorithm 1 is beyond the scope of our paper.** \n\nHere's why:\n\n* **This paper is not about conver...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ "MArqdWqCfy4", "R8RvC7Bf1ac", "t813gWhFpj-", "HajBGERi7M", "MGN2_1fD0uw", "0AJ-yMjrfsn", "Qp3mqeJaILV", "8KcauJ_3wG1", "dK-hPLEereq", "8fU9WedZtKP", "Ej6D8bhrb-j", "Ej6D8bhrb-j", "i0vrVkkmTjo", "9JnMtpUZRq", "-AlHDwQxJro", "A9KnhE_yqa", "-VTMPt1kEZW", "nips_2021_e5RK939Zz1S", "ni...
nips_2021_Rx9dBZaV_IP
Conformal Time-series Forecasting
Current approaches for multi-horizon time series forecasting using recurrent neural networks (RNNs) focus on issuing point estimates, which is insufficient for decision-making in critical application domains where an uncertainty estimate is also required. Existing approaches for uncertainty quantification in RNN-based time-series forecasts are limited as they may require significant alterations to the underlying model architecture, may be computationally complex, may be difficult to calibrate, may incur high sample complexity, and may not provide theoretical guarantees on frequentist coverage. In this paper, we extend the inductive conformal prediction framework to the time-series forecasting setup, and propose a lightweight algorithm to address all of the above limitations, providing uncertainty estimates with theoretical guarantees for any multi-horizon forecast predictor and any dataset with minimal exchangeability assumptions. We demonstrate the effectiveness of our approach by comparing it with existing benchmarks on a variety of synthetic and real-world datasets.
accept
The paper extends the conformal prediction methodology to the time series setting, with exchangeability assumed between time series. In doing so, the proposed approach can provide uncertainty estimates with coverage guarantees. The reviewers agree that the paper tackles a relevant problem, proposes a reasonable solution, is well written and theoretically sound. While the reviewers pointed out some areas for improvements (additional related work, improvements to the empirical evaluation), the authors were able to alleviate all major concerns during the discussion period, making this a solid contribution with the potential to spark interesting follow-up work.
test
[ "xtbW6iPDBCm", "r4bIxBCp-QJ", "-dZ7PsOmk84", "AShJv3F-Mks", "14PliPxpRr-", "Zg6_xvTBkT_", "kAsqsWXUGpK", "fs9gV9WuROm", "HUjxZkOS2JO", "fCvUNvFfDE5", "mAWnZQP-rvU", "frlAXN6V6Cs", "PuZ_VQoiM2s", "FT6bLtbuitC", "B9b1YW5NcLg", "mfxPj266ngV", "zbHND_5v7P", "8iTRWG_gW4W", "FZmGZllIOp...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer vdnC\n\nWe would like to thank you again for your review and follow up to ask if there is anything else we could do to improve the paper and its score and if there are any more remaining questions or comments. :)\n\nThank you!", " Dear Reviewer Z5gg,\n\nWe would like to thank you again for your re...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "B9b1YW5NcLg", "erJobANOher", "FZmGZllIOpz", "zbHND_5v7P", "nips_2021_Rx9dBZaV_IP", "frlAXN6V6Cs", "erJobANOher", "FZmGZllIOpz", "14PliPxpRr-", "nips_2021_Rx9dBZaV_IP", "frlAXN6V6Cs", "B9b1YW5NcLg", "nips_2021_Rx9dBZaV_IP", "FZmGZllIOpz", "fCvUNvFfDE5", "erJobANOher", "14PliPxpRr-", ...
nips_2021_yDwfVD_odRo
A 3D Generative Model for Structure-Based Drug Design
We study a fundamental problem in structure-based drug design --- generating molecules that bind to specific protein binding sites. While we have witnessed the great success of deep generative models in drug design, the existing methods are mostly string-based or graph-based. They are limited by the lack of spatial information and thus cannot be applied to structure-based design tasks. In particular, such models have little or no knowledge of how molecules interact with their target proteins exactly in 3D space. In this paper, we propose a 3D generative model that generates molecules given a designated 3D protein binding site. Specifically, given a binding site as the 3D context, our model estimates the probability density of atoms' occurrences in 3D space --- positions that are more likely to have atoms will be assigned higher probability. To generate 3D molecules, we propose an auto-regressive sampling scheme --- atoms are sampled sequentially from the learned distribution until there is no room for new atoms. Combined with this sampling scheme, our model can generate valid and diverse molecules, which could be applicable to various structure-based molecular design tasks such as molecule sampling and linker design. Experimental results demonstrate that molecules sampled from our model exhibit high binding affinity to specific targets and good drug properties such as drug-likeness even if the model is not explicitly optimized for them.
accept
The AC and reviewers all agree that this is an interesting submission. We strongly urge the authors to incorporate their clarifying comments into the manuscript. In addition, as mentioned by several reviewers, it is important that the authors be clear about the relative value of the Vina score. As noted by kPk1, coming up with a new metric that is more appropriate would be a valuable direction for future study. Reviewer RaVz made some very pertinent points on Tanimoto similarity, SA scores, etc. It would be very exciting to see these comments addressed either in the present manuscript or in future work.
train
[ "0ZA6eybZorU", "GxCVFQjW-HO", "xqy8KrrJ2CU", "vu4qqTQFgpz", "88ppB6K2eP0", "mxKu3HInTT7", "YMsP2Wr38tQ", "Iwv104YE0KF", "0bNJdRWjmC_", "g9p3GG1zWHR", "N4_EmIdr3wa", "mtyuJHLEG7p", "pfj2XTWBT4g", "USIMrIfBnrR" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " We sincerely appreciate the reviewer’s constructive comments! \n\nWe agree that our present work has some limitations regarding metrics and other aspects. We will discuss more about the limitations in the next version and identify directions for future work. \n\nAs for the comparison, the ultimate goal of our met...
[ -1, -1, -1, 7, 6, -1, -1, 6, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, -1, 3, 2, -1, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "0bNJdRWjmC_", "mxKu3HInTT7", "YMsP2Wr38tQ", "nips_2021_yDwfVD_odRo", "nips_2021_yDwfVD_odRo", "mtyuJHLEG7p", "pfj2XTWBT4g", "nips_2021_yDwfVD_odRo", "N4_EmIdr3wa", "vu4qqTQFgpz", "Iwv104YE0KF", "88ppB6K2eP0", "USIMrIfBnrR", "nips_2021_yDwfVD_odRo" ]
nips_2021_GPwmbxtG9Ow
Bootstrapping the Error of Oja's Algorithm
Robert Lunde, Purnamrita Sarkar, Rachel Ward
accept
The reviewers were unanimous in their support for this paper, with a clear theoretical contribution to our understanding of an important algorithm. Most reviewers also felt the paper was well-written, and we thus feel it will be broadly interesting to the NeurIPS community.
val
[ "oHIrLmqNWqG", "mNWUnYax1FF", "kdDG1IJw2jF", "VOE4igCETRe", "Sf5gIw00dxY", "BkTWRqruh0J", "qztRaDwRij", "n8CcXl9Xeu8", "n_N_UxjMFy", "TDbAUzrmFWM", "xIfTTWIem_Y", "n-A_t1WGFnz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'd like to thank the authors for their reply. I suggest authors fixing typos and incorporating the explanation to the revision .I will not change my score and am very happy with the paper.", " Thank you very much for your response. I will leave the score unchanged.", " Thanks to the authors for the reply. I ...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, 3, 4, 3 ]
[ "n_N_UxjMFy", "qztRaDwRij", "n8CcXl9Xeu8", "BkTWRqruh0J", "nips_2021_GPwmbxtG9Ow", "n-A_t1WGFnz", "xIfTTWIem_Y", "TDbAUzrmFWM", "Sf5gIw00dxY", "nips_2021_GPwmbxtG9Ow", "nips_2021_GPwmbxtG9Ow", "nips_2021_GPwmbxtG9Ow" ]
nips_2021_Aqzn23LfwT
Landscape analysis of an improved power method for tensor decomposition
Joe Kileel, Timo Klock, João Pereira
accept
The paper studies the symmetric tensor decomposition problem, in which we are given a (noisy) superposition of outer products, and the goal is to recover these generating components. The paper analyzes the "subspace power method", which seeks local maximizers of the norm of the projection of a component onto the span of a certain reshaping of the input tensor. For orthogonal tensors, this coincides with the standard power method; for general tensors with correlated components, the subspace power method has advantages: global maximizers are known to coincide with the generating components. In contrast, previous work does not prove that the problem lacks local maximizers, and as a consequence, does not establish algorithmic recovery guarantees. The contribution of this work is a landscape analysis of the SPM, which proves that under certain deterministic incoherence conditions, there are no suboptimal maximizers in a certain super level set of the objective function. For low rank incoherent tensors (r = O(D)), this establishes that local maximizers are global. For random tensors with rank approaching the conjectured barrier O(D^{m/2}) for efficient methods, this result rules out suboptimal maximizers with large objective function (nonvanishing objective function in certain scalings of the problem parameters). The latter result can be compared to analyses of Ge and Ma for the traditional power method. Initial reviews were generally positive, acknowledging the theoretical contributions of the work, while also raising questions about (i) the strength of the paper's incoherence conditions, and (ii) the semi-global nature of the paper's results for random overcomplete tensors. The reviewers found the authors' response on these points generally satisfying; the final consensus is to accept the paper. The AC concurs: the paper contributes a novel analysis of a method that seems to have practical advantages in handling correlated tensors. While there are many remaining issues (global results for highly overcomplete tensors?), the paper makes a contribution to the literature in this area.
train
[ "wPysF5Svya4", "1ti9raScqWM", "Czktdo62eKL", "mLYcsblbG0z", "4dg4FBipWUf", "KeKqDEs6-J", "FxI4ZcirQat", "Q9pWXCbB2Va", "9lvdcklD10G" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper considers the subspace power method (SPM) for symmetric tensors.\nSpecifically, this paper analyzes the non-convex optimization landscape associated with the SPM objective.\nThe main results are two theorems, theorem 6 and theorem 15. Under some assumptions, it is shown in theorem 6 that for low rank c...
[ 6, -1, -1, -1, -1, -1, 7, 9, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 5, 2 ]
[ "nips_2021_Aqzn23LfwT", "KeKqDEs6-J", "FxI4ZcirQat", "9lvdcklD10G", "Q9pWXCbB2Va", "wPysF5Svya4", "nips_2021_Aqzn23LfwT", "nips_2021_Aqzn23LfwT", "nips_2021_Aqzn23LfwT" ]
nips_2021_q6Kknb68dQf
Curriculum Offline Imitation Learning
Offline reinforcement learning (RL) tasks require the agent to learn from a pre-collected dataset with no further interactions with the environment. Despite the potential to surpass the behavioral policies, RL-based methods are generally impractical due to training instability and the extrapolation errors from bootstrapping, which always require careful hyperparameter tuning via online evaluation. In contrast, offline imitation learning (IL) has no such issues since it learns the policy directly without estimating the value function by bootstrapping. However, IL is usually limited in the capability of the behavioral policy and tends to learn a mediocre behavior from the dataset collected by the mixture of policies. In this paper, we aim to take advantage of IL but mitigate such a drawback. Observing that behavior cloning is able to imitate neighboring policies with less data, we propose \textit{Curriculum Offline Imitation Learning (COIL)}, which utilizes an experience picking strategy to make the agent imitate from adaptive neighboring policies with a higher return, and improves the current policy along curriculum stages. On continuous control benchmarks, we compare COIL against both imitation-based methods and RL-based methods, showing that COIL not only avoids learning a mediocre behavior on mixed datasets but is even competitive with state-of-the-art offline RL methods.
accept
This paper generated an interesting discussion between the reviewers and the authors. In particular, the authors did a very good job of addressing the reviewers' concerns. They ran new experiments, finetuned the baselines and conducted a thorough ablation study. The paper is now considered to present a SotA algorithm for offline imitation learning.
train
[ "Djz_0_TUuMC", "foqLFmjFTc", "fnRi4rKC9qr", "Nm5dAygtZB7", "PeVCawP0hIz", "CTQiwJjQNSR", "B0etxbkWw3-", "nE4VCYk-XSp", "ns5Gaa13BFv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper applied Online Imitation Learning to offline RL. The central idea is, at each iteration of training, the proposed approach will sample the data points from the neighboring policies based on the experience picking. The intuition is built based on the observation/analysis of the discrepancy between demo p...
[ 7, -1, -1, 6, -1, 7, -1, -1, -1 ]
[ 2, -1, -1, 4, -1, 3, -1, -1, -1 ]
[ "nips_2021_q6Kknb68dQf", "ns5Gaa13BFv", "B0etxbkWw3-", "nips_2021_q6Kknb68dQf", "CTQiwJjQNSR", "nips_2021_q6Kknb68dQf", "Nm5dAygtZB7", "CTQiwJjQNSR", "Djz_0_TUuMC" ]
nips_2021_AvHeCmK2fsE
Robust Pose Estimation in Crowded Scenes with Direct Pose-Level Inference
Multi-person pose estimation in crowded scenes is challenging because overlapping and occlusions make it difficult to detect person bounding boxes and infer pose cues from individual keypoints. To address those issues, this paper proposes a direct pose-level inference strategy that is free of bounding box detection and keypoint grouping. Instead of inferring individual keypoints, the Pose-level Inference Network (PINet) directly infers the complete pose cues for a person from his/her visible body parts. PINet first applies the Part-based Pose Generation (PPG) to infer multiple coarse poses for each person from his/her body parts. Those coarse poses are refined by the Pose Refinement module through incorporating pose priors, and finally are fused in the Pose Fusion module. PINet relies on discriminative body parts to differentiate overlapped persons, and applies visual body cues to infer the global pose cues. Experiments on several crowded scenes pose estimation benchmarks demonstrate the superiority of PINet. For instance, it achieves 59.8% AP on the OCHuman dataset, outperforming the recent works by a large margin.
accept
Initially, two of the reviewers expressed concerns about the paper (lack of clarity and limited novelty) and ranked the paper marginally below acceptance. As the ensuing rebuttal managed to successfully address most of the reviewers' concerns, the ACs and the majority of the reviewers agreed that this is a strong paper that deserves acceptance. The authors are highly encouraged to address the key comments reported by the reviewers as well as to implement all the improvements (as indicated by the authors in the rebuttal) in the final camera-ready version.
train
[ "Nr9Dumv3M3H", "7OM8epqtCCN", "8VhcMZtXclF", "hlhZMIy--Ma", "B4i6AZrZjBq", "2KAPqh1Nqo1", "opcnMYqxpKz", "D1PSg8n6z-v", "-44lDymR_D8", "QUQgGucblpp", "lkY7696E2fn", "fjG34EB_3am", "WFaZaT2urhd", "HDgl5wGxDR", "RGga2M70eAq", "EQk8CtUXgnK", "BTcqmUhFpBT", "hOO4x8W9N-", "dXVcVzLPpQM...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "...
[ " Thanks for the further clarification. I have already upgraded my rating.", " Thanks for the feedback! Overall I still think this paper is interesting to the vision community and will keep the original rating.", " This paper presents a one-stage pose estimation approach that applies PPG to infer multiple poses...
[ -1, -1, 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5, 6, 7 ]
[ -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4, 3 ]
[ "opcnMYqxpKz", "fjG34EB_3am", "nips_2021_AvHeCmK2fsE", "-44lDymR_D8", "WFaZaT2urhd", "nips_2021_AvHeCmK2fsE", "D1PSg8n6z-v", "HDgl5wGxDR", "QUQgGucblpp", "lkY7696E2fn", "8VhcMZtXclF", "7or4OIEMdvp", "HLvs8UsBCKM", "2KAPqh1Nqo1", "oIYUpRQRIsc", "hOO4x8W9N-", "dXVcVzLPpQM", "nips_202...
nips_2021_yTXtUSV-gk4
Ising Model Selection Using $\ell_{1}$-Regularized Linear Regression: A Statistical Mechanics Analysis
Xiangming Meng, Tomoyuki Obuchi, Yoshiyuki Kabashima
accept
The authors in this paper study the learning of a Boltzmann machine, aka the Inverse Ising model, a classical problem in graphical models and Markov Random Fields. They propose an analysis of the performance of the l1-regularised linear regression estimator in finding the underlying non-zero coefficients. The analysis is performed using a non-rigorous but powerful heuristic approach from statistical mechanics, the replica method, that has been used in many other machine learning problems, in idealistic situations. The authors confirmed the theoretical result by running a large number of numerical simulations. They show the sample complexity is such that O(log N) samples are enough to correctly identify the right neighbourhood of a generic variable. The theory presented seems to be powerful enough to predict both the precision and the recall rates for finite dimension, and seems to give fairly good predictions for graphs with many loops. Interestingly, the methodology of the present work can apparently be generalised and extended to other estimators. The review process was intense, with six reviewers and a large number of forum replies. All reviewers testified to the quality of the presented results. From a technical point of view, the computation contained in this manuscript was found to be involved and complex, yet to provide a non-trivial analytical result. The comparison with the numerics, however, strongly supports the claim that the replica computation is giving the correct answer, even though it is not rigorous. This paper was thus judged to represent a non-trivial contribution from statistical physics to machine learning. However, the use of the non-rigorous replica method and its acrobatic non-rigorous mathematics was judged to be sometimes dazzling. The rebuttal saw a number of discussions, and most reviewers agreed that the paper was impressive, and scores were increased during the process. A criticism that remained, though, from a minority of reviewers, was that the derivation is likely inaccessible to non-experts. While this is true, it can be said of many theoretical papers at NeurIPS, including rigorous ones, and there is a long-standing tradition of welcoming such non-rigorous papers at NeurIPS. Given the agreement on the quality of the results, the very good ranking & grading of the paper, and the extensive numerical simulations that confirm the validity of the theory, we believe the paper to be largely worthy of publication at a venue like NeurIPS, and recommend acceptance.
test
[ "sv9iU74YcuQ", "lA1-Sw8kbka", "uAs_qbxhWOV", "z1n-isHR0O6", "RyqUPMjuF5Y", "_yHwgUiD7lp", "KI96ma2rte5", "BZLaKdwDPOF", "zUK9wSXKLjP", "2GxsxdH-7QD", "CYJbru2MZhc", "spw63n7_J4s", "28RWNDbJr1K", "rtUIEG_Fkd5", "ogvDmojDCg", "7cp_yPurg97", "uNqjHklrrRI", "KZGq2gOccol", "1WI41ObFTc...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_rev...
[ " Thank you very much for your feedback. We appreciate your clarification of the concern and positive comments on our results. We will do our best to improve the clarity in the revision to make it understandable to the broader community unfamiliar with the replica method as much as possible.", " Thank you for res...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 7 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "lA1-Sw8kbka", "z1n-isHR0O6", "aoxm8TcyqXA", "uAs_qbxhWOV", "nips_2021_yTXtUSV-gk4", "KI96ma2rte5", "zUK9wSXKLjP", "spw63n7_J4s", "28RWNDbJr1K", "KZGq2gOccol", "1WI41ObFTc", "ogvDmojDCg", "7cp_yPurg97", "nips_2021_yTXtUSV-gk4", "XvxxSd01Aug", "OBUNiMNrHT", "nips_2021_yTXtUSV-gk4", ...
nips_2021_EvhsTX6GMyM
Conformal Prediction using Conditional Histograms
This paper develops a conformal method to compute prediction intervals for non-parametric regression that can automatically adapt to skewed data. Leveraging black-box machine learning algorithms to estimate the conditional distribution of the outcome using histograms, it translates their output into the shortest prediction intervals with approximate conditional coverage. The resulting prediction intervals provably have marginal coverage in finite samples, while asymptotically achieving conditional coverage and optimal length if the black-box model is consistent. Numerical experiments with simulated and real data demonstrate improved performance compared to state-of-the-art alternatives, including conformalized quantile regression and other distributional conformal prediction approaches.
accept
The paper presents an original contribution for conformal prediction with optimized bandwidth. The paper constructs conformal confidence sets that have finite-sample marginal coverage and large-sample conditional coverage while being as narrow as possible. This, I believe, will be a valuable addition to the current conformal prediction / regression literature, as confirmed by the reviewers.
train
[ "u5U3iPj_NMk", "6AnlYA3EaUH", "IPyeeGZw25D", "IX5AXfmDULU", "NhVj2MOGpIc", "oqz2m-Q1pl", "TlT9niBSLy", "7b9nqW_4D05", "Kqyglv86Xke", "VPD5J6Xwo0r", "ulQbJqhYDI3", "3gmS5IhNlQ", "F3gJsZUVtmf", "WdLi3rQ1m1", "7M9urTnc4Pg", "-Pw0iD1I2A_", "6Q0EddXbPXi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarifications; they were very helpful!", " Thank you for the helpful clarifications!", "This paper introduces a new method to construct conformal prediction intervals based on histograms of the conditional distribution of an outcome variable. Given a histogram of the conditional distributio...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7, 9 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 3, 5 ]
[ "7b9nqW_4D05", "Kqyglv86Xke", "nips_2021_EvhsTX6GMyM", "NhVj2MOGpIc", "oqz2m-Q1pl", "3gmS5IhNlQ", "F3gJsZUVtmf", "6Q0EddXbPXi", "-Pw0iD1I2A_", "7M9urTnc4Pg", "IPyeeGZw25D", "WdLi3rQ1m1", "nips_2021_EvhsTX6GMyM", "nips_2021_EvhsTX6GMyM", "nips_2021_EvhsTX6GMyM", "nips_2021_EvhsTX6GMyM",...
nips_2021_ek0RuhPoGiD
Contrastive Graph Poisson Networks: Semi-Supervised Learning with Extremely Limited Labels
Graph Neural Networks (GNNs) have achieved remarkable performance in the task of semi-supervised node classification. However, most existing GNN models require sufficient labeled data for effective network training. Their performance can be seriously degraded when labels are extremely limited. To address this issue, we propose a new framework termed Contrastive Graph Poisson Networks (CGPN) for node classification under extremely limited labeled data. Specifically, our CGPN derives from variational inference; integrates a newly designed Graph Poisson Network (GPN) to effectively propagate the limited labels to the entire graph and a normal GNN, such as Graph Attention Network, that flexibly guides the propagation of GPN; applies a contrastive objective to further exploit the supervision information from the learning process of GPN and GNN models. Essentially, our CGPN can enhance the learning performance of GNNs under extremely limited labels by contrastively propagating the limited labels to the entire graph. We conducted extensive experiments on different types of datasets to demonstrate the superiority of CGPN.
accept
This paper addresses the issue of doing label propagation on a large graph where ground truth labels are extremely sparse. The authors propose a hybrid strategy using Poisson networks and GNNs to learn a label propagation rule. The reviewers agree that this is a well motivated and timely paper; however, the original draft was light on experiments and the reviewers wanted to see a range of additional studies. Several of these studies were provided during the rebuttal phase. I have reviewed these studies myself, and I feel they add some insights to the paper and should be added in the camera ready. There is still one remaining reject vote, but I think the criticisms were largely addressed in the rebuttal. One criticism in particular, that the authors did not spend enough time introducing the ELBO, is reasonable but I do not consider it a fatal flaw.
train
[ "0k_dGCX8W7c", "K_wFmN2zrq", "JA7Ld36d6ms", "tQt4spZgoeY", "znosZgcqT6", "Hbt5OJR2i7I", "Blr5QBBi-su" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer u74h,\n\nThank you again for your constructive suggestions. \n\nYour comments have enlightened us to think deeper and made our work more solid than before. We have carefully taken all the comments into consideration in the final version by providing detailed explanations and conducting more comprehe...
[ -1, -1, -1, -1, 7, 4, 6 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "Hbt5OJR2i7I", "Hbt5OJR2i7I", "Blr5QBBi-su", "znosZgcqT6", "nips_2021_ek0RuhPoGiD", "nips_2021_ek0RuhPoGiD", "nips_2021_ek0RuhPoGiD" ]
nips_2021_sO4tOk2lg9I
Collaborative Uncertainty in Multi-Agent Trajectory Forecasting
Uncertainty modeling is critical in trajectory-forecasting systems for both interpretation and safety reasons. To better predict the future trajectories of multiple agents, recent works have introduced interaction modules to capture interactions among agents. This approach leads to correlations among the predicted trajectories. However, the uncertainty brought by such correlations is neglected. To fill this gap, we propose a novel concept, collaborative uncertainty (CU), which models the uncertainty resulting from the interaction module. We build a general CU-based framework to make a prediction model learn the future trajectory and the corresponding uncertainty. The CU-based framework is integrated as a plugin module to current state-of-the-art (SOTA) systems and deployed in two special cases based on multivariate Gaussian and Laplace distributions. In each case, we conduct extensive experiments on two synthetic datasets and two public, large-scale benchmarks of trajectory forecasting. The results are promising: 1) The results on synthetic datasets show that the CU-based framework allows the model to nicely rebuild the ground-truth distribution. 2) The results on trajectory forecasting benchmarks demonstrate that the CU-based framework steadily helps SOTA systems improve their performance. Specifically, the proposed CU-based framework helps VectorNet improve by 57 cm regarding Final Displacement Error on the nuScenes dataset. 3) The visualization results of CU illustrate that the value of CU is highly related to the amount of interactive information among agents.
accept
All reviewers recommend accepting this paper. The work addresses the issue of estimating joint uncertainties between actors in a motion forecasting setting. Experiments are provided on some commonly used benchmarks for motion forecasting. Reviewers generally felt the paper was clearly written and the experiments were sufficient. The AC recommends acceptance.
train
[ "oaA6L9rA5a1", "Ki9JLbQKYF", "gk4tLtlM0-", "LpyiSt5LZth", "_rYYZSvnGZl", "XdK7qAMr2l9", "HtjuSPtFnUB", "KN-lVvHR_f", "LY9v2amD1D5", "KtRVqkkx6eq" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " After reading the response, I have decided to keep my current score. The response has not addressed my concerns about comparisons with generative models and analysis of multiple agents (>= 3). I generally think the concept proposed by the paper is interesting, although I have some doubt about its usefulness compa...
[ -1, -1, 6, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "XdK7qAMr2l9", "LpyiSt5LZth", "nips_2021_sO4tOk2lg9I", "gk4tLtlM0-", "KN-lVvHR_f", "KtRVqkkx6eq", "gk4tLtlM0-", "LY9v2amD1D5", "nips_2021_sO4tOk2lg9I", "nips_2021_sO4tOk2lg9I" ]
nips_2021_swdfQTe_X9
Network-to-Network Regularization: Enforcing Occam's Razor to Improve Generalization
What makes a classifier have the ability to generalize? There have been a lot of important attempts to address this question, but a clear answer is still elusive. Proponents of complexity theory find that the complexity of the classifier's function space is key to deciding generalization, whereas other recent work reveals that classifiers which extract invariant feature representations are likely to generalize better. Recent theoretical and empirical studies, however, have shown that even within a classifier's function space, there can be significant differences in the ability to generalize. Specifically, empirical studies have shown that among functions which have a good training data fit, functions with lower Kolmogorov complexity (KC) are likely to generalize better, while the opposite is true for functions of higher KC. Motivated by these findings, we propose, in this work, a novel measure of complexity called Kolmogorov Growth (KG), which we use to derive new generalization error bounds that only depend on the final choice of the classification function. Guided by the bounds, we propose a novel way of regularizing neural networks by constraining the network trajectory to remain in the low-KG zone during training. Minimizing KG while learning is akin to applying Occam's razor to neural networks. The proposed approach, called network-to-network regularization, leads to clear improvements in the generalization ability of classifiers. We verify this for three popular image datasets (MNIST, CIFAR-10, CIFAR-100) across varying training data sizes. Empirical studies find that conventional training of neural networks, unlike network-to-network regularization, leads to networks of high KG and lower test accuracies. Furthermore, we present the benefits of N2N regularization in the scenario where the training data labels are noisy. Using N2N regularization, we achieve competitive performance on MNIST, CIFAR-10 and CIFAR-100 datasets with corrupted training labels, significantly improving network performance compared to standard cross-entropy baselines in most cases. These findings illustrate the many benefits obtained from imposing a function complexity prior like Kolmogorov Growth during the training process.
accept
This paper introduces a novel measure of the complexity of a class of functions. Reviewers agreed that the proposal was novel and theoretically sound. Reviewers also felt that experiments exploring regularization using (a proxy for) Kolmogorov Growth were convincing. One reviewer increased their score, and another increased their confidence in their (accept) score during discussion, both of which I take as very positive signs. Reviewers had specific actionable feedback in their reviews, which the authors should incorporate into their manuscript before the camera ready.
train
[ "MLYn2dYjuQP", "usPfE8AStl", "ZvcEnJ1ZelX", "zSvfxEYuSjm", "e8ykoIcgjsY", "oADD9SCCqnZ", "xYUxjpfA59a", "Xpa2itDQWUU", "l76-LtUyAdy", "O_OHabQb4g7", "7vxCUWsoshk", "gTa2ORfBPY", "SWngY3YR1Oa", "3ZDY52z65za" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " This looks good. Please make sure this and other changes are incorporated in the final version if the paper is accepted.", " We thank the reviewer for the follow-up queries and suggestions. Here are our responses to the same. All changes will appear in the final version of the paper.\n\n1. We used the single-st...
[ -1, -1, -1, 7, -1, -1, 7, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "e8ykoIcgjsY", "ZvcEnJ1ZelX", "O_OHabQb4g7", "nips_2021_swdfQTe_X9", "Xpa2itDQWUU", "gTa2ORfBPY", "nips_2021_swdfQTe_X9", "l76-LtUyAdy", "xYUxjpfA59a", "zSvfxEYuSjm", "3ZDY52z65za", "SWngY3YR1Oa", "nips_2021_swdfQTe_X9", "nips_2021_swdfQTe_X9" ]
nips_2021_Kr6jWI4PSRd
Generalized and Discriminative Few-Shot Object Detection via SVD-Dictionary Enhancement
Few-shot object detection (FSOD) aims to detect new objects based on few annotated samples. To alleviate the impact of few samples, enhancing the generalization and discrimination abilities of detectors on new objects plays an important role. In this paper, we explore employing Singular Value Decomposition (SVD) to boost both the generalization and discrimination abilities. In specific, we propose a novel method, namely, SVD-Dictionary enhancement, to build two separated spaces based on the sorted singular values. Concretely, the eigenvectors corresponding to larger singular values are used to build the generalization space in which localization is performed, as these eigenvectors generally suppress certain variations (e.g., the variation of styles) and contain intrinsic characteristics of objects. Meanwhile, since the eigenvectors corresponding to relatively smaller singular values may contain richer category-related information, we can utilize them to build the discrimination space in which classification is performed. Dictionary learning is further leveraged to capture high-level discriminative information from the discrimination space, which is beneficial for improving detection accuracy. In the experiments, we separately verify the effectiveness of our method on PASCAL VOC and COCO benchmarks. Particularly, for the 2-shot case in VOC split1, our method significantly outperforms the baseline by 6.2%. Moreover, visualization analysis shows that our method is instrumental for FSOD.
accept
This paper offers an SVD-based decomposition of a learned representation, factoring an embedding into components that are optimal for localization vs. description. On balance, the reviewers find the paper acceptable, with a majority of reviewers, including the most knowledgeable reviewer in the area, supporting acceptance due to the novelty of the method and the improved performance. The AC concurs with these reviewers.
train
[ "fSSklo6MAhH", "D7uHLz8hzqB", "14GXhiDIeae", "hxi-SymHogX", "NUbTuD0rQEk", "lexEHrmk63k" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a solution for addressing the few-shot object detection problem through enhancing the generalization and discriminability capability, which relies on the Singular Value Decomposition (SVD). Moreover, the dictionary learning is employed to further enhance the high-level discriminative capability...
[ 6, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, 4, 4 ]
[ "nips_2021_Kr6jWI4PSRd", "lexEHrmk63k", "fSSklo6MAhH", "NUbTuD0rQEk", "nips_2021_Kr6jWI4PSRd", "nips_2021_Kr6jWI4PSRd" ]
nips_2021_CCvpHGFOzC3
Conditioning Sparse Variational Gaussian Processes for Online Decision-making
Wesley J. Maddox, Samuel Stanton, Andrew G. Wilson
accept
The paper contributes to the literature of sparse variational Gaussian processes applied to an online setting. The authors introduce the novel idea of treating inducing variables and inducing inputs at time t-1 as pseudo-data to be used at time t, leading to recursive expressions for the posterior distribution. The paper appears technically solid. The most critical reviewer does not oppose acceptance. Comments regarding clarity and typos highlighted by the reviewers can be addressed in the camera-ready version.
val
[ "FwAN2tcP4eY", "V8css9v1A3T", "2KJClMuCWe", "j6a0sKQi2x", "COOjY4jRgRR", "v-qVK1tul-", "LUblOfusBf", "YrLkLQJOKbl", "jRKxd_TaKFc", "4aqcfnNSFmc", "jrBzfdJsxiU", "sYg4HrR_k_r", "CCsRZAeD5bJ", "K-X5vyd3Io_", "umjqechqns8", "RP_y854dGl9", "cSdBbGqK_U" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Authors propose a novel method, named OVC for online variational conditioning of GPs under sparse approximations. Particularly, they are interested in the recursive update of the posterior GP distribution after receiving new data per time-step. The model can be used for both regression and classification tasks, ba...
[ 7, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 3, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_CCvpHGFOzC3", "CCsRZAeD5bJ", "LUblOfusBf", "LUblOfusBf", "jRKxd_TaKFc", "LUblOfusBf", "nips_2021_CCvpHGFOzC3", "4aqcfnNSFmc", "sYg4HrR_k_r", "K-X5vyd3Io_", "RP_y854dGl9", "jrBzfdJsxiU", "FwAN2tcP4eY", "cSdBbGqK_U", "nips_2021_CCvpHGFOzC3", "nips_2021_CCvpHGFOzC3", "nips_20...
nips_2021_RcbphT7qjTq
Spherical Motion Dynamics: Learning Dynamics of Normalized Neural Network using SGD and Weight Decay
In this paper, we comprehensively reveal the learning dynamics of normalized neural networks using Stochastic Gradient Descent (with momentum) and Weight Decay (WD), named Spherical Motion Dynamics (SMD). Most related works focus on studying the behavior of the "effective learning rate" in the "equilibrium" state, i.e., assuming the weight norm remains unchanged. However, their discussion on why this equilibrium can be reached is either absent or less convincing. Our work directly explores the cause of equilibrium, as a special state of SMD. Specifically, 1) we introduce the assumptions that can lead to the equilibrium state in SMD, and prove equilibrium can be reached in a linear rate regime under the given assumptions; 2) we propose the "angular update" as a substitute for the effective learning rate to depict the state of SMD, and derive the theoretical value of the angular update in the equilibrium state; 3) we verify our assumptions and theoretical results on various large-scale computer vision tasks including ImageNet and MSCOCO with standard settings. Experiment results show our theoretical findings agree well with empirical observations. We also show that the behavior of the angular update in SMD can produce interesting effects in the optimization of neural networks in practice.
accept
This is a clear accept. Aside from the high scores of all the reviewers, I have seen versions of this paper in the past on arXiv (it was inexplicably rejected at a previous submission, and it has been further improved here) and have really benefitted from this work's thinking. I think the contribution of this work is clear and important. I think we'd be foolish not to accept this work here this year.
train
[ "15YEeVkp2sA", "tlqUjhgOfN", "YAIycNw6aZX", "lFgMXmilya0", "iSPqnCN9f7j", "mHs2r49Zmq4", "N7XjoPeyfY", "aLC3vVYNuCF", "-ZzPzqjitsc", "flX8Rap7rWl", "bWz4pvtA62", "Tz8W-c-MTv", "IvkLFHhUVrj", "vpqk4brlJCZ", "b29-4kTX88l", "ZkNrUVlpsT", "DtqVyVfGy9s", "3qwiP3Xg6YG", "NzkxXisG_lS" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "...
[ "In this paper, the dynamics of scale-invariant (SI) weights of normalized neural networks trained with SGD(M) + WD are studied. It is known that the scale-invariant parameters' intrinsic domain is a unit hypersphere, which the authors have effectively taken into account by introducing the \"angular update\" (AU) t...
[ 7, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_RcbphT7qjTq", "YAIycNw6aZX", "lFgMXmilya0", "ZkNrUVlpsT", "N7XjoPeyfY", "nips_2021_RcbphT7qjTq", "-ZzPzqjitsc", "flX8Rap7rWl", "bWz4pvtA62", "Tz8W-c-MTv", "IvkLFHhUVrj", "vpqk4brlJCZ", "b29-4kTX88l", "b29-4kTX88l", "DtqVyVfGy9s", "15YEeVkp2sA", "mHs2r49Zmq4", "NzkxXisG_l...
nips_2021_zEuLFJCRk4X
Imitating Deep Learning Dynamics via Locally Elastic Stochastic Differential Equations
Understanding the training dynamics of deep learning models is perhaps a necessary step toward demystifying the effectiveness of these models. In particular, how do training data from different classes gradually become separable in their feature spaces when training neural networks using stochastic gradient descent? In this paper, we model the evolution of features during deep learning training using a set of stochastic differential equations (SDEs) that each corresponding to a training sample. As a crucial ingredient in our modeling strategy, each SDE contains a drift term that reflects the impact of backpropagation at an input on the features of all samples. Our main finding uncovers a sharp phase transition phenomenon regarding the intra-class impact: if the SDEs are locally elastic in the sense that the impact is more significant on samples from the same class as the input, the features of training data become linearly separable---meaning vanishing training loss; otherwise, the features are not separable, no matter how long the training time is. In the presence of local elasticity, moreover, an analysis of our SDEs shows the emergence of a simple geometric structure called neural collapse of the features. Taken together, our results shed light on the decisive role of local elasticity underlying the training dynamics of neural networks. We corroborate our theoretical analysis with experiments on a synthesized dataset of geometric shapes as well as on CIFAR-10.
accept
This paper studies the training dynamics of neural network by considering a stochastic differential equation model based on the local elasticity assumption of features of neural networks. The proposed model is justified by numerical experiments and sheds new light on the training dynamics. Overall the referees all find this paper interesting and novel, and hence the meta-reviewer would recommend acceptance of the paper as a poster.
val
[ "azoxhcmhl0z", "j2BDwiY_5ij", "aU1juUFtg1W", "MAp9liF2u9Y", "ZZJhErTNmSD", "BA4FuGvh9lz", "YUQPi7hmLh9", "kXbNBc_N8k3", "ecYozhVJuYM", "e-PlGxd6-zH", "DlCxQYYTZyC", "vXQo5lOT1fI", "cORKszotAVc", "kxHue40kbXW", "0g2HB2RmLkR", "6PwELcNBrLj" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a stochastic differential equation model based on the local elasticity assumption of features of neural networks. The local elasticity phenomenon says samples have greater influence to other samples in the same class than those in different classes. Starting from this assumption, a system of l...
[ 7, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ 5, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_zEuLFJCRk4X", "6PwELcNBrLj", "ecYozhVJuYM", "BA4FuGvh9lz", "nips_2021_zEuLFJCRk4X", "kXbNBc_N8k3", "ecYozhVJuYM", "nips_2021_zEuLFJCRk4X", "DlCxQYYTZyC", "kXbNBc_N8k3", "0g2HB2RmLkR", "6PwELcNBrLj", "azoxhcmhl0z", "nips_2021_zEuLFJCRk4X", "nips_2021_zEuLFJCRk4X", "nips_2021_...
nips_2021_VD3TMzyxKK
Probabilistic Forecasting: A Level-Set Approach
Large-scale time series panels have become ubiquitous over the last years in areas such as retail, operational metrics, IoT, and medical domain (to name only a few). This has resulted in a need for forecasting techniques that effectively leverage all available data by learning across all time series in each panel. Among the desirable properties of forecasting techniques, being able to generate probabilistic predictions ranks among the top. In this paper, we therefore present Level Set Forecaster (LSF), a simple yet effective general approach to transform a point estimator into a probabilistic one. By recognizing the connection of our algorithm to random forests (RFs) and quantile regression forests (QRFs), we are able to prove consistency guarantees of our approach under mild assumptions on the underlying point estimator. As a byproduct, we prove the first consistency results for QRFs under the CART-splitting criterion. Empirical experiments show that our approach, equipped with tree-based models as the point estimator, rivals state-of-the-art deep learning models in terms of forecasting accuracy.
accept
This work proposes a novel approach for transforming point estimators into probabilistic estimators. Overall, reviewers appreciate the novelty and significance of the work. However, they also made several important recommendations regarding the presentation and specifically the discussion of limitations. The authors should carefully consider reviewer feedback when working on their revision.
train
[ "-lSpVYjx4VK", "LpWY38Buhup", "AHw_iDBrRjE", "9p4WMgsEFOs", "721iLZ9G_1n", "Yk2DMSY43As", "yxPcOPzPc4", "LtDfETLKBLT" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors propose a general method to derive probabilistic / conditional distribution predictions for any given trained base prediction model f(x) - i.e., CDF of y|x, by using an approach somewhat related to quantile regression forests (QRFs). In this approach prediction function f(x_i) outputs are grouped with...
[ 7, -1, 7, -1, -1, -1, -1, 6 ]
[ 3, -1, 3, -1, -1, -1, -1, 2 ]
[ "nips_2021_VD3TMzyxKK", "AHw_iDBrRjE", "nips_2021_VD3TMzyxKK", "nips_2021_VD3TMzyxKK", "LtDfETLKBLT", "-lSpVYjx4VK", "AHw_iDBrRjE", "nips_2021_VD3TMzyxKK" ]
nips_2021_c3RKZas9am
Roto-translated Local Coordinate Frames For Interacting Dynamical Systems
Miltiadis Kofinas, Naveen Nagaraja, Efstratios Gavves
accept
As all reviewers agree, the paper discusses an interesting approach to the problem of learning complex dynamical systems, with notable experimental support. Although some reviewers had concerns about the technical part, the authors' responses have resolved those. The paper still has some weak points, especially in the presentation, but I expect the authors to improve the paper in the camera-ready by reflecting the discussion. Based on these, I recommend acceptance (poster) for this paper.
train
[ "ujb9fI7pANU", "z2GImQ5hC1J", "xp_aChcJjts", "jETVrkZgCaL", "KBHSBoWzncK", "hIUkFEc1o5x", "fLg3DEWBtgR", "d6wNl9KQCaI", "6G5lb3Dox1w", "xoS9Ceix1sm", "n_pphchgCVF", "by3QxPmM3I4", "9GEzwqyOb2b", "cT3Q2QjSzT", "jCB_rckAyTb", "lqCHQY0EZ1i" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your kind words and your effort, as well as for raising your score, it is much appreciated!", "This paper proposes a to use local coordinate system per object in a spatio-temporal graph (with a GNN) of multiple object in order to learn and predict the dynamics of the objects interaction....
[ -1, 7, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, 3, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "xp_aChcJjts", "nips_2021_c3RKZas9am", "9GEzwqyOb2b", "d6wNl9KQCaI", "hIUkFEc1o5x", "cT3Q2QjSzT", "nips_2021_c3RKZas9am", "6G5lb3Dox1w", "xoS9Ceix1sm", "n_pphchgCVF", "fLg3DEWBtgR", "lqCHQY0EZ1i", "z2GImQ5hC1J", "jCB_rckAyTb", "nips_2021_c3RKZas9am", "nips_2021_c3RKZas9am" ]
nips_2021_AIIzCpn_GJ
ParK: Sound and Efficient Kernel Ridge Regression by Feature Space Partitions
We introduce ParK, a new large-scale solver for kernel ridge regression. Our approach combines partitioning with random projections and iterative optimization to reduce space and time complexity while provably maintaining the same statistical accuracy. In particular, constructing suitable partitions directly in the feature space rather than in the input space, we promote orthogonality between the local estimators, thus ensuring that key quantities such as local effective dimension and bias remain under control. We characterize the statistical-computational tradeoff of our model, and demonstrate the effectiveness of our method by numerical experiments on large-scale datasets.
accept
The paper proposes an algorithm to scale the learning of a kernel-based model to the large-scale scenario using, as its core, a partition computed in the kernel feature space. While the paper goes beyond a pipeline of existing methods, it would gain a lot of strength if: - beyond the combination of existing methods, the amount of novelty of each part of this combination was discussed: how new is the feature space partitioning? How would it differ from a simple kernelized partitioning method with a clever (greedy) initialization? - the credit assignment within the proposed pipeline was made clear: what makes the method work? The partitioning? The Nystrom approximation? The optimisation procedure? This should be clarified. - it was discussed whether other acceleration methods dedicated to kernels, such as random features (Rahimi and Recht), would fit into the pipeline; if not, why? Even if the technical parts are okay, there is a lack of perspective on the work, preventing the reader from knowing which key parts make the proposed method effective.
train
[ "tbjiRE8iRHg", "ArJOg8qAlxV", "m-S-UTqlR8L", "7UEE-YTWu3V", "bNnMrGN99oY", "uRO32CXA9ZK", "aerFlHcD17T", "Z-2gK7o1bh6", "GK1gMWIkqh5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I have read author's responses and they partially answer my questions, but I believe the current version has not met the standard of NeurIPS, thus I keep my score unchanged.", "The author introduces a partitioning method for solving large-scale kernel ridge regression. Instead of working on the input space, the...
[ -1, 5, 5, -1, -1, -1, -1, 7, 5 ]
[ -1, 4, 3, -1, -1, -1, -1, 4, 4 ]
[ "GK1gMWIkqh5", "nips_2021_AIIzCpn_GJ", "nips_2021_AIIzCpn_GJ", "ArJOg8qAlxV", "m-S-UTqlR8L", "GK1gMWIkqh5", "Z-2gK7o1bh6", "nips_2021_AIIzCpn_GJ", "nips_2021_AIIzCpn_GJ" ]
nips_2021_mV4hBipdm5l
Scaling Gaussian Processes with Derivative Information Using Variational Inference
Misha Padidar, Xinran Zhu, Leo Huang, Jacob Gardner, David Bindel
accept
This paper addresses the problem of scalable inference in Gaussian process (GP) regression with derivative information for the general case where both the number of observations (N) and the dimensionality (D) are large. While previous work has addressed the large-N-low-D and low-N-large-D regimes independently, the paper proposes the use of inducing directional derivatives (which play a similar role to that of the inducing variables in standard variationally sparse GP models) rather than full derivatives to derive a scalable variational inference algorithm. A comprehensive set of experiments (on synthetic data, implicit surface reconstruction, training graph convolutional networks with Bayesian optimization, large-scale regression with derivative information, Rover trajectory planning and standard GP regression on UCI datasets) shows the benefits of the proposed approach with respect to previous methods. I believe the contribution of this paper to be significant to the NeurIPS/GP community as it develops a new method for settings where previous work cannot be applied. The reviewers raised several concerns regarding the choice of inducing directions and the use of different directions per inducing point. I believe the authors have addressed these successfully. A limitation of this work is the very small number of directional derivatives (p) used in the experiments (p=1, 2). As the complexity of the proposed algorithm scales as O(M^3 p^3), exploring larger p values will necessarily lead to fewer inducing variables (M).
train
[ "iqwqPH_OEP9", "NXO0dfu80XP", "e43ki780JE", "f0vW5-Aazmx", "yyBSBXu3cSG", "KQe0qHlb6ww", "Ei0jrgeQ_sF", "uquZxEU-kC" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for pointing out some areas that we agree would benefit from additional clarification. We address these points below, and will be sure to incorporate this discussion into the final paper.\n\n**Regarding the selection of p**\n\nAs with M for SVGP, in principle p should simply be set as high as possible w...
[ -1, -1, -1, -1, 7, 5, 5, 7 ]
[ -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Ei0jrgeQ_sF", "yyBSBXu3cSG", "uquZxEU-kC", "KQe0qHlb6ww", "nips_2021_mV4hBipdm5l", "nips_2021_mV4hBipdm5l", "nips_2021_mV4hBipdm5l", "nips_2021_mV4hBipdm5l" ]
nips_2021_ST1P270dwOE
On the Representation of Solutions to Elliptic PDEs in Barron Spaces
Ziang Chen, Jianfeng Lu, Yulong Lu
accept
All the reviewers agree that it is a very interesting paper in its field. I share the same opinion. The main contribution of this paper is to show that solutions of elliptic PDEs of the form \begin{equation} - \nabla \cdot (A \nabla u) + cu = f \end{equation} can be approximated by a two-layer neural network in some Sobolev norm. This kind of result is really important due to the increasing popularity of PDE solvers based on deep neural networks. The main assumption considered by the authors is that the coefficients $A, c$ and $f$ all belong to some Barron spaces, which have to be closed under multiplication. To obtain such a result, the authors build upon results on Barron spaces recently obtained in the literature and on the representation of solutions of elliptic PDEs as fixed points of a steepest descent scheme. While this result and the strategy of the proof may not seem really surprising, the paper is very well written and succeeds in gathering all the ingredients of the proofs smoothly. That being said, some points have been raised by the reviewers. I encourage the authors to address these in the final version of their paper.
train
[ "LbEPbC7Srme", "dMES6KT4JZo", "Mo7dZ5U8iZY", "eQBpcWx3zAX", "UXqJzf0zuDL", "N1X-L4NMOZY", "s9_lW2pXZ1", "XiPBxIoAFRt", "mH8KnyI-ZMa", "Jpxq4kULZUH", "3wAi-Wh-9gW" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your reply.", " Thank the author for the informative response. My concerns are resolved.", " We appreciate the reviewers for spending their time carefully read our manuscript and for their helpful and encouraging comments. We have responded to their detailed comments in separate replies. Hopefully,...
[ -1, -1, -1, -1, -1, -1, -1, 9, 7, 6, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "N1X-L4NMOZY", "eQBpcWx3zAX", "nips_2021_ST1P270dwOE", "Jpxq4kULZUH", "XiPBxIoAFRt", "3wAi-Wh-9gW", "mH8KnyI-ZMa", "nips_2021_ST1P270dwOE", "nips_2021_ST1P270dwOE", "nips_2021_ST1P270dwOE", "nips_2021_ST1P270dwOE" ]
nips_2021_GOnkx08Gm6
A/B Testing for Recommender Systems in a Two-sided Marketplace
Two-sided marketplaces are standard business models of many online platforms (e.g., Amazon, Facebook, LinkedIn), wherein the platforms have consumers, buyers or content viewers on one side and producers, sellers or content-creators on the other. Consumer side measurement of the impact of a treatment variant can be done via simple online A/B testing. Producer side measurement is more challenging because the producer experience depends on the treatment assignment of the consumers. Existing approaches for producer side measurement are either based on graph cluster-based randomization or on certain treatment propagation assumptions. The former approach results in low-powered experiments as the producer-consumer network density increases and the latter approach lacks a strict notion of error control. In this paper, we propose (i) a quantification of the quality of a producer side experiment design, and (ii) a new experiment design mechanism that generates high-quality experiments based on this quantification. Our approach, called UniCoRn (Unifying Counterfactual Rankings), provides explicit control over the quality of the experiment and its computation cost. Further, we prove that our experiment design is optimal to the proposed design quality measure. Our approach is agnostic to the density of the producer-consumer network and does not rely on any treatment propagation assumption. Moreover, unlike the existing approaches, we do not need to know the underlying network in advance, making this widely applicable to the industrial setting where the underlying network is unknown and challenging to predict a priori due to its dynamic nature. We use simulations to validate our approach and compare it against existing methods. We also deployed UniCoRn in an edge recommendation application that serves tens of millions of members and billions of edge recommendations daily.
accept
In this submission, the authors study a very interesting problem, i.e., how to measure treatment effects on the producer (or seller) side in a two-sided marketplace. This task has great application potential, and I therefore recommend accepting this submission. However, I share the major concern of reviewer neLM, namely whether the strong assumptions the authors make can be guaranteed; this could hurt the impact of this submission. Further, this submission can also be improved based on the comments from all the reviewers and the discussion between reviewers and authors. I hope the authors find these useful and make this submission a better one.
val
[ "epEMBw5cmJf", "ViKQmyiN9K", "lifuF10mxcW", "1gJbtphQokW", "-vVczyP5eQ", "WOsUE015HmT", "z96LtlCfrLO", "IVvBZV_x5ih", "IHbjgRh0CdC", "4yRkbj0v7Aj", "DkAfYo0CDyj", "zCmFB4SGxb7", "odzf2LU7NVr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response.", "This paper introduces unicorn, a method to estimate the effect of different recommender systems on the producers in a platform. This is more challenging than consumer-side A/B testing where a simple randomized trial suffices since a producer's content will be recommended across multi...
[ -1, 7, -1, -1, -1, -1, 5, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3, 4 ]
[ "4yRkbj0v7Aj", "nips_2021_GOnkx08Gm6", "-vVczyP5eQ", "DkAfYo0CDyj", "WOsUE015HmT", "IVvBZV_x5ih", "nips_2021_GOnkx08Gm6", "ViKQmyiN9K", "odzf2LU7NVr", "zCmFB4SGxb7", "z96LtlCfrLO", "nips_2021_GOnkx08Gm6", "nips_2021_GOnkx08Gm6" ]
nips_2021_bYi_2708mKK
Retiring Adult: New Datasets for Fair Machine Learning
Although the fairness community has recognized the importance of data, researchers in the area primarily rely on UCI Adult when it comes to tabular data. Derived from a 1994 US Census survey, this dataset has appeared in hundreds of research papers where it served as the basis for the development and comparison of many algorithmic fairness interventions. We reconstruct a superset of the UCI Adult data from available US Census sources and reveal idiosyncrasies of the UCI Adult dataset that limit its external validity. Our primary contribution is a suite of new datasets derived from US Census surveys that extend the existing data ecosystem for research on fair machine learning. We create prediction tasks relating to income, employment, health, transportation, and housing. The data span multiple years and all states of the United States, allowing researchers to study temporal shift and geographic variation. We highlight a broad initial sweep of new empirical insights relating to trade-offs between fairness criteria, performance of algorithmic interventions, and the role of distribution shift based on our new datasets. Our findings inform ongoing debates, challenge some existing narratives, and point to future research directions.
accept
Thanks for the strong submission. The reviewers were unanimous that the paper provided a valuable contribution, and disagreed only on whether it was submitted to the right track. Our job is to provide platform for good work, and since there was no disagreement on the quality of the work, we decided to accept.
val
[ "6T6wf68x-J6", "Nhm4UlFxyjr", "9R9h6N-3ZQW", "fEJwIR7UnI", "IqKhnoDY72d", "dyFqK4vIFXq", "Jze-hOSQVjr", "WduE-yU_hoy", "xUKeleQ-q2C", "FM5GVvRMrAP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors focus on the adult dataset, which was originated from the US census data and has records of individuals with a binary label of whether the income is higher or lower than 50k. The adult dataset has been one of the gold standard datasets in the algorithmic fair ML community, especially to test for the pe...
[ 7, -1, 7, 8, -1, -1, -1, -1, -1, 8 ]
[ 5, -1, 4, 4, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_bYi_2708mKK", "Jze-hOSQVjr", "nips_2021_bYi_2708mKK", "nips_2021_bYi_2708mKK", "WduE-yU_hoy", "9R9h6N-3ZQW", "FM5GVvRMrAP", "fEJwIR7UnI", "6T6wf68x-J6", "nips_2021_bYi_2708mKK" ]
nips_2021_7_t4Gvubkeo
Cardinality constrained submodular maximization for random streams
Paul Liu, Aviad Rubinstein, Jan Vondrak, Junyao Zhao
accept
The reviewers all agree that the paper makes a substantial improvement over previous work in terms of memory requirements for an important problem.
train
[ "wWlNkReEvKg", "ZKXmL4uAecs", "sMXMG5sciVG", "_EfpeIlfrQg", "JSqP7XYpp3L", "NGaMQ6L4wzV", "gzBfHrqx7xQ", "LJkF48ZSIEq", "qBqYkOWs1S3" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Just acknowledging I've read the response. :)\n\nReview remains unchanged, of course.", " We thank the reviewer for their review! \n\nAs per the reviewer's suggestion, we plan to provide additional comparisons to [NTM+18] in the final version. As noted, we currently do not perform experiments on SieveStreaming ...
[ -1, -1, -1, -1, -1, 7, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "sMXMG5sciVG", "gzBfHrqx7xQ", "qBqYkOWs1S3", "LJkF48ZSIEq", "NGaMQ6L4wzV", "nips_2021_7_t4Gvubkeo", "nips_2021_7_t4Gvubkeo", "nips_2021_7_t4Gvubkeo", "nips_2021_7_t4Gvubkeo" ]
nips_2021_7Da3azsjjlh
Self-Instantiated Recurrent Units with Dynamic Soft Recursion
While standard recurrent neural networks explicitly impose a chain structure on different forms of data, they do not have an explicit bias towards recursive self-instantiation where the extent of recursion is dynamic. Given diverse and even growing data modalities (e.g., logic, algorithmic input and output, music, code, images, and language) that can be expressed in sequences and may benefit from more architectural flexibility, we propose the self-instantiated recurrent unit (Self-IRU) with a novel inductive bias towards dynamic soft recursion. On one hand, the Self-IRU is characterized by recursive self-instantiation via its gating functions, i.e., gating mechanisms of the Self-IRU are controlled by instances of the Self-IRU itself, which are repeatedly invoked in a recursive fashion. On the other hand, the extent of the Self-IRU recursion is controlled by gates whose values are between 0 and 1 and may vary across the temporal dimension of sequences, enabling dynamic soft recursion depth at each time step. The architectural flexibility and effectiveness of our proposed approach are demonstrated across multiple data modalities. For example, the Self-IRU achieves state-of-the-art performance on the logical inference dataset [Bowman et al., 2014] even when comparing with competitive models that have access to ground-truth syntactic information.
accept
This paper introduces a self-instantiated recurrent unit that is related to the LSTM but with additional capabilities for soft recursion. The authors evaluate their method on a range of tasks including image classification, logical inference, sorting, tree traversal, music modeling, semantic parsing, and code generation. The reviewers and I agree that the evaluations are quite extensive. It's also clear the method performs well. The model presentation is mostly clear, but there were still a number of queries and points of confusion that popped up in the back-and-forth (relation to stacked LSTM, why only hidden and output gates are determined recursively, which parameters are shared, etc.). The author rebuttal helped in this regard, but these points weren't completely resolved. I agree with R-VoSb that the ablation analysis isn't as revealing as it should have been in illuminating the architecture choices. Also, I found that section 3.7 wasn't very developed and didn't add much understanding. I wish it was clearer why and how the architecture works. That said, I recommend acceptance, based on the strength of the experiments and the cleverness of the architecture. I hope the exposition will be further improved in the final version, as there are many helpful comments from the reviewers.
test
[ "NZ32LNmUM2p", "_EgLtbjB_E6", "N3XvNM7LOp8", "2670LaV5EJp", "JTS3t63c5U", "MYxZDhc-Eos", "hmexgQv5bLT", "RT8gsUctYH", "dziZKJCm1aZ", "x_y4RejtRzw", "OOiUDqHNKLP", "KSnrmlHiwJ9", "hsUhjYrMto4", "EXVsk3jcsM2", "gWxnYZpeR0z", "XPJXD1BlJsH" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Please let us know if our response on Aug 10 addresses your questions or there is anything you'd like to discuss further. Thanks.", "This paper proposes a new architecture for sequence modelling. The idea is roughly: rather than have stacked layers as in a stacked LSTM, to have an adaptive recursion at each tim...
[ -1, 7, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, 3, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "EXVsk3jcsM2", "nips_2021_7Da3azsjjlh", "2670LaV5EJp", "hsUhjYrMto4", "hsUhjYrMto4", "RT8gsUctYH", "nips_2021_7Da3azsjjlh", "x_y4RejtRzw", "_EgLtbjB_E6", "OOiUDqHNKLP", "KSnrmlHiwJ9", "hmexgQv5bLT", "XPJXD1BlJsH", "gWxnYZpeR0z", "nips_2021_7Da3azsjjlh", "nips_2021_7Da3azsjjlh" ]
nips_2021_SkU3kbKTrb6
Sparse Uncertainty Representation in Deep Learning with Inducing Weights
Hippolyt Ritter, Martin Kukla, Cheng Zhang, Yingzhen Li
accept
This paper comes up with a variational inference strategy for learning a factored weight matrix, which they can then sample over efficiently using Matheron's rule. They need <25% of the parameters of the original deep ensemble but are competitive across metrics. Four reviews were given for this paper with scores 6, 7, 4, 7. Thus there was a general consensus to accept, with one reviewer arguing for a reject. In general the reviewers found the paper technically strong, novel and important in terms of the problem it addresses. One of the major criticisms of the paper is that it is too dense and the reader is overloaded with technical background. All reviewers seem to agree with this point (i.e. in discussion: "I wholly agree .. that the presentation of this paper is its major weakness"), but differ quite significantly in how important they think this is. Some of the reviewers applaud the authors for striving for completeness, but seem to have found it a little excessive. In particular, the reviewers didn't think presenting both the function-space and weight-space perspectives was necessary. Thus the authors are strongly recommended to move one of these to the appendix to make space for other parts of the paper. There's some criticism of the priors used - the reviewer felt the empirical comparison wasn't apples to apples because the priors were different between their method and the baselines. This seems reasonable, but the choice of prior seems core to the proposed method (it needs to be decomposable due to Matheron's rule). Thus it would seem acceptable to not require the authors to change the prior of their method. The final criticism is that the method is slower than the original ensemble, so sparsity doesn't help - though there are still (in theory) considerable memory savings. Again, the reviewers differed in their assessment of how important this issue is. The majority felt that this is a strong paper. Considering that the major cited weakness is in the presentation of the technical background, the recommended decision is to accept the paper with the ask that the authors rework the technical background to make it more accessible. In particular, the reviewers recommended that the function-space perspective could be moved to the appendix in the camera ready. This would free up space to make the paper less mathematically dense and allow more space for other parts of the paper.
train
[ "kOazrkqQxNm", "SifcwmFWG9q", "kvEgwLr3Qli", "YKuFhosHgS", "p6SDsOUJDof", "c_dVkDH34VM", "IV7f31dkHl", "XKODFywqL4", "Ejxx1h21bEM", "FUEbNaPZio", "dBp2kH57ugR", "nwsvR6Tgdru", "oj2tbU1YcKb", "eVnYGRi9-a1", "yo_KkH1LJYJ", "NWkBmz4N6sn", "XClxb8W6j1-", "GW1osFEmh85" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your clarifications. I do not have any other questions and I will keep my score of Accept for this paper.", " Thank you for the clarifications overall. I'm still unconvinced by the rebuttal and think that there are a couple of outstanding issues with the paper.\n\nFirst, the presentation is close ...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 4 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "yo_KkH1LJYJ", "FUEbNaPZio", "p6SDsOUJDof", "nips_2021_SkU3kbKTrb6", "c_dVkDH34VM", "SifcwmFWG9q", "GW1osFEmh85", "eVnYGRi9-a1", "FUEbNaPZio", "GW1osFEmh85", "XClxb8W6j1-", "nips_2021_SkU3kbKTrb6", "GW1osFEmh85", "YKuFhosHgS", "NWkBmz4N6sn", "nips_2021_SkU3kbKTrb6", "nips_2021_SkU3kb...
nips_2021_Krtz-LgTYIt
Scalable Inference of Sparsely-changing Gaussian Markov Random Fields
Salar Fattahi, Andres Gomez
accept
This paper considers the task of estimating a sparse Gaussian graphical model with a graph that changes sparsely over time. A nonconvex optimization formulation is shown to be efficiently solvable and to achieve good performance both empirically and theoretically. This is an important problem and the paper makes worthwhile progress.
train
[ "m-mJHGfFchW", "74EavE8eQx", "pumxKPdQfw1", "Zg24WXlMWhd", "BLK54QDdVfa", "623PfkuEdbr", "NnXnaUPis5", "TuStWPLNXPR", "hq5xTW9QN-H", "nsFMOkM-vBE", "pf_WKD3Jeqe" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors consider the problem of learning Sparsely changing Gaussian Markov Random Fields. They pose the problem as a non-convex optimization problem that can be solved exactly. The authors provide statistical guarantees. Furthermore, they also provide a computationally efficient algorithm to rec...
[ 7, -1, -1, -1, -1, -1, -1, 6, 8, 6, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "nips_2021_Krtz-LgTYIt", "pf_WKD3Jeqe", "nsFMOkM-vBE", "hq5xTW9QN-H", "m-mJHGfFchW", "TuStWPLNXPR", "nips_2021_Krtz-LgTYIt", "nips_2021_Krtz-LgTYIt", "nips_2021_Krtz-LgTYIt", "nips_2021_Krtz-LgTYIt", "nips_2021_Krtz-LgTYIt" ]
nips_2021_jScy7BjbZeQ
Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation
Large pretrained language models (LMs) like BERT have improved performance in many disparate natural language processing (NLP) tasks. However, fine tuning such models requires a large number of training examples for each target task. Simultaneously, many realistic NLP problems are "few shot", without a sufficiently large training set. In this work, we propose a novel conditional neural process-based approach for few-shot text classification that learns to transfer from other diverse tasks with rich annotation. Our key idea is to represent each task using gradient information from a base model and to train an adaptation network that modulates a text classifier conditioned on the task representation. While previous task-aware few-shot learners represent tasks by input encoding, our novel task representation is more powerful, as the gradient captures input-output relationships of a task. Experimental results show that our approach outperforms traditional fine-tuning, sequential transfer learning, and state-of-the-art meta learning approaches on a collection of diverse few-shot tasks. We further conducted analysis and ablations to justify our design choices.
accept
The reviewers all agreed that that the method of encoding gradients to obtain a task embedding and using FILM to adapt the text-classification network to that task. Many of the initial concerns raised primarily by reviewer WPsh were addressed in the rebuttal through additional experiments, leading the reviewer to raise their score by 1. In general, the design choices and claims are well supported by ablation studies and comparisons against various baselines on various datasets. The writing is also structured and of decent quality. That said, the authors need to address the new "puzzling" issues raised by reviewer WPsh's updated review on the fine-tuning and comparison to other methods, especially as they cast doubt on the validity of the reported numbers.
train
[ "QFgjYrD10ju", "q2vBh4o2Hj", "dwO1CgohfgX", "g2b2M9jbJL", "OaVIG7nrOu", "Nej4flbLl-F", "TX-w-GqTneP", "oyi2nxParmq", "CbZUQemL2s", "2qrxW_qK-Vb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response to my review. I have read it and other reviews and I'd like to keep my score based on the response. Please include the clarifications and improve the presentation in the next version of the paper. Thanks!", "The paper considers the problem of few-shot text classification. It builds on...
[ -1, 5, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, 5, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "Nej4flbLl-F", "nips_2021_jScy7BjbZeQ", "OaVIG7nrOu", "q2vBh4o2Hj", "CbZUQemL2s", "oyi2nxParmq", "2qrxW_qK-Vb", "nips_2021_jScy7BjbZeQ", "nips_2021_jScy7BjbZeQ", "nips_2021_jScy7BjbZeQ" ]
nips_2021_5BnaKeEwuYk
Learnability of Linear Thresholds from Label Proportions
Rishi Saket
accept
This paper studies the problem of learning linear threshold functions (LTFs) in the following model, termed "learning from label proportions" (LLP). (Recall that in the standard statistical learning setting, one observes i.i.d. labeled examples (x, y).) In the LLP model considered in this work, in addition to labeled examples (x, y), one observes *pairs* of examples together with their *average* label. The main results of the paper are: (1) an efficient proper learner satisfying at least a 2/5 fraction of the pairs, and (2) an NP-hardness result for proper learning, ruling out approximation better than 1/2. For a special case of the problem, the authors show that 1/2 is in fact the optimal approximation ratio for efficient proper learning. The algorithmic result uses an SDP, while the hardness result uses a PCP-style reduction from Label Cover. Overall, the reviewers agreed that this is a technically worthy and well-written theory paper that should be accepted to NeurIPS.
train
[ "VSOO4q1qG1M", "E-Pzk3DdU9", "XTQNdkA822V", "H8cSVcWHL9K", "XchErc1AYPo", "NCzezZ1E8Yo", "9yOGim1akwU", "VQG4nsBRBVE", "BiiC0h9r_BD" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work studies the problem of learning from label proportions (LLP), that is given bags of vectors and given the average of the positive labels in each bag, we need to find an LTF that satisfies most of the bags. The authors study the case where the bags contain at most 2 vectors. They study two cases, one wher...
[ 7, 7, -1, -1, -1, -1, 7, 7, 5 ]
[ 4, 4, -1, -1, -1, -1, 2, 2, 1 ]
[ "nips_2021_5BnaKeEwuYk", "nips_2021_5BnaKeEwuYk", "H8cSVcWHL9K", "9yOGim1akwU", "VQG4nsBRBVE", "BiiC0h9r_BD", "nips_2021_5BnaKeEwuYk", "nips_2021_5BnaKeEwuYk", "nips_2021_5BnaKeEwuYk" ]
nips_2021_y7l4h5xtaqQ
A variational approximate posterior for the deep Wishart process
Recent work introduced deep kernel processes as an entirely kernel-based alternative to NNs (Aitchison et al. 2020). Deep kernel processes flexibly learn good top-layer representations by alternately sampling the kernel from a distribution over positive semi-definite matrices and performing nonlinear transformations. A particular deep kernel process, the deep Wishart process (DWP), is of particular interest because its prior can be made equivalent to deep Gaussian process (DGP) priors for kernels that can be expressed entirely in terms of Gram matrices. However, inference in DWPs has not yet been possible due to the lack of sufficiently flexible distributions over positive semi-definite matrices. Here, we give a novel approach to obtaining flexible distributions over positive semi-definite matrices by generalising the Bartlett decomposition of the Wishart probability density. We use this new distribution to develop an approximate posterior for the DWP that includes dependency across layers. We develop a doubly-stochastic inducing-point inference scheme for the DWP and show experimentally that inference in the DWP gives improved performance over doing inference in a DGP with the equivalent prior.
accept
The authors propose a variational approximation scheme for Deep Kernel Processes (DKP). Building on previous work, the DKP is an alternative representation of the Deep Gaussian Process (DGP), where the representation is over Gramm matrices rather than over function evaluations. Reviewers are in broad agreement that this work is a solid contribution in the development of DKPs, and in functional representation of Deep models in general. One reviewer raised a concern about the lack of theoretical guarantees that the new methodology brings over the DGP representation: in the rebuttal, the authors provide convincing arguments that the improvements in the ELBO due to the representation are directly connected to PAC bounds - I'd like to see a little discussion on this in the camera-ready version.
train
[ "GSNAZCqv0Lz", "QImlYnYOtk0", "XWLVyl4LHP", "24gYTLQAcTr", "Fdchxc_EeWj", "2QDKaeyMM-y", "vc0iFh0GfMl", "WMIDVJZE9Hu", "Z7c3U_aDF-E", "ysCA7pOZW4-", "bfLbMklY6rs" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors introduce effective inference strategies for deep Wishart processes (DWP). This class of deep models is of interest because it moves away from the featurized representation implicit in most deep models to one that is fully kernelized. Inference relies on a doubly stochastic inducing-point scheme famili...
[ 7, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "nips_2021_y7l4h5xtaqQ", "XWLVyl4LHP", "WMIDVJZE9Hu", "GSNAZCqv0Lz", "GSNAZCqv0Lz", "Z7c3U_aDF-E", "bfLbMklY6rs", "ysCA7pOZW4-", "nips_2021_y7l4h5xtaqQ", "nips_2021_y7l4h5xtaqQ", "nips_2021_y7l4h5xtaqQ" ]
nips_2021_jYzSTzvDP3p
Neural Pseudo-Label Optimism for the Bank Loan Problem
We study a class of classification problems best exemplified by the bank loan problem, where a lender decides whether or not to issue a loan. The lender only observes whether a customer will repay a loan if the loan is issued to begin with, and thus modeled decisions affect what data is available to the lender for future decisions. As a result, it is possible for the lender's algorithm to "get stuck" with a self-fulfilling model. This model never corrects its false negatives, since it never sees the true label for rejected data, thus accumulating infinite regret. In the case of linear models, this issue can be addressed by adding optimism directly into the model predictions. However, there are few methods that extend to the function approximation case using Deep Neural Networks. We present Pseudo-Label Optimism (PLOT), a conceptually and computationally simple method for this setting applicable to DNNs. PLOT adds an optimistic label to the subset of decision points the current model is deciding on, trains the model on all data so far (including these points along with their optimistic labels), and finally uses the resulting optimistic model for decision making. PLOT achieves competitive performance on a set of three challenging benchmark problems, requiring minimal hyperparameter tuning. We also show that PLOT satisfies a logarithmic regret guarantee, under a Lipschitz and logistic mean label model, and under a separability condition on the data.
accept
The scores from four reviews are 4, 5, 5, and 7. The authors have a persuasive reply to the review with the lowest score, so this score is too harsh. As the authors say, the criticisms in one of the reviews with a 5 score are rather superficial. Overall, the paper is on an important topic and is correct and novel, so I lean towards publication. One issue not mentioned in the paper is delayed feedback. When a bank grants a loan, a lot of time passes before the true label becomes available, and during this time the function to be learned p(y|x) may change. A related issue mentioned by a reviewer is correlation between labels of different examples, as caused for example by a recession. The last section should discuss these issues briefly, and also discuss fairness, as opposed to merely mentioning it. Lack of exploration can lead to never giving loans to members of a group that has historically faced prejudice. Of course, the three issues just mentioned apply more broadly than just specifically to granting of credit.
train
[ "NRYa8uifUf3", "F5GHVxRXqyO", "VHB229tjMc", "i_JOqSuiL3I", "W0n72KZSmB", "CCFURLnWgYR", "63_aEHl3ypB", "vHbo8FNGPlL", "vjrKE1NOwOB", "4dK3K1pLCzB", "ay0c_rbYc5q", "NIYioDG_Nwh" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 7avZ,\n\nWe would like to make sure that our response answered your questions. We are particularly interested to see if our explanation about 'Online Learning' helped clarify some of the misunderstanding about our setup.\n\nThanks a lot, \n\nThe Authors", "This paper first motivates the \"bank loa...
[ -1, 6, 7, -1, -1, -1, -1, -1, -1, -1, 4, 7 ]
[ -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "vHbo8FNGPlL", "nips_2021_jYzSTzvDP3p", "nips_2021_jYzSTzvDP3p", "63_aEHl3ypB", "nips_2021_jYzSTzvDP3p", "4dK3K1pLCzB", "VHB229tjMc", "ay0c_rbYc5q", "F5GHVxRXqyO", "NIYioDG_Nwh", "nips_2021_jYzSTzvDP3p", "nips_2021_jYzSTzvDP3p" ]
nips_2021_8gyF7P-kEud
Visualizing the Emergence of Intermediate Visual Patterns in DNNs
This paper proposes a method to visualize the discrimination power of intermediate-layer visual patterns encoded by a DNN. Specifically, we visualize (1) how the DNN gradually learns regional visual patterns in each intermediate layer during the training process, and (2) the effects of the DNN using non-discriminative patterns in low layers to construct discriminative patterns in middle/high layers through the forward propagation. Based on our visualization method, we can quantify knowledge points (i.e., the number of discriminative visual patterns) learned by the DNN to evaluate the representation capacity of the DNN. Furthermore, this method also provides new insights into signal-processing behaviors of existing deep-learning techniques, such as adversarial attacks and knowledge distillation.
accept
This work proposes a new method for examining the patterns learned by intermediate layers of a DNN, by projecting features into a lower dimensional space that preserves discriminative structure. The reviewers were consistently positive about the paper’s novelty and impact, though some concerns were raised regarding clarity. The authors have done a good job of satisfying reviewer concerns in their rebuttals, and I trust that the authors can incorporate these clarifications into the final version without radically reshaping the paper. All in all, interesting and exciting work.
train
[ "yba14mceuwW", "FE-_07n4RqT", "GsuNkgMibY1", "HLw02hmjjh", "7IfMkeUhmgO", "3FmdZtrU46F", "QC4ME_rHMJw", "Wsq6Jd4FJu2", "PfasK45vDNu", "h2F27syc1JB", "viQzv4dh7RV", "-fspa-nkcdY", "3YvBt_z7TJs", "KBTZ5rugcfM", "N6_ZkBrTY4e", "FO-lYhLxrp", "Gtolpt_5Uxc" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you very much.", " Thanks for the updates and explanation, and nice work on the paper", " Thank you very much.", " Thank you for the comments and the new experiments. Congratulations on the paper.", " Thank you very much.", " I thank authors for their rebuttal. After reading all the reviews and re...
[ -1, -1, -1, -1, -1, -1, 7, -1, 6, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "FE-_07n4RqT", "KBTZ5rugcfM", "HLw02hmjjh", "-fspa-nkcdY", "3FmdZtrU46F", "N6_ZkBrTY4e", "nips_2021_8gyF7P-kEud", "h2F27syc1JB", "nips_2021_8gyF7P-kEud", "viQzv4dh7RV", "PfasK45vDNu", "Gtolpt_5Uxc", "Gtolpt_5Uxc", "FO-lYhLxrp", "QC4ME_rHMJw", "nips_2021_8gyF7P-kEud", "nips_2021_8gyF7...
nips_2021_vLVEZr_66Ik
Learning 3D Dense Correspondence via Canonical Point Autoencoder
We propose a canonical point autoencoder (CPAE) that predicts dense correspondences between 3D shapes of the same category. The autoencoder performs two key functions: (a) encoding an arbitrarily ordered point cloud to a canonical primitive, e.g., a sphere, and (b) decoding the primitive back to the original input instance shape. Being placed at the bottleneck, this primitive plays a key role in mapping all the unordered point clouds onto the canonical surface and in being reconstructed in an ordered fashion. Once trained, points from different shape instances that are mapped to the same locations on the primitive surface are determined to be a correspondence pair. Our method does not require any form of annotation or self-supervised part segmentation network and can handle unaligned input point clouds within a certain rotation range. Experimental results on 3D semantic keypoint transfer and part segmentation transfer show that our model performs favorably against state-of-the-art correspondence learning methods.
accept
This submission received 4 positive final ratings: 6, 6, 7, 6. On the positive side, reviewers acknowledged the importance of the problem, the originality of the contributions (asymmetric Chamfer distance loss), an overall meaningful and well-motivated approach, well-executed ablation studies, and strong performance. At the same time, they expressed concerns about the method's complexity (multiple loss terms that need balancing), requested additional clarifications (on training details, rotational invariance), and suggested testing the method on a real-world dataset. These concerns were mostly addressed in the rebuttal. The final recommendation is therefore to accept for poster presentation.
train
[ "o7LOf2v5tom", "OZ7LosFbVw", "HZg0g_CcaRd", "b1TonYfjvly", "kiYPlWJ2paw", "9spwxLVxvkW", "7YCOEX21nJ", "pIEyRhGNMZ", "4V9EoF6m6YJ", "t6dW4prsvlq", "3GXkuCYkVSu", "U3YBnm38iwZ" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an autoencoder architecture that decouples point cloud’s representation into the global latent vector and spherical canonical embedding (in $\\mathbb{S}^2$ space) for each of its points. It is trained in the unsupervised way on the category-specific point clouds. This canonical map turns out to ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nips_2021_vLVEZr_66Ik", "HZg0g_CcaRd", "b1TonYfjvly", "7YCOEX21nJ", "nips_2021_vLVEZr_66Ik", "U3YBnm38iwZ", "o7LOf2v5tom", "3GXkuCYkVSu", "t6dW4prsvlq", "nips_2021_vLVEZr_66Ik", "nips_2021_vLVEZr_66Ik", "nips_2021_vLVEZr_66Ik" ]
nips_2021_jyMpZyqrvYz
Speech-T: Transducer for Text to Speech and Beyond
Neural Transducer (e.g., RNN-T) has been widely used in automatic speech recognition (ASR) due to its capabilities of efficiently modeling monotonic alignments between input and output sequences and naturally supporting streaming inputs. Considering that monotonic alignments are also critical to text to speech (TTS) synthesis and streaming TTS is also an important application scenario, in this work, we explore the possibility of applying Transducer to TTS and more. However, it is challenging because it is difficult to trade off the emission (continuous mel-spectrogram prediction) probability and transition (ASR Transducer predicts a blank token to indicate transition to the next input) probability when calculating the output probability lattice in Transducer, and it is not easy to learn the alignments between text and speech through the output probability lattice. We propose Speech Transducer (Speech-T for short), a Transformer-based Transducer model that 1) uses a new forward algorithm to separate the transition prediction from the continuous mel-spectrogram prediction when calculating the output probability lattice, and uses a diagonal constraint in the probability lattice to help the alignment learning; 2) supports both full-sentence and streaming TTS by adjusting the look-ahead context; and 3) further supports both TTS and ASR together for the first time, which enjoys several advantages including fewer parameters as well as streaming synthesis and recognition in a single model. Experiments on the LJSpeech dataset demonstrate that Speech-T 1) is more robust than attention-based autoregressive TTS models due to its inherent monotonic alignments between text and speech; 2) naturally supports streaming TTS with good voice quality; and 3) enjoys the benefit of jointly modeling TTS and ASR in a single network.
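The "separated transition probability" idea can be illustrated with a toy forward recursion over the text-frame alignment lattice. This is a simplified monotonic-alignment sketch under our own assumptions (per-cell advance probabilities `p_tr`), not the paper's exact forward algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T_frames, U_text = 8, 4                     # mel frames and text tokens
p_tr = rng.uniform(0.1, 0.9, size=(T_frames, U_text))  # P(advance to next token)
p_tr[:, -1] = 0.0                           # cannot advance past the last token

# alpha[t, u]: probability of being aligned to token u at frame t.
alpha = np.zeros((T_frames, U_text))
alpha[0, 0] = 1.0
for t in range(1, T_frames):
    for u in range(U_text):
        stay = alpha[t - 1, u] * (1.0 - p_tr[t - 1, u])
        move = alpha[t - 1, u - 1] * p_tr[t - 1, u - 1] if u > 0 else 0.0
        alpha[t, u] = stay + move

print(alpha.sum(axis=1))   # each row sums to 1: a proper distribution over tokens
```

Because the transition probabilities are kept separate, the frame-level mel emission can be modeled independently of the lattice recursion above.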
accept
The paper proposes to apply Neural Transducers (popular for end-to-end ASR) to the problem of TTS, as well as joint TTS+ASR, which the reviewers agreed is novel enough for NeurIPS. The paper in its current form suffers, however, from its presentation, which raised many questions (mostly addressed by the rebuttal, though). In addition, it was noted that the experimental work could have been more convincing, as currently the quantitative results are no better than the baseline. Overall, the paper is thus very borderline, trending toward acceptance, assuming the authors work hard on improving the presentation.
train
[ "xgSd7UpcTEB", "QPRKXfSXs3", "Bj426gP1nhw", "D9Pc9l1rfj", "fuNGmD2uGB", "GfvpixydF_U", "PZ_hfUjgHFt", "Hdq1WI1Cmei" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thank you for the detailed response. And also for publishing the code, which I really appreciate.\n\nI increased my score. I think this work is fine for acceptance, when my concerns are reasonably addressed.\n", "Motivation: Transducer for TTS, streaming\n\nModel:\nUses convolution + self-attention on text inpu...
[ -1, 6, -1, 6, -1, -1, -1, 5 ]
[ -1, 5, -1, 4, -1, -1, -1, 5 ]
[ "fuNGmD2uGB", "nips_2021_jyMpZyqrvYz", "Hdq1WI1Cmei", "nips_2021_jyMpZyqrvYz", "GfvpixydF_U", "QPRKXfSXs3", "D9Pc9l1rfj", "nips_2021_jyMpZyqrvYz" ]
nips_2021_sW40wkwfsZp
Multi-modal Dependency Tree for Video Captioning
Generating fluent and relevant language to describe visual content is critical for the video captioning task. Many existing methods generate captions using sequence models that predict words in a left-to-right order. In this paper, we investigate a graph-structured model for caption generation by explicitly modeling the hierarchical structure in sentences to further improve their fluency and relevance. To this end, we propose a novel video captioning method that generates a sentence by first constructing a multi-modal dependency tree and then traversing the constructed tree, where the syntactic structure and semantic relationships in the sentence are represented by the tree topology. To take full advantage of the information from both vision and language, both visual and textual representation features are encoded into each tree node. Different from existing dependency parsing methods that generate uni-modal dependency trees for language understanding, our method constructs multi-modal dependency trees for language generation from images and videos. We also propose a tree-structured reinforcement learning algorithm to effectively optimize the captioning model, where a novel reward is designed by evaluating the semantic consistency between the generated sub-tree and the ground-truth tree. Extensive experiments on several video captioning datasets demonstrate the effectiveness of the proposed method.
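A minimal sketch of "generate by traversing a dependency tree": each node carries a word (plus, in the paper, fused visual/textual features), and an in-order-style traversal linearizes the tree into a sentence. The tree below is constructed by hand purely for illustration.

```python
class Node:
    """A dependency-tree node: a head word with left and right dependents."""
    def __init__(self, word, left=None, right=None):
        self.word, self.left, self.right = word, left or [], right or []

def linearize(node):
    """Emit left dependents, then the head word, then right dependents."""
    words = []
    for c in node.left:
        words += linearize(c)
    words.append(node.word)
    for c in node.right:
        words += linearize(c)
    return words

root = Node("chases",
            left=[Node("dog", left=[Node("a")])],
            right=[Node("cat", left=[Node("a")])])
print(" ".join(linearize(root)))        # "a dog chases a cat"
```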
accept
All reviewers who took part in the post-rebuttal discussion recommend or lean toward accepting the paper. In my opinion, the concerns of reviewer c8JF have also been addressed. The paper contributes an interesting model of using dependency trees for video captioning, showing solid performance compared to SOTA, and clear experimental ablations. Additional results and ablations in the author response, including on image captioning, strengthen the paper further. I recommend accepting the paper under the expectation that the authors address the concerns of the reviewers as done in the author response, including, but not limited to, the following: 1) include additional results and ablations (if they don't fit, include them in the supplement); 2) promptly release code, data, and models when the paper is published; 3) include clarifications and fix typos/errors (e.g., bolding in Table 3).
train
[ "voc-DUbXKg5", "r6AFzGWuT9n", "FdH4e_6HinP", "RgN_AaaPT0U", "5hAtQaoqH-k", "6xVbWok7ZQH", "IOdyM4LxDcl", "xLYcDIlfA_0", "cNfgQgakIND", "NhL6Qib2C_W", "34xVBCWw_Zg", "mvRk_hD-hP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi, thanks for response, Reviewer xmz3!\n\nCan you please be specific why the comparison to SOTA and image captioning is \"not convincing enough\"?\nIs it the performance difference? is it the comparisons done? what would have convinced you?...\n\nYour Meta Reviewer", " Thank the authors for their effort and ti...
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "r6AFzGWuT9n", "xLYcDIlfA_0", "6xVbWok7ZQH", "nips_2021_sW40wkwfsZp", "IOdyM4LxDcl", "mvRk_hD-hP", "RgN_AaaPT0U", "34xVBCWw_Zg", "NhL6Qib2C_W", "nips_2021_sW40wkwfsZp", "nips_2021_sW40wkwfsZp", "nips_2021_sW40wkwfsZp" ]
nips_2021_cI4c6OpwIKq
Greedy and Random Quasi-Newton Methods with Faster Explicit Superlinear Convergence
In this paper, we follow Rodomanov and Nesterov's work to study quasi-Newton methods. We focus on the common SR1 and BFGS quasi-Newton methods and establish better explicit (local) superlinear convergence rates. First, based on the greedy quasi-Newton update, which greedily selects the direction that maximizes a certain measure of progress, we improve the convergence rate to a condition-number-free superlinear rate. Second, based on the random quasi-Newton update, which selects the direction randomly from a spherically symmetric distribution, we establish the same superlinear convergence rate as above. Our analysis covers the approximation of a given Hessian matrix, unconstrained quadratic objectives, and general strongly convex, smooth, and strongly self-concordant functions.
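A small numpy sketch of the greedy-update family in the matrix-approximation setting the abstract mentions: an SR1 correction that, at each step, picks the coordinate direction with the largest diagonal residual of G - A. The greedy rule and initialization here are simplifications for illustration, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)            # target positive-definite "Hessian"
G = np.trace(A) * np.eye(n)            # initial upper approximation, G >= A

for k in range(3 * n):
    R = G - A                          # residual; stays PSD under SR1 updates
    i = np.argmax(np.diag(R))          # greedy coordinate direction e_i
    if R[i, i] <= 1e-12:
        break                          # residual is zero: G matches A exactly
    r = R[:, i]
    G = G - np.outer(r, r) / R[i, i]   # SR1 correction: (G - A) e_i becomes 0
    print(k, np.linalg.norm(G - A))    # residual norm shrinks to 0 in <= n steps
```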
accept
All reviewers recommended acceptance, and this represents as a non-trivial improvement over the existing results. Please make the changes discussed in the reviews/responses, such as the addition of the log-sum-exp experiment.
train
[ "RYcUX9mgkVB", "EVr_6NhJF7D", "0gdrHF1VpnJ", "dPgiTFkFPqp", "htdvHS1LEcM", "q6wlrK3rj3I", "d-mIkltHNVR", "ohJj4B_vkgp", "dDbHBYIE0L", "pMzUEWEMshz", "8PY1iglKTkx", "4EpUj_hy-DC", "xNgLt3auNV", "IInBBVffxZ1", "wtOj9LtP1SF", "RCzUD6LoR4A" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am satisfied with the authors' response to both my and others reviews and will therefore keep my score.", " The answer clears up a few concerns I had. I still encourage the authors to contextualise the obtained results in terms of machine learning application, as they have done in the comment, early in the pa...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "dDbHBYIE0L", "8PY1iglKTkx", "nips_2021_cI4c6OpwIKq", "4EpUj_hy-DC", "d-mIkltHNVR", "pMzUEWEMshz", "0gdrHF1VpnJ", "RCzUD6LoR4A", "wtOj9LtP1SF", "IInBBVffxZ1", "xNgLt3auNV", "0gdrHF1VpnJ", "nips_2021_cI4c6OpwIKq", "nips_2021_cI4c6OpwIKq", "nips_2021_cI4c6OpwIKq", "nips_2021_cI4c6OpwIKq"...
nips_2021_i2pFtDzmPL6
Neural Tangent Kernel Maximum Mean Discrepancy
We present a novel neural network Maximum Mean Discrepancy (MMD) statistic by identifying a new connection between the neural tangent kernel (NTK) and MMD. This connection enables us to develop a computationally and memory-efficient approach to compute the MMD statistic and perform NTK-based two-sample tests, addressing the long-standing challenge of the memory and computational complexity of the MMD statistic, which is essential for online implementations that assimilate new samples. Theoretically, this connection allows us to understand the properties of the NTK test statistic, such as the Type-I error and testing power for performing the two-sample test, by adapting existing theories for kernel MMD. Numerical experiments on synthetic and real-world datasets validate the theory and demonstrate the effectiveness of the proposed NTK-MMD statistic.
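A minimal numpy sketch of the NTK-MMD idea for a one-hidden-layer ReLU network at initialization: the NTK is an inner product of parameter-gradient features, so the (biased) MMD^2 statistic reduces to a squared distance between mean gradient embeddings, computable in O(n) memory. The architecture, widths, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 256, 100
W = rng.normal(size=(m, d))            # random init of a 1-hidden-layer ReLU net
v = rng.normal(size=m)

def ntk_features(X):
    """Gradient features phi(x) = d f(x) / d theta at initialization."""
    pre = X @ W.T                      # (n, m) pre-activations
    act = np.maximum(pre, 0.0)
    g_v = act / np.sqrt(m)                                    # grads w.r.t. v
    g_W = ((pre > 0) * v)[..., None] * X[:, None, :] / np.sqrt(m)  # grads w.r.t. W
    return np.concatenate([g_v, g_W.reshape(len(X), -1)], axis=1)

X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, d)) + 0.5      # shifted distribution

phi_x, phi_y = ntk_features(X), ntk_features(Y)
# The NTK is an inner product of gradient features, so MMD^2 reduces to a
# squared distance between mean feature embeddings.
mmd2 = np.sum((phi_x.mean(0) - phi_y.mean(0)) ** 2)
print(mmd2)
```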
accept
This paper proposes to apply the NTK to MMD, connecting MMD and neural networks. Based on this link, the paper proposes a way of addressing the high computational cost of kernel methods by approximation using neural networks. The experimental comparisons could be improved, but they are generally acceptable. Overall, the connection between NTK/neural networks and MMD is a new direction, and the work is worth presenting at NeurIPS.
train
[ "sMf7z8173di", "dn9YNsFgbka", "UG-rj_Yq-Vg", "Ed2hbOC14Ll" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n\nIn this paper, the authors propose an NTK-MMD to perform NTK based two-sample test. A \"short-time trained\" neural network is used to approximate the NTK-MMD statistic. \n\nPros.\n1. The idea of building a connection between kernel MMD and neural networks based on NTK is interesting. It can reduce the t...
[ 6, 7, 6, 7 ]
[ 3, 3, 4, 3 ]
[ "nips_2021_i2pFtDzmPL6", "nips_2021_i2pFtDzmPL6", "nips_2021_i2pFtDzmPL6", "nips_2021_i2pFtDzmPL6" ]
nips_2021_SJHRf5nW93
Subgraph Federated Learning with Missing Neighbor Generation
Graphs have been widely used in data mining and machine learning due to their unique representation of real-world objects and their interactions. As graphs grow ever larger, it is common to see their subgraphs separately collected and stored in multiple local systems. Therefore, it is natural to consider the subgraph federated learning setting, where each local system holds a small subgraph that may be biased from the distribution of the whole graph. Hence, subgraph federated learning aims to collaboratively train a powerful and generalizable graph mining model without directly sharing graph data. In this work, for this novel yet realistic setting, we propose two major techniques: (1) FedSage, which trains a GraphSage model based on FedAvg to integrate node features, link structures, and task labels on multiple local subgraphs; (2) FedSage+, which trains a missing neighbor generator alongside FedSage to deal with missing links across local subgraphs. Empirical results on four real-world graph datasets with synthesized subgraph federated learning settings demonstrate the effectiveness and efficiency of the proposed techniques, and consistent theoretical implications are established for their generalization ability on the global graphs.
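A minimal sketch of the FedAvg-style aggregation that FedSage builds on, with local models weighted by subgraph size; the model structure and sizes are placeholders, and the GraphSage layers and missing-neighbor generator are omitted.

```python
import numpy as np

def fed_avg(local_weights, num_nodes):
    """Average local model weights, weighting each holder by its subgraph size."""
    total = sum(num_nodes)
    return [
        sum(w[k] * (n / total) for w, n in zip(local_weights, num_nodes))
        for k in range(len(local_weights[0]))
    ]

# Three data holders, each with a 2-layer model (weights as arrays here).
rng = np.random.default_rng(0)
locals_ = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
global_model = fed_avg(locals_, num_nodes=[100, 250, 50])
print([p.shape for p in global_model])
```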
accept
The paper studies federated learning on graphs. It is assumed that the nodes of the graph are partitioned among the parties and that cross-party edges are not observed. The authors suggest algorithms for learning in this scenario and experiment with them on several datasets. The reviewers had some concerns about this work, but the authors used the discussion to clarify key ideas. This allowed the reviewers to reach a consensus that this work is above the acceptance threshold.
train
[ "ZefxV_As2Nf", "6QIgneUxxrC", "ovkNcT7DWtx", "CqDsdngjBT", "8uEegy5sRV", "p63c8injU4j", "aN2hQlwNp6p", "mIzomqdmmjJ", "8AFV-s-Ur2S" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a federated learning framework for training federated GNNs under the scenario where each dataholder holds disjoint a disjoint subgraph. The paper introduces FedSAGE, which extends GraphSAGE to the federated setting, and FedSAGE+, which further enhances FedSAGE with a missing neighbor generator....
[ 6, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "nips_2021_SJHRf5nW93", "ovkNcT7DWtx", "CqDsdngjBT", "8uEegy5sRV", "ZefxV_As2Nf", "8AFV-s-Ur2S", "mIzomqdmmjJ", "nips_2021_SJHRf5nW93", "nips_2021_SJHRf5nW93" ]
nips_2021_e8WWUBeafM
Bellman-consistent Pessimism for Offline Reinforcement Learning
Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, Alekh Agarwal
accept
Reviewers agreed that this paper constitutes a significant step forward for offline reinforcement learning with general function approximation, as it facilitates weaker notions of both transfer error and approximation error than prior work. While the algorithm in the paper is primarily theoretical, the authors also give a more practical/implementable version, which is valuable as well. In addition, the paper is clear and well-written.
train
[ "ZggFC_BIrnh", "-XWmZAEN0v", "6Pg2H_afV_s", "J8Afvd5OjsC", "1btEqunTsCT", "2WaWLGfOTuX", "_7DLHLOP77p", "gBVSJR0vxNO", "wxyqy8lU8Ma", "iW8zCrb-9e6", "B3FvQczKZ2e" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies offline reinforcement with function approximation, where the function class is general (but finite), without the strong assumption of coverage of the dataset. It proposes an algorithm based on a new notion of Bellman-consistency combined with pessimism that is applied globally (at the initial st...
[ 7, -1, -1, 8, -1, -1, -1, -1, -1, 7, 8 ]
[ 3, -1, -1, 5, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_e8WWUBeafM", "6Pg2H_afV_s", "2WaWLGfOTuX", "nips_2021_e8WWUBeafM", "2WaWLGfOTuX", "J8Afvd5OjsC", "ZggFC_BIrnh", "B3FvQczKZ2e", "iW8zCrb-9e6", "nips_2021_e8WWUBeafM", "nips_2021_e8WWUBeafM" ]
nips_2021_Tsp2PL7-GQ
Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks
Deep neural networks are powerful machines for visual pattern recognition, but reasoning tasks that are easy for humans may still be difficult for neural models. Humans possess the ability to extrapolate reasoning strategies learned on simple problems to solve harder examples, often by thinking for longer. For example, a person who has learned to solve small mazes can easily extend the very same search techniques to solve much larger mazes by spending more time. In computers, this behavior is often achieved through the use of algorithms, which scale to arbitrarily hard problem instances at the cost of more computation. In contrast, the sequential computing budget of feed-forward neural networks is limited by their depth, and networks trained on simple problems have no way of extending their reasoning to accommodate harder problems. In this work, we show that recurrent networks trained to solve simple problems with few recurrent steps can indeed solve much more complex problems simply by performing additional recurrences during inference. We demonstrate this algorithmic behavior of recurrent networks on prefix sum computation, mazes, and chess. In all three domains, networks trained on simple problem instances are able to extend their reasoning abilities at test time simply by "thinking for longer."
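The core mechanism is easy to sketch: a weight-tied recurrent block whose number of applications can be increased at test time, so the same parameters perform more sequential computation. A toy numpy version, with random weights standing in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def recurrent_block(h, x, W, U):
    """One weight-tied recurrent step; reusing it more times adds 'thinking'."""
    return np.tanh(h @ W + x @ U)

W, U = rng.normal(size=(16, 16)) * 0.1, rng.normal(size=(8, 16)) * 0.1
x = rng.normal(size=(1, 8))
h0 = np.zeros((1, 16))

for steps in (5, 20, 80):              # same weights, more test-time iterations
    h = h0
    for _ in range(steps):
        h = recurrent_block(h, x, W, U)
    print(steps, np.linalg.norm(h))
```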
accept
This paper presents a genuinely interesting result, namely that by increasing the effective depth of a network through the repetition of learned dimensionality-preserving transformations within the network, the resulting test-time network obtains better extrapolatory performance without further training. It shows this empirically on a diverse-enough set of toy-ish tasks to pique interest. I can see this paper generating quite a bit of discussion in a certain part of the NeurIPS community, in particular with regard to its application to Transformer-style architectures, to which such an idea of adaptive depth could be easily applied. There is a huge variance in scores, but thankfully, there was quite a bit of discussion, and the reviews are all fairly detailed (especially those in support of the paper). I'll be honest, I simply don't understand the argument made by Reviewer iKnp in favour of rejection. While it's true that the question "Can You Learn an Algorithm?" has been addressed by methods such as Neural GPUs, Turing Machines, Stacks etc. circa 2014/2015, the matter is certainly not closed or solved in any substantial way, and a contribution such as that made by this paper is welcome. Aside from this point, I don't see a convincing argument in favour of rejection in Reviewer iKnp's comments. Reviewer idGE offers a more substantial critical reading of the paper, which the authors should pay more heed to. I agree with some of this reviewer's points regarding the framing of the contribution and the need to be careful with the notion of hardness of problems. I urge the authors to consider fairly substantial edits to be a bit more diligent and conservative with their claims here, but at the end of the day would not consider rejecting the paper on the basis of this critique alone. Reviewer Yvok seems satisfied with the author responses to their review, but has not updated their score past 6. However, I don't really see what outstanding issues they have that would prevent them from continuing in their support of the paper. Finally, Reviewer XL39 has written substantially to champion the paper's acceptance, echoing however the issues raised by Reviewer idGE regarding terms such as "hard" which are used a little casually. I again recommend the authors take the task of editing the paper seriously to incorporate this feedback. While on the face of it this paper is borderline when it comes to scores, when weighing the reviewer recommendations by the strength of the arguments and the detail of the reviews and discussions, I find that this paper is unambiguously acceptable and a strong contribution. It may not be perfect, but it has good potential to influence others' work and will add to the diversity of topics at the conference.
train
[ "W_7byapg5c7", "z3zzzw6hzn", "S6ENuRxAZXF", "vlq8WtuCxD4", "WlHzkWjPnKw", "Upf0RUkEaN", "1CRaSsYmEp4", "_54fF_239Jx", "FLdoxohfTLi", "x2doLcCkoCH", "jC4LnBQvgcj", "IHJTVfhg_8D", "L7ja6_3qNL2", "pd1HE3gJxv3" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Neural GPUs are designed explicitly for sequential inputs. In fact, the results in (Kaiser & Sutskever 2016) are only on binary bit strings. Our results are therefore a valuable exploration into algorithm learning work.\n\nFurthermore, the structure of neural GPUs limits their generalization capabilities to longe...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "WlHzkWjPnKw", "Upf0RUkEaN", "1CRaSsYmEp4", "x2doLcCkoCH", "_54fF_239Jx", "FLdoxohfTLi", "jC4LnBQvgcj", "pd1HE3gJxv3", "L7ja6_3qNL2", "IHJTVfhg_8D", "nips_2021_Tsp2PL7-GQ", "nips_2021_Tsp2PL7-GQ", "nips_2021_Tsp2PL7-GQ", "nips_2021_Tsp2PL7-GQ" ]
nips_2021_OfdQxpZbQMB
Sub-Linear Memory: How to Make Performers SLiM
Valerii Likhosherstov, Krzysztof M. Choromanski, Jared Quincy Davis, Xingyou Song, Adrian Weller
accept
The reviewers have reached a consensus in favor of accepting this paper. I concur with this consensus. I expect that the proposed changes suggested in the author response would improve the clarity of the final version of the paper.
train
[ "93rqgytyn02", "kXn4L1f5Hfv", "EOgxtLgm-u2", "Odg9CWmyB94", "dUv7MchWobf", "4fov2cWailY", "EQ18FgB8aeC", "XYgsQx3niCP", "7ghASMIYgdv", "ggjl73yhBZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes a modification of linear self-attention that allows to trade off parallel execution speed for memory usage. Authors provide extensive comparison of their method with baseline approaches in several configurations and on multiple datasets. Strengths:\n1) A simple but effective approach to memory ...
[ 6, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "nips_2021_OfdQxpZbQMB", "4fov2cWailY", "dUv7MchWobf", "ggjl73yhBZ", "7ghASMIYgdv", "93rqgytyn02", "XYgsQx3niCP", "nips_2021_OfdQxpZbQMB", "nips_2021_OfdQxpZbQMB", "nips_2021_OfdQxpZbQMB" ]
nips_2021_TLIHuw3gcQB
Efficient Learning of Discrete-Continuous Computation Graphs
Numerous models for supervised and reinforcement learning benefit from combinations of discrete and continuous model components. End-to-end learnable discrete-continuous models are compositional, tend to generalize better, and are more interpretable. A popular approach to building discrete-continuous computation graphs is that of integrating discrete probability distributions into neural networks using stochastic softmax tricks. Prior work has mainly focused on computation graphs with a single discrete component on each of the graph's execution paths. We analyze the behavior of more complex stochastic computation graphs with multiple sequential discrete components. We show that it is challenging to optimize the parameters of these models, mainly due to small gradients and local minima. We then propose two new strategies to overcome these challenges. First, we show that increasing the scale parameter of the Gumbel noise perturbations during training improves the learning behavior. Second, we propose dropout residual connections specifically tailored to stochastic, discrete-continuous computation graphs. With an extensive set of experiments, we show that we can train complex discrete-continuous models which one cannot train with standard stochastic softmax tricks. We also show that complex discrete-stochastic models generalize better than their continuous counterparts on several benchmark datasets.
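The two proposed strategies are simple to sketch. Below is a toy numpy illustration of (a) Gumbel-softmax sampling with a separate noise-scale parameter `beta` (the quantity increased during training) and (b) a dropout residual connection that stochastically bypasses the discrete bottleneck. All tensors, shapes, and the continuous-path stand-in are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0, beta=1.0):
    """Gumbel-softmax sample; beta scales the Gumbel noise (the annealed knob)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + beta * g) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

def drop_res(h_continuous, h_discrete, p_drop=0.5):
    """Dropout residual: with probability p_drop, bypass the discrete bottleneck."""
    return h_continuous if rng.uniform() < p_drop else h_discrete

logits = np.array([2.0, 0.5, -1.0])
codebook = rng.normal(size=(3, 4))
soft = gumbel_softmax(logits, beta=2.0)     # larger beta -> more exploration
h_disc = soft @ codebook                    # relaxed discrete component output
h_cont = logits @ np.eye(3, 4)              # stand-in for the continuous path
print(drop_res(h_cont, h_disc))
```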
accept
Arguing that the task of learning parameters in stochastic computation graphs containing sequences of discrete random variables is difficult due to vanishing gradients resulting from the use of the Gumbel-softmax relaxation, the authors propose two techniques to counteract this difficulty: (i) dropout residual connections (DropRes) that bypass the discrete sampling / discretization step with some probability and (ii) annealing the Gumbel scale parameter while keeping the relaxation temperature fixed. The authors demonstrate the effectiveness of the techniques at training discrete/continuous models to perform unsupervised parsing of lists of operations and multi-hop reasoning in knowledge graphs. The reviewers agreed that the techniques are novel and that the paper is interesting and potentially influential. The experimental results are also impressive, with the proposed methods being clearly helpful at achieving good performance on fairly sophisticated tasks. The experiments could, however, be more systematic and informative by including more ablations. Including at least one modern REINFORCE+variance-reduction technique as a baseline would be nice, as such methods do not suffer from vanishing gradients (suffering from higher gradient variance instead). The paper could also be improved by making the motivating argument about vanishing gradients more formal and by improving the clarity of the writing throughout, taking into account the reviewer suggestions.
train
[ "QC-t00NwQg", "-H6Xp2odAPm", "BOcxYphE8QH", "UrNu1TArOmc", "gAyv899t3Pt", "y01DKsTnLL-", "KQa697Y_eX_", "1wff0cgVZK8", "ey5d_hkgxL", "2uag5ZKFBfS", "cCAypQH1fZN", "FswXI8ArpDE", "tPkaF7D6dPk", "oGZz0446hoN", "XmJ1gLtDyKl", "FZ2xWy1-OnR", "Hw2LN8GYq2V", "jiIFunGVu0R", "AQWFlhSFE06...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", ...
[ " If I understand correctly, your point is that $\\epsilon_j$ will be larger as $\\beta$ grows? I agree with that, but $\\epsilon_i$ will also have mean $\\beta \\gamma$, so it will also grow as $\\beta$ grows, and at the same rate. Since the softmax function is invariant to constant shifts of all inputs (e.g. addi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "y01DKsTnLL-", "BOcxYphE8QH", "UrNu1TArOmc", "ey5d_hkgxL", "nips_2021_TLIHuw3gcQB", "KQa697Y_eX_", "1wff0cgVZK8", "FswXI8ArpDE", "jiIFunGVu0R", "nips_2021_TLIHuw3gcQB", "FswXI8ArpDE", "tPkaF7D6dPk", "oGZz0446hoN", "XmJ1gLtDyKl", "FZ2xWy1-OnR", "2uag5ZKFBfS", "E7ohqJvqOMZ", "PfSgBjx...
nips_2021_EO-CQzgcIxd
VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization
Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form of graph convolution, which can be realized by message passing between direct neighbors or beyond. To scale such GNNs to large graphs, various neighbor-, layer-, or subgraph-sampling techniques have been proposed to alleviate the "neighbor explosion" problem by considering only a small subset of the messages passed to the nodes in a mini-batch. However, sampling-based methods are difficult to apply to GNNs that utilize many-hops-away or global context in each layer, show unstable performance for different tasks and datasets, and do not speed up model inference. We propose a principled and fundamentally different approach, VQ-GNN, a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising performance. In contrast to sampling-based techniques, our approach can effectively preserve all the messages passed to a mini-batch of nodes by learning and updating a small number of quantized reference vectors of global node representations, using VQ within each GNN layer. Our framework avoids the "neighbor explosion" problem of GNNs by using quantized representations combined with a low-rank version of the graph convolution matrix. We show that such a compact low-rank version of the gigantic convolution matrix is sufficient both theoretically and experimentally. In combination with VQ, we design a novel approximate message passing algorithm and a nontrivial back-propagation rule for our framework. Experiments on various types of GNN backbones demonstrate the scalability and competitive performance of our framework on large-graph node classification and link prediction benchmarks.
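A minimal numpy sketch of the central VQ step: node representations are assigned to their nearest codebook vectors, and messages from out-of-mini-batch neighbors are approximated by those codewords, so a batch never needs the full convolution matrix. The codebook here is random; in the framework it is learned and updated during training.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 1000, 16, 32                 # nodes, feature dim, codebook size

H = rng.normal(size=(n, d))            # node representations at some layer
codebook = rng.normal(size=(K, d))     # learned reference vectors in practice

# Vector quantization: assign every node to its nearest codeword.
assign = np.argmin(
    ((H[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1
)

# Out-of-batch neighbor messages are replaced by quantized stand-ins, so only
# K reference vectors (not all n nodes) are needed per mini-batch step.
batch = rng.choice(n, size=64, replace=False)
approx_neighbor_msgs = codebook[assign]
print(approx_neighbor_msgs[batch].shape)
```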
accept
3 of 4 ratings were "accept". The concerns from the reviewer who gave the lowest rating (a 5) were not clear to me even after the extended conversation with the authors, and I am not swayed by them. The authors did extensive further experiments in response to the feedback, which reinforced their results. Overall, I believe the paper is a borderline-to-clear accept given its novelty and performance.
train
[ "ib1W2ikDp9T", "3L-Mkvbe-9e", "NE8EavQlm-v", "DYSNNR8CfQ", "73tbtJA1hWa", "1bIUPozi9W", "I2ZOvyiP-n5", "LnXvxwbpdz", "N8gp86i_DR9", "YDHTq4S9K8c", "2By4_v67AFI", "54zgWAEB-XI", "6x6wNoIH16D", "6tWQb6wRneK", "yId_sLwATLm", "MrFNXtUUtcn", "hSSwUU9lW2x", "E-q2bneYSvk", "bRrKvgjLwaf"...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "...
[ " We are grateful for Reviewer UTjD's latest response. As suggested, we have carefully checked our code of Cluster-GCN and GraphSAINT-RW (which are based on the implementations in PyG [3]), and they are sound to the best of our knowledge and efforts. One thing we did find helped speed up the baselines is to downgra...
[ -1, -1, -1, -1, 6, -1, -1, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, -1, -1, 3, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "3L-Mkvbe-9e", "DYSNNR8CfQ", "I2ZOvyiP-n5", "I2ZOvyiP-n5", "nips_2021_EO-CQzgcIxd", "hSSwUU9lW2x", "bRrKvgjLwaf", "nips_2021_EO-CQzgcIxd", "nips_2021_EO-CQzgcIxd", "6x6wNoIH16D", "E-q2bneYSvk", "6x6wNoIH16D", "nips_2021_EO-CQzgcIxd", "N8gp86i_DR9", "N8gp86i_DR9", "nips_2021_EO-CQzgcIxd...
nips_2021_ALvt7nXa2q
Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima
This paper considers incremental few-shot learning, which requires a model to continually recognize new categories with only a few examples provided. Our study shows that existing methods severely suffer from catastrophic forgetting, a well-known problem in incremental learning, which is aggravated due to data scarcity and imbalance in the few-shot setting. Our analysis further suggests that to prevent catastrophic forgetting, actions need to be taken in the primitive stage -- the training of base classes -- rather than in later few-shot learning sessions. Therefore, we propose to search for flat local minima of the base training objective function and then fine-tune the model parameters within the flat region on new tasks. In this way, the model can efficiently learn new classes while preserving the old ones. Comprehensive experimental results demonstrate that our approach outperforms all prior state-of-the-art methods and is very close to the approximate upper bound. The source code is available at https://github.com/moukamisama/F2M.
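A deliberately simple sketch of the flat-minima intuition: evaluate the base-training loss under random parameter perturbations and average, so that sharp minima (whose loss spikes under perturbation) are penalized. The noise model and toy objective are our own assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def flat_loss(theta, loss_fn, noise_scale=0.05, n_samples=4):
    """Average the loss over random perturbations of theta, favoring flat minima."""
    return np.mean([
        loss_fn(theta + rng.normal(scale=noise_scale, size=theta.shape))
        for _ in range(n_samples)
    ])

loss_fn = lambda t: np.sum(t ** 2) + 0.1 * np.sum(np.sin(20 * t))  # toy objective
theta = rng.normal(size=5)
print(flat_loss(theta, loss_fn), loss_fn(theta))
```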
accept
After reading the paper and the other reviews, and interacting with the authors via the rebuttal, all reviewers recommend accepting the paper. It is well written and motivated, and the proposed method, which is relatively simple, is reasonably evaluated in the context of incremental few-shot learning.
train
[ "_8IUtHghr-w", "HOdTyhLStCT", "0gKZFtRErIm", "C6P0gjDwFJ-", "4rPHnOBHunF", "xdv_X-PbY03", "ktYzKJQYPRj", "4lsVbmkbCCd", "3xl3tCf3yVP", "-c7pOg0_Tho", "oQcvLoyEpvL", "9VXbJRTnzV", "gsMlrkj5r8", "hlqP-jG83wX", "-HoudDj7yu", "Onj4JQrR4", "MwL5UNIFel" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " The authors properly addressed all the raised questions. Therefore, I maintain my initial rating which is 7,", "This paper describes an approach to incremental few-shot learning based on\nfinding a good starting configuration in parameter space that leads to more\nstable incremental few-shot learning with less ...
[ -1, 9, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "-HoudDj7yu", "nips_2021_ALvt7nXa2q", "C6P0gjDwFJ-", "-c7pOg0_Tho", "nips_2021_ALvt7nXa2q", "4lsVbmkbCCd", "nips_2021_ALvt7nXa2q", "gsMlrkj5r8", "oQcvLoyEpvL", "oQcvLoyEpvL", "hlqP-jG83wX", "nips_2021_ALvt7nXa2q", "ktYzKJQYPRj", "4rPHnOBHunF", "MwL5UNIFel", "HOdTyhLStCT", "nips_2021_...
nips_2021_MMZ4djXrwbu
Functional Neural Networks for Parametric Image Restoration Problems
Almost every single image restoration problem has a closely related parameter, such as the scale factor in super-resolution, the noise level in image denoising, and the quality factor in JPEG deblocking. Although recent studies on image restoration problems have achieved great success due to the development of deep neural networks, they handle the parameter involved in an unsophisticated way. Most previous researchers either treat problems with different parameter levels as independent tasks and train a specific model for each parameter level, or simply ignore the parameter and train a single model for all parameter levels. The two popular approaches have their own shortcomings: the former is computationally inefficient, and the latter is ineffective in performance. In this work, we propose a novel system called functional neural network (FuncNet) to solve a parametric image restoration problem with a single model. Unlike a plain neural network, the smallest conceptual element of our FuncNet is no longer a floating-point variable, but a function of the parameter of the problem. This feature makes it both efficient and effective for a parametric problem. We apply FuncNet to super-resolution, image denoising, and JPEG deblocking. The experimental results show the superiority of our FuncNet on all three parametric image restoration tasks over the state of the art.
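The "weights as functions of the problem parameter" idea can be sketched with the simplest possible function family: a per-weight linear interpolation in the parameter level. The interpolation form and the noise-level range are illustrative assumptions, not necessarily FuncNet's parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def func_weight(w_lo, w_hi, level, lo=15.0, hi=75.0):
    """A weight that is a function of the problem parameter (here: noise level)."""
    alpha = np.clip((level - lo) / (hi - lo), 0.0, 1.0)
    return (1.0 - alpha) * w_lo + alpha * w_hi

w_lo, w_hi = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
for sigma in (15.0, 35.0, 75.0):       # one model serves every noise level
    W = func_weight(w_lo, w_hi, sigma)
    print(sigma, W[0, 0])
```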
accept
This paper develops a functional neural network for image restoration/reconstruction tasks. The authors develop adaptive restoration across different parameter levels using a single model, essentially by linearly mixing two networks. This is somewhat akin to doing meta-learning for restoration. The authors provide various experiments on a variety of restoration tasks showing improvements compared to existing approaches. The reviewers mostly thought the paper's approach was interesting and commended the clarity of the paper. However, they also raised concerns about novelty, the differentiation of task and network parameters, and baselines. There was a lively discussion in the rebuttal period, and both the authors and reviewers engaged in the discussion. As a result, most reviewers, while somewhat lukewarm, lean towards acceptance. I share this opinion. The paper has some interesting ideas but can improve in other aspects. I recommend acceptance but ask the authors to follow the recommendations of the reviewers to improve the final presentation.
val
[ "U03n3KQpYcE", "XM9Bt2VJPF", "C5QfeTnLW7h", "HSId-ofSyz", "W0cKHTcVdaH", "d2PaXO-BrhB", "IdFIBp9oTDs", "oVDmCNMhKFK", "vYjNQrBOVQ", "VRfUKB_oWw", "_5Ln5fav-yv", "8_kN04hLwqO", "9slb3Q9LjYs", "SxE94leMmE", "kDyi1KpATE" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you these clarifications. I have accounted for this information and my original misunderstanding in my final recommendation.", " Thank you for the additional experiments, they are encouraging referencing more recent work", " Thanks for your feedback. Following your request for comparisons with more rece...
[ -1, -1, -1, -1, 6, -1, 6, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, -1, 4, -1, 4, -1, 5, -1, -1, -1, -1, -1, 4 ]
[ "oVDmCNMhKFK", "C5QfeTnLW7h", "IdFIBp9oTDs", "d2PaXO-BrhB", "nips_2021_MMZ4djXrwbu", "9slb3Q9LjYs", "nips_2021_MMZ4djXrwbu", "VRfUKB_oWw", "nips_2021_MMZ4djXrwbu", "SxE94leMmE", "kDyi1KpATE", "IdFIBp9oTDs", "W0cKHTcVdaH", "vYjNQrBOVQ", "nips_2021_MMZ4djXrwbu" ]
nips_2021_099uYP0EKsJ
Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess \emph{fractal structures}, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal's \emph{intrinsic dimension}, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks would not overfit, computing the intrinsic dimension (\eg, for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem from the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the 'persistent homology dimension' (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD at the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network's intrinsic dimension in a variety of settings, which is predictive of the generalization error.
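The PH0-based estimator lends itself to a short numpy/scipy sketch: for point clouds, the degree-0 persistence lifetimes coincide with minimum-spanning-tree edge lengths, so the dimension can be read off the log-log slope of the alpha-weighted MST length against the sample size. The point cloud, constants, and sample sizes below are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

def e_alpha(points, alpha=1.0):
    """Sum of MST edge lengths^alpha: the PH0 'weighted lifetime' of the set."""
    mst = minimum_spanning_tree(squareform(pdist(points)))
    return (mst.data ** alpha).sum()

# E(n) ~ n^{(d - alpha)/d} for intrinsic dimension d: fit the log-log slope.
cloud = rng.normal(size=(2000, 2))     # points with intrinsic dimension 2
ns = np.array([100, 200, 400, 800, 1600])
es = [e_alpha(cloud[rng.choice(len(cloud), n, replace=False)]) for n in ns]
slope = np.polyfit(np.log(ns), np.log(es), 1)[0]
print("estimated dim:", 1.0 / (1.0 - slope))  # slope = (d - alpha)/d with alpha=1
```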
accept
This paper identifies a notion of intrinsic dimension that can be rigorously linked to generalization performance. Given the scarcity of rigorous results in this field, it is an important achievement. It backs up the theoretical results with numerical experiments. A number of reviewers pointed out the difficulty of reading and interpreting the figures. It is important that the authors make an effort on that point before publication. The reviewers seem mostly convinced by the authors' answers. After the rebuttal, reviewer J8S7 acknowledges the originality of the theoretical contribution of the paper (its main strength), but remains only partially convinced by the answers concerning the large performance gap with respect to state-of-the-art networks. I completely agree with reviewer J8S7 that such a performance gap is a source of doubt. Therefore, I invite the authors either to discuss in detail the reasons for that gap and the choice of these experiments despite it (a gap that can be closed with simple networks), as they do in their response, or to improve the generalization. Otherwise, it gives the impression that the results of the paper may only apply in trivial regimes of (lack of) learning and therefore would not be relevant to applications. That being said, I have no doubt that all these improvements will be implemented by the authors and will only improve the overall quality.
train
[ "PsvdUkMZD2Z", "FaJ71EibI4-", "PI_TSh5gFC", "GOnbJ9Z1MI", "SMvrQKgk7sO", "ktREa2lMI1", "htxDXUXn-J", "iQsLC-SghHj", "zMiYTAyu9ol", "i-aCrd2x24h", "5dyMtkje9xP", "HMxT2qkysgG", "-BsbvO97bdH" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors establish a relationship between the generalization error of trajectories obtained from a training algorithm and the persistent homology (PH) dimension. The theoretical contribution of this work involves combining two results: (1) box dimension of a bounded set can be computed using PH0 dimension [KLS0...
[ 8, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 7, 2 ]
[ 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_099uYP0EKsJ", "ktREa2lMI1", "GOnbJ9Z1MI", "i-aCrd2x24h", "nips_2021_099uYP0EKsJ", "htxDXUXn-J", "SMvrQKgk7sO", "zMiYTAyu9ol", "-BsbvO97bdH", "HMxT2qkysgG", "PsvdUkMZD2Z", "nips_2021_099uYP0EKsJ", "nips_2021_099uYP0EKsJ" ]
nips_2021_HS_sOaxS9K-
GemNet: Universal Directional Graph Neural Networks for Molecules
Effectively predicting molecular interactions has the potential to accelerate molecular dynamics by multiple orders of magnitude and thus revolutionize chemical simulations. Graph neural networks (GNNs) have recently shown great successes for this task, overtaking classical methods based on fixed molecular kernels. However, they still appear very limited from a theoretical perspective, since regular GNNs cannot distinguish certain types of graphs. In this work we close this gap between theory and practice. We show that GNNs with directed edge embeddings and two-hop message passing are indeed universal approximators for predictions that are invariant to translation, and equivariant to permutation and rotation. We then leverage these insights and multiple structural improvements to propose the geometric message passing neural network (GemNet). We demonstrate the benefits of the proposed changes in multiple ablation studies. GemNet outperforms previous models on the COLL, MD17, and OC20 datasets by 34%, 41%, and 20%, respectively, and performs especially well on the most challenging molecules. Our implementation is available online.
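The two-hop directional message passing that the abstract highlights can be caricatured in a few lines: a message along edge j->i also sees every incoming edge k->j through the angle between the two edge directions, together with a radial feature. The angle/distance combination below is a crude stand-in for GemNet's learned filters and basis functions.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(size=(6, 3))                       # toy atom positions
edges = [(i, j) for i in range(6) for j in range(6) if i != j]

def unit(a, b):
    d = pos[b] - pos[a]
    return d / np.linalg.norm(d)

# Two-hop directional aggregation over edge pairs (k->j, j->i).
h = np.zeros(6)
for (j, i) in edges:
    for (k, j2) in edges:
        if j2 == j and k != i:
            cos_a = unit(k, j) @ unit(j, i)          # angular feature
            dist = np.linalg.norm(pos[i] - pos[j])   # radial feature
            h[i] += cos_a / (1.0 + dist)             # stand-in for learned filters
print(h)
```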
accept
This paper was generally well received. Reviewers unanimously agreed that the paper was interesting and impactful, especially given the widespread interest in message passing neural networks to model atomistic systems. It seems clear that NeurIPS is an appropriate venue for this publication. The authors provided a number of clarifications and further experiments in response to the reviews that helped to address several concerns (in particular regarding baselines against existing models such as NequIP and ablation studies). The main issues that could not be addressed during the rebuttal period were the clarity of exposition which several reviewers commented on. I would encourage the authors to work on simplifying the paper as much as possible in the time leading up to the camera ready.
test
[ "6P20eEr78Dz", "LJXTMbsqA8K", "o18oBtnm5Gk", "VkP4IUIuJB5", "2TOYnQsG3P", "nAoV6hrvA5j", "8vydfIdkbQ", "3t9_yHU-73k", "NtSBAB7JbP", "ywgskNKnopI" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a model Geometric Message Passing Neural Network (GemNet) which leverages directed edge embeddings and a new two-hop message passing scheme. The model is more expressive and performs convincingly better than the most related baseline (DimeNet) as well as other baselines across a large range of ...
[ 7, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 4, 2, 4 ]
[ "nips_2021_HS_sOaxS9K-", "NtSBAB7JbP", "NtSBAB7JbP", "ywgskNKnopI", "nips_2021_HS_sOaxS9K-", "6P20eEr78Dz", "3t9_yHU-73k", "nips_2021_HS_sOaxS9K-", "nips_2021_HS_sOaxS9K-", "nips_2021_HS_sOaxS9K-" ]
nips_2021_St4i_-UoQWQ
Loss function based second-order Jensen inequality and its application to particle variational inference
Bayesian model averaging, obtained as the expectation of a likelihood function under a posterior distribution, has been widely used for prediction, evaluation of uncertainty, and model selection. Various approaches have been developed to efficiently capture the information in the posterior distribution; one such approach optimizes a set of models simultaneously, with interactions that ensure the diversity of the individual models in the same way as ensemble learning. A representative approach is particle variational inference (PVI), which uses an ensemble of models as an empirical approximation of the posterior distribution. PVI iteratively updates each model with a repulsion force to ensure the diversity of the optimized models. However, despite its promising performance, a theoretical understanding of this repulsion and its association with generalization ability remains unclear. In this paper, we tackle this problem in light of PAC-Bayesian analysis. First, we provide a new second-order Jensen inequality, which has a repulsion term based on the loss function. Thanks to the repulsion term, it is tighter than the standard Jensen inequality. Then, we derive a novel generalization error bound and show that it can be reduced by enhancing the diversity of the models. Finally, we derive a new PVI that optimizes the generalization error bound directly. Numerical experiments demonstrate that the performance of the proposed PVI compares favorably with existing methods.
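A deliberately loose caricature of the role of the repulsion term: an ensemble objective that trades mean loss against the diversity (here, simply the variance) of per-model losses. The exact second-order term in the paper differs; this only illustrates why rewarding diversity can tighten the bound.

```python
import numpy as np

def ensemble_objective(per_model_losses, lam=0.1):
    """Mean loss minus a diversity (repulsion) bonus, echoing the 2nd-order term."""
    losses = np.asarray(per_model_losses)
    return losses.mean() - lam * losses.var()

# Two ensembles with equal mean loss; the more diverse one scores better.
print(ensemble_objective([0.5, 0.5, 0.5]))
print(ensemble_objective([0.2, 0.5, 0.8]))
```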
accept
This submission uses an inequality for the likelihood that is tighter than Jensen's inequality, obtained via a second-order expansion. The additional terms are then interpreted as a diversity term. A PAC-Bayesian analysis is then developed based on this inequality. The submission initially received a mix of scores straddling the acceptance threshold. An external expert was consulted, giving a mixed opinion (see copy of comments below), while the lower-scoring reviewer was convinced to revise their score upwards during the rebuttal process. On balance, the paper is viewed positively, but it has some important issues that should be corrected prior to publication. These relate to incorporating reviewer comments (see below), but importantly to avoiding obvious technical inaccuracies such as "MAP estimation can be used to obtain samples from the posterior" (Section 2; see the expert opinion below). Additional expert opinion: "The idea is simple overall: they use a tighter inequality than Jensen's for the likelihood by exploiting a second-order expansion. This results in an additional term which is interpreted as a diversity term. They combine this bound with a PAC-Bayesian bound to define a suitable loss for sampling from the posterior in a particle fashion. Then they proceed to relate this loss to existing methods, in particular w-SGLD and DPP, showing that their bound can result in methods that are formally similar to these prior works. I didn't find the paper particularly nice to read; for instance, **some important details raised by Reviewer 2 were only discussed in the appendix without being mentioned in the main text**. The experiments did not compare with very standard baselines such as the ELBO or even SMC methods, which are also particle methods. In the text they say that they used HMC as a baseline, but I couldn't see the performance of HMC in the results. There are also some inaccuracies in Section 2.1: they claim that MAP estimation can be used to obtain samples from the posterior, but of course this is wrong... This is on the negative side. On the positive side: the PAC-Bayesian community might be interested in such a result, as it relies on a bound tighter than Jensen's to derive the PAC-Bayesian bound. And the connection to other methods is also interesting at a high level, although it is unclear to what extent this connection can say anything specific about w-SGLD and DPP in terms of generalization bounds, etc. If the paper is accepted, then the remarks by Reviewer 2 should be taken into account, and in general the paper should mention in the main text what is going on in the appendix."
train
[ "OLm8J7NODIf", "SqFk38deRM1", "GFxd1WO1JF", "OVfxbVbbtQG", "-ekYAV9vj84", "379JEbPzR1", "rIgmnryVJcY", "GlZhVfxlAth", "hYy4mp0d4X", "75dFuTt_jFv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thanks. The author addressed my most concerns. I will raise my score.", "This paper derives a tighter Jensen Inequality and applied it into the framework of particle variational inference. Different from previous work, it tackles this problem in light of PAC-Bayesian analysis, and provides a new second-order Je...
[ -1, 6, 7, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 3, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "OVfxbVbbtQG", "nips_2021_St4i_-UoQWQ", "nips_2021_St4i_-UoQWQ", "rIgmnryVJcY", "GlZhVfxlAth", "hYy4mp0d4X", "SqFk38deRM1", "75dFuTt_jFv", "GFxd1WO1JF", "nips_2021_St4i_-UoQWQ" ]
nips_2021_-440wKL2oJV
Detecting and Adapting to Irregular Distribution Shifts in Bayesian Online Learning
We consider the problem of online learning in the presence of distribution shifts that occur at an unknown rate and of unknown intensity. We derive a new Bayesian online inference approach to simultaneously infer these distribution shifts and adapt the model to the detected changes by integrating ideas from change point detection, switching dynamical systems, and Bayesian online learning. Using a binary ‘change variable,’ we construct an informative prior such that--if a change is detected--the model partially erases the information of past model updates by tempering to facilitate adaptation to the new data distribution. Furthermore, the approach uses beam search to track multiple change-point hypotheses and selects the most probable one in hindsight. Our proposed method is model-agnostic, applicable in both supervised and unsupervised learning settings, suitable for an environment of concept drifts or covariate drifts, and yields improvements over state-of-the-art Bayesian online learning approaches.
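A toy illustration of the "detect, then temper" pattern the abstract describes, for online Gaussian mean estimation: when predictive surprise is high, the posterior precision is tempered (partially forgotten) so the model adapts quickly. The surprise threshold and tempering factor are our own toy assumptions, not the paper's inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Online Bayesian mean estimation with a crude binary change variable.
mu, prec = 0.0, 1e-2                    # Gaussian posterior over the mean
for t in range(200):
    x = rng.normal(loc=0.0 if t < 100 else 5.0, scale=1.0)  # shift at t=100
    surprise = 0.5 * prec * (x - mu) ** 2 / (1.0 + prec)
    if surprise > 4.0:                  # change detected
        prec *= 0.1                     # tempering: erase part of past updates
    mu = (prec * mu + x) / (prec + 1.0) # conjugate Gaussian update (noise var 1)
    prec += 1.0
print(round(mu, 3))                     # adapts toward the post-change mean of 5
```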
accept
This paper proposed a Bayesian framework for detecting and adapting to irregular distribution shifts. The reviewers generally appreciated the submission and felt that the author response did a good job of addressing questions. While there were still some remaining questions about design decisions in the approach (see reviews), on the whole the paper represents a good contribution. Please carefully consider and incorporate reviewer comments in the final version.
train
[ "0jnhKBQXK6", "2vHvVnUWwVX", "JzVTY7L2nI5", "ArhEnX7TEC", "d1kjC7Qf_9", "AZiPBMBaaU", "0KJjwV_RDaw", "lUVXdgjXits", "vwORUxHrUe", "sNhtqi9Sjze", "-rThZ0pA8p9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response, that clarifies it. I will keep my score.", "In this paper, the authors propose a method to jointly model the data and also change points in the model parameters.\nThis is accomplished following a Bayesian approach, where the full vector of change point detections (as binary indicator...
[ -1, 4, 8, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, 4, 4, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "d1kjC7Qf_9", "nips_2021_-440wKL2oJV", "nips_2021_-440wKL2oJV", "AZiPBMBaaU", "sNhtqi9Sjze", "JzVTY7L2nI5", "-rThZ0pA8p9", "2vHvVnUWwVX", "nips_2021_-440wKL2oJV", "nips_2021_-440wKL2oJV", "nips_2021_-440wKL2oJV" ]
nips_2021_9x10Q5J8e9W
Asynchronous Decentralized SGD with Quantized and Local Updates
Decentralized optimization is emerging as a viable alternative for scalable distributed machine learning, but also introduces new challenges in terms of synchronization costs. To this end, several communication-reduction techniques, such as non-blocking communication, quantization, and local steps, have been explored in the decentralized setting. Due to the complexity of analyzing optimization in such a relaxed setting, this line of work often assumes \emph{global} communication rounds, which require additional synchronization. In this paper, we consider decentralized optimization in the simpler, but harder to analyze, \emph{asynchronous gossip} model, in which communication occurs in discrete, randomly chosen pairings among nodes. Perhaps surprisingly, we show that a variant of SGD called \emph{SwarmSGD} still converges in this setting, even if \emph{non-blocking communication}, \emph{quantization}, and \emph{local steps} are all applied \emph{in conjunction}, and even if the node data distributions and the underlying graph topology are both \emph{heterogeneous}. Our analysis is based on a new connection with multi-dimensional load-balancing processes. We implement this algorithm and deploy it in a super-computing environment, showing that it can outperform previous decentralized methods in terms of end-to-end training time, and that it can even rival carefully-tuned large-batch SGD for certain tasks.
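A toy simulation of the asynchronous gossip pattern described above: random node pairings, local SGD steps, and quantized pairwise averaging. The quantizer, step size, and quadratic local objectives are stand-in assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, rounds = 8, 4, 200

def quantize(v, levels=16):
    """Crude uniform quantizer standing in for the real compression scheme."""
    scale = np.abs(v).max() + 1e-12
    return np.round(v / scale * levels) / levels * scale

x = rng.normal(size=(n_nodes, dim)) * 5.0     # each node's model parameters

for _ in range(rounds):
    i, j = rng.choice(n_nodes, size=2, replace=False)  # random gossip pairing
    for k in (i, j):
        x[k] -= 0.1 * x[k]                    # local SGD step on f_k(x)=||x||^2/2
    delta = quantize(x[i] - x[j])             # communicate a quantized difference
    x[i] -= delta / 2.0
    x[j] += delta / 2.0

print(np.linalg.norm(x, axis=1))              # nodes contract toward consensus
```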
accept
This paper presents a gossip-style method to perform distributed optimization of a sum of individual functions, combining stochastic gradient updates with quantization mechanisms. In addition to a theoretical guarantee on the scheme's convergence, the paper reports on experiments training i) ResNets in PyTorch on the CIFAR-10/100 and ImageNet datasets, and ii) a larger Transformer-XL model in TensorFlow on the WMT17 dataset. The reviews for this paper all recommend acceptance, being especially laudatory about the scale of the experiments performed to validate the proposed approach. The reviews also point to an implicit assumption of atomic updates in the algorithm's operation, which the authors acknowledge and will make explicit in the final version. The authors' experimental work is indeed quite significant. It is nice to have a theoretical guarantee; a discussion of the meaning, in theory and in the experiments, of the assumptions (especially the 'Bounded Local Function Variance' assumption) would be enlightening.
train
[ "lCUsRFxJ-Bh", "DJyET8oCO6F", "cqNrdLuqbR2", "gb0Z7gnySRE", "lfPWS9fDBxH", "j-4kQJ4d8ah", "Hm_F2qI-mk", "awDSqgCApk", "8D__UYup1h", "91IEPXqGFOL", "-i4eHNFQdez", "ABGZ0MwSPu2" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your detailed response! We believe we have converged on most points, and will just make two very minor clarifications:\n\n1. The second-moment bound requirement can in fact be completely removed, if we do not use quantization. We will state the resulting bound in the next version of our wo...
[ -1, -1, 7, -1, 6, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "DJyET8oCO6F", "j-4kQJ4d8ah", "nips_2021_9x10Q5J8e9W", "Hm_F2qI-mk", "nips_2021_9x10Q5J8e9W", "ABGZ0MwSPu2", "cqNrdLuqbR2", "nips_2021_9x10Q5J8e9W", "lfPWS9fDBxH", "-i4eHNFQdez", "nips_2021_9x10Q5J8e9W", "nips_2021_9x10Q5J8e9W" ]
nips_2021_cc_AXK6rWPJ
Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret
Jean Tarbouriech, Runlong Zhou, Simon S. Du, Matteo Pirotta, Michal Valko, Alessandro Lazaric
accept
This paper makes a solid contribution to pushing the frontier of the stochastic shortest path problem. All reviewers strongly support acceptance.
train
[ "Kzh5KDH5LRx", "Y2GtXJaA4MM", "o8cZwe7uDR", "WicY10vbVxO", "HAFJyRDbJK0", "PEo58uqjhm", "xMhGNZG_7xl", "1AgqdxkagnQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper investigates the problem of regret minimization of the stochastic shortest path, which is actively studied in the recent two years. The main contribution is a novel algorithm that enjoys nearly minimax optimal regret, matching the existing lower bound up to some log factors. Furthermore, the algorithm, ...
[ 8, 7, -1, -1, -1, -1, 8, 7 ]
[ 3, 4, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_cc_AXK6rWPJ", "nips_2021_cc_AXK6rWPJ", "Y2GtXJaA4MM", "1AgqdxkagnQ", "xMhGNZG_7xl", "Kzh5KDH5LRx", "nips_2021_cc_AXK6rWPJ", "nips_2021_cc_AXK6rWPJ" ]
nips_2021_z3tlL2MeTK2
Nested Counterfactual Identification from Arbitrary Surrogate Experiments
The Ladder of Causation describes three qualitatively different types of activities an agent may be interested in engaging in, namely, seeing (observational), doing (interventional), and imagining (counterfactual) (Pearl and Mackenzie, 2018). The inferential challenge imposed by the causal hierarchy is that data is collected by an agent observing or intervening in a system (layers 1 and 2), while its goal may be to understand what would have happened had it taken a different course of action, contrary to what factually ended up happening (layer 3). While there exists a solid understanding of the conditions under which cross-layer inferences are allowed from observations to interventions, the results are somewhat scarcer when targeting counterfactual quantities. In this paper, we study the identification of nested counterfactuals from an arbitrary combination of observations and experiments. Specifically, building on a more explicit definition of nested counterfactuals, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones. For instance, applications in mediation and fairness analysis usually evoke notions of direct, indirect, and spurious effects, which naturally require nesting. Second, we introduce a sufficient and necessary graphical condition for counterfactual identification from an arbitrary combination of observational and experimental distributions. Lastly, we develop an efficient and complete algorithm for identifying nested counterfactuals; failure of the algorithm to return an expression for a query implies it is not identifiable.
accept
This paper was discussed at length between the AC and SAC, and between the SAC and program chairs. The AC was strongly against publication due to the lack of appropriate engagement with prior work. The SAC and program chairs agree with this concern. The discussion of the relationship with prior work is insufficient and absolutely must be addressed in the final version. However, in light of the positive sentiment from four out of five reviewers, we believe the paper should be accepted. (We are counting as "positive sentiment" one of the expert reviewers who gave the paper a low score, based on their comment: "This is a potentially strong paper but I would not like to see it published in the current form because I see that the presentation could be improved in many ways. I am afraid that publishing the paper in the current form would create confusion. On the other hand, I think it is relatively easy to fix the paper and resubmit.") We expect the authors to address the points raised by the expert reviewers. We would not be accepting this paper if we didn't think this was something the authors were capable of doing. The original meta-review provided by the AC follows. ---- A summary from one of the reviews: "The authors study the identification of nested counterfactuals from an arbitrary combination of observations and experiments. They prove the counterfactual unnesting theorem (CUT), which allows for the mapping of nested to unnested counterfactuals. Then they introduce 1) sufficient and necessary graphical conditions for counterfactual identification from an arbitrary combination of observational and empirical distributions, and 2) an algorithm for identifying nested counterfactuals, the failure of which implies non-identifiability." The paper was initially received fairly positively. However, some positive reviews were very short and lacking detail, and all were of relatively low confidence. As a result, an additional expert reviewer's opinion was solicited, and (at the behest of the senior AE) a second expert reviewer was recruited. The paper received an extensive discussion, both with the authors and among reviewers. The result of this discussion was an overall reduction of scores. The concerns were as follows. First, the authors' contribution builds on prior work on identification of joints and conditionals of counterfactual events. While the authors do cite this work, both expert reviewers independently felt that there was far from a sufficient comparison to it. In particular, while it is clear that the authors consider a generalization (where only some interventional distributions are allowed, in the spirit of (Lee et al, 2019)), it is not at all clear if sufficiently novel methods, in light of existing prior work, are needed to address this generalization. The authors' manuscript, as it is currently written, does not make clear which developments are novel and unique to the paper, and which are reformulations of known prior work. Second, the authors claim to consider identification of nested distributions, and discussion of nested distributions forms much of the paper (and indeed appears in the title). However, the authors do not consider a complete algorithm for the problem of identifiability of nested counterfactuals (and indeed did not seem to answer a direct question about this). Thus, reviewers felt the contribution, as written, did not properly emphasize the actual target they considered.
In other words, it is true that the algorithm considered nested counterfactuals as valid targets, but only in the weak sense that such a counterfactual is a (marginal) function of a joint set of events (via what the authors call the CUT). But it is surely the case that in some cases marginals are identified while joints are not. As a result, the reviewers felt the paper needs a restructure and another round of reviews. Feedback for authors. Two independent expert reviewers agreed that the paper needs a more extensive discussion of prior work, in order to make it clearer how novel the contribution is. The authors also need to make it clearer what the completeness argument applies to. It appears not to apply to nested distributions (a class of queries appearing in the title). This is quite confusing. Please address these issues when preparing your revision.
train
[ "CcRAwfQ-36X", "qqLsYbrLpRn", "kVOUm9wmC7F", "J1frWs7TBZ", "fMDhWQyPF53", "Ow3HNzyUVC", "7npYx-8_wWx", "MnE4ImnHvHg", "OpuoQZb56YD", "4EWDCJYYUtT", "pFEI6Xd42DP", "Dj7ZFj8ve7E", "wY4KIJk4ju3", "yDTKVZEpiZ", "1ptj-zc5iyL", "GeD3hHCN_B", "B_1Mh2gcDz" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for clarifying the scope of Theorem 5 and 7 and the relationship to unit-level counterfactuals.\n\nDespite the mixed reviews of this submission, my positive assessment of the work remains unchanged.", " Since this is an interesting point for the reviewers and potential readers who are familiar with SP...
[ -1, -1, -1, -1, 6, -1, -1, 4, -1, -1, -1, -1, -1, -1, 5, 8, 7 ]
[ -1, -1, -1, -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 2, 3 ]
[ "Dj7ZFj8ve7E", "kVOUm9wmC7F", "7npYx-8_wWx", "Ow3HNzyUVC", "nips_2021_z3tlL2MeTK2", "wY4KIJk4ju3", "MnE4ImnHvHg", "nips_2021_z3tlL2MeTK2", "4EWDCJYYUtT", "yDTKVZEpiZ", "B_1Mh2gcDz", "GeD3hHCN_B", "fMDhWQyPF53", "1ptj-zc5iyL", "nips_2021_z3tlL2MeTK2", "nips_2021_z3tlL2MeTK2", "nips_20...
nips_2021_t0B9XQwRDi
Sim and Real: Better Together
Simulation is used extensively in autonomous systems, particularly in robotic manipulation. By far, the most common approach is to train a controller in simulation, and then use it as an initial starting point for the real system. We demonstrate how to learn simultaneously from both simulation and interaction with the real environment. We propose an algorithm for balancing the large number of samples from the high-throughput but less accurate simulation and the low-throughput, high-fidelity and costly samples from the real environment. We achieve that by maintaining a replay buffer for each environment the agent interacts with. We analyze such multi-environment interaction theoretically, and provide convergence properties, through a novel theoretical replay buffer analysis. We demonstrate the efficacy of our method on a sim-to-real environment.
accept
The subject of this paper is sim2real transfer, and it tackles this with two different parts. Firstly, it proposes an algorithm for sampling interactions from two different replay buffers, a simulated and a real one, the objective being to draw far fewer samples from the real environment, where interactions are costly. The algorithm is evaluated in an experimental part. Secondly, the paper contains a theoretical part with a convergence analysis of the provided algorithm. This paper was always on the fence and was discussed by 4 reviewers, who essentially agreed on the strengths and weaknesses of the paper, but who disagreed on how to weight them and on the final decision. All reviewers agreed that the algorithm itself was not sufficiently novel and did exist in some form in the literature. They also agreed on the weaknesses of the experimental part, in particular the simplicity of the chosen tasks and environments. It then boiled down to whether the theoretical insights (convergence properties) are of interest to the community and whether they justify acceptance. Here, reviewers were split 50/50. The AC agreed that this paper is on the fence but judges that the theoretical derivations are of value and merit publication at NeurIPS. This paper was discussed between the AC and SAC.
train
[ "qwM2Jsh8kJq", "PbbNF5_3jeG", "55RWONL-b9K", "5_ADw1Hc5a2", "BifNFeYVEww", "lULwB4bvG8", "VQySEVj2T9f", "lKQKAxnILBo" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper addresses the challenge of sim-to-real transfer, i.e., how to use simulations for robot learning in the real world. It proposes the use of information from rollouts in both the simulated and the real world. The underlying technique is reminiscent of importance sampling. While the approach is explained fo...
[ 4, -1, -1, -1, -1, 7, 6, 4 ]
[ 3, -1, -1, -1, -1, 3, 3, 4 ]
[ "nips_2021_t0B9XQwRDi", "VQySEVj2T9f", "lKQKAxnILBo", "lULwB4bvG8", "qwM2Jsh8kJq", "nips_2021_t0B9XQwRDi", "nips_2021_t0B9XQwRDi", "nips_2021_t0B9XQwRDi" ]
nips_2021_EckG_zyssVj
Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions
Multimodal regression is a fundamental task, which integrates the information from different sources to improve the performance of follow-up applications. However, existing methods mainly focus on improving the performance and often ignore the confidence of prediction for diverse situations. In this study, we are devoted to trustworthy multimodal regression, which is critical in cost-sensitive domains. To this end, we introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in principle for adaptive integration of different modalities and produces a trustworthy regression result. Our model can be dynamically aware of uncertainty for each modality, and also robust for corrupted modalities. Furthermore, the proposed MoNIG ensures explicit representation of (modality-specific/global) epistemic and aleatoric uncertainties, respectively. Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks (e.g., temperature prediction for superconductivity, relative location prediction for CT slices, and multimodal sentiment analysis).
accept
In this paper, the authors propose a new regression method for multi-modal (e.g., image and text) data. More specifically, the authors extend the Deep Evidential Regression method to the multi-modal regression problem. The method depends heavily on existing methods. However, the formulation is new and the fusion strategy has some merit. Thus, I also vote for acceptance. For the camera-ready version, I expect the authors to revise the paper based on the reviewers' comments.
train
[ "Hqo35gTHcC", "_xM62hvxgJF", "dwxzpLEnG2z", "e-ad5LMLPi", "55WY0FJBECU", "6ZDIyPGsoE", "s1uXQU-5htW", "H0yag5CPU8J", "7NAKkoJ0cP", "5H7j8KBVF3B", "YF985P9IwFW", "YyNE7rsmkAi", "QcGMsNjgC_Y", "P4M_rt0t8_Z", "EAXMRNdXGna" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a Mixture of Normal-Inverse Gamma distributions algorithm to estimate uncertainty in principle for adaptive integration of different modalities.\n\nThis research problem proposed in this paper is very important, and the author proposed some techniques and conducted some experiments. This paper...
[ 8, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_EckG_zyssVj", "nips_2021_EckG_zyssVj", "6ZDIyPGsoE", "H0yag5CPU8J", "s1uXQU-5htW", "5H7j8KBVF3B", "YF985P9IwFW", "YyNE7rsmkAi", "EAXMRNdXGna", "_xM62hvxgJF", "P4M_rt0t8_Z", "Hqo35gTHcC", "nips_2021_EckG_zyssVj", "nips_2021_EckG_zyssVj", "nips_2021_EckG_zyssVj" ]
nips_2021_HRE7guiwMgG
An Empirical Study of Adder Neural Networks for Object Detection
Xinghao Chen, Chang Xu, Minjing Dong, Chunjing XU, Yunhe Wang
accept
On the whole, the reviewers agreed that the paper was well presented and thoroughly evaluated. There were some points raised about how much of an improvement this provides, along with some clarity questions, but these concerns were addressed by the authors in the rebuttal, and I think the paper's results will be of interest to the NeurIPS community.
train
[ "1NtvPd-Lv3N", "0Y-eKAhZSM-", "lMXaRVM0o6j", "r63_-FNNS7A", "mYmhE0sbDET", "lb7e72cuvC_", "0nU1webyFLp", "mftd_30HHcY" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the constructive comments.\n\n**Impact of each individual tweak seems small**\n\nThanks for the concerns. The results shown in Table 1 indicate the different components do have considerably significant improvements. Other techniques like hyper-parameter tuning, cleaning data, or some oth...
[ -1, -1, -1, -1, 6, 4, 5, 5 ]
[ -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "mftd_30HHcY", "0nU1webyFLp", "lb7e72cuvC_", "mYmhE0sbDET", "nips_2021_HRE7guiwMgG", "nips_2021_HRE7guiwMgG", "nips_2021_HRE7guiwMgG", "nips_2021_HRE7guiwMgG" ]
nips_2021_7J-fKoXiReA
Does Knowledge Distillation Really Work?
Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks. We show that while knowledge distillation can improve student generalization, it does not typically work as it is commonly understood: there often remains a surprisingly large discrepancy between the predictive distributions of the teacher and the student, even in cases when the student has the capacity to perfectly match the teacher. We identify difficulties in optimization as a key reason for why the student is unable to match the teacher. We also show how the details of the dataset used for distillation play a role in how closely the student matches the teacher --- and that more closely matching the teacher paradoxically does not always lead to better student generalization.
accept
The authors' response and the subsequent discussion convinced most reviewers that this paper presents solid results about distillation that are of interest to the community.
train
[ "g8bOs7rlm5b", "kk3rP-BJfR9", "Cvy42zm6-Di", "WIOpo_l8_jx", "jOmK7EijjmY", "2zA5r7F1Acx", "LugFDsnVpDi", "wuJz3jjRhO", "9pXJBh3GKJ", "bTrjiIzgWQs", "5MT97YfS5B", "I4jjcUk1Joe", "NpqnkTkL9Bc", "ktlGjfKLGzL", "WgQHrQpB0q", "-vzrUpSELa0", "eihwloLvm34", "Kd9Qkv1geOR" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " I would like to thank the authors for carefully trying to address the many points raised in the reviews. I think that the additions to the manuscript help delineate the somewhat complex but intriguing interplay between fidelity and generalization. Personally, I see as a positive the fact that the whole phenomenon...
[ -1, 7, -1, -1, 5, -1, -1, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, -1, -1, 3, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "WgQHrQpB0q", "nips_2021_7J-fKoXiReA", "2zA5r7F1Acx", "LugFDsnVpDi", "nips_2021_7J-fKoXiReA", "eihwloLvm34", "NpqnkTkL9Bc", "nips_2021_7J-fKoXiReA", "nips_2021_7J-fKoXiReA", "jOmK7EijjmY", "nips_2021_7J-fKoXiReA", "nips_2021_7J-fKoXiReA", "wuJz3jjRhO", "Kd9Qkv1geOR", "kk3rP-BJfR9", "9p...
nips_2021_PPh6lqP5BO
Teachable Reinforcement Learning via Advice Distillation
Training automated agents to perform complex behaviors in interactive environments is challenging: reinforcement learning requires careful hand-engineering of reward functions, imitation learning requires specialized infrastructure and access to a human expert, and learning from intermediate forms of supervision (like binary preferences) is time-consuming and provides minimal information per human intervention. Can we overcome these challenges by building agents that learn from rich, interactive feedback? We propose a new supervision paradigm for interactive learning based on teachable decision-making systems, which learn from structured advice provided by an external teacher. We begin by introducing a class of human-in-the-loop decision making problems in which different forms of human provided advice signals are available to the agent to guide learning. We then describe a simple policy learning algorithm that first learns to interpret advice, then learns from advice to target tasks in the absence of human supervision. In puzzle-solving, navigation, and locomotion domains, we show that agents that learn from advice can acquire new skills with significantly less human supervision required than standard reinforcement or imitation learning systems.
accept
The reviewers were very impressed by the amount of additional material provided during the author response phase. The extensive replies addressed all major concerns of the reviewers. Incorporating all of it into the paper will be a challenge, though.
train
[ "HMkc8ASB0Ys", "9mYRf5onrou", "1cKHCnGBzr2", "3UdXdPNRxvJ", "PjyAPXlcRa_", "4YXdwuhnrFh", "hrmtlGd07eX", "eSh3BdO7LE", "j0BxjMr9HVp", "znjfnIpoQK6", "5vs7cQgVXFo", "z-v9CqY7bN", "e9RUnSnnRTD", "jD6xVMTz3U3", "IMsS8Tjxru", "fLacYwrscN", "CFmx3pDEsv7", "xprLPzlQaB", "P4CQaAXNd9W" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_re...
[ " I am maintaining my original score of 7.", " I Raised the score to 7!", "This paper presents a framework for teaching agents using language advice. The framework consists of three stages: (i) grounding language advice to action, (ii) learning an unconditioned policy that mimics the advice-conditioned policy (...
[ -1, -1, 7, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "3UdXdPNRxvJ", "3UdXdPNRxvJ", "nips_2021_PPh6lqP5BO", "hrmtlGd07eX", "jD6xVMTz3U3", "hrmtlGd07eX", "CFmx3pDEsv7", "nips_2021_PPh6lqP5BO", "nips_2021_PPh6lqP5BO", "nips_2021_PPh6lqP5BO", "znjfnIpoQK6", "e9RUnSnnRTD", "IMsS8Tjxru", "xprLPzlQaB", "j0BxjMr9HVp", "P4CQaAXNd9W", "1cKHCnGBz...
nips_2021_sR1XB9-F-rv
Antipodes of Label Differential Privacy: PATE and ALIBI
We consider the privacy-preserving machine learning (ML) setting where the trained model must satisfy differential privacy (DP) with respect to the labels of the training examples. We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework, and demonstrate their effectiveness on standard benchmarks. While recent work by Ghazi et al. proposed Label DP schemes based on a randomized response mechanism, we argue that additive Laplace noise coupled with Bayesian inference (ALIBI) is a better fit for typical ML tasks. Moreover, we show how to achieve very strong privacy levels in some regimes, with our adaptation of the PATE framework that builds on recent advances in semi-supervised learning. We complement theoretical analysis of our algorithms' privacy guarantees with empirical evaluation of their memorization properties. Our evaluation suggests that comparing different algorithms according to their provable DP guarantees can be misleading and favor a less private algorithm with a tighter analysis. Code for implementation of algorithms and memorization attacks is available from https://github.com/facebookresearch/labeldpantipodes.
accept
In private deliberation, reviewers seemed to feel that the paper was sound, but perhaps not the most exciting (in particular, not much novelty in PATE-FM). Nonetheless, they felt it was thorough enough and above the bar for NeurIPS.
train
[ "g8sMMBFR1O-", "JWt1hg44plz", "D3OjMclJGm", "jeHNjrQp8C_", "WmaRRKRQK_F", "RJ1KwaOpfiK", "w9C8J5jgLu", "FsWqo3BIvNp", "A9Mi83sVpp", "NzpcJs4GNVD", "VOCBGsYPup", "0k2bnbuniRb", "llsCe4BssoM", "Nnn7Q3uSyI", "gvWnQ6RPDS4", "qbB14U0xVjV", "wH8g4G7g7ix", "BI80hVd5X-g" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewers for their questions and comments. The paper has definitely improved as a result. We would like to check one last time if there are any pending questions that we have not adequately addressed.", " Thanks for the confirmation. We will include a summarized version in the main t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_sR1XB9-F-rv", "w9C8J5jgLu", "RJ1KwaOpfiK", "WmaRRKRQK_F", "A9Mi83sVpp", "FsWqo3BIvNp", "NzpcJs4GNVD", "0k2bnbuniRb", "Nnn7Q3uSyI", "gvWnQ6RPDS4", "nips_2021_sR1XB9-F-rv", "BI80hVd5X-g", "VOCBGsYPup", "wH8g4G7g7ix", "qbB14U0xVjV", "nips_2021_sR1XB9-F-rv", "nips_2021_sR1XB9-...
nips_2021_ar85GL0N11
Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases
Visual search is a ubiquitous and often challenging daily task, exemplified by looking for the car keys at home or a friend in a crowd. An intriguing property of some classical search tasks is an asymmetry such that finding a target A among distractors B can be easier than finding B among A. To elucidate the mechanisms responsible for asymmetry in visual search, we propose a computational model that takes a target and a search image as inputs and produces a sequence of eye movements until the target is found. The model integrates eccentricity-dependent visual recognition with target-dependent top-down cues. We compared the model against human behavior in six paradigmatic search tasks that show asymmetry in humans. Without prior exposure to the stimuli or task-specific training, the model provides a plausible mechanism for search asymmetry. We hypothesized that the polarity of search asymmetry arises from experience with the natural environment. We tested this hypothesis by training the model on augmented versions of ImageNet where the biases of natural images were either removed or reversed. The polarity of search asymmetry disappeared or was altered depending on the training protocol. This study highlights how classical perceptual properties can emerge in neural network models, without the need for task-specific training, but rather as a consequence of the statistical properties of the developmental diet fed to the model. All source code and data are publicly available at https://github.com/kreimanlab/VisualSearchAsymmetry.
accept
This paper describes a new computational model of search asymmetry. The paper received two positive reviews and initially one negative review, but the rebuttal satisfied this reviewer, who increased their score to a marginal accept. All reviewers commented on the many strengths of this paper. Most of the limitations brought up by the reviewers were addressed in the rebuttal, and the remaining ones should be stated in the discussion.
train
[ "Mft8aXMjdjs", "_O_k-mBvE9T", "9FOU8M7dTh", "myHtVAFJ3mz", "Tsokci0tWm-", "63IRBkIqga", "II4lWzY08L", "tliPx8FTlj2", "Z0vxG8WPHC", "ZDKP9yYq8PO" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the feedback! Yes, we will revise the final version according to what we promised in the rebuttal period.", "This paper presents a paradigm in which a deep neural network searches for a target image. This is accomplished by 2 streams, one of which represents the search object and the o...
[ -1, 6, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "9FOU8M7dTh", "nips_2021_ar85GL0N11", "myHtVAFJ3mz", "II4lWzY08L", "63IRBkIqga", "ZDKP9yYq8PO", "_O_k-mBvE9T", "Z0vxG8WPHC", "nips_2021_ar85GL0N11", "nips_2021_ar85GL0N11" ]
nips_2021_Xci6vUAGeJ
On the Universality of Graph Neural Networks on Large Random Graphs
We study the approximation power of Graph Neural Networks (GNNs) on latent position random graphs. In the large graph limit, GNNs are known to converge to certain ``continuous'' models known as c-GNNs, which directly enables a study of their approximation power on random graph models. In the absence of input node features however, just as GNNs are limited by the Weisfeiler-Lehman isomorphism test, c-GNNs will be severely limited on simple random graph models. For instance, they will fail to distinguish the communities of a well-separated Stochastic Block Model (SBM) with constant degree function. Thus, we consider recently proposed architectures that augment GNNs with unique node identifiers, referred to as Structural GNNs here (SGNNs). We study the convergence of SGNNs to their continuous counterpart (c-SGNNs) in the large random graph limit, under new conditions on the node identifiers. We then show that c-SGNNs are strictly more powerful than c-GNNs in the continuous limit, and prove their universality on several random graph models of interest, including most SBMs and a large class of random geometric graphs. Our results cover both permutation-invariant and permutation-equivariant architectures.
accept
The paper studies structural GNNs in the large graph limit, where they converge towards their continuous counterpart, c-SGNN. The paper presents a variety of theoretical results showing that c-SGNNs are provably superior to the continuous version of vanilla GNNs, namely c-GNNs. In particular, the paper shows that c-SGNNs can recover communities in stochastic block models in regimes where c-GNNs fail. All the reviewers agreed that the paper presents a novel and interesting set of theoretical results. However, the reviewers also felt that the paper could do a better job of discussing and comparing with existing literature. I recommend the paper for acceptance with a strong recommendation to the authors that they take into account the reviewer comments about existing work when preparing the camera-ready version.
test
[ "1sD5P19vDdC", "HMMR9tlSn3x", "gujnXIBgxOT", "gC4condRBih", "FodSm3KTYjP", "klIDs02kmbC", "3xCbeRBTbDV", "Ce6NHGYLMpi" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed review.\n- **Clarification of the scope of our results**: we agree that the scope of the results deserves more explanation, and will elaborate on this in the paper. In this paper, our goal was primarily to characterize the power of GNNs of *large* graphs with many nodes. While graph iso...
[ -1, -1, -1, -1, 9, 7, 8, 5 ]
[ -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "Ce6NHGYLMpi", "3xCbeRBTbDV", "klIDs02kmbC", "FodSm3KTYjP", "nips_2021_Xci6vUAGeJ", "nips_2021_Xci6vUAGeJ", "nips_2021_Xci6vUAGeJ", "nips_2021_Xci6vUAGeJ" ]
nips_2021_-DyvEp1VsmT
Inverse Reinforcement Learning in a Continuous State Space with Formal Guarantees
Inverse Reinforcement Learning (IRL) is the problem of finding a reward function which describes observed/known expert behavior. The IRL setting is remarkably useful for automated control, in situations where the reward function is difficult to specify manually or as a means to extract agent preference. In this work, we provide a new IRL algorithm for the continuous state space setting with unknown transition dynamics by modeling the system using a basis of orthonormal functions. Moreover, we provide a proof of correctness and formal guarantees on the sample and time complexity of our algorithm. Finally, we present synthetic experiments to corroborate our theoretical guarantees.
accept
This paper provides an IRL algorithm for control processes with continuous states, discrete controls, and unknown transition dynamics with formal sample/time complexity guarantees. The authors addressed many of the initial concerns raised by the reviewers. The primary remaining ones are: (1) clarity and organization of the work---particularly where the assumptions are introduced; (2) the standardness and practicality of the assumption of a known and strictly optimal expert policy (rather than trajectory samples). The first issue seems possible to resolve in the paper's revision. For the second issue, the reviewers in discussion tended to view the paper as making an important theoretical contribution that could lead to extensions with more realistic/relaxed assumptions or serve as a bridge toward continuous state and control settings. Thus, there is still significant merit to the paper even with these concerns remaining. Given all of this, I recommend (weak) acceptance for the paper, but consider it to be the most borderline of the papers I am recommending for acceptance and I am not opposed to papers with stronger proponents being prioritized above it as space limits may require. As a minor comment to the authors: I agree with Reviewer pRjN that the paper title could potentially be misleading and suggest revision to "... Continuous State ..."
val
[ "GK64ohkzYzi", "DHg6MP2qUwx", "tIpGIf3Wwx", "joSOHj9cLAi", "S9tvPMiYyd7", "6fvkEBy5oM", "FCrUmNRJoU0", "Qw9bCQH_u4F", "92uNkuOmUX", "hrBOI0iO0LF", "1PjSdfSpmS", "gksaFPq_nrh", "Hq2I0mxc6YG", "uqfjhLyqQOp", "LKpW5Cmhsz9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies the IRL problem in the presence of a compact state space and finite actions. Under the assumption that the transition model of the MDP can be represented by means of an (infinite) set of orthogonal basis functions, the authors prove that there always exists a reward function explaining the expert...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "nips_2021_-DyvEp1VsmT", "hrBOI0iO0LF", "S9tvPMiYyd7", "gksaFPq_nrh", "FCrUmNRJoU0", "Qw9bCQH_u4F", "92uNkuOmUX", "1PjSdfSpmS", "LKpW5Cmhsz9", "GK64ohkzYzi", "uqfjhLyqQOp", "Hq2I0mxc6YG", "nips_2021_-DyvEp1VsmT", "nips_2021_-DyvEp1VsmT", "nips_2021_-DyvEp1VsmT" ]
nips_2021_eXxnkL3QfDY
Adversarial Attacks on Graph Classifiers via Bayesian Optimisation
Graph neural networks, a popular class of models effective in a wide range of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim models, or an impractically-large number of queries. We present a novel Bayesian optimisation-based attack method for graph classification models. Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack. Finally, we analyse common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models.
accept
This paper provides a new Bayesian Optimization approach to developing adversarial attacks on graphs. While some prior work has used a similar approach in other domains (e.g., images), it seems that the authors are the first to develop a successful attack method for graphs in this way. The reviewers felt that the experiments were mostly thorough and were, at the least, convinced of the paper's contributions. Moreover, there was a lot of discussion, but the reviewers seem to have reached consensus with the authors on the major issues. Therefore, I recommend acceptance of this paper. Given the lengthy discussion, I would strongly encourage the authors to be thoughtful and thorough in the revision for the next version of the paper. It seems that many of the reviewer concerns can be directly addressed in the paper by improving the writing and exposition.
train
[ "nHo68JNsbyg", "dGjCKPihtr", "NolzGIc8eYo", "2jM4GR0xzQr", "aAMccmewmzI", "F3X65wzeLy1", "aURAyikSIdh", "B3LtRa7PmYU", "1K2UeN-6l_1", "N08XGYq0-s", "HqmCmQR3bgD", "ZZ0CKOHloLi", "c-Yndg9o_oK", "okqfHPkK6Yc", "wJJ_gKYf9kR", "SIOIne7FjRS", "GBkc1Uyzm--", "Yi5018rIwzm", "Ur8MWu8hDQS...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_r...
[ " The authors' response has addressed my major concerns. Please make the promised changes accordingly in the updated version. ", "The paper proposes GRABNEL, which formulates the adversarial attack on graph classification as a black-box optimisation problem and uses Bayesian optimisation to solve it. Experiments ...
[ -1, 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 3, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "N08XGYq0-s", "nips_2021_eXxnkL3QfDY", "nips_2021_eXxnkL3QfDY", "bUMJzg-Am8c", "c-Yndg9o_oK", "ZZ0CKOHloLi", "1K2UeN-6l_1", "nips_2021_eXxnkL3QfDY", "Yi5018rIwzm", "HqmCmQR3bgD", "wJJ_gKYf9kR", "2jM4GR0xzQr", "SIOIne7FjRS", "NolzGIc8eYo", "dGjCKPihtr", "GBkc1Uyzm--", "nyq1lN_Dpf", ...
nips_2021_8RnRLP4SHe0
Regulating algorithmic filtering on social media
By filtering the content that users see, social media platforms have the ability to influence users' perceptions and decisions, from their dining choices to their voting preferences. This influence has drawn scrutiny, with many calling for regulations on filtering algorithms, but designing and enforcing regulations remains challenging. In this work, we examine three questions. First, given a regulation, how would one design an audit to enforce it? Second, does the audit impose a performance cost on the platform? Third, how does the audit affect the content that the platform is incentivized to filter? In response, we propose a method such that, given a regulation, an auditor can test whether that regulation is met with only black-box access to the filtering algorithm. We then turn to the platform's perspective. The platform's goal is to maximize an objective function while meeting regulation. We find that there are conditions under which the regulation does not place a high performance cost on the platform and, notably, that content diversity can play a key role in aligning the interests of the platform and regulators.
accept
This paper studies, from a theoretical perspective, the problem of auditing whether algorithmic filtering implementations respect a given regulation. The authors propose a simple hypothesis test that regulators could use to determine compliance given black-box access to a filtering algorithm. They show that their algorithm has desirable asymptotic properties. Finally, the authors show that under their framework platforms are incentivized to provide diverse content to their users. The topic is very timely and the presented theory/methodology is a strong contribution. Initially, some of the reviewers had concerns regarding the presentation of the results. The authors' reply, and especially the follow-up explanation/description by one of the reviewers, who enthusiastically championed the paper, cleared up these concerns to some extent. However, it is very important that the authors incorporate the reviewers' suggestions in the final version of the paper, including the running example provided by one of the reviewers, so that a wider audience can appreciate the contribution.
val
[ "K0KNGFL11Ls", "-s0-CqASOVu", "2csfwCfbeIq", "ZK_Snh_yrU4", "3WABT2y5K1U", "wv2DJp-tHBa", "PJjk6JMllD1", "yDKrXAtxLhp", "bP4voFoxfvK", "5z78SAmqVrd", "zfTwBcphRN1", "_Mm68yC_dRe", "SepN_OKyKxw", "aHMqA4WnRc-" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you! This summary (and the discussion as a whole) has been very helpful. ", " Thank you for the rebuttal! In discussions, three specific suggestions for improvement have emerged, which perhaps are better summarized in a new comment:\n- Grounding modelling choices (thank you for all the pointers above, the...
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 9, 5, 6 ]
[ -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "-s0-CqASOVu", "5z78SAmqVrd", "PJjk6JMllD1", "wv2DJp-tHBa", "nips_2021_8RnRLP4SHe0", "yDKrXAtxLhp", "bP4voFoxfvK", "3WABT2y5K1U", "_Mm68yC_dRe", "SepN_OKyKxw", "aHMqA4WnRc-", "nips_2021_8RnRLP4SHe0", "nips_2021_8RnRLP4SHe0", "nips_2021_8RnRLP4SHe0" ]
nips_2021_UDaab5xzpj
argmax centroid
We propose a general method to construct a centroid approximation for the distribution of maximum points of a random function (a.k.a. the argmax distribution), which finds broad applications in machine learning. Our method optimizes a set of centroid points to compactly approximate the argmax distribution with a simple objective function, without explicitly drawing exact samples from the argmax distribution. Theoretically, the argmax centroid method can be shown to minimize a surrogate of the Wasserstein distance between the ground-truth argmax distribution and the centroid approximation under proper conditions. We demonstrate the applicability and effectiveness of our method on a variety of real-world multi-task learning applications, including few-shot image classification, personalized dialogue systems and multi-target domain adaptation.
accept
This paper is well motivated by the problem of learning the distribution of a random function's minimizer, which has a wide range of applications. The paper made a solid contribution to this important problem by proposing an interesting and novel method (argmax centroids) with theoretical guarantees and experimental validations. During the rebuttal, concerns regarding the proofs of Theorem 2.2 were addressed. The final version should address the following points, as well as other suggestions made by the reviewers: 1. correcting the typos and highlighting the contributions in the introduction; 2. discussing the limitation of choosing the best model using a labeled set of test data; 3. providing detailed proofs of the folklore result on K-means and the Wasserstein distance.
train
[ "Gzw8hnt6xZf", "Pr8ARCcYycb", "uibMmixgsnu", "YHAhMA7wceH", "fZfkRXhTcdl", "Bj8qDcrBHBq", "cofiyuI0CkU", "cMLaTM5PuWA", "ZI62xPTYtfY", "2c9qHIIS-e", "_7GKJ0ZSqfU", "WHEPiWqPosm", "VKuHSUrWUbz", "X0uVzlEx7h", "n308jnsb3wQ", "nv4R_pN--AE", "Z9AKWLTZazm", "xJzC2-tAeFy" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer ZWfd, we want to thank you again for your comments and efforts. We will provide a detailed discussion on computation cost, as well as the case of no labels in new tasks in the revision. \n\n", " Dear reviewer pFrh, thank you again for your attention to our work. We were wondering if our response a...
[ -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 7 ]
[ -1, -1, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "cMLaTM5PuWA", "nv4R_pN--AE", "cofiyuI0CkU", "nips_2021_UDaab5xzpj", "WHEPiWqPosm", "nips_2021_UDaab5xzpj", "VKuHSUrWUbz", "ZI62xPTYtfY", "xJzC2-tAeFy", "nv4R_pN--AE", "YHAhMA7wceH", "Z9AKWLTZazm", "Bj8qDcrBHBq", "n308jnsb3wQ", "nips_2021_UDaab5xzpj", "nips_2021_UDaab5xzpj", "nips_20...
nips_2021_txWfwhc6gi
Contrastive Learning of Global and Local Video Representations
Contrastive learning has delivered impressive results for various tasks in the self-supervised regime. However, existing approaches optimize for learning representations specific to downstream scenarios, i.e., global representations suitable for tasks such as classification or local representations for tasks such as detection and localization. While they produce satisfactory results in the intended downstream scenarios, they often fail to generalize to tasks that they were not originally designed for. In this work, we propose to learn video representations that generalize to both the tasks which require global semantic information (e.g., classification) and the tasks that require local fine-grained spatio-temporal information (e.g., localization). We achieve this by optimizing two contrastive objectives that together encourage our model to learn global-local visual information given audio signals. We show that the two objectives mutually improve the generalizability of the learned global-local representations, significantly outperforming their disjointly learned counterparts. We demonstrate our approach on various tasks including action/sound classification, lip reading, deepfake detection, event and sound localization.
accept
All the reviewers appreciated the paper's global-local message, the clear exposition, the ablations, and the additional experiments carried out in response to the reviews.
train
[ "ZvLp4WfnpTc", "r8DTDHo-yhl", "zegml8-YOBT", "WxLISpGhawn", "PywV1cMhjWj", "bOzPqYdfFCo", "frm5PFXoVo", "50xF4E9VQ_z", "Dx8de9lZD4", "YDzWOvQav6" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an audio-visual self-supervised learning approach based on two cross-modal contrastive losses that learn audio-visual representations that can generalize to both the tasks which require global semantic information and localized spatio-temporal information. Extensive experiments on 4 downstream ...
[ 6, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "nips_2021_txWfwhc6gi", "zegml8-YOBT", "PywV1cMhjWj", "ZvLp4WfnpTc", "Dx8de9lZD4", "50xF4E9VQ_z", "YDzWOvQav6", "nips_2021_txWfwhc6gi", "nips_2021_txWfwhc6gi", "nips_2021_txWfwhc6gi" ]
nips_2021_qKRr_rNCEPz
BooVI: Provably Efficient Bootstrapped Value Iteration
Boyi Liu, Qi Cai, Zhuoran Yang, Zhaoran Wang
accept
The paper improves the state-of-the-art analysis of randomized RL algorithms with function approximation. The authors should improve the presentation and add more empirical support.
train
[ "E2wCOJEq_wt", "wB2sxW3VJZq", "gBgpkBi5HtO", "mPbxXVXxT2W", "n-fTCl3JvQo", "LenLQNn8__h", "0nvN65yGIsN", "51HSqHKO-yB", "ZbSTrMPUbEr", "B-1H_TLepYE", "9Zk2ca1TZr0", "006xnerdHcv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper proposes an approach for provably efficient reinforcement learning in linear MDPs that avoids constructing explicit confidence intervals. Instead, akin to posterior sampling, this method only requires sampling from a posterior distribution. The paper proves that this approach still yields optimistic valu...
[ 5, -1, 6, -1, 6, 8, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, -1, 3, 3, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_qKRr_rNCEPz", "006xnerdHcv", "nips_2021_qKRr_rNCEPz", "0nvN65yGIsN", "nips_2021_qKRr_rNCEPz", "nips_2021_qKRr_rNCEPz", "51HSqHKO-yB", "B-1H_TLepYE", "gBgpkBi5HtO", "n-fTCl3JvQo", "LenLQNn8__h", "E2wCOJEq_wt" ]
nips_2021_wxjtOI_8jO
Do Wider Neural Networks Really Help Adversarial Robustness?
Adversarial training is a powerful type of defense against adversarial examples. Previous empirical results suggest that adversarial training requires wider networks for better performances. However, it remains elusive how neural network width affects model robustness. In this paper, we carefully examine the relationship between network width and model robustness. Specifically, we show that the model robustness is closely related to the tradeoff between natural accuracy and perturbation stability, which is controlled by the robust regularization parameter λ. With the same λ, wider networks can achieve better natural accuracy but worse perturbation stability, leading to potentially worse overall model robustness. To understand the origin of this phenomenon, we further relate the perturbation stability with the network's local Lipschitzness. By leveraging recent results on neural tangent kernels, we theoretically show that wider networks tend to have worse perturbation stability. Our analyses suggest that: 1) the common strategy of first fine-tuning λ on small networks and then directly using it for wide model training could lead to deteriorated model robustness; 2) one needs to properly enlarge λ to fully unleash the robustness potential of wider models. Finally, we propose a new Width Adjusted Regularization (WAR) method that adaptively enlarges λ on wide models and significantly saves the tuning time.
accept
The reviewers had raised a few concerns, which were mostly resolved after the authors' response. All the reviewers agreed in the discussions that the contributions of the paper are important and interesting. Also, the reviews contain all the points mentioned in the discussions, so there is nothing more to add here.
train
[ "pGV-2YMNSMW", "YVIwEfFnPOl", "BDeaJxnXTo-", "faXb7gWd_JY", "LcEs4_4Gjba", "zshrK06U3xY", "vilRlhisyc1", "4aW6ZzT9xrh", "5prgBswKALK", "bzX8QtmUFO", "AZJ3CapZznA", "BXOEax7dDtI", "VYua3JWLo8", "Bcx7B2zMANg" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors investigate the relation between network width and adversarial robustness. There are some attempts to connect to some theory, but the results are largely empirical. \n - The main problem I find with the current paper is that the main message is not clear. There are some empirical comparisons based on ...
[ 5, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 5, 3 ]
[ "nips_2021_wxjtOI_8jO", "LcEs4_4Gjba", "faXb7gWd_JY", "AZJ3CapZznA", "bzX8QtmUFO", "4aW6ZzT9xrh", "nips_2021_wxjtOI_8jO", "5prgBswKALK", "vilRlhisyc1", "Bcx7B2zMANg", "VYua3JWLo8", "pGV-2YMNSMW", "nips_2021_wxjtOI_8jO", "nips_2021_wxjtOI_8jO" ]
nips_2021_j5NrN8ffXC
Exploring the Limits of Out-of-Distribution Detection
Near out-of-distribution (OOD) detection is a major challenge for deep neural networks. We demonstrate that large-scale pre-trained transformers can significantly improve the state-of-the-art (SOTA) on a range of near OOD tasks across different data modalities. For instance, on CIFAR-100 vs CIFAR-10 OOD detection, we improve the AUROC from 85% (current SOTA) to more than 96% using Vision Transformers pre-trained on ImageNet21k. On a challenging genomics OOD detection benchmark, we improve the AUROC from 66% to 77% using transformer and unsupervised pre-training. To further improve performance, we explore the few-shot outlier exposure setting where a few examples from outlier classes may be available; we show that pre-trained transformers are particularly well-suited for outlier exposure, and that the AUROC of OOD detection on CIFAR-100 vs CIFAR-10 can be improved to 98.7% with just 1 image per OOD class, and 99.46% with 10 images per OOD class. For multi-modal image-text pre-trained transformers such as CLIP, we explore a new way of using just the names of outlier classes as a sole source of information without any accompanying images, and show that this outperforms previous SOTA on standard OOD benchmark tasks.
accept
The reviewers are in agreement that the paper's results are clearly presented and thoroughly validated, and the takeaways on ViTs helping with (few-shot) OOD identification should be of broad interest to the community.
train
[ "aYncndhj8XH", "6ofQ_T3sLqd", "c5J-oyPWIHV", "mvRBL5dAyrq", "88rYWxye78s", "RJTb0TzQuz", "maLaa_NZcz1", "xgue-AnFdvG", "yPs1wuoAYbO", "9Qgj7t7VVEu", "QxelCAXs-n3", "Odx5R29QQgz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks for the rebuttal, I am satisfied with the explanation, but still conservative about the potential usage of this method for the community, therefore, I keep my score unchanged.", " I appreciate the authors response and additional experiments. They mostly resolved my concern. I raised my initial rating. ",...
[ -1, -1, 6, -1, 6, -1, 7, -1, -1, -1, -1, 8 ]
[ -1, -1, 5, -1, 5, -1, 4, -1, -1, -1, -1, 4 ]
[ "xgue-AnFdvG", "yPs1wuoAYbO", "nips_2021_j5NrN8ffXC", "QxelCAXs-n3", "nips_2021_j5NrN8ffXC", "9Qgj7t7VVEu", "nips_2021_j5NrN8ffXC", "maLaa_NZcz1", "c5J-oyPWIHV", "Odx5R29QQgz", "88rYWxye78s", "nips_2021_j5NrN8ffXC" ]
nips_2021_1G6jPa9SKYG
ABC: Auxiliary Balanced Classifier for Class-imbalanced Semi-supervised Learning
Existing semi-supervised learning (SSL) algorithms typically assume class-balanced datasets, although the class distributions of many real world datasets are imbalanced. In general, classifiers trained on a class-imbalanced dataset are biased toward the majority classes. This issue becomes more problematic for SSL algorithms because they utilize the biased prediction of unlabeled data for training. However, traditional class-imbalanced learning techniques, which are designed for labeled data, cannot be readily combined with SSL algorithms. We propose a scalable class-imbalanced SSL algorithm that can effectively use unlabeled data, while mitigating class imbalance by introducing an auxiliary balanced classifier (ABC) of a single layer, which is attached to a representation layer of an existing SSL algorithm. The ABC is trained with a class-balanced loss of a minibatch, while using high-quality representations learned from all data points in the minibatch using the backbone SSL algorithm to avoid overfitting and information loss. Moreover, we use consistency regularization, a recent SSL technique for utilizing unlabeled data in a modified way, to train the ABC to be balanced among the classes by selecting unlabeled data with the same probability for each class. The proposed algorithm achieves state-of-the-art performance in various class-imbalanced SSL experiments using four benchmark datasets.
accept
This paper has four favorable reviews, describing beneficial methodology for semi-supervised learning on imbalanced data and good experimental results on real datasets. In contrast, one reviewer pointed out that the combination of existing class-imbalanced learning (CIL) and semi-supervised learning is straightforward, and therefore the novelty is limited. The authors argue that it is not straightforward to utilize unlabeled data in CIL and that there have been no such studies in the past. However, related research is not entirely absent; for example, T. Iwata, A. Fujino, and N. Ueda, "Semi-supervised Learning for Maximizing the Partial AUC," Proc. of AAAI 2020, has already been proposed. The authors should compare the proposed algorithm not only with CIL and naive combined methods, but also with other existing related work.
test
[ "se_u6ok7a2_", "y43o59Ahcif", "Vvcd-PUi7Dp", "bQtfAcge6rT", "orI7TB4Q6Yt", "mMbc6NrN5mG", "Hhc2_-ZZ99m", "sRY9OZX4-98", "y3Y0zkqu0SU", "hkszcuO22a", "XiYX1cKKGOn", "HpoA8yBClmJ", "TVlJLV__6N4", "r5BP-8gK_F6", "i-DevfEH6ro" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your additional efforts and time to carefully read our response. We are happy to hear that our response successfully addressed your concerns. We sincerely appreciate your insightful comments that helped us clarify the novelties of our method and improve the manuscript.", "This work takes class-imb...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 5 ]
[ "y43o59Ahcif", "nips_2021_1G6jPa9SKYG", "bQtfAcge6rT", "hkszcuO22a", "mMbc6NrN5mG", "HpoA8yBClmJ", "y3Y0zkqu0SU", "y43o59Ahcif", "XiYX1cKKGOn", "TVlJLV__6N4", "i-DevfEH6ro", "r5BP-8gK_F6", "nips_2021_1G6jPa9SKYG", "nips_2021_1G6jPa9SKYG", "nips_2021_1G6jPa9SKYG" ]
nips_2021_gbtDcLzwKUb
BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery
A structural equation model (SEM) is an effective framework to reason over causal relationships represented via a directed acyclic graph (DAG). Recent advances have enabled effective maximum-likelihood point estimation of DAGs from observational data. However, a point estimate may not accurately capture the uncertainty in inferring the underlying graph in practical scenarios, wherein the true DAG is non-identifiable and/or the observed dataset is limited. We propose Bayesian Causal Discovery Nets (BCD Nets), a variational inference framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM. Developing a full Bayesian posterior over DAGs is challenging due to the discrete and combinatorial nature of graphs. We analyse key design choices for scalable VI over DAGs, such as 1) the parametrization of DAGs via an expressive variational family, 2) a continuous relaxation that enables low-variance stochastic optimization, and 3) suitable priors over the latent variables. We provide a series of experiments on real and synthetic data showing that BCD Nets outperform maximum-likelihood methods on standard causal discovery metrics such as structural Hamming distance in low data regimes.
accept
This article introduces an approach to causal graph inference using variational Bayesian inference. The paper focuses on linear-Gaussian structural equation models, and employs a carefully designed variational approximation to the posterior on directed acyclic graphs (DAGs) to handle the challenging combinatorial nature of the graph space; the proposed approach is called LiGa-VI. The posterior is parametrized in terms of $(P,L,\Sigma)$ where $P$ is a permutation matrix for the vertex ordering, $L$ is a lower triangular matrix such that $P L P^\top$ is the graph adjacency matrix, and $\Sigma$ is a diagonal covariance matrix for the Gaussian model. For the variational approximation $q_\phi(P,L,\Sigma)$, the authors use a normal distribution or normalizing flow for $q_\phi(L,\Sigma)$ and a Gumbel-Sinkhorn relaxation of the Gumbel-Matching distribution over permutations for $q_\phi(P|L,\Sigma)$. Gradient-based optimization is used to optimize the variational parameter $\phi$ to fit the posterior by minimizing KL divergence. Experiments are performed to assess performance versus several competing methods on synthetic and real data. The variational approximation used over DAG space is interesting and innovative. The experimental performance appears to be surprisingly good in terms of concentration near the true graph. Overall, the reviewers were quite positive about the paper, but I have some significant concerns. 1) No improvement in performance as the sample size $n$ increases? I'm puzzled by the performance as a function of the sample size $n$ in the additional figure supplied by the authors in their reply to Reviewer PQX5 (https://www.dropbox.com/s/clzi1tzp0jgdwr0/new_fig_n.png?dl=0). Why does the proposed method (LiGa-VI) not benefit from having more data? Similarly, why does the Bayesian method GADGET not benefit from having more data after $n$ increases beyond $300$? Other methods such as Ghosal's are clearly benefiting from having more data. This makes me concerned that the good performance of LiGa-VI here is solely due to something trivial like the choice of prior. This needs to be explained. Also, I highly recommend adding this plot to the paper. 2) I'm skeptical about Figure 2. To have such a big difference in performance, it seems like something about this simulation is cherry-picked to yield good performance. Any Bayesian method with the same prior should yield roughly the same accuracy here, up to the accuracy of the posterior approximation. Is there a big difference in the priors used by GADGET and LiGa-VI? Or is GADGET not sampling well from the posterior? It is fine to present simulations that showcase the proposed method, but they need to be put in context to show the relative strengths *and* weaknesses of the method. 3) The computation time of LiGa-VI is not that attractive (Appendix H, Table 4), taking around 2-10x the time required for GOLEM even though GOLEM is using cross-validation while LiGa-VI is not. I found this disappointing since I would have expected a variational approach to be more computationally attractive. I would highly recommend including the computation time for the other competing approaches, especially GADGET since it employs MCMC for Bayesian inference. 4) Mathematical notation and unclear exposition.
Some of the notation and descriptions were not at all clear, such as "$q_\phi(L,\sigma) = \mathcal{N}(\mu_\phi,\sigma_\phi)$" for the variational approximation to the posterior on $(L,\Sigma)$ --- the left-hand side is a density while the right-hand side is a distribution, where does $L$ enter into the right-hand side, and how does this yield a distribution over $\sigma$? Please make sure all mathematical expressions are properly written for clear exposition. I was also confused by this phrase in Appendix H: "we are training a neural network with many Sinkhorn iterations per optimization step", since I didn't see where LiGa-VI is using a neural network in the description of the method. This needs to be clarified.
train
[ "j-00x_iRgyG", "6AecmwjqqnK", "EIDISBkPvIK", "JBxz4ZviJQU", "q9J5FN_E8g1", "HfzoOAsaEYb", "BkgGLxlnXMZ", "UP6mZGupukX", "GkXblq8RGB7", "dKc6cM9xHKT", "FC7DPdm4DuD", "6mHc5WDzubo", "vm91MJ_1lZE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " I thank the authors for providing the codebase and am glad they look forward to open source the contribution. I also appreciate the elaborate answer to my comment. I agree that the the assumption of not having unobserved variables is quite common in the relevant literature. Expanding the discussion of limitations...
[ -1, 6, -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 2, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "6mHc5WDzubo", "nips_2021_gbtDcLzwKUb", "FC7DPdm4DuD", "nips_2021_gbtDcLzwKUb", "GkXblq8RGB7", "nips_2021_gbtDcLzwKUb", "dKc6cM9xHKT", "nips_2021_gbtDcLzwKUb", "JBxz4ZviJQU", "HfzoOAsaEYb", "6AecmwjqqnK", "vm91MJ_1lZE", "nips_2021_gbtDcLzwKUb" ]
nips_2021_2E4AT-qj3Dg
Discovering Dynamic Salient Regions for Spatio-Temporal Graph Neural Networks
Graph Neural Networks are perfectly suited to capture latent interactions between various entities in the spatio-temporal domain (e.g. videos). However, when an explicit structure is not available, it is not obvious what atomic elements should be represented as nodes. Current works generally use pre-trained object detectors or fixed, predefined regions to extract graph nodes. Improving upon this, our proposed model learns nodes that dynamically attach to well-delimited salient regions, which are relevant for a higher-level task, without using any object-level supervision. Constructing these localized, adaptive nodes gives our model inductive bias towards object-centric representations and we show that it discovers regions that are well correlated with objects in the video. In extensive ablation studies and experiments on two challenging datasets, we show superior performance to previous graph neural network models for video classification.
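The core mechanism described above, forming graph nodes by attending to salient regions rather than to detected boxes, can be pictured as attention pooling. The PyTorch snippet below is a minimal stand-in, not the paper's model (which additionally makes the regions well-delimited and time-varying); module and parameter names are mine.

```python
import torch
import torch.nn as nn

class SalientNodePooling(nn.Module):
    """Turn a CNN feature map into K graph-node features by predicting K
    spatial attention maps and pooling the features under each of them,
    so nodes can latch onto salient regions without box supervision."""
    def __init__(self, channels=64, num_nodes=8):
        super().__init__()
        self.attn = nn.Conv2d(channels, num_nodes, kernel_size=1)

    def forward(self, feat):                  # feat: (B, C, H, W)
        a = self.attn(feat).flatten(2)        # (B, K, H*W)
        a = torch.softmax(a, dim=-1)          # one spatial distribution per node
        nodes = torch.einsum('bkn,bcn->bkc', a, feat.flatten(2))
        return nodes                          # (B, K, C) node features

nodes = SalientNodePooling()(torch.randn(2, 64, 14, 14))
print(nodes.shape)                            # torch.Size([2, 8, 64])
```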
accept
This paper received both positive and negative reviews. After a round of discussion between the reviewers and reading the author's rebuttal, three of the four reviewers recommended accepting the paper and I am happy to accept it. I encourage the authors to include in the final version the additional material that they mentioned in the rebuttal.
train
[ "Q6TY87ouQS_", "a_ih4LdJiHb", "FBSg4GPN6Wv", "ujYPV2IFlLt", "LZCLPwSZ0ub", "_3_O-BttVZ5", "TSOpVj6YHl4", "riMYCusRFtH", "NnJDS9DrxcH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a graph based network for spatio-temporal understanding. The core idea of the paper is to learn to select visual features for node representations in graph dynamically using convolutional encoders. The paper shows competitive results in the Smth-Smth v1 and v2 datasets and MultiSync MNIST. The ...
[ 7, -1, -1, -1, -1, -1, 7, 5, 4 ]
[ 4, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "nips_2021_2E4AT-qj3Dg", "NnJDS9DrxcH", "TSOpVj6YHl4", "nips_2021_2E4AT-qj3Dg", "Q6TY87ouQS_", "riMYCusRFtH", "nips_2021_2E4AT-qj3Dg", "nips_2021_2E4AT-qj3Dg", "nips_2021_2E4AT-qj3Dg" ]
nips_2021_h7aSBWbX7S4
Information-constrained optimization: can adaptive processing of gradients help?
We revisit first-order optimization under local information constraints such as local privacy, gradient quantization, and computational constraints limiting access to a few coordinates of the gradient. In this setting, the optimization algorithm is not allowed to directly access the complete output of the gradient oracle, but only gets limited information about it subject to the local information constraints. We study the role of adaptivity in processing the gradient output to obtain this limited information from it, and obtain tight or nearly tight bounds for both convex and strongly convex optimization when adaptive gradient processing is allowed.
accept
The reviewers agree that this is generally a good paper, although not entirely without (minor) flaws. Please take the reviewers' comments into consideration when preparing a revision. The answers provided by the authors were given due consideration.
train
[ "M0yDWcsiYWu", "zP-TAgUnO6T", "ogVby2tc21l", "emsRO4vYxVa", "XKM8SxPXWYG", "bdn4Qq6yv1A", "WJ4ZYPEj6EF", "iUbNoNaoWa" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Here, all the references are to the supplementary material section.\n\nThis work studies the problem of first-order optimization under local constraints, where at each step an agent computes a sub-gradient and the sub-gradient should pass through a channel. Then, output of the channel will be available for the opt...
[ 6, -1, -1, -1, -1, -1, 6, 7 ]
[ 2, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_h7aSBWbX7S4", "emsRO4vYxVa", "nips_2021_h7aSBWbX7S4", "M0yDWcsiYWu", "iUbNoNaoWa", "WJ4ZYPEj6EF", "nips_2021_h7aSBWbX7S4", "nips_2021_h7aSBWbX7S4" ]
nips_2021_vqzAfN-BoA_
Towards Calibrated Model for Long-Tailed Visual Recognition from Prior Perspective
Real-world data universally confronts a severe class-imbalance problem and exhibits a long-tailed distribution, i.e., most labels are associated with limited instances. The naïve models supervised by such datasets would prefer dominant labels, encounter a serious generalization challenge and become poorly calibrated. We propose two novel methods from the prior perspective to alleviate this dilemma. First, we deduce a balance-oriented data augmentation named Uniform Mixup (UniMix) to promote mixup in long-tailed scenarios, which adopts advanced mixing factor and sampler in favor of the minority. Second, motivated by the Bayesian theory, we figure out the Bayes Bias (Bayias), an inherent bias caused by the inconsistency of prior, and compensate it as a modification on standard cross-entropy loss. We further prove that both the proposed methods ensure the classification calibration theoretically and empirically. Extensive experiments verify that our strategies contribute to a better-calibrated model, and their combination achieves state-of-the-art performance on CIFAR-LT, ImageNet-LT, and iNaturalist 2018.
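The "Bayias" compensation described above reads as being in the spirit of logit adjustment: shift the logits by the log of the training-label prior so that the cross-entropy loss is consistent with a uniform evaluation prior. Below is a minimal NumPy sketch of that generic idea, not the paper's exact loss; names are illustrative.

```python
import numpy as np

def softmax_ce_with_prior(logits, labels, class_counts):
    """Cross-entropy where logits are shifted by the log of the
    (empirical) training-label prior, so the model is trained as if
    evaluated under a uniform test prior."""
    log_prior = np.log(class_counts / class_counts.sum())
    z = logits + log_prior                      # compensate the prior mismatch
    z = z - z.max(axis=1, keepdims=True)        # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 3))
labels = rng.integers(0, 3, size=8)
counts = np.array([1000., 100., 10.])           # long-tailed class counts
print(softmax_ce_with_prior(logits, labels, counts))
```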
accept
This paper addresses the problem of long-tailed visual recognition. The authors propose two complementary methods to address the imbalance problem: (i) a data augmentation scheme based on Mixup and (ii) an approach to compensate for bias in the class prior. The final proposed pipeline is 1-stage, as opposed to the 2-stage pipelines that have featured prominently in recent work. Results are presented across several different image classification datasets, showing superior performance compared to existing work in most cases. During the review process there was a lot of discussion regarding the relative performance compared to the recent CVPR’21 paper, “Improving Calibration for Long-Tailed Recognition” AKA MiSLAS. This paper was published after the NeurIPS submission deadline, and thus the relationship between the two works did not factor into the final decision. However, the authors are strongly encouraged to cite it, and include the new 2-stage comparison to MiSLAS from the response for completeness. In addition, they should further clarify the relationship to the logit adjustment work from ICLR’21. The reviewers noted that while the technical contributions were not overly significant, the main insight of the paper (that naive mixup introduces “head to head” class bias, i.e. a bias towards head-majority pseudo data) is well characterized and will be of interest to the long-tail community. There were also concerns about the clarity of the exposition and some missing discussion. Many of these issues were cleared up in the discussion, but the authors are strongly encouraged to address them in the writing for the final version. This is very important as it will hopefully significantly improve the clarity of the paper. The paper should also be proof-read for grammar (e.g. Line 129). In the end, three of the reviewers were broadly supportive of the paper, but one recommended rejection (based mostly on the relationship to MiSLAS - see above). This AC agrees with the consensus of the reviewers and supports the paper being accepted.
test
[ "lKQu61vctm", "9vCvqxx5J1", "Tl5_a944s1h", "YXyJVHchQdy", "VZ2zZTGHyM7", "_YGEFsR5eb", "VB8Npls8Uik", "9ZzbaZZafPg", "LPMY_GOyLrS", "y29JPAcE-P0", "OXqHTllJ4Y", "5fom9yjuVFM", "ybPOCBecFZ_", "SMRHmzFqIcq", "H2OupeHxwa", "89zxq3rbKA", "dJXHPJC8AZ" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank Reviewer 1uZE's response and would like to reply to the concerns and doubts.\n\n*\\# 1. Why do previous works divide methods into 1-stage and 2-stage when making comparisons?*\n-------------\n\nWe agree that dividing a method into 1-stage and 2-stage is unnecessary when deploying them because 2...
[ -1, -1, 7, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 4 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "9ZzbaZZafPg", "dJXHPJC8AZ", "nips_2021_vqzAfN-BoA_", "VZ2zZTGHyM7", "_YGEFsR5eb", "VB8Npls8Uik", "ybPOCBecFZ_", "9vCvqxx5J1", "nips_2021_vqzAfN-BoA_", "SMRHmzFqIcq", "Tl5_a944s1h", "OXqHTllJ4Y", "5fom9yjuVFM", "LPMY_GOyLrS", "89zxq3rbKA", "nips_2021_vqzAfN-BoA_", "nips_2021_vqzAfN-B...
nips_2021_YIyYkoJX2eA
Learning to Draw: Emergent Communication through Sketching
Evidence that visual communication preceded written language and provided a basis for it goes back to prehistory, in forms such as cave and rock paintings depicting traces of our distant ancestors. Emergent communication research has sought to explore how agents can learn to communicate in order to collaboratively solve tasks. Existing research has focused on language, with a learned communication channel transmitting sequences of discrete tokens between the agents. In this work, we explore a visual communication channel between agents that are allowed to draw with simple strokes. Our agents are parameterised by deep neural networks, and the drawing procedure is differentiable, allowing for end-to-end training. In the framework of a referential communication game, we demonstrate that agents can not only successfully learn to communicate by drawing, but with appropriate inductive biases, can do so in a fashion that humans can interpret. We hope to encourage future research to consider visual communication as a more flexible and directly interpretable alternative of training collaborative agents.
accept
This is a fun paper! The basic ideas of communication by sketching, emergent communication, and indeed emergent communication via sketches have been floating around recently, but the focus on interpretability of the emergent sketches is a valuable addition. As the reviewers indicate, the addition of human evaluations is very nice, and some space should be given in the revision to interpreting these new results. One comment from me: I would love to see a little discussion of the connections to human communication with sketching, e.g. "Pragmatic Inference and Visual Abstraction Enable Contextual Flexibility During Visual Communication." J. E. Fan, R. D. Hawkins, M. Wu, & N. D. Goodman. (2019). Computational Brain & Behavior.
train
[ "Y0p6ldvKju1", "Ds9geWqK5Dt", "gYFwAmbak4n", "7dyLozBcA4H", "uvF8AJqD9N", "94HHICWuCqt", "4d9AN4V-XP0", "GT-kw9Yxhvw", "kuTZT9bR-78", "x-z2dX5et_P", "4r7_O1o_7VR", "-woirlPpS6O", "Wlqq-0jbmXP", "vMoqP8EEE8Z", "6Yllr7eBpUy", "Ll7oP6k8mLp" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks! I’ve updated my score.", "I’m impressed by the authors’ inclusion of a human evaluation. While they paint a somewhat unclear story about human interpretability (hard to tell how good 38% is), including the results in commendable and these kinds of studies are sorely missing from the literature.\n\nI’ve ...
[ -1, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "uvF8AJqD9N", "nips_2021_YIyYkoJX2eA", "nips_2021_YIyYkoJX2eA", "4d9AN4V-XP0", "94HHICWuCqt", "4d9AN4V-XP0", "nips_2021_YIyYkoJX2eA", "nips_2021_YIyYkoJX2eA", "vMoqP8EEE8Z", "Wlqq-0jbmXP", "-woirlPpS6O", "Ds9geWqK5Dt", "Ll7oP6k8mLp", "gYFwAmbak4n", "nips_2021_YIyYkoJX2eA", "nips_2021_Y...
nips_2021_MySjw6CHPa4
Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks
The field of neuromorphic computing promises extremely low-power and low-latency sensing and processing. Challenges in transferring learning algorithms from traditional artificial neural networks (ANNs) to spiking neural networks (SNNs) have so far prevented their application to large-scale, complex regression tasks. Furthermore, realizing a truly asynchronous and fully neuromorphic pipeline that maximally attains the abovementioned benefits involves rethinking the way in which this pipeline takes in and accumulates information. In the case of perception, spikes would be passed as-is and one-by-one between an event camera and an SNN, meaning all temporal integration of information must happen inside the network. In this article, we tackle these two problems. We focus on the complex task of learning to estimate optical flow from event-based camera inputs in a self-supervised manner, and modify the state-of-the-art ANN training pipeline to encode minimal temporal information in its inputs. Moreover, we reformulate the self-supervised loss function for event-based optical flow to improve its convexity. We perform experiments with various types of recurrent ANNs and SNNs using the proposed pipeline. Concerning SNNs, we investigate the effects of elements such as parameter initialization and optimization, surrogate gradient shape, and adaptive neuronal mechanisms. We find that initialization and surrogate gradient width play a crucial part in enabling learning with sparse inputs, while the inclusion of adaptivity and learnable neuronal parameters can improve performance. We show that the performance of the proposed ANNs and SNNs are on par with that of the current state-of-the-art ANNs trained in a self-supervised manner.
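For readers unfamiliar with the neuron model involved, a minimal forward simulation of an adaptive leaky-integrate-and-fire (ALIF) unit is sketched below with illustrative constants. This shows only the inference-time dynamics; the training issues the abstract discusses (surrogate gradient width, initialization) arise because the spike nonlinearity below has zero gradient almost everywhere.

```python
import numpy as np

def alif_forward(inputs, tau_v=0.9, tau_t=0.95, v_th0=1.0, beta=0.5):
    """Simulate one adaptive leaky-integrate-and-fire neuron over T steps.
    inputs: (T,) array of input currents. Returns the (T,) spike train."""
    v, a = 0.0, 0.0
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v = tau_v * v + x                 # leaky membrane integration
        threshold = v_th0 + beta * a      # adaptive (learnable) threshold
        s = float(v >= threshold)         # non-differentiable spike
        v -= s * threshold                # soft reset after a spike
        a = tau_t * a + s                 # threshold adaptation trace
        spikes[t] = s
    return spikes

rng = np.random.default_rng(0)
print(alif_forward(rng.uniform(0, 0.6, size=40)).astype(int))
```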
accept
Dear authors, congratulations on your paper being accepted at Neurips. The reviewers appreciated the demonstration that SNN can achieve impressive performance on a challenging and 'dense' prediction task. Please incorporate the feedback by the reviewers into the final version of the manuscript. In addition, it will be important to make the code and results publicly available, as promised in the manuscript. Your AC
train
[ "j9uDQvaElU", "VN1Z3GlGmRw", "k-K1QXnIJ4A", "4APwYfwck7", "BOmKs0o9cXc", "6EaQS9sbnux", "-5g-WId2s_j", "bgZoF0gTbke" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your thorough review and for increasing your score. We will try to accommodate your suggestions to the best of our abilities by stating more clearly our main contributions and proposed advances, and by changing the name of the models in Table 1 and Section 4 so the conclusions derived from these res...
[ -1, 5, -1, -1, -1, -1, 9, 6 ]
[ -1, 3, -1, -1, -1, -1, 5, 2 ]
[ "k-K1QXnIJ4A", "nips_2021_MySjw6CHPa4", "BOmKs0o9cXc", "bgZoF0gTbke", "VN1Z3GlGmRw", "-5g-WId2s_j", "nips_2021_MySjw6CHPa4", "nips_2021_MySjw6CHPa4" ]
nips_2021_YAv9enSDW-a
On the Value of Infinite Gradients in Variational Autoencoder Models
A number of recent studies of continuous variational autoencoder (VAE) models have noted, either directly or indirectly, the tendency of various parameter gradients to drift towards infinity during training. Because such gradients could potentially contribute to numerical instabilities, and are often framed as a problematic phenomenon to be avoided, it may be tempting to shift to alternative energy functions that guarantee bounded gradients. But it remains an open question: What might the unintended consequences of such a restriction be? To address this issue, we examine how unbounded gradients relate to the regularization of a broad class of autoencoder-based architectures, including VAE models, as applied to data lying on or near a low-dimensional manifold (e.g., natural images). Our main finding is that, if the ultimate goal is to simultaneously avoid over-regularization (high reconstruction errors, sometimes referred to as posterior collapse) and under-regularization (excessive latent dimensions are not pruned from the model), then an autoencoder-based energy function with infinite gradients around optimal representations is provably required, in a certain technical sense which we carefully detail. Given that both over- and under-regularization can directly lead to poor generated sample quality or suboptimal feature selection, this result suggests that heuristic modifications to or constraints on the VAE energy function may at times be ill-advised, and large gradients should be accommodated to the extent possible.
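A one-line calculation shows where such unbounded gradients can come from in a Gaussian decoder with a learned variance; the notation below is mine, not necessarily the paper's.

```latex
% Gaussian decoder with reconstruction \hat{x} and learned scalar variance
% \gamma. Per-sample energy and its \gamma-gradient:
\mathcal{L}(\gamma)
   = \frac{\lVert x - \hat{x} \rVert^2}{2\gamma} + \frac{d}{2}\log\gamma + \text{const},
\qquad
\frac{\partial \mathcal{L}}{\partial \gamma}
   = -\frac{\lVert x - \hat{x} \rVert^2}{2\gamma^{2}} + \frac{d}{2\gamma}.
% The stationary point is \gamma^* = \lVert x - \hat{x} \rVert^2 / d, so on data
% near a low-dimensional manifold, where near-zero reconstruction error is
% attainable, \gamma^* \to 0 and the gradient magnitudes around the optimum
% grow without bound: the pruning-style regularization and the infinite
% gradients both stem from the same \log\gamma term.
```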
accept
The authors show that infinite gradients are required to recover optimally sparse representations of data in autoencoder models using a Gaussian likelihood with a learned variance. The results in the paper appear both correct and interesting in the context of autoencoders, but the reviewers found some of the claims insufficiently precise and at times overstated, which is a serious issue for a theoretical work. Most notably, they were not convinced by the claims of the applicability of the results to VAEs (e.g. the zero reconstruction error as one of the assumptions). The back-and-forth with the reviewers provided some necessary clarifications and explanations, and generated some excellent suggestions for improving the paper. Unfortunately implementing the changes required would amount to a major revision and thus is not feasible without resubmission. The paper would also really benefit from a section covering prior work on infinite gradients, including a detailed discussion of the relevant results in Dai and Wipf (2019) and their relationship to those in this paper.
train
[ "IcKDcb-TkgP", "ElGKgcIDLQL", "MZJ-TAsICJ2", "LjLygXgsFlf", "0gwnA4VsUu", "ZwchPyuT_F", "rBMKtJUUjhb", "E387FJU3qpl", "rOCiNbGmGDw", "mFD69cm9AH6", "yWCofBM59EF", "rbCkmknvC2T", "ZIyYPI9Dj1V", "tHspNgU6Co2", "4MtFm-QTb-n", "EANQN4XXrr0", "OiXIObjGf7h", "8tcMkak7H8u", "bR3czE4OZG"...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "a...
[ " While we sincerely appreciate the polite closing remarks and apologetic tone, it is our understanding that the rolling discussion period is now officially over and ACs have already made or are making decisions. So while we would otherwise like to respond in more depth to the reviewer's new comments, this now see...
[ -1, -1, 6, 5, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 5, -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "0gwnA4VsUu", "bR3czE4OZG", "nips_2021_YAv9enSDW-a", "nips_2021_YAv9enSDW-a", "nips_2021_YAv9enSDW-a", "nips_2021_YAv9enSDW-a", "rOCiNbGmGDw", "nips_2021_YAv9enSDW-a", "mFD69cm9AH6", "rbCkmknvC2T", "ZIyYPI9Dj1V", "EANQN4XXrr0", "tHspNgU6Co2", "4MtFm-QTb-n", "nips_2021_YAv9enSDW-a", "E3...
nips_2021_IhiU6AJYpDs
Online Robust Reinforcement Learning with Model Uncertainty
Robust reinforcement learning (RL) aims to find a policy that optimizes the worst-case performance over an uncertainty set of MDPs. In this paper, we focus on model-free robust RL, where the uncertainty set is defined to be centered at a misspecified MDP that generates samples, and is assumed to be unknown. We develop a sample-based approach to estimate the unknown uncertainty set, and design a robust Q-learning algorithm (tabular case) and a robust TDC algorithm (function approximation setting), which can be implemented in an online and incremental fashion. For the robust Q-learning algorithm, we prove that it converges to the optimal robust Q function, and for the robust TDC algorithm, we prove that it converges asymptotically to some stationary points. Unlike the results in [Roy et al., 2017], our algorithms do not need any additional conditions on the discount factor to guarantee the convergence. We further characterize the finite-time error bounds of the two algorithms, and show that both the robust Q-learning and robust TDC algorithms converge as fast as their vanilla counterparts (within a constant factor). Our numerical experiments further demonstrate the robustness of our algorithms. Our approach can be readily extended to robustify many other algorithms, e.g., TD, SARSA, and other GTD algorithms.
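The meta-review below mentions the R-contamination model. Under that uncertainty set, a worst-case Bellman target takes a particularly simple form, sketched here for the tabular case; this is a schematic reading with illustrative names, not the authors' exact algorithm (which also estimates the uncertainty set from samples).

```python
import numpy as np

def robust_q_update(Q, s, a, r, s_next, R=0.1, gamma=0.9, lr=0.1):
    """One tabular robust Q-learning step under an R-contamination
    uncertainty set: with prob. 1-R nature follows the nominal kernel
    (the observed sample), with prob. R it picks the worst next state."""
    v_next = Q[s_next].max()
    v_worst = Q.max(axis=1).min()        # adversarial choice over all states
    target = r + gamma * ((1 - R) * v_next + R * v_worst)
    Q[s, a] += lr * (target - Q[s, a])
    return Q

Q = np.zeros((5, 2))
rng = np.random.default_rng(0)
for _ in range(1000):
    s, a = rng.integers(5), rng.integers(2)
    s_next = rng.integers(5)
    r = float(s_next == 4)               # toy reward: reaching state 4
    robust_q_update(Q, s, a, r, s_next)
print(Q.round(2))
```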
accept
Based on the reviews and the discussion afterwards, my recommendation leans towards acceptance. While the reviewers reached a consensus that the algorithms make novel and solid contributions, there remain some criticisms. First, the experiments can be made much more solid and comprehensive by adding previous robust RL methods as baselines, showing the tradeoff between robustness and pure performance, and presenting the performance evaluation in the face of a misspecified uncertainty set. Second, I encourage the authors to also add discussion (and possibly theory) to address the limitations of the R-contamination model. I hope that the above concerns, along with other clarity issues raised by the reviewers, are addressed in the camera-ready version.
train
[ "OkRrrG135x2", "YTfUT8WY-9", "tXJPE2urvK", "SciSEIBY9Ee", "W8W0DafI1em", "VNwneDV2qbB", "lbV9BEiOeqT", "eTGKxIQkEEk", "-yPx6j5SaMb", "_xAex1jw7kS" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I thank the authors for their responses. They main point of remaining concern seems to be the necessity of analysis/evaluation under model misspecification. I feel like this concern is quite significant, but would defer to better experts in the field.\n\n=====\n\nThe paper presents robust versions of the Q-learnin...
[ 7, 6, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "nips_2021_IhiU6AJYpDs", "nips_2021_IhiU6AJYpDs", "SciSEIBY9Ee", "W8W0DafI1em", "OkRrrG135x2", "YTfUT8WY-9", "_xAex1jw7kS", "-yPx6j5SaMb", "nips_2021_IhiU6AJYpDs", "nips_2021_IhiU6AJYpDs" ]
nips_2021_JhCcUMFEq7
Neural View Synthesis and Matching for Semi-Supervised Few-Shot Learning of 3D Pose
We study the problem of learning to estimate the 3D object pose from a few labelled examples and a collection of unlabelled data. Our main contribution is a learning framework, neural view synthesis and matching, that can transfer the 3D pose annotation from labelled to unlabelled images reliably, despite unseen 3D views and nuisance variations such as the object shape, texture, illumination or scene context. In our approach, objects are represented as 3D cuboid meshes composed of feature vectors at each mesh vertex. The model is initialized from a few labelled images and is subsequently used to synthesize feature representations of unseen 3D views. The synthesized views are matched with the feature representations of unlabelled images to generate pseudo-labels of the 3D pose. The pseudo-labelled data is, in turn, used to train the feature extractor such that the features at each mesh vertex are more invariant across varying 3D views of the object. Our model is trained in an EM-type manner, alternating between increasing the 3D pose invariance of the feature extractor and annotating unlabelled data through neural view synthesis and matching. We demonstrate the effectiveness of the proposed semi-supervised learning framework for 3D pose estimation on the PASCAL3D+ and KITTI datasets. We find that our approach outperforms all baselines by a wide margin, particularly in an extreme few-shot setting where only 7 annotated images are given. Remarkably, we observe that our model also achieves exceptional robustness in out-of-distribution scenarios that involve partial occlusion.
accept
This paper addresses semi-supervised 3D viewpoint estimation by lifting 2D CNN features into object-centric feature cuboids, which are rotated and rendered to generate feature maps of alternative views, while the features are also updated in a contrastive manner, exploiting labelled and pseudo-labelled examples iteratively. All reviewers agree on the novelty of the approach pursued in the paper. The rebuttal submitted by the authors clarified many concerns of the reviewers regarding handling of occlusions and self-occlusions, scalability of the method with respect to the number of annotated examples, and the evaluation metric and thresholds used.
val
[ "AyctSbC5qNt", "FNXR_8G4j-d", "WbONUr91uy", "NZVZlGUEGu", "1ZMhXW7_NyV", "A632pPLff4V", "yXiUZaFYqGb", "QFtFmozfeG", "zdttx33ZwH6", "GK7iVS4LKk_", "Gbmh7rWxBDG", "G09Mctfc3yt", "1jMK6CoNWlA", "sfN_0tkTczV", "3mbwXOKdUye", "DLu9MDRboD0", "jC1BrBkH0b", "Zzu0_bWu84U", "Kn59ITdlHvd",...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "...
[ " I am satisfied with the authors' responses to my concerns. I will maintain my original rating and recommend accepting this paper to NeuRIPS.", " Thanks for your response. I understand and encourage you to do so in the final version of your paper.\n\n", " **I am satisfied with the authors' response to Q1. and ...
[ -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "1jMK6CoNWlA", "WbONUr91uy", "NZVZlGUEGu", "1jMK6CoNWlA", "A632pPLff4V", "jC1BrBkH0b", "nips_2021_JhCcUMFEq7", "sfN_0tkTczV", "GK7iVS4LKk_", "Gbmh7rWxBDG", "G09Mctfc3yt", "3mbwXOKdUye", "Chn3E16Xpq", "yXiUZaFYqGb", "Kn59ITdlHvd", "yXiUZaFYqGb", "Zzu0_bWu84U", "nips_2021_JhCcUMFEq7"...
nips_2021_bYIddUC7AYO
Sharp Impossibility Results for Hyper-graph Testing
In a broad Degree-Corrected Mixed-Membership (DCMM) setting, we test whether a non-uniform hypergraph has only one community or has multiple communities. Since both the null and alternative hypotheses have many unknown parameters, the challenge is, given an alternative, how to identify the null that is hardest to separate from the alternative. We approach this by proposing a degree matching strategy where the main idea is leveraging the theory for tensor scaling to create a least favorable pair of hypotheses. We present a result on standard minimax lower bound theory and a result on Region of Impossibility (which is more informative than the minimax lower bound). We show that our lower bounds are tight by introducing a new test that attains the lower bound up to a logarithmic factor. We also discuss the case where the hypergraphs may have mixed-memberships.
accept
The paper's main contribution is to provide a crisp theoretical characterization of the feasibility of detecting multiplicity of underlying communities in a degree-corrected mixed membership hypergraph model. It has been recognized by all reviews that this is a significant achievement that will be of interest to researchers working on stochastic block models and related inference questions. The reviewers did not give very high marks on the basis that the topic may be of interest to only a limited subset of the NeurIPS community, also pointing to the fact that the paper could be made more attractive if a compelling application was brought forward. The authors' reply to these comments is an argument that the problem they tackle is important in network science, an area which is relevant for NeurIPS, and a description of concrete applications to hypergraphs of co-authorships in scientific articles. The authors further point to the usefulness of their result to hierarchical clustering, and explain in greater detail how their non-polynomial time test informs the design of the polynomial test used in the experiments. I believe that these answers by the authors alleviate the concerns expressed by the reviewers, and suffice to justify acceptance of the paper to the conference.
train
[ "TAnOw9NUpm", "u1NRGD3W-aT", "3M6UdxrCfE", "whav_Owvnid", "nxtMSYIz_Tb", "A0lpmJ9TjfX", "MjMokw4NjyG", "nk2ImSCxkvs", "0r2zpxVwila", "SMmTqFyyEyY", "bYWDJXptNU" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comments. We are glad that you think the test we use in simulations is an interesting contribution.\n\nIn our point-by-point response (Point 3), we gave a specific example of how these results are practically useful, which we do not repeat here. We wish to take this opportunity to re-iterate our p...
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "u1NRGD3W-aT", "whav_Owvnid", "bYWDJXptNU", "SMmTqFyyEyY", "nk2ImSCxkvs", "nips_2021_bYIddUC7AYO", "0r2zpxVwila", "nips_2021_bYIddUC7AYO", "nips_2021_bYIddUC7AYO", "nips_2021_bYIddUC7AYO", "nips_2021_bYIddUC7AYO" ]
nips_2021_0CDKgyYaxC8
Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
Gradient inversion attacks (or input recovery from gradients) are an emerging threat to the security and privacy preservation of federated learning, whereby malicious eavesdroppers or participants in the protocol can partially recover the clients' private data. This paper evaluates existing attacks and defenses. We find that some attacks make strong assumptions about the setup. Relaxing such assumptions can substantially weaken these attacks. We then evaluate the benefits of three proposed defense mechanisms against gradient inversion attacks. We show the trade-offs between privacy leakage and data utility for these defense methods, and find that combining them in an appropriate manner makes the attack less effective, even under the original strong assumptions. We also estimate the computation cost of end-to-end recovery of a single image under each evaluated defense. Our findings suggest that the state-of-the-art attacks can currently be defended against with minor data utility loss, as summarized in a list of potential strategies.
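As background for what such an attack does, here is a minimal PyTorch sketch on a toy MLP: the attacker optimizes a dummy input so that its gradient matches the gradient shared by a client. It assumes the label is known (one of the strong assumptions this line of work examines) and is far from the evaluated image-scale setting.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
loss_fn = nn.CrossEntropyLoss()

# "Client" computes a gradient on its private example.
x_true = torch.randn(1, 16)
y_true = torch.tensor([2])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Attacker optimizes a dummy input so its gradient matches.
x_dummy = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for step in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    rec_loss = sum(((dg - tg) ** 2).sum()
                   for dg, tg in zip(dummy_grads, true_grads))
    rec_loss.backward()
    opt.step()
print((x_dummy.detach() - x_true).norm())  # ideally small: input recovered
```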
accept
The reviewers are satisfied by the responses made by the authors. The authors are strongly encouraged to include in their final manuscript the additional experiments and results they provided in the rebuttal phase.
train
[ "mdtJhWgHxj", "8QwlOeSIJQq", "Fjg8k_nb53", "d8YU41JMGP", "0vubiSCIgm", "nCIXpardNxk", "KQ-lk52pFvJ", "knHerZqdADx", "waSdX8cTo-", "0LVYS7pkUxj", "CThAPUiPNXH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your clarifications. I confirm my positive opinion of the paper. \n\nIt'd be great if you could then include content from this answer in the actual paper, so that these aspects are better clarified. ", " Thank you for the detailed explanation and the additional experiments and analysis, these have...
[ -1, -1, 7, -1, -1, -1, -1, -1, 9, 6, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "KQ-lk52pFvJ", "d8YU41JMGP", "nips_2021_0CDKgyYaxC8", "Fjg8k_nb53", "0LVYS7pkUxj", "waSdX8cTo-", "CThAPUiPNXH", "nips_2021_0CDKgyYaxC8", "nips_2021_0CDKgyYaxC8", "nips_2021_0CDKgyYaxC8", "nips_2021_0CDKgyYaxC8" ]
nips_2021_sZu0b4WrElD
Faster Non-asymptotic Convergence for Double Q-learning
Lin Zhao, Huaqing Xiong, Yingbin Liang
accept
This paper received borderline reviews. The main issues were a lack of clarity and a lack of empirical validation of the theoretical benefits of the proposed constant step sizes. I felt the rebuttal did help clarify some of the raised issues. I would also encourage the authors to include the numerical experiments that they mentioned they ran in a future version of this paper. Even if the authors are of the opinion that these do not strictly add new information and only validate the theory, it is important to consider a publication as primarily a means to communicate our findings to the larger research community, and such numerical experiments can help illustrate the main points, as well as augment the theory, if and when there are discrepancies. For instance, I'd personally love to understand better whether the practical convergence rates match the current theory. Is there an important 'gap' still between practical and theoretical convergence rates? Would it be possible to do an even tighter analysis and find even better bounds? These are examples of questions that could help the reader better understand the significance of this work (not necessarily in terms of "does it work with neural nets?", but rather in terms of "what kind of further theoretical study does this enable?" or "what kind of research questions does this raise or answer?"). I think the paper has promise, but could benefit from another round of reviews, using the reviewers' comments to improve the work. I therefore would like to strongly encourage the authors to consider how the paper could be made even better, and then resubmit (e.g., to a related venue).
train
[ "UxCgzTCfTD1", "vOr7_17GK4B", "mNJdZuLYLbG", "i0WcNyLcKMF", "Hcj1pLgdTt", "t-Ux0taTPYy", "Z7fm_eqrmp", "f7gzLM0ulyr", "tzlmEAeSdYC" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the response. I still lean towards acceptance and choose to keep my score the same. ", "Thanks to the practical applicability of double Q-learning, this paper denoted to analyzing the finite-time convergence of double Q-learning algorithms with constant stepsizes. Building on existing an...
[ -1, 5, -1, -1, -1, -1, 6, 7, 4 ]
[ -1, 5, -1, -1, -1, -1, 4, 4, 5 ]
[ "t-Ux0taTPYy", "nips_2021_sZu0b4WrElD", "tzlmEAeSdYC", "vOr7_17GK4B", "f7gzLM0ulyr", "Z7fm_eqrmp", "nips_2021_sZu0b4WrElD", "nips_2021_sZu0b4WrElD", "nips_2021_sZu0b4WrElD" ]
nips_2021_86iCmraCBL
Towards Tight Communication Lower Bounds for Distributed Optimisation
Janne H. Korhonen, Dan Alistarh
accept
The authors consider the communication complexity of minimizing a convex function defined on a $d$-dimensional box and distributed over $N$ machines. They show a lower bound that says the machines need to communicate roughly $Nd \log (1/\epsilon)$ bits of information to find an $\epsilon$-accurate solution. Despite raising some concerns on the validity of the problem setting, the reviewers appreciated the topic of the paper and the communication-complexity view. Consequently, I am recommending acceptance of the paper, but urge the authors to take the feedback of the reviewers into account when revising the paper. In particular, the non-standard choice of domain (box rather than ball) should be explained, the objective should be modified to the average of the functions rather than their sum, and the case of constant $\epsilon$ should be thoroughly discussed.
train
[ "_P72b-dbdYT", "NwMY8TOn0GE", "0jpLBrFm7YM", "KPzI3wFd3NY", "IznEOWbvGDA", "wajD5oMJ5w", "IqQ68mlrKwQ", "GlDc58xshnM", "NZuw26hIwLv", "bj6Cy6E8NWT", "XMUBMAZUc1b", "hBRpWDOXZwW", "YCMoxq5yxay", "tOHUlmhjUaL", "omUNSyrELaY" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your timely response, and for clarifying your regime of interest!\n\nIn fact, in this regime the upper bound on $\\varepsilon$ is indeed constant w.r.t. $N$, and we would argue that this is the case in most reasonable regimes. We detail this below:\n\nLet us assume that $d = N$. Then, our lower boun...
[ -1, -1, -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, 2, -1, -1, -1, -1, 3, 4 ]
[ "NwMY8TOn0GE", "0jpLBrFm7YM", "IznEOWbvGDA", "IqQ68mlrKwQ", "nips_2021_86iCmraCBL", "hBRpWDOXZwW", "GlDc58xshnM", "XMUBMAZUc1b", "nips_2021_86iCmraCBL", "IznEOWbvGDA", "tOHUlmhjUaL", "omUNSyrELaY", "NZuw26hIwLv", "nips_2021_86iCmraCBL", "nips_2021_86iCmraCBL" ]
nips_2021_gjBz22V93a
Fast Multi-Resolution Transformer Fine-tuning for Extreme Multi-label Text Classification
Extreme multi-label text classification (XMC) seeks to find relevant labels from an extremely large label collection for a given text input. Many real-world applications can be formulated as XMC problems, such as recommendation systems, document tagging and semantic search. Recently, transformer-based XMC methods, such as X-Transformer and LightXML, have shown significant improvement over other XMC methods. Despite leveraging pre-trained transformer models for text representation, the fine-tuning procedure of transformer models on large label spaces still incurs lengthy computation time even with powerful GPUs. In this paper, we propose a novel recursive approach, XR-Transformer, to accelerate the procedure through recursively fine-tuning transformer models on a series of multi-resolution objectives related to the original XMC objective function. Empirical results show that XR-Transformer takes significantly less training time compared to other transformer-based XMC models while yielding better state-of-the-art results. In particular, on the public Amazon-3M dataset with 3 million labels, XR-Transformer is not only 20x faster than X-Transformer but also improves the Precision@1 from 51% to 54%.
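The "series of multi-resolution objectives" can be pictured as label clusterings at several granularities: the model is first fine-tuned to predict coarse clusters and then warm-started on finer ones. The sketch below is a simplification (independent k-means per level; real XMC systems use balanced recursive bisection over sparse label embeddings), with illustrative names.

```python
import numpy as np
from sklearn.cluster import KMeans

def multiresolution_label_ids(label_emb, branch=4, n_levels=3):
    """Cluster label embeddings at increasingly fine resolutions; level l
    has branch**l clusters. Training can then proceed coarse-to-fine:
    fine-tune on level-1 'meta labels', warm-start level 2, and so on."""
    assignments = []
    for level in range(1, n_levels + 1):
        k = branch ** level
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(label_emb)
        assignments.append(km.labels_)
    return assignments

rng = np.random.default_rng(0)
label_emb = rng.normal(size=(256, 64))   # e.g. aggregated text features per label
for lv, ids in enumerate(multiresolution_label_ids(label_emb), start=1):
    print(f"level {lv}: {len(np.unique(ids))} clusters")
```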
accept
The paper introduces an improved version of a transformer-based algorithm for extreme multi-label classification. The empirical results show significant speed-ups and very competitive predictive performance. The paper is well-written and the proposed modifications are sound. The reviewers were satisfied by the responses given by the authors.
train
[ "9EU9_94HuP", "h3B79Y3Hxn9", "G-J3NA01t90", "Oha3B2KWesc", "qtGcWRPmTas", "wIqpPU8vyRq", "PF1Z44d-QyP", "wuYkS94HZx", "kqon0pfsmnx", "5wAYNND8lUR", "_AnSaDq5ge3", "RZC918ps895" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We really appreciate your valuable comments. Since the discussion period is ending soon, we would like to hear your thoughts after reviewing our additional experiments in https://openreview.net/forum?id=gjBz22V93a&noteId=kqon0pfsmnx ? \n\nSincerely,\n\nAuthors of this submission\n", " Thanks for your response. ...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "wuYkS94HZx", "Oha3B2KWesc", "nips_2021_gjBz22V93a", "PF1Z44d-QyP", "5wAYNND8lUR", "RZC918ps895", "G-J3NA01t90", "_AnSaDq5ge3", "nips_2021_gjBz22V93a", "nips_2021_gjBz22V93a", "nips_2021_gjBz22V93a", "nips_2021_gjBz22V93a" ]
nips_2021_DF8LCjR03tX
HRFormer: High-Resolution Vision Transformer for Dense Prediction
We present a High-Resolution Transformer (HRFormer) that learns high-resolution representations for dense prediction tasks, in contrast to the original Vision Transformer that produces low-resolution representations and has high memory and computational cost. We take advantage of the multi-resolution parallel design introduced in high-resolution convolutional networks (HRNet [45]), along with local-window self-attention that performs self-attention over small non-overlapping image windows [21], for improving the memory and computation efficiency. In addition, we introduce a convolution into the FFN to exchange information across the disconnected image windows. We demonstrate the effectiveness of the High-Resolution Transformer on both human pose estimation and semantic segmentation tasks, e.g., HRFormer outperforms Swin transformer [27] by 1.3 AP on COCO pose estimation with 50% fewer parameters and 30% fewer FLOPs. Code is available at: https://github.com/HRNet/HRFormer
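A rough PyTorch sketch of the block described above; it keeps only the two ingredients named in the abstract (window-local attention and a depth-wise convolution inside the FFN) and omits the multi-resolution parallel streams and normalization details, so treat names and hyperparameters as illustrative.

```python
import torch
import torch.nn as nn

class LocalWindowBlock(nn.Module):
    """Self-attention restricted to non-overlapping windows, then an FFN
    whose 3x3 depth-wise convolution exchanges information across the
    otherwise disconnected windows."""
    def __init__(self, dim=32, heads=4, window=7):
        super().__init__()
        self.window = window
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Conv2d(dim, 4 * dim, 1), nn.GELU(),
            nn.Conv2d(4 * dim, 4 * dim, 3, padding=1, groups=4 * dim),  # depth-wise
            nn.GELU(), nn.Conv2d(4 * dim, dim, 1))

    def forward(self, x):              # x: (B, C, H, W), H and W divisible by window
        B, C, H, W = x.shape
        w = self.window
        # partition the map into (B * num_windows, w*w, C) token sequences
        t = x.view(B, C, H // w, w, W // w, w).permute(0, 2, 4, 3, 5, 1)
        t = t.reshape(-1, w * w, C)
        q = self.norm(t)
        t = t + self.attn(q, q, q, need_weights=False)[0]
        # reverse the window partition back to a feature map
        t = t.view(B, H // w, W // w, w, w, C).permute(0, 5, 1, 3, 2, 4)
        x = t.reshape(B, C, H, W)
        return x + self.ffn(x)         # conv FFN mixes across window borders

print(LocalWindowBlock()(torch.randn(2, 32, 14, 14)).shape)
```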
accept
The authors propose a "high-resolution Transformer" architecture for dense prediction tasks using a multiresolution hierarchy and self-attention over local windows. All reviewers agreed that the ideas were sensible and technically sound, and that the experiments demonstrated improvement. There were concerns about missing comparisons (reviewers Jvmd, G26c) and missing ablations (tHJt, Jvmd), but the authors seem to have addressed all of these in rebuttal. There were also concerns of limited novelty (reviewers 143m, G26c), but I don't think this alone should disqualify the paper. Therefore I recommend acceptance.
train
[ "laxxfNd5kCd", "Lz5eVwwkHFE", "p2qbBh0OrQ8", "PFSkIQzVDi", "86TOERkDt3d", "aEgRGdU7Nh", "kjR0AMHRxuI", "g5RyjZRpZc7", "rykkgQzzWD", "3In0ebpXxMf", "LPrBny1FCV", "wh8KdkIByW", "e3NmwVYXAqr", "Ht3QqnAlifz", "m8pIjfPbivl" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper modifies HRNet replacing standard convolutional layers with a transformer block which consists of a local multi-head attention operation (on non-overlapping windows) followed by a residual block with a 3x3 depth-wise convolution. The proposed model is benchmarked on ImageNet, COCO pose estimation, and a...
[ 6, -1, -1, -1, -1, 5, -1, 7, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, -1, -1, 5, -1, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_DF8LCjR03tX", "laxxfNd5kCd", "3In0ebpXxMf", "rykkgQzzWD", "kjR0AMHRxuI", "nips_2021_DF8LCjR03tX", "Ht3QqnAlifz", "nips_2021_DF8LCjR03tX", "wh8KdkIByW", "e3NmwVYXAqr", "laxxfNd5kCd", "g5RyjZRpZc7", "m8pIjfPbivl", "aEgRGdU7Nh", "nips_2021_DF8LCjR03tX" ]
nips_2021_Fj6kQJbHwM9
Manifold Topology Divergence: a Framework for Comparing Data Manifolds.
We propose a framework for comparing data manifolds, aimed, in particular, towards the evaluation of deep generative models. We describe a novel tool, Cross-Barcode(P,Q), that, given a pair of distributions in a high-dimensional space, tracks multiscale topology spatial discrepancies between manifolds on which the distributions are concentrated. Based on the Cross-Barcode, we introduce the Manifold Topology Divergence score (MTop-Divergence) and apply it to assess the performance of deep generative models in various domains: images, 3D-shapes, time-series, and on different datasets: MNIST, Fashion MNIST, SVHN, CIFAR10, FFHQ, market stock data, ShapeNet. We demonstrate that the MTop-Divergence accurately detects various degrees of mode-dropping, intra-mode collapse, mode invention, and image disturbance. Our algorithm scales well (essentially linearly) with the increase of the dimension of the ambient high-dimensional space. It is one of the first TDA-based methodologies that can be applied universally to datasets of different sizes and dimensions, including the ones on which the most recent GANs in the visual domain are trained. The proposed method is domain agnostic and does not rely on pre-trained networks.
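A degree-0-only reading of the Cross-Barcode idea can be sketched with a minimum spanning tree: treat Q as already connected and record the scales at which points of P merge into it. The actual tool covers higher homology degrees via persistence computations, so the following is only an intuition-level approximation whose construction and normalization may differ from the paper's.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

def h0_cross_barcode(P, Q, eps=1e-12):
    """Degree-0 sketch: in the union of P and Q, give Q near-zero internal
    distances (Q is 'already connected') and read off, from the minimum
    spanning tree, the scales at which points of P merge into it."""
    X = np.vstack([P, Q])
    D = cdist(X, X)
    D[len(P):, len(P):] = eps          # collapse Q onto one component
    mst = minimum_spanning_tree(D).toarray()
    deaths = mst[mst > 10 * eps]       # H0 bars are [0, death)
    return np.sort(deaths)[::-1]

rng = np.random.default_rng(0)
P, Q = rng.normal(size=(300, 8)), rng.normal(size=(300, 8))
print("H0 MTop-Div sketch:", h0_cross_barcode(P, Q).sum())      # same law: small
print("shifted:", h0_cross_barcode(P + 3.0, Q).sum())           # mismatch: large
```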
accept
Congratulations, the paper is accepted to NeurIPS 2021! Please make an honest effort to make this paper more accessible to a general ML audience (non-experts in TDA). Clarify the barcode construction. Include persistent homology literature background. Elaborate on limitations. Please include other clarifications, edits and additions as discussed in the rebuttal and reviews.
test
[ "GmEM7ga917v", "SV_OV_ur-wd", "iUC6ua8QyqB", "ukepISUlUD1", "AAkERtv7EF7", "67dv0Obty3V", "fTqMDrz_mkw" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a method to compare two data manifolds based on topology. A persistent diagram is constructed using both sets of samples (cross-barcode), and the proposed metric is computed as the average length of segments in the cross-barcode. The efficiency of the method is demonstrated in several experimen...
[ 6, -1, -1, -1, -1, 7, 9 ]
[ 3, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_Fj6kQJbHwM9", "iUC6ua8QyqB", "GmEM7ga917v", "fTqMDrz_mkw", "67dv0Obty3V", "nips_2021_Fj6kQJbHwM9", "nips_2021_Fj6kQJbHwM9" ]
nips_2021_vrXuRmaU_jM
Weak-shot Fine-grained Classification via Similarity Transfer
Recognizing fine-grained categories remains a challenging task, due to the subtle distinctions among different subordinate categories, which results in the need for abundant annotated samples. To alleviate the data-hungry problem, we consider the problem of learning novel categories from web data with the support of a clean set of base categories, which is referred to as weak-shot learning. In this setting, we propose a method called SimTrans to transfer pairwise semantic similarity from base categories to novel categories. Specifically, we first train a similarity net on clean data, and then leverage the transferred similarity to denoise web training data using two simple yet effective strategies. In addition, we apply an adversarial loss on the similarity net to enhance the transferability of similarity. Comprehensive experiments demonstrate the effectiveness of our weak-shot setting and our SimTrans method.
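One of the denoising strategies, weighting noisy web images by a transferred similarity measure, can be illustrated with a stand-in: here plain cosine similarity to a class prototype replaces the learned similarity net, and the temperature is an arbitrary illustrative choice.

```python
import numpy as np

def denoise_weights(web_feats, proto, tau=10.0):
    """Weight noisy web images of a novel class by how similar they are
    to the class prototype; a placeholder for a learned similarity net."""
    f = web_feats / np.linalg.norm(web_feats, axis=1, keepdims=True)
    p = proto / np.linalg.norm(proto)
    sim = f @ p                                # cosine similarity per sample
    w = np.exp(tau * sim)
    return w / w.sum() * len(w)                # mean-one sample weights

rng = np.random.default_rng(0)
clean_dir = rng.normal(size=64)
inliers = clean_dir + 0.3 * rng.normal(size=(20, 64))
outliers = rng.normal(size=(5, 64))
w = denoise_weights(np.vstack([inliers, outliers]), clean_dir)
print(w[:20].mean(), w[20:].mean())            # inliers get larger weight
```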
accept
Following the rebuttal and discussion period, all reviewers lean towards acceptance of this work. The reviewers found the problem statement to be of practical importance and the paper well motivated and well written. Questions raised by QXjD and 4wp8 were adequately addressed during the rebuttal/discussion phase related to analysis of performance vs rarity of class and overall motivation for the work.
train
[ "UzXte_UQ76", "M9-oGrK051", "lK9IEJpBzjl", "ybo9sj1Y_iP", "0UN8XVKUK-", "87EbQ2ho6er" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a new setting called weak-shot fine-grained classification, which contains base classes with clean labels and novel classes with noisy labels. To boost the performance of the noisy novel classes, the paper adopts sample weighting and similarity regularization for better tackling the noisy issue...
[ 6, 6, -1, -1, -1, 7 ]
[ 5, 4, -1, -1, -1, 3 ]
[ "nips_2021_vrXuRmaU_jM", "nips_2021_vrXuRmaU_jM", "M9-oGrK051", "UzXte_UQ76", "87EbQ2ho6er", "nips_2021_vrXuRmaU_jM" ]
nips_2021_WybjTtCKfGi
Shape your Space: A Gaussian Mixture Regularization Approach to Deterministic Autoencoders
Variational Autoencoders (VAEs) are powerful probabilistic models to learn representations of complex data distributions. One important limitation of VAEs is the strong prior assumption that latent representations learned by the model follow a simple uni-modal Gaussian distribution. Further, the variational training procedure poses considerable practical challenges. Recently proposed regularized autoencoders offer a deterministic autoencoding framework, that simplifies the original VAE objective and is significantly easier to train. Since these models only provide weak control over the learned latent distribution, they require an ex-post density estimation step to generate samples comparable to those of VAEs. In this paper, we propose a simple and end-to-end trainable deterministic autoencoding framework, that efficiently shapes the latent space of the model during training and utilizes the capacity of expressive multi-modal latent distributions. The proposed training procedure provides direct evidence if the latent distribution adequately captures complex aspects of the encoded data. We show in experiments the expressiveness and sample quality of our model in various challenging continuous and discrete domains. An implementation is available at https://github.com/boschresearch/GMM_DAE.
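The end-to-end "shaping" idea above can be illustrated by adding the negative log-likelihood of the latent codes under a learnable diagonal Gaussian mixture to the reconstruction loss. The sketch below shows only that term (the paper additionally uses other machinery); parameter names are mine.

```python
import math
import torch

def gmm_nll(z, means, log_stds, weight_logits):
    """Negative log-likelihood of latent codes z under a diagonal
    Gaussian mixture with learnable parameters; adding it to the
    reconstruction loss shapes the latent space during training."""
    z = z.unsqueeze(1)                                   # (B, 1, D)
    var = (2 * log_stds).exp()                           # (K, D)
    log_comp = -0.5 * (((z - means) ** 2 / var)
                       + 2 * log_stds
                       + math.log(2 * math.pi)).sum(-1)  # (B, K)
    log_w = torch.log_softmax(weight_logits, dim=0)      # (K,) mixture weights
    return -torch.logsumexp(log_comp + log_w, dim=1).mean()

B, D, K = 32, 8, 5
z = torch.randn(B, D)                                    # pretend encoder outputs
means = torch.randn(K, D, requires_grad=True)
log_stds = torch.zeros(K, D, requires_grad=True)
weight_logits = torch.zeros(K, requires_grad=True)
gmm_nll(z, means, log_stds, weight_logits).backward()    # end-to-end trainable
print(means.grad.shape)
```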
accept
By the scores alone, this paper is just borderline. Reviewers had a number of questions and requests for clarification, which were responded to well during the author response period, and the response even included additional results against a VQ-VAE baseline; this led to multiple reviewers raising their scores. Leaving aside issues of clarity (which could be fixed by camera-ready, given the discussion here), the main concern across the reviewers is the degree of novelty and originality, given that on the one hand there are a number of other methods which use Gaussian mixture priors on VAEs, and on the other hand this is also a direct extension of Ghosh et al. 2020. The other concern is the somewhat heuristic use of an approximation to the KS statistic in the loss, a question of how sub-optimal the result is, and whether simply using a Gaussian mixture likelihood could lead to similar results.
train
[ "1qCLIaLhTn9", "QC01a9jqJ1", "Mc2ldqc8Gic", "AzrUBqlV5V", "w9jL7e8E6FX", "CCs8Dn2ZPJR", "9YD46cLzafy", "V36xlI2vF1Z", "w8W9XgTbgwf", "pQgPwoW9q9", "-wPed-Wk4p", "r_o1pdPIDlU", "96wkcuUPk2W", "xNOYfAIgY1k", "GJ4A0f9j9g9", "6XYId4PIcV", "mijbV6etV-D", "OsFYvqpEnBz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I have read the latest responses. Overall, I think the authors did put a lot of effort into this work and I have increased my score, but I cannot give higher. I still think the paper's originality and contribution are limited.", "This paper proposes to learn GMM priors in autoencoders, to be able to construct a...
[ -1, 6, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "AzrUBqlV5V", "nips_2021_WybjTtCKfGi", "CCs8Dn2ZPJR", "9YD46cLzafy", "6XYId4PIcV", "pQgPwoW9q9", "V36xlI2vF1Z", "96wkcuUPk2W", "nips_2021_WybjTtCKfGi", "-wPed-Wk4p", "xNOYfAIgY1k", "GJ4A0f9j9g9", "w8W9XgTbgwf", "QC01a9jqJ1", "OsFYvqpEnBz", "mijbV6etV-D", "nips_2021_WybjTtCKfGi", "n...
nips_2021_iBHiqlbFvLb
An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning
We present and analyze an algorithm for optimizing smooth and convex or strongly convex objectives using minibatch stochastic gradient estimates. The algorithm is optimal with respect to its dependence on both the minibatch size and minimum expected loss simultaneously. This improves over the optimal method of Lan, which is insensitive to the minimum expected loss; over the optimistic acceleration of Cotter et al., which has suboptimal dependence on the minibatch size; and over the algorithm of Liu and Belkin, which is limited to least squares problems and is also similarly suboptimal. Applied to interpolation learning, the improvement over Cotter et al. and Liu and Belkin translates to a linear, rather than square-root, parallelization speedup.
accept
All reviewers recommend accepting the paper. Please take the time to consider the reviewer's comments and update the paper when preparing the final version. In particular, please discuss the additional related references brought up in the reviews.
train
[ "JxNUpEbK6dc", "lbeY86hkdmB", "3RZ18gkDJcE", "52CVGZ-EBGx", "cnIRslquIKN", "ShhVbHW2gmF", "RgL06URE99z", "v5R9pRqR1Fr", "UpHzkl3gaSP" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. \nThese comments on related studies are useful and make the position of the paper clearer. I hope that they will be incorporated in the revised version. I don't think there are any comments in the other reviews which hurt the contribution of the paper. \nI would like to keep the score.", ...
[ -1, -1, -1, -1, -1, 7, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "3RZ18gkDJcE", "UpHzkl3gaSP", "v5R9pRqR1Fr", "RgL06URE99z", "ShhVbHW2gmF", "nips_2021_iBHiqlbFvLb", "nips_2021_iBHiqlbFvLb", "nips_2021_iBHiqlbFvLb", "nips_2021_iBHiqlbFvLb" ]
nips_2021_5J9sbGwZ9bC
Indexed Minimum Empirical Divergence for Unimodal Bandits
We consider a stochastic multi-armed bandit problem specified by a set of one-dimensional exponential family distributions endowed with a unimodal structure. The unimodal structure is of practical relevance for several applications. We introduce IMED-UB, an algorithm that provably optimally exploits the unimodal structure, by adapting to this setting the Indexed Minimum Empirical Divergence (IMED) algorithm introduced by Honda and Takemura (2015). Owing to our proof technique, we are able to provide a concise finite-time analysis of the IMED-UB algorithm that is simple and yet yields asymptotic optimality. We finally provide numerical experiments showing that IMED-UB competes favorably with the recently introduced state-of-the-art algorithms.
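A schematic of how such an index policy exploits unimodality: compute empirical means, restrict attention to the empirical best arm and its neighbors on the line graph, and pull the arm minimizing an IMED-style index. The sketch uses Gaussian rewards with known variance so the divergence is in closed form; the paper's exact index and candidate rule may differ.

```python
import numpy as np

def imed_ub_gaussian(means_true, T=5000, sigma=1.0, seed=0):
    """IMED-style index policy for a unimodal bandit on a line graph,
    Gaussian rewards: KL(a, b) = (mu_a - mu_b)^2 / (2 sigma^2)."""
    K = len(means_true)
    rng = np.random.default_rng(seed)
    n, s = np.zeros(K), np.zeros(K)
    for t in range(T):
        if t < K:
            a = t                                 # pull each arm once
        else:
            mu = s / n
            best = int(np.argmax(mu))
            cand = [c for c in (best - 1, best, best + 1) if 0 <= c < K]
            kl = (mu[best] - mu[cand]) ** 2 / (2 * sigma ** 2)
            index = n[cand] * kl + np.log(n[cand])
            a = cand[int(np.argmin(index))]       # minimize the index
        r = rng.normal(means_true[a], sigma)
        n[a] += 1; s[a] += r
    return n

pulls = imed_ub_gaussian(np.array([0.1, 0.4, 0.8, 0.5, 0.2]))  # unimodal means
print(pulls)   # most pulls should concentrate on arm 2
```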
accept
The committee has appreciated the technical value of this paper as well as its significance for the community. We strongly encourage the authors to take into account the remarks that were made on the presentation and writing for the final version of their work.
train
[ "r9MdFE7aM2j", "y6nI7VPdGy", "oNuNqTsh5BK", "ZPabHT18NNo", "ZPvqaC1yFDy", "k6L-opF34wE", "Y1YqvsAsHIM" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper tackles the problem of unimodal bandits where it looks at an algorithm based on the IMED index which follows the general idea of the OSUB algorithm. The main novelty of the approach is the analysis which does not rely on distinctions between exploration and exploitation rounds and the added benefit of th...
[ 6, -1, 7, -1, -1, -1, 6 ]
[ 3, -1, 4, -1, -1, -1, 3 ]
[ "nips_2021_5J9sbGwZ9bC", "oNuNqTsh5BK", "nips_2021_5J9sbGwZ9bC", "r9MdFE7aM2j", "Y1YqvsAsHIM", "oNuNqTsh5BK", "nips_2021_5J9sbGwZ9bC" ]
nips_2021_E5EoQqCVYX
SOAT: A Scene- and Object-Aware Transformer for Vision-and-Language Navigation
Natural language instructions for visual navigation often use scene descriptions (e.g., bedroom) and object references (e.g., green chairs) to provide a breadcrumb trail to a goal location. This work presents a transformer-based vision-and-language navigation (VLN) agent that uses two different visual encoders -- a scene classification network and an object detector -- which produce features that match these two distinct types of visual cues. In our method, scene features contribute high-level contextual information that supports object-level processing. With this design, our model is able to use vision-and-language pretraining (i.e., learning the alignment between images and text from large-scale web data) to substantially improve performance on the Room-to-Room (R2R) and Room-Across-Room (RxR) benchmarks. Specifically, our approach leads to improvements of 1.8% absolute in SPL on R2R and 3.7% absolute in SR on RxR. Our analysis reveals even larger gains for navigation instructions that contain six or more object references, which further suggests that our approach is better able to use object features and align them to references in the instructions.
accept
This paper presents a method for vision and language navigation based on transformers and leveraging information from object-level processing. The paper received favorable reviews and a consensus emerged very quickly on acceptance, in particular taking into account a convincing rebuttal. The AC concurs.
train
[ "e0RgV7gAYYc", "-i7LPvu6ksJ", "468iOkT8Iu6", "8ZxlmvklAkR", "MUTzDnelN_I", "453xctXwRb", "uWkep3S1TxE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a transformer based VLN model based on VLNBERT, which leverages both object-level features (from a Visual Genome pre-trained object detector) and scene-level features (from a Places pre-trained detector). The intuition is that natural language instructions found in the VLN task will contain bot...
[ 7, 7, 7, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1 ]
[ "nips_2021_E5EoQqCVYX", "nips_2021_E5EoQqCVYX", "nips_2021_E5EoQqCVYX", "uWkep3S1TxE", "e0RgV7gAYYc", "-i7LPvu6ksJ", "468iOkT8Iu6" ]
nips_2021_fpvUKdqcPV
A Normative and Biologically Plausible Algorithm for Independent Component Analysis
The brain effortlessly solves blind source separation (BSS) problems, but the algorithm it uses remains elusive. In signal processing, linear BSS problems are often solved by Independent Component Analysis (ICA). To serve as a model of a biological circuit, the ICA neural network (NN) must satisfy at least the following requirements: 1. The algorithm must operate in the online setting where data samples are streamed one at a time, and the NN computes the sources on the fly without storing any significant fraction of the data in memory. 2. The synaptic weight update is local, i.e., it depends only on the biophysical variables present in the vicinity of a synapse. Here, we propose a novel objective function for ICA from which we derive a biologically plausible NN, including both the neural architecture and the synaptic learning rules. Interestingly, our algorithm relies on modulating synaptic plasticity by the total activity of the output neurons. In the brain, this could be accomplished by neuromodulators, extracellular calcium, local field potential, or nitric oxide.
accept
Dear authors, reviewers have reached a clear positive consensus on your work, and I therefore endorse your paper for acceptance. Best regards, The AC
train
[ "fhb5wjNZkZz", "TPrT6hCNk9w", "KzsL3DJqe4t", "QsOJZGEYukY", "81zgjyiUb8c", "3QAsqQqTgC5", "UQT4KCB_c-j", "Lo4kFGhGGAt", "Fo53wPKYLXq", "BVTgRnazM7d", "5IUgG_Q6Vb3", "Osj-llJ0Xc0", "EmJxVKyXuA", "A98hZqOkvws", "ouEMsZCWQG", "GkFG8ZOvxF", "FuRTGgtxGJ8", "nXmoKQZTPuo", "ie4HhXu-dQ",...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Dear authors, I updated my review and score after your most recent response. I had misunderstood the semantics of $x_t$. I apologize for that error. ", "The paper proposes a biologically plausible neural network (BPNN) with an independent component analysis (ICA) objective loss function to solve blind source se...
[ -1, 7, -1, 7, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 8 ]
[ -1, 3, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "81zgjyiUb8c", "nips_2021_fpvUKdqcPV", "A98hZqOkvws", "nips_2021_fpvUKdqcPV", "3QAsqQqTgC5", "ouEMsZCWQG", "Fo53wPKYLXq", "nips_2021_fpvUKdqcPV", "ie4HhXu-dQ", "5IUgG_Q6Vb3", "O_dWB4-OALT", "Lo4kFGhGGAt", "HQIrxLRhmzc", "TPrT6hCNk9w", "GkFG8ZOvxF", "QsOJZGEYukY", "nips_2021_fpvUKdqcP...
nips_2021_eb0angdXVfR
Regret Bounds for Gaussian-Process Optimization in Large Domains
The goal of this paper is to characterize Gaussian-Process optimization in the setting where the function domain is large relative to the number of admissible function evaluations, i.e., where it is impossible to find the global optimum. We provide upper bounds on the suboptimality (Bayesian simple regret) of the solution found by optimization strategies that are closely related to the widely used expected improvement (EI) and upper confidence bound (UCB) algorithms. These regret bounds illuminate the relationship between the number of evaluations, the domain size (i.e. cardinality of finite domains / Lipschitz constant of the covariance function in continuous domains), and the optimality of the retrieved function value. In particular, we show that even when the number of evaluations is far too small to find the global optimum, we can find nontrivial function values (e.g. values that achieve a certain ratio with the optimal value).
accept
This manuscript considers the development of insightful regret bounds for Bayesian optimization when the domain is so large that we have effectively no hope of finding the global optimum. The analysis is novel and the results give insight into performance in this realistic setting. After a long discussion and continued engagement with the authors, there is consensus among the reviewers that the work is of high quality, relevant to the NeurIPS audience, and significant. There is also consensus that there is room for improvement in the current manuscript before publication, and I strongly recommend that the authors take the reviewers' comments and suggestions into account when preparing their camera-ready version. In particular, the empirical evaluation designed and carried out during the rebuttal phase should not be lost.
train
[ "6CXretrxweC", "8_3RhuK5k5p", "v-xkMlg9KoR", "WXDu8w4EMcN", "dtJkLgqOLo", "Y35yBLgEy3d", "9GO5R93bGxl", "MFW1AeAWKQb", "mWXXK4tvFC6", "DbMCcJ52yn6", "-8yGNnas1yN", "Yn6J68Xky22", "zjR9Hlkhgfl", "9iFF2pX67x6", "EXWg-m5GyAK", "RcUxYrOr1lT", "4bgdT1CYSno", "6ceP_3Bh-0", "wBcemQmANJ4...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Dear reviewer,\n\nThank you for your insightful comments! Conducting such experiments in the finite-domain setting would indeed be very interesting, since there are no constraints on the covariance matrix in this setting. We would still expect that EI2/UCB2 would require no more than 2x the number of evaluations ...
[ -1, -1, 7, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, 3, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "8_3RhuK5k5p", "WXDu8w4EMcN", "nips_2021_eb0angdXVfR", "9GO5R93bGxl", "nips_2021_eb0angdXVfR", "9GO5R93bGxl", "Jzsu9bIbIje", "9iFF2pX67x6", "nips_2021_eb0angdXVfR", "-8yGNnas1yN", "mWXXK4tvFC6", "v-xkMlg9KoR", "mWXXK4tvFC6", "mWXXK4tvFC6", "kcwNaS6Gtxl", "kcwNaS6Gtxl", "v-xkMlg9KoR",...