paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2021_UmCsy3C4xj | Learning Fast-Inference Bayesian Networks | We propose new methods for learning Bayesian networks (BNs) that reliably support fast inference. We utilize maximum state space size as a more fine-grained measure for the BN's reasoning complexity than the standard treewidth measure, thereby accommodating the possibility that variables range over domains of different sizes. Our methods combine heuristic BN structure learning algorithms with the recently introduced MaxSAT-powered local improvement method (Peruvemba Ramaswamy and Szeider, AAAI'21). Our experiments show that our new learning methods produce BNs that support significantly faster exact probabilistic inference than BNs learned with treewidth bounds.
| accept | This manuscript proposes to use a measure based on the max. "bag" size (product of cardinalities) when learning Bayesian networks, instead of treewidth. From a theoretical perspective, this is unimportant: results of hardness hold when all variables have the same cardinality, and one can easily convert nets if needed to/from that. However, the practical results of using the bag size (which is obviously a better measure than treewidth of the graph, no doubt of that, everybody should know it) can be interesting and useful. The novelty and theoretical contribution are not so significant, but on the other hand it is fair to say that someone needs come forward and argue that we should be using bag size instead of treewidth and make that claim work properly. | train | [
"bJKA3InhXXQ",
"1U6mesn1s9",
"AtoYRjpnHW_",
"a4GrmgrB9BA",
"kkHEXcP7Orv",
"gYGkJPRkYIN",
"ve3eWVVkj7",
"dVswWmIGFgi",
"jDKKn7q4Rha"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This is a paper about the structural learning of Bayesian networks. Analogously to existing methods trying to learning models with bounded treewidth, the authors here are interested in learning models with bounded \"state space\". Of course the two descriptors differ for non-binary models, the second providing a m... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2021_UmCsy3C4xj",
"kkHEXcP7Orv",
"bJKA3InhXXQ",
"dVswWmIGFgi",
"jDKKn7q4Rha",
"ve3eWVVkj7",
"nips_2021_UmCsy3C4xj",
"nips_2021_UmCsy3C4xj",
"nips_2021_UmCsy3C4xj"
] |
nips_2021_0lz69oI5iZP | Per-Pixel Classification is Not All You Need for Semantic Segmentation | Modern approaches typically formulate semantic segmentation as a per-pixel classification task, while instance-level segmentation is handled with an alternative mask classification. Our key insight: mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks in a unified manner using the exact same model, loss, and training procedure. Following this observation, we propose MaskFormer, a simple mask classification model which predicts a set of binary masks, each associated with a single global class label prediction. Overall, the proposed mask classification-based method simplifies the landscape of effective approaches to semantic and panoptic segmentation tasks and shows excellent empirical results. In particular, we observe that MaskFormer outperforms per-pixel classification baselines when the number of classes is large. Our mask classification-based method outperforms both current state-of-the-art semantic (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) models.
| accept | Reviewers are unanimously positive about this paper and recommend to accept it. I see no reason to overturn their decision.
The paper is well written, with a clear message and good results. The authors are encouraged to include some of the discussions related to Max-DeepLab and DETR in the main paper (if they can find space for it). | train | [
"R0ZW-v3SZrO",
"nSE4reqtdS",
"ce0yvsiDhzA",
"GQ3rW9dJ8uF",
"HjIVPjO_san",
"jvda2W58Tpn",
"gOTPdAq0UpI",
"nZ6TrVqc5r4",
"ZX1iR4yzu4z",
"vESq3tRjxx4",
"zYD-5skU6YI",
"j3rZG1VttN6"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" My concerns are well addressed in the rebuttal. Thus I would like to keep my initial score unchanged. I hope the extra results would be added to the modified manuscript.",
" I have read the responses from the authors and appreciate the feedback. I have raised my rating to accept the paper. ",
"This paper disc... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"ZX1iR4yzu4z",
"gOTPdAq0UpI",
"nips_2021_0lz69oI5iZP",
"nZ6TrVqc5r4",
"j3rZG1VttN6",
"j3rZG1VttN6",
"ce0yvsiDhzA",
"zYD-5skU6YI",
"vESq3tRjxx4",
"nips_2021_0lz69oI5iZP",
"nips_2021_0lz69oI5iZP",
"nips_2021_0lz69oI5iZP"
] |
nips_2021_ekVPXh9tYkL | Deep Markov Factor Analysis: Towards Concurrent Temporal and Spatial Analysis of fMRI Data | Factor analysis methods have been widely used in neuroimaging to transfer high dimensional imaging data into low dimensional, ideally interpretable representations. However, most of these methods overlook the highly nonlinear and complex temporal dynamics of neural processes when factorizing their imaging data. In this paper, we present deep Markov factor analysis (DMFA), a generative model that employs Markov property in a chain of low dimensional temporal embeddings together with spatial inductive assumptions, all related through neural networks, to capture temporal dynamics in functional magnetic resonance imaging (fMRI) data, and tackle their high spatial dimensionality, respectively. Augmented with a discrete latent, DMFA is able to cluster fMRI data in its low dimensional temporal embedding with regard to subject and cognitive state variability, therefore, enables validation of a variety of fMRI-driven neuroscientific hypotheses. Experimental results on both synthetic and real fMRI data demonstrate the capacity of DMFA in revealing interpretable clusters and capturing nonlinear temporal dependencies in these high dimensional imaging data.
| accept | Dear authors,
the overall consensus is that your work has merit in terms of method and experiments
although there are a number of concerns about some limited comparison with alternative
baseline approaches. Despite this concern I am inclined to flip the decision on the
positive side as code is available and you can still insert some numbers in the camera
ready as proposed.
Best regards,
The AC
| train | [
"zaf1UNBklnD",
"nt_E3qZPmfF",
"9YL0YCKo53",
"8vqiD0Un3gR",
"h2FSH79yd6L",
"R_QWC7SSjYD",
"rLqHwe0qw5k",
"LbSuc2qzc2L",
"ZvhpXtBeI3i",
"WbDDU7tndUu",
"_hkrnwOvAyv",
"REQiQBUeV8V",
"6aiY9powGp",
"4jDYpxtl_4T",
"QIzE5hUtupu",
"E80NKLPvDOZ",
"2G-uUkCzPY-",
"K2YCsVzyNb",
"gubKz-zkPB",... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"a... | [
"The paper presents a deep Markov factor analysis (DMFA), which is a generative model that employs Markov property in a chain of low dimensional temporal embeddings as well as spatial inductive assumptions in order to capture temporal dynamics in functional magnetic resonance imaging (fMRI). The paper shows that DM... | [
7,
-1,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_ekVPXh9tYkL",
"ZvhpXtBeI3i",
"nips_2021_ekVPXh9tYkL",
"rLqHwe0qw5k",
"nips_2021_ekVPXh9tYkL",
"2G-uUkCzPY-",
"WbDDU7tndUu",
"_hkrnwOvAyv",
"zaf1UNBklnD",
"LbSuc2qzc2L",
"K2YCsVzyNb",
"nips_2021_ekVPXh9tYkL",
"nips_2021_ekVPXh9tYkL",
"6aiY9powGp",
"REQiQBUeV8V",
"zaf1UNBklnD"... |
nips_2021_zImiB39pyUL | BooVAE: Boosting Approach for Continual Learning of VAE | Variational autoencoder (VAE) is a deep generative model for unsupervised learning, allowing to encode observations into the meaningful latent space. VAE is prone to catastrophic forgetting when tasks arrive sequentially, and only the data for the current one is available. We address this problem of continual learning for VAEs. It is known that the choice of the prior distribution over the latent space is crucial for VAE in the non-continual setting. We argue that it can also be helpful to avoid catastrophic forgetting. We learn the approximation of the aggregated posterior as a prior for each task. This approximation is parametrised as an additive mixture of distributions induced by an encoder evaluated at trainable pseudo-inputs. We use a greedy boosting-like approach with entropy regularisation to learn the components. This method encourages components diversity, which is essential as we aim at memorising the current task with the fewest components possible. Based on the learnable prior, we introduce an end-to-end approach for continual learning of VAEs and provide empirical studies on commonly used benchmarks (MNIST, Fashion MNIST, NotMNIST) and CelebA datasets. For each dataset, the proposed method avoids catastrophic forgetting in a fully automatic way.
| accept | The submission proposes a learnable prior for continual learning of a VAE to retain performance on previous tasks. The reviewers were unanimous that the paper is above the threshold for acceptance at NeurIPS. Quoting from Reviewer T9Fg "The idea is quite innovative. It is a significant improvement over coreset selection, the algorithm that it most resembles." The reviewers also appreciated both the theoretical and empirical results in the paper. From Reviewer w1MQ "The experimental evaluation is extensive and well-structured: the setups and protocols are clear; competitors and baselines adopted by the authors provide insightful suggestions and remarks to the reader; last but not least, the results are good." Some concerns remained, including some language typos, and clarity of presentation (comments by T9Fg, w1MQ - section 3, and 1sJx). | train | [
"-Ijl4rQASDo",
"PKu0ngWSWFA",
"FHISZ0OEvK",
"OFEIIt-_JRg",
"QDz7BTTpvJ",
"P6nZVQL7KcX",
"vt0NyDa8UmC",
"P4l1gcF1Bd2",
"m8fwDhsTS3t",
"yq4HFbTlWkJ",
"MRpRRYi-iVR",
"Gk7d_paDy4v",
"UeRxESqk5H8"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time! We are encouraged that we've managed to address your concerns.",
" My final score is 6 and I see it correctly in the system. I hope this helps.",
" Thank you for the time you spent with the review and the rebuttal! We note that despite your updated score, we don't see changes. We are ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"OFEIIt-_JRg",
"FHISZ0OEvK",
"QDz7BTTpvJ",
"P4l1gcF1Bd2",
"P6nZVQL7KcX",
"UeRxESqk5H8",
"Gk7d_paDy4v",
"MRpRRYi-iVR",
"yq4HFbTlWkJ",
"nips_2021_zImiB39pyUL",
"nips_2021_zImiB39pyUL",
"nips_2021_zImiB39pyUL",
"nips_2021_zImiB39pyUL"
] |
nips_2021_d7skOEQClK | Handling Long-tailed Feature Distribution in AdderNets | Adder neural networks (ANNs) are designed for low energy cost which replace expensive multiplications in convolutional neural networks (CNNs) with cheaper additions to yield energy-efficient neural networks and hardware accelerations. Although ANNs achieve satisfactory efficiency, there exist gaps between ANNs and CNNs where the accuracy of ANNs can hardly be compared to CNNs without the assistance of other training tricks, such as knowledge distillation. The inherent discrepancy lies in the similarity measurement between filters and features, however how to alleviate this difference remains unexplored. To locate the potential problem of ANNs, we focus on the property difference due to similarity measurement. We demonstrate that unordered heavy tails in ANNs could be the key component which prevents ANNs from achieving superior classification performance since fatter tails tend to overlap in feature space. Through pre-defining Multivariate Skew Laplace distributions and embedding feature distributions into the loss function, ANN features can be fully controlled and designed for various properties. We further present a novel method for tackling existing heavy tails in ANNs with only a modification of classifier where ANN features are clustered with their tails well-formulated through proposed angle-based constraint on the distribution parameters to encourage high diversity of tails. Experiments conducted on several benchmarks and comparison with other distributions demonstrate the effectiveness of proposed approach for boosting the performance of ANNs.
| accept | The manuscript has been reviewed by four experienced reviewers, all of whom acknowledged the contributions of the submission and recommend acceptance. Specifically, the reviewers agree that the manuscript is easy to follow and the proposed method is novel.
Since there is no attempt to reject the submission, there is no basis to overturn the consensus. The AC thus recommends an acceptance. | test | [
"Bl_CEYu18UV",
"4OSJHhh-MM",
"9Jd0scNCah7",
"lBszWMFC3dB",
"dTUV9w59NdS",
"vd1VErdWqez",
"3z6xe_ovku9",
"1egVWqXgKaa",
"SFhQ62jGiUj"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper investigates the performance gap between ANNs and CNNs in classification task and focuses on the heavy-tailed feature distributions in ANNs. To alleviate this problem, this paper proposes to pre-define ANN features to follow a mixture of Multivariate Skew Laplace distributions, and introduces an angle-b... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"nips_2021_d7skOEQClK",
"lBszWMFC3dB",
"Bl_CEYu18UV",
"SFhQ62jGiUj",
"1egVWqXgKaa",
"3z6xe_ovku9",
"nips_2021_d7skOEQClK",
"nips_2021_d7skOEQClK",
"nips_2021_d7skOEQClK"
] |
nips_2021_Ww1e07fy9fC | Pessimism Meets Invariance: Provably Efficient Offline Mean-Field Multi-Agent RL | Minshuo Chen, Yan Li, Ethan Wang, Zhuoran Yang, Zhaoran Wang, Tuo Zhao | accept | The reviewers raised some concerns with regard to lack of experimental results in this paper. Despite that they agree that the paper considers an interesting and relevant problem and it provides non-trivial theoretical results in terms of sample complexity of the proposed pessimistic method. Please provide the simple illustrative experiments in the final version. | train | [
"zaCY5VwH4G3",
"y3B7WNrQSG",
"AKemAKSfUx",
"bITSlMcW7_t",
"5Abxkk4T-o",
"_qPZmu6aF3R",
"yZJPZagvHy4",
"DEKKTXhd9s",
"nalxxSASMS6",
"GRCSIP0kbq8",
"CO3L7cQtJmb",
"Zgxq7qA5TT",
"besNOYFjhCG",
"jWmfYh80fjh"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed explanation. ",
" Dear Reviewers:\n\nThank you for your feedback. \n\n[15] (Breaking the curse of many agents: Provable mean embedding q-iteration for mean-field reinforcement learning) studies a similar mean-field reinforcement learning setting and also adopts the idea of mean embed... | [
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
-1,
-1,
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"y3B7WNrQSG",
"AKemAKSfUx",
"Zgxq7qA5TT",
"nips_2021_Ww1e07fy9fC",
"GRCSIP0kbq8",
"DEKKTXhd9s",
"nips_2021_Ww1e07fy9fC",
"CO3L7cQtJmb",
"besNOYFjhCG",
"bITSlMcW7_t",
"yZJPZagvHy4",
"jWmfYh80fjh",
"nips_2021_Ww1e07fy9fC",
"nips_2021_Ww1e07fy9fC"
] |
nips_2021_6veB3MCD-bu | A Law of Iterated Logarithm for Multi-Agent Reinforcement Learning | Gugan Chandrashekhar Thoppe, Bhumesh Kumar | accept | This paper studies a family of distributed nonlinear stochastic approximation for MARL, and derives a law of iterated logarithm, which describes the convergence rate on the converging sample path. It also presents an application using TD(0) with linear function approximation, by establishing the convergence rate and showing that it does not depend on the interaction graph. All reviewers are convinced on the novelty and contribution of this paper. Reviewers still recommend authors to take their advices in restructuring certain part of the paper to make it more accessible to the general RL community. | train | [
"rU0TS73-h2W",
"8M_IA9_ZupM",
"rFbobO5PMcr",
"xRDh7S181ey",
"9ubxLoSHTqy",
"V8MOljMjRSZ",
"BMx7-Lih84",
"cM5ndETAIcr",
"8097mu8N7Mh",
"P_mQQX1Ea_4",
"f36ioMXyHYe",
"nDw0EbO4GPA",
"lMoXkj_IZo2",
"Oz1G1MB3nFQ"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nWe sincerely thank you for your constructive feedback. Indeed, we will incorporate your suggestions. ",
" I thank the authors for the responses. I went through the comments and rebuttals. I would suggest incorporating the more intuition and concrete application examples to the revision. ",
"... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
-1,
-1,
2,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
3
] | [
"8M_IA9_ZupM",
"nDw0EbO4GPA",
"V8MOljMjRSZ",
"BMx7-Lih84",
"nips_2021_6veB3MCD-bu",
"f36ioMXyHYe",
"8097mu8N7Mh",
"nips_2021_6veB3MCD-bu",
"cM5ndETAIcr",
"Oz1G1MB3nFQ",
"9ubxLoSHTqy",
"lMoXkj_IZo2",
"nips_2021_6veB3MCD-bu",
"nips_2021_6veB3MCD-bu"
] |
nips_2021_x4oe1W8Hpl3 | MOMA: Multi-Object Multi-Actor Activity Parsing | Complex activities often involve multiple humans utilizing different objects to complete actions (e.g., in healthcare settings, physicians, nurses, and patients interact with each other and various medical devices). Recognizing activities poses a challenge that requires a detailed understanding of actors' roles, objects' affordances, and their associated relationships. Furthermore, these purposeful activities are composed of multiple achievable steps, including sub-activities and atomic actions, which jointly define a hierarchy of action parts. This paper introduces Activity Parsing as the overarching task of temporal segmentation and classification of activities, sub-activities, atomic actions, along with an instance-level understanding of actors, objects, and their relationships in videos. Involving multiple entities (actors and objects), we argue that traditional pair-wise relationships, often used in scene or action graphs, do not appropriately represent the dynamics between them. Hence, we introduce Action Hypergraph, a spatial-temporal graph containing hyperedges (i.e., edges with higher-order relationships), as a new representation. In addition, we introduce Multi-Object Multi-Actor (MOMA), the first benchmark and dataset dedicated to activity parsing. Lastly, to parse a video, we propose the HyperGraph Activity Parsing (HGAP) network, which outperforms several baselines, including those based on regular graphs and raw video data.
| accept | The authors introduce action hypergraph and an associated task (activity parsing), as well as a new dataset (MOMA) for complex activity recognition. All of the reviewers are positive, and find the representation has certain novelty and the dataset will be valuable to the community. The AC concurs that this paper is acceptable, and recommends a poster.
| val | [
"1DZCC7MDApq",
"GEkvJCSpl_Q",
"B-sP5wnEcaj",
"VDOELAQ-ug9",
"yoJsEXfKUt-",
"za8K-mw-Nig",
"gRHKxzTrSDG",
"30mm1oJF2D4",
"V5LRGWCKLnp",
"CiLRHk2hssi"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After going through all reviews and authors' responses, I keep my positive rating.",
" The author's rebuttal gave answers to my listed three questions. For the last two questions, using GT boxes instead of learning from raw videos, and inferior \"Role classification\" results in Table 4, the authors gave their ... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
2,
4,
5
] | [
"yoJsEXfKUt-",
"za8K-mw-Nig",
"nips_2021_x4oe1W8Hpl3",
"30mm1oJF2D4",
"CiLRHk2hssi",
"B-sP5wnEcaj",
"V5LRGWCKLnp",
"nips_2021_x4oe1W8Hpl3",
"nips_2021_x4oe1W8Hpl3",
"nips_2021_x4oe1W8Hpl3"
] |
nips_2021_BbSPfmZqs4B | The Pareto Frontier of model selection for general Contextual Bandits | Teodor Vanislavov Marinov, Julian Zimmert | accept | The paper studies model selection in contextual bandits (and related problems) and the main result is a lower bound showing that "proper" algorithms cannot obtain the model selection guarantee conjecture in [FKL20]. A consequence is that the second order bound conjecture by Freund in a COLT open problem is not achievable.
The result is very interesting and after a careful reading along with discussions with the reviewers, we believe it is correct. It is clearly a very important result and represents significant progress in our understanding of online learning and bandit problems. As such the reviewers and I agree the paper should be accepted.
However, the paper was very hard to read, particularly regarding the proof of the lower bounds. My thoughts/suggestions here:
- I do not think the calculations for wrapping up the proof are very illuminating, e.g., paragraph block labeled "Proof of theorem 3" and "proof of theorem 4." I would prefer to see these replaced with a more qualitative/conceptual explanation of what is going on in the proof. Obviously the calculations should be included in the appendix, but I feel something more conceptual might be helpful to convince a reader.
- I found it helpful to think about what an algorithm might do to convince myself. For example, for Theorem 3 the algorithm basically has two options: (1) it can choose to play only arm i=K or (2) it can essentially explore uniformly in arms [K-1] for N_max rounds. In the former case, the environment will not switch, which is a contradiction of the algorithm's guarantee, while in (2) it incurs a lot of regret due to Lemma 1. Then you can do the calculation showing how you set \Delta to ensure that both of these cases work out to "confirm" the theorem. This is not a proof but I think provides more intuition than the calculations provided at present.
For the stochastic lower bound I think it is worth highlighting the following steps (this is also how I understand the proof):
- A proper learner means that there is no "cherry-picking" on contexts, so the statistical lower bound is a "passive learning" lower bound
- The construction is set up so that the lower bound part is essentially full information. In particular the algorithm always knows the loss for action 3 and the realized losses for actions 1 and 2 are coupled so that when you "explore" you are in a full information setting.
- Thus we can reduce to "standard" full-information passive learning problems. The flavor of such problem is closely related to work on "estimating learnability" [1] or "tolerant testing" [2]. This is a very clever reduction and highlights the role of the proper learning assumption.
- The specific learning problem being reduced to is essentially the problem of "estimating sparsity" in linear regression (see also the part on learning dictators in [2]), in particular where the sparsity is just 1. In fact I think Lemma 2 is very similar to Theorem 4.5 in [3] (although that theorem/paper is also quite hard to read). It would be great to mention this, so that readers who know this work (or who trust that result) can quickly convince themselves that your result is correct.
- Actually the results in [3] demonstrate something quite interesting that I think would be worth pointing out here. If I understand it correctly, their results show that a slightly different construction does not completely work for the model selection lower bound. Suppose that the means are given as in your construction, but rather than Bernoulli noise, the noise is Gaussian. Then the statistic \frac{1}{n} \sum_i (\ell_{i,1} - 1/2)^2 actually provides a better testing sample complexity bound than the lower bound in Lemma 2. The key here is that the variances under both hypothesis are the same, so the difference in mean can be detected by looking at the second moment. On the other hand, the Bernoulli construction is closer to the "unknown variance" setting in [3] where the mean difference is washed out by changing the variance. (You can see that in [3] the complexity of the "known variance" and "unknown variance" problems are actually different.)
- This is more or less how I convinced myself that the proof is correct.
At a high level, I think it would be great to re-write the lower bound section to capture the following two properties: (1) some hand-wavy way to convince an expert that the result is likely correct, without them having to read the technical details (e.g., by citing related results and maybe following the recipe above), and (2) some intuition based more on algorithmic considerations which I think are easier for readers to understand conceptually.
Finally, regarding proper vs improper algorithms. The reviewer raised a concern here on the actual definition but additionally there are many contextual bandit algorithms that are formally improper (e.g., mixing uniform exploration makes an algorithm improper). Additionally, maybe you can use the lower bound argument on active tolerant testing of dictators in [2] to actually get a lower bound against improper algorithms? Otherwise I think it is fine to conjecture this, but I might slightly tone down the claims about solving the open problems in [FKL20]. If I understand it correctly, the present paper formally resolves Open Problem 1 (and 1a, 1b) in [FKL20] but does not formally resolve open problem 2 due to the proper/improper learning issue. I am aware of some some folklore constructions that critically rely on cherry-picking/active learning/improperness to obtain model selection guarantees in some special cases, so I personally would be more hesitant to make these claims.
In addition to my suggestions, I hope the authors incorporate feedback from the reviewers, who I know spent quite a lot of time studying the paper and understanding the details.
References:
1. Kong, Valiant. Estimating learnability in the sublinear data regime. https://arxiv.org/abs/1805.01626
2. Balcan, Blais, Blum, Yang. Active tolerant testing. http://www.cs.cmu.edu/~ninamf/papers/active-testing.pdf
3. Ingster, Tsybakov, Verzelen. Detection boundary in sparse regression. https://arxiv.org/abs/1009.1706
Please also follow references to find the related papers to these. I know there are others.
| test | [
"T_Otjl-3UlN",
"CIgHzvweAMf",
"tmETjpT8A6t",
"jL5fBtx5YTf",
"3ma5UEikBox",
"ZhlDbrb6dh0",
"rQ8xRns4U0D",
"8C0zoMY1DOe",
"DUej9Sy69_-",
"UPnoiHps4zN",
"1NDOPchWMm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies the model selection problem in contextual bandits. Lower bounds for the model selection problem, the switching rewards problem, and the second-order bound in the full-information game are provided, which resolve several open problems. Lower bounds established in this paper resolve several open ... | [
8,
5,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_BbSPfmZqs4B",
"nips_2021_BbSPfmZqs4B",
"8C0zoMY1DOe",
"ZhlDbrb6dh0",
"nips_2021_BbSPfmZqs4B",
"UPnoiHps4zN",
"T_Otjl-3UlN",
"CIgHzvweAMf",
"1NDOPchWMm",
"3ma5UEikBox",
"nips_2021_BbSPfmZqs4B"
] |
nips_2021_ZkGfZLEXZ20 | Teaching an Active Learner with Contrastive Examples | Chaoqi Wang, Adish Singla, Yuxin Chen | accept | This paper proposes a novel learning setup, provides some initial sample and query complexity analysis as well as an empirical evaluation on a synthetic dataset. The authors propose a merge of active learning and machine teaching. In their framework, a learner gets to query an example, and the teacher responds with a label and an additional piece of information. Here, the teacher also provides a "contrastive example", that is an example that is similar to the query but is from a different label class or one that is dissimilar but from the same label class.
The reviewers appreciated the clarity and novelty in this submission. Some concerns regarding correctness were raised, but resolved during the discussion phase. The reviewers also raised some concerns about motivating this particular setup.
Overall, this seems to be a solid contribution, that provides a novel framework and initial analysis. The work will likely be of interest to researchers in the neurips community who focus on active learning, query learning, or machine teaching. | train | [
"E5yvIb2tlLp",
"DcE0GG-Nd7",
"UwWzRDWBFkY",
"30ohA7wPV9U",
"X9-3iGZ38X",
"GXmRLeBlwyf",
"KLVOVwJJCB9",
"X30eqPR0cQD",
"UaYTp0LkXWF",
"08JDQqcfMJ",
"iUxzS4nNdD",
"gqn5KgLW7F3",
"jE7IW45szTx",
"nCC2ZkjdK5",
"FJkojbPwCo",
"webJQoVe1W",
"oiyKCkK4Fhy",
"pqSB0ct85p",
"ZUiRffpQAHz",
"... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" \nWe thank all the reviewers for their careful reviews and positive comments, including: (Reviewer PbTC) “the idea of using contrastive examples is interesting”, (Reviewer NhnX) “the paper has good originality and clarity” and (Reviewer j9HW) “the problem setting addressed is interesting and novel… the submission... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_ZkGfZLEXZ20",
"30ohA7wPV9U",
"nips_2021_ZkGfZLEXZ20",
"GXmRLeBlwyf",
"UaYTp0LkXWF",
"KLVOVwJJCB9",
"X30eqPR0cQD",
"iUxzS4nNdD",
"nips_2021_ZkGfZLEXZ20",
"iUxzS4nNdD",
"gqn5KgLW7F3",
"jE7IW45szTx",
"nCC2ZkjdK5",
"FJkojbPwCo",
"pqSB0ct85p",
"UaYTp0LkXWF",
"HXt32dHSBP",
"Uw... |
nips_2021_h7-XixPCAL | Structured Denoising Diffusion Models in Discrete State-Spaces | Denoising diffusion probabilistic models (DDPMs) [Ho et al. 2021] have shown impressive results on image and waveform generation in continuous state spaces. Here, we introduce Discrete Denoising Diffusion Probabilistic Models (D3PMs), diffusion-like generative models for discrete data that generalize the multinomial diffusion model of Hoogeboom et al. [2021], by going beyond corruption processes with uniform transition probabilities. This includes corruption with transition matrices that mimic Gaussian kernels in continuous space, matrices based on nearest neighbors in embedding space, and matrices that introduce absorbing states. The third allows us to draw a connection between diffusion models and autoregressive and mask-based generative models. We show that the choice of transition matrix is an important design decision that leads to improved results in image and text domains. We also introduce a new loss function that combines the variational lower bound with an auxiliary cross entropy loss. For text, this model class achieves strong results on character-level text generation while scaling to large vocabularies on LM1B. On the image dataset CIFAR-10, our models approach the sample quality and exceed the log-likelihood of the continuous-space DDPM model.
| accept | This paper presents a framework for learning denoising diffusion models in discrete space. Although there are valid criticisms around the novelty of the discrete denoising diffusion models and the inferior performance compared to the autoregressive models (in sampling time and NLL), the main novelty of this work is around the clever design of Markov transition matrices. Given the importance of both modeling distributions over discrete variables and text modeling without an autoregressive structure, I am recommending this paper for acceptance. | train | [
"Xueqr2S94B",
"-uPiECDH8M_",
"r9StHDRpdgK",
"YcEy0kF7Rx_",
"KCpD5wTEiG_",
"idFIxFzylBZ",
"GJri_xgdAm",
"IkAxLVj74xd",
"0ggDYDulXyi",
"jXnggU6bojN",
"eVoY7ZXtKTj",
"7s6IOyZYR3S",
"F2kTMgOraKu",
"I3DY5zLcNk"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper generalizes diffusion models from Bernoulli distributions to categorical distributions. The proposed method supersedes masked training in BERTS and autoregressive learning. Authors examined different designs of the transition matrices and demonstrated the proposed method in both language and image model... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"nips_2021_h7-XixPCAL",
"r9StHDRpdgK",
"YcEy0kF7Rx_",
"IkAxLVj74xd",
"F2kTMgOraKu",
"nips_2021_h7-XixPCAL",
"I3DY5zLcNk",
"Xueqr2S94B",
"F2kTMgOraKu",
"7s6IOyZYR3S",
"nips_2021_h7-XixPCAL",
"nips_2021_h7-XixPCAL",
"nips_2021_h7-XixPCAL",
"nips_2021_h7-XixPCAL"
] |
nips_2021_yq5MYHVaClG | Emergent Communication of Generalizations | To build agents that can collaborate effectively with others, recent research has trained artificial agents to communicate with each other in Lewis-style referential games. However, this often leads to successful but uninterpretable communication. We argue that this is due to the game objective: communicating about a single object in a shared visual context is prone to overfitting and does not encourage language useful beyond concrete reference. In contrast, human language conveys a rich variety of abstract ideas. To promote such skills, we propose games that require communicating generalizations over sets of objects representing abstract visual concepts, optionally with separate contexts for each agent. We find that these games greatly improve systematicity and interpretability of the learned languages, according to several metrics in the literature. Finally, we propose a method for identifying logical operations embedded in the emergent languages by learning an approximate compositional reconstruction of the language.
| accept | This work points out limitations of existing reference games, and proposes using communication for generalization over (concept-level) target sets instead of single targets. Different types of targets, distractors and target sets are explored. The reviewers agree the current work presents several good and useful contributions to the field of emergent language. The authors did an excellent job in addressing any remaining concerns in the rebuttal. | train | [
"bTZIdv2c9x2",
"im9-Mm2Z5l",
"Z-HhDacK5Ox",
"QIdv6OKKl91",
"V-le5QqcfRF",
"ExWZPEWFjOg",
"RuU6vNOJNTH",
"1w5aG3yCiO4",
"aHlP1BEf76C",
"m50-RjBUexn",
"BeaeAhAT1b8",
"dnSW5Frl2D",
"EJeeFKJFScM",
"XoDiYOfXPQl",
"rTc637UZBPn",
"XZYDaQfmj0Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We'd like to thank all reviewers for providing detailed and constructive reviews, and additionally for engaging with our responses during the rebuttal phase. Each reviewer has provided concrete suggestions that will improve the quality of our paper!",
"In the referential game setup that is widely used in the em... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_yq5MYHVaClG",
"nips_2021_yq5MYHVaClG",
"BeaeAhAT1b8",
"aHlP1BEf76C",
"ExWZPEWFjOg",
"XZYDaQfmj0Q",
"nips_2021_yq5MYHVaClG",
"dnSW5Frl2D",
"rTc637UZBPn",
"BeaeAhAT1b8",
"im9-Mm2Z5l",
"EJeeFKJFScM",
"RuU6vNOJNTH",
"nips_2021_yq5MYHVaClG",
"nips_2021_yq5MYHVaClG",
"nips_2021_yq... |
nips_2021_F9HNBbytcqT | Distributed Machine Learning with Sparse Heterogeneous Data | Motivated by distributed machine learning settings such as Federated Learning, we consider the problem of fitting a statistical model across a distributed collection of heterogeneous data sets whose similarity structure is encoded by a graph topology. Precisely, we analyse the case where each node is associated with fitting a sparse linear model, and edges join two nodes if the difference of their solutions is also sparse. We propose a method based on Basis Pursuit Denoising with a total variation penalty, and provide finite sample guarantees for sub-Gaussian design matrices. Taking the root of the tree as a reference node, we show that if the sparsity of the differences across nodes is smaller than the sparsity at the root, then recovery is successful with fewer samples than by solving the problems independently, or by using methods that rely on a large overlap in the signal supports, such as the group Lasso. We consider both the noiseless and noisy setting, and numerically investigate the performance of distributed methods based on Distributed Alternating Direction Methods of Multipliers (ADMM) and hyperspectral unmixing.
| accept | While the reviewers have pointed out several concerns regarding the paper, there seems to a consensus that this paper has merit. Everyone is placing this paper as a borderline candidate. I will strongly suggest that the authors implement the suggestions given by the reviewers in the subsequent versions. | train | [
"Cg2FE2Mtyk5",
"b2SSADDGSix",
"s0jWhTAouMo",
"WYu_8TTEx7Q",
"kF29PRPbPu4",
"HiEFyzz_knD",
"SSjXP98kSL7",
"iMihMK5vzGI",
"sHo7w-h1x9",
"UQAinHFhLrP",
"zkx-KTHj_0I"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nJust a quick follow up regarding our response to your comments.\nSpecifically, If you have any other outstanding concerns, do let us know as we can provide additional clarification.\n\nOtherwise, if you have no other concerns, we would kindly ask that you update your review in light of this.\n \... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"sHo7w-h1x9",
"UQAinHFhLrP",
"WYu_8TTEx7Q",
"zkx-KTHj_0I",
"sHo7w-h1x9",
"zkx-KTHj_0I",
"UQAinHFhLrP",
"nips_2021_F9HNBbytcqT",
"nips_2021_F9HNBbytcqT",
"nips_2021_F9HNBbytcqT",
"nips_2021_F9HNBbytcqT"
] |
nips_2021_Z7xSQ3SXLQU | Manipulating SGD with Data Ordering Attacks | Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained with it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model architecture, but instead only change the order in which data are supplied to the model. In particular, we find that the attacker can either prevent the model from learning, or poison it to learn behaviours specified by the attacker. Furthermore, we find that even a single adversarially-ordered epoch can be enough to slow down model learning, or even to reset all of the learning progress. Indeed, the attacks presented here are not specific to the model or dataset, but rather target the stochastic nature of modern learning procedures. We extensively evaluate our attacks on computer vision and natural language benchmarks to find that the adversary can disrupt model training and even introduce backdoors.
| accept | This paper presents a novel attack on SGD trained models by identifying that one can adversarially change the order of samples to significantly decrease the accuracy of the model. This is a rather surprising finding given the fact that neither the data, or labels are perturbed in any way. Although such technique would not be successful in the convex case (apart from slowing down convergence), in the nonconvex case adversarial perturbations can lead to significant loss of accuracy. Although the presented results are interesting, the attack model needs to be clarified a bit with regards to the strength of the adversary. Assuming access to the order of data, likely implies a signifiant adversarial strength with regards to access to the software stack of the training system at hand. Nevertheless the observations made are important, highlighting yet another facet of attacks against high-capacity SGD-trained models.
| train | [
"_qWyWmBnzoI",
"l9PisJ02JRv",
"5Tnv2nYv7dI",
"a3kfko4Sqrh",
"t9x9NAZcN5Q",
"sb932Tx5vCk",
"mlwqIIL7SG",
"eNC6AGaBbWb",
"aRB1N0UIfuz",
"rquWEyILabb",
"tjUYIc_wreY",
"g9og2LE4sJF",
"5KeuhPWZ5KS",
"BXwpu6HaUm0"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a training-time attack against learning algorithms employing stochastic-gradient-descent (SGD) update. The attacks are designed as simply re-ordering the sequence of batches or individual data points during training. The idea of re-ordering the input sequence to influence the learning outcome ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"nips_2021_Z7xSQ3SXLQU",
"5Tnv2nYv7dI",
"a3kfko4Sqrh",
"t9x9NAZcN5Q",
"sb932Tx5vCk",
"mlwqIIL7SG",
"aRB1N0UIfuz",
"g9og2LE4sJF",
"tjUYIc_wreY",
"5KeuhPWZ5KS",
"_qWyWmBnzoI",
"BXwpu6HaUm0",
"nips_2021_Z7xSQ3SXLQU",
"nips_2021_Z7xSQ3SXLQU"
] |
nips_2021_N0Pigj5tpHE | Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification | The interdependence between nodes in graphs is key to improve class prediction on nodes, utilized in approaches like Label Probagation (LP) or in Graph Neural Networks (GNNs). Nonetheless, uncertainty estimation for non-independent node-level predictions is under-explored. In this work, we explore uncertainty quantification for node classification in three ways: (1) We derive three axioms explicitly characterizing the expected predictive uncertainty behavior in homophilic attributed graphs.(2) We propose a new model Graph Posterior Network (GPN) which explicitly performs Bayesian posterior updates for predictions on interdependent nodes. GPN provably obeys the proposed axioms. (3) We extensively evaluate GPN and a strong set of baselines on semi-supervised node classification including detection of anomalous features, and detection of left-out classes. GPN outperforms existing approaches for uncertainty estimation in the experiments.
| accept | This paper first proposes axioms describing desired properties of uncertainty in the absence or in the presence of network effects. It then introduces a novel framework for uncertainty estimation in the graph setting (graph posterior networks). The problem of uncertainty estimation in the graph setting is interesting and relevant and has received much less attention than more standard supervised learning settings. The approach is technically correct. AN extensive set of the experiments includes the assessment of uncertainty estimation quality for OOD detection as well as the assessment of robustness against shifts in attributed graph properties. The proposed approach shows promising performance in these experiments. Following the author response and discussion, the reviewers were in agreement that the paper should be accepted. One concern raised in several of the reviews was that the paper could benefit from reducing the density of the exposition and the reliance on reader familiarity with results from prior works. The authors have committed to addressing these issues in the revised manuscript. | train | [
"wALGhSGTkIO",
"NQfVLb3lPq",
"HikOL_DflVt",
"8XdoY2573Ky",
"6GepO-59T9c",
"-k8PQE_dJcU",
"AC6ucc-JwCO",
"Xl8nYfaDbfu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper makes three contributions: (i) three axioms are specified to characterize the requirements of predictive uncertainty behaviour in homophilic attributed graphs; (ii) a new inference model is proposed and theorems are provided that demonstrate how the model behaves with respect to its characterization of u... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
5,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_N0Pigj5tpHE",
"AC6ucc-JwCO",
"nips_2021_N0Pigj5tpHE",
"-k8PQE_dJcU",
"Xl8nYfaDbfu",
"HikOL_DflVt",
"wALGhSGTkIO",
"nips_2021_N0Pigj5tpHE"
] |
nips_2021_Rav_oC35ToB | Locality Sensitive Teaching | The emergence of the Internet-of-Things (IoT) sheds light on applying the machine teaching (MT) algorithms for online personalized education on home devices. This direction becomes more promising during the COVID-19 pandemic when in-person education becomes infeasible. However, as one of the most influential and practical MT paradigms, iterative machine teaching (IMT) is prohibited on IoT devices due to its inefficient and unscalable algorithms. IMT is a paradigm where a teacher feeds examples iteratively and intelligently based on the learner's status. In each iteration, current IMT algorithms greedily traverse the whole training set to find an example for the learner, which is computationally expensive in practice. We propose a novel teaching framework, Locality Sensitive Teaching (LST), based on locality sensitive sampling, to overcome these challenges. LST has provable near-constant time complexity, which is exponentially better than the existing baseline. With at most 425.12x speedups and 99.76% energy savings over IMT, LST is the first algorithm that enables energy and time efficient machine teaching on IoT devices. Owing to LST's substantial efficiency and scalability, it is readily applicable in real-world education scenarios.
| accept | The paper presents a faster algorithm for iterative machine teaching, which repeatedly traverses the training set to find samples for the learner. The improvement is obtained by replacing linear scan over the data set with a sampling approach based on locality-sensitive hashing. Empirical evaluation shows substantial speedups, up to 2-3 orders of magnitude, and similar energy savings. | val | [
"Rvo4tqiZbhW",
"GkmDjcekY7E",
"ClOxjK3J5w9",
"ePaNAbOVftx",
"HnT54wjPUzQ",
"wfSQHeGZ0qn",
"zD-Wg-v6foO",
"KBDR4edjc1c"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper applies locality sensitive hashing to online teaching, where the machine uses a student’s information to retrieve a data sample from hash tables. The paper claims several contributions. (1) Propose a novel teaching framework, called Locality Sensitive Teaching (LST) for the retrieval. (2) LST has provabl... | [
6,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"nips_2021_Rav_oC35ToB",
"wfSQHeGZ0qn",
"KBDR4edjc1c",
"Rvo4tqiZbhW",
"zD-Wg-v6foO",
"nips_2021_Rav_oC35ToB",
"nips_2021_Rav_oC35ToB",
"nips_2021_Rav_oC35ToB"
] |
nips_2021_Pq7wIzt3OUE | No-Press Diplomacy from Scratch | Prior AI successes in complex games have largely focused on settings with at most hundreds of actions at each decision point. In contrast, Diplomacy is a game with more than 10^20 possible actions per turn. Previous attempts to address games with large branching factors, such as Diplomacy, StarCraft, and Dota, used human data to bootstrap the policy or used handcrafted reward shaping. In this paper, we describe an algorithm for action exploration and equilibrium approximation in games with combinatorial action spaces. This algorithm simultaneously performs value iteration while learning a policy proposal network. A double oracle step is used to explore additional actions to add to the policy proposals. At each state, the target state value and policy for the model training are computed via an equilibrium search procedure. Using this algorithm, we train an agent, DORA, completely from scratch for a popular two-player variant of Diplomacy and show that it achieves superhuman performance. Additionally, we extend our methods to full-scale no-press Diplomacy and for the first time train an agent from scratch with no human data. We present evidence that this agent plays a strategy that is incompatible with human-data bootstrapped agents. This presents the first strong evidence of multiple equilibria in Diplomacy and suggests that self play alone may be insufficient for achieving superhuman performance in Diplomacy.
| accept | The paper presents a self-play algorithm to learn to play Diplomacy entirely from scratch. Diplomacy is an extremely large challenge domain and as such, has previously required human data to perform competitively. The paper presents DORA which addresses these challenges with various solutions inspired by the literature: a basic change to Nash Q-learning to use (state-based) value functions, double oracle to address/prune the action space, and an equilibrium search. The paper lacks in novelty and breadth of applicability due to simply combining various known methods together for a focused application to a specific domain. Still, the domain in question is a particularly interesting one and has garnered recent interest from the MARL community. Also, the results themselves are novel, showing that DORA is enough to reach super-human in the two-player zero-sum variant without human data, but the same method applied to n-player had mixed success (unlike the case of poker). These results will be of significant interest to MARL researchers, and the paper makes valuable contribution to the community. | train | [
"VCmEr3vR8lw",
"dm86KUrELOt",
"Gp9pafpZBQn",
"9X5clJzWg3I",
"K0ZZCCw-7Pe",
"aYQSTVYv75",
"jifibUd-M3E",
"15sXv-s_K_",
"04F-ND9XlV",
"HbsP4GwEuHy"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper builds off of Gray et. al. by swapping out the blueprint that is trained on human data with one that is trained from scratch using a deep learning variant of Nash-Q learning. The new method is able to achieve what appears to be superhuman performance in 1v1 Diplomacy and also does well but with mixed re... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
8,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
5
] | [
"nips_2021_Pq7wIzt3OUE",
"HbsP4GwEuHy",
"04F-ND9XlV",
"15sXv-s_K_",
"VCmEr3vR8lw",
"jifibUd-M3E",
"nips_2021_Pq7wIzt3OUE",
"nips_2021_Pq7wIzt3OUE",
"nips_2021_Pq7wIzt3OUE",
"nips_2021_Pq7wIzt3OUE"
] |
nips_2021_pvCLqcsLJ1N | Remember What You Want to Forget: Algorithms for Machine Unlearning | Ayush Sekhari, Jayadev Acharya, Gautam Kamath, Ananda Theertha Suresh | accept | The paper considers the problem of machine unlearning: after a model is trained on a dataset, there is a request to delete a point from the dataset. The goal is to design an efficient method to update the trained model s.t. is nearly indistinguishable from what we would have obtained had we trained on the dataset without the point. One baseline from prior work is to use DP. The authors give a method that can efficiently unlearn more points that DP-training in the context of in convex learning problems. The reviewers agree that this is an interesting paper with solid contribution, and all of them support acceptance. | val | [
"sd_IeTG7K3V",
"j9POb7JNYsi",
"c8M-AdDOL9b",
"d3ebKrLq9N0",
"FmFwu9QzDTz",
"XO1V5NhO4Qw",
"1g0noeqF0qw",
"QhpWJsURYUn",
"pKNBFMZMcZg",
"uviJlotWXIm",
"jFppaxThXeR",
"lQn7cKNn0D3",
"XX3PFZjRBOe",
"gQfyYFja8ad",
"-FhjxC6GiJe",
"ACSOGnbg1nR",
"Pv8cTUbmepa",
"Ilm7GoMKJG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks, we will try to better illustrate this in the final paper.",
"The paper considers the problem of machine unlearning: after a model is trained on a dataset, there is a request to delete a point from the dataset. The goal is to design an *efficient* method to update the trained model such that is nearly in... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"uviJlotWXIm",
"nips_2021_pvCLqcsLJ1N",
"FmFwu9QzDTz",
"nips_2021_pvCLqcsLJ1N",
"gQfyYFja8ad",
"1g0noeqF0qw",
"QhpWJsURYUn",
"pKNBFMZMcZg",
"-FhjxC6GiJe",
"jFppaxThXeR",
"lQn7cKNn0D3",
"XX3PFZjRBOe",
"Ilm7GoMKJG",
"d3ebKrLq9N0",
"j9POb7JNYsi",
"Pv8cTUbmepa",
"nips_2021_pvCLqcsLJ1N",
... |
nips_2021_f9mSLa07Ncc | Learning latent causal graphs via mixture oracles | We study the problem of reconstructing a causal graphical model from data in the presence of latent variables. The main problem of interest is recovering the causal structure over the latent variables while allowing for general, potentially nonlinear dependencies. In many practical problems, the dependence between raw observations (e.g. pixels in an image) is much less relevant than the dependence between certain high-level, latent features (e.g. concepts or objects), and this is the setting of interest. We provide conditions under which both the latent representations and the underlying latent causal model are identifiable by a reduction to a mixture oracle. These results highlight an intriguing connection between the well-studied problem of learning the order of a mixture model and the problem of learning the bipartite structure between observables and unobservables. The proof is constructive, and leads to several algorithms for explicitly reconstructing the full graphical model. We discuss efficient algorithms and provide experiments illustrating the algorithms in practice.
| accept | There has been ampled discussion between reviewers and authors. The authors are encourage to leverage the elements in this exchange to improve the paper.
In particular, a key conlcusion of the discussion between reviewers is that given the key role of a Mixture Oracle in this work, all reviewers agree that it would be really important to add a paragraph describing existing identifiability results for mixture oracles, as well as a corollary that gives an exact set of assumptions under which the algorithm is consistent.
| val | [
"_CPN93OzYqb",
"WOsR0iGQ4wv",
"fttWLLS0IU2",
"ba0m4QGCe_O",
"JGNiXS0pxpI",
"XE-D0UxR4ta",
"Dglx37Q4_7o",
"RbyvWPraQfc",
"PHGVQfb9Oj",
"VM7mX27dOBB",
"YXzMbZAK_Vd",
"4nGu7Byn-K7",
"OFeJwBvJrea",
"tsyzkYAn2Rs",
"luB5pg-TkFY"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This is a paper that focuses on causal discovery especially focusing on the causal relation among discrete latent variables. \nThe basic idea is that by mixture oracles, one can locate the latent variables and recovery their distribution.\nThe main contribution is the model identification theory, including biparti... | [
7,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_f9mSLa07Ncc",
"Dglx37Q4_7o",
"JGNiXS0pxpI",
"nips_2021_f9mSLa07Ncc",
"4nGu7Byn-K7",
"nips_2021_f9mSLa07Ncc",
"RbyvWPraQfc",
"PHGVQfb9Oj",
"OFeJwBvJrea",
"_CPN93OzYqb",
"tsyzkYAn2Rs",
"ba0m4QGCe_O",
"XE-D0UxR4ta",
"luB5pg-TkFY",
"nips_2021_f9mSLa07Ncc"
] |
nips_2021_fAWFaNaRVeF | ErrorCompensatedX: error compensation for variance reduced algorithms | Communication cost is one major bottleneck for the scalability for distributed learning. One approach to reduce the communication cost is to compress the gradient during communication. However, directly compressing the gradient decelerates the convergence speed, and the resulting algorithm may diverge for biased compression. Recent work addressed this problem for stochastic gradient descent by adding back the compression error from the previous step. This idea was further extended to one class of variance reduced algorithms, where the variance of the stochastic gradient is reduced by taking a moving average over all history gradients. However, our analysis shows that just adding the previous step's compression error, as done in existing work, does not fully compensate the compression error. So, we propose ErrorCompensateX, which uses the compression error from the previous two steps. We show that ErrorCompensateX can achieve the same asymptotic convergence rate with the training without compression. Moreover, we provide a unified theoretical analysis framework for this class of variance reduced algorithms, with or without error compensation.
| accept | The paper presents a novel error feedback mechanism to handle momentum-like terms (most notably, SGD with momentum and STORM). The paper presents theoretical analyses of existing momentum-like algorithms coupled with compression based on their modifications and experimental results confirming their results.
The reviewer commended the new direction and find the results of interest to the community. The reviewers also find, as a minor issue, that the assumptions used in the analysis are a bit strong and, more crucially, the experiments do not yet clearly demonstrate practical usability.
The reviewers find that there is currently not enough motivation presented that would justify the 'need' of two momentum buffers. In the particular in the presence of earlier work that successfully (though mostly empirically) applied compression to momentum SGD with a single buffer [such as the cited (Zheng, S., Huang, Z., & Kwok, J. (2019a). Communication-efficient distributed blockwise momentum sgd with error-feedback) and (Vogels, T., Karimireddy, S. P., & Jaggi, M. (2019). Powersgd: Practical low-rank gradient compression for distributed optimization.)] the reviewers find that the key differences should be highlighted better.
Additionally, I also find it strange that the paper sets off with the question 'what is the best compression method for variance reduced algorithms?', yet optimality of the proposed scheme is not discussed.
The reviewers provided detailed and well-justified feedback and the authors are encouraged to take their comments into account in the revision. In particular, since the application of error feedback to STROM appears to be a novel direction, we are confident that this work will attract interest in the community after repositioning and recalibration. | train | [
"zqpEhr2ebo",
"al55runLm4",
"TKyat3XFdcg",
"CmX5_YQr7qD",
"vJ4s_DoKgzg",
"fT558vwXjc",
"PFwpFwIzss_",
"GBqE86gfPI"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposed ErrorCompensatedX, that uses compression error from the previous two steps for variance reduced algorithms and SGD. The paper provided a theoretical analysis framework to analyze error compensated algorithms, and shows that ErrorCompensatedX admits the same asymptotic convergence rate for SGD, M... | [
5,
5,
5,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_fAWFaNaRVeF",
"nips_2021_fAWFaNaRVeF",
"nips_2021_fAWFaNaRVeF",
"al55runLm4",
"TKyat3XFdcg",
"zqpEhr2ebo",
"GBqE86gfPI",
"nips_2021_fAWFaNaRVeF"
] |
nips_2021_evqzNxmXsl3 | Deep Contextual Video Compression | Most of the existing neural video compression methods adopt the predictive coding framework, which first generates the predicted frame and then encodes its residue with the current frame. However, as for compression ratio, predictive coding is only a sub-optimal solution as it uses simple subtraction operation to remove the redundancy across frames. In this paper, we propose a deep contextual video compression framework to enable a paradigm shift from predictive coding to conditional coding. In particular, we try to answer the following questions: how to define, use, and learn condition under a deep video compression framework. To tap the potential of conditional coding, we propose using feature domain context as condition. This enables us to leverage the high dimension context to carry rich information to both the encoder and the decoder, which helps reconstruct the high-frequency contents for higher video quality. Our framework is also extensible, in which the condition can be flexibly designed. Experiments show that our method can significantly outperform the previous state-of-the-art (SOTA) deep video compression methods. When compared with x265 using veryslow preset, we can achieve 26.0% bitrate saving for 1080P standard test videos.
| accept | The authors propose a new video compression model that uses motion vectors to extract and warp features instead of warping the previous frame for predicting the current frame. As a result, the encoder doesn’t rely on residual compression but directly encodes the necessary information needed to decode the frame.
The paper leads to a new state of the art in video compression and merits publication. Unfortunately, the main paper lacked several citations and baseline comparisons. Some of these comparisons were shown only in the appendix, making the main paper appear incomplete. I strongly urge the authors to present these results in the main paper. In addition, I also encourage the authors to run additional baseline comparisons of the related works mentioned by the reviewers (such as Ruihan Yang et al., ICLR 2021). | train | [
"U23F_hPX52d",
"dxt90TF0Wbd",
"hVu8fUP6ZQc",
"EqcTi1rGF-j",
"Jckmqxqqz02",
"zvfwJLWU-_0",
"G1V8sE-symm",
"dkZXjMVJgDE",
"DFxfwoa7DtP",
"lgO9l-5Bnhe"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this work the authors propose a new model for video compression that, instead of warping the previous frame and predicting the current frame, extract and warp features using the motion vectors. As a result, the encoder doesn’t rely on residual compression (residual between ground truth and prediction) but direc... | [
7,
5,
8,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_evqzNxmXsl3",
"nips_2021_evqzNxmXsl3",
"nips_2021_evqzNxmXsl3",
"zvfwJLWU-_0",
"dkZXjMVJgDE",
"hVu8fUP6ZQc",
"U23F_hPX52d",
"lgO9l-5Bnhe",
"dxt90TF0Wbd",
"nips_2021_evqzNxmXsl3"
] |
nips_2021_IARK9TWiFRb | On the Frequency Bias of Generative Models | The key objective of Generative Adversarial Networks (GANs) is to generate new data with the same statistics as the provided training data. However, multiple recent works show that state-of-the-art architectures yet struggle to achieve this goal. In particular, they report an elevated amount of high frequencies in the spectral statistics which makes it straightforward to distinguish real and generated images. Explanations for this phenomenon are controversial: While most works attribute the artifacts to the generator, other works point to the discriminator. We take a sober look at those explanations and provide insights on what makes proposed measures against high-frequency artifacts effective. To achieve this, we first independently assess the architectures of both the generator and discriminator and investigate if they exhibit a frequency bias that makes learning the distribution of high-frequency content particularly problematic. Based on these experiments, we make the following four observations: 1) Different upsampling operations bias the generator towards different spectral properties. 2) Checkerboard artifacts introduced by upsampling cannot explain the spectral discrepancies alone as the generator is able to compensate for these artifacts. 3) The discriminator does not struggle with detecting high frequencies per se but rather struggles with frequencies of low magnitude. 4) The downsampling operations in the discriminator can impair the quality of the training signal it provides.In light of these findings, we analyze proposed measures against high-frequency artifacts in state-of-the-art GAN training but find that none of the existing approaches can fully resolve spectral artifacts yet. Our results suggest that there is great potential in improving the discriminator and that this could be key to match the distribution of the training data more closely.
| accept | The reviewers agreed that the paper was a strong systematic study of frequency bias of GANs. While the overall impact of the paper is somewhat limited, as previous work show similar results, it does provide sufficient new insight that it will be useful to the community. | train | [
"dNZ0x27Cd1Q",
"vvTw0zATTc5",
"9Nrnt2uS0bc",
"winbDSoO9t1",
"y2VzV6UY55h",
"gMC5A6hZnPx",
"WHiRvjqoxNI",
"W539toaSWS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your constructive comments and valuable concerns. We hope we can clarify the presentation and scope of our work below:\n\n> I am confused regarding the colorbar in Figures 2, 4, 5. First, I recommend explaining the heatmap in the caption, rather than buried in the text. In L145, the text mentions th... | [
-1,
-1,
-1,
-1,
7,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"W539toaSWS",
"WHiRvjqoxNI",
"gMC5A6hZnPx",
"y2VzV6UY55h",
"nips_2021_IARK9TWiFRb",
"nips_2021_IARK9TWiFRb",
"nips_2021_IARK9TWiFRb",
"nips_2021_IARK9TWiFRb"
] |
nips_2021_B1Kh0SVodY | Learning curves of generic features maps for realistic datasets with a teacher-student model | Teacher-student models provide a framework in which the typical-case performance of high-dimensional supervised learning can be described in closed form. The assumptions of Gaussian i.i.d. input data underlying the canonical teacher-student model may, however, be perceived as too restrictive to capture the behaviour of realistic data sets. In this paper, we introduce a Gaussian covariate generalisation of the model where the teacher and student can act on different spaces, generated with fixed, but generic feature maps. While still solvable in a closed form, this generalization is able to capture the learning curves for a broad range of realistic data sets, thus redeeming the potential of the teacher-student framework. Our contribution is then two-fold: first, we prove a rigorous formula for the asymptotic training loss and generalisation error. Second, we present a number of situations where the learning curve of the model captures the one of a realistic data set learned with kernel regression and classification, with out-of-the-box feature maps such as random projections or scattering transforms, or with pre-learned ones - such as the features learned by training multi-layer neural networks. We discuss both the power and the limitations of the framework.
| accept | This paper received four reviews in total, with the scores/confidences (after the author response) being 6/3, 6/3, 6/2, and 6/3. Upon reading by myself the reviews and the author response, as well as the paper itself, my judgement on this paper is that, whereas it has a clarity issue as pointed out by most reviewers, it presents an interesting piece of theoretical work. Overall, the reviewers evaluated quite positively this paper's contributions such as:
- Crystallizing those problem settings in some recent pieces of work into the formulation which the authors call the novel Gaussian covariance teacher-student model.
- Providing a rigorous characterization of the properties of the proposed model (summarized in Theorems 1 ($d\to\infty$) and 2 (non-asymptotic)) under fairly generic settings (although the reviewers would not have checked all the details of the proofs in SM).
- Demonstrating applicability of the theoretical predictions to more realistic scenarios. The problem setting in the proposed model would seem rather simplistic as both the teacher and the student are essentially assumed to be single-layer perceptrons. Nevertheless, the authors argue via what is called the Gaussian equivalence conjecture (GEC; Conjecture 1) that the analytical results would be able to capture asymptotic behaviors of a wider class of models beyond the proposed model. The GEC was empirically confirmed on the teacher-student setting in this paper via the experiments on trained generative networks (Section 3.3), where one focuses on the last layers of the trained teacher and student networks which allows us to cast the problem into the analytical framework of this paper except the Gaussianity, as well as on ridge regression (Section 3.4), where not the Gaussianity but second-order statistics determine the learning behaviors. The empirical validation of the GEC on the teacher-student setting is another main contribution of this paper.
On the basis of the positive scores as well as the above evaluations, I am happy to recommend acceptance of this paper.
I agree with the authors that some comments on computational aspects as non-essential. As for the clarity issue, I also agree with the authors that this paper contains a lot of theoretical and experimental results so that it may be somehow unavoidable for such a paper to be dense. Still, my impression is that there exist some gaps between the arguments given in natural language and those given in the form of mathematical expressions. As an example, one could explain significance of the density $\hat{\mu}_d$ in equation (7) in terms of a kind of gauge transform, as follows:
> As the teacher sees its input $u$ through its one-dimensional projection $\theta_0^Tu$, the distribution of the teacher input is characterized by the variance of the projection $\theta_0^Tu$, which is proportional to $\rho$. Also, one can rotate the coordinates of the student input $v$ so that its elements are independent. Thus the joint input distribution is fully characterized under the teacher-student setting in this paper by the variances of the independent elements of the student input, which are proportional to $\bar{\theta}_i$s, and their covariances with the projection $\theta_0^Tu$, which are proportional to $\omega_i$s.
Such an explanation should be beneficial not only as it clarifies the meaning of these quantities but also as it would furthermore suggest how to extend the theoretical discussion in this paper to more general cases, for example, to the case with a committee machine as the teacher.
Some very minor points:
- Font styles in mathematical formulas are sometimes inconsistent. For example, vector quantities such as $x,u,v$ are in some occasions written in bold roman, whereas in other occasions they appear in bold italic.
- Equation (4): The argument of $f_0$ should be divided by $\sqrt{p}$.
- Line 51: In Theorem(s)
- Lines 52-53: training and generalisation error(s)
- Lines 152, 233: $x$ and $y$ should be italic.
- Line 157: the training (loss $\to$ error)
- Line 162: such as (e.g.)
- Line 188: kitchen sink(s) model
- Line 193: all-one(s) vector
- Line 196: random sink(s) models
- Line 241: that follow(s)
- Line 252: five-layer(s) Deep convolutional GAN
- Line 274: the covariance(s) matrices
- Line 283: that fit(s) the true labels
- Line 314: Fig(s). 4 and 1 | train | [
"EFrDRcX3E4J",
"Waw9MPBsMLr",
"T-bPWRVmnm",
"_PRwb_uy7KI",
"eTpXarPR2D7",
"SDQAP8gtUII",
"UF9MmzDCVb",
"-wGC6RLrh_v",
"pSfKHF1GYp6",
"VXzsxqC8DHR",
"XtSzCdAD-_H",
"uiCPNOTHhs"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this papers the author proposed a new Gaussian covariate teacher-student model that is capable of accurately estimating the training and generalization errors for its learned linear coefficients of its student part with an exponentially decaying concentrate bound. Their framework covers a wide range of learning... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"nips_2021_B1Kh0SVodY",
"-wGC6RLrh_v",
"pSfKHF1GYp6",
"eTpXarPR2D7",
"UF9MmzDCVb",
"VXzsxqC8DHR",
"XtSzCdAD-_H",
"EFrDRcX3E4J",
"uiCPNOTHhs",
"nips_2021_B1Kh0SVodY",
"nips_2021_B1Kh0SVodY",
"nips_2021_B1Kh0SVodY"
] |
nips_2021_MYvpQVjCK0_ | It Has Potential: Gradient-Driven Denoisers for Convergent Solutions to Inverse Problems | In recent years there has been increasing interest in leveraging denoisers for solving general inverse problems. Two leading frameworks are regularization-by-denoising (RED) and plug-and-play priors (PnP) which incorporate explicit likelihood functions with priors induced by denoising algorithms. RED and PnP have shown state-of-the-art performance in diverse imaging tasks when powerful denoisersare used, such as convolutional neural networks (CNNs). However, the study of their convergence remains an active line of research. Recent works derive the convergence of RED and PnP methods by treating CNN denoisers as approximations for maximum a posteriori (MAP) or minimum mean square error (MMSE) estimators. Yet, state-of-the-art denoisers cannot be interpreted as either MAPor MMSE estimators, since they typically do not exhibit symmetric Jacobians. Furthermore, obtaining stable inverse algorithms often requires controlling the Lipschitz constant of CNN denoisers during training. Precisely enforcing this constraint is impractical, hence, convergence cannot be completely guaranteed. In this work, we introduce image denoisers derived as the gradients of smooth scalar-valued deep neural networks, acting as potentials. This ensures two things: (1) the proposed denoisers display symmetric Jacobians, allowing for MAP and MMSE estimators interpretation; (2) the denoisers may be integrated into RED and PnP schemes with backtracking step size, removing the need for enforcing their Lipschitz constant. To show the latter, we develop a simple inversion method that utilizes the proposed denoisers. We theoretically establish its convergence to stationary points of an underlying objective function consisting of the learned potentials. We numerically validate our method through various imaging experiments, showing improved results compared to standard RED and PnP methods, and with additional provable stability.
| accept | This paper is proposing a new and interesting idea: a potential-based neural net denoiser that makes RED convergence analysis clean and simple. This is quite novel and suitable to NeurIPS audience. The reviewers raised serious concerns on the evaluation and how the comparisons to other RED methods were performed. I agree that the evaluation should be improved and the authors did a reasonable job on responding to reviewer input on this. I recommend that the authors revise their manuscript taking all this input into account. The analysis on convex functions is also a good suggestion that the authors should take into consideration.
| train | [
"cccNaQiSvDT",
"uUq9HcORODe",
"g9I91d116pT",
"FljqkA0-3mU",
"sSyOJFg5cn_",
"YfGrPgCAqk8",
"CYmqBvfn-Nn",
"6Y5XuTNru64",
"Qin-0U_ZjFu",
"oTb7EMN3pYO",
"FkGXvHuiJt",
"Nl0YXNC5htm",
"hMA3oIPy8LR",
"HO4Xxmg4pB",
"REzBeU8TQC8",
"4oWdmqL3pQD",
"F3NuGZcEUD",
"KJjtCtgXQzo"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the reviewer’s constructive comments.\n\n“why would this additional constraint improve the performance relative PnP/RED”:\nAs stated in the paper, our work is motivated by proximal operators that are basic forms of denoisers for which both RED and PnP converge, and by fact that they must be gradient... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"CYmqBvfn-Nn",
"sSyOJFg5cn_",
"YfGrPgCAqk8",
"nips_2021_MYvpQVjCK0_",
"hMA3oIPy8LR",
"oTb7EMN3pYO",
"HO4Xxmg4pB",
"Qin-0U_ZjFu",
"FkGXvHuiJt",
"KJjtCtgXQzo",
"F3NuGZcEUD",
"4oWdmqL3pQD",
"FljqkA0-3mU",
"REzBeU8TQC8",
"nips_2021_MYvpQVjCK0_",
"nips_2021_MYvpQVjCK0_",
"nips_2021_MYvpQV... |
nips_2021_Y0fS_1N0rsk | Training Over-parameterized Models with Non-decomposable Objectives | Many modern machine learning applications come with complex and nuanced design goals such as minimizing the worst-case error, satisfying a given precision or recall target, or enforcing group-fairness constraints. Popular techniques for optimizing such non-decomposable objectives reduce the problem into a sequence of cost-sensitive learning tasks, each of which is then solved by re-weighting the training loss with example-specific costs. We point out that the standard approach of re-weighting the loss to incorporate label costs can produce unsatisfactory results when used to train over-parameterized models. As a remedy, we propose new cost- sensitive losses that extend the classical idea of logit adjustment to handle more general cost matrices. Our losses are calibrated, and can be further improved with distilled labels from a teacher model. Through experiments on benchmark image datasets, we showcase the effectiveness of our approach in training ResNet models with common robust and constrained optimization objectives.
| accept | The paper proposes new calibrated cost-sensitive losses for training over-parameterized models. The reviewers were unanimous in their opinion that the paper is above the acceptance threshold. The approach is well supported theoretically, and covers an interesting family of non-decomposable losses. The authors have promised to release code at the time of publication in response to a reviewer comment. | train | [
"GnVm8DIuAld",
"OyxezosLSWh",
"-iP5JGg1VY6",
"9UgYJCtHp_",
"gYFGuGunkJY",
"upXvDK4xXI-",
"tBZvGOSjs-U",
"u6JPmK40rXC",
"1wdufj4ylz",
"Ow67ZqSzIq",
"m_dv40b9PsB",
"A57cEt41Svt",
"VTYOFf2NUul",
"uiQIzfPTLT",
"JQfNnXYCu_",
"BJwOkGSUS9Z",
"RD571kV49xC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks the authors for addressing my questions and concerns.\nI would suggest the author to provide more clarity to the reader as well as explanation on how to use the method for more general metrics as it is important to the reader that may want to apply it to a slightly different problem.",
"The paper propose... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"uiQIzfPTLT",
"nips_2021_Y0fS_1N0rsk",
"9UgYJCtHp_",
"gYFGuGunkJY",
"upXvDK4xXI-",
"tBZvGOSjs-U",
"u6JPmK40rXC",
"A57cEt41Svt",
"m_dv40b9PsB",
"nips_2021_Y0fS_1N0rsk",
"VTYOFf2NUul",
"OyxezosLSWh",
"Ow67ZqSzIq",
"BJwOkGSUS9Z",
"RD571kV49xC",
"nips_2021_Y0fS_1N0rsk",
"nips_2021_Y0fS_1... |
nips_2021_JQznhE5mdyv | Reinforcement learning for optimization of variational quantum circuit architectures | The study of Variational Quantum Eigensolvers (VQEs) has been in the spotlight in recent times as they may lead to real-world applications of near-term quantum devices. However, their performance depends on the structure of the used variational ansatz, which requires balancing the depth and expressivity of the corresponding circuit. At the same time, near-term restrictions limit the depth of the circuit we can expect to run. Thus, the optimization of the VQE ansatz requires maximizing the expressivity of the circuit while maintaining low depth. In recent years, various methods for VQE structure optimization have been introduced but the capacities of machine learning to aid with this problem have not yet been extensively investigated. In this work, we propose a reinforcement learning algorithm that autonomously explores the space of possible ansatzes, identifying economic circuits which still yield accurate ground energy estimates. The algorithm uses a feedback-driven curriculum learning method that autonomously adapts the complexity of the learning problem to the current performance of the learning algorithm and it incrementally improves the accuracy of the result while minimizing the circuit depth. We showcase the performance of our algorithm on the problem of estimating the ground-state energy of lithium hydride (LiH) in various configurations. In this well-known benchmark problem, we achieve chemical accuracy and state-of-the-art results in terms of circuit depth.
| accept | This paper is the first demonstration of (1) deep RL for variational optimization of quantum circuits which (2) discovers a circuit which demonstrates high accuracy on a "real world" problem - computing the ground state energy of lithium hydride - using fewer gates than other approaches. While both (1) and (2) have been done before, none of the reviewers have convincingly argued that the combination has not been done before. There was significant disagreement between the reviewers and I believe some misunderstanding over the novelty of the approach. Many of the reviewers would have liked to see experiments with a wider variety of deep RL algorithms. While this would certainly make the paper stronger, and is justified for a well-studied benchmark problem, an exhaustive comparison between closely related methods is not always needed for a novel application, as Reviewer KxuB points out.
Others pointed out that the application is not entirely novel - other papers have looked at optimizing circuits for quantum chemistry applications using non-DRL methods (arXiv:2010.10217) and have looked at optimizing quantum circuits for other applications using DRL (arXiv:2104.07715). On more closely inspecting these papers, it is clear that the first one is looking at simpler systems than this paper (the hydrogen molecule, with two electrons, rather than lithium hydride, with 4 electrons) and the second paper is only looking at simple 2 or 3 qubit toy systems (Bell states and GHZ states). Therefore this paper is an advance over the previous state of the art simply by virtue of looking at more difficult systems. The authors should rewrite the paper to emphasize this aspect of the novelty of their work and properly contextualize their work in the wider context of the field, but this should be straightforward to do.
Many reviewers raised concerns about scaling, noting that running 40,000 iterations on real quantum hardware would be extremely challenging. This is a valid concern, and one which is common to other applications of deep RL where generating data on real hardware is expensive (e.g. robotics). However, I don't believe it should be disqualifying for the paper. Training deep RL models in simulation and then transferring to real hardware is an active area of research, and deep RL papers are published all the time which only work in simulated settings like OpenAI Gym but would be impractical to run in a realistic amount of time on real hardware.
Relatedly, reviewers also raised concerns about how the learned circuits were not evaluated on real hardware. A closely related paper (arXiv:2010.10217) was evaluated on real hardware, and found a significant decline in performance due to noise. I expect something similar would be seen with this method, and the authors should be sure to discuss this in the paper. But, despite the availability of some small research devices on some cloud platforms, these devices are still far from widespread and are very experimental. While it would certainly make the paper much stronger, I don't think experiments on real hardware should be a requirement for a paper published at a machine learning venue. That would put authors working on quantum algorithms at an unfair disadvantage compared to those working on ML algorithms that run on conventional hardware.
Overall, while there were many valid criticisms of the paper, I think that the paper should be accepted based on the novelty of the results. This novelty could be better explained by the authors, and I hope that they get a chance to read these comments and improve the paper based on them. Rigorous comparison against more RL baselines and other quantum architecture search methods and experiments on real quantum hardware would make this paper significantly stronger - but if those were present, then I would likely be recommending this paper for a spotlight rather than simply a poster. | train | [
"w-L165gde57",
"JwTPGhQLb6",
"TsBUTTR-Ozx",
"h6sCU_nNCrq",
"DlLNRZS4PPE",
"Nv-JFE8UvrF",
"NQVSoNQt0kf",
"ERRLenxXflf"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"# Overview\n\nThe paper proposes a framework incorporating deep reinforcement learning for resolving Ansatz optimization problems, which is with wide impacts on quantum chemistry. The authors have a good introduction and related work section, which covers most standing works in quantum reinforcement learning and v... | [
4,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
5,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_JQznhE5mdyv",
"ERRLenxXflf",
"w-L165gde57",
"NQVSoNQt0kf",
"Nv-JFE8UvrF",
"nips_2021_JQznhE5mdyv",
"nips_2021_JQznhE5mdyv",
"nips_2021_JQznhE5mdyv"
] |
nips_2021_cwWfDHYpb1z | Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices | Training deep neural networks on large datasets can often be accelerated by using multiple compute nodes. This approach, known as distributed training, can utilize hundreds of computers via specialized message-passing protocols such as Ring All-Reduce.However, running these protocols at scale requires reliable high-speed networking that is only available in dedicated clusters.In contrast, many real-world applications, such as federated learning and cloud-based distributed training, operate on unreliable devices with unstable network bandwidth.As a result, these applications are restricted to using parameter servers or gossip-based averaging protocols.In this work, we lift that restriction by proposing Moshpit All-Reduce — an iterative averaging protocol that exponentially converges to the global average.We demonstrate the efficiency of our protocol for distributed optimization with strong theoretical guarantees.The experiments show 1.3x speedup for ResNet-50 training on ImageNet compared to competitive gossip-based strategies and 1.5x speedup when training ALBERT-large on preemptible compute nodes.
| accept | This paper proposes a novel approach to distributed data-parallel training, MoshpitSGD. The approach targets systems where fault tolerance and robustness important, e.g., when some worker nodes may fail, there is load imbalance, or other sources of heterogeneity in the system. This is a setting relevant to practitioners applying distributed training.
The reviewers agreed that the contribution is interesting and worthy of acceptance. While there were some questions and concerns raised in the initial reviews, the authors’s responses largely addressed these. In particular, the new empirical results nicely illustrate the tradeoff between messaging rounds and resilience to node failures. It is important that the camera-ready revision should include the points mentioned during the rebuttal to address questions raised by the reviewers to improve the clarity of the paper, especially around experimental setup and findings. We also strongly encourage the authors to include the additional experiments in the paper and make the experiments more focused on specific contributions (e.g., the proposed match-making scheme) in addition to highlighting performance of the method as a whole. This will likely involving moving/adding material to the supplementary material.
| test | [
"UrZqa-MujDr",
"TCur57GrjUE",
"rBaVfhzKpGD",
"r-0jbY4P93A",
"zBGYgRnwGaD",
"6G_GjZ1LXRs",
"3cHzaoeV8Mm",
"4gzWbHfKvBV",
"4FiVrV5YDwA",
"7C8pgN04ScY",
"sJeKgyYKMMa",
"NE1AB6VqhB_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Data parallel distributed NN training often depends on homogenous, well-provisioned networks and reliable end hosts. This paper presents a scheme, Moshpit All-Reduce, that leverages a hybrid approach to communicating parameters (gradients) that combines elements of gossip with all-reduce to better withstand heter... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_cwWfDHYpb1z",
"rBaVfhzKpGD",
"nips_2021_cwWfDHYpb1z",
"nips_2021_cwWfDHYpb1z",
"rBaVfhzKpGD",
"rBaVfhzKpGD",
"UrZqa-MujDr",
"UrZqa-MujDr",
"NE1AB6VqhB_",
"sJeKgyYKMMa",
"nips_2021_cwWfDHYpb1z",
"nips_2021_cwWfDHYpb1z"
] |
nips_2021_KtvHbjCF4v | IRM---when it works and when it doesn't: A test case of natural language inference | Invariant Risk Minimization (IRM) is a recently proposed framework for out-of-distribution (o.o.d) generalization. Most of the studies on IRM so far have focused on theoretical results, toy problems, and simple models. In this work, we investigate the applicability of IRM to bias mitigation-a special case of o.o.d generalization-in increasingly naturalistic settings and deep models. Using natural language inference (NLI) as a test case, we start with a setting where both the dataset and the bias are synthetic, continue with a natural dataset and synthetic bias, and end with a fully realistic setting with natural datasets and bias. Our results show that in naturalistic settings, learning complex features in place of the bias proves to be difficult, leading to a rather small improvement over empirical risk minimization. Moreover, we find that in addition to being sensitive to random seeds, the performance of IRM also depends on several critical factors, notably dataset size, bias prevalence, and bias strength, thus limiting IRM's advantage in practical scenarios. Our results highlight key challenges in applying IRM to real-world scenarios, calling for a more naturalistic characterization of the problem setup for o.o.d generalization.
| accept | This paper studies the effectiveness of Invariant Risk Minimization (IRM) in order to train unbiased classifiers in a specific Natural Language Inference (NLI) setting. The authors devise three experimental scenarios: synthetic data and synthetic bias, natural data and synthetic bias, natural data and natural bias. In all three settings, the authors study whether IRM is capable of training bias free classifiers by controlling the prevalence, strength of bias and data size. Results seem to show that, apart from a very synthetic setting, IRM seems to only marginally succeed in training robust classifiers, especially when bias prevalence, strength and data sizes are small.
—
All reviewers concur that this paper has extensive experiments and can be a useful resource as is one of the few studies of IRM with non-synthetic data. The paper appears clear although some sentences must be reformulated and made more precise/formal (e.g. "IRM is not able to completely ignore the bias when the test environments are sufficiently different"). Experiments are well-executed. One of the concerns of the reviewers was that the paper is somewhat limited in scope: binary NLI may be a too simplistic task in NLP and thus it is unclear whether the results in the paper can tell something about IRM’s performance in other settings. A reviewer also noticed the absence of performance in more ‘standard’ out-of-domain test sets (e.g. ANLI, counterfactual SNLI, …). We concurred that, in these test sets, it might be difficult to control for factors influencing IRM performance, and it might be reasonable to limit this paper to the careful study of controlled settings. After discussion with reviewers, I suggest the authors to move the HANS results from Appendix C.5 into the main text as this is a well-known evaluation set (even if results might not be directly comparable with reported baselines) and address some of the aforementioned issues. The authors might consider to also test on the Revised Premise dataset from [1]. Finally and most importantly, one source of confusion revolved around the takeaways of the paper. Reviewers and myself agree that this seems like a carefully executed negative results paper, as IRM doesn't seem very successful apart from the synthetic setting. I strongly suggest the authors to take a clearer stance on this by removing emphasis (i.e. from the abstract) on the fact that IRM performs well in a synthetic setting (which is mildly interesting) and emphasize the message in lines 50-51: "However, in these more naturalistic settings, IRM is not able to completely discard the bias, while ERM does not rely solely on the bias. Thus, in practice the advantage of IRM is small."
Overall, this paper is a useful resource as it's one of the few that studies IRM in a controlled and natural, albeit simplistic, task. Provided that the authors incorporate in the final version the suggestions above and reviewers' feedback, I recommend this paper for acceptance.
[1] Kaushik, Divyansh, Eduard Hovy, and Zachary Lipton. "Learning The Difference That Makes A Difference With Counterfactually-Augmented Data." In International Conference on Learning Representations. 2020. | train | [
"C6KqkpPRQ8u",
"SqFSLRPk55G",
"qy4_pA4dlmJ",
"kyJCdKT_l_D",
"XIgvxzbMAL-",
"vdHg2C_kvDF",
"e8sStAP1OuE",
"DSbyBNsWqRc",
"ve--iRoNULf",
"mLZNvS54KJT",
"Ky0nKG6Scvv"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are happy to answer any additional questions regarding the work. \nPlease let us know if the response addresses your concerns.",
"In this paper, the authors perform an empirical analysis of the effectiveness of Invariant Risk Minimization (IRM). They focus on the natural language inference (NLI) task, and wh... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"kyJCdKT_l_D",
"nips_2021_KtvHbjCF4v",
"e8sStAP1OuE",
"nips_2021_KtvHbjCF4v",
"Ky0nKG6Scvv",
"SqFSLRPk55G",
"mLZNvS54KJT",
"ve--iRoNULf",
"nips_2021_KtvHbjCF4v",
"nips_2021_KtvHbjCF4v",
"nips_2021_KtvHbjCF4v"
] |
nips_2021_RQfcckT1M_4 | Self-Supervised Learning Disentangled Group Representation as Feature | A good visual representation is an inference map from observations (images) to features (vectors) that faithfully reflects the hidden modularized generative factors (semantics). In this paper, we formulate the notion of "good" representation from a group-theoretic view using Higgins' definition of disentangled representation, and show that existing Self-Supervised Learning (SSL) only disentangles simple augmentation features such as rotation and colorization, thus unable to modularize the remaining semantics. To break the limitation, we propose an iterative SSL algorithm: Iterative Partition-based Invariant Risk Minimization (IP-IRM), which successfully grounds the abstract semantics and the group acting on them into concrete contrastive learning. At each iteration, IP-IRM first partitions the training samples into two subsets that correspond to an entangled group element. Then, it minimizes a subset-invariant contrastive loss, where the invariance guarantees to disentangle the group element. We prove that IP-IRM converges to a fully disentangled representation and show its effectiveness on various benchmarks. Codes are available at https://github.com/Wangt-CN/IP-IRM.
| accept | This paper proposes a novel approach to an important and popular problem, and in doing so provides a natural but novel application of *another* important and popular line of work (IRM). The work is more theoretically justified than most in the area, and has reasonably satisfying experimental results. Overall, it will make a fine contribution to the conference. To maximize that, please make sure to incorporate the outcome of the reviewer discussions – some of which seemed fairly illuminating – into the final version of the paper. | test | [
"yKk0TTGFor",
"DCv7U8NATJC",
"8lDRoerKr1Z",
"KvQoyzneRSY",
"2i3tdaYz9y6",
"bwwcdM9Eb6U",
"fAksr1zuPb5",
"Bq28MBYMD1v",
"a3O0SJo_CN",
"m1mYGaTm_R1",
"kDxKNeTOVH",
"EcFZDPL6_zR",
"p06URvKpozA",
"Wz39fk2BMR"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank all the reviewers for their positive feedback and constructive suggestions. We are very happy that all the reviewers find our response detailed and informative, as well as agree that our work should be of interest and a good contribution to the whole community. We will carefully revise our paper by takin... | [
-1,
7,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
9,
7
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_RQfcckT1M_4",
"nips_2021_RQfcckT1M_4",
"EcFZDPL6_zR",
"a3O0SJo_CN",
"m1mYGaTm_R1",
"nips_2021_RQfcckT1M_4",
"Bq28MBYMD1v",
"bwwcdM9Eb6U",
"Wz39fk2BMR",
"p06URvKpozA",
"DCv7U8NATJC",
"DCv7U8NATJC",
"nips_2021_RQfcckT1M_4",
"nips_2021_RQfcckT1M_4"
] |
nips_2021_FUxXaBop-J_ | SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning | Augmenting pre-trained language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks. However, for a given task instance, the KG, or certain parts of the KG, may not be useful. Although KG-augmented models often use attention to focus on specific KG components, the KG is still always used, and the attention mechanism is never explicitly taught which KG components should be used. Meanwhile, saliency methods can measure how much a KG feature (e.g., graph, node, path) influences the model to make the correct prediction, thus explaining which KG features are useful. This paper explores how saliency explanations can be used to improve KG-augmented models' performance. First, we propose to create coarse (Is the KG useful?) and fine (Which nodes/paths in the KG are useful?) saliency explanations. Second, to motivate saliency-based supervision, we analyze oracle KG-augmented models which directly use saliency explanations as extra inputs for guiding their attention. Third, we propose SalKG, a framework for KG-augmented models to learn from coarse and/or fine saliency explanations. Given saliency explanations created from a task's training set, SalKG jointly trains the model to predict the explanations, then solve the task by attending to KG features highlighted by the predicted explanations. On three popular commonsense QA benchmarks (CSQA, OBQA, CODAH) and a range of KG-augmented models, we show that SalKG can yield considerable performance gains --- up to 2.76% absolute improvement on CSQA.
| accept | This paper proposes SalKG, a framework for using saliency explanations to learn to better use knowledge graphs (KG) for commonsense reasoning (CSR) tasks. Saliency is considered at two levels: (1) the usefulness of the KG overall called "coarse" and (2) the usefulness of specific nodes and paths in the KG called "fine". To create coarse explanations the authors introduce an ensemble based saliency method for comparing models with and without access to a KG. Fine explanations use off the shelf saliency methods. This leads to SalKG, where explanations for the training data of a CSR task are used to regularize the learning of the attention mechanisms in the CSR architecture. Experiments on the CSQA, OBQA, and CODAH experiments show that the hybrid model significantly outperforms prior work baselines using both BERT and RoBERTa base models.
Many of the reviewers agreed on the strengths of the paper. They found that method is sound and leads to significant results. They generally all agreed that the paper was well written and clear in its presentation. They also remarked on the quality of the analysis, such as the oracle-based approach for analyzing how learning from saliency can lead to improved accuracy.
The reviewers also raised several main concerns. First, some expressed concern that the method was not sufficiently "novel." As the authors clarified in their response, while attention and saliency are both well studied, the significance of their work is using saliency as a training signal for incorporating KGs into CSR.
The other main concern regarded comparison with a wider range of baselines, as well as using the official train/test split for CSQA. In the response, the authors addressed these concerns. First, additional comparisons on the same datasets are included in the appendix. Second, they also submitted a model to the official CSQA test server. Their RoBERTa based model places third among other RoBERTa based models. The authors are encouraged to include these additional results in the final version of the paper.
Ultimately, the strengths outweigh the potential weaknesses. As one reviewer noted, this work has potential application beyond CSR to other tasks like reading comprehension, and it is likely to be of interest to the wider community. | train | [
"SRPvTy4KDCn",
"5QuV3FT3wJI",
"dPULgEieWn",
"-0bpLM_Hyol",
"V6P7yKEaMg3",
"t5sujQOJ8H4",
"gkb6_W9s5K",
"eNp45YveShd",
"WEdtRogiaS",
"JgyuMFssiVS",
"Qq7Yg9mbgAd",
"qpl1Ob79Do3",
"xyTdVL2LaWs",
"vXe5jTPSERt",
"qLVgZQwc6FB",
"aapA6Xf5Vf",
"HviMXYOwEeQ",
"d9yq7Tmo15F",
"k-_Po1YFdc",
... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"autho... | [
" In a previous comment (https://openreview.net/forum?id=FUxXaBop-J_¬eId=ZDvPGJ5myEf), we compared SalKG-Hybrid (RoBERTa + MHGRN)\tto other RoBERTa-based models on the CSQA leaderboard. We also explained that computational limitations prevented us from presenting results on ALBERT at that time.\n\nSince then, we... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2021_FUxXaBop-J_",
"V6P7yKEaMg3",
"EGi8T9G0BFu",
"WEdtRogiaS",
"vXe5jTPSERt",
"JgyuMFssiVS",
"Qq7Yg9mbgAd",
"qpl1Ob79Do3",
"nips_2021_FUxXaBop-J_",
"CCkCeyAQi2O",
"BYwHPLwjhw",
"EGi8T9G0BFu",
"BYwHPLwjhw",
"nQELWaUl8tf",
"WEdtRogiaS",
"CCkCeyAQi2O",
"dPULgEieWn",
"CCkCeyAQi2O... |
nips_2021_kqYiS7HEWfZ | Supervising the Transfer of Reasoning Patterns in VQA | Methods for Visual Question Anwering (VQA) are notorious for leveraging dataset biases rather than performing reasoning, hindering generalization. It has been recently shown that better reasoning patterns emerge in attention layers of a state-of-the-art VQA model when they are trained on perfect (oracle) visual inputs. This provides evidence that deep neural networks can learn to reason when training conditions are favorable enough. However, transferring this learned knowledge to deployable models is a challenge, as much of it is lost during the transfer.We propose a method for knowledge transfer based on a regularization term in our loss function, supervising the sequence of required reasoning operations.We provide a theoretical analysis based on PAC-learning, showing that such program prediction can lead to decreased sample complexity under mild hypotheses. We also demonstrate the effectiveness of this approach experimentally on the GQA dataset and show its complementarity to BERT-like self-supervised pre-training.
| accept | The authors addressed most of the reviewers concerns and all reviewers who engaged in post-rebuttal discussion recommend to accept the paper, one reviewer increasing their score.
The paper contributes an interesting approach to transfer reasoning patterns with program supervision in the context of visual question answering and a solid evaluation.
I recommend accepting the paper under the expectation that the authors will address the concerns as promised in the author response, including:
1) The empirical evaluation of the theoretical claim on sample complexity
2) improve clarity, typos
3) discuss related work Chen et al.
If possible, please also include a better failure analysis as suggested by reviewer uBf9. | train | [
"-MNPqmCkn6e",
"uEj67fEQstv",
"cbmT9AHXC9g",
"XA2uKFQZnvt",
"xhPB2bICbL2",
"4tuPs1taoFS",
"nswh4LF2Ke7",
"_IkGliUMtAX",
"Iko-Nruuta1",
"vzhcQqG1At0",
"ICDHlUvw_lg",
"wz7obitfgzU",
"a2HmnJjcka",
"nStjzkTUlD",
"MAwNcjCgBW_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new loss terms to encourage Transformer-based models perform \"explicit reasoning\", which is beneficial for transferring knowledge from oracle-trained VQA models to deployable ones. Specifically, the novel loss is similar with the one commonly used in modular network (e.g. converting questio... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_kqYiS7HEWfZ",
"cbmT9AHXC9g",
"XA2uKFQZnvt",
"4tuPs1taoFS",
"vzhcQqG1At0",
"nswh4LF2Ke7",
"a2HmnJjcka",
"nips_2021_kqYiS7HEWfZ",
"_IkGliUMtAX",
"MAwNcjCgBW_",
"-MNPqmCkn6e",
"nStjzkTUlD",
"nips_2021_kqYiS7HEWfZ",
"nips_2021_kqYiS7HEWfZ",
"nips_2021_kqYiS7HEWfZ"
] |
nips_2021_e95xWqO7ehi | Conformal Bayesian Computation | Edwin Fong, Chris C. Holmes | accept | This article presents a novel method for constructing prediction intervals with correct frequentist coverage, based on combining frequentist ideas from conformal prediction with a Bayesian model. The main idea is to use the Bayesian posterior predictive density as the conformity measure in the standard conformal prediction framework, and employ importance sampling to avoid having to re-fit the model for every data point. The method uses "add-one-in" importance sampling rather than leave-one-out, which is advantageous in that it lowers the variance of the importance weights, leading to more stable results. The computational cost is also significantly reduced by approximating the outcome space with a discrete grid of points, following Chen *et al* (2018).
The idea is relatively simple but very generally applicable, and has the potential to be high impact in the Bayesian community. The simplicity with which it can be implemented is attractive, and the computational cost is modest: it takes less than 2x the time as the standard Bayesian technique (Tables 1-3). Further, the performance guarantees are quite general, and the experiments demonstrate a striking improvement in coverage performance compared to standard Bayes (Tables 1-3). The paper is well-written and sound.
In my view, the main limitations are:
1) Novelty. The method combines standard techniques for conformal prediction, importance sampling, and Bayesian inference. Thus, the constituent parts are not novel, and they are integrated in a fairly straightforward way. The novelty lies in bringing these pieces together and recognizing its utility.
2) Need to appropriately choose the grid of $y$ points, $\mathcal{Y}_\mathrm{grid}$. The paper provides a reasonable default recommendation for this, which is to use 100 evenly spaced points between $y_\mathrm{min}-2$ and $y_\mathrm{max}+2$. However, it seems that this choice may require application-specific tuning.
3) Model misspecification may increase the variance of the importance sampling approximation, resulting in more unstable inferences.
There was a fairly wide range in the reviewers scores. There were two ratings of 4 from reviewers tEpY (with a low confidence of 2) and dTHk (with a medium confidence of 4). I have carefully examined the paper, the reviews, and the authors' reply, and I believe the main criticisms can be addressed as follows:
- Regarding novelty, reviewers tEpY and dTHk criticize the article for providing incremental improvement upon existing approaches. While it is true that the method can be viewed as combining existing methods, I would argue that the utility of the proposed method — in comparison with existing alternatives — justifies its simplicity. Reviewer dTHk also states that Bürkner *et al* (2020) already introduced the idea of using add-one-in importance sampling for approximating posterior integrals; however, I have examined the Bürkner *et al* (2020) paper and while they use importance sampling, I do not see any use of a technique resembling the method in the present paper.
- Regarding the choice of $\mathcal{Y}_\mathrm{grid}$: Reviewer tEpY criticizes the article for outsourcing the problem of choosing the grid, and the associated tradeoff between computational cost and accuracy, to Chen *et al* (2018). Since the use of a discrete grid for conformal prediction is not the main novel contribution of the paper, this outsourcing seems reasonable in my opinion.
- Reviewer dTHk is concerned that the assumption of a known $x_{n+1}$ may not be practical, and discusses making predictions for a grid of $x_{n+1}$ points. However, as the authors state in their reply, the coverage guarantees are for random $X_{n+1}$, not a given fixed $x_{n+1}$. Thus, in fact, the method effectively considers all possible $x_{n+1}$ values.
- Additionally, reviewer dTHk felt that the article should emphasize more clearly that the coverage guarantees hold in expectation, when averaging over the entire dataset as well as the future datapoint $x_{n+1}$. This is a good suggestion, which is easily addressed via a minor edit, which the authors have volunteered to do in their reply.
Bürkner, P. C., Gabry, J., & Vehtari, A. (2020). *Approximate leave-future-out cross-validation for Bayesian time series models.* Journal of Statistical Computation and Simulation, 90(14), 2499-2523.
Chen, W., Chun, K.-J., and Barber, R. F. (2018). *Discretized conformal prediction for efficient distribution-free inference.* Stat, 7(1):e173.
| train | [
"UqWQckGLrOK",
"xaNH3ET0R1S",
"HyJpUTJ-eAG",
"0Tg7K8lWvru",
"OdPlHyFNDLx",
"OsNHr_4zj7j",
"I3lrc_n7VtQ",
"wKtcCthC6B6",
"hiuAY2IdBOe",
"xs3Z5JleJT_",
"vNYz3r1JJ1W",
"FlR2dOajSh0",
"FsA-G4BCDQ-",
"aBEBPjadfOI",
"_3U37vs8xfz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors, I will leave my score unchanged but also my confidence score remains low since I am no expert in this field.\n\nBriefly going over the submission again, I see incremental modifications of existing approaches, no actual analysis of the approximations obtained and no quantification of the trade-off be... | [
-1,
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
4
] | [
-1,
2,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"FlR2dOajSh0",
"nips_2021_e95xWqO7ehi",
"0Tg7K8lWvru",
"wKtcCthC6B6",
"I3lrc_n7VtQ",
"nips_2021_e95xWqO7ehi",
"xs3Z5JleJT_",
"_3U37vs8xfz",
"FsA-G4BCDQ-",
"OsNHr_4zj7j",
"xaNH3ET0R1S",
"aBEBPjadfOI",
"nips_2021_e95xWqO7ehi",
"nips_2021_e95xWqO7ehi",
"nips_2021_e95xWqO7ehi"
] |
nips_2021_vMWHOumNj5 | A Unified Approach to Fair Online Learning via Blackwell Approachability | We provide a setting and a general approach to fair online learning with stochastic sensitive and non-sensitive contexts.The setting is a repeated game between the Player and Nature, where at each stage both pick actions based on the contexts. Inspired by the notion of unawareness, we assume that the Player can only access the non-sensitive context before making a decision, while we discuss both cases of Nature accessing the sensitive contexts and Nature unaware of the sensitive contexts. Adapting Blackwell's approachability theory to handle the case of an unknown contexts' distribution, we provide a general necessary and sufficient condition for learning objectives to be compatible with some fairness constraints. This condition is instantiated on (group-wise) no-regret and (group-wise) calibration objectives, and on demographic parity as an additional constraint. When the objective is not compatible with the constraint, the provided framework permits to characterise the optimal trade-off between the two.
| accept | Thanks for the strong submission! The reviewers unanimously enjoyed the paper. | train | [
"cIfT-YRFeQ",
"DAe2mQN6P9H",
"R-NiU9j8gIO",
"i6lOvpNa5sD",
"TjZf4Ww-wMP",
"CozKZCaB0pj",
"0YKZ0plhnO",
"vI3jJmDL3kc",
"L-GjSvkA91m",
"LAvd_ooFola"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper looks at the online learning problem with different fairness objectives/constraints from the lens of Blackwell Approachability. Doing so allows one to see what fairness objectives (e.g. group-wise calibration/regret) with possibly additional fairness constraints (e.g. statistical parity) are compatible a... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
9,
7
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"nips_2021_vMWHOumNj5",
"nips_2021_vMWHOumNj5",
"vI3jJmDL3kc",
"CozKZCaB0pj",
"cIfT-YRFeQ",
"LAvd_ooFola",
"L-GjSvkA91m",
"DAe2mQN6P9H",
"nips_2021_vMWHOumNj5",
"nips_2021_vMWHOumNj5"
] |
nips_2021_fNo7un6Ilaj | Training Neural Networks is ER-complete | Mikkel Abrahamsen, Linda Kleist, Tillmann Miltzow | accept | The paper shows that the problem of (**exact**) empirical risk minimization (ERM) in linear neural networks is ETR-Complete. The result is technically innovative, and interesting. Nevertheless it was felt by the reviewers that some of the conclusions made by the authors are misleading, and not very convincing.
While exact solving of the ERM objective is an interesting question, there is a crucial gap between ERM and training -- which normally refers to learning good parameters of the model that obtain small *excess* risk. Because of that, it is normally justified to find an $\textrm{OPT}+\epsilon$ solution for ERM. This is justified because $\epsilon$ error is incurred anyway due to statistical uncertainty. Moreover, in any reasonable model (where solution of the ERM can begin to lead to some generalization guarantees) finding an $\textrm{OPT}+\epsilon$ is regarded NP hard (by standard discretization/packing/covering and union bound arguments).
If the authors want to draw conclusions on the hardness of training from hardness of ERM solving, they would need to further justify this. In the final revision please tone down the last paragraphs in the introduction, and discuss the difference between *exact* ERM and training, as well as address other reviewers comments. | train | [
"Xt0Rxt9aCVf",
"8Hik-2F1VF-",
"gj14cP-Pek-",
"Qd0fQ892WpG",
"clmoS3uyNZz",
"o2QGHDejF0T"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors showed that the training of neural networks is $\\exists\\mathbb{R}$-complete, which implies the reason that techniques commonly used for solving large NP-complete problems do not work for neural network training. Strengths:\n+ The authors partially answered the question of why backpropagation works b... | [
6,
-1,
-1,
9,
5,
4
] | [
2,
-1,
-1,
4,
4,
4
] | [
"nips_2021_fNo7un6Ilaj",
"clmoS3uyNZz",
"o2QGHDejF0T",
"nips_2021_fNo7un6Ilaj",
"nips_2021_fNo7un6Ilaj",
"nips_2021_fNo7un6Ilaj"
] |
nips_2021_te8iyHjbPQd | Understanding the Under-Coverage Bias in Uncertainty Estimation | Yu Bai, Song Mei, Huan Wang, Caiming Xiong | accept | This article provides a theoretical explanation for the empirical observation that quantile regression tends to exhibit an under-coverage bias, in the sense of frequentist coverage of prediction intervals. The theory shows that this under-coverage stems from the error in estimating high-dimensional parameters when the number of parameters $d$ grows proportionally with the sample size $n$. An explicit formula for the bias is provided (Eqn 7) in the case of a linear Gaussian model when $d/n$ is small, under quite general conditions on the noise distribution. This under-coverage occurs even in the well-specified setting of learning a linear quantile function when the true data distribution follows a linear Gaussian model. The theory is validated in empirical experiments on simulated and real data.
The paper is well-written and successfully addresses an important problem in a compelling way. To my knowledge, the results are novel. The reviews were consistently positive, and I believe the paper provides a valuable contribution to the literature. The experimental results on neural networks (e.g., in Table 1) are particularly interesting and relevant to the NeurIPS community.
The main limitation (as mentioned by Reviewer sjsB) is that the analysis considers only the classical linear quantile regression algorithm with pinball loss. It would be interesting to extend the results to more advanced algorithms. However, this is a minor limitation and the paper opens up directions of future research into such extensions.
| train | [
"lYA2RwVKMX6",
"BnyLf6Wyvy1",
"lRsR-ZnHpU_",
"clrKxSGfTzg",
"kf5ZtotgWoV",
"thLBKPn9Ole",
"T2GOgDvgGMX",
"cuWsBd_GgQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this work, the authors consider linear quantile regression, paying particular attention to the \"coverage\" properties of the resulting predictor. The traditional goal of quantile regression is to learn a quantile of the conditional distribution of output $Y$, conditioned on the inputs $X$. Denoting the predict... | [
6,
-1,
-1,
-1,
-1,
6,
7,
8
] | [
4,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"nips_2021_te8iyHjbPQd",
"lYA2RwVKMX6",
"cuWsBd_GgQ",
"T2GOgDvgGMX",
"thLBKPn9Ole",
"nips_2021_te8iyHjbPQd",
"nips_2021_te8iyHjbPQd",
"nips_2021_te8iyHjbPQd"
] |
nips_2021_nhkbYh30Tl | Decentralized Q-learning in Zero-sum Markov Games | We study multi-agent reinforcement learning (MARL) in infinite-horizon discounted zero-sum Markov games. We focus on the practical but challenging setting of decentralized MARL, where agents make decisions without coordination by a centralized controller, but only based on their own payoffs and local actions executed. The agents need not observe the opponent's actions or payoffs, possibly being even oblivious to the presence of the opponent, nor be aware of the zero-sum structure of the underlying game, a setting also referred to as radically uncoupled in the literature of learning in games. In this paper, we develop a radically uncoupled Q-learning dynamics that is both rational and convergent: the learning dynamics converges to the best response to the opponent's strategy when the opponent follows an asymptotically stationary strategy; when both agents adopt the learning dynamics, they converge to the Nash equilibrium of the game. The key challenge in this decentralized setting is the non-stationarity of the environment from an agent's perspective, since both her own payoffs and the system evolution depend on the actions of other agents, and each agent adapts her policies simultaneously and independently. To address this issue, we develop a two-timescale learning dynamics where each agent updates her local Q-function and value function estimates concurrently, with the latter happening at a slower timescale.
| accept | Almost all the Reviewers agree that this paper is worth being published. The potential overlap with the ICML 21 paper raised initially serious concerns by the Reviewers. However, all the Reviewers agree that this paper and the ICML 21 paper are concurrent. I also believe that these two papers present several differences and the authors clearly discuss these differences in the rebuttals. I invite the authors to add a section in their paper in which the differences with ICML 21 are discussed. I also invite the authors to take into seriously consideration the following concerns raised by Reviewer KoGL and to clarify these issues in the paper:
*The "fully" decentralized claim is still questionable to me after reading the response. To some extent, it does make sense for agents to follow "myopic" best-response dynamics. But still, assuming every one of them being at the same level of "myopia" is sort of collaborating. And the Corollary 1 does not really tell any meaningful thing in terms of the algorithm's ability to exploit opponents that do not follows such dynamics.*
Frankly, I don't think that *assuming every one of them being at the same level of "myopia" is sort of collaborating*, but it could be useful that the authors clarify better this point in their paper.
| train | [
"utQMmIpq7h",
"HQutU1_1y1u",
"PtHVLdDqP0O",
"OFNPhh6M_5_",
"fKhuZ7cZ_Ze",
"vbuyWjBuVj",
"U_uWraUGruX",
"OAWkXtLgqL",
"cqKOHYE2T6",
"bZ4DunYHOqj",
"756WoikJffS",
"TcH8TR1OsT",
"cuFAZuKeV-Z"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to first thank the area chair and the reviewers for appreciating the independency and concurrency of the two works, as well as their significant differences. We would be happy to summarize the differences between the two papers as follows, which will also be included in the final version of our pape... | [
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
2
] | [
"PtHVLdDqP0O",
"nips_2021_nhkbYh30Tl",
"cqKOHYE2T6",
"cuFAZuKeV-Z",
"HQutU1_1y1u",
"bZ4DunYHOqj",
"TcH8TR1OsT",
"756WoikJffS",
"nips_2021_nhkbYh30Tl",
"nips_2021_nhkbYh30Tl",
"nips_2021_nhkbYh30Tl",
"nips_2021_nhkbYh30Tl",
"nips_2021_nhkbYh30Tl"
] |
nips_2021_Qh-fwFsrEz | Fast Certified Robust Training with Short Warmup | Zhouxing Shi, Yihan Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh | accept | This paper makes useful contributions to robust training. The authors begin by identifying several issues with current certified robustness training and next propose effective approaches to mitigating these issues. The improvement is shown to be substantial in their experiments. | train | [
"wWO3UdHbLMK",
"NgBc5KPWgf3",
"R2E8gb_H4V2",
"dlQ7JpWMuWW",
"Cj7MAKY0X6B",
"Tt5IsQ69gWG",
"51EAFGStr-0",
"eZSr1PjbNxK",
"bnaOXoLZZ7i",
"0V-RSubi9H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\n\nIn this paper, the authors propose a weight initialization scheme and regularizers for certified robust training. Moreover, the authors argue the benefit of Batch Normalization (BN) in certified training. \nPros.\n 1. The idea of keeping the difference gain close to one to achieve stable bounds is interest... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_Qh-fwFsrEz",
"nips_2021_Qh-fwFsrEz",
"NgBc5KPWgf3",
"wWO3UdHbLMK",
"wWO3UdHbLMK",
"NgBc5KPWgf3",
"bnaOXoLZZ7i",
"0V-RSubi9H",
"nips_2021_Qh-fwFsrEz",
"nips_2021_Qh-fwFsrEz"
] |
nips_2021_9uXILaIam0 | Vector-valued Distance and Gyrocalculus on the Space of Symmetric Positive Definite Matrices | We propose the use of the vector-valued distance to compute distances and extract geometric information from the manifold of symmetric positive definite matrices (SPD), and develop gyrovector calculus, constructing analogs of vector space operations in this curved space. We implement these operations and showcase their versatility in the tasks of knowledge graph completion, item recommendation, and question answering. In experiments, the SPD models outperform their equivalents in Euclidean and hyperbolic space. The vector-valued distance allows us to visualize embeddings, showing that the models learn to disentangle representations of positive samples from negative ones.
| accept | The paper proposes to use vector-valued distances (and calculus) on the manifold of symmetric positive definite matrices to use in machine learning applications. As all reviewers agree, the paper contains novel insights and interesting applications (with good analysis). While some theoretical insights are well-known in some previous works, the paper does a great job of putting things cleanly for the NeurIPS community.
| train | [
"MMdiLvPLPYj",
"2pmAx9yiBaN",
"hAkPlu6Gl_F",
"RFq56EPvaDT",
"EkkwouPQrVM",
"gAKmmEq9sXc",
"INnMS4H8bEr",
"dTZjUKeuGo",
"UNPLwgiRgUM",
"5eXVNKMuY-",
"WtDBE4uTUgi",
"XDK7lHI8kCd",
"aNuuzBpBmEx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors' explanations. I keep the positive rating. ",
" I appreciate the detailed responses from the authors. Major concerns have been addressed. ",
" I thank the authors for the details responses which cleared my doubts about the paper. I keep my positive score regarding the acceptance of the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
3
] | [
"gAKmmEq9sXc",
"INnMS4H8bEr",
"EkkwouPQrVM",
"dTZjUKeuGo",
"aNuuzBpBmEx",
"XDK7lHI8kCd",
"WtDBE4uTUgi",
"5eXVNKMuY-",
"nips_2021_9uXILaIam0",
"nips_2021_9uXILaIam0",
"nips_2021_9uXILaIam0",
"nips_2021_9uXILaIam0",
"nips_2021_9uXILaIam0"
] |
nips_2021_zmbiQmdtg9 | Improved Transformer for High-Resolution GANs | Long Zhao, Zizhao Zhang, Ting Chen, Dimitris Metaxas, Han Zhang | accept | The paper proposes a transformer-based generative model that shows better results on multiple standard benchmarks. The model includes multi-axis blocked self-attention at early stages and uses MLP for late stages. The reviewers appreciate the idea of a transformer-based generator, clear writing, and experimental results. There are also concerns about the STOA results on ImageNet (compared to BigBiGAN) and the motivation and advantage of transformer-based architecture compared to ConvNet. The rebuttal addressed some of the concerns. Overall, the significance of the work outweighs the limitations. I recommend accepting the work. The authors are encouraged to carefully examine the downsampling functions for 128x128 Imagenet (use antialias=True in TF, for example) and discuss the latest results (e.g., BigBiGAN) in ImageNet experiments. | train | [
"JSTAlvfzhpW",
"01jQIrCLNs8",
"l8GjxrFjBnz",
"R4pTAMh5w6j",
"SNBporqNtme",
"EGWhDrYA2Qj",
"rk5oK54nl-2",
"Ag5sPJ4s6If",
"_aDLS3-PSF6",
"nuhWYIg2Hhp",
"mzWzQ01gA5P",
"_FGNADP7Pfu"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer JC42,\n\nThank you for your detailed and instructive reviews. Reviewers 6oDm and QCfs have responded, and Reviewer 6oDm increased their scores and others remain their scores. We would like to know if our response has resolved your concerns. In our response, we believe we have addressed your main con... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
5,
8,
4
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
3
] | [
"_FGNADP7Pfu",
"Ag5sPJ4s6If",
"nips_2021_zmbiQmdtg9",
"l8GjxrFjBnz",
"_FGNADP7Pfu",
"_aDLS3-PSF6",
"mzWzQ01gA5P",
"nuhWYIg2Hhp",
"nips_2021_zmbiQmdtg9",
"nips_2021_zmbiQmdtg9",
"nips_2021_zmbiQmdtg9",
"nips_2021_zmbiQmdtg9"
] |
nips_2021_pmWeMLm411_ | Learning High-Precision Bounding Box for Rotated Object Detection via Kullback-Leibler Divergence | Existing rotated object detectors are mostly inherited from the horizontal detection paradigm, as the latter has evolved into a well-developed area. However, these detectors are difficult to perform prominently in high-precision detection due to the limitation of current regression loss design, especially for objects with large aspect ratios. Taking the perspective that horizontal detection is a special case for rotated object detection, in this paper, we are motivated to change the design of rotation regression loss from induction paradigm to deduction methodology, in terms of the relation between rotation and horizontal detection. We show that one essential challenge is how to modulate the coupled parameters in the rotation regression loss, as such the estimated parameters can influence to each other during the dynamic joint optimization, in an adaptive and synergetic way. Specifically, we first convert the rotated bounding box into a 2-D Gaussian distribution, and then calculate the Kullback-Leibler Divergence (KLD) between the Gaussian distributions as the regression loss. By analyzing the gradient of each parameter, we show that KLD (and its derivatives) can dynamically adjust the parameter gradients according to the characteristics of the object. For instance, it will adjust the importance (gradient weight) of the angle parameter according to the aspect ratio. This mechanism can be vital for high-precision detection as a slight angle error would cause a serious accuracy drop for large aspect ratios objects. More importantly, we have proved that KLD is scale invariant. We further show that the KLD loss can be degenerated into the popular Ln-norm loss for horizontal detection. Experimental results on seven datasets using different detectors show its consistent superiority, and codes are available at https://github.com/yangxue0827/RotationDetection.
| accept | The reviewers have discussed the paper and have not come to an agreement.
Reasons to accept: the proposed approach is noted to be simple and yet effective, and it can improve the performance of various existing object detectors on different datasets significantly. Most of the issues raised in the reviews were properly addressed in the author responses (including extra experimental results).
Reasons to reject: some of the reviewers were not satisfied with the provided responses and decided not to support the paper.
I think that the overall pros of the paper outweigh the cons and would like to recommend accepting the paper.
| val | [
"wmt5dUBtsiq",
"49V7NvjcNNb",
"LvvqSGdQbbe",
"7DLNrINmln",
"uJqFJ3sLsNW",
"VyDMZlGdQQk",
"8rmUCSy33uZ",
"bhMcZKMxXAh",
"0mjk9uDL_bJ",
"6RP-4dZzwIv",
"GgXZaV7VVVw",
"OFC7nzZxBy",
"T5Z9SruKtEE",
"zjXLW0Xgd3S",
"gK7Lb2jaq7P",
"tqqsk4zSkr",
"-D0QTGdoVT",
"g7jgQn4DNPm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes an algorithm that can detect slender objects,\nlike text in images or boats in aerial images. The work mainly focuses on\nthe bounding box regression task and includes a parameter for box rotation.\nBounding boxes are represented as Gaussian functions instead of axis-aligned\nboxes. The scale in... | [
2,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_pmWeMLm411_",
"LvvqSGdQbbe",
"7DLNrINmln",
"uJqFJ3sLsNW",
"6RP-4dZzwIv",
"bhMcZKMxXAh",
"nips_2021_pmWeMLm411_",
"0mjk9uDL_bJ",
"8rmUCSy33uZ",
"wmt5dUBtsiq",
"OFC7nzZxBy",
"T5Z9SruKtEE",
"zjXLW0Xgd3S",
"wmt5dUBtsiq",
"wmt5dUBtsiq",
"8rmUCSy33uZ",
"g7jgQn4DNPm",
"nips_202... |
nips_2021_UKoV0-BamX4 | On Locality of Local Explanation Models | Shapley values provide model agnostic feature attributions for model outcome at a particular instance by simulating feature absence under a global population distribution. The use of a global population can lead to potentially misleading results when local model behaviour is of interest. Hence we consider the formulation of neighbourhood reference distributions that improve the local interpretability of Shapley values. By doing so, we find that the Nadaraya-Watson estimator, a well-studied kernel regressor, can be expressed as a self-normalised importance sampling estimator. Empirically, we observe that Neighbourhood Shapley values identify meaningful sparse feature relevance attributions that provide insight into local model behaviour, complimenting conventional Shapley analysis. They also increase on-manifold explainability and robustness to the construction of adversarial classifiers.
| accept | The paper discusses local feature importance and presents a limitation of current methods: they are, in a sense, not local “enough”. This is an important observation that should be studied. The authors present a possible solution to this problem by adding geometric and distribution-related components to the valuation function and show how this changes the feature importance score.
We had active discussion about this work. It was agreed that the problem presented is well justified. However, there were concerns about the proposed solution and therefore we conclude that it be more beneficial to reject the paper and encourage the authors to further study the problem. Here are some examples to questions that the authors may wish to consider when further exploring this domain:
- How would the explanation behave for points that are relatively remote? Note that the reviewers have commented that the explanation could fail off the manifold and it is important to mention this in the final version of this work.
- How would the explanation behave in high dimension when the Euclidean distance might be altered by many non-relevant features or features with different scale?
- How should we measure how good is the explanation? I.e., if someone proposes a different way to provide local explanations, how would we know which one is better? In other words, how do we know that the feature importance score presented here is good?
- Do the properties of Shapley Value match the requirements in the case? After all, Shapley Value was design to solve a fairness problem in a completely different domain and therefore it is not clear why the same properties are required in this setting.
This study is presenting an interesting direction but is not ready yet for publication in NeurIPS.
| train | [
"E3BM2-wAvxq",
"AZO3g8pXbLe",
"WQPxff8kmG",
"RJhHcvdlULD",
"2FBige88hsS",
"yXSS__C3rVi",
"r5KsJCvzNPs",
"IUegFuW8qn6",
"uAmjazDHeTo",
"pR4jesWu4WF",
"C_67w-aNDXA"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the positive feedback. We have now added a reference to Theorem 9.2 of Owen (2013) in the revised version of the paper. \n\n### References \nOwen, A. B. (2013). Monte Carlo theory, methods and examples. ",
" Thank you very much for the detailed reply. \n\nI would encourage the authors to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
5
] | [
"AZO3g8pXbLe",
"yXSS__C3rVi",
"C_67w-aNDXA",
"pR4jesWu4WF",
"C_67w-aNDXA",
"uAmjazDHeTo",
"IUegFuW8qn6",
"nips_2021_UKoV0-BamX4",
"nips_2021_UKoV0-BamX4",
"nips_2021_UKoV0-BamX4",
"nips_2021_UKoV0-BamX4"
] |
nips_2021_3qMwV98zLIk | FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling | The recently proposed FixMatch achieved state-of-the-art results on most semi-supervised learning (SSL) benchmarks. However, like other modern SSL algorithms, FixMatch uses a pre-defined constant threshold for all classes to select unlabeled data that contribute to the training, thus failing to consider different learning status and learning difficulties of different classes. To address this issue, we propose Curriculum Pseudo Labeling (CPL), a curriculum learning approach to leverage unlabeled data according to the model's learning status. The core of CPL is to flexibly adjust thresholds for different classes at each time step to let pass informative unlabeled data and their pseudo labels. CPL does not introduce additional parameters or computations (forward or backward propagation). We apply CPL to FixMatch and call our improved algorithm FlexMatch. FlexMatch achieves state-of-the-art performance on a variety of SSL benchmarks, with especially strong performances when the labeled data are extremely limited or when the task is challenging. For example, FlexMatch outperforms FixMatch by 14.32% and 24.55% on CIFAR-100 and STL-10 datasets respectively, when there are only 4 labels per class. CPL also significantly boosts the convergence speed, e.g., FlexMatch can use only 1/5 training time of FixMatch to achieve even better performance. Furthermore, we show that CPL can be easily adapted to other SSL algorithms and remarkably improve their performances. We open-source our code at https://github.com/TorchSSL/TorchSSL.
| accept | This paper proposes Curriculum Pseudo Labeling (CPL), an extension to semi-supervised learning algorithms that uses an estimate of how well the model has learned each class to set per-class thresholds for promoting unlabeled data to pseudolabeled data. This conceptually simple idea has a significant impact in practice. It is also simple to implement and does not significantly add to the computational cost of the base SSL methods. (It actually can speed up convergence significantly.) Further, the combination of CPL and FixMatch (called FlexMatch) sets new state-of-the-art scores on several SSL benchmarks. Finally, the paper includes the release of Torch SSL, a codebase for the study of SSL.
The reviewers generally liked that the method was conceptually simple, effective, and easy to combine with many existing methods. Three of the four reviewers clearly favored acceptance, for the above strengths. Some of the concerns they had initially were addressed during the discussion period, and the authors are encouraged to include these clarifications in the final version of the paper. One reviewer had serious concerns about the experimental setup. The results reported in the paper for baselines differ from some previously reported in the literature. The authors attribute this to both a difference in the number of labeled examples (which is fair to vary in SSL experiments) and differences in hyperparameters. The authors are encouraged to both clarify whether the hyperparameters were tuned to be optimal for each baseline and also to add comparisons on the same number of labeled examples as previously used in the literature (perhaps in an appendix) for a more detailed comparison. | train | [
"zeX5FWVQFRn",
"UOwqLzA-TAs",
"p1inAo3L1V3",
"2X9BFvgvll",
"m8hD2L48Ia",
"baiwFfZrXoo",
"jrgkVIDno4u",
"pUkFJYX7Swt",
"mGWBRPJsVD",
"7Tk69K1Fos",
"TTS-ColdjMr",
"ni3NKDYEga",
"IVZtiXnzuRT",
"4aCYomKaOuH"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper introduces a curriculum learning strategy for semi-supervised learning. Their motivation is that most existing semi-supervised learning utilizes a fixed threshold to compute pseudo-labeling loss, but the fixed threshold should be adjusted according to the state of the model. Then, they propose to adjust... | [
7,
-1,
-1,
3,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_3qMwV98zLIk",
"p1inAo3L1V3",
"m8hD2L48Ia",
"nips_2021_3qMwV98zLIk",
"baiwFfZrXoo",
"ni3NKDYEga",
"nips_2021_3qMwV98zLIk",
"mGWBRPJsVD",
"7Tk69K1Fos",
"jrgkVIDno4u",
"zeX5FWVQFRn",
"2X9BFvgvll",
"4aCYomKaOuH",
"nips_2021_3qMwV98zLIk"
] |
nips_2021_sygvo7ctb_ | Relative Flatness and Generalization | Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks. While it has been empirically observed that flatness measures consistently correlate strongly with generalization, it is still an open theoretical problem why and under which circumstances flatness is connected to generalization, in particular in light of reparameterizations that change certain flatness measures but leave generalization unchanged. We investigate the connection between flatness and generalization by relating it to the interpolation from representative data, deriving notions of representativeness, and feature robustness. The notions allow us to rigorously connect flatness and generalization and to identify conditions under which the connection holds. Moreover, they give rise to a novel, but natural relative flatness measure that correlates strongly with generalization, simplifies to ridge regression for ordinary least squares, and solves the reparameterization issue.
| accept | This paper provides a theoretical account of the relation between flatness of minima and generalization in deep neural networks. This is an important problem that attracted recent attention. The results presented here (Theorem 6) provide novel insight and are validated by experiments. The presentation is clear. | train | [
"R9ca8T1A4ix",
"1_0oRrXNQpc",
"qLNUGrBzTKR",
"gPJ3mjkuhMD",
"k6Aktbd7G9m",
"sLuvHtfV5oe",
"iWfyISI06Fn",
"qO4HY15qal",
"nVUafI86vt",
"OLTRdddxN_t",
"6WNBHdM4tKi",
"qnwyTGfgJ9"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I've updated my rating from 5 to 6.\n\nA weighted sum of Hessian eigenvalues would indeed be an interesting direction, though that seems like a somewhat different direction from the current work. I'm curious the authors' thoughts on whether lack of reparameterization invariance being treated fairly (not just by t... | [
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nVUafI86vt",
"nips_2021_sygvo7ctb_",
"gPJ3mjkuhMD",
"6WNBHdM4tKi",
"nips_2021_sygvo7ctb_",
"qO4HY15qal",
"nips_2021_sygvo7ctb_",
"OLTRdddxN_t",
"1_0oRrXNQpc",
"k6Aktbd7G9m",
"qnwyTGfgJ9",
"nips_2021_sygvo7ctb_"
] |
nips_2021_6mEWjDYJeE- | The Image Local Autoregressive Transformer | Recently, AutoRegressive (AR) models for the whole image generation empowered by transformers have achieved comparable or even better performance compared to Generative Adversarial Networks (GANs). Unfortunately, directly applying such AR models to edit/change local image regions, may suffer from the problems of missing global information, slow inference speed, and information leakage of local guidance. To address these limitations, we propose a novel model -- image Local Autoregressive Transformer (iLAT), to better facilitate the locally guided image synthesis. Our iLAT learns the novel local discrete representations, by the newly proposed local autoregressive (LA) transformer of the attention mask and convolution mechanism. Thus iLAT can efficiently synthesize the local image regions by key guidance information. Our iLAT is evaluated on various locally guided image syntheses, such as pose-guided person image synthesis and face editing. Both quantitative and qualitative results show the efficacy of our model.
| accept | Three reviewers are positive about the paper, while one is more sceptical. As pointed out by the authors, the latter may be due to some confusion about the task addressed: image editing vs. image inpainting.
So the AC follows the advise of the positive reviewers, and decides to accept the paper for publication in Neurips.
The authors are encouraged to work on the paper to improve the clarity, based on the constructive feedback given by the reviewers. | train | [
"dk1emrjpw0y",
"2VkO6spq9jg",
"SJ-i3Vr4o2l",
"YU7wHPsVX_S",
"S_5SDhYTZBp",
"WI1LUYKmBN",
"9T_t0UVHdY1",
"-krG3-LW8RS",
"po1d2EocdkQ",
"Ix_uzpnXcwg",
"x6FBzCK9mVz",
"QHLg5ZbxVqG"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a transformer-based model for local image editing. The key challenges tackled by the paper is the need to integrate global information, while also preventing leakage of local guidance information (i.e. the local editing should only affect the designated region). To accomplish this, a combinatio... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
3
] | [
"nips_2021_6mEWjDYJeE-",
"S_5SDhYTZBp",
"YU7wHPsVX_S",
"9T_t0UVHdY1",
"po1d2EocdkQ",
"QHLg5ZbxVqG",
"dk1emrjpw0y",
"x6FBzCK9mVz",
"Ix_uzpnXcwg",
"nips_2021_6mEWjDYJeE-",
"nips_2021_6mEWjDYJeE-",
"nips_2021_6mEWjDYJeE-"
] |
nips_2021_e5vrkfc5aau | Towards Multi-Grained Explainability for Graph Neural Networks | When a graph neural network (GNN) made a prediction, one raises question about explainability: “Which fraction of the input graph is most influential to the model’s decision?” Producing an answer requires understanding the model’s inner workings in general and emphasizing the insights on the decision for the instance at hand. Nonetheless, most of current approaches focus only on one aspect: (1) local explainability, which explains each instance independently, thus hardly exhibits the class-wise patterns; and (2) global explainability, which systematizes the globally important patterns, but might be trivial in the local context. This dichotomy limits the flexibility and effectiveness of explainers greatly. A performant paradigm towards multi-grained explainability is until-now lacking and thus a focus of our work. In this work, we exploit the pre-training and fine-tuning idea to develop our explainer and generate multi-grained explanations. Specifically, the pre-training phase accounts for the contrastivity among different classes, so as to highlight the class-wise characteristics from a global view; afterwards, the fine-tuning phase adapts the explanations in the local context. Experiments on both synthetic and real-world datasets show the superiority of our explainer, in terms of AUC on explaining graph classification over the leading baselines. Our codes and datasets are available at https://github.com/Wuyxin/ReFine.
| accept | In this paper, the authors propose an explainability approach to graph neural networks (GNN). The fundamental idea is to first derive high-level explanations for each class and then fine-tune these explanations to obtain low-level explanations for each instance. Reviewers agree that the problem in this paper is important. During the discussion, we carefully consider the novelty of the proposed method over the recent works (NeurIPS 2020) and we agree that the GNN formulation has some merits. However, there still exist some concerns in experiments for publication. Thus, I encourage authors to revise the paper based on reviewer's comments and resubmit it to a future venue. | train | [
"2Xhz65SPYn7",
"nOKeA3fGRnu",
"XK2Bhl7G_5s",
"GYmquyLNlmQ",
"kvP7VV2tKOC",
"VxxQmM5DMUR",
"sAojxiswwUI",
"--iCiMFHXQ_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper aims to develop an explainability approach to graph neural networks. The authors introduce such a method coined ReFine which in fact combines local and global aspects. They also provide numerical experiments proving the superiority of their approach, even if only one part of ReFine is used. Almost all... | [
7,
4,
7,
-1,
-1,
-1,
-1,
5
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_e5vrkfc5aau",
"nips_2021_e5vrkfc5aau",
"nips_2021_e5vrkfc5aau",
"nOKeA3fGRnu",
"XK2Bhl7G_5s",
"2Xhz65SPYn7",
"--iCiMFHXQ_",
"nips_2021_e5vrkfc5aau"
] |
nips_2021_fIn4wLS2XzU | Behavior From the Void: Unsupervised Active Pre-Training | We introduce a new unsupervised pre-training method for reinforcement learning called APT, which stands for Active Pre-Training. APT learns behaviors and representations by actively searching for novel states in reward-free environments. The key novel idea is to explore the environment by maximizing a non-parametric entropy computed in an abstract representation space, which avoids challenging density modeling and consequently allows our approach to scale much better in environments that have high-dimensional observations (e.g., image observations). We empirically evaluate APT by exposing task-specific reward after a long unsupervised pre-training phase. In Atari games, APT achieves human-level performance on 12 games and obtains highly competitive performance compared to canonical fully supervised RL algorithms. On DMControl suite, APT beats all baselines in terms of asymptotic performance and data efficiency and dramatically improves performance on tasks that are extremely difficult to train from scratch.
| accept | This paper proposes a novel unsupervised pre-training method for reinforcement learning, using a particle-based entropy estimator on a contrastive-loss trained feature representation. The method is novel and scalable and the empirical results and rebuttal additional experiments are strong. I trust the authors will incorporate the reviewers' comments and discussion points into the final version of the manuscript. | train | [
"m5BV2y0EwaN",
"nK5NZh0e-I",
"Ip_lNkt5TG8",
"e025HGd_-By",
"g5DHoBLXy-8",
"LhIB5EDN8jb",
"XwIcq997yp",
"9giig9qTF0T",
"l5oB7r2Wrni",
"R33YSog2vme",
"D8ml2z53iId"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for the suggestion regarding proposition 1, we will remove it in the next version. ",
"This paper proposes an unsupervised (or rather, self-supervised) training method for RL, named active pre-training. This is based on using contrastive learning from pixels, then maximizing state entropy in the abstr... | [
-1,
8,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
5
] | [
"Ip_lNkt5TG8",
"nips_2021_fIn4wLS2XzU",
"R33YSog2vme",
"nips_2021_fIn4wLS2XzU",
"nips_2021_fIn4wLS2XzU",
"l5oB7r2Wrni",
"nK5NZh0e-I",
"e025HGd_-By",
"g5DHoBLXy-8",
"D8ml2z53iId",
"nips_2021_fIn4wLS2XzU"
] |
nips_2021_ELU8Bu1Z9w1 | Autonomous Reinforcement Learning via Subgoal Curricula | Reinforcement learning (RL) promises to enable autonomous acquisition of complex behaviors for diverse agents. However, the success of current reinforcement learning algorithms is predicated on an often under-emphasised requirement -- each trial needs to start from a fixed initial state distribution. Unfortunately, resetting the environment to its initial state after each trial requires substantial amount of human supervision and extensive instrumentation of the environment which defeats the goal of autonomous acquisition of complex behaviors. In this work, we propose Value-accelerated Persistent Reinforcement Learning (VaPRL), which generates a curriculum of initial states such that the agent can bootstrap on the success of easier tasks to efficiently learn harder tasks. The agent also learns to reach the initial states proposed by the curriculum, minimizing the reliance on human interventions into the learning. We observe that VaPRL reduces the interventions required by three orders of magnitude compared to episodic RL while outperforming prior state-of-the art methods for reset-free RL both in terms of sample efficiency and asymptotic performance on a variety of simulated robotics problems.
| accept | After reading each other's reviews and the authors' feedback, the reviewers discussed the merits and flaws of the paper.
The reviewers did not reach a consensus. In particular, the lack of a theoretical analysis together with some doubts about its applicability to realistic scenarios have been considered as significant limitations.
Nonetheless, the majority of the reviewers were satisfied with the answers provided by the authors and think that the approach proposed in the paper is interesting and promising.
Overall, I think that, despite its limits, the paper provides a nice contribution and I propose to accept it.
I want to congratulate the authors and invite them to modify their paper following the reviewers' suggestions. | train | [
"PntfEiwlDHG",
"H2YdGjhkfcV",
"kNmyitzlPom",
"6-B9yy0_KOx",
"I6mWSTDKAM5",
"TEDNBm_QJxV",
"xmQxaeh7Zp",
"tit1hqP_UYw",
"hV1OCTFhlCk",
"h3cj_wLFdcJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a curriculum approach for persistent goal conditioned reinforcement learning, that is, for learning how to train continuously and then evaluate in episodic fashion. The main idea is, in each iteration, to try to keep the value of moving from a selected state to the goal above a threshold while s... | [
7,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_ELU8Bu1Z9w1",
"nips_2021_ELU8Bu1Z9w1",
"xmQxaeh7Zp",
"nips_2021_ELU8Bu1Z9w1",
"h3cj_wLFdcJ",
"h3cj_wLFdcJ",
"6-B9yy0_KOx",
"H2YdGjhkfcV",
"PntfEiwlDHG",
"nips_2021_ELU8Bu1Z9w1"
] |
nips_2021_48LtSAkxjiX | Statistically and Computationally Efficient Linear Meta-representation Learning | Kiran K. Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh | accept | All reviewers agree that this is a solid contribution to NeurIPS: it tackles a relevant problem in the context of meta-learning and does so by proposing a novel and interesting strategy inspired by the matrix sensing literature. The authors provide a thorough theoretical investigation of the setting considered, which yields insights also on previous work on the topic of linear meta-representation learning. There are a few issues with clarity that authors have addressed during the discussion period and that will need to be taken care of in the final version of the paper. In particular, including material from the Appendix (e.g. plots concerning the behavior of Burer-Monteiro gradient descent) and adding further insight regarding the distinction between AltMin and AltMinGD-S (also explaining why this was not tested in the experiments). | test | [
"REegeBd6SdC",
"mfeikus14vq",
"pPeOh_70W5",
"7lpcHQhYRvO",
"EhnToeUrNvP",
"nOrEwxi6l2p",
"FWruyJ9JALD",
"0bEsnHUNue4",
"rEIJ43nNoXf",
"NZFmZ_3Vgqz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your quick response and for evaluating our work higher. We will surely add these points and explanations to our next revision. Again, we appreciate your feedback to improve our paper.",
"The paper analyses linear meta-representation learning, deriving formally statistical guarantees for widely ado... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"pPeOh_70W5",
"nips_2021_48LtSAkxjiX",
"nOrEwxi6l2p",
"rEIJ43nNoXf",
"NZFmZ_3Vgqz",
"mfeikus14vq",
"0bEsnHUNue4",
"nips_2021_48LtSAkxjiX",
"nips_2021_48LtSAkxjiX",
"nips_2021_48LtSAkxjiX"
] |
nips_2021_BuoTowxp-9 | Decentralized Learning in Online Queuing Systems | Flore Sentenac, Etienne Boursier, Vianney Perchet | accept | The paper makes a clear contribution to the question of decentralized learning in queuing systems.
Thus, we decided to accept the paper. | train | [
"v9kYgIfSxgh",
"HXHUCPe_Bts",
"b0YAe4AIPEC",
"2LCpfch_sK",
"N5_hknXUAj",
"08JydvIih_g",
"DiAty7ZtUla",
"lO_BGM8sElX",
"ndaTqtTNh_",
"MMJHeZJEXk2",
"Vk7FZQ_qLx",
"myBc7_XFfU"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification about shared randomness. My evaluations remain the same. ",
" Appreciate the authors' detailed response, and my evaluation remains the same. ",
" Thanks for the author's response. \n\nAfter reading the other reviews, I'll raise my score to 5.\n\nI'd like to make three suggestions:... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
-1,
1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"lO_BGM8sElX",
"MMJHeZJEXk2",
"DiAty7ZtUla",
"nips_2021_BuoTowxp-9",
"08JydvIih_g",
"ndaTqtTNh_",
"2LCpfch_sK",
"Vk7FZQ_qLx",
"myBc7_XFfU",
"nips_2021_BuoTowxp-9",
"nips_2021_BuoTowxp-9",
"nips_2021_BuoTowxp-9"
] |
nips_2021_ljOg2HIBDGH | Explainable Semantic Space by Grounding Language to Vision with Cross-Modal Contrastive Learning | In natural language processing, most models try to learn semantic representations merely from texts. The learned representations encode the “distributional semantics” but fail to connect to any knowledge about the physical world. In contrast, humans learn language by grounding concepts in perception and action and the brain encodes “grounded semantics” for cognition. Inspired by this notion and recent work in vision-language learning, we design a two-stream model for grounding language learning in vision. The model includes a VGG-based visual stream and a Bert-based language stream. The two streams merge into a joint representational space. Through cross-modal contrastive learning, the model first learns to align visual and language representations with the MS COCO dataset. The model further learns to retrieve visual objects with language queries through a cross-modal attention module and to infer the visual relations between the retrieved objects through a bilinear operator with the Visual Genome dataset. After training, the model’s language stream is a stand-alone language model capable of embedding concepts in a visually grounded semantic space. This semantic space manifests principal dimensions explainable with human intuition and neurobiological knowledge. Word embeddings in this semantic space are predictive of human-defined norms of semantic features and are segregated into perceptually distinctive clusters. Furthermore, the visually grounded language model also enables compositional language understanding based on visual knowledge and multimodal image search with queries based on images, texts, or their combinations.
| accept | This paper investigates cross-modal contrastive learning for semantic representations and finds that grounded language embeddings are more semantically coherent than un-grounded ones. The reviewers generally like this direction and the paper is well-written. The authors should really heed the feedback from the reviewers, however, and in particular: the authors should make *very* clear that this is not a methods paper, but an analysis paper. Occasionally the paper reads too much like it is claiming a new method, which is problematic because a) the comparison to other models is lacking; and b) the paper is not adequately positioned in the existing literature. There are many papers that have explored visually-grounded semantic embeddings in the past, several of them already quite old, and this work should be positioned accordingly. | test | [
"ujtFJP3kLBZ",
"l1e9NOFPGSL",
"nJGXiacDp0z",
"DtS2uVFE0Pl",
"hSrXRDiA4b",
"_s1-Y6EXV51",
"R_LkgwwOwPQ",
"ED1llJNRRU",
"M0qcIrlM_uy",
"ll6zWT4PHWU",
"17U8ejAzLQf",
"wz6SotD-8MZ",
"iP5MARczXsR",
"wufaBOrVgP3",
"DcpSAiSTljZ",
"AUiSIezIrX-"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for your constructive and encouraging feedback. We will revise the paper to further clarify the detailed method used for multimodal image search in Section 4.5. In addition, we will also describe more implementation details and include a script for running multimodal image search examples in the supplem... | [
-1,
7,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
4,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nJGXiacDp0z",
"nips_2021_ljOg2HIBDGH",
"l1e9NOFPGSL",
"nips_2021_ljOg2HIBDGH",
"R_LkgwwOwPQ",
"nips_2021_ljOg2HIBDGH",
"M0qcIrlM_uy",
"wufaBOrVgP3",
"_s1-Y6EXV51",
"ED1llJNRRU",
"DtS2uVFE0Pl",
"l1e9NOFPGSL",
"wz6SotD-8MZ",
"DcpSAiSTljZ",
"AUiSIezIrX-",
"nips_2021_ljOg2HIBDGH"
] |
nips_2021_eAPrmf2g8f2 | BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining | Neural network robustness has become a central topic in machine learning in recent years. Most training algorithms that improve the model's robustness to adversarial and common corruptions also introduce a large computational overhead, requiring as many as ten times the number of forward and backward passes in order to converge. To combat this inefficiency, we propose BulletTrain, a boundary example mining technique to drastically reduce the computational cost of robust training. Our key observation is that only a small fraction of examples are beneficial for improving robustness. BulletTrain dynamically predicts these important examples and optimizes robust training algorithms to focus on the important examples. We apply our technique to several existing robust training algorithms and achieve a 2.2x speed-up for TRADES and MART on CIFAR-10 and a 1.7x speed-up for AugMix on CIFAR-10-C and CIFAR-100-C without any reduction in clean and robust accuracy.
| accept | This work proposes to a method to accelerate robust training of neural networks by focusing on boundary examples. The idea is interesting and is well received by the reviewers. The paper is well motivated and clearly writing. Authors' rebuttal also did a great work at resolving the concerns from the reviewers.
On the other hand, there are some weaknesses that we hope the authors make efforts to address in the final version. This includes (as summarized by reviewer 5TSA during discussion): (1) experimental validation of the signed-prediction variance proxy used in the paper, and (2) lack of comparison with other defenses/attacks. In particular, Reviewer d8Mu mentioned critical comparisons that the paper should have reported initially: comparisons with YOPO, Free and GradAlign. Reviewer 5TSA was particularly concerned with the disproportionate importance given to theoretical speedups in contrast to empirical speedups. | train | [
"fW6kGDjWsR9",
"8DGech4crcq",
"jUzaLsTO23",
"iEpqDdTQfS",
"WQ2aGrpz3jx",
"KnWCVNtoupC",
"NzzVglL7-9I",
"1XwY2rYMrHH",
"wbQ3daNeQRC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a technique for reducing the computational cost of robust training by dynamically selecting which samples to assign the most compute power to during either adversary generation (for robustness against adversaries) or the forward pass of alternative samples (for robustness against common corrupti... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_eAPrmf2g8f2",
"WQ2aGrpz3jx",
"nips_2021_eAPrmf2g8f2",
"nips_2021_eAPrmf2g8f2",
"fW6kGDjWsR9",
"jUzaLsTO23",
"wbQ3daNeQRC",
"nips_2021_eAPrmf2g8f2",
"nips_2021_eAPrmf2g8f2"
] |
nips_2021_fClMl0pAIhd | Neural Distance Embeddings for Biological Sequences | The development of data-dependent heuristics and representations for biological sequences that reflect their evolutionary distance is critical for large-scale biological research. However, popular machine learning approaches, based on continuous Euclidean spaces, have struggled with the discrete combinatorial formulation of the edit distance that models evolution and the hierarchical relationship that characterises real-world datasets. We present Neural Distance Embeddings (NeuroSEED), a general framework to embed sequences in geometric vector spaces, and illustrate the effectiveness of the hyperbolic space that captures the hierarchical structure and provides an average 38% reduction in embedding RMSE against the best competing geometry. The capacity of the framework and the significance of these improvements are then demonstrated devising supervised and unsupervised NeuroSEED approaches to multiple core tasks in bioinformatics. Benchmarked with common baselines, the proposed approaches display significant accuracy and/or runtime improvements on real-world datasets. As an example for hierarchical clustering, the proposed pretrained and from-scratch methods match the quality of competing baselines with 30x and 15x runtime reduction, respectively.
| accept | The paper studies methods for embedding sequences into geometric spaces that approximately preserve the edit distance. The key finding is that embedding into hyperbolic spaces yields (significantly) lower distortion than embedding into other geometric spaces. The finding is supported by experiments on multiple data sets, in the context of several downstream applications: similarity search, clustering and multiple alignment.
Despite a few rounds of discussion, the reviewers did not reach a consensus. Most of the reviews are positive, with some reviewers being very positive. At the same time though, the negative review raises valid points, including (1) the basic edit distance (where all operations have unit cost) does not occur often in applications, so the practical impact of the proposed method is unclear (2) in the context of similarity search, there is no comparison to indexing methods that are faster than linear scan (e.g., LSH); given that fast algorithms for hyperbolic spaces are not well developed, it could be that the more accurate embeddings into hyperbolic spaces might actually yield less efficient algorithms than the less accurate embeddings into (say) the Euclidean space.
That said, the paper does introduce an interesting idea, and it is likely that its findings will stimulate further research on embeddings into (and algorithms for) hyperbolic spaces. So, I recommend accepting the paper, even though at present its practical impact is not guaranteed.
| train | [
"3S6eoKS3wyq",
"dXesiWOb2x",
"SQMxC3LQMdc",
"KqGreMjp9De",
"46ezNkLkVeh",
"nQXIiTrH9-u",
"FN_nu7btmmm",
"PKS391_x55E",
"RkR4M_jzGWk",
"HbIhp3Ghxg",
"eobsnB6SDeC",
"69fQjlixf8y"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response.\nGiven their response as well as other reviews, I choose to maintain my score. I think the paper provides a valuable contribution, and that there is amble experimental support provided in the paper to the proposed method.",
" Thank you for your comments and for addressing... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"FN_nu7btmmm",
"RkR4M_jzGWk",
"nips_2021_fClMl0pAIhd",
"PKS391_x55E",
"nips_2021_fClMl0pAIhd",
"HbIhp3Ghxg",
"eobsnB6SDeC",
"69fQjlixf8y",
"SQMxC3LQMdc",
"nips_2021_fClMl0pAIhd",
"nips_2021_fClMl0pAIhd",
"nips_2021_fClMl0pAIhd"
] |
nips_2021_9DEAT9pDiN | Fitting summary statistics of neural data with a differentiable spiking network simulator | Fitting network models to neural activity is an important tool in neuroscience. A popular approach is to model a brain area with a probabilistic recurrent spiking network whose parameters maximize the likelihood of the recorded activity. Although this is widely used, we show that the resulting model does not produce realistic neural activity. To correct for this, we suggest to augment the log-likelihood with terms that measure the dissimilarity between simulated and recorded activity. This dissimilarity is defined via summary statistics commonly used in neuroscience and the optimization is efficient because it relies on back-propagation through the stochastically simulated spike trains. We analyze this method theoretically and show empirically that it generates more realistic activity statistics. We find that it improves upon other fitting algorithms for spiking network models like GLMs (Generalized Linear Models) which do not usually rely on back-propagation. This new fitting algorithm also enables the consideration of hidden neurons which is otherwise notoriously hard, and we show that it can be crucial when trying to infer the network connectivity from spike recordings.
| accept | Dear authors,
Congratulations on your paper being accepted at Neurips. Your submission was discussed extensively, (and, at times, controversially) amongst the reviewers. We appreciated that your approach has the potentially to substantially increase the usability of a common analysis tool in neuroscience, by ensuring that GLMs fitted to multi-neuron spike recordings lead more faithful models of neural firing. The current ’state of the art’ is still to use MLE, and there is mounting empirical evidence that this often yields models that do not provide accurate generative samples (e.g. exploding firing rates, or inaccurate representations of PSTHs and correlations). The approach to extend the loss function and to use differential simulators for optimisation seems to provide a clear path for overcoming this general problem — therefore, the final decision was to accept this paper at Neurips.
At the same time, the discussion and review process revealed multiple points in which the paper could be strengthed— please see the reviews and comments by the reviewers for specific points. However, a general theme relates to the framing of the paper— it is clear that approach leads to better generative samples, and this is demonstrated empirically. It is not immediately clear why this approach would also lead to better reconstructions of 'functional connectivity’ (aside from the many issues plaguing this concept), and the empirical evidence is similarly not completely convincing.
We would ask the authors to take the feedback by the reviewers seriously, and adjust the paper accordingly.
Best,
Your AC | train | [
"6JAgR39Myc",
"XLYfjxmixFV",
"JV1KQwV7kv",
"ehPi0TjyV9e",
"_6Aoo5o5W2",
"w28372OYPPk",
"NsGsTOK5Bad",
"GYZdQWwDwwZ",
"MmNn4Vmgw6",
"lnoZMOSeMSO",
"7pOQKsZ8cw"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This study proposes an approach to fitting spiking networks to neural population data, by combining a typical maximum likelihood loss (in case of a fully observed network, but an ELBO loss for a network with unobserved neurons) with an additional loss term measuring the distance between summary statistics of recor... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_9DEAT9pDiN",
"nips_2021_9DEAT9pDiN",
"_6Aoo5o5W2",
"GYZdQWwDwwZ",
"MmNn4Vmgw6",
"6JAgR39Myc",
"7pOQKsZ8cw",
"lnoZMOSeMSO",
"XLYfjxmixFV",
"nips_2021_9DEAT9pDiN",
"nips_2021_9DEAT9pDiN"
] |
nips_2021_v2w7CVZGHeA | PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Personalized Simulators | We consider offline reinforcement learning (RL) with heterogeneous agents under severe data scarcity, i.e., we only observe a single historical trajectory for every agent under an unknown, potentially sub-optimal policy. We find that the performance of state-of-the-art offline and model-based RL methods degrade significantly given such limited data availability, even for commonly perceived "solved" benchmark settings such as "MountainCar" and "CartPole". To address this challenge, we propose PerSim, a model-based offline RL approach which first learns a personalized simulator for each agent by collectively using the historical trajectories across all agents, prior to learning a policy. We do so by positing that the transition dynamics across agents can be represented as a latent function of latent factors associated with agents, states, and actions; subsequently, we theoretically establish that this function is well-approximated by a "low-rank" decomposition of separable agent, state, and action latent functions. This representation suggests a simple, regularized neural network architecture to effectively learn the transition dynamics per agent, even with scarce, offline data. We perform extensive experiments across several benchmark environments and RL methods. The consistent improvement of our approach, measured in terms of both state dynamics prediction and eventual reward, confirms the efficacy of our framework in leveraging limited historical data to simultaneously learn personalized policies across agents.
| accept | This paper studies an interesting and challenging offline RL problem, where the agents are heterogeneous and there is only a single trajectory for every agent under a potentially non-optimal policy. The authors propose PerSim, where a personalized simulator is learned for each agent based on the trajectories from all agents, and then a policy is made based on MPC over simulators ensemble.
The technical novelty of the paper lies in the finding that the transition dynamics across agents can be represented as a latent function and thus can be nicely approximated by a low-rank decomposition of the latent functions of agent, state, actions. In general, the contributions of this paper are clear and sufficient.
The majority of reviewers provide acceptance recommendations for this paper. The only reviewer of score 5 raised some questions about the generalization ability and sample complexity of the method, problem setting, and the evaluation, which I think have been well answered with detailed revision plans. Thus I think this paper can be accepted. | train | [
"K6foJjgtWow",
"ZFB5gX4oVyU",
"m55wO73acsr",
"mJCI9cBQFp2",
"Y7h4PQacW3B",
"kwiFyyXCWMW",
"O_-jvgmAuAb",
"jTTESr1SULd",
"nSquwmreCkE",
"0R1nxh_CR0",
"DvcGuwraT4j",
"opbSSyFN34",
"LjPRDCIxZEY",
"DGYulSDCQeP",
"RazBkYH07S"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their feedback. \n\nWe will add a remark stating that PerSim requires access to a unique ID for each *trajectory*, and reference important real-world scenarios where access to such an ID is readily available. For example, (i) medicine where health trajectories are collected on a patient-... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"ZFB5gX4oVyU",
"nSquwmreCkE",
"DvcGuwraT4j",
"Y7h4PQacW3B",
"kwiFyyXCWMW",
"0R1nxh_CR0",
"nips_2021_v2w7CVZGHeA",
"LjPRDCIxZEY",
"DGYulSDCQeP",
"RazBkYH07S",
"opbSSyFN34",
"nips_2021_v2w7CVZGHeA",
"nips_2021_v2w7CVZGHeA",
"nips_2021_v2w7CVZGHeA",
"nips_2021_v2w7CVZGHeA"
] |
nips_2021_5fPBtLSGk21 | Online Sign Identification: Minimization of the Number of Errors in Thresholding Bandits | In the fixed budget thresholding bandit problem, an algorithm sequentially allocates a budgeted number of samples to different distributions. It then predicts whether the mean of each distribution is larger or lower than a given threshold. We introduce a large family of algorithms (containing most existing relevant ones), inspired by the Frank-Wolfe algorithm, and provide a thorough yet generic analysis of their performance. This allowed us to construct new explicit algorithms, for a broad class of problems, whose losses are within a small constant factor of the non-adaptive oracle ones. Quite interestingly, we observed that adaptive methodsempirically greatly out-perform non-adaptive oracles, an uncommon behavior in standard online learning settings, such as regret minimization. We explain this surprising phenomenon on an insightful toy problem.
| accept | The reviewers came to consensus that this paper makes a good progress on the thresholding bandit problem from the perspectives of upper and lower bounds as well as an interesting discussion on the benefit of adaptivity. I agree with these opinions and please polish the manuscript so that the minor concerns raised by the reviewers become clear in the final version. | train | [
"2sl7auiIGwO",
"myHORMafsWf",
"pUU6KKBJxr",
"7pDJsdD9Ld6",
"HRGEeKqEhmp",
"e2JyjWoW6A0",
"yX1w0TyJrOC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper works on the problem of fixed-budget sign identification problem in stochastic bandits. Given a set of arms where each sample on an arm returns a noisy result according to this arm's unknown latent distribution, a threshold, and the limited time horizon $T$, the learner needs to adaptively sample the ar... | [
7,
-1,
7,
-1,
-1,
-1,
8
] | [
4,
-1,
4,
-1,
-1,
-1,
4
] | [
"nips_2021_5fPBtLSGk21",
"HRGEeKqEhmp",
"nips_2021_5fPBtLSGk21",
"yX1w0TyJrOC",
"2sl7auiIGwO",
"pUU6KKBJxr",
"nips_2021_5fPBtLSGk21"
] |
nips_2021_2vubO341F_E | All Tokens Matter: Token Labeling for Training Better Vision Transformers | In this paper, we present token labeling---a new training objective for training high-performance vision transformers (ViTs). Different from the standard training objective of ViTs that computes the classification loss on an additional trainable class token, our proposed one takes advantage of all the image patch tokens to compute the training loss in a dense manner. Specifically, token labeling reformulates the image classification problem into multiple token-level recognition problems and assigns each patch token with an individual location-specific supervision generated by a machine annotator. Experiments show that token labeling can clearly and consistently improve the performance of various ViT models across a wide spectrum. For a vision transformer with 26M learnable parameters serving as an example, with token labeling, the model can achieve 84.4% Top-1 accuracy on ImageNet. The result can be further increased to 86.4% by slightly scaling the model size up to 150M, delivering the minimal-sized model among previous models (250M+) reaching 86%. We also show that token labeling can clearly improve the generalization capability of the pretrained models on downstream tasks with dense prediction, such as semantic segmentation. Code will be made publicly available.
| accept | This paper proposed a new framework to train vision transformers with token level supervision. While there were some debate over the novelty of the proposed technique in the context of previous methods (MixToken, CutMix, relabel), it was agreed that the proposed method is efficient and performant, and the extensive empirical studies in this paper could benefit the community. The authors also did well by providing convincing results on extra baselines requested by the reviewers in the rebuttal. | test | [
"LN8hE3VYdEt",
"eWXLrobBi1z",
"6cUmBZYskp7",
"Le7WQY98sfL",
"3pj5IsYJy-w",
"c33UtPcNuto",
"UQH5HNItls",
"uQWvosTkLB",
"NCHMyFdjrnP"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks a lot for providing feedbacks to me. \n\nActually, the two feedbacks still did not convince me. MixToken is a slight modification of CutMix: only patch only comes from a single image other than possibly two images. Token Labeling is a dense prediction manner, and might be not done for supervised Pretrainin... | [
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
5,
5,
-1,
-1,
-1,
-1,
-1,
4
] | [
"UQH5HNItls",
"nips_2021_2vubO341F_E",
"nips_2021_2vubO341F_E",
"uQWvosTkLB",
"c33UtPcNuto",
"NCHMyFdjrnP",
"eWXLrobBi1z",
"6cUmBZYskp7",
"nips_2021_2vubO341F_E"
] |
nips_2021_qL_juuU4P3Y | Partition and Code: learning how to compress graphs | Can we use machine learning to compress graph data? The absence of ordering in graphs poses a significant challenge to conventional compression algorithms, limiting their attainable gains as well as their ability to discover relevant patterns. On the other hand, most graph compression approaches rely on domain-dependent handcrafted representations and cannot adapt to different underlying graph distributions. This work aims to establish the necessary principles a lossless graph compression method should follow to approach the entropy storage lower bound. Instead of making rigid assumptions about the graph distribution, we formulate the compressor as a probabilistic model that can be learned from data and generalise to unseen instances. Our “Partition and Code” framework entails three steps: first, a partitioning algorithm decomposes the graph into subgraphs, then these are mapped to the elements of a small dictionary on which we learn a probability distribution, and finally, an entropy encoder translates the representation into bits. All the components (partitioning, dictionary and distribution) are parametric and can be trained with gradient descent. We theoretically compare the compression quality of several graph encodings and prove, under mild conditions, that PnC achieves compression gains that grow either linearly or quadratically with the number of vertices. Empirically, PnC yields significant compression improvements on diverse real-world networks.
| accept | The paper presents learning-based method for compressing graphs. The method proceeds by partitioning the input graph into simpler parts, and using a learned dictionary to encode frequently appearing subgraphs. Empirical evaluation demonstrates improvements over multiple baselines.
The main concern was that the sizes of graphs used in evaluation were relatively small. However, given that the technique was shown to generalize to several diverse data sets, most reviewers were comfortable that the results should generalize to larger graphs as well. | train | [
"7LiC-vm0c8l",
"UAevhHU30Om",
"IOzvCBpT4M",
"2dT54mJ28eC",
"ZKEWMgpyWn",
"qp9T5e7NT5e",
"m4w4AbHNV-",
"JoHD5gSQo7E",
"p4Ok5NISaB8",
"plFmTd25X8z",
"z4_2Fqkxxd1",
"T8StfZer-4",
"qythACMpLhc",
"XC89GeGcDHX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors establish a theoretically well-motivated, general-purpose graph compression framework they call Partition-and-Code (PnC).\nThe framework consists of 3 general parts: partitioning, creating a dictionary of common subgraphs and entropy coding. Crucially, the first two steps are used to map a graph to its... | [
8,
8,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
3,
3,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_qL_juuU4P3Y",
"nips_2021_qL_juuU4P3Y",
"qythACMpLhc",
"nips_2021_qL_juuU4P3Y",
"7LiC-vm0c8l",
"m4w4AbHNV-",
"p4Ok5NISaB8",
"XC89GeGcDHX",
"JoHD5gSQo7E",
"nips_2021_qL_juuU4P3Y",
"UAevhHU30Om",
"2dT54mJ28eC",
"z4_2Fqkxxd1",
"nips_2021_qL_juuU4P3Y"
] |
nips_2021_OLyhLK2eQP | Knowledge-inspired 3D Scene Graph Prediction in Point Cloud | Prior knowledge integration helps identify semantic entities and their relationships in a graphical representation, however, its meaningful abstraction and intervention remain elusive. This paper advocates a knowledge-inspired 3D scene graph prediction method solely based on point clouds. At the mathematical modeling level, we formulate the task as two sub-problems: knowledge learning and scene graph prediction with learned prior knowledge. Unlike conventional methods that learn knowledge embedding and regular patterns from encoded visual information, we propose to suppress the misunderstandings caused by appearance similarities and other perceptual confusion. At the network design level, we devise a graph auto-encoder to automatically extract class-dependent representations and topological patterns from the one-hot class labels and their intrinsic graphical structures, so that the prior knowledge can avoid perceptual errors and noises. We further devise a scene graph prediction model to predict credible relationship triplets by incorporating the related prototype knowledge with perceptual information. Comprehensive experiments confirm that, our method can successfully learn representative knowledge embedding, and the obtained prior knowledge can effectively enhance the accuracy of relationship predictions. Our thorough evaluations indicate the new method can achieve the state-of-the-art performance compared with other scene graph prediction methods.
| accept | This paper proposes a method for 3D scene graph prediction from point clouds in which a graph auto-encoder model learns prototypical representations for object categories using scene graph annotations, that are used as prior knowledge during scene graph inference from a point cloud input. Reviewers acknowledge the novelty of the autoencoding model proposed for learning categorical priors.
Reviewers point out that the paper is not clear regarding the input of the autoencoding graph model. They further point out that vague words like ``common sense” are not well defined and are used arbitrarily in the paper. Authors are strongly encouraged to clarify the paper writing (and corresponding figures), following reviewers’ comments.
| val | [
"e4E-qn6sbRU",
"iF9YO9Fps8h",
"R5Rkh6D35HC",
"f87VWFIuuv1",
"5dmM3I9Tah-",
"gwn-Hx2GacJ",
"HRl0BYogoVs",
"yemfDQ4h-eX",
"mC4SC2n1ybF",
"W6RdI5iH6ws",
"sZgfPB7qo4Y",
"RbSJwEdOwm7",
"Sii0wMZ-mTC",
"w5Bnv3d4Gn",
"t8TevxATiIe"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for addressing my questions. Their response answer my queries, and they have also provided insights from preliminary experimental run of a suggested setting in the review. I would like to keep my rating as is. ",
" Thank you for updating the rating point! We appreciate your kindness, and we ... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"W6RdI5iH6ws",
"f87VWFIuuv1",
"nips_2021_OLyhLK2eQP",
"HRl0BYogoVs",
"RbSJwEdOwm7",
"sZgfPB7qo4Y",
"yemfDQ4h-eX",
"mC4SC2n1ybF",
"R5Rkh6D35HC",
"Sii0wMZ-mTC",
"w5Bnv3d4Gn",
"t8TevxATiIe",
"nips_2021_OLyhLK2eQP",
"nips_2021_OLyhLK2eQP",
"nips_2021_OLyhLK2eQP"
] |
nips_2021_et2st4Jqhc | Online Variational Filtering and Parameter Learning | We present a variational method for online state estimation and parameter learning in state-space models (SSMs), a ubiquitous class of latent variable models for sequential data. As per standard batch variational techniques, we use stochastic gradients to simultaneously optimize a lower bound on the log evidence with respect to both model parameters and a variational approximation of the states' posterior distribution. However, unlike existing approaches, our method is able to operate in an entirely online manner, such that historic observations do not require revisitation after being incorporated and the cost of updates at each time step remains constant, despite the growing dimensionality of the joint posterior distribution of the states. This is achieved by utilizing backward decompositions of this joint posterior distribution and of its variational approximation, combined with Bellman-type recursions for the evidence lower bound and its gradients. We demonstrate the performance of this methodology across several examples, including high-dimensional SSMs and sequential Variational Auto-Encoders.
| accept | The authors propose a general scheme for approximation in state space systems where the variation distribution is conditioned _backwards_ in time. This leads to an ELBO (and gradients) that can be computed online, without having to adjust previous terms.
The reviewers unanimously praised the clarity of the writing, and I was impressed that each reviewer was left with a very clear picture of the paper's contribution and relevance - a sign that the idea has been presented with great clarity.
One clarification that arouse during the discussion was around a regression that occurs at time step $t+1$, and whether the input/output pairs here are drawn in such a way that could introduce error. I'm happy with the authors' clarification, which I anticipate in the camera ready manuscript.
One in-depth discussion between a reviewer and the authors covered connections to reinforcement learning: the authors connect Bellman recursion to the ELBO updates. This discussion could make for a neat appendix to the paper.
| train | [
"v4Z8WV2PGNL",
"Lja5ZtpoBg",
"w6pSiDLoYnc",
"GY6Dz0aBBfm",
"kb7M1W18qXH",
"10a05pYBbFH",
"xf-bHY0JNJB",
"rsTtwhSckwT",
"IjY6YqouffB",
"zMunKDy-lWs",
"zG7UzECKhS",
"1hkW7oFZ0-G",
"AMaNh9wLIp",
"WhusxFAJ7v",
"cdnYu5jfXJ0"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper develops an algorithm for online variational inference and parameter learning in state-space models, where both model parameters and variational parameters are learned online with each new data point in the time series. The authors accomplish by approximating a backwards decomposition of the full poster... | [
8,
-1,
-1,
-1,
7,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
-1,
-1,
3,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_et2st4Jqhc",
"10a05pYBbFH",
"IjY6YqouffB",
"zG7UzECKhS",
"nips_2021_et2st4Jqhc",
"WhusxFAJ7v",
"AMaNh9wLIp",
"nips_2021_et2st4Jqhc",
"v4Z8WV2PGNL",
"nips_2021_et2st4Jqhc",
"kb7M1W18qXH",
"cdnYu5jfXJ0",
"WhusxFAJ7v",
"rsTtwhSckwT",
"nips_2021_et2st4Jqhc"
] |
nips_2021_fYLfs9yrtMQ | Heavy Ball Neural Ordinary Differential Equations | We propose heavy ball neural ordinary differential equations (HBNODEs), leveraging the continuous limit of the classical momentum accelerated gradient descent, to improve neural ODEs (NODEs) training and inference. HBNODEs have two properties that imply practical advantages over NODEs: (i) The adjoint state of an HBNODE also satisfies an HBNODE, accelerating both forward and backward ODE solvers, thus significantly reducing the number of function evaluations (NFEs) and improving the utility of the trained models. (ii) The spectrum of HBNODEs is well structured, enabling effective learning of long-term dependencies from complex sequential data. We verify the advantages of HBNODEs over NODEs on benchmark tasks, including image classification, learning complex dynamics, and sequential modeling. Our method requires remarkably fewer forward and backward NFEs, is more accurate, and learns long-term dependencies more effectively than the other ODE-based neural network models. Code is available at \url{https://github.com/hedixia/HeavyBallNODE}.
| accept | The paper proposes a reparameterization of a Neural ODE (NODE) as a Heavy-Ball Neural ODE (HBNODE) in order to handle the notoriously difficult problem of long-range dependency modeling. Proposed approach is elegant and novel. The authors provided also very comprehensive empirical evaluation showcasing the effectiveness of their method and addressed *in detail* all reviewers' comments. | train | [
"aTaXuxa-DWx",
"XPcSHfJmt7p",
"5P3lxA0lsJ8",
"-REshMD1zH",
"K32S7-b5VRz",
"ucGqCALiIJE",
"XYHRn-SYrcE",
"STS03x2eqI4",
"CKd3Stz0bIC",
"7fjdMyJdRHf",
"ASUpjljpC_0",
"vUovIGzg8-a"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your further feedback, encouragement, and endorsement. We have revised our paper according to your suggestions.",
"This work proposes a reparameterization of a Neural ODE (NODE) as a Heavy-Ball Neural ODE (HBNODE). This is inspired by Heavy-Ball Momentum, which is the continuous limit of classical mo... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"5P3lxA0lsJ8",
"nips_2021_fYLfs9yrtMQ",
"7fjdMyJdRHf",
"K32S7-b5VRz",
"CKd3Stz0bIC",
"XYHRn-SYrcE",
"STS03x2eqI4",
"ASUpjljpC_0",
"vUovIGzg8-a",
"XPcSHfJmt7p",
"nips_2021_fYLfs9yrtMQ",
"nips_2021_fYLfs9yrtMQ"
] |
nips_2021_Fv0DPhwB6o9 | Structure learning in polynomial time: Greedy algorithms, Bregman information, and exponential families | Greedy algorithms have long been a workhorse for learning graphical models, and more broadly for learning statistical models with sparse structure. In the context of learning directed acyclic graphs, greedy algorithms are popular despite their worst-case exponential runtime. In practice, however, they are very efficient. We provide new insight into this phenomenon by studying a general greedy score-based algorithm for learning DAGs. Unlike edge-greedy algorithms such as the popular GES and hill-climbing algorithms, our approach is vertex-greedy and requires at most a polynomial number of score evaluations. We then show how recent polynomial-time algorithms for learning DAG models are a special case of this algorithm, thereby illustrating how these order-based algorithms can be rigourously interpreted as score-based algorithms. This observation suggests new score functions and optimality conditions based on the duality between Bregman divergences and exponential families, which we explore in detail. Explicit sample and computational complexity bounds are derived. Finally, we provide extensive experiments suggesting that this algorithm indeed optimizes the score in a variety of settings.
| accept | This paper studies the problem of learning directed acyclic graphs via a greedy score-based algorithm. Specifically their algorithm works in two stages, first by estimating the topological ordering of the nodes, and then by pruning edges that do not influence the score. Compared to earlier works, the main innovation is in using the Bregman information as the score function. This allows them to show their algorithm recovers the ground truth network under various assumptions. This is a nice contribution, but the main point of contention among the reviewers was whether these assumptions were natural and/or properly justified. In their response, the authors related it to causal minimality, however the reviewers felt that it seemed to be stronger. As a suggestion for the authors, it would be better if Assumptions 4.3 and 4.4 were discussed in more depth and made more of a focal point of the paper. | test | [
"741apg6q40",
"jL1ziLAVJ4g",
"ZPalKvVyAp",
"PsruzJx2KcL",
"hxrY7B-Wp0A",
"b9v5zflKqZU",
"I0596O6yUrX",
"mVrza7b4INS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the review and comments. We address the queries below, and apologize in advance for the length of our response, as the questions asked are important and we feel deserving of some careful thought.\n\n**Assumption 4.3:** We are happy to add some intuition behind Assumption 4.3, as follows: Let us firs... | [
-1,
-1,
-1,
-1,
4,
8,
6,
5
] | [
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"mVrza7b4INS",
"I0596O6yUrX",
"b9v5zflKqZU",
"hxrY7B-Wp0A",
"nips_2021_Fv0DPhwB6o9",
"nips_2021_Fv0DPhwB6o9",
"nips_2021_Fv0DPhwB6o9",
"nips_2021_Fv0DPhwB6o9"
] |
nips_2021_vlf0zTKa5Lh | On the Sample Complexity of Learning under Geometric Stability | Many supervised learning problems involve high-dimensional data such as images, text, or graphs. In order to make efficient use of data, it is often useful to leverage certain geometric priors in the problem at hand, such as invariance to translations, permutation subgroups, or stability to small deformations. We study the sample complexity of learning problems where the target function presents such invariance and stability properties, by considering spherical harmonic decompositions of such functions on the sphere. We provide non-parametric rates of convergence for kernel methods, and show improvements in sample complexity by a factor equal to the size of the group when using an invariant kernel over the group, compared to the corresponding non-invariant kernel. These improvements are valid when the sample size is large enough, with an asymptotic behavior that depends on spectral properties of the group. Finally, these gains are extended beyond invariance groups to also cover geometric stability to small deformations, modeled here as subsets (not necessarily subgroups) of permutations.
| accept | This paper studies kernel ridge regression in the case where the target function is invariant through the group action of $G$, a subgroup of the symmetric group. They explore how making the inner-product kernel invariant too can improve the sample complexity bounds. Furthermore they consider extensions to approximate invariance. One of the reviewers brought up the paper “Learning with invariances in random features and kernel models” by Mei et al. which has a number of overlapping messages. In their response, the authors clarified the technical relationship to this paper and the main differences are in the scaling regime and the types of permutation groups they can handle. Here they consider fixed $d$ and $n$ going to infinity, but can also work with large permutation groups whose size is exponential in $d$. This discussion should be incorporated into the next version of the paper. Overall the reviewers felt that this was a solid set of contributions on an important topic. | train | [
"ObVch5n21p8",
"8eldABlEj0M",
"bnvXol4c1Lw",
"mGL0gGL6Qr7",
"Y9Ec7PSv8aF",
"pOrG1P5A-ND",
"_7T9vc2cnTp",
"z_A-gfoim12",
"JTa4m9KV5GV",
"QOAnek6JHy"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors study the gain in sample complexity of using kernel ridge regression with invariant versus non-invariant kernels. The authors consider a target function that is invariant through the action of a subgroup $G$ of the symmetric group in $d$ dimensions, and compare learning it with (1) an in... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"nips_2021_vlf0zTKa5Lh",
"nips_2021_vlf0zTKa5Lh",
"ObVch5n21p8",
"QOAnek6JHy",
"ObVch5n21p8",
"JTa4m9KV5GV",
"z_A-gfoim12",
"nips_2021_vlf0zTKa5Lh",
"nips_2021_vlf0zTKa5Lh",
"nips_2021_vlf0zTKa5Lh"
] |
nips_2021_VGDFaLNFFk | SIMILAR: Submodular Information Measures Based Active Learning In Realistic Scenarios | Active learning has proven to be useful for minimizing labeling costs by selecting the most informative samples. However, existing active learning methods do not work well in realistic scenarios such as imbalance or rare classes,out-of-distribution data in the unlabeled set, and redundancy. In this work, we propose SIMILAR (Submodular Information Measures based actIve LeARning), a unified active learning framework using recently proposed submodular information measures (SIM) as acquisition functions. We argue that SIMILAR not only works in standard active learning but also easily extends to the realistic settings considered above and acts as a one-stop solution for active learning that is scalable to large real-world datasets. Empirically, we show that SIMILAR significantly outperforms existing active learning algorithms by as much as ~5%−18%in the case of rare classes and ~5%−10%in the case of out-of-distribution data on several image classification tasks like CIFAR-10, MNIST, and ImageNet.
| accept | The reviewers appreciate the paper’s general idea in a unified AL framework that addresses different AL tasks (rare class, OOD data, and redundancy), and acknowledge the technical novelty in the acquisition function which (non-trivially) generalizes existing heuristics. However, there remains concern in the presentation clarity (e.g. justification of important assumptions such as accessibility to OoD data for certain applications, the significance of contributions w.r.t. existing work) and the experimental setup. Thus the reviewers were not convinced that the method is well justified in its current state to merit acceptance for publication. | train | [
"wdhBgXsNqtq",
"YX8lBUXzTDy",
"LgPw_2UlbQX",
"EXmMptFvyT1",
"os842N4WWJR",
"iLV_YIDr21M",
"Mp-TxtEl7OA",
"Zj-ofYuWUX",
"bfVjRmL2nEW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for your response and additional comments. Please find our response to your comments:\n\n> Additionally, the author mentioned that \"for instance, in medical imaging domains, images of cancer cells are often rare and are critical to classify correctly\", but they didn't have the related experiments/case... | [
-1,
5,
5,
5,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
3,
3,
-1,
-1,
-1,
-1,
4
] | [
"Mp-TxtEl7OA",
"nips_2021_VGDFaLNFFk",
"nips_2021_VGDFaLNFFk",
"nips_2021_VGDFaLNFFk",
"LgPw_2UlbQX",
"EXmMptFvyT1",
"YX8lBUXzTDy",
"bfVjRmL2nEW",
"nips_2021_VGDFaLNFFk"
] |
nips_2021_0qnPBmvJSaf | Monte Carlo Tree Search With Iteratively Refining State Abstractions | Decision-time planning is the process of constructing a transient, local policy with the intent of using it to make the immediate decision. Monte Carlo tree search (MCTS), which has been leveraged to great success in Go, chess, shogi, Hex, Atari, and other settings, is perhaps the most celebrated decision-time planning algorithm. Unfortunately, in its original form, MCTS can degenerate to one-step search in domains with stochasticity. Progressive widening is one way to ameliorate this issue, but we argue that it possesses undesirable properties for some settings. In this work, we present a method, called abstraction refining, for extending MCTS to stochastic environments which, unlike progressive widening, leverages the geometry of the state space. We argue that leveraging the geometry of the space can offer advantages. To support this claim, we present a series of experimental examples in which abstraction refining outperforms progressive widening, given equal simulation budgets.
| accept | This paper generated a good amount of discussion between the authors and reviewers, which helped resolve some issues in the original reviews. Certain issues remain, but the paper does seem to be making a reasonable and novel contribution in light of those issues.
The reviewers have worked hard to suggest improvements and the authors have already indicated the inclusion of new experiments and adjustments to the text. It is expected that the authors will follow through on these promises.
Finally, I would like to point out that the authors have not properly characterized the related paper pointed out by one of the reviewers.
Jesse Hostetler, Alan Fern, Thomas Dietterich. "Sample-Based Tree Search with Fixed and Adaptive State Abstractions", JAIR 2017
The authors indicate in their response that the paper assumes more than a sample-based model, but that is not accurate. It does not assume access to the entire set of next states. Also note that the algorithm in the experiments of that paper is based on forward-search sparse sampling, which is a trajectory-sampling algorithm in the spirit of MCTS/UCT. Thus a modification to MCTS is not as distance as the authors may think. | val | [
"YHAOlbp5991",
"RRtVzQBSJtk",
"0WCDlY9iFss",
"05haNYZryB",
"nIpBI0HeYI",
"EyLIMXHyMfr",
"qNg6MxYrt50",
"LAIJs5DS8T4",
"Af0bOsJzv8e",
"UE06m02urWl",
"BYeZaLuyeWx",
"SKaky0rbyoz",
"zNPkRKgYClK",
"o7ykPmfMFQx",
"6jONWHsrNSe",
"QKfbMMReMq6",
"UDXr8ZbFwj",
"3edRJCPTZTz",
"NjweyIu6tXf"... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author"
] | [
" Thanks for your thorough and insightful comments! We respond to your points below.\n\n1. That is exactly correct. We agree that this would be interesting to see. Though ultimately, what we care about is whether abstraction refining performs well in expectation over orderings, which is what the results in the pape... | [
-1,
-1,
-1,
6,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nIpBI0HeYI",
"05haNYZryB",
"6jONWHsrNSe",
"nips_2021_0qnPBmvJSaf",
"nips_2021_0qnPBmvJSaf",
"LAIJs5DS8T4",
"RRtVzQBSJtk",
"nips_2021_0qnPBmvJSaf",
"BYeZaLuyeWx",
"0WCDlY9iFss",
"3edRJCPTZTz",
"zNPkRKgYClK",
"o7ykPmfMFQx",
"QKfbMMReMq6",
"qNg6MxYrt50",
"UDXr8ZbFwj",
"NjweyIu6tXf",
... |
nips_2021_q1eCa1kMfDd | Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning | The backpropagation networks are notably susceptible to catastrophic forgetting, where networks tend to forget previously learned skills upon learning new ones. To address such the 'sensitivity-stability' dilemma, most previous efforts have been contributed to minimizing the empirical risk with different parameter regularization terms and episodic memory, but rarely exploring the usages of the weight loss landscape. In this paper, we investigate the relationship between the weight loss landscape and sensitivity-stability in the continual learning scenario, based on which, we propose a novel method, Flattening Sharpness for Dynamic Gradient Projection Memory (FS-DGPM). In particular, we introduce a soft weight to represent the importance of each basis representing past tasks in GPM, which can be adaptively learned during the learning process, so that less important bases can be dynamically released to improve the sensitivity of new skill learning. We further introduce Flattening Sharpness (FS) to reduce the generalization gap by explicitly regulating the flatness of the weight loss landscape of all seen tasks. As demonstrated empirically, our proposed method consistently outperforms baselines with the superior ability to learn new skills while alleviating forgetting effectively.
| accept | Echoing what the reviewers highlighted as well. I think the idea of this work is quite novel, and the paper is well executed, where there was due diligence in terms of explaining the hypothesis, and doing ablation studies. As it stands, I think the work is definitely interesting for the community and should be accepted. | train | [
"C2xPNA89Pc7",
"W1Ulq9BxyPB",
"Myps7aAm6gL",
"8OjRBWXuAs4",
"SbjuYryU5lV",
"KvSkZgENxhi",
"BGZZ09JQ93O",
"KnCUZ5qDp1y",
"wbXbCOEfoZ7",
"66BCTaizXc",
"8zyWtmwP6sG",
"IXfTv4gs69",
"xWnDLdihE6v",
"ekh1Pa9V_ws",
"YoI-x6-Q2nm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new continual learning method that promotes the loss shapes to be flattened. The main idea is to optimize the worst case perturbed loss function at every gradient step, and the empirical investigation of the loss values around the learned parameter corroborates their assertion. Also, experimen... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
5,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_q1eCa1kMfDd",
"BGZZ09JQ93O",
"nips_2021_q1eCa1kMfDd",
"IXfTv4gs69",
"BGZZ09JQ93O",
"BGZZ09JQ93O",
"8zyWtmwP6sG",
"C2xPNA89Pc7",
"YoI-x6-Q2nm",
"Myps7aAm6gL",
"nips_2021_q1eCa1kMfDd",
"Myps7aAm6gL",
"ekh1Pa9V_ws",
"nips_2021_q1eCa1kMfDd",
"nips_2021_q1eCa1kMfDd"
] |
nips_2021_P6bUrLREcne | Taxonomizing local versus global structure in neural network loss landscapes | Viewing neural network models in terms of their loss landscapes has a long history in the statistical mechanics approach to learning, and in recent years it has received attention within machine learning proper. Among other things, local metrics (such as the smoothness of the loss landscape) have been shown to correlate with global properties of the model (such as good generalization performance). Here, we perform a detailed empirical analysis of the loss landscape structure of thousands of neural network models, systematically varying learning tasks, model architectures, and/or quantity/quality of data. By considering a range of metrics that attempt to capture different aspects of the loss landscape, we demonstrate that the best test accuracy is obtained when: the loss landscape is globally well-connected; ensembles of trained models are more similar to each other; and models converge to locally smooth regions. We also show that globally poorly-connected landscapes can arise when models are small or when they are trained to lower quality data; and that, if the loss landscape is globally poorly-connected, then training to zero loss can actually lead to worse test accuracy. Our detailed empirical results shed light on phases of learning (and consequent double descent behavior), fundamental versus incidental determinants of good generalization, the role of load-like andtemperature-like parameters in the learning process, different influences on the loss landscape from model and data, and the relationships between local and global metrics, all topics of recent interest.
| accept | This paper investigates how the structure of the loss landscape of neural networks affects their generalization performance. The authors categorize loss landscapes into four different phases, globally {well,poorly}-connected) x (locally {flat, sharp}), where global connectivity is measured by mode connectivity and local flatness is measured using Hessian. Through a large-scale empirical investigation of neural network loss landscapes, the authors present an analysis of how these categories affect generalization.
Overall, I think this is a solid paper. The reviewers provided a lot of useful detailed feedback and the authors also did a great job of addressing the reviewer concerns. During the discussion phase, the consensus decision leaned towards acceptance.
I recommend acceptance and encourage the authors to address the reviewer comments in the final revision.
| train | [
"_Y87L5zfbKq",
"riF5HS1sJdJ",
"j_8LVDT2Vp9",
"TjtNRjYAbH",
"zmGUnRqEJTu",
"pAb_gywG-YI",
"WSPe636QvDX",
"YBt_kciPGN",
"0kDLFbw3q7r",
"E67f8-ZbAr",
"dHXTZxp_btx",
"HeTofdZNmT",
"QavPQF_N9mm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper takes a large number of neural networks that span range of different parameters (e.g. width), train them in a large number of ways (e.g. dataset, batch size), and studies several metrics that characterize the local (e.g. Hessian spectral norm) and global (weight vector pair-wise connectivity on potentia... | [
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_P6bUrLREcne",
"E67f8-ZbAr",
"WSPe636QvDX",
"pAb_gywG-YI",
"nips_2021_P6bUrLREcne",
"0kDLFbw3q7r",
"QavPQF_N9mm",
"HeTofdZNmT",
"zmGUnRqEJTu",
"_Y87L5zfbKq",
"nips_2021_P6bUrLREcne",
"nips_2021_P6bUrLREcne",
"nips_2021_P6bUrLREcne"
] |
nips_2021_JZK9uP4Fev | Learning Models for Actionable Recourse | As machine learning models are increasingly deployed in high-stakes domains such as legal and financial decision-making, there has been growing interest in post-hoc methods for generating counterfactual explanations. Such explanations provide individuals adversely impacted by predicted outcomes (e.g., an applicant denied a loan) with recourse---i.e., a description of how they can change their features to obtain a positive outcome. We propose a novel algorithm that leverages adversarial training and PAC confidence sets to learn models that theoretically guarantee recourse to affected individuals with high probability without sacrificing accuracy. We demonstrate the efficacy of our approach via extensive experiments on real data.
| accept | The paper studies how to train models that offer good opportunity for recourse, quantifying what that might mean in terms of probabilities and providing algorithms with guarantees on these. The paper is an interesting and valuable contribution to the area of algorithmic recourse and was well reviewed by the referees. Many useful and detailed suggestions were made and the authors should incorporate these into a camera ready version. | train | [
"U0FPXCiaG9P",
"VWompfC4w5v",
"Wzad2-3mynK",
"UKoT3Wk6Pq",
"n_p_l7vytD",
"SBcy_bLJkD",
"zgpQhkJPSL",
"G80WbKQrDC-",
"CsGwtlq_qZ",
"eyZpSw7D0G7",
"kPZgjkJd17H",
"92LqZlaveZv"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We are very glad to hear that the reviewer found our rebuttal helpful. Thank you so much for increasing your score. \n\nWe will definitely include the new results with the OOD tests as well as a discussion on the \"Importance of ensuring that recourses exist\" in the final version. We will also discuss all the re... | [
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
4,
-1,
2,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
"SBcy_bLJkD",
"UKoT3Wk6Pq",
"nips_2021_JZK9uP4Fev",
"eyZpSw7D0G7",
"nips_2021_JZK9uP4Fev",
"zgpQhkJPSL",
"n_p_l7vytD",
"92LqZlaveZv",
"kPZgjkJd17H",
"Wzad2-3mynK",
"nips_2021_JZK9uP4Fev",
"nips_2021_JZK9uP4Fev"
] |
nips_2021_b2bkE0Qq8Ya | Efficient and Accurate Gradients for Neural SDEs | Patrick Kidger, James Foster, Xuechen (Chen) Li, Terry Lyons | accept | After reading the author responses and discussing with the authors, the reviewers have reached a consensus that this paper should be accepted. I concur with that judgement. The accuracy improvements proposed by this method seem significant and useful across a range of tasks. And the mathematical insights on which the result are based will be interesting to even those in the community who aren't using neural SDEs. | train | [
"Wybtfk1ZUHv",
"oJmXTfXrhy",
"McrEQlHi3wC",
"QwlRXEl61gu",
"AP0w-um4Kpi",
"Aqq4gL5tEA4",
"XfnuMcB-_yJ",
"NKk4zMzz99Z",
"2--W_LWxq0C"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper aims to address issues with training Neural SDEs; in particular, it aims to overcome issues relating to backpropagation through the SDE solve step. Previously this computation was marred by speed and accuracy issues arising due to high computational complexity, numerical errors in the SDEs solve step, as... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"nips_2021_b2bkE0Qq8Ya",
"Aqq4gL5tEA4",
"2--W_LWxq0C",
"NKk4zMzz99Z",
"XfnuMcB-_yJ",
"Wybtfk1ZUHv",
"nips_2021_b2bkE0Qq8Ya",
"nips_2021_b2bkE0Qq8Ya",
"nips_2021_b2bkE0Qq8Ya"
] |
nips_2021_blzTEKKRIcV | EIGNN: Efficient Infinite-Depth Graph Neural Networks | Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications. However, with their inherently finite aggregation layers, existing GNN models may not be able to effectively capture long-range dependencies in the underlying graphs. Motivated by this limitation, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN), to efficiently capture very long-range dependencies. We theoretically derive a closed-form solution of EIGNN which makes training an infinite-depth GNN model tractable. We then further show that we can achieve more efficient computation for training EIGNN by using eigendecomposition. The empirical results of comprehensive experiments on synthetic and real-world datasets show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance. Furthermore, we show that our model is also more robust against both noise and adversarial perturbations on node features.
| accept | 3 of 4 ratings were "accept". The lowest rating (a 5) was most concerned about insufficient experimental evaluation of speed/accuracy tradeoffs, which was echoed by other reviewers as well.
Through the rebuttal/discussion period the authors were very responsive and caused several reviewers to raise their ratings. My view is that the original submission was somewhat confusing and incomplete in a few ways, but that the core of the method and results are solid, and through the discussion period the authors have improved the paper enough to warrant publication. The confusion could also stem from the fairly high technical complexity in the work, which I'd encourage the authors to explain as clearly as possible in the final text.
I do agree with the concern about speed/accuracy analysis, and I'd encourage the authors to do anything they can to address this for camera-ready (if the paper is accepted), however I don't anticipate any surprises.
| test | [
"BoJ_9lINpSF",
"n0oouiAmIRa",
"eELAozgpUUE",
"9lvEhDS0Px5",
"Tf1Ifm__nyH",
"AAjLRmjRyQ",
"aruIJUw4bm8",
"GD6MBl3QJTk",
"3dGxtewqhNF",
"8LxY7nhQbyP",
"NADoWtC9IgV",
"HIweCf4WLs6",
"xM6pe7Kd3Gw",
"Lk3o2QGGQQS",
"HZvkqX09XbZ",
"rinix3ONCN4",
"vGmdA7EoMHV",
"1m0VGY3Rdk",
"OdB6wGuEut"... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_re... | [
" Thank you for your suggestions and the prompt feedback. \n\nWe will try to add more clear justifications for our design in the future version. \n\nHere, we share some findings regarding the tradeoffs between accuracy and efficiency. In our previous experiments, we found simply increasing the max iterative steps w... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"n0oouiAmIRa",
"eELAozgpUUE",
"GD6MBl3QJTk",
"aruIJUw4bm8",
"8LxY7nhQbyP",
"nips_2021_blzTEKKRIcV",
"NADoWtC9IgV",
"rinix3ONCN4",
"nips_2021_blzTEKKRIcV",
"HZvkqX09XbZ",
"HIweCf4WLs6",
"xM6pe7Kd3Gw",
"Lk3o2QGGQQS",
"AAjLRmjRyQ",
"OdB6wGuEut",
"3dGxtewqhNF",
"1m0VGY3Rdk",
"nips_2021... |
nips_2021_WlkzLjxpYe | Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms | Understanding generalization in deep learning has been one of the major challenges in statistical learning theory over the last decade. While recent work has illustrated that the dataset and the training algorithm must be taken into account in order to obtain meaningful generalization bounds, it is still theoretically not clear which properties of the data and the algorithm determine the generalization performance. In this study, we approach this problem from a dynamical systems theory perspective and represent stochastic optimization algorithms as \emph{random iterated function systems} (IFS). Well studied in the dynamical systems literature, under mild assumptions, such IFSs can be shown to be ergodic with an invariant measure that is often supported on sets with a \emph{fractal structure}. As our main contribution, we prove that the generalization error of a stochastic optimization algorithm can be bounded based on the `complexity' of the fractal structure that underlies its invariant measure. Then, by leveraging results from dynamical systems theory, we show that the generalization error can be explicitly linked to the choice of the algorithm (e.g., stochastic gradient descent -- SGD), algorithm hyperparameters (e.g., step-size, batch-size), and the geometry of the problem (e.g., Hessian of the loss). We further specialize our results to specific problems (e.g., linear/logistic regression, one hidden-layered neural networks) and algorithms (e.g., SGD and preconditioned variants), and obtain analytical estimates for our bound. For modern neural networks, we develop an efficient algorithm to compute the developed bound and support our theory with various experiments on neural networks.
| accept | This is a strong paper, in which the authors prove generalization bounds for iterative stochastic algorithms based on the Hausdorff dimension of the invariant measure. The bounds are specialized for several applications like least squares (SGD & stochastic Newton), logistic regression, SVMs, and 1-hidden-layer neural networks. Some supporting empirical verification is also included. The proofs use tools from dynamical systems which are not so common in this are and seem like a promising lens on generalization.
| train | [
"MdCLsTZCjgl",
"P7mDNP0c8xf",
"47mtbb_F-9L",
"Tqji8WvTJsX",
"FEgoMOida5G",
"zXLXvjjwSRy",
"_Qo9ZmFbvV",
"Bb69hi-ellE",
"DtGLYXiQrz7",
"cMwyosREx0s",
"iCWSy7EPt79",
"2UkAATEQUUE",
"jCGpfB8TumQ",
"mFmQaPhOF6"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper gives novel generalization bounds for models trained with SGD (or variants of SGD) based on the Hausdorff dimension. The main contribution is Theorem 1 which shows, under certain assumptions, that for a general class of models, their generalization bound after training with SGD can be bounded by a term t... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_WlkzLjxpYe",
"Tqji8WvTJsX",
"nips_2021_WlkzLjxpYe",
"FEgoMOida5G",
"zXLXvjjwSRy",
"iCWSy7EPt79",
"MdCLsTZCjgl",
"DtGLYXiQrz7",
"mFmQaPhOF6",
"jCGpfB8TumQ",
"47mtbb_F-9L",
"nips_2021_WlkzLjxpYe",
"nips_2021_WlkzLjxpYe",
"nips_2021_WlkzLjxpYe"
] |
nips_2021_J-pFhOiGVn7 | An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence | A Bayesian treatment can mitigate overconfidence in ReLU nets around the training data. But far away from them, ReLU Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be asymptotically overconfident. This issue arises since the output variance of a BNN with finitely many features is quadratic in the distance from the data region. Meanwhile, Bayesian linear models with ReLU features converge, in the infinite-width limit, to a particular Gaussian process (GP) with a variance that grows cubically so that no asymptotic overconfidence can occur. While this may seem of mostly theoretical interest, in this work, we show that it can be used in practice to the benefit of BNNs. We extend finite ReLU BNNs with infinite ReLU features via the GP and show that the resulting model is asymptotically maximally uncertain far away from the data while the BNNs' predictive power is unaffected near the data. Although the resulting model approximates a full GP posterior, thanks to its structure, it can be applied post-hoc to any pre-trained ReLU BNN at a low cost.
| accept | The proposed method attempts to fix the asymptotic overconfidence issue in ReLU networks. All reviewers agree that the paper is of high quality, theoretically interesting, and practically sound, and thus should be accepted. The remaining concerns are about motivation and experiments with other methods in addition to last-layer Laplace --- the authors are strongly encouraged to improve these points in the next iteration. | train | [
"egT11PsrBYr",
"nFYQTA2TwLL",
"jAhLHj_fG7",
"fWUa6uBEgxM",
"3_bv_XbbQAX",
"ojGAxmJ4QOs",
"mFahbsMBEor",
"qsajwadT2Wp",
"hokuCqJiEr",
"3ZkzeFku0eD",
"n9Kc-u4mBUq",
"DzXy7tvdoJm",
"VQCY8RnBsx7",
"DGhZl6iaNr",
"vDbv-uSM68e",
"aZGsteiQRdw",
"0D6KVo3BbmL",
"LPN7P9IBOiA",
"OIcGvEJFZ1w"... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_rev... | [
" Thank you for clarifications and further details - I look forward to seeing this changes in the next revision!",
" First of all, thanks to the Authors for the response. After reading the reply to my questions and the discussion with other reviewers, I'm overall positive with this work and I'll keep my score (7)... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"jAhLHj_fG7",
"vDbv-uSM68e",
"fWUa6uBEgxM",
"0D6KVo3BbmL",
"mFahbsMBEor",
"nips_2021_J-pFhOiGVn7",
"DzXy7tvdoJm",
"nips_2021_J-pFhOiGVn7",
"3ZkzeFku0eD",
"n9Kc-u4mBUq",
"aZGsteiQRdw",
"VQCY8RnBsx7",
"DGhZl6iaNr",
"ojGAxmJ4QOs",
"OIcGvEJFZ1w",
"qsajwadT2Wp",
"LPN7P9IBOiA",
"nips_202... |
nips_2021_fThfMoV7Ri | Bandit Phase Retrieval | Tor Lattimore, Botao Hao | accept | The paper studies the bandit phase retrieval problem and establishes a sharp minimax regret bound of d\sqrt{n}, which improves on previous work based on low rank bandit approaches. The proof is enlightening and the paper is well-written, making nice connections to related work. All reviewers were positive on the paper, and so we advocate acceptance.
As I mentioned in our discussion, I do think it would be worthwhile to add some discussion of the recent/concurrent paper of Huang et al to the final manuscript. | train | [
"I2mqArqeQrv",
"io91tKM1_NJ",
"3QLNxWt6iOa",
"KEhj1DWTTK",
"jKPjqYWd6P",
"9QRHcStJXRi",
"QaNA6IHKO3x",
"brYyWo48Df"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThe paper studies the quadratic bandit problem, where the reward at time $t$ is given by $\\langle \\theta_\\star, a_t \\rangle^2 + \\varepsilon_t$, where $\\varepsilon_t$ is i.i.d. gaussian noise, $\\theta_\\star$ is an unknown $d$-dimensional vector and $a$ is the action. The action set is fixed to be the unit... | [
7,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"nips_2021_fThfMoV7Ri",
"9QRHcStJXRi",
"I2mqArqeQrv",
"brYyWo48Df",
"QaNA6IHKO3x",
"nips_2021_fThfMoV7Ri",
"nips_2021_fThfMoV7Ri",
"nips_2021_fThfMoV7Ri"
] |
nips_2021_napaTaDQ0lY | Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions | Yin Tat Lee, Ruoqi Shen, Kevin Tian | accept | This paper provides lower bounds on mixing and relaxation times of popular sampling algorithms for log-concave distributions. The reviewers all agree that this is a strong contribution that fills in a gap in the existing literature. This is a strong paper! | train | [
"TV-1gBb4kgD",
"nP1g-tdz7Z",
"Zw5J4ZUw5R",
"1YGyGR3rBD5",
"RHcBY-9-2aX",
"Ics4kUPoU3R",
"ki_aujstoKx",
"7lYdey-Y0e",
"75PIUPrXuk",
"-C8Pe2o9ebo"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies two well-known sampling algorithms for well-conditioned log concave distributions: the Metropolos adjusted Langevin algorithm (MALA) and Hamiltonian Montecarlo (HMC). The main contributions of the paper are:\n\n1. A lower bound $\\tilde\\Omega(d\\kappa)$ for the mixing time of MALA from an $\\ex... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"nips_2021_napaTaDQ0lY",
"1YGyGR3rBD5",
"7lYdey-Y0e",
"Ics4kUPoU3R",
"ki_aujstoKx",
"TV-1gBb4kgD",
"-C8Pe2o9ebo",
"75PIUPrXuk",
"nips_2021_napaTaDQ0lY",
"nips_2021_napaTaDQ0lY"
] |
nips_2021_D5APl1Yixnc | Taming Communication and Sample Complexities in Decentralized Policy Evaluation for Cooperative Multi-Agent Reinforcement Learning | Xin Zhang, Zhuqing Liu, Jia Liu, Zhengyuan Zhu, Songtao Lu | accept | This paper studies decentralized policy evaluation for cooperative multi-agent RL. The main approach of this paper is (1) reduce the problem to a minimax optimization problem, and (2) develop new algorithms for solving this minimax problem using gradient tracking and variance reduction technique. As pointed out by reviewers, the main concerns of this paper are that (a) the main technique contribution is mostly in optimization, while the paper currently mainly sells as a RL paper, without detailed comparison in terms of rate with existing optimization algorithms. (b) the strongly concave assumption lacks justification in the RL setting. Except these, most reviewers are convinced that the paper is well-written, the results are solid, and this paper makes interesting contributions. We hope authors can address above concerns in the final version. | test | [
"Or_3FjQ01Vg",
"sAgRcFHigSI",
"3yu8VElDut",
"bFk3gddi_eY",
"vM8Dq6l6_md",
"seG9kCaf6gz",
"A9kC5HgWzNS",
"CT-OGq-SoIf",
"fYQeAzuh_ei",
"SwHK4ZG-7Tz",
"yqApHnJ7Emy",
"Uyt1AZHq3wC",
"8ciR8xbDjCo",
"fMyVw-dvTzS",
"m6pRSNGDFEz"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We thank Reviewer 6xn4 again for the updated review and the increased score to 7! All your reviews, comments, questions, and suggestions have significantly improved the quality of our work! We will definitely incorporate all your suggestions and comments in the revised version of this paper!",
" Thanks for your... | [
-1,
-1,
7,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
3,
-1,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"vM8Dq6l6_md",
"bFk3gddi_eY",
"nips_2021_D5APl1Yixnc",
"Uyt1AZHq3wC",
"nips_2021_D5APl1Yixnc",
"nips_2021_D5APl1Yixnc",
"CT-OGq-SoIf",
"fYQeAzuh_ei",
"SwHK4ZG-7Tz",
"yqApHnJ7Emy",
"seG9kCaf6gz",
"3yu8VElDut",
"m6pRSNGDFEz",
"vM8Dq6l6_md",
"nips_2021_D5APl1Yixnc"
] |
nips_2021_yJqcM36Qvnu | Federated Graph Classification over Non-IID Graphs | Federated learning has emerged as an important paradigm for training machine learning models in different domains. For graph-level tasks such as graph classification, graphs can also be regarded as a special type of data samples, which can be collected and stored in separate local systems. Similar to other domains, multiple local systems, each holding a small set of graphs, may benefit from collaboratively training a powerful graph mining model, such as the popular graph neural networks (GNNs). To provide more motivation towards such endeavors, we analyze real-world graphs from different domains to confirm that they indeed share certain graph properties that are statistically significant compared with random graphs. However, we also find that different sets of graphs, even from the same domain or same dataset, are non-IID regarding both graph structures and node features. To handle this, we propose a graph clustered federated learning (GCFL) framework that dynamically finds clusters of local systems based on the gradients of GNNs, and theoretically justify that such clusters can reduce the structure and feature heterogeneity among graphs owned by the local systems. Moreover, we observe the gradients of GNNs to be rather fluctuating in GCFL which impedes high-quality clustering, and design a gradient sequence-based clustering mechanism based on dynamic time warping (GCFL+). Extensive experimental results and in-depth analysis demonstrate the effectiveness of our proposed frameworks.
| accept | There was a fruitful discussion about this paper.
Overall, I feel that the paper contains enough interesting and novel ideas for a publication. | train | [
"icXOSWwe75Q",
"zbL4mRXhJBH",
"clqIOAOta_",
"vM-dSoFzc5",
"e6sCa5VFXx_",
"RPmJ1OR0-5",
"JLFcczs0JVY",
"A_AegsimizI"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer RhpX,\n\nWe are glad to receive the positive feedback from you. Thanks a lot and please feel free to let us know should any additional concerns arise.",
"This paper advocates a novel setting of cross-dataset and cross-domain federated learning for graph classification, which allows multiple data o... | [
-1,
8,
-1,
-1,
-1,
-1,
8,
4
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"clqIOAOta_",
"nips_2021_yJqcM36Qvnu",
"vM-dSoFzc5",
"zbL4mRXhJBH",
"A_AegsimizI",
"JLFcczs0JVY",
"nips_2021_yJqcM36Qvnu",
"nips_2021_yJqcM36Qvnu"
] |
nips_2021_vrhNQ7aYSdr | SubTab: Subsetting Features of Tabular Data for Self-Supervised Representation Learning | Self-supervised learning has been shown to be very effective in learning useful representations, and yet much of the success is achieved in data types such as images, audio, and text. The success is mainly enabled by taking advantage of spatial, temporal, or semantic structure in the data through augmentation. However, such structure may not exist in tabular datasets commonly used in fields such as healthcare, making it difficult to design an effective augmentation method, and hindering a similar progress in tabular data setting. In this paper, we introduce a new framework, Subsetting features of Tabular data (SubTab), that turns the task of learning from tabular data into a multi-view representation learning problem by dividing the input features to multiple subsets. We argue that reconstructing the data from the subset of its features rather than its corrupted version in an autoencoder setting can better capture its underlying latent representation. In this framework, the joint representation can be expressed as the aggregate of latent variables of the subsets at test time, which we refer to as collaborative inference. Our experiments show that the SubTab achieves the state of the art (SOTA) performance of 98.31% on MNIST in tabular setting, on par with CNN-based SOTA models, and surpasses existing baselines on three other real-world datasets by a significant margin.
| accept | The authors have addressed most of the reviewer concerns during the review process, and I now suggest the acceptance of the paper. As the reviewers have acknowledged, the performance improvements are notable with an elegant and conceptually appealing method.
There is significant content in the author responses that need to be integrated into the paper prior to publication, especially on the new results on extra benchmark datasets, improved literature review, numerous clarification points on the method explanations, hyperparameter tuning, ablation studies, and description of limitations. | train | [
"zSViYm0ov8S",
"pDTutTXMqLQ",
"fW9G4d_7YeV",
"76QeyGpG6wo",
"QM0GubhIhnp",
"SVQ4XIHwJ4G",
"ihuxqXYrD1q",
"sOSzBuc2P4",
"bisqRF8PWrM",
"Rtv1hCHf9O",
"8-9rhw_Fil2",
"KfeJ8LwH77f",
"VQVtWf4K7E",
"PBKPtKX_S1v",
"VBtfwSLvH8z",
"D8GT0r6Y8hH",
"bfIAqxYq4Wf",
"6yJuBQ1j7i",
"MR2n7Z8veP"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We truly appreciate the kind words. We thank the reviewer for the positive feedback and the vote of acceptance. We think that the paper got better with the feedback we received, and we are grateful to all the reviewers for that.",
" The authors have adequately addressed my main concerns. I have increased my sco... | [
-1,
-1,
7,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
-1,
-1,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"pDTutTXMqLQ",
"PBKPtKX_S1v",
"nips_2021_vrhNQ7aYSdr",
"ihuxqXYrD1q",
"bisqRF8PWrM",
"nips_2021_vrhNQ7aYSdr",
"SVQ4XIHwJ4G",
"nips_2021_vrhNQ7aYSdr",
"Rtv1hCHf9O",
"sOSzBuc2P4",
"MR2n7Z8veP",
"SVQ4XIHwJ4G",
"D8GT0r6Y8hH",
"bfIAqxYq4Wf",
"8-9rhw_Fil2",
"sOSzBuc2P4",
"fW9G4d_7YeV",
"... |
nips_2021_yxHPRAqCqn | Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance | Hongjian Wang, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Simsekli, Murat A. Erdogdu | accept | This paper examines the convergence of stochastic gradient descent in strongly convex minimization problems. The novelty of the analysis is that the authors do not assume that the variance of the gradient queries is finite; instead, they consider "heavy-tailed" gradient noise models with bounded moments for some $p\in[1,2)$ - but not necessarily for $p=2$ or higher.
This paper received almost universally positive reviews during the review phase, and the only "weak reject" recommendation was changed to a "weak accept" after the authors addressed the reviewer's concerns. As a result, during the committee discussion, a consensus was reached early on to make an "accept" recommendation. | test | [
"DolqwObfc2V",
"QaKSrimfZ7",
"nDY9shcZZn",
"5UQaV01O2_l",
"YW4pokOo4ae",
"soVEXPD3w7q",
"Dy_TNMDlSuD",
"fPhJx9pr_r-",
"bK4oEOVsCj",
"fLIdkjZKkRq",
"00wWYjLMRM"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper concerns the convergence rate of SGD with heavy-tail noise. In particular, the considered gradient noise consists of a finite-variance part (corresponds to the multiplicative noise in SGD for least square) and an infinite-variance part (corresponds to the additive noise in SGD for least square). The mai... | [
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_yxHPRAqCqn",
"soVEXPD3w7q",
"5UQaV01O2_l",
"fPhJx9pr_r-",
"nips_2021_yxHPRAqCqn",
"00wWYjLMRM",
"YW4pokOo4ae",
"DolqwObfc2V",
"fLIdkjZKkRq",
"nips_2021_yxHPRAqCqn",
"nips_2021_yxHPRAqCqn"
] |
nips_2021__61Qh8tULj_ | Conflict-Averse Gradient Descent for Multi-task learning | The goal of multi-task learning is to enable more efficient learning than single task learning by sharing model structures for a diverse set of tasks. A standard multi-task learning objective is to minimize the average loss across all tasks. While straightforward, using this objective often results in much worse final performance for each task than learning them independently. A major challenge in optimizing a multi-task model is the conflicting gradients, where gradients of different task objectives are not well aligned so that following the average gradient direction can be detrimental to specific tasks' performance. Previous work has proposed several heuristics to manipulate the task gradients for mitigating this problem. But most of them lack convergence guarantee and/or could converge to any Pareto-stationary point.In this paper, we introduce Conflict-Averse Gradient descent (CAGrad) which minimizes the average loss function, while leveraging the worst local improvement of individual tasks to regularize the algorithm trajectory. CAGrad balances the objectives automatically and still provably converges to a minimum over the average loss. It includes the regular gradient descent (GD) and the multiple gradient descent algorithm (MGDA) in the multi-objective optimization (MOO) literature as special cases. On a series of challenging multi-task supervised learning and reinforcement learning tasks, CAGrad achieves improved performance over prior state-of-the-art multi-objective gradient manipulation methods.
| accept | The manuscript is proposing a multi-task learning method using multi-objective optimization. The proposed algorithm extends the MGDA line of work by simply changing the objective from finding a descent direction that is consistent with all objectives to finding a descent direction that is consistent with all objectives and closest to the average gradient. The proposed algorithm is later analyzed theoretically with Pareto stationarity and convergence guarantees. The authors provide an extensive empirical study with very promising results. Reviewers all enjoyed reading the paper and appreciated the algorithm. Most of the major issues about the paper are on the set-of-baselines side which the authors addressed successfully with additional experiments during the rebuttal perior. I believe the paper is interesting and warrant acceptance. | train | [
"uTZzj5U2Q-A",
"b_85I5W3EtY",
"8X1ilJ9C80s",
"bHtdRyym5N",
"zT-WscIN5Ar",
"RwcYuwGVFwY",
"oLaFM2B24M",
"xbKxxqQRYou",
"o3gK93Y1twA",
"xf2FkcPhBYX",
"Miu23W1TVOZ",
"zCibkNrtry4",
"iyPaTHc2F2g",
"uNZIHDsXOfN",
"VgsRtpida9Z",
"1-z44mJZkON"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Authors propose a multitask learning approach to reduce gradient conflict by minimizing harm to the worst-performing task given any gradient update. Proofs are provided for convergence and pareto optimality of the resultant weight configuration. Experiments are run on a variety of settings, including computer visi... | [
6,
-1,
5,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
-1,
5,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021__61Qh8tULj_",
"8X1ilJ9C80s",
"nips_2021__61Qh8tULj_",
"nips_2021__61Qh8tULj_",
"xf2FkcPhBYX",
"nips_2021__61Qh8tULj_",
"xbKxxqQRYou",
"o3gK93Y1twA",
"zCibkNrtry4",
"bHtdRyym5N",
"uTZzj5U2Q-A",
"RwcYuwGVFwY",
"8X1ilJ9C80s",
"1-z44mJZkON",
"nips_2021__61Qh8tULj_",
"nips_2021__... |
nips_2021_wdIDt--oLmV | Amortized Synthesis of Constrained Configurations Using a Differentiable Surrogate | In design, fabrication, and control problems, we are often faced with the task of synthesis, in which we must generate an object or configuration that satisfies a set of constraints while maximizing one or more objective functions. The synthesis problem is typically characterized by a physical process in which many different realizations may achieve the goal. This many-to-one map presents challenges to the supervised learning of feed-forward synthesis, as the set of viable designs may have a complex structure. In addition, the non-differentiable nature of many physical simulations prevents efficient direct optimization. We address both of these problems with a two-stage neural network architecture that we may consider to be an autoencoder. We first learn the decoder: a differentiable surrogate that approximates the many-to-one physical realization process. We then learn the encoder, which maps from goal to design, while using the fixed decoder to evaluate the quality of the realization. We evaluate the approach on two case studies: extruder path planning in additive manufacturing and constrained soft robot inverse kinematics. We compare our approach to direct optimization of the design using the learned surrogate, and to supervised learning of the synthesis problem. We find that our approach produces higher quality solutions than supervised learning, while being competitive in quality with direct optimization, at a greatly reduced computational cost.
| accept | Thank you for your submission to NeurIPS. Even after discussion, there is some substantial disagreement on this paper, so I will need to express an opinion that is not a unanimous one. But with this caveat, I will state that I ultimately come down very much on the "positive" side of the disagreement, and I am recommending the paper be accepted to NeurIPS as a spotlight.
The rationale for this is rather simple: although currently confined to rather simple domains, the paper makes a compelling case that introducing a differentiable surrogate into synthesis problems, in a disciplined and generic fashion, can provide substantial gains over direct learning on the system of interest. The authors do an extremely good job presenting detailed (and real, even if they are simple) evaluations of this strategy. Thus, in total the authors present a compelling (_in_ its simplicity, rather than this being a flaw) approach to constrained synthesis, and show that across multiple realistic domains it outperforms the most common baseline. Those are all points of a strong paper.
Now let me address the negative reviewer's concerns, which in my mind boils down to the concern that 1) the domains evaluated here are not particularly challenging and 2) don't reflect the vast amount of work that is already done in incorporating neural networks into synthesis problems. I think it would of course be good to mention some of these connections (in the discussion, not in direct comparison) in the final draft of the paper. But it is also the case that the landscape of integrating neural networks into specific physical synthesis domains is _vast_, and attempting to consider the scope in which all these approaches have been applied previously wouldn't be feasible. Yes, for any given domain there are likely multiple NN-based solutions to synthesis problems, but attempting to consider all of these, in my opinion, would make anything but extremely domain-specific papers impossible. The current paper makes no promises of extending the state of the art in a given practical domain (say the chemistry domains mentioned by the reviewer), but does offer evidence of a generic and compelling approach, and I believe these should be welcomed even if they are not (yet) as in-depth within a given domain as would be ideal in the longer term. I think this is likely to be a fundamental disagreement as to the "right" focuses of papers like these, so while I don't expect to resolve this dilemma (and I certainly understand the reviewer's opinion), I still lean strongly towards accepting the paper. | train | [
"bh3PYWT_6ba",
"tl0w-MLfNSP",
"Sp3s4lTsHzB",
"ulxmy2sUy-",
"qe7sGmw_qY",
"roEoWoRpZNQ",
"o3E1MM0v5xy",
"1hOs7gyCd1I",
"qLCcIKIiquj",
"tPEXfaKVsVG",
"LWmztfgGSsf",
"8zQ669zPfG"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for providing these additional details.",
" I thank the reviewers for their response.\n\nRe. (1), I realize that one could indeed expand the dataset. However, the point is that properly representing all possible velocity profiles and other such higher order requirements would significantly expand the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
2,
5
] | [
"qe7sGmw_qY",
"1hOs7gyCd1I",
"ulxmy2sUy-",
"o3E1MM0v5xy",
"8zQ669zPfG",
"LWmztfgGSsf",
"tPEXfaKVsVG",
"qLCcIKIiquj",
"nips_2021_wdIDt--oLmV",
"nips_2021_wdIDt--oLmV",
"nips_2021_wdIDt--oLmV",
"nips_2021_wdIDt--oLmV"
] |
nips_2021_3qYgdGj9Svt | Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination | A recurring theme in statistical learning, online learning, and beyond is that faster convergence rates are possible for problems with low noise, often quantified by the performance of the best hypothesis; such results are known as first-order or small-loss guarantees. While first-order guarantees are relatively well understood in statistical and online learning, adapting to low noise in contextual bandits (and more broadly, decision making) presents major algorithmic challenges. In a COLT 2017 open problem, Agarwal, Krishnamurthy, Langford, Luo, and Schapire asked whether first-order guarantees are even possible for contextual bandits and---if so---whether they can be attained by efficient algorithms. We give a resolution to this question by providing an optimal and efficient reduction from contextual bandits to online regression with the logarithmic (or, cross-entropy) loss. Our algorithm is simple and practical, readily accommodates rich function classes, and requires no distributional assumptions beyond realizability. In a large-scale empirical evaluation, we find that our approach typically outperforms comparable non-first-order methods.On the technical side, we show that the logarithmic loss and an information-theoretic quantity called the triangular discrimination play a fundamental role in obtaining first-order guarantees, and we combine this observation with new refinements to the regression oracle reduction framework of Foster and Rakhlin (2020). The use of triangular discrimination yields novel results even for the classical statistical learning model, and we anticipate that it will find broader use.
| accept | The paper advances the state of the art in contextual bandits by proposing an interesting extension of the SquareCB algorithm of Foster and Rakhlin (ICML 2020), and showing that this method efficiently achieves optimal first-order regret guarantees under realistic conditions. This solves a COLT open problem.
The paper has received uniformly positive reviews, with the reviewers praising the solid theoretical contributions and the thorough experimental evaluation. I myself have found the contribution to be significant and the techniques to be highly original. Overall, I believe that this outstanding paper is likely to have a long-lasting impact in the contextual-bandit literature, and clearly it should be accepted for publication at NeurIPS 2021. | train | [
"mK6wKLV0C9",
"xhfPS2Y4lvH",
"cLbfEys3ssR",
"q7gjy2pf9WS",
"-BfqxHWnWFm",
"Qsi7UPRzfFF",
"1gVDJGqQDEI",
"d-3bkN4Vs7R",
"ubBRNmjpFe9",
"gEnmh-Y8LT4"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for your review! Please let us know if there are any further questions we can address.",
"Optimal first-order regret bound for contextual bandits. This paper built on the recent SquareCB paper and derived an optimal first-order regret bound for contextual bandits. The main change of the algorithm ... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"1gVDJGqQDEI",
"nips_2021_3qYgdGj9Svt",
"q7gjy2pf9WS",
"gEnmh-Y8LT4",
"xhfPS2Y4lvH",
"ubBRNmjpFe9",
"d-3bkN4Vs7R",
"nips_2021_3qYgdGj9Svt",
"nips_2021_3qYgdGj9Svt",
"nips_2021_3qYgdGj9Svt"
] |
nips_2021_GfVeFihyLRe | Distributed Estimation with Multiple Samples per User: Sharp Rates and Phase Transition | Jayadev Acharya, Clement Canonne, Yuhan Liu, Ziteng Sun, Himanshu Tyagi | accept | Overall the reviewers liked the matching upper and lower bounds in a wide range of regimes for a quite natural distributed problem, generalizing in a significant way previous work that only held for one sample per machine. The reviewers also found the phase transition to be interesting, as well as the techniques drawn from different areas. There were some specific presentation comments that we encourage the authors to take into account. | train | [
"iecZOfTgoBB",
"WrJh_e-NuI",
"GlkvrejjLNU",
"eWxzIJLYSXB",
"WU0cmuJe4nk",
"Ia5b_T6XZMZ",
"9CXdPElQwFq",
"T4A29uUzq1_",
"5p4LcqbpnPf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their thoughtful responses to my and the other referee's reviews of the work. My review remains unchanged.",
"The paper studies the problem of distributed learning of a discrete distribution. Formally, there are $n$ participants each of whom receives $m$ samples from a discrete distr... | [
-1,
6,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"T4A29uUzq1_",
"nips_2021_GfVeFihyLRe",
"5p4LcqbpnPf",
"T4A29uUzq1_",
"WrJh_e-NuI",
"9CXdPElQwFq",
"nips_2021_GfVeFihyLRe",
"nips_2021_GfVeFihyLRe",
"nips_2021_GfVeFihyLRe"
] |
nips_2021_i_Q1yrOegLY | Revisiting Deep Learning Models for Tabular Data | The existing literature on deep learning for tabular data proposes a wide range of novel architectures and reports competitive results on various datasets. However, the proposed models are usually not properly compared to each other and existing works often use different benchmarks and experiment protocols. As a result, it is unclear for both researchers and practitioners what models perform best. Additionally, the field still lacks effective baselines, that is, the easy-to-use models that provide competitive performance across different problems.In this work, we perform an overview of the main families of DL architectures for tabular data and raise the bar of baselines in tabular DL by identifying two simple and powerful deep architectures. The first one is a ResNet-like architecture which turns out to be a strong baseline that is often missing in prior works. The second model is our simple adaptation of the Transformer architecture for tabular data, which outperforms other solutions on most tasks. Both models are compared to many existing architectures on a diverse set of tasks under the same training and tuning protocols. We also compare the best DL models with Gradient Boosted Decision Trees and conclude that there is still no universally superior solution. The source code is available at https://github.com/yandex-research/rtdl.
| accept | All reviewers have agreed on the positive aspects of the paper, and its important contributions for tabular deep learning. There were significant additions by the authors during the review process, particularly extra experimental results, method motivations, clarification of experimental setups and hyperparameter tuning. It is important to reflect these in the final version of the paper. Finally, as discussed, it would be an impactful addition to open-source the benchmarking framework. | test | [
"QO8gb7D0nDu",
"fdFXEeuwRFJ",
"GEhsteOfBan",
"nJTqjvX22pv",
"yoE7z6OLZ8",
"nT5wZeYH_8-",
"FdKlTIJJyrG",
"qnMvCnZQCM6",
"snZMTEkuWnA",
"MyIIYGvCFxE",
"feuRIi88PPD",
"-w7DjbL6sS6",
"wIkHnPDM_uv",
"aY0pjHWo0Bw",
"_eIp9ZyLHZ",
"3MmFecT0g8",
"4B8ASoucG87",
"tw3HqcdeGdn"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nthanks for your swift reply.\nI like all proposed changes and will further increase my score to a 7.\nI hope that you follow through with all the changes you propose here, and I would especially suggest that you take Points 1-4 of the rebuttal and add them to the paper.\n\nI don't quite know what... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"GEhsteOfBan",
"nips_2021_i_Q1yrOegLY",
"nJTqjvX22pv",
"yoE7z6OLZ8",
"nT5wZeYH_8-",
"snZMTEkuWnA",
"fdFXEeuwRFJ",
"fdFXEeuwRFJ",
"fdFXEeuwRFJ",
"tw3HqcdeGdn",
"4B8ASoucG87",
"3MmFecT0g8",
"_eIp9ZyLHZ",
"nips_2021_i_Q1yrOegLY",
"nips_2021_i_Q1yrOegLY",
"nips_2021_i_Q1yrOegLY",
"nips_2... |
nips_2021_2j_cut38wv | Backdoor Attack with Imperceptible Input and Latent Modification | Recent studies have shown that deep neural networks (DNN) are vulnerable to various adversarial attacks. In particular, an adversary can inject a stealthy backdoor into a model such that the compromised model will behave normally without the presence of the trigger. Techniques for generating backdoor images that are visually imperceptible from clean images have also been developed recently, which further enhance the stealthiness of the backdoor attacks from the input space. Along with the development of attacks, defense against backdoor attacks is also evolving. Many existing countermeasures found that backdoor tends to leave tangible footprints in the latent or feature space, which can be utilized to mitigate backdoor attacks.In this paper, we extend the concept of imperceptible backdoor from the input space to the latent representation, which significantly improves the effectiveness against the existing defense mechanisms, especially those relying on the distinguishability between clean inputs and backdoor inputs in latent space. In the proposed framework, the trigger function will learn to manipulate the input by injecting imperceptible input noise while matching the latent representations of the clean and manipulated inputs via a Wasserstein-based regularization of the corresponding empirical distributions. We formulate such an objective as a non-convex and constrained optimization problem and solve the problem with an efficient stochastic alternating optimization procedure. We name the proposed backdoor attack as Wasserstein Backdoor (WB), which achieves a high attack success rate while being stealthy from both the input and latent spaces, as tested in several benchmark datasets, including MNIST, CIFAR10, GTSRB, and TinyImagenet.
| accept | This paper extends the notion of stealthy backdoors to the latent space (the authors focus on the penultimate layer). The manuscript shows how the proposed attack is able to evade prior defenses for backdoors. Optimizing an objective in the latent space is not completely novel to the adversarial ML area - it has been done in adversarial examples research (e.g., https://arxiv.org/abs/1511.05122) - but it is here demonstrated in the context of backdoors attacks which creates its own sets of challenges. Thus, there is merit to the work proposed here. In particular, experiments show how the proposed attack is able to obtain similar performance to prior approaches while also improving the stealthiness in latent space. I encourage the authors to take into account discussions from the reviews while preparing the camera ready of their manuscript, and to open-source their code. | train | [
"QXqGACczrPG",
"7MJ2MhG5j5k",
"f6OZo7DYt_",
"0N0uymoeZao",
"rpv4iHwk8Vz",
"lpaOzFHhkGl",
"Tu8Y3TBX2D-",
"f_Dfr6qw6kf",
"NFKBheRf9U7",
"GMDjhkBKg4P",
"LV2UigLduks",
"wb8GUZXo16l",
"TxPtN3IYpkW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you! We will include this reference in the proof for the later version of the manuscript.",
" Thanks for the clarifications you should mention [20] in your proof.",
" I have updated your score to 7.\nPlease update Figure 9 to show your actually backdoor images in the revised version.",
"This paper pro... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"7MJ2MhG5j5k",
"Tu8Y3TBX2D-",
"rpv4iHwk8Vz",
"nips_2021_2j_cut38wv",
"lpaOzFHhkGl",
"NFKBheRf9U7",
"wb8GUZXo16l",
"TxPtN3IYpkW",
"0N0uymoeZao",
"LV2UigLduks",
"nips_2021_2j_cut38wv",
"nips_2021_2j_cut38wv",
"nips_2021_2j_cut38wv"
] |
nips_2021_Mfi0LZmFB5a | SOPE: Spectrum of Off-Policy Estimators | Many sequential decision making problems are high-stakes and require off-policy evaluation (OPE) of a new policy using historical data collected using some other policy. One of the most common OPE techniques that provides unbiased estimates is trajectory based importance sampling (IS). However, due to the high variance of trajectory IS estimates, importance sampling methods based on state-action visitation distributions (SIS) have recently been adopted. Unfortunately, while SIS often provides lower variance estimates for long horizons, estimating the state-action distribution ratios can be challenging and lead to biased estimates. In this paper, we present a new perspective on this bias-variance trade-off and show the existence of a spectrum of estimators whose endpoints are SIS and IS. Additionally, we also establish a spectrum for doubly-robust and weighted version of these estimators. We provide empirical evidence that estimators in this spectrum can be used to trade-off between the bias and variance of IS and SIS and can achieve lower mean-squared error than both IS and SIS.
| accept | The paper considers the problem of off-policy evaluation and shows that it is possible to interpolate between the importance sampling estimator, which is unbiased but high variance, and correcting state distributions, which is low variance but could be very biased since the distributions need to be estimated. The authors showed that the optimal estimator with lowest MSE is indeed taking an intermediate values in the spectrum to mix these two estimators.
The proposed estimator is derived in a principled fashion, the results are insightful, and the paper is very well written. Generally this paper's contribution are non-trivial yet interesting, and it can definitely be a valuable addition in the vast literature of off-policy evaluation in RL. Some potential drawback of this paper includes simplistic experiments that are aimed only for proof-of-concept, and the restriction to to n-step estimators. During the discussion phase, the reviewer also recommends doing another round of improvement before publications to study the confounding effect of these two estimators. Please try to do so to improve the (already good) quality of this work.
In general I believe the merits of this work surpass the improvements required. So I also recommend acceptance.
| val | [
"9M1Xt9ZXj9x",
"HWc0Hu1_RSc",
"1B8VZRTNbO6",
"WoocMAhDx45",
"lftSagYULfY",
"e_X5A9_V_xk",
"1MNZBjo_DOV",
"uykNYyctDVK",
"6Eq16U7bQJ6"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a novel n-step method which interpolates between PDIS ($n=H$) and density estimation ($n=0$) methods (the DICE family). The paper derives the relationship between density ratio corrections and likelihood ratio corrections, leading to the development of the proposed algorithm. The paper empiri... | [
8,
5,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"nips_2021_Mfi0LZmFB5a",
"nips_2021_Mfi0LZmFB5a",
"WoocMAhDx45",
"HWc0Hu1_RSc",
"9M1Xt9ZXj9x",
"6Eq16U7bQJ6",
"uykNYyctDVK",
"nips_2021_Mfi0LZmFB5a",
"nips_2021_Mfi0LZmFB5a"
] |
nips_2021_UZm2IQhgIyB | Label-Imbalanced and Group-Sensitive Classification under Overparameterization | The goal in label-imbalanced and group-sensitive classification is to optimize relevant metrics such as balanced error and equal opportunity. Classical methods, such as weighted cross-entropy, fail when training deep nets to the terminal phase of training (TPT), that is training beyond zero training error. This observation has motivated recent flurry of activity in developing heuristic alternatives following the intuitive mechanism of promoting larger margin for minorities. In contrast to previous heuristics, we follow a principled analysis explaining how different loss adjustments affect margins. First, we prove that for all linear classifiers trained in TPT, it is necessary to introduce multiplicative, rather than additive, logit adjustments so that the interclass margins change appropriately. To show this, we discover a connection of the multiplicative CE modification to the cost-sensitive support-vector machines. Perhaps counterintuitively, we also find that, at the start of training, the same multiplicative weights can actually harm the minority classes. Thus, while additive adjustments are ineffective in the TPT, we show that they can speed up convergence by countering the initial negative effect of the multiplicative weights. Motivated by these findings, we formulate the vector-scaling (VS) loss, that captures existing techniques as special cases. Moreover, we introduce a natural extension of the VS-loss to group-sensitive classification, thus treating the two common types of imbalances (label/group) in a unifying way. Importantly, our experiments on state-of-the-art datasets are fully consistent with our theoretical insights and confirm the superior performance of our algorithms. Finally, for imbalanced Gaussian-mixtures data, we perform a generalization analysis, revealing tradeoffs between balanced / standard error and equal opportunity.
| accept | This paper has obtained four favorable reviews. The reviewers laud the novelty of the theoretical study of the imbalanced loss functions used. It is also practically valuable in the sense that it is useful to develop new algorithms. Many researchers would be interested in the result. The AC agrees with the reviewers. | train | [
"wM_F2JjgMP",
"ESTSNV44I4V",
"5aIZIeWd2AL",
"NSmeSGAf7Zv",
"Bww1reFCFP-",
"POORZWzILbF",
"Slcw5-jIhom",
"bHJMJBmEFmY",
"Bs4x1Nt2vO6",
"gyJvLBXnJ_y",
"F0llIr6VqNi",
"NhhFssOza9g",
"9ix_t3TCIu",
"BUqdvpDuDXJ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Reviewers and AC,\n\nWe are happy to see that our previous responses have helped to raise any concerns. Since the discussion phase will be finalizing soon, we wanted to reach out to you again letting you know that we are happy to hear if you have further feedback or concerns on the paper.\n\nWith this oppor... | [
-1,
6,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_UZm2IQhgIyB",
"nips_2021_UZm2IQhgIyB",
"NhhFssOza9g",
"bHJMJBmEFmY",
"nips_2021_UZm2IQhgIyB",
"Bs4x1Nt2vO6",
"nips_2021_UZm2IQhgIyB",
"F0llIr6VqNi",
"gyJvLBXnJ_y",
"Bww1reFCFP-",
"BUqdvpDuDXJ",
"ESTSNV44I4V",
"Slcw5-jIhom",
"nips_2021_UZm2IQhgIyB"
] |
nips_2021_yaksQCYcRs | Neural Program Generation Modulo Static Analysis | State-of-the-art neural models of source code tend to be evaluated on the generation of individual expressions and lines of code, and commonly fail on long-horizon tasks such as the generation of entire method bodies. We propose to address this deficiency using weak supervision from a static program analyzer. Our neurosymbolic method allows a deep generative model to symbolically compute, using calls to a static analysis tool, long-distance semantic relationships in the code that it has already generated. During training, the model observes these relationships and learns to generate programs conditioned on them. We apply our approach to the problem of generating entire Java methods given the remainder of the class that contains the method. Our experiments show that the approach substantially outperforms a state-of-the-art transformer and a model that explicitly tries to learn program semantics on this task, both in terms of producing programs free of basic semantic errors and in terms of syntactically matching the ground truth.
| accept | The paper is well written and clearly motivated, with good results outperforming the most relevant baselines. All reviewers agree this is a good paper and should be accepted.
There are, however, some comments shared by the reviewers that the authors should take into account to improve this work. Most notably the transformer language model baseline chosen for comparison may have been a bit weak given the recent progress in this direction. | train | [
"kb_la7dzdSF",
"I-OUZrEd_Ay",
"Cd_vEVcUfda",
"U-ow4zj4cPr",
"svHA4TUmgj",
"cgn4BQcYJ_d",
"MBH2xElElgR",
"jWSkD2ObdNv",
"-1kkegjHlud",
"n7yXtaBHn_j",
"jBMJ7Bl-cUO",
"WRH9n4VZxC",
"DPmNh5ppoTw"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Leveraging compiler inferences about code, such as from static analyzers, can improve the ability of neural models to reason about code. This paper introduces a method that relies on attribute grammar coupled with static analysis that operates on partial programs, demonstrating that access to this information grea... | [
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"nips_2021_yaksQCYcRs",
"Cd_vEVcUfda",
"U-ow4zj4cPr",
"cgn4BQcYJ_d",
"jWSkD2ObdNv",
"MBH2xElElgR",
"DPmNh5ppoTw",
"WRH9n4VZxC",
"kb_la7dzdSF",
"jBMJ7Bl-cUO",
"nips_2021_yaksQCYcRs",
"nips_2021_yaksQCYcRs",
"nips_2021_yaksQCYcRs"
] |
nips_2021_8vwDIC9pEb | Unfolding Taylor's Approximations for Image Restoration | Deep learning provides a new avenue for image restoration, which demands a delicate balance between fine-grained details and high-level contextualized information during recovering the latent clear image. In practice, however, existing methods empirically construct encapsulated end-to-end mapping networks without deepening into the rationality, and neglect the intrinsic prior knowledge of restoration task. To solve the above problems, inspired by Taylor’s Approximations, we unfold Taylor’s Formula to construct a novel framework for image restoration. We find the main part and the derivative part of Taylor’s Approximations take the same effect as the two competing goals of high-level contextualized information and spatial details of image restoration respectively. Specifically, our framework consists of two steps, which are correspondingly responsible for the mapping and derivative functions. The former first learns the high-level contextualized information and the later combines it with the degraded input to progressively recover local high-order spatial details. Our proposed framework is orthogonal to existing methods and thus can be easily integrated with them for further improvement, and extensive experiments demonstrate the effectiveness and scalability of our proposed framework.
| accept | This paper proposes a new method for image restoration using deep neural networks based on decomposing the problem using a Taylor approximation. 3 out of 4 reviewers appreciate the contribution as useful and novel. The most negative reviewer (79vC) has the reasonable concern that the proposed method seems mostly empirical and is not well supported by theory. Unfortunately this reviewer only provided a short review and did not engage in committee discussions or acknowledge the author response: for this reason we cannot give it too much weight in making a decision. The other reviewers acknowledged the author response and are satisfied by it. No serious concerns were identified that would invalidate the contribution made by this paper. The author response looks convincing to me as well: Authors, please integrate this discussion and all promised new results into the camera ready version of the paper. | train | [
"-VEGRtxV4Q",
"hzdv_zkoOCw",
"LCH65rR8txx",
"l09FOCSyOCn",
"KfIklg9I0q",
"WRapVRlGCZJ",
"icxEHFJKN5K",
"WGV08MGRSGB",
"MZSpGCR3DeO",
"8XdqwHWVKK3",
"lck2cukO8gY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the rebuttal. The authors address some of my concerns including parameter size difference by adding more make-up experiments. The authors claimed they replace the implementation with better module, while that will change the results of the table in the original version. I am not sure whether it is fea... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
5
] | [
"WGV08MGRSGB",
"lck2cukO8gY",
"8XdqwHWVKK3",
"WGV08MGRSGB",
"MZSpGCR3DeO",
"lck2cukO8gY",
"8XdqwHWVKK3",
"nips_2021_8vwDIC9pEb",
"nips_2021_8vwDIC9pEb",
"nips_2021_8vwDIC9pEb",
"nips_2021_8vwDIC9pEb"
] |
nips_2021_c3u5qyZawqh | Metropolis-Hastings Data Augmentation for Graph Neural Networks | Graph Neural Networks (GNNs) often suffer from weak-generalization due to sparsely labeled data despite their promising results on various graph-based tasks. Data augmentation is a prevalent remedy to improve the generalization ability of models in many domains. However, due to the non-Euclidean nature of data space and the dependencies between samples, designing effective augmentation on graphs is challenging. In this paper, we propose a novel framework Metropolis-Hastings Data Augmentation (MH-Aug) that draws augmented graphs from an explicit target distribution for semi-supervised learning. MH-Aug produces a sequence of augmented graphs from the target distribution enables flexible control of the strength and diversity of augmentation. Since the direct sampling from the complex target distribution is challenging, we adopt the Metropolis-Hastings algorithm to obtain the augmented samples. We also propose a simple and effective semi-supervised learning strategy with generated samples from MH-Aug. Our extensive experiments demonstrate that MH-Aug can generate a sequence of samples according to the target distribution to significantly improve the performance of GNNs.
| accept | This paper proposed to do graph data augmentation with a carefully designed target distribution and the corresponding proposal distribution. The experiments show that the proposed augmentation method can be more effective than existing baselines. During the rebuttal period, all the reviewers found that the authors have effectively resolved their concerns, and are generally positive about the paper. Personally I also like the design of the target distribution which is the key insight of the paper. However none of us are super excited about the outcome, partially due to the significance of the contribution and the potential implication for more general settings. Nevertheless, this paper is a sound contribution in the current scope. No matter of the outcome, we highly encourage the authors to incorporate the additional experiments and clarifications in to the revision of the paper to make it more solid. | train | [
"SdtEATV0nm6",
"tOYG7HxZuw_",
"hkWx4Fniwkp",
"MREGGeNqzbh",
"rZ6tlPMGiaO",
"oTnyXCwjft",
"zu15qgkxbFR",
"7gamoN3Aitl",
"rpyz5EQQIY9",
"3vV4CEJTwQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\nthank you for answers, there are no further questions from my side.",
" Thank you for your time and efforts in reviewing our paper. We have responded to your comments and we believe that most of your suggestions have been resolved. Could you please go over our responses and let us know if you hav... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"tOYG7HxZuw_",
"rZ6tlPMGiaO",
"zu15qgkxbFR",
"oTnyXCwjft",
"rpyz5EQQIY9",
"3vV4CEJTwQ",
"7gamoN3Aitl",
"nips_2021_c3u5qyZawqh",
"nips_2021_c3u5qyZawqh",
"nips_2021_c3u5qyZawqh"
] |
nips_2021_MrAN2U5EPZZ | Strategic Behavior is Bliss: Iterative Voting Improves Social Welfare | Recent work in iterative voting has defined the additive dynamic price of anarchy (ADPoA) as the difference in social welfare between the truthful and worst-case equilibrium profiles resulting from repeated strategic manipulations. While iterative plurality has been shown to only return alternatives with at most one less initial votes than the truthful winner, it is less understood how agents' welfare changes in equilibrium. To this end, we differentiate agents' utility from their manipulation mechanism and determine iterative plurality's ADPoA in the worst- and average-cases. We first prove that the worst-case ADPoA is linear in the number of agents. To overcome this negative result, we study the average-case ADPoA and prove that equilibrium winners have a constant order welfare advantage over the truthful winner in expectation. Our positive results illustrate the prospect for social welfare to increase due to strategic manipulation.
| accept | Thank you for your submission. The reviewers reached a consensus that this paper is well-written and presents interesting results. Please follow the reviewers' suggestions to improve the paper in the next revision. | train | [
"Z-j4ShPUU9",
"wmGG7WEwRVJ",
"biojIkfbwFC",
"Siue1qsem9D",
"-BvkfemrzLC",
"XrfP6cHKq_",
"klJK600jPwZ",
"YEqdiJ3sbzA",
"chkfy0Dl_yi"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response, and I’m looking forward to seeing these edits in the camera-ready version! My score hasn’t changed (7).",
" Thank you for taking the time to read and review our work. We appreciate your feedback and look forward to incorporating your suggestions. Firstly, your summary and understanding... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"-BvkfemrzLC",
"chkfy0Dl_yi",
"YEqdiJ3sbzA",
"klJK600jPwZ",
"XrfP6cHKq_",
"nips_2021_MrAN2U5EPZZ",
"nips_2021_MrAN2U5EPZZ",
"nips_2021_MrAN2U5EPZZ",
"nips_2021_MrAN2U5EPZZ"
] |
nips_2021_ZAOrF0mYSYU | Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations | Ayush Sekhari, Christoph Dann, Mehryar Mohri, Yishay Mansour, Karthik Sridharan | accept | The paper considers the problem of learning near-optimal policies in low-rank MDPs and arbitrary policy classes, and proposes an algorithm with nontrivial guarantees in this setting, constituting one of the first theoretical guarantees in any agnostic learning scenario within the context of reinforcement learning. This is a problem of significant interest, and the reviewers all agreed that the paper provides some interesting technical innovations and clearly advances the state of the art in the area.
Overall, this is an excellent paper that clearly deserves to be published at the conference. | train | [
"K2TVPp7vzZB",
"odYG5LBVrWs",
"MMVYkNpLoF",
"D5hrXbPIVTp",
"G9bDJYUDIrb",
"kaqES7uacS",
"ypJU6M8Lb7C",
"fbkhOp2Ule",
"QkOeGu9AB16",
"aCyhIelc_SO"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies reinforcement learning in low-rank MDPs under an agnostic perspective, where the agent does not assume realizability of the optimal value function in some given functional space but rather has to find a near-optimal policy in a given policy class. For this problem, the authors propose a policy se... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2021_ZAOrF0mYSYU",
"G9bDJYUDIrb",
"ypJU6M8Lb7C",
"aCyhIelc_SO",
"K2TVPp7vzZB",
"QkOeGu9AB16",
"fbkhOp2Ule",
"nips_2021_ZAOrF0mYSYU",
"nips_2021_ZAOrF0mYSYU",
"nips_2021_ZAOrF0mYSYU"
] |
nips_2021_uTqvj8i3xv | Functional Regularization for Reinforcement Learning via Learned Fourier Features | We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis and show that it improves the sample efficiency of both state-based and image-based RL. We perform infinite-width analysis of our architecture using the Neural Tangent Kernel and theoretically show that tuning the initial variance of the Fourier basis is equivalent to functional regularization of the learned deep network. That is, these learned Fourier features allow for adjusting the degree to which networks underfit or overfit different frequencies in the training data, and hence provide a controlled mechanism to improve the stability and performance of RL optimization. Empirically, this allows us to prioritize learning low-frequency functions and speed up learning by reducing networks' susceptibility to noise in the optimization process, such as during Bellman updates. Experiments on standard state-based and image-based RL benchmarks show clear benefits of our architecture over the baselines.
| accept | This paper considered the learned Fourier features in RL as an alternative parametrization to the vanilla MLP for Q network. The author argued that the benefits of the Fourier features can **i)** reduces high-frequency noise; **ii)** the noise is the major performance killer in RL. Then, the authors showed the better performances of the proposed parametrized Q in off-policy RL tasks on DeepMind Control Suite from both state and image.
Besides the novelty of using Fourier features raised by several reviewers (jV7a, aHKm, Susc), the major issue of this paper is that the claimed benefits of Fourier feature layer are indeed the main reason leading to better performances (N6iX and Susc). Specifically, the empirical study does not provide convincing support of claim, while the theoretical justification does not offer strong insight (Reviewer N6ix) and actually may not able to explain the empirical results.
- As reviewer Susc mentioned, the motivation is clearly justified: **i)** reduces noise; **ii)** the noise is the major performance killer in RL.
- The difference between vanilla MLP and Fourier feature is in fact different parametrization families. With square loss in Eq. 4, the optimal solution actually is $E[y|x]$ if the parametrization is realizable. All parametrization is actually able to reduce the noise. How can the author guarantee the superior comes from uncertainty canceling, not **approximation error** reduction is never discussed.
- In the theoretical analysis part, the author calculate NTK, which relies on the assumption in the **limit case** and the neural network weights are not far away from initialization. In the experiment, only around 1000 random feature is used. As many recent papers [1, 2 and to name a few] suggest the situation is different in finite case. With this gap in mind, it is not clear to me the phenomena from kernel can be straightforwardly extend to random feature. To support the claim, the author may first compare the NTK vs. random feature.
- The authors also provide the contraction proof for kernel feature. **i)** this proof is not clear to me whether the relu layer is used; **ii)** in fact, how to characterize the benefits of the contractition vs. noise cancellation should be discussed.
Let me quote the suggestion from Reviewer Susc "validating all hypotheses should be a main feature of papers like this one, not just showing improvements." As a scientific paper, we should make careful claim with supportive evidence. The paper indeed reveals some interesting phenomena, but with some overclaims without convincing support.
Minor:
The term "functional regularization" might not be appropriate and not aligned with its vanilla meaning. In this paper, it is in fact parametrization family selection.
Since most of the reviewers agreed to accept this paper, I will still recommend acceptance for this submission. However, I would suggest the authors can revise the paper, taking the comments in reviews and meta-review into account. Specifically, reduce the over-simplified justification and the misleading claim, just make the claims which the empirical evidence can support to avoid the potential misleading of the whole community.
[1] Fort, Stanislav, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, and Surya Ganguli. "Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel." arXiv preprint arXiv:2010.15110 (2020).
[2] Allen-Zhu, Zeyuan, and Yuanzhi Li. "What can ResNet learn efficiently, going beyond kernels?." arXiv preprint arXiv:1905.10337 (2019). | train | [
"7ccT87NFZgv",
"eieaS3BfiTL",
"NeDGeOWP73",
"5Ne6MvT3Kgh",
"1BshZOmgGs8",
"HGQYICQdhEG",
"Xtk7gpFmCkm",
"-HLn49Un-T1",
"xODs_5kWv7H",
"jOoPGL5gPY-",
"Ejriv5qvxNN",
"GJZPNv2LRA",
"xlPKqpzL0Ep",
"1bHPP6Xj5-4s",
"BTMX405u4Q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes to use input features concatenated with learned Fourier feature (LF2) basis to control the degree of fitting various frequencies in the training data. Theoretical analysis under NTK regime with 2 layer network shows that the initial variance of Fourier basis controls this degree. In value-based ... | [
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_uTqvj8i3xv",
"NeDGeOWP73",
"5Ne6MvT3Kgh",
"-HLn49Un-T1",
"Xtk7gpFmCkm",
"nips_2021_uTqvj8i3xv",
"Ejriv5qvxNN",
"jOoPGL5gPY-",
"nips_2021_uTqvj8i3xv",
"7ccT87NFZgv",
"HGQYICQdhEG",
"1bHPP6Xj5-4s",
"BTMX405u4Q",
"nips_2021_uTqvj8i3xv",
"nips_2021_uTqvj8i3xv"
] |
nips_2021_wTLc2HcWLIM | Adaptive First-Order Methods Revisited: Convex Minimization without Lipschitz Requirements | We propose a new family of adaptive first-order methods for a class of convex minimization problems that may fail to be Lipschitz continuous or smooth in the standard sense. Specifically, motivated by a recent flurry of activity on non-Lipschitz (NoLips) optimization, we consider problems that are continuous or smooth relative to a reference Bregman function – as opposed to a global, ambient norm (Euclidean or otherwise). These conditions encompass a wide range ofproblems with singular objective, such as Fisher markets, Poisson tomography, D-design, and the like. In this setting, the application of existing order-optimal adaptive methods – like UnixGrad or AcceleGrad – is not possible, especially in the presence of randomness and uncertainty. The proposed method, adaptive mirror descent (AdaMir), aims to close this gap by concurrently achieving min-max optimal rates in problems that are relatively continuous or smooth, including stochastic ones.
| accept | Overall the reviewers liked the paper and found the contributions strong enough for NeurIPS. I have read the paper, and while I have an issue with the claim of the authors that their algorithm is adaptive, while it requires the specification of the "smoothing function" h(), I still appreciate the overall technical contribution provided in the analysis and obtained results.
| train | [
"iA8YtsToss4",
"hLZmTaH-1Zf",
"3lGeUasVPNM",
"KnN-oe6cPF5",
"h3v1CbZVbz",
"F8V0LVaRUNS",
"8a28hy9SJcf",
"RZ1uDMJ6Pf8",
"OOjEhQCkDRw",
"d3VPAjb__uT"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nThanks a lot for reaching out, we are very happy for the opportunity to discuss in more detail our proofs and the difficulties we encountered - both technical and conceptual.\n\nIn a nutshell, the principal challenges we faced stem from the fact that our paper simultaneously tackles several issu... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"hLZmTaH-1Zf",
"3lGeUasVPNM",
"d3VPAjb__uT",
"OOjEhQCkDRw",
"RZ1uDMJ6Pf8",
"8a28hy9SJcf",
"nips_2021_wTLc2HcWLIM",
"nips_2021_wTLc2HcWLIM",
"nips_2021_wTLc2HcWLIM",
"nips_2021_wTLc2HcWLIM"
] |
nips_2021_HbaQ4FEh-6 | Adapting to function difficulty and growth conditions in private optimization | Hilal Asi, Daniel Levy, John C. Duchi | accept | Despite some suggestions for additional results that could round out the paper, and presentation suggestions for clarifying the paper's conceptual contributions, I would like to recommend it for acceptance. In particular:
- As an algorithmic paper, providing a new suite of algorithms and complementing them with matching lower bounds tells a complete story. Experiments would be nice, but given the other results I do not believe they are necessary.
- Concerning Reviewer pyyH's comment about inexact projections, the author's proposed modifications should indeed be implemented carefully. Ensuring that the approximate projections do not (even in the worst-case over the projection algorithm) significantly increase sensitivity affects the privacy guarantee of the final algorithm. So this is an important effect to be accounted for in detail. | val | [
"TZor10K_U0C",
"ywjVooWt9H1",
"pBWNA2zcPms",
"3tq7J_fF3q",
"D8S9UugvbzV",
"iCp4SCgZW-"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for their valuable time and feedback and for saying that the paper is “well-written”, “has a nice flow of ideas” and that it constitutes a “significant contribution”.\n\n- **[Question about estimating $\\kappa$]**: We do not currently know of ways to estimate the growth constan... | [
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
5,
3,
2
] | [
"iCp4SCgZW-",
"D8S9UugvbzV",
"3tq7J_fF3q",
"nips_2021_HbaQ4FEh-6",
"nips_2021_HbaQ4FEh-6",
"nips_2021_HbaQ4FEh-6"
] |
nips_2021_-ioMuxJ6ud9 | Support Recovery of Sparse Signals from a Mixture of Linear Measurements | Soumyabrata Pal, Arya Mazumdar, Venkata Gandikota | accept | All reviewers have very positively received this paper; all marks match to a clear accept, that I will follow. The concerns by the reviewers seem to have been fully answered. Some reviewers pointed out the lack of numerical experiments, then provided by the authors answers. I believe that, as promised to some reviewers, the authors should definitely add these experiments, possibly in an appendix. Additionally, it seems that some definitions could be slightly more furnished to improve readability. I'm confident that the authors will implement these few improvements for the final version as suggested by the reviewers. Given the overall quality of redaction of the paper and the originality of the results, I recommend acceptance for a poster. | train | [
"hQSDKU82Zl",
"hkAQPBoCQwO",
"Oce_Xct8oLj",
"__zbFIhOqh_",
"R9TSNw1xE9Z",
"sqmF1204Uoc",
"UXCO-4UpILO",
"HG11slrJ7_A",
"_efw_SeOpIy",
"tk8FshFjimc"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I'd like to thank the authors for answering my questions. I will keep my original score.",
" The paper addresses a generalized problem in 1-bit compressed sensing, considered in several papers [9, 19, 27, 29]. The problem is to recovery the supports of ALL of l unknown k-sparse vectors from 1-bit noisy measurem... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
2,
1
] | [
"UXCO-4UpILO",
"nips_2021_-ioMuxJ6ud9",
"R9TSNw1xE9Z",
"HG11slrJ7_A",
"sqmF1204Uoc",
"hkAQPBoCQwO",
"_efw_SeOpIy",
"tk8FshFjimc",
"nips_2021_-ioMuxJ6ud9",
"nips_2021_-ioMuxJ6ud9"
] |
nips_2021_9_CTZ_xdQk | Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity | Two of the most prominent algorithms for solving unconstrained smooth games are the classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic consensus optimization (SCO) [Mescheder et al., 2017]. SGDA is known to converge to a stationary point for specific classes of games, but current convergence analyses require a bounded variance assumption. SCO is used successfully for solving large-scale adversarial problems, but its convergence guarantees are limited to its deterministic variant. In this work, we introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO under this condition for solving a class of stochastic variational inequality problems that are potentially non-monotone. We prove linear convergence of both methods to a neighborhood of the solution when they use constant step-size, and we propose insightful stepsize-switching rules to guarantee convergence to the exact solution. In addition, our convergence guarantees hold under the arbitrary sampling paradigm, and as such, we give insights into the complexity of minibatching.
| accept | This paper introduces a new expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO under this condition for solving a class of stochastic variational inequality problems that are potentially non-monotone. Reviewers find the new conditions very interesting, and very mild in many settings. Therefore the new convergence results under these mild settings are rather strong. The techniques developed here are also of broad interests.
| train | [
"A1l-MaHlAid",
"n88EIaNQft2",
"Aqq4RrYiqpm",
"CcvFgJKP5om",
"waI0QHm7Tm",
"sB7JjYdcCHt",
"IphvN4anCFG",
"ge8PYwWYQwK",
"SGILYEM31n9",
"FtNOeTCLCQ_",
"BX5bsm1fU5Q",
"cOWv-aJJQI",
"-yBm3KbiB1A",
"tLEEIcaTGTz",
"WCjlV8vQUHi",
"hNpijhmqZtT",
"jpDbRfRMfsK",
"A8WIz3hRcGY",
"NB1a6bizreE... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" OK. Great. ",
" Point (1). In our current proofs, the EC is required, as it allows us to obtain an upper bound for the $\\mathbb{E}_{\\cal D}||\\xi_v(x)||^2$, (see Lemma 3.4). For this reason, we believe that EC is the natural notion for the stochastic setting. \\\nIn the deterministic setting (distribution $\\... | [
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8
] | [
-1,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"n88EIaNQft2",
"BX5bsm1fU5Q",
"nips_2021_9_CTZ_xdQk",
"IphvN4anCFG",
"nips_2021_9_CTZ_xdQk",
"SjP6i06fGN",
"SGILYEM31n9",
"FtNOeTCLCQ_",
"NB1a6bizreE",
"A8WIz3hRcGY",
"PXmKdrR9Z_e",
"fzcbAaBcHex",
"fzcbAaBcHex",
"fzcbAaBcHex",
"fzcbAaBcHex",
"fzcbAaBcHex",
"Aqq4RrYiqpm",
"Aqq4RrYiq... |
nips_2021_xJYek6zantM | Tighter Expected Generalization Error Bounds via Wasserstein Distance | Borja Rodríguez Gálvez, German Bassi, Ragnar Thobaben, Mikael Skoglund | accept | Overall, the reviewers were positive about the paper. My main concern is regarding its novelty: it seems that this paper combines Wang et al. 2019 with the individual sample approach proposed by Bu et al., 2020 and with the conditional mutual information approach proposed by Steinke and Zakynthinou, 2020, respectively. There are also limitations in terms of how useful the Wasserstein-based generalization bounds proposed in this paper are in practice, particularly regarding estimation of Wasserstein distance from data. The bounds seem difficult to instantiate even for very basic examples. As noted by several reviewers, it is unclear how the paper's results would translate to more complex models. Note that recent work in the area of information-theoretic generalization bounds provide numerical results to fairly sophisticated settings (e.g., the analysis of SGLD by Negrea and Haghifam et. al.) and, despite the bounds themselves often being vacuous, at least they correlate with the generalization behavior of neural networks on real-world data. The reviewers also pointed out several changes to the paper that I view as "mandatory," especially the points raised by reviewer 61bD.
However, I do find that this paper moves the needle on generalization bounds forward. Despite mostly relying on a combination of existing theoretical tools used in the area of information-theoretic generalization bounds, it combines these tools in a novel way. The limitations/future work section is particularly thoughtful and reveals interesting new research directions. Together, the positive aspects of the paper outweigh its limitations, making me lean towards acceptance --- with the caveat that the authors should add the changes promised in their rebuttal. | train | [
"1aTHSB6Ubd",
"D5zY7rRjTeY",
"GSrMIwmxORp",
"Cck-p6x3QtF",
"Ji6RHqxMNIU",
"0MBChmTsANG",
"XNlq14_zx2q",
"IYHjIHEd4tU",
"BlKulAlBJZA",
"BCEwBrK9K2N",
"bNa1CMypLi7",
"e6nDA3Mwy8K",
"NmOY-4mh6y2",
"rW13C4kiaZ2",
"tC63ISQa_zS"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer Bm5z,\n\nThank you for your comment. Following your suggestion, we added a small commentary about the non-linearity of the individual mutual information bound (and how it leads to a sub-optimal bound in the Gaussian location model) at the end of page 5.",
" Thanks for your clarification. My questi... | [
-1,
-1,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
2,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"D5zY7rRjTeY",
"BlKulAlBJZA",
"0MBChmTsANG",
"nips_2021_xJYek6zantM",
"tC63ISQa_zS",
"XNlq14_zx2q",
"nips_2021_xJYek6zantM",
"tC63ISQa_zS",
"rW13C4kiaZ2",
"XNlq14_zx2q",
"XNlq14_zx2q",
"XNlq14_zx2q",
"Cck-p6x3QtF",
"nips_2021_xJYek6zantM",
"nips_2021_xJYek6zantM"
] |
nips_2021_8n0eirHTH7I | Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization | Deeksha Adil, Brian Bullins, Sushant Sachdeva | accept | The reviewers and area chair agree that the technical contribution of the paper is significant as it yields a simpler algorithm for an important class of problems and extends the applicability of the width-reduced approach of [CKM+11]. The main weakness of the paper is the lack of clear explanations behind the technical analysis and comparison with existing similar methods. However, the authors have addressed the reviewers' questions and provided some explanation in the rebuttal phase. Assuming these explanations will find their way in the final version, I believe the submission should be accepted. | train | [
"3jlFRPaxdVr",
"R4KBRF4O4jw",
"ImoNqUoSsK",
"KvMk-0TK7_H",
"T3uSD2IXmXe",
"9omdytOHvl6",
"d9Jy_Wl6cVU",
"g8PXsG3Es_U",
"75gkUrOhmmX",
"FkoDjuTXX0Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for this detailed answer. I understand much better now. However, reading other reviews confirm me in my opinion. It is good work but the paper would benefit from having more informal explanations and more comparisons to other works. I encourage the authors to include in the paper some of the remarks/exp... | [
-1,
-1,
-1,
-1,
6,
6,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
1,
4
] | [
"g8PXsG3Es_U",
"nips_2021_8n0eirHTH7I",
"9omdytOHvl6",
"T3uSD2IXmXe",
"nips_2021_8n0eirHTH7I",
"nips_2021_8n0eirHTH7I",
"FkoDjuTXX0Q",
"75gkUrOhmmX",
"nips_2021_8n0eirHTH7I",
"nips_2021_8n0eirHTH7I"
] |
nips_2021_Wlx0DqiUTD_ | Bridging the Imitation Gap by Adaptive Insubordination | In practice, imitation learning is preferred over pure reinforcement learning whenever it is possible to design a teaching agent to provide expert supervision. However, we show that when the teaching agent makes decisions with access to privileged information that is unavailable to the student, this information is marginalized during imitation learning, resulting in an "imitation gap" and, potentially, poor results. Prior work bridges this gap via a progression from imitation learning to reinforcement learning. While often successful, gradual progression fails for tasks that require frequent switches between exploration and memorization. To better address these tasks and alleviate the imitation gap we propose 'Adaptive Insubordination' (ADVISOR). ADVISOR dynamically weights imitation and reward-based reinforcement learning losses during training, enabling on-the-fly switching between imitation and exploration. On a suite of challenging tasks set within gridworlds, multi-agent particle environments, and high-fidelity 3D simulators, we show that on-the-fly switching with ADVISOR outperforms pure imitation, pure reinforcement learning, as well as their sequential and parallel combinations.
| accept | There was a strong consensus among reviewers that this paper should be accepted. The paper proposes a method to address the imitation gap - the setting where the expert has access to privileged information making imitation impossible for the student. The paper shows both the efficacy of the method and the importance of the imitation gap for performance. They only point of improvement would be to look at more diverse environment. Currently the paper is focused on visual navigation. | train | [
"pj6no3I4seA",
"fx5TEH69Zs3",
"XfFd16Meslq",
"LvuMXptdCvk",
"7CTTyWbpkdb",
"yxnMjRLZJB9",
"0YOTrtSD0SH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed answer. The answer addressed most of my questions and I think the paper will be in a good form once including the discussion in the rebuttal. Yet, I believe the paper can have much broader impact and better usability by showing (1) experiments on manipulation where a student learns fro... | [
-1,
-1,
-1,
-1,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"fx5TEH69Zs3",
"0YOTrtSD0SH",
"yxnMjRLZJB9",
"7CTTyWbpkdb",
"nips_2021_Wlx0DqiUTD_",
"nips_2021_Wlx0DqiUTD_",
"nips_2021_Wlx0DqiUTD_"
] |
nips_2021_cQLkLAQgZ5I | Adversarial Robustness with Non-uniform Perturbations | Robustness of machine learning models is critical for security related applications, where real-world adversaries are uniquely focused on evading neural network based detectors. Prior work mainly focus on crafting adversarial examples (AEs) with small uniform norm-bounded perturbations across features to maintain the requirement of imperceptibility. However, uniform perturbations do not result in realistic AEs in domains such as malware, finance, and social networks. For these types of applications, features typically have some semantically meaningful dependencies. The key idea of our proposed approach is to enable non-uniform perturbations that can adequately represent these feature dependencies during adversarial training. We propose using characteristics of the empirical data distribution, both on correlations between the features and the importance of the features themselves. Using experimental datasets for malware classification, credit risk prediction, and spam detection, we show that our approach is more robust to real-world attacks. Finally, we present robustness certification utilizing non-uniform perturbation bounds, and show that non-uniform bounds achieve better certification.
| accept | The paper considers an understudied and yet important area of adversarial ML. The paper is very well written. The need to consider attacks adjusted to specific features is intuitive and makes sense. The paper considers real world constraints and evaluates on multiple datasets of distinct domains such as malware, bank fraud and spam. The results showing how the attacks end up looking more like benign samples is compelling, and the extension of the results to certifiable bounds in two different ways is good. After extensive discussions, the added new experiments convinced the reviewers to accept the paper. | train | [
"thFHKzQAG42",
"HdJScydXeJ",
"u1aNNHR4UgV",
"3nPpLTve75_",
"4u2pKpwZNE2",
"dXtMs7K_5_F",
"EkCjdWbYGh",
"XikFfYpBvfM",
"5PmeUXqX7hB",
"lUEDWlxjjV2",
"Z_1QptnQ8Br",
"yAUvols7W_",
"XeLGWpe7tVg",
"kfckCoiHs7",
"118T-9Yze6_",
"yXCGTq1bcGs",
"FmAL1Zi8PZ",
"ofDx6r8Fdc-",
"nGHtyntEm1",
... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"o... | [
"Most research on adversarial perturbations is focused on those within an lp-ball (termed \"uniform\" due to the assumption that relative perturbation costs are uniform within the ball). This paper introduces a non-uniform model of perturbations aimed to better capture feature dependencies and correlations through... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_cQLkLAQgZ5I",
"u1aNNHR4UgV",
"3nPpLTve75_",
"4u2pKpwZNE2",
"dXtMs7K_5_F",
"EkCjdWbYGh",
"XikFfYpBvfM",
"5PmeUXqX7hB",
"lUEDWlxjjV2",
"Z_1QptnQ8Br",
"EeZ-DqqX2qa",
"nips_2021_cQLkLAQgZ5I",
"ieOGbwdCp1x",
"yXCGTq1bcGs",
"FmAL1Zi8PZ",
"nGHtyntEm1",
"AfXQMropHWS",
"nips_2021... |
nips_2021_J9Rc5P4xjT | Container: Context Aggregation Networks | Convolutional neural networks (CNNs) are ubiquitous in computer vision, with a myriad of effective and efficient variations. Recently, Transformers -- originally introduced in natural language processing -- have been increasingly adopted in computer vision. While early adopters continued to employ CNN backbones, the latest networks are end-to-end CNN-free Transformer solutions. A recent surprising finding now shows that a simple MLP based solution without any traditional convolutional or Transformer components can produce effective visual representations. While CNNs, Transformers and MLP-Mixers may be considered as completely disparate architectures, we provide a unified view showing that they are in fact special cases of a more general method to aggregate spatial context in a neural network stack. We present the \model (CONText AggregatIon NEtwoRk), a general-purpose building block for multi-head context aggregation that can exploit long-range interactions \emph{a la} Transformers while still exploiting the inductive bias of the local convolution operation leading to faster convergence speeds, often seen in CNNs. Our \model architecture achieves 82.7 \% Top-1 accuracy on ImageNet using 22M parameters, +2.8 improvement compared with DeiT-Small, and can converge to 79.9 \% Top-1 accuracy in just 200 epochs. In contrast to Transformer-based methods that do not scale well to downstream tasks that rely on larger input image resolutions, our efficient network, named \modellight, can be employed in object detection and instance segmentation networks such as DETR, RetinaNet and Mask-RCNN to obtain an impressive detection mAP of 38.9, 43.8, 45.1 and mask mAP of 41.3, providing large improvements of 6.6, 7.3, 6.9 and 6.6 pts respectively, compared to a ResNet-50 backbone with a comparable compute and parameter size. Our method also achieves promising results on self-supervised learning compared to DeiT on the DINO framework. Code is released at https://github.com/allenai/container.
| accept | The paper is somewhat borderline: three reviewers argue for acceptance, while one leans towards rejection. Based on the reviews, the rebuttal, the discussion, and the paper itself, below is the summary of key pros and cons.
Pros:
1) An interesting framework for unifying several classes of vision models (ConvNet, ViT, Mixer).
2) Good empirical results in terms of performance/compute tradeoff on classification and, especially, on detection.
Cons:
1) Not overwhelmingly novel
(some other points authors successfully addressed in their responses and I strongly recommend to include these responses/results in the paper)
To conclude, this is a solid paper, and the authors quite successfully responded to the concerns of the only negative reviewer, so overall I recommend acceptance. | train | [
"cw9DTgxwCr8",
"7frq40aH0y",
"V0N7XueTYK",
"0n3Z3nK_sTZ",
"Mto6ut9FHFb",
"o3PB0tZroR7",
"KKPa1-MmQR",
"cAps8We_9vn",
"0WukUmI1gSc"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your answers. I still think this is good work and should be accepted, even after reading the other reviews and answers.",
" Q1: For image classification tasks, it requires much more computation (higher FLOPs) than other models with comparable parameters. For the detection tasks, it is hard to directl... | [
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
4
] | [
"0n3Z3nK_sTZ",
"cAps8We_9vn",
"KKPa1-MmQR",
"0WukUmI1gSc",
"o3PB0tZroR7",
"nips_2021_J9Rc5P4xjT",
"nips_2021_J9Rc5P4xjT",
"nips_2021_J9Rc5P4xjT",
"nips_2021_J9Rc5P4xjT"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.