paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29-10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
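Each row below is one NeurIPS 2021 paper together with its full review thread, stored as parallel lists: the i-th entries of review_writers, review_contents, review_ratings, review_confidences, and review_reply_tos all describe the comment with id review_ids[i]. A minimal sketch of the implied record type, assuming rows are parsed into Python objects; the dataclass name is ours, not part of the dataset:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PaperRecord:
    """One row of the dataset; the list fields are index-aligned per comment."""
    paper_id: str          # e.g. "nips_2021_HFvPfNDTShj"
    paper_title: str
    paper_abstract: str
    paper_acceptance: str  # decision string, e.g. "accept" (18 classes overall)
    meta_review: str       # the area chair's meta-review text
    label: str             # dataset split: "train", "val", or "test"
    review_ids: List[str] = field(default_factory=list)
    review_writers: List[str] = field(default_factory=list)      # "official_reviewer" or "author"
    review_contents: List[str] = field(default_factory=list)
    review_ratings: List[int] = field(default_factory=list)      # -1 for comments without a score
    review_confidences: List[int] = field(default_factory=list)  # -1 for comments without a confidence
    review_reply_tos: List[str] = field(default_factory=list)    # paper_id (top-level) or another review id
```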
nips_2021_HFvPfNDTShj | Dimension-free empirical entropy estimation | We seek an entropy estimator for discrete distributions with fully empirical accuracy bounds. As stated, this goal is infeasible without some prior assumptions on the distribution. We discover that a certain information moment assumption renders the problem feasible. We argue that the moment assumption is natural and, in some sense, {\em minimalistic} --- weaker than finite support or tail decay conditions. Under the moment assumption, we provide the first finite-sample entropy estimates for infinite alphabets, nearly recovering the known minimax rates. Moreover, we demonstrate that our empirical bounds are significantly sharper than the state-of-the-art bounds, for various natural distributions and non-trivial sample regimes. Along the way, we give a dimension-free analogue of the Cover-Thomas result on entropy continuity (with respect to total variation distance) for finite alphabets, which may be of independent interest.
| accept | This paper considers the fundamental problem of estimating entropy of discrete distributions and analyzes the error of the empirical estimator for this problem. If one relies on L1 continuity, a dependence on the domain size d is unavoidable (there will be a log d term). The authors define a new notion of continuity for entropy that provides nice bounds on the estimation error of empirical distributions. The paper is worth publishing. | val | [
"DRbFGLAtNFq",
"7KEodhFJPwW",
"AQXbH3-a4I5",
"z5wofUR_Gvj",
"HOqGTdZI546",
"7TZskXhPGGk",
"_ddX_LCmwb5",
"FT8abagMOo0",
"BvmrSbB28_o",
"O1z2IYhj6A"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The main contribution of the paper is to give a continuity bound on entropy: how far the entropies of two categorical distributions are, based on the L1-distance between these distributions. This bound does not depend on dimension, and is instead expressed in terms of a higher information-moment, which is similar ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
4
] | [
"nips_2021_HFvPfNDTShj",
"z5wofUR_Gvj",
"HOqGTdZI546",
"O1z2IYhj6A",
"BvmrSbB28_o",
"FT8abagMOo0",
"DRbFGLAtNFq",
"nips_2021_HFvPfNDTShj",
"nips_2021_HFvPfNDTShj",
"nips_2021_HFvPfNDTShj"
] |
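As the first record illustrates, review_ratings and review_confidences use -1 as a sentinel for comments that carry no score (typically author responses and reviewer follow-ups), and review_reply_tos threads each comment either to the paper_id (a top-level review) or to another comment's id (a reply). A small sketch of how one might consume these conventions, using the ratings from the record above; the helper names are ours:

```python
from collections import defaultdict
from statistics import mean

def mean_rating(ratings):
    """Average over scored reviews only; -1 entries are unscored comments."""
    scored = [r for r in ratings if r != -1]
    return mean(scored) if scored else None

def reply_tree(review_ids, reply_tos):
    """Map each parent id (paper_id or review id) to its direct replies."""
    children = defaultdict(list)
    for rid, parent in zip(review_ids, reply_tos):
        children[parent].append(rid)
    return dict(children)

ratings = [7, -1, -1, -1, -1, -1, -1, 6, 7, 7]  # first record above
print(mean_rating(ratings))  # 6.75, averaged over the four scored reviews
```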
nips_2021_ibD-yZEVBUX | Towards Biologically Plausible Convolutional Networks | Convolutional networks are ubiquitous in deep learning. They are particularly useful for images, as they reduce the number of parameters, reduce training time, and increase accuracy. However, as a model of the brain they are seriously problematic, since they require weight sharing - something real neurons simply cannot do. Consequently, while neurons in the brain can be locally connected (one of the features of convolutional networks), they cannot be convolutional. Locally connected but non-convolutional networks, however, significantly underperform convolutional ones. This is troublesome for studies that use convolutional networks to explain activity in the visual system. Here we study plausible alternatives to weight sharing that aim at the same regularization principle, which is to make each neuron within a pool react similarly to identical inputs. The most natural way to do that is by showing the network multiple translations of the same image, akin to saccades in animal vision. However, this approach requires many translations, and doesn't remove the performance gap. We propose instead to add lateral connectivity to a locally connected network, and allow learning via Hebbian plasticity. This requires the network to pause occasionally for a sleep-like phase of "weight sharing". This method enables locally connected networks to achieve nearly convolutional performance on ImageNet and improves their fit to the ventral stream data, thus supporting convolutional networks as a model of the visual stream.
| accept | This paper received 3 marginal accepts and 1 accept. The reviewers stated that their lack of enthusiasm had to do with the somewhat niche aspect of the work (not because of technical or experimental limitations). Many NeurIPS attendees do care about biology and the work should be of interest to this community as noted by one of the reviewers. The AC thus recommends acceptance.
| train | [
"8IZhkmkTVJ",
"LbJBqnCK3lI",
"WOopp5wkZdc",
"nQWhULjUew6",
"Ea_rxELUZSc",
"eVcSHt-fNdJ",
"3STT22pmv49",
"4qJqJFC9m9-",
"HgVj4iJxzNJ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The main point of this paper is that we can rest easy about the biological plausibility of convolutional networks, because a simple hebbian mechanism - along with a “sleep phase” - provides a method for making initially random weights match up, i.e., makes a locally connected network convolutional. The hebbian le... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
6,
7
] | [
5,
-1,
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_ibD-yZEVBUX",
"nips_2021_ibD-yZEVBUX",
"nips_2021_ibD-yZEVBUX",
"8IZhkmkTVJ",
"HgVj4iJxzNJ",
"WOopp5wkZdc",
"4qJqJFC9m9-",
"nips_2021_ibD-yZEVBUX",
"nips_2021_ibD-yZEVBUX"
] |
nips_2021_kR95DuwwXHZ | DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh | accept | All the reviewers agree that the observations made by this submission are interesting and the proposed dynamic token sparsification method is novel and effective. Some concerns on the experiment details are raised by the reviewers but the authors address them well in the response. A clear acceptance. | train | [
"6LA5BfVomA1",
"njw8SShoZA5",
"9Jv71pyFR-Y",
"LCnKdteY94g",
"5kAx5mYldTN",
"a0phiurGrAj",
"QN9tkS5UYB",
"NBJaNl1qOu4",
"V_P9b0sqAeZ",
"nhAff7RXEUo",
"6Wy4PPgq_l0",
"bT7EGJ8zFzV",
"99jrL93FUA",
"tMS-4ifA9Oa"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces a dynamic network to achieve efficient inference for vision transformers. Specifically, the proposed framework adopts a multi-stage architecture. In each stage, a learnable mask with the Gumbel-softmax training strategy is generated to remove some tokens for efficiency. The distillation loss a... | [
6,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
4,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2021_kR95DuwwXHZ",
"QN9tkS5UYB",
"nips_2021_kR95DuwwXHZ",
"nhAff7RXEUo",
"nips_2021_kR95DuwwXHZ",
"9Jv71pyFR-Y",
"tMS-4ifA9Oa",
"6LA5BfVomA1",
"99jrL93FUA",
"5kAx5mYldTN",
"bT7EGJ8zFzV",
"nips_2021_kR95DuwwXHZ",
"nips_2021_kR95DuwwXHZ",
"nips_2021_kR95DuwwXHZ"
] |
nips_2021_sIDvIyR5I1R | Learning Transferable Adversarial Perturbations | While effective, deep neural networks (DNNs) are vulnerable to adversarial attacks. In particular, recent work has shown that such attacks could be generated by another deep network, leading to significant speedups over optimization-based perturbations. However, the ability of such generative methods to generalize to different test-time situations has not been systematically studied. In this paper, we, therefore, investigate the transferability of generated perturbations when the conditions at inference time differ from the training ones in terms of the target architecture, target data, and target task. Specifically, we identify the mid-level features extracted by the intermediate layers of DNNs as common ground across different architectures, datasets, and tasks. This lets us introduce a loss function based on such mid-level features to learn an effective, transferable perturbation generator. Our experiments demonstrate that our approach outperforms the state-of-the-art universal and transferable attack strategies.
| accept | The paper studied the generalization of adversarial attacks induced by generative methods. The authors found that, by maximizing the difference of mid-level features of neural networks between the clean and perturbed images, the generated perturbation can be better transferred to another task setting such as a different data distribution or a different network structure. The finding is interesting and useful to the community. However, there is still a gap between the attack performance under the white-box and black-box settings. Furthermore, the proposed attack is effective against HGD and R&P defenses, but not effective against PGD and feature-denoising-based defenses. We suggest the authors discuss these in more detail in future revisions. | test | [
"uxwHD_7_Gbc",
"VawAS2aL8Iu",
"GnquoY2ANK",
"MwkD0GfPZBy",
"4zlp5S1o0x",
"HzGUj2DU6J",
"DSNV8GSRzKw",
"D8BBPRoko9",
"a2rVIZUYTeG",
"YBiPnSwSk9m",
"AanzF-hTIOk",
"D2pagg6T3pS",
"fzpILLwbnlj"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors for the response. I think the further results help to make the claim clearer but I think the idea is not that novel so I keep my rating. ",
" Thank you for your encouraging comments. We will definitely incorporate the additional visualizations and explanations in the final version.",
"T... | [
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"YBiPnSwSk9m",
"MwkD0GfPZBy",
"nips_2021_sIDvIyR5I1R",
"4zlp5S1o0x",
"HzGUj2DU6J",
"a2rVIZUYTeG",
"nips_2021_sIDvIyR5I1R",
"fzpILLwbnlj",
"GnquoY2ANK",
"D2pagg6T3pS",
"DSNV8GSRzKw",
"nips_2021_sIDvIyR5I1R",
"nips_2021_sIDvIyR5I1R"
] |
nips_2021_xmJsuh8xlq | PortaSpeech: Portable and High-Quality Generative Text-to-Speech | Non-autoregressive text-to-speech (NAR-TTS) models such as FastSpeech 2 and Glow-TTS can synthesize high-quality speech from the given text in parallel. After analyzing two kinds of generative NAR-TTS models (VAE and normalizing flow), we find that: VAE is good at capturing the long-range semantics features (e.g., prosody) even with small model size but suffers from blurry and unnatural results; and normalizing flow is good at reconstructing the frequency bin-wise details but performs poorly when the number of model parameters is limited. Inspired by these observations, to generate diverse speech with natural details and rich prosody using a lightweight architecture, we propose PortaSpeech, a portable and high-quality generative text-to-speech model. Specifically, 1) to model both the prosody and mel-spectrogram details accurately, we adopt a lightweight VAE with an enhanced prior followed by a flow-based post-net with strong conditional inputs as the main architecture. 2) To further compress the model size and memory footprint, we introduce the grouped parameter sharing mechanism to the affine coupling layers in the post-net. 3) To improve the expressiveness of synthesized speech and reduce the dependency on accurate fine-grained alignment between text and speech, we propose a linguistic encoder with mixture alignment combining hard word-level alignment and soft phoneme-level alignment, which explicitly extracts word-level semantic information. Experimental results show that PortaSpeech outperforms other TTS models in both voice quality and prosody modeling in terms of subjective and objective evaluation metrics, and shows only a slight performance degradation when reducing the model parameters to 6.7M (about 4x model size and 3x runtime memory compression ratio compared with FastSpeech 2). Our extensive ablation studies demonstrate that each design in PortaSpeech is effective.
| accept | The paper presents an interesting blend of VAE- and flow-based approaches for TTS. The reviewers raised several points about the comparisons -- including that some of the baselines are possibly not as good as the original work, since original implementations were not released and third party implementations had to be used for comparison. The authors address a lot of these concerns in the discussions and added analyses and clarification that I hope will make it to the final submission as they strengthen the presentation significantly. Thanks to the reviewers for their constructive suggestions. | val | [
"EYQwFe_Tbg7",
"Qujny3snqLe",
"dlBTsc_ZKmC",
"5sNdUQC9rK",
"FDSz9lhi20r",
"P6KkQztwYQO",
"TSbCpdBtHi",
"i0Sa2dqQHhw",
"AnDqIPmIzJ2",
"D9cvfLmakTX",
"PcACz-CUNY0",
"G0kWFAnwHx5",
"voJduH4Uluz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors propose PortaSpeech, a non-autoregressive model that can sythesize high-quality speech with reduced model size. Based on the experiment results, PortaSpeech slightly outperforms counterpart models in speech quality and shows only a slight performance degradtaion when reducing the model size. Strength:... | [
6,
-1,
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
3,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
2
] | [
"nips_2021_xmJsuh8xlq",
"i0Sa2dqQHhw",
"D9cvfLmakTX",
"nips_2021_xmJsuh8xlq",
"AnDqIPmIzJ2",
"G0kWFAnwHx5",
"nips_2021_xmJsuh8xlq",
"PcACz-CUNY0",
"5sNdUQC9rK",
"voJduH4Uluz",
"EYQwFe_Tbg7",
"TSbCpdBtHi",
"nips_2021_xmJsuh8xlq"
] |
nips_2021_l2UWXn5iBQI | Exponential Graph is Provably Efficient for Decentralized Deep Training | Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, PAN PAN, Wotao Yin | accept | This paper deals with fully decentralized learning with SGD and shows that network topologies in the form of exponential graphs provide fast convergence and accurate models.
The reviews were quite positive and the author response helped to consolidate the scores. Overall, the theoretical and empirical results were found to be strong and relevant for practitioners in an increasingly popular topic. Therefore, the paper is accepted. | train | [
"_1WGzh6WYuA",
"0U347Qt3TMM",
"KdrKMQPOcUI",
"84LxVqveuQ",
"GLswpI7RHBI",
"HoPzX4gpJfr",
"SIzKQaZgYoI"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies static exponential and one-peer exponential graphs for Decentralized momentum SGD. The results show that these two exponential graphs have a better balance between per-iteration communication and iteration complexity compared to the existing communication topology. Especially, they show that on... | [
7,
-1,
-1,
-1,
-1,
8,
6
] | [
3,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_l2UWXn5iBQI",
"SIzKQaZgYoI",
"SIzKQaZgYoI",
"HoPzX4gpJfr",
"_1WGzh6WYuA",
"nips_2021_l2UWXn5iBQI",
"nips_2021_l2UWXn5iBQI"
] |
nips_2021_3ccoZ40Us0N | CLIP-It! Language-Guided Video Summarization | A generic video summary is an abridged version of a video that conveys the whole story and features the most important scenes. Yet the importance of scenes in a video is often subjective, and users should have the option of customizing the summary by using natural language to specify what is important to them. Further, existing models for fully automatic generic summarization have not exploited available language models, which can serve as an effective prior for saliency. This work introduces CLIP-It, a single framework for addressing both generic and query-focused video summarization, typically approached separately in the literature. We propose a language-guided multimodal transformer that learns to score frames in a video based on their importance relative to one another and their correlation with a user-defined query (for query-focused summarization) or an automatically generated dense video caption (for generic video summarization). Our model can be extended to the unsupervised setting by training without ground-truth supervision. We outperform baselines and prior work by a significant margin on both standard video summarization datasets (TVSum and SumMe) and a query-focused video summarization dataset (QFVS). Particularly, we achieve large improvements in the transfer setting, attesting to our method's strong generalization capabilities.
| accept | This submission received the following final ratings: 6, 6, 7.
Reviewer X5gK had initially expressed concerns about the effectiveness of the approach and the value of captions for summarization. The responses on these points provided by the authors convinced the Reviewer to raise the rating to 7.
Reviewer Uj81 gives a rating of 6, but recommends adding details about the sampling of the generated captions to the paper.
Reviewer tjo5 appreciates the clarifications given in the author response and confirms the original rating.
The ACs agree with the recommendation of acceptance. | train | [
"tR54RTg3j0v",
"brCX2yzYvM",
"GO0FyaQDziX",
"lq4Q3--L0X5",
"CE2jZz2vROk",
"oTte7kqEdC_",
"vQis5-cCcp",
"wSEcuuTlgJy",
"8xfjmq6z5tw",
"DvoUFVK52CN",
"f4ZkswsIXsV",
"zk0kKtYDo-E",
"KIUrWIpt3uR"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper describes a video summarization model that is trained to use either user provided query or automatically generated dense captioning to select key-frames as summaries.\n\nAt the core of the approach is an attention mechanism that computes similarity between frame embeddings and text embeddings. Subsequen... | [
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_3ccoZ40Us0N",
"tR54RTg3j0v",
"brCX2yzYvM",
"oTte7kqEdC_",
"nips_2021_3ccoZ40Us0N",
"CE2jZz2vROk",
"8xfjmq6z5tw",
"brCX2yzYvM",
"KIUrWIpt3uR",
"CE2jZz2vROk",
"tR54RTg3j0v",
"nips_2021_3ccoZ40Us0N",
"nips_2021_3ccoZ40Us0N"
] |
nips_2021_gEXbJVhVK5_ | Learning Treatment Effects in Panels with General Intervention Patterns | Vivek Farias, Andrew Li, Tianyi Peng | accept | This paper considers the problem of learning treatment effects with panel data and provides an algorithm with tight theoretical guarantees as well as good empirical performance. All the reviewers agree that the results in this paper are a substantial improvement over existing results for this problem. On the technical side, this problem is closely related to matrix completion but at the same time requires new technical results, which the authors do, building upon recent advances in understanding the convex relaxation formulation of matrix completion through a nonconvex viewpoint. Overall, I feel that this paper has both interesting results as well as techniques that are more broadly interesting. | train | [
"7tGvxHCbjr4",
"D2ZIWxk0i5",
"7Jh6OI0HJ37",
"exEbbXjStg2",
"UqnXzacmp0",
"34OGzQbPced",
"WvCJmiIn8uT",
"x6jAEJ89HHg"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the positive review. In response to your minor points:\n\n1. The generalization to $k>1$: There does not appear to be any technical barrier to generalizing the proof framework to $k>1$, although the proof will be more mechanically involved, particularly when performing the inductive analysis under m... | [
-1,
-1,
-1,
-1,
7,
7,
10,
9
] | [
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"x6jAEJ89HHg",
"UqnXzacmp0",
"WvCJmiIn8uT",
"34OGzQbPced",
"nips_2021_gEXbJVhVK5_",
"nips_2021_gEXbJVhVK5_",
"nips_2021_gEXbJVhVK5_",
"nips_2021_gEXbJVhVK5_"
] |
nips_2021_wZrOOO9XBn | Lossy Compression for Lossless Prediction | Most data is automatically collected and only ever "seen" by algorithms. Yet, data compressors preserve perceptual fidelity rather than just the information needed by algorithms performing downstream tasks. In this paper, we characterize the bit-rate required to ensure high performance on all predictive tasks that are invariant under a set of transformations, such as data augmentations. Based on our theory, we design unsupervised objectives for training neural compressors. Using these objectives, we train a generic image compressor that achieves substantial rate savings (more than 1000x on ImageNet) compared to JPEG on 8 datasets, without decreasing downstream classification performance.
| accept | The paper analyzes "supervised" data compression for downstream predictive tasks, aiming to achieve higher compression rates with negligible or no performance loss. It theoretically characterizes the bitrate required to ensure high performance on all predictive tasks that are invariant under a set of transformations, such as data augmentation, and derives two corresponding variational objectives, VIC and BINCE. The method is validated empirically on image tasks, both training from scratch and adapting an existing SOTA vision transformer model with an entropy bottleneck.
The method can be seen as a generalization of "End-to-End Learning of Compressible Features" (Singh et al., 2020) but significantly expands its scope by considering lossy compression for a collection of related invariant tasks and by providing a rigorous information-theoretical justification of the approach. Original weaknesses pointed out by the reviewers were sufficiently addressed in the review period. Experiments are thorough and exhaustive. | train | [
"zo65jGcs0kU",
"1qdYxU2-13S",
"PkybgSQOLQl",
"b8k9xtWo6Cx",
"ntaGVmJfMuf",
"e-kE-br71E",
"MSpljSsYyrO",
"4tv6FEjFYbg",
"v1Io-5CD-gk",
"PlpiO_9MDY",
"qfuh-eeyo_5",
"30pxX2Dz0Qs",
"CuuNuBlJ5_",
"sV1WNU3ELj",
"1zFGw1qaDh",
"8MgmOEITsn1",
"nC-Dp0b1-i",
"iiXp8_XcuhF",
"uIsg_Ou_OYf",
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"... | [
"This paper proposes methods to introduce a self-supervised approach to neural image compression for classification tasks.\nVIC is a modified neural compressor in which inputs are augmented but target reconstructions are not.\nBINCE is a modified neural compressor same as VIC but trained by an entropy bottleneck an... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_wZrOOO9XBn",
"PlpiO_9MDY",
"nips_2021_wZrOOO9XBn",
"ntaGVmJfMuf",
"PkybgSQOLQl",
"-Pm_oxP0u6",
"-Pm_oxP0u6",
"PkybgSQOLQl",
"LJ7eApXqBh3",
"zo65jGcs0kU",
"LJ7eApXqBh3",
"PkybgSQOLQl",
"PkybgSQOLQl",
"LJ7eApXqBh3",
"nips_2021_wZrOOO9XBn",
"LJ7eApXqBh3",
"LJ7eApXqBh3",
"-P... |
nips_2021_t6EL1tTI3D | From Optimality to Robustness: Adaptive Re-Sampling Strategies in Stochastic Bandits | The stochastic multi-arm bandit problem has been extensively studied under standard assumptions on the arm's distribution (e.g., bounded with known support, exponential family, etc.). These assumptions are suitable for many real-world problems but sometimes they require knowledge (on tails for instance) that may not be precisely accessible to the practitioner, raising the question of the robustness of bandit algorithms to model misspecification. In this paper we study a generic \emph{Dirichlet Sampling} (DS) algorithm, based on pairwise comparisons of empirical indices computed with \textit{re-sampling} of the arms' observations and a data-dependent \textit{exploration bonus}. We show that different variants of this strategy achieve provably optimal regret guarantees when the distributions are bounded and logarithmic regret for semi-bounded distributions with a mild quantile condition. We also show that a simple tuning achieves robustness with respect to a large class of unbounded distributions, at the cost of slightly worse than logarithmic asymptotic regret. We finally provide numerical experiments showing the merits of DS in a decision-making problem on synthetic agriculture data.
| accept | I believe there is a strong consensus around this paper's novelty and its contributions, and on that note it is also a good chance to thank the reviewers and authors for the informative dialogue. From my own reading I can add that while the problem is well motivated, the exposition in various parts and the notational conventions make the paper less accessible than it could be, and this is something that can be improved in the final version (I hope). It may also be worth commenting further on the nature of the histograms and the relevance to ones seen in various modeling instances, e.g., mixture models, to ensure these are not perceived as just esoteric one-off examples. | test | [
"tV62Z7mk_N5",
"jNCF9AU8Cf3",
"2Yq8uVTVsd",
"4tcEp5IkHte",
"54kI-r2Prw",
"c5VR25Sg8p",
"HdCexTg2Pm",
"KVFXM36X3u8",
"D3iRGep36yd",
"c9swBraAyj"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers robustness of MAB algorithm to model misspecification when knowledge on the distribution (e.g., tails) is not directly accessible. They study a generic Dirichlet Sampling algorithm and provide a generic theorem for regret decomposition. They applied the algorithm in three examples: 1) bounded ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_t6EL1tTI3D",
"2Yq8uVTVsd",
"tV62Z7mk_N5",
"nips_2021_t6EL1tTI3D",
"tV62Z7mk_N5",
"c9swBraAyj",
"D3iRGep36yd",
"nips_2021_t6EL1tTI3D",
"nips_2021_t6EL1tTI3D",
"nips_2021_t6EL1tTI3D"
] |
nips_2021_VvGIT6AOGsx | CCVS: Context-aware Controllable Video Synthesis | This presentation introduces a self-supervised learning approach to the synthesis of new videos clips from old ones, with several new key elements for improved spatial resolution and realism: It conditions the synthesis process on contextual information for temporal continuity and ancillary information for fine control. The prediction model is doubly autoregressive, in the latent space of an autoencoder for forecasting, and in image space for updating contextual information, which is also used to enforce spatio-temporal consistency through a learnable optical flow module. Adversarial training of the autoencoder in the appearance and temporal domains is used to further improve the realism of its output. A quantizer inserted between the encoder and the transformer in charge of forecasting future frames in latent space (and its inverse inserted between the transformer and the decoder) adds even more flexibility by affording simple mechanisms for handling multimodal ancillary information for controlling the synthesis process (e.g., a few sample frames, an audio track, a trajectory in image space) and taking into account the intrinsically uncertain nature of the future by allowing multiple predictions. Experiments with an implementation of the proposed approach give very good qualitative and quantitative results on multiple tasks and standard benchmarks.
| accept | The paper proposed a conditional video synthesis model based on the vision transformer architecture and the VQVAE architecture. All the reviewers considered the paper above the bar. The rebuttal successfully answered several questions raised by the reviewers, with two reviewers upgrading their scores to more positive ratings. Overall, the reviewers consider the paper a welcome extension of the transformer plus VQVAE paradigm for video synthesis. The quantitative results were convincing. The meta-reviewer agrees with the assessment and would like to recommend its acceptance. | val | [
"DJQmS73VNt3",
"VoEBr3AxHgm",
"Dbs1mKDcbxO",
"hvyMAy4DQAt",
"i0oLB-IkXUB",
"cyjFzhvfLc-",
"0jOA_CMqajN",
"8kxDcTr3yDF",
"5iuegO2joC8",
"Nc7q32fan6Z",
"ViQsN2WUYvX",
"cCG0N__yjkT",
"BfWdRYxZfq"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" **Diversity metric for conditioned scenarios.** We thank the reviewer with his/her comments. First, the PSNR metric as used in our experiment is in fact a measure of pixel-wise similarity, so pixel-wise distance increases as PSNR decreases. Moreover, we can also conclude there is diversity if the SSIM is not clos... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
5
] | [
"VoEBr3AxHgm",
"ViQsN2WUYvX",
"hvyMAy4DQAt",
"cCG0N__yjkT",
"nips_2021_VvGIT6AOGsx",
"0jOA_CMqajN",
"Nc7q32fan6Z",
"nips_2021_VvGIT6AOGsx",
"nips_2021_VvGIT6AOGsx",
"8kxDcTr3yDF",
"BfWdRYxZfq",
"i0oLB-IkXUB",
"nips_2021_VvGIT6AOGsx"
] |
nips_2021_21uqYo8soks | An Online Riemannian PCA for Stochastic Canonical Correlation Analysis | Zihang Meng, Rudrasis Chakraborty, Vikas Singh | accept | The paper seems strong from both a theoretical and a practical point of view.
There were a few issues with the paper, but it seems that most of them were resolved during the rebuttal.
The argument used for the boundedness of the iterates needs to be made more rigorous and included in the camera-ready version. | test | [
"Z-iq6AFr64",
"cafanesiBg7",
"2M1m1y--17",
"YRBmlhYkJ0A",
"TSVj8Lhkou4",
"4MxNy2oZ3gt",
"zKiLpZKjACx",
"oCwV2Hdwcm",
"SPTSJxfnhbz",
"yRBdgFZlEhG",
"Atnke9DUUoF",
"dR2HSAvwxy6",
"BIv6220IjJB",
"S6k-HYui5KP",
"fB8H-Yx-P9",
"GkQFrxQwZEI",
"pHZ_3VnV6q9",
"XG3KGTjfueg",
"9SPDbdhDS3",
... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"a... | [
" I have slightly increased the score so that we can reach a consensus on this paper (hopefully this helps ACs to make decisions).",
"This paper proposes a stochastic algorithm for canonical correlation analysis (CCA). It is based on a reformulation of CCA so that it can use existing tools in online PCA and manif... | [
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"2M1m1y--17",
"nips_2021_21uqYo8soks",
"YRBmlhYkJ0A",
"TSVj8Lhkou4",
"SPTSJxfnhbz",
"nips_2021_21uqYo8soks",
"oCwV2Hdwcm",
"GkQFrxQwZEI",
"yRBdgFZlEhG",
"dR2HSAvwxy6",
"XG3KGTjfueg",
"cafanesiBg7",
"S6k-HYui5KP",
"fB8H-Yx-P9",
"5a53wuHKuOT",
"4MxNy2oZ3gt",
"cafanesiBg7",
"9SPDbdhDS... |
nips_2021_v4vjMuXF-B | Predify: Augmenting deep neural networks with brain-inspired predictive coding dynamics | Deep neural networks excel at image classification, but their performance is far less robust to input perturbations than human perception. In this work we explore whether this shortcoming may be partly addressed by incorporating brain-inspired recurrent dynamics in deep convolutional networks. We take inspiration from a popular framework in neuroscience: "predictive coding". At each layer of the hierarchical model, generative feedback "predicts" (i.e., reconstructs) the pattern of activity in the previous layer. The reconstruction errors are used to iteratively update the network’s representations across timesteps, and to optimize the network's feedback weights over the natural image dataset--a form of unsupervised training. We show that implementing this strategy into two popular networks, VGG16 and EfficientNetB0, improves their robustness against various corruptions and adversarial attacks. We hypothesize that other feedforward networks could similarly benefit from the proposed framework. To promote research in this direction, we provide an open-sourced PyTorch-based package called \textit{Predify}, which can be used to implement and investigate the impacts of the predictive coding dynamics in any convolutional neural network.
| accept | This paper introduces a framework for incorporating recurrent feedback connections based on predictive coding principles into feed-forward networks. Overall, the reviewers expressed excitement for the integration of neuroscience with ML and thought that the evaluations were convincing. However, there was initially high spread in reviewer scores due to concern over the novelty of the model and approach. After the detailed author responses and a thorough discussion, the consensus moved towards acceptance and two reviewers increased their scores. | train | [
"SHNC-0Llx_4",
"KXeC_gkN0W",
"J2_Okd_leed",
"QHw0ZlPsFlc",
"In_dHUTAab-",
"KBOWhsxAgAt",
"s_8jhw4Pyc",
"qckgDDA7DoV",
"Vrh0ONKFwSX",
"48FdsvarKG",
"z9ehOrr48Xd",
"R4jnLjbi9Y9",
"UU9eVdXsZKH",
"1Rr28Xf8yV",
"R-dJge_7KG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I'd like to thank the authors for their careful response. I am happy to see the new experiment results. I do think the experiments make the paper more comprehensive, and the paper is above the acceptance threshold. ",
"The authors draw inspiration from neuroscience and propose augmenting feedforward neural netw... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"QHw0ZlPsFlc",
"nips_2021_v4vjMuXF-B",
"In_dHUTAab-",
"z9ehOrr48Xd",
"KBOWhsxAgAt",
"Vrh0ONKFwSX",
"UU9eVdXsZKH",
"nips_2021_v4vjMuXF-B",
"48FdsvarKG",
"KXeC_gkN0W",
"1Rr28Xf8yV",
"qckgDDA7DoV",
"R-dJge_7KG",
"nips_2021_v4vjMuXF-B",
"nips_2021_v4vjMuXF-B"
] |
nips_2021_NCDMYD2y5kK | Deep Extrapolation for Attribute-Enhanced Generation | Attribute extrapolation in sample generation is challenging for deep neural networks operating beyond the training distribution. We formulate a new task for extrapolation in sequence generation, focusing on natural language and proteins, and propose GENhance, a generative framework that enhances attributes through a learned latent space. Trained on movie reviews and a computed protein stability dataset, GENhance can generate strongly-positive text reviews and highly stable protein sequences without being exposed to similar data during training. We release our benchmark tasks and models to contribute to the study of generative modeling extrapolation and data-driven design in biology and chemistry.
| accept | The paper presents a method for generating sequences with attributes or characteristics that are not present in the training data. It is based on an encoder-decoder model, trained according to a regularized loss. The applications concern two domains: more positive sentence generation and more stable protein sequence generation. The reviewers agree that the topic, extrapolating to unseen attribute values for sequence generation, is important and that the technical contribution, through the combination of a contrastive loss and a cycle consistency loss, is original.
The initial reviews highlighted weaknesses concerning two main aspects: the positioning w.r.t. the literature and the lack of experimental comparisons with SOTA baselines. The authors provided a strong rebuttal answering with details to the different questions and remarks. They provided comparisons with two new baselines suggested by the reviewers and performed a qualitative human evaluation also suggested in the reviews. They additionally introduced new ablation analyses. The reviewers have appreciated the quality of the answers and in light of this, they all raised their scores. I propose an accept. | train | [
"rBNHxVAiI1l",
"vGrGg-PAZu",
"xWBDO3iwGN",
"Axt6qv_qbMA",
"gBc8TC1zkxq",
"vNbz4bX5Ej-",
"wsmBEDI6tdP",
"DOQ1sXRemaF",
"2F7CkxXUaeR",
"9w1sgHXIDX",
"kaq8wO3lWgB",
"lFrxfrv4Uop",
"I7hxJlsVnYa",
"wNuMdoYd4WU",
"YzTbPoWT9og",
"-O9a4wRnBAt"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
" Thank you for the comments and increasing the score. We are happy to hear that the additional results significantly strengthen the work, and we certainly agree!\n\nLastly, in our revision edits for the manuscript, we made sure to properly address all prior work noted by the reviewers and will not overclaim formal... | [
-1,
5,
7,
-1,
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
-1,
5,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"Axt6qv_qbMA",
"nips_2021_NCDMYD2y5kK",
"nips_2021_NCDMYD2y5kK",
"wsmBEDI6tdP",
"nips_2021_NCDMYD2y5kK",
"wNuMdoYd4WU",
"YzTbPoWT9og",
"-O9a4wRnBAt",
"I7hxJlsVnYa",
"nips_2021_NCDMYD2y5kK",
"lFrxfrv4Uop",
"nips_2021_NCDMYD2y5kK",
"9w1sgHXIDX",
"vGrGg-PAZu",
"xWBDO3iwGN",
"gBc8TC1zkxq"
... |
nips_2021_xRrdX_wV1JI | Generalized DataWeighting via Class-Level Gradient Manipulation | Can Chen, Shuhao Zheng, Xi Chen, Erqun Dong, Xue (Steve) Liu, Hao Liu, Dejing Dou | accept | There is a strong disagreement with this paper, with scores ranging as 8,7,5, and 4. The reviewer who provided a 4 said she increased her score to 5 but this has not propagated. In this case, the rating would be 6.25 rather than 6.
While the accept reviews are stronger score wise, the reasons to reject (marginal improvements and novelty) have been far more detailed than the reasons to accept (also centered on improvements and novelty).
My own reading would align with the “accept” reviewers: novelty and improvements are genuine.
How novel this contribution is seems to hinge on a key question: is the proposed method similar to methods that would modify the loss function (see for instance https://arxiv.org/pdf/1811.08400.pdf, published by one of the reviewers who supports reject) or to methods that weight instances? One can observe that the proposed method does not reduce to a modification of the loss function, but does it add something fundamentally different? The example in figure 1 is excellent: the authors clearly show that their scheme performs differently from instance weighting.
Furthermore, I do not think any modification of the loss function could achieve the same results as a gradient-based method.
Improvements are indeed small, but this is what one would expect with a change in the loss function (we are not talking about a new architecture). As they do not incur added computational cost, this is something one could add on top of any system. Basically, the approach does no harm if there is no label noise, but seems to always improve as label noise increases, which is enough for adoption.
My main concern here remains that despite the claim that the method adds no computational complexity, we still do not have results on a larger dataset such as ImageNet (as promised by the authors)
| train | [
"43vJ4z7eXN",
"5BbgpJ1hztE",
"KS4TxldYyfs",
"ICHkd5JG15G",
"5UOyMAqfhf7",
"We_J-8inGfA",
"1vzCgiAFoM5",
"THLSrz3hODC",
"1UPgArrqWnu",
"rh0kje2mmn",
"B3Z8Q69IYAM",
"4J2Y8NBWzkp",
"K13cDk-lSbn",
"PJ0LroomLN"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nMany thanks for your valuable and constructive comments on clarifying, correcting, and improving the materials in this paper!\n\nAs you have said,\n> To conclude, I improve the rating to 5: Marginally below the acceptance threshold.\n\nWe really appreciate your recognition. As the rebuttal DDL i... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
4,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"5UOyMAqfhf7",
"5UOyMAqfhf7",
"5UOyMAqfhf7",
"We_J-8inGfA",
"B3Z8Q69IYAM",
"1UPgArrqWnu",
"nips_2021_xRrdX_wV1JI",
"PJ0LroomLN",
"1vzCgiAFoM5",
"K13cDk-lSbn",
"4J2Y8NBWzkp",
"nips_2021_xRrdX_wV1JI",
"nips_2021_xRrdX_wV1JI",
"nips_2021_xRrdX_wV1JI"
] |
nips_2021_kAFq29tuVw0 | Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation | Graph Similarity Computation (GSC) is essential to wide-ranging graph applications such as retrieval, plagiarism/anomaly detection, etc. The exact computation of graph similarity, e.g., Graph Edit Distance (GED), is an NP-hard problem that cannot be exactly solved within an adequate time given large graphs. Thanks to the strong representation power of graph neural network (GNN), a variety of GNN-based inexact methods emerged. To capture the subtle difference across graphs, the key success is designing the dense interaction with features fusion at the early stage, which, however, is a trade-off between speed and accuracy. For slow learning of graph similarity, this paper proposes a novel early-fusion approach by designing a co-attention-based feature fusion network on multilevel GNN features. To further improve the speed without much accuracy drop, we introduce an efficient GSC solution by distilling the knowledge from the slow early-fusion model to the student one for fast inference. Such a student model also enables the offline collection of individual graph embeddings, speeding up the inference time in orders. To address the instability through knowledge transfer, we decompose the dynamic joint embedding into the static pseudo individual ones for precise teacher-student alignment. The experimental analysis on the real-world datasets demonstrates the superiority of our approach over the state-of-the-art methods on both accuracy and efficiency. Particularly, we speed up the prior art by more than 10x on the benchmark AIDS data.
| accept | This paper addresses the problem of speeding up the computation of accurate similarity between two graphs, which is one of the key problems in the highly active field of graph learning.
In order to mediate between time-consuming-but-detailed comparisons and fast-but-insufficient comparisons, this paper proposes a distillation approach.
The use of distillation in this context of graph comparison is new and makes much sense, and the detailed experiments also support the effectiveness of the proposed method (although its behavior on very large graphs is still open to investigation).
Overall, this paper makes a solid technical contribution to this field and will be well received by the NeurIPS community.
| val | [
"q_Jf21Yx4CQ",
"eFtBLwZYo4V",
"qNBDHi_aQm-",
"EffDfXDfqmZ",
"8V5zsDRFfT3",
"dHVfjDBJzak",
"MLIsjGMhHcT"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I have read the other reviews and author responses. In my own review, the authors clarified a couple of important points regarding student-teacher relation that was a concern for me. Though I think the paper still needs more explanation on this dimension, I find author responses satisfactory and I am leaning towa... | [
-1,
6,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
4,
4
] | [
"EffDfXDfqmZ",
"nips_2021_kAFq29tuVw0",
"dHVfjDBJzak",
"eFtBLwZYo4V",
"MLIsjGMhHcT",
"nips_2021_kAFq29tuVw0",
"nips_2021_kAFq29tuVw0"
] |
nips_2021_hhU9TEvB6AF | Meta Learning Backpropagation And Improving It | Many concepts have been proposed for meta learning with neural networks (NNs), e.g., NNs that learn to reprogram fast weights, Hebbian plasticity, learned learning rules, and meta recurrent NNs. Our Variable Shared Meta Learning (VSML) unifies the above and demonstrates that simple weight-sharing and sparsity in an NN is sufficient to express powerful learning algorithms (LAs) in a reusable fashion. A simple implementation of VSML where the weights of a neural network are replaced by tiny LSTMs allows for implementing the backpropagation LA solely by running in forward-mode. It can even meta learn new LAs that differ from online backpropagation and generalize to datasets outside of the meta training distribution without explicit gradient calculation. Introspection reveals that our meta learned LAs learn through fast association in a way that is qualitatively different from gradient descent.
| accept | The reviewers were in agreement that the method developed was novel and well evaluated, particularly with the additional details and experimental results that came up in the discussion. These results should be incorporated in the final version of the paper. Overall this paper should be of significant interest to the NeurIPS community. | train | [
"aSzvnWOeSH-",
"2YC1acEt1kH",
"vnEadZRyoMn",
"id5upP3r4g",
"vZf6KvaQFP4",
"LBV126XGZUz",
"7TknL_T4OK0",
"ErofapUBVc",
"bf8Mzn_dY17",
"ywbd3rtmYRv",
"pHjoTZwfR_r",
"pw0VGXCL8EU",
"8ojp5ojvmqU",
"lUwI3jVOn8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I still have a few concerns but as a reviewer, I feel the need to support bold research even if it has small gaps.",
"The authors in this paper propose to learn learning algorithms (LAs) like backpropagation using an RNN/LSTM to replace all the weights of a network. Subsequently, they use LA on a different data... | [
-1,
8,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"pw0VGXCL8EU",
"nips_2021_hhU9TEvB6AF",
"nips_2021_hhU9TEvB6AF",
"ErofapUBVc",
"nips_2021_hhU9TEvB6AF",
"ywbd3rtmYRv",
"nips_2021_hhU9TEvB6AF",
"bf8Mzn_dY17",
"8ojp5ojvmqU",
"vZf6KvaQFP4",
"lUwI3jVOn8",
"2YC1acEt1kH",
"vnEadZRyoMn",
"nips_2021_hhU9TEvB6AF"
] |
nips_2021_AhuVLaYp6gn | Posterior Meta-Replay for Continual Learning | Learning a sequence of tasks without access to i.i.d. observations is a widely studied form of continual learning (CL) that remains challenging. In principle, Bayesian learning directly applies to this setting, since recursive and one-off Bayesian updates yield the same result. In practice, however, recursive updating often leads to poor trade-off solutions across tasks because approximate inference is necessary for most models of interest. Here, we describe an alternative Bayesian approach where task-conditioned parameter distributions are continually inferred from data. We offer a practical deep learning implementation of our framework based on probabilistic task-conditioned hypernetworks, an approach we term posterior meta-replay. Experiments on standard benchmarks show that our probabilistic hypernetworks compress sequences of posterior parameter distributions with virtually no forgetting. We obtain considerable performance gains compared to existing Bayesian CL methods, and identify task inference as our major limiting factor. This limitation has several causes that are independent of the considered sequential setting, opening up new avenues for progress in CL.
| accept | We thank the authors for their detailed clarifications. Most reviewers agreed that this paper makes interesting contributions to the area of continual learning, which is relevant to the NeurIPS community. There were extended discussions about the Bayesian positioning of the paper and gaps in the related work. The former was settled by the authors' detailed response explaining how online EWC can be obtained from a series of approximations and how posterior meta-replay can be derived by approximations of the posterior. Regarding the latter, I would encourage the authors to not merely cite, but to clearly explain how their work builds on prior work. | train | [
"sb5g_JWL2QO",
"51CMEHG6u0h",
"dwAXr4r_NlE",
"N6BWruOjO3D",
"1bmPD1-GLBR",
"gZvASuWDc1e",
"wd5uO-5pRno",
"q_e3mi2TH2",
"Q6ym8evN_i5",
"Xgcvb_hKuut",
"tMWJ2FIBOG0",
"pSzVeAXDCOs",
"tq2Bw6y7z06"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank reviewer *mFdX* for the critical feedback, which challenges us to further clarify our positioning and design choices. However, we respectfully disagree with the view that it is unjustified to call our CL method Bayesian, and explain below why.\n\n**Just like Online EWC, our method can be derived through ... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
7
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"51CMEHG6u0h",
"Q6ym8evN_i5",
"nips_2021_AhuVLaYp6gn",
"gZvASuWDc1e",
"wd5uO-5pRno",
"dwAXr4r_NlE",
"tq2Bw6y7z06",
"nips_2021_AhuVLaYp6gn",
"tMWJ2FIBOG0",
"pSzVeAXDCOs",
"nips_2021_AhuVLaYp6gn",
"nips_2021_AhuVLaYp6gn",
"nips_2021_AhuVLaYp6gn"
] |
nips_2021_sFyrGPCKQJC | Optimizing Reusable Knowledge for Continual Learning via Metalearning | When learning tasks over time, artificial neural networks suffer from a problem known as Catastrophic Forgetting (CF). This happens when the weights of a network are overwritten during the training of a new task causing forgetting of old information. To address this issue, we propose MetA Reusable Knowledge or MARK, a new method that fosters weight reusability instead of overwriting when learning a new task. Specifically, MARK keeps a set of shared weights among tasks. We envision these shared weights as a common Knowledge Base (KB) that is not only used to learn new tasks, but also enriched with new knowledge as the model learns new tasks. Key components behind MARK are two-fold. On the one hand, a metalearning approach provides the key mechanism to incrementally enrich the KB with new knowledge and to foster weight reusability among tasks. On the other hand, a set of trainable masks provides the key mechanism to selectively choose from the KB relevant weights to solve each task. By using MARK, we achieve state of the art results in several popular benchmarks, surpassing the best performing methods in terms of average accuracy by over 10% on the 20-Split-MiniImageNet dataset, while achieving almost zero forgetfulness using 55% of the number of parameters. Furthermore, an ablation study provides evidence that, indeed, MARK is learning reusable knowledge that is selectively used by each task.
| accept | After viewing the review and rebuttal and skimming through the paper I agree that the paper proposes a few components that are at least quite interesting, and that I feel the community can build on easily moving forward.
One of the main arguments against the work is that it assumes similarity between tasks. I think the answer to this lies partially in the benchmarks considered (Cifar-100 and mini-Imagenet). I find these to be valid benchmarks (proposed in other works), so the implicit similarity between tasks is the one exploited by previously existing works as well.
What I do agree with, though, is that while this assumption is crucial for the method, we do not have a good understanding of it (and the paper does not provide one or build ablations towards one): e.g., how similar (or dissimilar) things need to be for this to work. Explicitly trying to find a task that would break this assumption would greatly improve our understanding of it (at least in an empirical way). One could of course combine the current method with a more traditional CL method to add that additional guarantee against forgetting in the KB, or to deal with scenarios where the distributions become less and less related.
However, I find this application of ideas from meta-learning directly in a continual learning setting interesting, and in particular I find the results quite surprising, so I'm thinking the community will find them surprising too.
Unfortunately, after thinking carefully about this work, I do not think it can be accepted in its current format. But I think it is a very relevant and interesting work for the community. And I urge the authors to resubmit a new version where you add some more ablations or explicitly try to understand when the method breaks, what part of the similarity between tasks you explore, and how the learned representations change. It definitely is a borderline paper in my view, with the main worry being that the current version mostly harms the impact the paper can have (if, e.g., we could understand better what is going on). I think an improved version of this would make for a really interesting paper. | train | [
"AFtvIXp5Tg_",
"GaOUj6rVaW1",
"0ihgesfZxbP",
"lWoQDywX8bL",
"gbfY-mih7S-",
"vMJXoJj01T5",
"kqKsD-betGS",
"trBkjouXExp",
"2tRDD2M5pwW",
"dtCsKEsKj8Z",
"9hj1A-WewZf",
"cu0cQZgVPW_"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the reviewer's participation in the discussion process.\n\nWe want to clarify that we did resolve most of the comments from the reviewer. We added extra experiments due to the concern of comparing ourselves with more recent methods (GPM: 76.67% (std 3.1%) BWT: -0.42%, Experience Replay: 65.9% (std 1... | [
-1,
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"0ihgesfZxbP",
"nips_2021_sFyrGPCKQJC",
"kqKsD-betGS",
"nips_2021_sFyrGPCKQJC",
"trBkjouXExp",
"dtCsKEsKj8Z",
"cu0cQZgVPW_",
"lWoQDywX8bL",
"9hj1A-WewZf",
"GaOUj6rVaW1",
"nips_2021_sFyrGPCKQJC",
"nips_2021_sFyrGPCKQJC"
] |
nips_2021_9FREJhzo1q | A sampling-based circuit for optimal decision making | Many features of human and animal behavior can be understood in the framework of Bayesian inference and optimal decision making, but the biological substrate of such processes is not fully understood. Neural sampling provides a flexible code for probabilistic inference in high dimensions and explains key features of sensory responses under experimental manipulations of uncertainty. However, since it encodes uncertainty implicitly, across time and neurons, it remains unclear how such representations can be used for decision making. Here we propose a spiking network model that maps neural samples of a task-specific marginal distribution into an instantaneous representation of uncertainty via a procedure inspired by online kernel density estimation, so that its output can be readily used for decision making. Our model is consistent with experimental results at the level of single neurons and populations, and makes predictions for how neural responses and decisions could be modulated by uncertainty and prior biases. More generally, our work brings together conflicting perspectives on probabilistic brain computation.
| accept | Novel, technically sound contribution to the field of computational neuroscience that proposes a combined neural model for inference and decision making. Particularly, the analysis linking the neural activity of the proposed model to experimental observations was deemed valuable by reviewers. Accept. | val | [
"96W5Bqv0KRp",
"nQ1W7jw0Acc",
"iXV-vt1nh92",
"XB10YF_Z5vN",
"SfdEXeSWk1b",
"k0Xd_Opb1ce",
"4zj7dBJ8KqW",
"pPuXT-jved",
"YHEEQequlSL",
"B-hMXrlV9ZO"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarifications. Very interesting paper, look forward to seeing experimental work for testing the ideas presented here! ",
" Thanks for the clarifications. Very nice paper.",
" We agree that there is a lot of confusion in the literature about what exactly the different probabilistic coding mo... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
4
] | [
"SfdEXeSWk1b",
"iXV-vt1nh92",
"B-hMXrlV9ZO",
"YHEEQequlSL",
"pPuXT-jved",
"4zj7dBJ8KqW",
"nips_2021_9FREJhzo1q",
"nips_2021_9FREJhzo1q",
"nips_2021_9FREJhzo1q",
"nips_2021_9FREJhzo1q"
] |
nips_2021_RdWt-VDPZEG | Compressed Video Contrastive Learning | This work concerns self-supervised video representation learning (SSVRL), one topic that has received much attention recently. Since videos are storage-intensive and contain a rich source of visual content, models designed for SSVRL are expected to be storage- and computation-efficient, as well as effective. However, most existing methods only focus on one of the two objectives, failing to consider both at the same time. In this work, for the first time, the seemingly contradictory goals are simultaneously achieved by exploiting compressed videos and capturing mutual information between two input streams. Specifically, a novel Motion Vector based Cross Guidance Contrastive learning approach (MVCGC) is proposed. For storage and computation efficiency, we choose to directly decode RGB frames and motion vectors (that resemble low-resolution optical flows) from compressed videos on-the-fly. To enhance the representation ability of the motion vectors, hence the effectiveness of our method, we design a cross guidance contrastive learning algorithm based on multi-instance InfoNCE loss, where motion vectors can take supervision signals from RGB frames and vice versa. Comprehensive experiments on two downstream tasks show that our MVCGC yields new state-of-the-art while being significantly more efficient than its competitors.
| accept | The reviewers acknowledged that this paper tackles an important problem of making video pipelines compute/storage-efficient. However, they raised concerns over the limited novelty, especially compared to CoCLR and IMRNet. They also commented that the arguments around codecs and decoding mechanisms used in prior approaches are system-level design choices rather than a scientific challenge, and questioned whether replacing coviar-like video processing with the `pyav` python library can be considered a novel technical contribution. The rebuttal addressed some of the questions but unfortunately did not perfectly clear up the major concerns. Based on this, I am recommending a rejection at this time, but would invite the authors to improve their paper based on the reviewers' suggestions. | train | [
"VjmsqRECKcv",
"ce1r5E_38Y",
"KPPWp9-VhUT",
"81xzM_lRbz7",
"MwJNJoUGek",
"ILoMZbR0wu",
"cpUg3dAxzNk",
"TgZncXzKXoU",
"UB6Mb1qqkFE",
"zHzbi9h7VaC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors proposed to leverage cross guidance between decoded RGB frames and Motion Vectors from compressed codecs to learn an efficient and effective action recognition model in a self-supervised manner. Experimental results on two datasets demonstrate the effectiveness of the proposed method. Pros:\n1. The pr... | [
5,
4,
-1,
5,
4,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
-1,
4,
5,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_RdWt-VDPZEG",
"nips_2021_RdWt-VDPZEG",
"TgZncXzKXoU",
"nips_2021_RdWt-VDPZEG",
"nips_2021_RdWt-VDPZEG",
"UB6Mb1qqkFE",
"MwJNJoUGek",
"ce1r5E_38Y",
"81xzM_lRbz7",
"VjmsqRECKcv"
] |
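The MVCGC row above is built on a symmetric, InfoNCE-style contrastive objective in which each stream supervises the other. The sketch below is a schematic NumPy version of such a cross-guidance loss between two batches of embeddings (e.g., RGB and motion-vector features); the temperature `tau` and all names are illustrative, and the paper's multi-instance extension is omitted.

```python
import numpy as np

def info_nce(z_a, z_b, tau=0.07):
    """Symmetric InfoNCE between two batches of embeddings.

    z_a[i] and z_b[i] come from the same clip (positives); all other
    pairs in the batch serve as negatives, so each stream supervises
    the other ("cross guidance").
    """
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / tau                          # (B, B) similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_ab = -np.mean(np.diag(log_prob))               # RGB -> motion direction
    log_prob_ba = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_ba = -np.mean(np.diag(log_prob_ba))            # motion -> RGB direction
    return 0.5 * (loss_ab + loss_ba)

rng = np.random.default_rng(0)
print(info_nce(rng.normal(size=(8, 128)), rng.normal(size=(8, 128))))
```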
nips_2021_ZOeN0pU8jae | Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation | We study reinforcement learning (RL) with linear function approximation. Existing algorithms for this problem only have high-probability regret and/or Probably Approximately Correct (PAC) sample complexity guarantees, which cannot guarantee the convergence to the optimal policy. In this paper, in order to overcome the limitation of existing algorithms, we propose a new algorithm called FLUTE, which enjoys uniform-PAC convergence to the optimal policy with high probability. The uniform-PAC guarantee is the strongest possible guarantee for reinforcement learning in the literature, which can directly imply both PAC and high probability regret bounds, making our algorithm superior to all existing algorithms with linear function approximation. At the core of our algorithm is a novel minimax value function estimator and a multi-level partition scheme to select the training samples from historical observations. Both of these techniques are new and of independent interest.
| accept | The reviewers find this to be a theoretically strong paper. I see no reason to disagree and acceptance seems warranted. However, I feel that the technical notion of a uniform PAC bound is mainly of theoretical interest. It is not clear how much practical insight uniform bounds provide over the insights already provided by standard PAC bounds. A poster seems appropriate. | train | [
"rxWh8vV2fqi",
"Z7GvDxb1V3",
"XJDSmF9QCUB",
"2Xg3oz-Fu9U",
"GGul1aSj3c",
"rNiOlXVl6Dc",
"rqzMXF-VDL8"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I think this is a good paper and am confident the authors can make the necessary minor changes. I maintain my score and vote for acceptance.",
" Q1: The action space $A$ is required to be finite. Is it fundamental? Can it be relaxed?\n\nA1: This finite action space assumption is not... | [
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"XJDSmF9QCUB",
"rqzMXF-VDL8",
"rNiOlXVl6Dc",
"GGul1aSj3c",
"nips_2021_ZOeN0pU8jae",
"nips_2021_ZOeN0pU8jae",
"nips_2021_ZOeN0pU8jae"
] |
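For reference, the uniform-PAC criterion that the FLUTE row above extends to linear function approximation — introduced for tabular MDPs, to our knowledge, by Dann et al. (2017) — asks that a single high-probability event bound the number of ε-suboptimal episodes simultaneously for every ε:

```latex
% Uniform-PAC (after Dann et al., 2017): one high-probability event
% bounds the number of eps-suboptimal episodes for ALL eps simultaneously.
\Pr\!\left[\,\forall \varepsilon > 0:\;
  \sum_{k=1}^{\infty} \mathbf{1}\!\left\{ V^{*}(s_1) - V^{\pi_k}(s_1) > \varepsilon \right\}
  \;\le\; F(\varepsilon, \delta) \,\right] \;\ge\; 1 - \delta,
\qquad
F(\varepsilon, \delta) = \tilde{O}\!\left(\mathrm{poly}\!\left(\tfrac{1}{\varepsilon},\, \log \tfrac{1}{\delta}\right)\right).
```

Fixing a single ε recovers a standard PAC guarantee, and summing the per-episode suboptimality gaps recovers a high-probability regret bound, which is why the abstract describes uniform-PAC as the stronger of the guarantees.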
nips_2021_KJ5h-yfUHa | Attention Bottlenecks for Multimodal Fusion | Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks. A common approach for building multimodal models is to simply combine multiple of these modality-specific architectures using late-stage fusion of final representations or predictions ('late-fusion'). Instead, we introduce a novel transformer based architecture that uses 'attention bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, these bottlenecks force information between different modalities to pass through a small number of 'bottleneck' latent units, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance, at the same time reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
| accept | The reviewers unanimously recommend an acceptance. They acknowledged that the proposed attention bottleneck module is simple and demonstrated to be effective by extensive experiments and ablation analyses. They also appreciated an empirical exploration of different fusion mechanisms in Transformer models. The initial reviews raised some concerns about insufficient experiments and asked clarifying questions, and the rebuttal successfully cleared up the questions. Overall, this is a nice paper addressing an important problem of modeling multimodal data and proposes a simple method based on Transformer architectures, which is a timely topic for this conference. | test | [
"wQq3OZSFlAa",
"iaK09RFKr1u",
"rcafjmqVW2N",
"EMN20cPnkN",
"ZKQ5uzqgFkL",
"nzYz2m5TvXd",
"hVZVo9PKhl6",
"anekHwjGxe",
"xWGHre1gNlu",
"Eh6YyYEwxS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper conducts an experimental study of different fusion methods for multimodal audiovisual transformers, and proposes a new fusion mechanism. They mix ordinary transformer layers with new fusion attention layers. These new layers perform cross-attention, in which one modality is used and the other is used as... | [
7,
7,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1
] | [
4,
5,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_KJ5h-yfUHa",
"nips_2021_KJ5h-yfUHa",
"nips_2021_KJ5h-yfUHa",
"anekHwjGxe",
"nips_2021_KJ5h-yfUHa",
"nips_2021_KJ5h-yfUHa",
"rcafjmqVW2N",
"ZKQ5uzqgFkL",
"iaK09RFKr1u",
"wQq3OZSFlAa"
] |
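The fusion mechanism in the row above can be illustrated with a toy sketch: each modality attends only over its own tokens plus a handful of shared 'bottleneck' latents, and the bottleneck updates from the two modalities are merged, so all cross-modal exchange is squeezed through those few units. Single-head attention without learned projections, the averaging rule, and all shapes are illustrative simplifications rather than the paper's architecture.

```python
import numpy as np

def attend(q, kv):
    """Single-head scaled dot-product attention (toy, no projections)."""
    scores = q @ kv.T / np.sqrt(q.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ kv

def bottleneck_fusion_layer(audio, video, bottleneck):
    """One fusion layer: each modality sees itself + the shared bottleneck."""
    a_ctx = np.concatenate([audio, bottleneck])
    v_ctx = np.concatenate([video, bottleneck])
    audio_new = attend(audio, a_ctx)                 # audio tokens updated
    video_new = attend(video, v_ctx)                 # video tokens updated
    # bottleneck updated per modality, then merged: the only cross-modal path
    b_new = 0.5 * (attend(bottleneck, a_ctx) + attend(bottleneck, v_ctx))
    return audio_new, video_new, b_new

rng = np.random.default_rng(0)
a, v, b = rng.normal(size=(16, 32)), rng.normal(size=(20, 32)), rng.normal(size=(4, 32))
a, v, b = bottleneck_fusion_layer(a, v, b)
print(a.shape, v.shape, b.shape)   # (16, 32) (20, 32) (4, 32)
```

Because the bottleneck holds far fewer tokens than either modality, the quadratic attention cost stays close to that of two unimodal encoders while still allowing information exchange.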
nips_2021_GNFcszMtYvV | Convergence of adaptive algorithms for constrained weakly convex optimization | Ahmet Alacaoglu, Yura Malitsky, Volkan Cevher | accept | The paper introduces an adaptive method for weakly convex optimization, which generalizes the setting of smooth nonconvex optimization. A particularly important aspect of the introduced method is that it automatically adapts to the weak convexity parameter, without any prior knowledge. The paper makes a good contribution to the literature on adaptive gradient methods (similar to the very popular AdaGrad) and all the reviews expressed support in seeing this paper published at NeurIPS. | train | [
"hcCyb6pIaZS",
"I4jRRKVWBlZ",
"rudSPTOAvI0",
"t8zmG3uhLdW",
"-TtW6cG-Kov",
"uaHsjjOx901",
"T-DEI4lCWcd"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your careful reply addressing the small mathematical rigor concerns raised. My score (of Good paper, accept) remains unchanged.",
" We are grateful to the reviewer for the careful reading of our manuscript and thoughtful comments. \n\n> *\"the authors could have done a better job in highlighting t... | [
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"rudSPTOAvI0",
"T-DEI4lCWcd",
"uaHsjjOx901",
"-TtW6cG-Kov",
"nips_2021_GNFcszMtYvV",
"nips_2021_GNFcszMtYvV",
"nips_2021_GNFcszMtYvV"
] |
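The row above concerns an AdaGrad-type method whose step size adapts automatically to the unknown weak-convexity parameter. As background only (not the paper's algorithm), here is a minimal sketch of a projected AdaGrad-norm step on a toy weakly convex objective; the objective, box constraint, and constants are illustrative assumptions.

```python
import numpy as np

def adagrad_norm(grad, x0, steps=500, eta=1.0, g0=1e-8):
    """Projected subgradient method with AdaGrad-norm step sizes.

    eta_t = eta / sqrt(g0 + sum of squared (sub)gradient norms so far),
    so no smoothness or weak-convexity constant has to be supplied.
    """
    x, acc = np.asarray(x0, dtype=float), g0
    for _ in range(steps):
        g = grad(x)
        acc += float(g @ g)
        x = np.clip(x - eta / np.sqrt(acc) * g, -10.0, 10.0)  # box projection
    return x

# toy weakly convex objective f(x) = |x_0| + 0.5 * ||x||^2 (kink at x_0 = 0)
subgrad = lambda x: np.append(np.sign(x[:1]), np.zeros(len(x) - 1)) + x
print(adagrad_norm(subgrad, np.array([3.0, -2.0])))   # approaches the origin
```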
nips_2021_M-W0asp3fD | On the Convergence of Step Decay Step-Size for Stochastic Optimization | Xiaoyu Wang, Sindri Magnússon, Mikael Johansson | accept | This paper analyzes the step decay schedule (constant and then cut) for nonconvex optimization problems, showing that it can find an approximate first-order stationary point at an O(ln T/\sqrt{T}) rate. Most reviewers found the result interesting and felt that it gives a better understanding of the step decay schedule. There are some concerns that should be addressed in the revision: 1. clarifying why the requirement on boundedness of f can be replaced by E[f(x_t) - f(x^*)] and why the latter expectation can be bounded in natural cases; 2. detailed comparisons with previous results, acknowledging that similar (or even better) rates were achieved by different algorithms in all the settings. | test | [
"bqCAMgB8V8X",
"q2Yl34OrFdP",
"8vXm6hbPV1M",
"ZLc3J7SBMMz",
"S_JWh27RXLZ",
"Ny0ZpJTWpkC",
"RgrLJOycTDs",
"fdTmtHSU2qS",
"EceCkjTTT-8",
"wa5RI7drAyc",
"Wp4kMM_5-4X",
"yKUbbuRCo7o"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer - thank you for letting us know about your remaining concerns.\n\nActually, it is a misconception that $f$ has to be bounded. On the contrary, a more useful class of problems are those where the loss function tends to infinity as the norm of $x$ tends to infinity (for example, regularized training p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"q2Yl34OrFdP",
"RgrLJOycTDs",
"S_JWh27RXLZ",
"wa5RI7drAyc",
"yKUbbuRCo7o",
"Wp4kMM_5-4X",
"EceCkjTTT-8",
"nips_2021_M-W0asp3fD",
"nips_2021_M-W0asp3fD",
"nips_2021_M-W0asp3fD",
"nips_2021_M-W0asp3fD",
"nips_2021_M-W0asp3fD"
] |
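The schedule analyzed in the row above — a constant step size within each stage, then a cut by a fixed factor — is easy to state in code. The sketch below applies it to SGD on a one-dimensional toy problem; the stage length, decay factor, and noise model are illustrative.

```python
import numpy as np

def step_decay_sgd(x0=3.0, T=1000, eta0=0.5, decay=2.0, n_stages=5, seed=0):
    """SGD with a step-decay (constant-and-cut) step-size schedule."""
    rng = np.random.default_rng(seed)
    x, block = float(x0), T // n_stages
    for t in range(T):
        eta = eta0 / decay ** (t // block)     # constant within each stage
        g = 2.0 * x + rng.normal(scale=0.1)    # noisy gradient of f(x) = x^2
        x -= eta * g
    return x

print(step_decay_sgd())   # settles near the stationary point x = 0
```

The large early step sizes make fast initial progress, while each cut shrinks the noise floor around the stationary point, which is the intuition the convergence analysis formalizes.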
nips_2021_WigDnV-_Gq | BernNet: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation | Mingguo He, Zhewei Wei, zengfeng Huang, Hongteng Xu | accept | The paper proposes a new algorithmic approach for GNNs, and is backed up with both theory and empirical validation. The reviewers are all generally positive and in agreement about its important contributions. | train | [
"_xcVBnkLUc0",
"h8u6G0FPnss",
"HD-F6o6Vyz",
"IV9UJe7sJi",
"H409PLEzMc",
"2Kby7DTR_fy",
"eeRokv89vI",
"sUG1fHq0hn"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. All my concerns have been clarified. I have no further comments on this paper.",
" Thanks for the response and additional experiments! I have no more problem with this paper. Wish the authors the best of luck.",
"This paper proposes a new graph neural network (GNN) architecture wh... | [
-1,
-1,
6,
-1,
-1,
-1,
8,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
5,
5
] | [
"2Kby7DTR_fy",
"IV9UJe7sJi",
"nips_2021_WigDnV-_Gq",
"sUG1fHq0hn",
"HD-F6o6Vyz",
"eeRokv89vI",
"nips_2021_WigDnV-_Gq",
"nips_2021_WigDnV-_Gq"
] |
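BernNet, as titled above, expands the spectral response in the Bernstein basis over the spectrum [0, 2] of the normalized Laplacian, h(λ) = Σ_k θ_k C(K,k) (1 − λ/2)^{K−k} (λ/2)^k. A minimal sketch of evaluating such a filter on a toy graph follows; the coefficients θ are fixed here for illustration, whereas in the paper they are learned.

```python
import numpy as np
from math import comb

def bernstein_filter(L, theta):
    """Evaluate h(L) = sum_k theta_k C(K,k) (I - L/2)^(K-k) (L/2)^k.

    For a symmetric normalized Laplacian the spectrum lies in [0, 2],
    so non-negative theta yields a non-negative spectral response.
    """
    K, n = len(theta) - 1, L.shape[0]
    half, rest = L / 2.0, np.eye(n) - L / 2.0
    H = np.zeros_like(L)
    for k, th in enumerate(theta):
        H += th * comb(K, k) * (np.linalg.matrix_power(rest, K - k)
                                @ np.linalg.matrix_power(half, k))
    return H

# toy 4-node path graph, symmetric normalized Laplacian
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
d = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))
theta = np.array([1.0, 0.5, 0.25])             # K = 2, low-pass-like response
x = np.array([1.0, 0.0, 0.0, 0.0])             # a graph signal
print(bernstein_filter(L, theta) @ x)
```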
nips_2021_PcpExudEmDd | Co-evolution Transformer for Protein Contact Prediction | He Zhang, Fusong Ju, Jianwei Zhu, Liang He, Bin Shao, Nanning Zheng, Tie-Yan Liu | accept | This paper addresses the protein contact prediction problem. The state-of-the-art approach uses hand-crafted features derived from Multiple Sequence Alignments (MSAs). This paper proposes a Transformer architecture to automatically identify features suitable for the protein contact prediction problem. The referees were unanimous that, while the ideas are novel, the comparison with the state of the art is problematic. We hope that the authors can address these concerns in the final manuscript.
| train | [
"UWqYlM2Aw0e",
"r2GnfefnE6v",
"mM6_QMg9rnz",
"8rStYH9nNM",
"CJ-ExL6PceS",
"UN94mPIEMZI",
"UJmXy1KoRz0",
"jaHu2RotkCb",
"uINYz-Sskl",
"X--wzgi3RSo",
"z0OyyFI5wTY"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors present a model that predicts contact maps from multiple sequence alignments. The paper proposes a weighted sum of k, LxL attention maps (k=#sequences, L=sequence length), where a per-residue-pair weight is learned across all sequences in the MSA. Afterwards a convolutional layer is applied to refine t... | [
7,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_PcpExudEmDd",
"mM6_QMg9rnz",
"8rStYH9nNM",
"CJ-ExL6PceS",
"UJmXy1KoRz0",
"nips_2021_PcpExudEmDd",
"X--wzgi3RSo",
"UN94mPIEMZI",
"z0OyyFI5wTY",
"UWqYlM2Aw0e",
"nips_2021_PcpExudEmDd"
] |
nips_2021_tu5Wg41hWl_ | Unsupervised Foreground Extraction via Deep Region Competition | We present Deep Region Competition (DRC), an algorithm designed to extract foreground objects from images in a fully unsupervised manner. Foreground extraction can be viewed as a special case of generic image segmentation that focuses on identifying and disentangling objects from the background. In this work, we rethink the foreground extraction by reconciling energy-based prior with generative image modeling in the form of Mixture of Experts (MoE), where we further introduce the learned pixel re-assignment as the essential inductive bias to capture the regularities of background regions. With this modeling, the foreground-background partition can be naturally found through Expectation-Maximization (EM). We show that the proposed method effectively exploits the interaction between the mixture components during the partitioning process, which closely connects to region competition, a seminal approach for generic image segmentation. Experiments demonstrate that DRC exhibits more competitive performances on complex real-world data and challenging multi-object scenes compared with prior methods. Moreover, we show empirically that DRC can potentially generalize to novel foreground objects even from categories unseen during training.
| accept | All reviewers rate this work as interesting and unanimously recommend acceptance of the paper but still see room for improvement. | val | [
"cj-IFhDRAB-",
"dXSyzBD8g5L",
"6wDM3fR5mE5",
"avGMfu3Ai9V",
"UhW7IaSucRs",
"-lEJ2aUZWd",
"UVJuKch1RAI",
"palYnOSCmYg",
"wKDMRSG12Bc",
"8aOyBrLuiVk"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper tackles the problem of unsupervised foreground segmentation with a combination of energy-based and deep-learning-based models. It models the foreground and background as two components that can be selected by a latent variable. The background has an inductive bias that pixels can be re-assigned to a diff... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_tu5Wg41hWl_",
"nips_2021_tu5Wg41hWl_",
"UVJuKch1RAI",
"dXSyzBD8g5L",
"wKDMRSG12Bc",
"8aOyBrLuiVk",
"avGMfu3Ai9V",
"cj-IFhDRAB-",
"nips_2021_tu5Wg41hWl_",
"nips_2021_tu5Wg41hWl_"
] |
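The partition in the row above emerges from EM on a two-expert mixture. As a generic stand-in (not the paper's deep generative experts or pixel re-assignment), the sketch below runs EM for a two-component Gaussian mixture over pixel intensities; the initialization and the 1-D Gaussian experts are illustrative.

```python
import numpy as np

def em_two_experts(x, iters=50):
    """EM for a 2-component Gaussian mixture (stand-in for fg/bg experts)."""
    mu = np.array([x.min(), x.max()])             # crude initialization
    sigma, pi = np.array([1.0, 1.0]), np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each expert for each pixel
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: each expert refits the pixels it is responsible for
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return r  # r[:, 0] > 0.5 gives a soft foreground/background partition

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(0.2, 0.05, 300), rng.normal(0.8, 0.1, 700)])
resp = em_two_experts(pixels)
print(resp.mean(axis=0))   # roughly the [0.3, 0.7] mixing proportions
```

The "competition" is visible in the E-step: the responsibilities are normalized across the two experts, so one expert can only gain a pixel at the other's expense.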
nips_2021_BKeJmkspvc | Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation | We study the problem of estimating at a central server the mean of a set of vectors distributed across several nodes (one vector per node). When the vectors are high-dimensional, the communication cost of sending entire vectors may be prohibitive, and it may be imperative for them to use sparsification techniques. While most existing work on sparsified mean estimation is agnostic to the characteristics of the data vectors, in many practical applications such as federated learning, there may be spatial correlations (similarities in the vectors sent by different nodes) or temporal correlations (similarities in the data sent by a single node over different iterations of the algorithm) in the data vectors. We leverage these correlations by simply modifying the decoding method used by the server to estimate the mean. We provide an analysis of the resulting estimation error as well as experiments for PCA, K-Means and Logistic Regression, which show that our estimators consistently outperform more sophisticated and expensive sparsification methods.
| accept | This paper studies communication-efficient estimators for the distributed mean estimation problem. The paper presents two methods that exploit either spatial or temporal correlations of the data, with the goal of improving communication efficiency. The MSE of the two estimators is studied analytically, and numerical benchmarks demonstrate that the proposed techniques can improve communication efficiency in distributed learning.
The reviewers spotted a few inaccuracies in the proofs, but following the authors' response they believe that these issues can be addressed in the final version.
On the one hand, the reviewers emphasized the simplicity of the method, but on the other hand, they found the contribution to be
slightly incremental. In the end, the good experimental results were the deciding factor in the discussion.
The reviewers believe that this work will inspire future work (including attempts to address current limitations regarding practicality in distributed settings) and will be of interest to the community.
The authors are strongly encouraged to take the reviewer's feedback into account when preparing the final version, including the already proposed changes, and perhaps also to include a discussion on lower bounds from an information-theoretic perspective (as guides for follow-up work). | test | [
"dRceO3mWTq_",
"A6eYo9KKuTg",
"KdlwqNFvpa",
"ndRURHM-Yzk",
"04Xyo0be3SN",
"8VD9Ekkpwh6",
"F1uRgivAmDx",
"PmsSF7STFU",
"ceS4PL6VVa",
"sFmplz_Yba3",
"ok10-da6FF6",
"xI2tmKQMOiz",
"GysF2os277Z"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response to our rebuttal. We are very grateful for the increased rating!\n\nWe chose to analyze our proposed estimator with Rand-$k$(uniform sampling) due to its simplicity and ease of exposition. Our proposed estimators can also be applied with non-uniform sampling techniques such as Top-$k$ a... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"ndRURHM-Yzk",
"PmsSF7STFU",
"nips_2021_BKeJmkspvc",
"ceS4PL6VVa",
"nips_2021_BKeJmkspvc",
"F1uRgivAmDx",
"ok10-da6FF6",
"xI2tmKQMOiz",
"KdlwqNFvpa",
"GysF2os277Z",
"nips_2021_BKeJmkspvc",
"nips_2021_BKeJmkspvc",
"nips_2021_BKeJmkspvc"
] |
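The estimators in the row above build on Rand-k sparsification: each node transmits k uniformly sampled coordinates rescaled by d/k, so the naively decoded mean is unbiased. The sketch below shows that baseline encode/decode pair; the paper's correlation-aware decoding is not reproduced, and the sizes d, n, k are illustrative.

```python
import numpy as np

def rand_k_encode(x, k, rng):
    """Keep k random coordinates of x, scaled by d/k for unbiasedness."""
    d = len(x)
    idx = rng.choice(d, size=k, replace=False)
    return idx, x[idx] * (d / k)

def server_mean(encoded, d):
    """Decode: average the sparse unbiased estimates from all nodes."""
    est = np.zeros(d)
    for idx, vals in encoded:
        v = np.zeros(d)
        v[idx] = vals
        est += v
    return est / len(encoded)

rng = np.random.default_rng(0)
d, n, k = 1000, 50, 100
X = rng.normal(size=(n, d)) + 3.0          # node vectors (spatially similar)
msgs = [rand_k_encode(x, k, rng) for x in X]
print(np.linalg.norm(server_mean(msgs, d) - X.mean(axis=0)))  # estimation error
```

The paper's observation is that when the rows of X are correlated, the server can do better than this plain average by modifying only the decoding step, at no extra communication cost.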
nips_2021_C0bV8xGhsz | Last-iterate Convergence in Extensive-Form Games | Regret-based algorithms are highly efficient at finding approximate Nash equilibria in sequential games such as poker games. However, most regret-based algorithms, including counterfactual regret minimization (CFR) and its variants, rely on iterate averaging to achieve convergence. Inspired by recent advances on last-iterate convergence of optimistic algorithms in zero-sum normal-form games, we study this phenomenon in sequential games, and provide a comprehensive study of last-iterate convergence for zero-sum extensive-form games with perfect recall (EFGs), using various optimistic regret-minimization algorithms over treeplexes. This includes algorithms using the vanilla entropy or squared Euclidean norm regularizers, as well as their dilated versions which admit more efficient implementation. In contrast to CFR, we show that all of these algorithms enjoy last-iterate convergence, with some of them even converging exponentially fast. We also provide experiments to further support our theoretical results.
| accept | The reviewers were unanimously positive about the paper, and the rebuttal addressed all major concerns that remained; we are happy to recommend acceptance. Please make sure to address the issues identified during the review process in the updated version of the paper. | train | [
"WdLUPK_QecV",
"AGC2zY3rFO",
"zB5eJzFchvK",
"mGfZbAIKEif",
"kShJ8hkrFss",
"tRp7eouvq2",
"vnElIK3hck",
"p4jmD-QsTJ",
"DFz2eBAs-kv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers last-iterate convergence of Optimistic Online Mirror Descent in zero-sum perfect-recall extensive-form games. It includes a proof for a general case that the last iterate must convergence in the limit, and specific convergence rates for three particular choices of regularizer function based on... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_C0bV8xGhsz",
"kShJ8hkrFss",
"nips_2021_C0bV8xGhsz",
"zB5eJzFchvK",
"WdLUPK_QecV",
"p4jmD-QsTJ",
"DFz2eBAs-kv",
"nips_2021_C0bV8xGhsz",
"nips_2021_C0bV8xGhsz"
] |
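The contrast drawn in the row above between averaged and last-iterate convergence is visible even in the simplest bilinear game min_x max_y xy: plain gradient descent-ascent spirals outward, while the optimistic update, which replaces the gradient with 2g_t − g_{t−1}, converges in the last iterate. This normal-form toy stands in for the treeplex setting of the paper; the step size and horizon are illustrative.

```python
import numpy as np

def last_iterate(optimistic, T=300, eta=0.1):
    """Run (optimistic) gradient descent-ascent on f(x, y) = x * y."""
    z = np.array([1.0, 1.0])                 # (x, y); equilibrium at (0, 0)
    g_prev = np.zeros(2)
    for _ in range(T):
        g = np.array([z[1], -z[0]])          # descent on x, ascent on y
        step = 2 * g - g_prev if optimistic else g
        z = z - eta * step
        g_prev = g
    return np.linalg.norm(z)

print("GDA  last-iterate norm:", last_iterate(False))  # grows: no convergence
print("OGDA last-iterate norm:", last_iterate(True))   # shrinks toward 0
```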
nips_2021_8dqEeFuhgMG | Class-Incremental Learning via Dual Augmentation | Deep learning systems typically suffer from catastrophic forgetting of past knowledge when acquiring new skills continually. In this paper, we emphasize two dilemmas, representation bias and classifier bias in class-incremental learning, and present a simple and novel approach that employs explicit class augmentation (classAug) and implicit semantic augmentation (semanAug) to address the two biases, respectively. On the one hand, we propose to address the representation bias by learning transferable and diverse representations. Specifically, we investigate the feature representations in incremental learning based on spectral analysis and present a simple technique called classAug, to let the model see more classes during training for learning representations transferable across classes. On the other hand, to overcome the classifier bias, semanAug implicitly involves the simultaneous generating of an infinite number of instances of old classes in the deep feature space, which poses tighter constraints to maintain the decision boundary of previously learned classes. Without storing any old samples, our method can perform comparably with representative data replay based approaches.
| accept | We thank the authors for the additional clarifications provided in their rebuttal, which resolved most of the concerns raised by the reviewers. The contributions are relatively simple, but convincingly shown to be very effective compared to more sophisticated approaches recently proposed in continual learning. These results provide new insights and will benefit the NeurIPS community. However, the reviewers also provided a list of improvements that should be incorporated in the revised version. The paper was not self-contained at times and needs to be polished in order not to hamper the understanding. Exposition of the experiments would also benefit of a revision. | train | [
"HOE3XAGXob",
"W7RS3t_2Bn7",
"bjFSErn7Sv4",
"5zw5ErKkicV",
"LR47LorFosj",
"zQuyqVjGHyG",
"UADS7PzNkSA",
"gfzFbvFXwcT",
"BRdNopnE1Gz",
"Fo9nyjvKfd9",
"RYtgcusLofd",
"_Y-tcnQgoFy",
"Fokc61LCd_",
"bwa0wqkG0vt",
"knAkPtrbC8U",
"FKK9vGnoleU"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Scientific contribution.** We propose a novel dual augmentation framework, in which the classAug aims to reduce the representation bias and the semanAug focuses on the classifier bias in a complementary manner. Particularly, the two augmentations in our methods are not existing techniques, and we have discussed... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"W7RS3t_2Bn7",
"LR47LorFosj",
"zQuyqVjGHyG",
"UADS7PzNkSA",
"BRdNopnE1Gz",
"RYtgcusLofd",
"Fo9nyjvKfd9",
"nips_2021_8dqEeFuhgMG",
"knAkPtrbC8U",
"FKK9vGnoleU",
"bwa0wqkG0vt",
"Fokc61LCd_",
"nips_2021_8dqEeFuhgMG",
"nips_2021_8dqEeFuhgMG",
"nips_2021_8dqEeFuhgMG",
"nips_2021_8dqEeFuhgMG... |
nips_2021_we8d1FjibAc | Robust and Fully-Dynamic Coreset for Continuous-and-Bounded Learning (With Outliers) Problems | Zixiu Wang, Yiwen Guo, Hu Ding | accept | This is a beautiful paper about coresets for handling outliers which will probably inspire many future related papers.
While there are concerns regarding the experimental results (which are hidden in the supp. material), the theoretical contribution, with an algorithm that is not hard to implement, is strong enough for such a fundamental problem in machine learning.
Please move some more experiments to the main paper in order to attract practitioners.
Also please add the following references and maybe some more:
https://www.mdpi.com/1999-4893/13/12/311
https://dl.acm.org/doi/10.5555/1347082.1347173
| train | [
"5e6ZJX6NiFp",
"RCvGiV1af0u",
"7HzjHdNMxwy",
"hNFcCJo68C",
"5nNJ4o7rr0s",
"URR4NKDk5rX",
"dho3H_mt2el",
"cKZTyut1LsB",
"4Hy2e8nvCE6",
"SE4HRj5zgFD",
"K_3yzvdRxAQ",
"Z6EFojX7cWC",
"lUvPWqVf-R",
"bnrnrdh2Wb4",
"XzUINBh7tY",
"lIehhVSkO_",
"foN9XE2GAm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper suggests a coreset construction framework for a family of functions that are Lipschitz continuous, smooth, and has Lipschitz continuous Hessian. The framework considers only continuous-and-bounded learning, i.e., optimization problems concerning bounded space of candidate solutions. The paper suggests a... | [
7,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_we8d1FjibAc",
"lUvPWqVf-R",
"nips_2021_we8d1FjibAc",
"cKZTyut1LsB",
"4Hy2e8nvCE6",
"K_3yzvdRxAQ",
"nips_2021_we8d1FjibAc",
"XzUINBh7tY",
"Z6EFojX7cWC",
"nips_2021_we8d1FjibAc",
"lIehhVSkO_",
"7HzjHdNMxwy",
"5e6ZJX6NiFp",
"foN9XE2GAm",
"dho3H_mt2el",
"SE4HRj5zgFD",
"nips_20... |
nips_2021_42yEyjooGSC | Rethinking and Reweighting the Univariate Losses for Multi-Label Ranking: Consistency and Generalization | Guoqiang Wu, Chongxuan LI, Kun Xu, Jun Zhu | accept | The reviewers and I agree that this paper offers several valuable insights, both theoretically and empirically, into a class of methods for multi-label classification. I also agree with a reviewer that error bounds are not necessarily the best proxies for the actual error, and that the authors should include additional content on this point, e.g., through simulation studies as suggested by the reviewer, in the final version. | train | [
"vVath7EWsDv",
"gxlGs6Gij5y",
"YjysDkDJuqQ",
"6qRwfXXAuOM",
"0tEjqYkLJP-",
"TmK_H0ff8u7",
"ThfkbE_wRGh",
"YCFO78hsZF8"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper attempts to fill the gap between existing theory and practice for Multi-Label Ranking (MLR) problems: why inconsistent pairwise losses often achieve better performance in practice than consistent univariate losses? The authors try to answer this question from the prospective of generalization error boun... | [
7,
7,
-1,
-1,
-1,
-1,
6,
7
] | [
5,
4,
-1,
-1,
-1,
-1,
3,
2
] | [
"nips_2021_42yEyjooGSC",
"nips_2021_42yEyjooGSC",
"ThfkbE_wRGh",
"vVath7EWsDv",
"gxlGs6Gij5y",
"YCFO78hsZF8",
"nips_2021_42yEyjooGSC",
"nips_2021_42yEyjooGSC"
] |
nips_2021_Zr9YPpxg2B1 | Fair Clustering Under a Bounded Cost | Clustering is a fundamental unsupervised learning problem where a dataset is partitioned into clusters that consist of nearby points in a metric space. A recent variant, fair clustering, associates a color with each point representing its group membership and requires that each color has (approximately) equal representation in each cluster to satisfy group fairness. In this model, the cost of the clustering objective increases due to enforcing fairness in the algorithm. The relative increase in the cost, the "price of fairness," can indeed be unbounded. Therefore, in this paper we propose to treat an upper bound on the clustering objective as a constraint on the clustering problem, and to maximize equality of representation subject to it. We consider two fairness objectives: the group utilitarian objective and the group egalitarian objective, as well as the group leximin objective which generalizes the group egalitarian objective. We derive fundamental lower bounds on the approximation of the utilitarian and egalitarian objectives and introduce algorithms with provable guarantees for them. For the leximin objective we introduce an effective heuristic algorithm. We further derive impossibility results for other natural fairness objectives. We conclude with experimental results on real-world datasets that demonstrate the validity of our algorithms.
| accept | The paper proposes new formulations of (and algorithms for) fair clustering. Instead of minimizing clustering cost under fairness constraints, the paper considers a dual problem where the clustering cost is given and the goal is to maximize fairness.
The reviewers found the formulation of fairness to be interesting and likely to lead to a follow-up line of research. Almost all reviewers recommended acceptance, sometimes strongly so. At the same time, there was a concern that the approximation factors depend inversely on the cluster size, so they could be large. Overall, however, the reviewers felt that the positives outweigh the negatives.
| train | [
"Ch9h9NuEViJ",
"6YVTk-3B5rM",
"SU6fEp2Fg8b",
"1A2pTIpDIVZ",
"bWJDecmT9-K",
"nTbUvRcj4qW",
"tEMRQ3Ndyo-",
"GkKxHMpm_5t",
"mwpyDzdP5jL",
"umEAJvXN7TA",
"6S1Wi0RtY8P",
"_gkIG5zGiIo",
"tv2vUm9VHe8",
"uTE7OZR_dr9",
"Q_oIOfQWJH_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The paper studies a new variant of fair clustering for different fairness (group utilitarian, group egalitarian, group leximen) and clustering objectives(k-center, k-median, k-means). The main approach is to optimize the fairness objective under the constraint that the clustering cost are bounded by some value U. ... | [
8,
5,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_Zr9YPpxg2B1",
"nips_2021_Zr9YPpxg2B1",
"nips_2021_Zr9YPpxg2B1",
"nTbUvRcj4qW",
"nips_2021_Zr9YPpxg2B1",
"tEMRQ3Ndyo-",
"tv2vUm9VHe8",
"mwpyDzdP5jL",
"umEAJvXN7TA",
"6S1Wi0RtY8P",
"bWJDecmT9-K",
"Ch9h9NuEViJ",
"6YVTk-3B5rM",
"SU6fEp2Fg8b",
"nips_2021_Zr9YPpxg2B1"
] |
nips_2021_NJex-5TZIQa | Improving Calibration through the Relationship with Adversarial Robustness | Neural networks lack adversarial robustness, i.e., they are vulnerable to adversarial examples that through small perturbations to inputs cause incorrect predictions. Further, trust is undermined when models give miscalibrated predictions, i.e., the predicted probability is not a good indicator of how much we should trust our model. In this paper, we study the connection between adversarial robustness and calibration and find that the inputs for which the model is sensitive to small perturbations (are easily attacked) are more likely to have poorly calibrated predictions. Based on this insight, we examine if calibration can be improved by addressing those adversarially unrobust inputs. To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS) that integrates the correlations of adversarial robustness and calibration into training by adaptively softening labels for an example based on how easily it can be attacked by an adversary. We find that our method, taking the adversarial robustness of the in-distribution data into consideration, leads to better calibration over the model even under distributional shifts. In addition, AR-AdaLS can also be applied to an ensemble model to further improve model calibration.
| accept | All reviewers agreed the method proposed in this submission is insightful and novel. The authors' rebuttal has successfully addressed the reviewers' concerns. However, the reviewers are also less satisfied by the fact that the proposed method seems to be, at best, on par with other baselines (e.g. Mixup). The robustness-based label smoothing is not consistently better and there are no theoretical arguments to favor their approach. This is a borderline paper but I recommend acceptance. Despite limited performance gain compared to baseline methods, I believe the technical contributions are sufficient.
| train | [
"F2L5KPZX-3r",
"evw03qfpL8y",
"HcNPIPU1-Oy",
"6PvVRlJ9Vo",
"l0o1pW1RRRQ",
"UKqlr50vSpF",
"r7QxzZKX6Z",
"illqtnu2-5w",
"nsUZdrDFaDz",
"k-S-Wux3tvk",
"qd3DGAncOLO",
"KNrAeWpVr_"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for raising your score and we are glad that we adequately addressed most of your concerns.\n\nWe will be more cautious while using ''significantly better '' when comparing single AR-AdaLS with other baselines in the final version. However, we still want to emphasize that our proposed ``AR-AdaLS of Ensembl... | [
-1,
6,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"6PvVRlJ9Vo",
"nips_2021_NJex-5TZIQa",
"r7QxzZKX6Z",
"illqtnu2-5w",
"nips_2021_NJex-5TZIQa",
"nsUZdrDFaDz",
"evw03qfpL8y",
"l0o1pW1RRRQ",
"qd3DGAncOLO",
"KNrAeWpVr_",
"nips_2021_NJex-5TZIQa",
"nips_2021_NJex-5TZIQa"
] |
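The method in the row above softens labels more aggressively for examples that are easy to attack. The following is only a schematic of that idea: a crude robustness probe (one FGSM-style step against a linear scorer `W`) decides, per example, between a hard label and a smoothed one. The probe, the two-level smoothing, and all names are illustrative assumptions, not the AR-AdaLS procedure.

```python
import numpy as np

def soft_targets(X, y, W, n_classes, eps=0.1, smooth_max=0.2):
    """Adaptive label smoothing driven by a crude robustness proxy.

    For each example we take one FGSM-style step against a linear scorer W;
    examples whose predicted class flips are deemed non-robust and receive
    the largest smoothing, while robust ones keep hard labels.
    """
    targets = np.zeros((len(X), n_classes))
    for i, (x, yi) in enumerate(zip(X, y)):
        pred = np.argmax(W @ x)
        # adversarial direction: grow the best wrong class's score margin
        wrong = np.argmax(np.delete(W @ x, pred))
        wrong += wrong >= pred                       # undo the index shift
        x_adv = x + eps * np.sign(W[wrong] - W[pred])
        robust = np.argmax(W @ x_adv) == pred
        s = 0.0 if robust else smooth_max            # adaptive coefficient
        targets[i] = s / n_classes
        targets[i, yi] += 1.0 - s
    return targets

rng = np.random.default_rng(0)
X, y, W = rng.normal(size=(5, 8)), rng.integers(0, 3, 5), rng.normal(size=(3, 8))
print(soft_targets(X, y, W, n_classes=3).round(2))
```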
nips_2021_ypj3xKoRfmr | Credal Self-Supervised Learning | Self-training is an effective approach to semi-supervised learning. The key idea is to let the learner itself iteratively generate "pseudo-supervision" for unlabeled instances based on its current hypothesis. In combination with consistency regularization, pseudo-labeling has shown promising performance in various domains, for example in computer vision. To account for the hypothetical nature of the pseudo-labels, these are commonly provided in the form of probability distributions. Still, one may argue that even a probability distribution represents an excessive level of informedness, as it suggests that the learner precisely knows the ground-truth conditional probabilities. In our approach, we therefore allow the learner to label instances in the form of credal sets, that is, sets of (candidate) probability distributions. Thanks to this increased expressiveness, the learner is able to represent uncertainty and a lack of knowledge in a more flexible and more faithful manner. To learn from weakly labeled data of that kind, we leverage methods that have recently been proposed in the realm of so-called superset learning. In an exhaustive empirical evaluation, we compare our methodology to state-of-the-art self-supervision approaches, showing competitive to superior performance especially in low-label scenarios incorporating a high degree of uncertainty.
| accept | After the discussion, the reviewers agree that the paper proposes an interesting approach of using credal sets as a generalization of using probability estimates in self-supervision approaches. The resulting loss function is relatively simple and the authors describe one practical way to implement it.
The paper is still borderline because the experimental results, even though they seem slightly better than those of the initial FixMatch approach, are not significantly better than other baselines, and additional ablations could be performed regarding some design choices, such as the effect of the weighting in Eq. 6 or the exponential moving average of model weights. These ablations would be good to have, but since these choices come from previous work on the topic and the baselines are included in the main table, I do not feel that their absence is a decisive argument for rejection.
Overall, the approach proposed in the paper provides a cleaner way of using pseudo-labeling than confidence matching, which is illustrated for instance in the section on efficiency (learning curves might be better than a fixed number of epochs here), where we see the advantage of not requiring the confidence threshold as an additional hyperparameter. In that respect, the approach can be used in practice even if it does not significantly improve performance compared to the optimally tuned competitors.
"0v4n-ONPLFB",
"bFG12UjEPcT",
"bRpDqd5EiYp",
"fYaOY6hNpDP",
"tQZvRaT6O51",
"tfyg97ZBPKd",
"qQ5psmAxGc2",
"6yGyW3uVUYx",
"wgGTi3nqk_M",
"IrG624hJxkj"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to modify the pseudo-labeling procedure in semi-supervised learning. Student model predictions are penalized by distance to a credal set of teacher predictions. The size of credal sets is determined heuristically. Experiments demonstrate improvement over baselines mainly in scarce-label scenari... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2021_ypj3xKoRfmr",
"fYaOY6hNpDP",
"tQZvRaT6O51",
"0v4n-ONPLFB",
"IrG624hJxkj",
"wgGTi3nqk_M",
"6yGyW3uVUYx",
"nips_2021_ypj3xKoRfmr",
"nips_2021_ypj3xKoRfmr",
"nips_2021_ypj3xKoRfmr"
] |
nips_2021_a-Lbgfy9RqV | Spot the Difference: Detection of Topological Changes via Geometric Alignment | Geometric alignment appears in a variety of applications, ranging from domain adaptation, optimal transport, and normalizing flows in machine learning; optical flow and learned augmentation in computer vision and deformable registration within biomedical imaging. A recurring challenge is the alignment of domains whose topology is not the same; a problem that is routinely ignored, potentially introducing bias in downstream analysis. As a first step towards solving such alignment problems, we propose an unsupervised algorithm for the detection of changes in image topology. The model is based on a conditional variational auto-encoder and detects topological changes between two images during the registration step. We account for both topological changes in the image under spatial variation and unexpected transformations. Our approach is validated on two tasks and datasets: detection of topological changes in microscopy images of cells, and unsupervised anomaly detection brain imaging.
| accept | This paper addresses the problem of *topology-aware registration* and presents a first approach towards this goal.
Reviews are quite mixed (even after the rebuttal phase) and the majority of reviewers perceived this work more as an *anomaly detection* approach than as a means to an end, i.e., topology-aware registration. This seems to be primarily due to the proxy experiment on anomaly detection. Nevertheless, given the non-existence of a suitable benchmark dataset to assess the ultimate objective of this work, this is an entirely reasonable way to go, in my view.
As to missing comparisons to recent state-of-the-art anomaly detection approaches, I do think that the authors make a fair point in their rebuttal: yes, one could compare to various anomaly detection methods, but this would, in essence, be a comparison against methods that cannot necessarily detect topological differences (which is the very task the presented method specifically addresses).
Overall, after reading the paper and carefully considering the reviews and the responses, I do not think that the changes required to address the reviewers' concerns are so major that they warrant rejection (e.g., the definition of topology, or clarification of voxel-wise losses). The required changes/adaptations are, in fact, quite minor, and the concise comments by the authors already largely clarify the issues. I am therefore recommending acceptance, based on the remarks outlined above. | train | [
"FSGC2nLO9tn",
"sS8CLBw9Xa",
"NruJWHdkOM2",
"vQJTXiAiJT",
"UD8BXxgQhDR",
"AIc1wNzJbR",
"V3UPxFgX4H5",
"oFsfpy0znd",
"x5jePx5ZE4i"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your careful response. It looks like there is a consensus that the paper could be improved from 1) definition of topology, 2) justification of using voxel-wise losses, 3) cohort used in the experiments, and 4) introducing more baselines. I, therefore, remain my rating as it is.",
" Thank you for y... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"sS8CLBw9Xa",
"NruJWHdkOM2",
"vQJTXiAiJT",
"x5jePx5ZE4i",
"oFsfpy0znd",
"V3UPxFgX4H5",
"nips_2021_a-Lbgfy9RqV",
"nips_2021_a-Lbgfy9RqV",
"nips_2021_a-Lbgfy9RqV"
] |
nips_2021_Si3SSyPDiJd | Rethinking the Variational Interpretation of Accelerated Optimization Methods | The continuous-time model of Nesterov's momentum provides a thought-provoking perspective for understanding the nature of the acceleration phenomenon in convex optimization. One of the main ideas in this line of research comes from the field of classical mechanics and proposes to link Nesterov's trajectory to the solution of a set of Euler-Lagrange equations relative to the so-called Bregman Lagrangian. In the last years, this approach led to the discovery of many new (stochastic) accelerated algorithms and provided a solid theoretical foundation for the design of structure-preserving accelerated methods. In this work, we revisit this idea and provide an in-depth analysis of the action relative to the Bregman Lagrangian from the point of view of calculus of variations. Our main finding is that, while Nesterov's method is a stationary point for the action, it is often not a minimizer but instead a saddle point for this functional in the space of differentiable curves. This finding challenges the main intuition behind the variational interpretation of Nesterov's method and provides additional insights into the intriguing geometry of accelerated paths.
| accept | This paper discusses a misconception (or implicit assumption) from some popular recent work that links Nesterov acceleration to continuous time equations. There was a very robust discussion between the reviewers and author(s), and also privately among the reviewers. We also solicited an extra review, and the AC and SAC had involved discussions.
Generally, the reviewers liked the paper, and there were no fatal flaws or mistakes. The writing is nice, and it's certainly not a hostile paper. The main argument against accepting the paper is that it is not clear that the misconception has led to incorrect research, and it is also not clear how influential the original arguments have been.
However, the paper offers more than just pointing out the misconception (it discusses regimes where Nesterov's method is optimal) and makes some nice connections. It's an enjoyable read, and useful to anyone interested in modern optimization. Furthermore, the original misconception had been repeated in high-profile venues (like a plenary talk at the ICM). While it's not certain that this paper is currently needed in order to fix state-of-the-art research, this paper does contribute to the literature and may be useful in the future.
Overall, this is a nice paper, and has a chance of making an impact, hence we're pleased to suggest its acceptance. | train | [
"24Zq9QkEmxh",
"Bed3M7BJni",
"QQpuZ10sNw",
"tTboIraEne",
"UwCcDMgODkU",
"VnNHFeQ9E1",
"PaIcR1VUg7H",
"UhJO7kGC87k",
"HUaqQ2D7zeX",
"kUD3IWB3CsE"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nSince you mentioned that you will reconsider your score if we address your main concerns, we would be grateful to get some feedback on the answers we provided. We are happy to clarify any point if needed.\n\nThank you very much, The authors",
" Thank you for your response, I have read it and w... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
2
] | [
"PaIcR1VUg7H",
"tTboIraEne",
"HUaqQ2D7zeX",
"UhJO7kGC87k",
"kUD3IWB3CsE",
"PaIcR1VUg7H",
"nips_2021_Si3SSyPDiJd",
"nips_2021_Si3SSyPDiJd",
"nips_2021_Si3SSyPDiJd",
"nips_2021_Si3SSyPDiJd"
] |
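For reference, the object at the center of the row above is the Bregman Lagrangian of Wibisono, Wilson, and Jordan and its action functional; the paper's finding is that Nesterov's trajectory makes the action stationary without, in general, minimizing it:

```latex
% Bregman Lagrangian (Wibisono-Wilson-Jordan form) and its action functional
\mathcal{L}(X, \dot{X}, t)
  \;=\; e^{\alpha_t + \gamma_t}\!\left( D_h\!\big(X + e^{-\alpha_t}\dot{X},\; X\big)
        \;-\; e^{\beta_t} f(X) \right),
\qquad
\mathcal{A}(X) \;=\; \int \mathcal{L}(X_t, \dot{X}_t, t)\, dt.
```

Here $D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x\rangle$ is the Bregman divergence of the distance-generating function $h$, and $\alpha_t, \beta_t, \gamma_t$ are the scaling functions of the framework.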
nips_2021__pmQOVi3gHx | Linear and Kernel Classification in the Streaming Model: Improved Bounds for Heavy Hitters | Arvind Mahankali, David Woodruff | accept | The reviewers largely agreed that the paper provides a clear improvement over prior work on the problem of learning "heavy-hitter" weights. The reviewers initially debated the value of the problem studied by the authors, and were concerned it might be of limited interest. However, the author response did a good job of further justifying the setting studied. | train | [
"EB1kn4CzmH8",
"a3h-cach0Kg",
"Q3OJPmkNt6h",
"Mz9S9H3Zb1l",
"VEG9yB7hRa-",
"NEeFGpRtYU9",
"b_MN2LKQX8h",
"LY5a4SH_BzF",
"7adx62FFOs",
"hAKYpoWwt1c",
"jSEfUWnU9Zn",
"_3jw1wcJqe",
"ezLdEXE3bh",
"sORgQIgSSZC",
"Y0xK890YSj4"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for the comment! We will add discussion about this not always being the case to the introduction of our paper. ",
" Regarding your concern that the mass of the weight vector might not concentrate on the top K weights for K small, we found experimentally that on the RCV1 dataset used in our paper, the hig... | [
-1,
-1,
-1,
-1,
7,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
-1,
-1,
4,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"Q3OJPmkNt6h",
"jSEfUWnU9Zn",
"Mz9S9H3Zb1l",
"hAKYpoWwt1c",
"nips_2021__pmQOVi3gHx",
"nips_2021__pmQOVi3gHx",
"ezLdEXE3bh",
"nips_2021__pmQOVi3gHx",
"nips_2021__pmQOVi3gHx",
"_3jw1wcJqe",
"Y0xK890YSj4",
"7adx62FFOs",
"NEeFGpRtYU9",
"VEG9yB7hRa-",
"nips_2021__pmQOVi3gHx"
] |
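For orientation on the row above, the textbook primitive behind heavy-hitter recovery in streams is CountSketch: hash each coordinate into buckets with random signs, and estimate any coordinate by the sign-corrected median across independent rows. The sketch below is that classical data structure with illustrative sizes, not the paper's improved algorithm.

```python
import numpy as np

class CountSketch:
    """Classical CountSketch for recovering heavy coordinates of a stream."""

    def __init__(self, reps=5, buckets=64, dim=10_000, seed=0):
        rng = np.random.default_rng(seed)
        self.h = rng.integers(0, buckets, size=(reps, dim))   # bucket hashes
        self.s = rng.choice([-1, 1], size=(reps, dim))        # sign hashes
        self.table = np.zeros((reps, buckets))

    def update(self, i, delta=1.0):
        for r in range(len(self.table)):
            self.table[r, self.h[r, i]] += self.s[r, i] * delta

    def estimate(self, i):
        return np.median([self.table[r, self.h[r, i]] * self.s[r, i]
                          for r in range(len(self.table))])

cs = CountSketch()
rng = np.random.default_rng(1)
for i in rng.integers(0, 10_000, size=5_000):
    cs.update(int(i))                     # light tail of the stream
for _ in range(500):
    cs.update(7)                          # one heavy coordinate
print(cs.estimate(7), cs.estimate(3))     # roughly 500 vs. near 0
```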
nips_2021_sUBSPowU3L5 | A PAC-Bayes Analysis of Adversarial Robustness | We propose the first general PAC-Bayesian generalization bounds for adversarial robustness, that estimate, at test time, how much a model will be invariant to imperceptible perturbations in the input. Instead of deriving a worst-case analysis of the risk of a hypothesis over all the possible perturbations, we leverage the PAC-Bayesian framework to bound the averaged risk on the perturbations for majority votes (over the whole class of hypotheses). Our theoretically founded analysis has the advantage to provide general bounds (i) that are valid for any kind of attacks (i.e., the adversarial attacks), (ii) that are tight thanks to the PAC-Bayesian framework, (iii) that can be directly minimized during the learning phase to obtain a robust model on different attacks at test time.
| accept | This paper studies the generalization of adversarial error using the PAC-Bayes framework. The authors provide generalization bounds that are independent of the type of perturbations.
The reviewers see this direction as novel and the results of interest. The more important concerns initially raised by the reviewers were addressed in the rebuttal (some reviewers increased their scores as a result). Overall, I think this paper makes an interesting contribution and I, therefore, recommend acceptance. Some reviewers still pointed out some limitations of the results and I would thus encourage the authors to add a discussion in the revised version.
| train | [
"XJqBhaN3d77",
"m8Ot5dG3w5",
"hFpmMC9CsYV",
"K02HvzGGBj2",
"Nc-m5SQtONI",
"rJpoqS65K3u",
"gQTRxS3chad",
"ky1h6HhGIgM",
"GlZsbwvljD",
"DMAY2nVWAJ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Reviewer,\n\nMany thanks for your answer and your support.\n\nAbout your last concern related to Comment 6, we are not really sure to exactly understand what you mean by “optimizing the distribution of perturbations”.\n\nWhat we can say is that from the theoretical point of view, the bound stands for any dis... | [
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3
] | [
"K02HvzGGBj2",
"nips_2021_sUBSPowU3L5",
"rJpoqS65K3u",
"gQTRxS3chad",
"nips_2021_sUBSPowU3L5",
"m8Ot5dG3w5",
"Nc-m5SQtONI",
"nips_2021_sUBSPowU3L5",
"DMAY2nVWAJ",
"nips_2021_sUBSPowU3L5"
] |
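As background for the row above, analyses of this kind build on classical PAC-Bayesian inequalities such as the McAllester-style bound below, which holds for any prior π chosen before seeing the m-sample S, with probability at least 1 − δ, simultaneously for all posteriors ρ (the paper's robustness-specific bounds are not reproduced here):

```latex
% Classical McAllester-style PAC-Bayes bound on the Gibbs risk
R(\rho) \;\le\; \widehat{R}_S(\rho)
  \;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) \;+\; \ln\frac{2\sqrt{m}}{\delta}}{2m}} .
```

Here $R(\rho)$ and $\widehat{R}_S(\rho)$ denote the expected and empirical risks of the Gibbs classifier drawn from $\rho$.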
nips_2021_auGY2UQfhSu | SE(3)-equivariant prediction of molecular wavefunctions and electronic densities | Machine learning has enabled the prediction of quantum chemical properties with high accuracy and efficiency, allowing to bypass computationally costly ab initio calculations. Instead of training on a fixed set of properties, more recent approaches attempt to learn the electronic wavefunction (or density) as a central quantity of atomistic systems, from which all other observables can be derived. This is complicated by the fact that wavefunctions transform non-trivially under molecular rotations, which makes them a challenging prediction target. To solve this issue, we introduce general SE(3)-equivariant operations and building blocks for constructing deep learning architectures for geometric point cloud data and apply them to reconstruct wavefunctions of atomistic systems with unprecedented accuracy. Our model achieves speedups of over three orders of magnitude compared to ab initio methods and reduces prediction errors by up to two orders of magnitude compared to the previous state-of-the-art. This accuracy makes it possible to derive properties such as energies and forces directly from the wavefunction in an end-to-end manner. We demonstrate the potential of our approach in a transfer learning application, where a model trained on low accuracy reference wavefunctions implicitly learns to correct for electronic many-body interactions from observables computed at a higher level of theory. Such machine-learned wavefunction surrogates pave the way towards novel semi-empirical methods, offering resolution at an electronic level while drastically decreasing computational cost. Additionally, the predicted wavefunctions can serve as initial guess in conventional ab initio methods, decreasing the number of iterations required to arrive at a converged solution, thus leading to significant speedups without any loss of accuracy or robustness. While we focus on physics applications in this contribution, the proposed equivariant framework for deep learning on point clouds is promising also beyond, say, in computer vision or graphics.
| accept | The authors present an SE(3) equivariant architecture for predicting symmetric Hamiltonian matrices, which can then be used to derive physical properties of molecules. This architecture provides a dramatic improvement in accuracy over baseline methods. The majority of reviewers agreed it was an excellent paper deserving acceptance. The one holdout reviewer had concerns about the scale of the experiments and recommended that results be presented for the entirety of the MD17 dataset. However I am convinced by the author response that this would not be feasible with the computational resources at their disposal, and I am willing to cut them some slack on this given the quality of results on the systems presented. Therefore I recommend that the paper be accepted. | train | [
"oOyCIyXcO4x",
"SF_WqKxlYD1",
"1qf2ra-Driy",
"LqE_ds4NG0Q",
"TL1rWBtkSlz",
"cUOjBJ253AK",
"CJgjc_NmAvE",
"wVDMLiwflyd",
"SB6ke_xOcB",
"lZGalNiSTRO",
"SsEC32zpvl_",
"bXq63XrOXbt",
"liqdfGC4dil",
"sFgzEgMzC9H",
"Iyq4QjCEDb"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" With the addition of the ablation studies and, importantly, the demonstration of a reduction in the time to solution for DFT calculations, the paper has improved significantly and passes the bar for acceptance at this conference. I've updated my score accordingly. ",
"\nThis contribution describes a method tha... | [
-1,
6,
-1,
-1,
-1,
8,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
2
] | [
"LqE_ds4NG0Q",
"nips_2021_auGY2UQfhSu",
"CJgjc_NmAvE",
"TL1rWBtkSlz",
"liqdfGC4dil",
"nips_2021_auGY2UQfhSu",
"bXq63XrOXbt",
"sFgzEgMzC9H",
"nips_2021_auGY2UQfhSu",
"SsEC32zpvl_",
"SB6ke_xOcB",
"cUOjBJ253AK",
"SF_WqKxlYD1",
"Iyq4QjCEDb",
"nips_2021_auGY2UQfhSu"
] |
nips_2021_YzasumDKCWV | Modified Frank Wolfe in Probability Space | We propose a novel Frank-Wolfe (FW) procedure for the optimization of infinite-dimensional functionals of probability measures - a task which arises naturally in a wide range of areas including statistical learning (e.g. variational inference) and artificial intelligence (e.g. generative adversarial networks). Our FW procedure takes advantage of Wasserstein gradient flows and strong duality results recently developed in Distributionally Robust Optimization so that gradient steps (in the Wasserstein space) can be efficiently computed using finite-dimensional, convex optimization methods. We show how to choose the step sizes in order to guarantee exponentially fast iteration convergence, under mild assumptions on the functional to optimize. We apply our algorithm to a range of functionals arising from applications in nonparametric estimation.
| accept | I carefully read the interactions between the reviewers and the authors, as initially there was a strong split between the reviewers. Some of the issues have been clarified in the meantime, so the overall assessment seems to be that this is a good paper. Certain issues regarding notions, the review of related methods, etc., have been raised, however. I would like the authors to give the reviews appropriate consideration when preparing their revision. The authors' answers have been taken into consideration. | train | [
"js7mp-FEie",
"KbhpTMjtCYv",
"OqJVTbbe2lZ",
"97J6meb4XA3",
"CL1LibrBjpA",
"MkMe8y8bkU0",
"6jDNxmEv0Z",
"_oTSrJ2x5Vh",
"XYPdEQjJcp",
"dupUBAvMJk",
"xcRYSjEjbkh",
"qAhWbk0hp8",
"b7Syevq9jDf",
"VQsP_7Or83j",
"8xeek7Ld_6C"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an algorithm for the minimization of functionals of probability distributions. The authors derive a convergence rate in the case of a smooth and PL function (under the geometry given by the Wasserstein distance of order 2). They also run some synthetic experiments of non-parametric estimation ... | [
5,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
9
] | [
2,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"nips_2021_YzasumDKCWV",
"js7mp-FEie",
"CL1LibrBjpA",
"nips_2021_YzasumDKCWV",
"XYPdEQjJcp",
"nips_2021_YzasumDKCWV",
"js7mp-FEie",
"97J6meb4XA3",
"_oTSrJ2x5Vh",
"6jDNxmEv0Z",
"8xeek7Ld_6C",
"b7Syevq9jDf",
"VQsP_7Or83j",
"nips_2021_YzasumDKCWV",
"nips_2021_YzasumDKCWV"
] |
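For orientation, the finite-dimensional Frank-Wolfe iteration that the row above lifts to probability space alternates a linear minimization oracle over the feasible set with a convex-combination step. The sketch below runs it over the probability simplex with the classical step size 2/(t+2); the quadratic objective is illustrative.

```python
import numpy as np

def frank_wolfe(grad, x0, T=100):
    """Classical Frank-Wolfe over the probability simplex.

    Each step solves min_{s in simplex} <grad f(x), s> (always attained
    at a vertex), then moves x toward that vertex by gamma_t = 2/(t+2).
    """
    x = np.asarray(x0, dtype=float)
    for t in range(T):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # LMO: best simplex vertex
        gamma = 2.0 / (t + 2.0)
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# minimize f(x) = ||x - c||^2 over the simplex (c lies inside the simplex)
c = np.array([0.2, 0.5, 0.3])
print(frank_wolfe(lambda x: 2 * (x - c), np.array([1.0, 0.0, 0.0])))
```

Because the iterate is always a convex combination of oracle outputs, no projection is needed; this is the projection-free property that the infinite-dimensional variant preserves.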
nips_2021_HTk8q08-zI | Bayesian Optimization of Function Networks | We consider Bayesian optimization of the output of a network of functions, where each function takes as input the output of its parent nodes, and where the network takes significant time to evaluate. Such problems arise, for example, in reinforcement learning, engineering design, and manufacturing. While the standard Bayesian optimization approach observes only the final output, our approach delivers greater query efficiency by leveraging information that the former ignores: intermediate output within the network. This is achieved by modeling the nodes of the network using Gaussian processes and choosing the points to evaluate using, as our acquisition function, the expected improvement computed with respect to the implied posterior on the objective. Although the non-Gaussian nature of this posterior prevents computing our acquisition function in closed form, we show that it can be efficiently maximized via sample average approximation. In addition, we prove that our method is asymptotically consistent, meaning that it finds a globally optimal solution as the number of evaluations grows to infinity, thus generalizing previously known convergence results for the expected improvement. Notably, this holds even though our method might not evaluate the domain densely, instead leveraging problem structure to leave regions unexplored. Finally, we show that our approach dramatically outperforms standard Bayesian optimization methods in several synthetic and real-world problems.
| accept | This paper addresses Bayesian optimization that leverages intermediate outputs as well as final outputs when they become available through evaluations in the function network. The problem setting is novel, extending the previous work [Astudillo, R. and Frazier, P., 2019]. A cascade of GPs is used as a surrogate model, leading to a non-Gaussian posterior process. A sample average approximation (SAA) approach is used to optimize the EI. Strong empirical results are provided to demonstrate that the proposed method indeed achieves superior performance. The strength of this paper lies in the novel problem setting and its usefulness in engineering applications, mainly for practitioners. The downside is the limited novelty of the proposed solution, since it is a direct employment of GPs and SAA. During the committee discussion period, I had a few communications with the reviewers. The problem setting, which appears often in engineering applications, is valuable for further study. Thus, even if the proposed solution is not novel, this work studies an important and valuable problem, and could serve as a starting point for new works that develop more sophisticated algorithms or deeper theory for the DAG-dependency setting.
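To make the SAA step concrete, here is a minimal sketch (my own illustration, not the authors' implementation; the function name and the convention of drawing the network-output samples from fixed base samples, so the estimate is deterministic in the candidate, are assumptions for this example):

```python
import numpy as np

def saa_expected_improvement(output_samples, incumbent):
    """SAA of EI: output_samples holds M posterior samples of the network's
    final output at a candidate x, generated from fixed base samples so that
    the average is a deterministic function of x and can be maximized."""
    return np.mean(np.maximum(output_samples - incumbent, 0.0))

# Toy usage with made-up numbers: three posterior samples, incumbent 1.2.
print(saa_expected_improvement(np.array([1.0, 1.5, 2.0]), 1.2))  # ~0.3667
```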
| train | [
"mxHvkrXZwye",
"cugmsbRnIj-",
"czZcy5XFVON",
"0M98A3qS9ef",
"SnA6XaBwrK9",
"3a5BNHft_kS",
"sR8MGfFSHzb",
"ITLWDHGXzY",
"k9ZAS6UG6V",
"PngVOMcodI",
"v3m6jeu2Mz3",
"x5xz6R5O45p",
"i7I_E7f3w_L",
"agKLngJiktd",
"-QcAYAZYDRy",
"D7Pw8yzBtQ0",
"55uzGPFC0w"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer r59T,\n\nThank you for confirming that our response has adequately addressed your concerns. We will make sure to take into account all the suggestions and concerns raised by the reviewing team in the revised version of our paper.\n\nSincerely,\n\nThe authors",
" I thank the authors for their respo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"cugmsbRnIj-",
"x5xz6R5O45p",
"ITLWDHGXzY",
"sR8MGfFSHzb",
"D7Pw8yzBtQ0",
"-QcAYAZYDRy",
"PngVOMcodI",
"nips_2021_HTk8q08-zI",
"ITLWDHGXzY",
"55uzGPFC0w",
"55uzGPFC0w",
"-QcAYAZYDRy",
"ITLWDHGXzY",
"D7Pw8yzBtQ0",
"nips_2021_HTk8q08-zI",
"nips_2021_HTk8q08-zI",
"nips_2021_HTk8q08-zI"
... |
nips_2021_tMFTT3BDEK9 | Look at What I’m Doing: Self-Supervised Spatial Grounding of Narrations in Instructional Videos | We introduce the task of spatially localizing narrated interactions in videos. Key to our approach is the ability to learn to spatially localize interactions with self-supervision on a large corpus of videos with accompanying transcribed narrations. To achieve this goal, we propose a multilayer cross-modal attention network that enables effective optimization of a contrastive loss during training. We introduce a divided strategy that alternates between computing inter- and intra-modal attention across the visual and natural language modalities, which allows effective training via directly contrasting the two modalities' representations. We demonstrate the effectiveness of our approach by self-training on the HowTo100M instructional video dataset and evaluating on a newly collected dataset of localized described interactions in the YouCook2 dataset. We show that our approach outperforms alternative baselines, including shallow co-attention and full cross-modal attention. We also apply our approach to grounding phrases in images with weak supervision on Flickr30K and show that stacking multiple attention layers is effective and, when combined with a word-to-region loss, achieves state of the art on recall-at-one and pointing hand accuracies.
| accept | This paper has a very nice insight, and its essence has been made much clearer by the discussion and the new notation introduced during the rebuttal period. The two reviewers who originally recommended rejection have upgraded their scores to accept following these discussions.
The authors have promised to make many updates to the paper, and should take note of the post-rebuttal statements of the reviewers. | train | [
"KiiqjHbTwQq",
"Qvd7wyHpVk4",
"12eP8XtCCw7",
"gdiAvi3Gqu",
"VaZblnems_n",
"xO8coqSDkbl",
"rbpa5NWKAE1",
"_6ki5-VE20f",
"_0IHRZumd5d",
"3bxLtGt0IQi",
"r_Z8A9s89Q0",
"pQSERV3Lba",
"FCqpcvniIck"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper concerns a new problem of grounding narrated interactions in video. It is similar to existing work on (object) phrase localization/grounding, but with a twist on grounding more complex narrations that could involve both entities and predicates (e.g., put tomatoes into the baking tray). The model trainin... | [
7,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_tMFTT3BDEK9",
"nips_2021_tMFTT3BDEK9",
"nips_2021_tMFTT3BDEK9",
"VaZblnems_n",
"xO8coqSDkbl",
"_0IHRZumd5d",
"12eP8XtCCw7",
"rbpa5NWKAE1",
"nips_2021_tMFTT3BDEK9",
"KiiqjHbTwQq",
"FCqpcvniIck",
"Qvd7wyHpVk4",
"nips_2021_tMFTT3BDEK9"
] |
nips_2021_jSz59N8NvUP | RETRIEVE: Coreset Selection for Efficient and Robust Semi-Supervised Learning | Krishnateja Killamsetty, Xujiang Zhao, Feng Chen, Rishabh Iyer | accept | This paper proposes RETRIEVE to address the computational cost issue in previous semi-supervised learning algorithms. It is well-written and well-motivated. The proposed idea is incremental but technically sound. The claims are well supported by theoretical analyses and extensive experimental results. | val | [
"-ZdF5tBPntN",
"XO3zqq3sFwx",
"5E8C9bdZA2i",
"Rlu8zdKxX2R",
"7UHjxhq9rnI",
"GQtSC21T-u",
"IkpXFCb-0Mi",
"STmlQj3H6w",
"nds2QwemB-K",
"mezD5vyNJM7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work extends previous approaches for coreset selection to the problem of semi-supervised learning (SSL). Experimental results show the approach has good performance even in settings with data imbalance and out of domain training data\n This work extends previous approaches for coreset selection to the proble... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"nips_2021_jSz59N8NvUP",
"STmlQj3H6w",
"nips_2021_jSz59N8NvUP",
"IkpXFCb-0Mi",
"-ZdF5tBPntN",
"mezD5vyNJM7",
"5E8C9bdZA2i",
"nds2QwemB-K",
"nips_2021_jSz59N8NvUP",
"nips_2021_jSz59N8NvUP"
] |
nips_2021_1Kof-nkmQB8 | Collaborating with Humans without Human Data | Collaborating with humans requires rapidly adapting to their individual strengths, weaknesses, and preferences. Unfortunately, most standard multi-agent reinforcement learning techniques, such as self-play (SP) or population play (PP), produce agents that overfit to their training partners and do not generalize well to humans. Alternatively, researchers can collect human data, train a human model using behavioral cloning, and then use that model to train "human-aware" agents ("behavioral cloning play", or BCP). While such an approach can improve the generalization of agents to new human co-players, it involves the onerous and expensive step of collecting large amounts of human data first. Here, we study the problem of how to train agents that collaborate well with human partners without using human data. We argue that the crux of the problem is to produce a diverse set of training partners. Drawing inspiration from successful multi-agent approaches in competitive domains, we find that a surprisingly simple approach is highly effective. We train our agent partner as the best response to a population of self-play agents and their past checkpoints taken throughout training, a method we call Fictitious Co-Play (FCP). Our experiments focus on a two-player collaborative cooking simulator that has recently been proposed as a challenge problem for coordination with humans. We find that FCP agents score significantly higher than SP, PP, and BCP when paired with novel agent and human partners. Furthermore, humans also report a strong subjective preference to partnering with FCP agents over all baselines.
| accept | This paper presents a simple trick to get multi-agent RL agents to cooperate effectively with a previously unseen agent (such as a human). The idea is to first train multiple agents with self-play and then train the final agent to cooperate with all of those agents as well as their past checkpoints. Experiments show successful results in the Overcooked game.
This submission was a pleasure to read. Three of the reviewers enthusiastically recommend acceptance, while the remaining reviewer recommends rejection because they consider it an incremental extension of domain randomization methods. In my opinion, this setting is substantially different from traditional uses of domain randomization, and while this work is certainly inspired by the idea of randomization helping generalization, the details are (as far as I know) novel. Apart from this, the reviewers didn't have any major objections. Since the submission seems high quality and ought to be of broad interest, I recommend it for a spotlight.
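For readers skimming, the FCP recipe summarized above reads roughly as the following schematic (my own pseudocode; `train_self_play` and `train_best_response` are hypothetical helpers, not the authors' code):

```python
def fictitious_co_play(n_seeds, checkpoint_steps):
    # Stage 1: build a diverse partner pool from independent self-play runs,
    # keeping intermediate checkpoints to cover a range of skill levels.
    partners = []
    for seed in range(n_seeds):
        final_agent, checkpoints = train_self_play(  # hypothetical helper
            seed, save_at=checkpoint_steps)
        partners += [final_agent, *checkpoints]
    # Stage 2: train the FCP agent as a best response to the frozen pool.
    return train_best_response(frozen_partners=partners)  # hypothetical helper
```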
| train | [
"1HFNaWFkDzu",
"-BF_G-Tluzh",
"eU5wbcwc4Mx",
"gBXUrjpkeG8",
"azxPcaPue5h",
"s8TIN3i9_zi",
"zzbIckH4PKF",
"x806KkPbffY",
"1mEKLYn5Kna",
"yws4Uc-Qo0r",
"ogLXzSJHYyc",
"E1y_AvFbHcv",
"9D54eG6_CW",
"uy_jQ_OEyf",
"Vi-38jmsdGr",
"fgmsScodKsY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your clarifications and acknowledgement of the limitations! I am maintaining my score of accept.",
" Thank you for your clarifications. I am maintaining my score of a strong accept.",
"The paper introduces a practical method, named Fictitious Co-Play, for agents to learn to collaborate with humans ... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
9
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"s8TIN3i9_zi",
"yws4Uc-Qo0r",
"nips_2021_1Kof-nkmQB8",
"azxPcaPue5h",
"E1y_AvFbHcv",
"9D54eG6_CW",
"x806KkPbffY",
"1mEKLYn5Kna",
"ogLXzSJHYyc",
"fgmsScodKsY",
"eU5wbcwc4Mx",
"Vi-38jmsdGr",
"uy_jQ_OEyf",
"nips_2021_1Kof-nkmQB8",
"nips_2021_1Kof-nkmQB8",
"nips_2021_1Kof-nkmQB8"
] |
nips_2021_f2Llmm_z5Sm | Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State | Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware. However, the supervised training of SNNs remains a hard problem due to the discontinuity of the spiking neuron model. Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks, and use surrogate derivatives or compute gradients with respect to the spiking time to deal with the problem. These approaches either accumulate approximation errors or only propagate information limitedly through existing spikes, and usually require information propagation along time steps with large memory costs and biological implausibility. In this work, we consider feedback spiking neural networks, which are more brain-like, and propose a novel training method that does not rely on the exact reverse of the forward computation. First, we show that the average firing rates of SNNs with feedback connections would gradually evolve to an equilibrium state along time, which follows a fixed-point equation. Then by viewing the forward computation of feedback SNNs as a black-box solver for this equation, and leveraging the implicit differentiation on the equation, we can compute the gradient for parameters without considering the exact forward procedure. In this way, the forward and backward procedures are decoupled and therefore the problem of non-differentiable spiking functions is avoided. We also briefly discuss the biological plausibility of implicit differentiation, which only requires computing another equilibrium. Extensive experiments on MNIST, Fashion-MNIST, N-MNIST, CIFAR-10, and CIFAR-100 demonstrate the superior performance of our method for feedback models with fewer neurons and parameters in a small number of time steps. Our code is available at \url{https://github.com/pkuxmq/IDE-FSNN}.
| accept | The paper proposes a novel method to train spiking neural networks (SNNs) by considering the equilibrium state of their firing rates. They exploit results on implicit differentiation to compute gradients in the network. They show that the method achieves state-of-the-art performance on several data sets, including CIFAR-10 and CIFAR-100.
The method is based on rigorous proofs about the equilibrium states of an SNN.
All reviewers agree that the paper presents interesting results. The manuscript is well-written and the experimental results are convincing. | train | [
"lmKmfugrivM",
"2VEj0Ugxbr8",
"U8bjejJJvB",
"d7QuiVpwjCg",
"5lbw3Xef6A8",
"TGtLE9leKyG",
"XC0rmV67J4",
"IE6DStnsTwb",
"1xbU_m82xAq"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I am satisfied to see that the authors analyzed the average firing rate between IF and LIF as I suggested, so I raise my score from 4 to 6, I am happy to see this paper would be accepted. ",
"This paper presented a training method for feedback spiking neural networks to tackle existing training problems with ba... | [
-1,
6,
-1,
-1,
-1,
-1,
8,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"U8bjejJJvB",
"nips_2021_f2Llmm_z5Sm",
"2VEj0Ugxbr8",
"1xbU_m82xAq",
"XC0rmV67J4",
"IE6DStnsTwb",
"nips_2021_f2Llmm_z5Sm",
"nips_2021_f2Llmm_z5Sm",
"nips_2021_f2Llmm_z5Sm"
] |
nips_2021_cCQAzuT5q4 | Online Selective Classification with Limited Feedback | Aditya Gangrade, Anil Kag, Ashok Cutkosky, Venkatesh Saligrama | accept | This paper considers an online learning setting in which the true label is revealed only when the algorithm abstains from making a prediction. Algorithms are proposed for several variants of this setting, and strong theoretical guarantees provide a trade-off between the number of mistakes and the number of abstentions. Some experiments demonstrate that these procedures can also be applied in practice. The work advances the theoretical understanding of selective classification, and as such could be of interest to many in the NeurIPS community. | train | [
"JLzQ9ohWryB",
"9Tl0d6YNBg",
"xcvkb1rIL-",
"pRWWCFFr0aX",
"kOlsCXa_xde",
"2UgY_1SECLa",
"kepGMTuC3YL",
"A7G83cDPdof",
"TCfeOPhHE_f",
"NwvSsm-LzvV",
"hU33cXBFkpf",
"1UDmW-PCxk",
"7eeCnDEglyg",
"cRKAO8g9dA7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces a new setting for online classification with abstention and provide algorithms with provable guarantees for both the stochastic and adversarial setting. The bounds obtained are tight and very well exposed in the paper. The paper overall is of high quality, however, the setting studied makes mu... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_cCQAzuT5q4",
"hU33cXBFkpf",
"nips_2021_cCQAzuT5q4",
"kOlsCXa_xde",
"2UgY_1SECLa",
"kepGMTuC3YL",
"A7G83cDPdof",
"TCfeOPhHE_f",
"JLzQ9ohWryB",
"xcvkb1rIL-",
"7eeCnDEglyg",
"cRKAO8g9dA7",
"nips_2021_cCQAzuT5q4",
"nips_2021_cCQAzuT5q4"
] |
nips_2021_kTy7bbm-4I4 | Controlled Text Generation as Continuous Optimization with Multiple Constraints | As large-scale language model pretraining pushes the state-of-the-art in text generation, recent work has turned to controlling attributes of the text such models generate. While modifying the pretrained models via fine-tuning remains the popular approach, it incurs a significant computational cost and can be infeasible due to a lack of appropriate data. As an alternative, we propose \textsc{MuCoCO}---a flexible and modular algorithm for controllable inference from pretrained models. We formulate the decoding process as an optimization problem that allows for multiple attributes we aim to control to be easily incorporated as differentiable constraints. By relaxing this discrete optimization to a continuous one, we make use of Lagrangian multipliers and gradient-descent-based techniques to generate the desired text. We evaluate our approach on controllable machine translation and style transfer with multiple sentence-level attributes and observe significant improvements over baselines.
| accept | This paper proposes MUCOCO, a constrained text generation method. Constrained text generation is formulated as generating text with multiple controlling attributes. MUCOCO turns discrete constrained decoding into differentiable optimization using techniques of continuous relaxation, Lagrangian multipliers, and exponential gradient descent. Experiments on three conditional text generation tasks, including text style transfer, machine translation, and paraphrasing, show the superior performance of the proposed MUCOCO.
The authors may adjust the main focus a bit accordingly. Additional discussion and/or comparison with other highly related methods (methods that can also handle constrained sentence generation with multiple constraints) could be added, as pointed out by reviewers. The authors should explain the inference procedure clearly. Additional discussion on the limitations of MUCOCO could be added (e.g., conditions for auxiliary objectives and compatibility of multiple constraints). | train | [
"D6TZCHZTHbf",
"7llBpkArI77",
"AmLRA-FqcJJ",
"pZkX2HaOKY3",
"xMuh_6-8T48",
"ZYbRNhxamsF",
"6YFb_eU9rk9",
"8oUMRWlrTPb",
"am6bJiQJrrO",
"ISwiORdXOes",
"IYk7pSs1eD",
"B4T-WCOwqE-",
"Lp3ITWiQAXm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. My concerns have been mostly resolved.",
" We thank you for your response and appreciate your feedback. We will make sure to clearly address the mentioned issues in the final version. ",
"This paper proposes a flexible controllable decoding algorithm called MUCOCO, which incorporates ... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"8oUMRWlrTPb",
"pZkX2HaOKY3",
"nips_2021_kTy7bbm-4I4",
"am6bJiQJrrO",
"ZYbRNhxamsF",
"6YFb_eU9rk9",
"Lp3ITWiQAXm",
"B4T-WCOwqE-",
"AmLRA-FqcJJ",
"IYk7pSs1eD",
"nips_2021_kTy7bbm-4I4",
"nips_2021_kTy7bbm-4I4",
"nips_2021_kTy7bbm-4I4"
] |
nips_2021_kQDPhAZHYi | S$^3$: Sign-Sparse-Shift Reparametrization for Effective Training of Low-bit Shift Networks | Xinlin Li, Bang Liu, Yaoliang Yu, Wulong Liu, Chunjing XU, Vahid Partovi Nia | accept | Following the authors' response, this paper had two very positive reviewers (8), one slightly positive reviewer (6), and one very negative reviewer (2).
The main contribution of this paper, which excited the positive reviewers, was that it showed for the first time that it is possible to train from scratch with shift-based quantization on ImageNet, and at a very low precision (3-bit). The surprising observation that the suggested re-parameterization + "densifying regularization" are needed is insightful and should be helpful for developing other methods.
The very negative reviewer was mainly concerned with the re-parameterization, and suggested that many weights should get "stuck" during optimization. Though the reviewer's argument initially had some errors in the phase-space diagram, we had a discussion about whether the corrected phase-space diagram would still exhibit such issues. However, during the discussion, I became convinced this is not a critical issue, because of the toy examples below (found in the discussion) by the most positive reviewer, in which we actually converge to the correct solution. Interestingly, this convergence seems to work because of the "densifying regularization", and I encourage the authors to verify and discuss this. The results added by the authors during the review process are useful for this.
I would also ask the authors to polish the writing to improve the readability and impact of this paper. I tend to agree with the negative reviewer that some parts of this work are imprecise and confusing (as did even the most positive reviewer).
**** Example ****
Suppose the training loss is $\frac12 (w_{ter} - 0)^2$; then $\frac{dL}{dw_{ter}} = g = w_{ter} = H(x)(2 H(y) - 1)$
Then, the STE gradient of the regularized loss L is
$\frac{dL}{dx} = ((2 H(y) - 1) H(x)) (2 H(y) - 1) - \alpha I_{x<0} = H(x) - \alpha H(-x)$ (i.e., $1$ for $x>0$, $-\alpha$ for $x<0$), where $H$ is the Heaviside function. Therefore the step $-\frac{dL}{dx}$ will always drive $x$ toward 0, and if $\alpha$ is small, we will have $x<0$ for most iterations.
Another example:
$L = \frac12 (w_{ter} - 1)^2$, $g = w_{ter} - 1 = H(x)(2 H(y) - 1) - 1$
$\frac{dL}{dx} = ((2 H(y) - 1) H(x) - 1) (2 H(y) - 1) - \alpha I_{x<0} = H(x) - (2 H(y) - 1) - \alpha H(-x) $
$\frac{dL}{dy} = 2 H(x) g = 2 H(x) (( H(x) (2 H(y) - 1)) -1) = 2 H(x) ((2 H(y) - 1) - 1) = 4 H(x) (H(y) - 1) $
so, the step $-\frac{dL}{dy} = -4H(x)(H(y) - 1) $ will be non zero only if $x>0$ and $y < 0$. If $x>0$ it will eventually become positive as required.
If, on the other hand, $y<0$, the step $-\frac{dL}{dx} = -H(x) - 1 + \alpha H(-x) $ so it will move down while x<0, and, indeed, stay negative. Therefore, (-1,-1) won't shift to (1,1) in this setup. | train | [
"LDLA4r7l2e7",
"MvofoYyC3ht",
"o27vwx-wVON",
"8_KjxEy7lw",
"XMEeZS_VneF",
"plX1l5QJq7-",
"VUxIhHDRzV",
"Fku4TSrhiI0",
"TM5Fp0E9DT",
"V054y7Reb7b",
"Pan5kmNKlcn",
"UPWvZxrfBc8",
"dOe5ZYcanaq",
"dD3kIL59_9E",
"qtxVY9XUcqS",
"1gRXh1fkrJ",
"wLy8r9LmMaJ",
"pblwQC7xMRl",
"fGZAVeo8SzQ",... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer KeAd, \n\nTo further clarify your concern, we have prepared further analysis on the trajectory of the weights. **The trajectory of the ternary weights in the decomposition space (w_sign and w_sparse) during the actual training clearly shows that the conclusion of your theoretical analysis is incorre... | [
-1,
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
2
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"o27vwx-wVON",
"nips_2021_kQDPhAZHYi",
"plX1l5QJq7-",
"nips_2021_kQDPhAZHYi",
"VUxIhHDRzV",
"dD3kIL59_9E",
"MvofoYyC3ht",
"MAE1K85z2ri",
"1gRXh1fkrJ",
"qtxVY9XUcqS",
"UPWvZxrfBc8",
"dOe5ZYcanaq",
"pblwQC7xMRl",
"MAE1K85z2ri",
"MvofoYyC3ht",
"fGZAVeo8SzQ",
"dD3kIL59_9E",
"8_KjxEy7lw... |
nips_2021_lR4aaWCQgB | Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions | Combining discrete probability distributions and combinatorial optimization problems with neural network components has numerous applications but poses several challenges. We propose Implicit Maximum Likelihood Estimation (I-MLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components. I-MLE is widely applicable as it only requires the ability to compute the most probable states and does not rely on smooth relaxations. The framework encompasses several approaches such as perturbation-based implicit differentiation and recent methods to differentiate through black-box combinatorial solvers. We introduce a novel class of noise distributions for approximating marginals via perturb-and-MAP. Moreover, we show that I-MLE simplifies to maximum likelihood estimation when used in some recently studied learning settings that involve combinatorial solvers. Experiments on several datasets suggest that I-MLE is competitive with and often outperforms existing approaches which rely on problem-specific relaxations.
| accept | The paper presents new methods for gradient estimation in computation graphs with certain types of discrete stochastic variables. Reviewers all voted accept (scores 6, 6, 6, 8). The consensus was that the paper has fresh ideas and there is evidence in favor of the method working well, but it also has some tradeoffs, e.g., in terms of added complexity in the computations. Several reviewers raised (non-fatal) concerns about clarity, which the authors are encouraged to address in the final revision. The meta-reviewer recommends accept. | train | [
"TiRB4M8kJs_",
"r3NuH2DxaCH",
"jmlh-ZeLSVu",
"UeLv6-UPMtz",
"8fYT7Ommy5",
"Fp7MSsItVHn",
"l7JGZGM_rM",
"arh_5nDK0lm",
"T2YiGuXzPBH",
"HdPyqwNBaCT",
"BOd5pWbV938"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nThank you for pointing this out. As stated in our previous answer, experiments of Table 2 use the target distribution introduced in Eq. (12). As additional experiments, we also consider the setting where the target distribution is the one given in Eq. (8), and where noise samples were drawn from a SoG distribut... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"UeLv6-UPMtz",
"l7JGZGM_rM",
"nips_2021_lR4aaWCQgB",
"arh_5nDK0lm",
"BOd5pWbV938",
"HdPyqwNBaCT",
"jmlh-ZeLSVu",
"T2YiGuXzPBH",
"nips_2021_lR4aaWCQgB",
"nips_2021_lR4aaWCQgB",
"nips_2021_lR4aaWCQgB"
] |
nips_2021_PnY8rTJGOuU | Scaling up Continuous-Time Markov Chains Helps Resolve Underspecification | Modeling the time evolution of discrete sets of items (e.g., genetic mutations) is a fundamental problem in many biomedical applications. We approach this problem through the lens of continuous-time Markov chains, and show that the resulting learning task is generally underspecified in the usual setting of cross-sectional data. We explore a perhaps surprising remedy: including a number of additional independent items can help determine time order, and hence resolve underspecification. This is in sharp contrast to the common practice of limiting the analysis to a small subset of relevant items, which is followed largely due to poor scaling of existing methods. To put our theoretical insight into practice, we develop an approximate likelihood maximization method for learning continuous-time Markov chains, which can scale to hundreds of items and is orders of magnitude faster than previous methods. We demonstrate the effectiveness of our approach on synthetic and real cancer data.
| accept | This paper describes a method for modeling the time-series changes in a discrete set of items. The particular application the authors have in mind is phylogenetic inference, where the items are genetic mutations. The method involves reframing and using continuous-time Markov chains. While moving to the continuous domain helps address the combinatorial nature of the problem, it introduces another problem - underspecification - which results in poor generalization, sensitivity, and false positive correlations. The authors propose a solution based on prior work (Schill et al., 2019) that employs so-called "unimportant items" to help recover the time order.
The reviewers noted two specific aspects of significance of the work. First, the paper provides bounds on the expected amount of information gain from including these "unimportant items". Second, the paper addresses an important problem in bioinformatics. Inferring the tumor evolution process is a well-studied problem in genomics that has been investigated extensively. In clinical applications, the tumor evolution may be of limited concern because the physician is primarily interested in treating the whole tumor as it presents (including any heterogeneity). But, in fundamental research, understanding the tumor evolution can inform our basic understanding of the processes that lead to tumor growth and metastasis, and therefore phylogenetic inference is a significant problem to address. It should be noted that for this method to be used in practice, a distinction must be made between "interesting" and "uninteresting" items. This may be done by the model or with prior knowledge. The authors have constructed their algorithm to allow the model to make this distinction, which may improve the generality for problems outside of phylogenetic inference.
After considering the author response and discussion, I am in agreement with the reviewers that this paper should be recommended for acceptance. The theoretical contributions may prove useful for a broad range of similar problems, and the practical utility is demonstrated on a data set relevant for phylogenetic inference.
| train | [
"diXkan9oNPC",
"odA4UeIHgO",
"3dpzzOCnZ3O",
"6MO3I8FjKeU",
"CikH1GAspX",
"LOnqtJzvAmF",
"3xj3ZN8oeaQ",
"FO-nWN2pCJ0",
"YCs23nxBXWn",
"kFv6GzdRgbm"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The two main points of novelty in our work are (a) the theoretical analysis of how the addition of independent items affects the inference of the CTMC parameters (Theorem 1), and (b) the formulation of the approximate likelihood maximization in section 4, which allows us to scale up the inference to hundreds of i... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"odA4UeIHgO",
"CikH1GAspX",
"YCs23nxBXWn",
"FO-nWN2pCJ0",
"kFv6GzdRgbm",
"3xj3ZN8oeaQ",
"nips_2021_PnY8rTJGOuU",
"nips_2021_PnY8rTJGOuU",
"nips_2021_PnY8rTJGOuU",
"nips_2021_PnY8rTJGOuU"
] |
nips_2021_CI0T_3l-n1 | Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark | Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance. In this paper, we address this issue for quadratic-cost transport---specifically, computation of the Wasserstein-2 distance, a commonly-used formulation of optimal transport in machine learning. To overcome the challenge of computing ground truth transport maps between continuous measures needed to assess these solvers, we use input-convex neural networks (ICNN) to construct pairs of measures whose ground truth OT maps can be obtained analytically. This strategy yields pairs of continuous benchmark measures in high-dimensional spaces such as spaces of images. We thoroughly evaluate existing optimal transport solvers using these benchmark measures. Even though these solvers perform well in downstream tasks, many do not faithfully recover optimal transport maps. To investigate the cause of this discrepancy, we further test the solvers in a setting of image generation. Our study reveals crucial limitations of existing solvers and shows that increased OT accuracy does not necessarily correlate to better results downstream.
| accept | In this paper, the authors propose an interesting approach to evaluating transport plans. There exist some concerns about the methodology. However, to my knowledge, I have never seen this type of study before for OT problems, so the problem itself is novel. Thus, I would also like to vote for acceptance. For the camera-ready version, I expect the authors to revise the paper based on the reviewers' comments. | train | [
"2HopmnEew4X",
"tjdBU8Fwwe",
"KQG9H94QduJ",
"XdAz0rP2Zqs",
"bxe4R-OorXs",
"b8kCOR67m5Y",
"B__GHau1rbm",
"zo3xwz9DXAH",
"sxP89rpK8dS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Reviewer oJwC,\n\nWe thank you for your review and appreciate your time reviewing our paper.\n\nThe end of the rebuttal phase is approaching. We would be grateful if we could hear your feedback regarding our answers to the reviews. We are happy to address any remaining points during the remaining period.\n\n... | [
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
5
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
5
] | [
"B__GHau1rbm",
"nips_2021_CI0T_3l-n1",
"b8kCOR67m5Y",
"nips_2021_CI0T_3l-n1",
"nips_2021_CI0T_3l-n1",
"tjdBU8Fwwe",
"sxP89rpK8dS",
"XdAz0rP2Zqs",
"nips_2021_CI0T_3l-n1"
] |
nips_2021_h7FqQ6hCK18 | Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients | We consider a standard federated learning (FL) setup where a group of clients periodically coordinate with a central server to train a statistical model. We develop a general algorithmic framework called FedLin to tackle some of the key challenges intrinsic to FL, namely objective heterogeneity, systems heterogeneity, and infrequent and imprecise communication. Our framework is motivated by the observation that under these challenges, various existing FL algorithms suffer from a fundamental speed-accuracy conflict: they either guarantee linear convergence but to an incorrect point, or convergence to the global minimum but at a sub-linear rate, i.e., fast convergence comes at the expense of accuracy. In contrast, when the clients' local loss functions are smooth and strongly convex, we show that FedLin guarantees linear convergence to the global minimum, despite arbitrary objective and systems heterogeneity. We then establish matching upper and lower bounds on the convergence rate of FedLin that highlight the effects of infrequent, periodic communication. Finally, we show that FedLin preserves linear convergence rates under aggressive gradient sparsification, and quantify the effect of the compression level on the convergence rate. Notably, our work is the first to provide tight linear convergence rate guarantees, and constitutes the first comprehensive analysis of gradient sparsification in FL.
| accept | This paper introduces the FedLin algorithm (which could be seen as an adaptation of FedSVRG) for cross-silo federated learning (without client sampling) and derives rigorous complexity guarantees.
The reviewers commended the theoretical results, in particular the simple convergence proof of FedLin and the corresponding (algorithm-specific) lower bound.
Partial communication compression was studied as an additional contribution. Yet, this aspect was assessed more critically by the reviewers, as compression of the parameters is not supported.
The reviewers are of the opinion that their concerns were adequately addressed by the author's response (and that the promised changes by the authors will be implemented in the final version).
Additionally, I strongly encourage the authors to also include:
- a discussion of the relation to 'FedSVRG' [[Konecny et al, Federated Optimization: Distributed Machine Learning for On-Device Intelligence, 2016]](https://arxiv.org/pdf/1610.02527.pdf) and possibly additional literature related to 'federated SVRG' variants. In this regard, the claims on novelty should be phrased more carefully - in particular, also more carefully than in the author's response, as e.g. the mentioned algorithmic differences seem quite small and partially irrelevant (i.e., omitting a discussion of heterogeneity for an algorithm that does not depend on heterogeneity does not seem to be a limitation, etc.).
- a discussion of the 'client sampling' aspect would be appreciated. The requirement that FedLin perform a pass over all clients (and use twice as much communication as FedAvg) seems quite limiting in practice. | train | [
"NeMvyylHXj0",
"OsBxs8jdG2",
"Rx48wp06fEs",
"DN6yJOvzQt0",
"am8IFO2uSu",
"25Q-AGodUGb",
"5Go0-JAGRYy",
"oneFm8nh16",
"IA-Z4KWTtpj",
"QlskDFAMjpf"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
" Dear AC,\n\nThank you for pointing us to this specific algorithm. We took a close look at Algorithm 4 (Federated SVRG) in reference [1]. In what follows, we discuss the similarities and differences of FedLin with FedSVRG. We note here that (as far as we could tell), the FedSVRG algorithm in [1] does not come with... | [
-1,
-1,
8,
5,
7,
8,
-1,
-1,
-1,
-1
] | [
-1,
-1,
3,
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"OsBxs8jdG2",
"nips_2021_h7FqQ6hCK18",
"nips_2021_h7FqQ6hCK18",
"nips_2021_h7FqQ6hCK18",
"nips_2021_h7FqQ6hCK18",
"nips_2021_h7FqQ6hCK18",
"DN6yJOvzQt0",
"am8IFO2uSu",
"Rx48wp06fEs",
"25Q-AGodUGb"
] |
nips_2021_anxHcl9_sE | On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms | Zeroth-order (ZO) optimization is widely used to handle challenging tasks, such as query-based black-box adversarial attacks and reinforcement learning. Various attempts have been made to integrate prior information into the gradient estimation procedure based on finite differences, with promising empirical results. However, their convergence properties are not well understood. This paper makes an attempt to fill up this gap by analyzing the convergence of prior-guided ZO algorithms under a greedy descent framework with various gradient estimators. We provide a convergence guarantee for the prior-guided random gradient-free (PRGF) algorithms. Moreover, to further accelerate over greedy descent methods, we present a new accelerated random search (ARS) algorithm that incorporates prior information, together with a convergence analysis. Finally, our theoretical results are confirmed by experiments on several numerical benchmarks as well as adversarial attacks.
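As background for the estimators discussed in this abstract, a plain random gradient-free estimator with a naively appended prior direction looks like the sketch below (my own illustration; the paper's PRGF combines prior and random directions with a more careful weighting, and the parameter values here are illustrative):

```python
import numpy as np

def rgf_gradient(f, x, n_dirs=10, mu=1e-4, prior=None, rng=np.random):
    """Finite-difference gradient estimate from random unit directions,
    with an optional (normalized) prior direction appended naively."""
    dirs = [rng.randn(*x.shape) for _ in range(n_dirs)]
    dirs = [u / np.linalg.norm(u) for u in dirs]
    if prior is not None:
        dirs.append(prior / np.linalg.norm(prior))
    fx = f(x)
    return sum((f(x + mu * u) - fx) / mu * u for u in dirs) / len(dirs)

# Toy usage: for f(x) = ||x||^2 the estimate aligns with 2x (up to scaling).
x0 = np.ones(5)
print(rgf_gradient(lambda z: np.sum(z ** 2), x0, prior=2 * x0))
```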
| accept | While there were some initial concerns from the reviewers, these were all clearly resolved following the author rebuttal, with the reviewers in question acknowledging the depth and clarity of the responses. The reviewers are now in agreement that this paper is a valuable addition to the literature on zero-order optimization, and is suitable for publication at NeurIPS. I do not have any specific reviewer points to highlight here, but I do ask that the authors carefully consider all of the reviewer comments when forming the camera-ready version. | train | [
"i4o-7runq8S",
"WDGaY5zfp0",
"8z4OobMDV96",
"Nh2TXfpBEH2",
"EGKI_7K_I0w",
"HGxAADjTpG5",
"lqYC-ch84p",
"mcX_-gw9oIx",
"n3pXVPgmIOu",
"OkM-D41qqPf",
"qaEFqpcs9R",
"did1kV5BJs6",
"Ven49k5WU59",
"KUtOzHMj8EQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you very much for the update. We highly appreciate that.",
"The paper provides a convergence analysis of a class of prior-guided zeroth-order (ZO) optimization algorithms and gives a prior-guided variant of the accelerated random search (ARS) algorithm. \nContribution:\n1. Provides a complete convergence ... | [
-1,
7,
-1,
-1,
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7
] | [
-1,
3,
-1,
-1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3
] | [
"8z4OobMDV96",
"nips_2021_anxHcl9_sE",
"qaEFqpcs9R",
"EGKI_7K_I0w",
"did1kV5BJs6",
"nips_2021_anxHcl9_sE",
"mcX_-gw9oIx",
"Ven49k5WU59",
"nips_2021_anxHcl9_sE",
"n3pXVPgmIOu",
"WDGaY5zfp0",
"HGxAADjTpG5",
"KUtOzHMj8EQ",
"nips_2021_anxHcl9_sE"
] |
nips_2021_V5prUHOrOP4 | Revisit Multimodal Meta-Learning through the Lens of Multi-Task Learning | Multimodal meta-learning is a recent problem that extends conventional few-shot meta-learning by generalizing its setup to diverse multimodal task distributions. This setup makes a step towards mimicking how humans make use of a diverse set of prior skills to learn new skills. Previous work has achieved encouraging performance. In particular, in spite of the diversity of the multimodal tasks, previous work claims that a single meta-learner trained on a multimodal distribution can sometimes outperform multiple specialized meta-learners trained on individual unimodal distributions. The improvement is attributed to knowledge transfer between different modes of task distributions. However, there is no deep investigation to verify and understand the knowledge transfer between multimodal tasks. Our work makes two contributions to multimodal meta-learning. First, we propose a method to quantify knowledge transfer between tasks of different modes at a micro-level. Our quantitative, task-level analysis is inspired by the recent transference idea from multi-task learning. Second, inspired by hard parameter sharing in multi-task learning and a new interpretation of related work, we propose a new multimodal meta-learner that outperforms existing work by considerable margins. While the major focus is on multimodal meta-learning, our work also attempts to shed light on task interaction in conventional meta-learning. The code for this project is available at https://miladabd.github.io/KML.
| accept | The submission tackles the problem of meta-learning on a multimodal task distribution. It introduces an analytical methodology inspired from recent work on transference in multi-task learning and proposes a new multimodal meta-learning approach called Kernel Modulation (KML) which is claimed to outperform competing approaches.
Reviewers found the problem tackled to be important to the research community and the transference analysis to be an interesting and valuable contribution borrowed from the multi-task learning literature. They expressed reservations about the way in which the contributions were framed (in that the transference analysis feels disconnected from the KML contribution) and how many important details are relegated to the supplementary material. However, given that the authors were very responsive and open to incorporating their feedback, the reviewers and I feel positive about acceptance, if the changes promised are incorporated in the final version (see summary here: https://openreview.net/forum?id=V5prUHOrOP4¬eId=l5jMUHLSraH). | test | [
"Sh1efH3DeQ5",
"l5jMUHLSraH",
"vjixE3wjDM3",
"uLYzzyEpGNz",
"V2SpMkzI0Iv",
"vBFxawUS5Bv",
"Mu-qttUtWJE",
"eAf3ciCL99T",
"SGY31A3d6Da",
"ZX8ZuxA_F01",
"QzkQG8VdDFM",
"OtLdhKBLjit",
"jOc9hSWvPnk",
"F7Oq1SQbbGR",
"ZxDdtNlBbC",
"wBwxhexIabv",
"kfNo8OL3cOr",
"my3oP64IyOW",
"paubX5pZVY... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"... | [
"- The paper analyses the important aspect of multimodal meta-learning in the context of few-shot classification. In this setting, the task distribution contains tasks sampled from multiple datasets with different input domains and labels, introducing an extra layer of complexity. It has been shown that sharing a... | [
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_V5prUHOrOP4",
"uLYzzyEpGNz",
"vBFxawUS5Bv",
"nips_2021_V5prUHOrOP4",
"nips_2021_V5prUHOrOP4",
"V2SpMkzI0Iv",
"eAf3ciCL99T",
"SGY31A3d6Da",
"wBwxhexIabv",
"QzkQG8VdDFM",
"paubX5pZVY",
"Sh1efH3DeQ5",
"nips_2021_V5prUHOrOP4",
"45AXxQGfY2",
"Sh1efH3DeQ5",
"Sh1efH3DeQ5",
"V2SpM... |
nips_2021_GlbCMt4vSFv | Dynamic Sasvi: Strong Safe Screening for Norm-Regularized Least Squares | A recently introduced technique, called "safe screening," for a sparse optimization problem allows us to identify irrelevant variables in the early stages of optimization. In this paper, we first propose a flexible framework for safe screening based on the Fenchel--Rockafellar duality and then derive a strong safe screening rule for norm-regularized least squares using the proposed framework. We refer to the proposed screening rule for norm-regularized least squares as "dynamic Sasvi" because it can be interpreted as a generalization of Sasvi. Unlike the original Sasvi, it does not require the exact solution of a more strongly regularized problem; hence, it works safely in practice. We show that our screening rule always eliminates more features compared with the existing state-of-the-art methods.
| accept | All reviewers agree that the derivation of a new screening rule is interesting, and that the mathematical
results are strong and lead to (slight) computational acceleration compared to SOTA methods.
The authors are encouraged to release their code for numerical benchmarks with other screening rules and sparse solvers.
(there exists an open-source project for this, named BenchOpt)
| train | [
"osQyEuTgScC",
"5HT0wVd0i_G",
"j6mDuQW0dD",
"UDdlQisAreG",
"ZjvOs66MwYY",
"SlbOmpUlQ99",
"avny7LhDHZ4",
"STa160H2jRm",
"JuP5naBGQNA",
"nqe_xrtGzYY",
"2r3H2pqQkzh"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" After reading the author response and other reviews, I would like to keep my original score.",
" Thank you again for your feedback. We are glad of your evaluation for our theoretical results. We will add some results on other Lasso-like problems.",
" Thank you again for your feedback. We will release the code... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
3,
4
] | [
"JuP5naBGQNA",
"nqe_xrtGzYY",
"ZjvOs66MwYY",
"nqe_xrtGzYY",
"avny7LhDHZ4",
"nips_2021_GlbCMt4vSFv",
"SlbOmpUlQ99",
"nqe_xrtGzYY",
"2r3H2pqQkzh",
"nips_2021_GlbCMt4vSFv",
"nips_2021_GlbCMt4vSFv"
] |
nips_2021_-OrwaD3bG91 | What Matters for Adversarial Imitation Learning? | Adversarial imitation learning has become a popular framework for imitation in continuous control. Over the years, several variations of its components were proposed to enhance the performance of the learned policies as well as the sample complexity of the algorithm. In practice, these choices are rarely tested all together in rigorous empirical studies. It is therefore difficult to discuss and understand what choices, among the high-level algorithmic options as well as low-level implementation details, matter. To tackle this issue, we implement more than 50 of these choices in a generic adversarial imitation learning framework and investigate their impacts in a large-scale study (>500k trained agents) with both synthetic and human-generated demonstrations. We analyze the key results and highlight the most surprising findings.
| accept | This paper presents an extensive experimental study on implementation choices in adversarial imitation learning. All reviewers were leaning towards acceptance, since many observations made in the paper could be useful to practitioners. I am also recommending acceptance.
| train | [
"cikdB49moXT",
"KcdgbfklhFS",
"ykVnN4Qeaf-",
"Cb0Hkm2Zsbe",
"2EAXcvS2yCT",
"E8W3ZXBIYrm",
"K6S93iLv1qh",
"UowR7JNecpG",
"AelDBWNxU_A"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for answering our rebuttal.\nReleasing the policies would mean sharing over 350GBs of data, which is quite complicated. However, along with the notebook, we will share the raw result data of the experiments, which will allow users to analyze them in very different ways than we did if they want to. We... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
2,
5
] | [
"KcdgbfklhFS",
"E8W3ZXBIYrm",
"nips_2021_-OrwaD3bG91",
"nips_2021_-OrwaD3bG91",
"ykVnN4Qeaf-",
"AelDBWNxU_A",
"UowR7JNecpG",
"nips_2021_-OrwaD3bG91",
"nips_2021_-OrwaD3bG91"
] |
nips_2021_o6-k168bBD8 | Sequential Causal Imitation Learning with Unobserved Confounders | "Monkey see monkey do" is an age-old adage, referring to naive imitation without a deep understanding of a system's underlying mechanics. Indeed, if a demonstrator has access to information unavailable to the imitator (monkey), such as a different set of sensors, then no matter how perfectly the imitator models its perceived environment (See), attempting to directly reproduce the demonstrator's behavior (Do) can lead to poor outcomes. Imitation learning in the presence of a mismatch between demonstrator and imitator has been studied in the literature under the rubric of causal imitation learning (Zhang et. al. 2020), but existing solutions are limited to single-stage decision-making. This paper investigates the problem of causal imitation learning in sequential settings, where the imitator must make multiple decisions per episode. We develop a graphical criterion that is both necessary and sufficient for determining the feasibility of causal imitation, providing conditions when an imitator can match a demonstrator's performance despite differing capabilities. Finally, we provide an efficient algorithm for determining imitability, and corroborate our theory with simulations.
| accept | It is my pleasure to recommend acceptance of this paper.
As noted by reviewers:
*The paper is clear, rigorous, and well written. Several examples inserted in the text facilitate the understanding and development of ideas. The paper clearly explains its theoretical limitations while addressing an important problem with concrete applications in ML.*
Also:
*The paper contributes to the foundations of imitation learning and attempts to answer an important question of whether an imitator is able to imitate a demonstrator if there exists sensor input mismatch.* | train | [
"sdaBSohrWnN",
"7Kl596q3q9G",
"yOqn7FMYIv4",
"vM9KuYYTbuB",
"gw5OARU3J-k",
"R_z5t7o-Kn",
"JpAocVHLmv",
"vzfH6Fbn9ml",
"uzHS54RWOg",
"X6B6dSoSZU8",
"BamAHd3aKS",
"eTrON3K6TUU",
"0cnJyfcHRi-",
"YXChRYKsyqO"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper investigates the problem of determining imitability in casual imitation learning in sequential settings. This is an extension from a previous work that studies the same problem in single-stage settings. The authors propose necessary and sufficient conditions for determining whether an imitator is able t... | [
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
9,
7,
7
] | [
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
1
] | [
"nips_2021_o6-k168bBD8",
"uzHS54RWOg",
"vzfH6Fbn9ml",
"X6B6dSoSZU8",
"nips_2021_o6-k168bBD8",
"JpAocVHLmv",
"gw5OARU3J-k",
"YXChRYKsyqO",
"sdaBSohrWnN",
"0cnJyfcHRi-",
"eTrON3K6TUU",
"nips_2021_o6-k168bBD8",
"nips_2021_o6-k168bBD8",
"nips_2021_o6-k168bBD8"
] |
nips_2021_yewqeLly5D8 | Topic Modeling Revisited: A Document Graph-based Neural Network Perspective | Most topic modeling approaches are based on the bag-of-words assumption, where each word is required to be conditionally independent in the same document. As a result, both of the generative story and the topic formulation have totally ignored the semantic dependency among words, which is important for improving the semantic comprehension and model interpretability. To this end, in this paper, we revisit the task of topic modeling by transforming each document into a directed graph with word dependency as edges between word nodes, and develop a novel approach, namely Graph Neural Topic Model (GNTM). Specifically, in GNTM, a well-defined probabilistic generative story is designed to model both the graph structure and word sets with multinomial distributions on the vocabulary and word dependency edge set as the topics. Meanwhile, a Neural Variational Inference (NVI) approach is proposed to learn our model with graph neural networks to encode the document graphs. Besides, we theoretically demonstrate that Latent Dirichlet Allocation (LDA) can be derived from GNTM as a special case with similar objective functions. Finally, extensive experiments on four benchmark datasets have clearly demonstrated the effectiveness and interpretability of GNTM compared with state-of-the-art baselines.
| accept | Reviewers had positive comments about the family of topic models explored here, which uses graph neural networks to remove the word-independence assumptions of classical models like LDA. Some reviewers raised questions about experimental comparisons to related work, which were reasonably addressed by the author response; please be sure to include these comparisons in future revisions. The manuscript text also needs to be polished to improve readability and clarify several details raised in reviews. | train | [
"M-BulZuVFY",
"sa7RfK3ZTrG",
"bV04QEFKTVp",
"9k8dIGPU4vF",
"DhnJjbR0ZiV",
"aW5maeYCS1t",
"3jr-uUTL6ua",
"bIAJR9CCh8r",
"JNlDotd6RZK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors provide a novel generative model to learn topics using graph modeling based on the word network, following a (now classic) variational encoding approach. The documents are encoded as graphs by modeling an edge between two words if they appear in the same window, which is similar to the e... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_yewqeLly5D8",
"9k8dIGPU4vF",
"nips_2021_yewqeLly5D8",
"M-BulZuVFY",
"JNlDotd6RZK",
"bIAJR9CCh8r",
"bV04QEFKTVp",
"nips_2021_yewqeLly5D8",
"nips_2021_yewqeLly5D8"
] |
nips_2021__6DawVPqyl | Hard-Attention for Scalable Image Classification | Athanasios Papadopoulos, Pawel Korus, Nasir Memon | accept | This work proposes reducing the computational cost of running inference on high-resolution images by leveraging a multi-scale attention architecture. The method progressively employs higher-resolution imagery to zoom in on regions of interest that aid in discriminative training, while ignoring any computation on non-informative regions. The resulting model is tested on ImageNet, fMoW and CUB-200-2011 against several baseline architectures, including Saccader, DRAM, BagNet, and EfficientNet, and shows favorable performance in terms of computational cost versus accuracy. The reviewers did voice some concerns about the lack of testing on high-resolution imagery, which remain unresolved. Overall, the reviewers were favorable toward this work, and the concerns, while notable, did not rise to a significant enough level to prevent acceptance. This paper will be accepted to the conference.
| test | [
"yutXL0r9LIb",
"HOOFE3h3u1o",
"c5vB_lI4Mp-",
"-qvc1qp-KrY",
"9hYuuFWiO9j",
"ey1lDGsWwDD",
"4WRYQVGM4G0",
"sDnOa4aAOXm",
"XEiGQOHAT0J",
"YN0AyjL9ft6",
"80_TzULiomF",
"bIGiW6R7uTT",
"I2i3PRh4A3h",
"J7eD56ANXE",
"WzNFw2bJf9j",
"xFcKb_VdpL-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I like the paper and the idea. The one point that I proposed to make the paper stronger, namely also comparing with non-hard attention models, was not addressed by the authors. So I would not further increase by initial rating (still voting for accept, though) ",
" I appreciate the response from the authors. I... | [
-1,
-1,
8,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
4,
-1,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"-qvc1qp-KrY",
"YN0AyjL9ft6",
"nips_2021__6DawVPqyl",
"9hYuuFWiO9j",
"80_TzULiomF",
"nips_2021__6DawVPqyl",
"I2i3PRh4A3h",
"nips_2021__6DawVPqyl",
"J7eD56ANXE",
"xFcKb_VdpL-",
"WzNFw2bJf9j",
"c5vB_lI4Mp-",
"ey1lDGsWwDD",
"sDnOa4aAOXm",
"nips_2021__6DawVPqyl",
"nips_2021__6DawVPqyl"
] |
nips_2021_njIekVo3wLP | Fast Routing under Uncertainty: Adaptive Learning in Congestion Games via Exponential Weights | Dong Quan Vu, Kimon Antonakopoulos, Panayotis Mertikopoulos | accept | This work studies the question of learning an equilibrium in routing games and provides a smooth interpolation between static results with $O(1/T^2)$ convergence and worst-case results with $O(1/\sqrt{T})$ convergence rates. This is a relatively narrow contribution, but the reviewers found the results interesting and technically non-trivial. | test | [
"gNCvsFsoM4U",
"ijQOzTr9MpN",
"Sw_AKRqVIy",
"ic2nb2gZG5Z",
"LErxGQMPJBS",
"8Qz3Pu3aqfB",
"YSK7o1y5Km_",
"YUtcoEBSUTj"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the equilibrium computation of congestion games and proposes a new algorithm, named ADAWEIGHT, that achieves $O(1/\\sqrt{T})$ convergence in the stochastic regime and $O(1/T^2)$ in the static regime. The algorithm has several desirable properties: \n\n1. it attains the best rates in the stochast... | [
6,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
3,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"nips_2021_njIekVo3wLP",
"gNCvsFsoM4U",
"YUtcoEBSUTj",
"8Qz3Pu3aqfB",
"YSK7o1y5Km_",
"nips_2021_njIekVo3wLP",
"nips_2021_njIekVo3wLP",
"nips_2021_njIekVo3wLP"
] |
nips_2021_S2-j0ZegyrE | Profiling Pareto Front With Multi-Objective Stein Variational Gradient Descent | Finding diverse and representative Pareto solutions from the Pareto front is a key challenge in multi-objective optimization (MOO). In this work, we propose a novel gradient-based algorithm for profiling Pareto front by using Stein variational gradient descent (SVGD). We also provide a counterpart of our method based on Langevin dynamics. Our methods iteratively update a set of points in a parallel fashion to push them towards the Pareto front using multiple gradient descent, while encouraging the diversity between the particles by using the repulsive force mechanism in SVGD, or diffusion noise in Langevin dynamics. Compared with existing gradient-based methods that require predefined preference functions, our method can work efficiently in high dimensional problems, and can obtain more diverse solutions evenly distributed in the Pareto front. Moreover, our methods are theoretically guaranteed to converge to the Pareto front. We demonstrate the effectiveness of our method, especially the SVGD algorithm, through extensive experiments, showing its superiority over existing gradient-based algorithms.
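As an editorial illustration of the mechanism this abstract describes (not code from the paper), the following minimal NumPy sketch shows the generic SVGD particle update that the method builds on; in the paper's multi-objective variant, `score` would be replaced by a multiple-gradient-descent direction, and the step size, bandwidth `h`, and function names here are illustrative assumptions.

```python
import numpy as np

def svgd_step(particles, score, step=1e-2, h=1.0):
    # particles: (n, d) array; score(x): gradient-like direction at x.
    n = len(particles)
    diffs = particles[:, None, :] - particles[None, :, :]      # x_i - x_j
    k = np.exp(-(diffs ** 2).sum(-1) / (2 * h ** 2))            # RBF kernel matrix
    s = np.stack([score(x) for x in particles])
    attract = k @ s                                             # kernel-smoothed descent
    repulse = (diffs * (k / h ** 2)[:, :, None]).sum(axis=1)    # pushes particles apart
    return particles + step * (attract + repulse) / n
```

The repulsive term is exactly the "repulsive force mechanism" the abstract credits with keeping the particles spread along the Pareto front.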
| accept | This paper proposes and studies a novel algorithm, inspired by Stein variational gradient descent, whose aim is to find distinct elements on the Pareto front of a multi-objective optimisation problem. Theoretical and empirical results support the proposed method. All reviewers agreed that the paper would make an excellent contribution to NeurIPS, but I suggest the author(s) carefully consider the comments of Reviewer 6uVc, who explains that there are elements of the presentation of theoretical results that ought to be clarified in the manuscript. | train | [
"EBhujXweIM-",
"k-N7LDAwSGR",
"x8vCVRf7hB",
"VGHY2EgPipN",
"Gyi28WQtzPH",
"jxD93Ps82g",
"74hOvSzei9w",
"it3wJQCHAYw",
"ZKEOyQQGN4",
"mSgy27rvG6C",
"uFMmxqjibYU",
"Vg35uHfucR",
"jS0O9eHSzp0",
"zg24v36slof",
"WI8J6iysLF5",
"Ub8FqC2BT94",
"6LTBjuCGMgC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their detailed response to my comments and the additional clarifications and experiments in their general response. Hopefully, the authors can revise their work accordingly based on all the comments of the reviewers.",
" I thank the authors for their detailed replies to my ... | [
-1,
-1,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"Vg35uHfucR",
"jS0O9eHSzp0",
"it3wJQCHAYw",
"jxD93Ps82g",
"nips_2021_S2-j0ZegyrE",
"mSgy27rvG6C",
"nips_2021_S2-j0ZegyrE",
"uFMmxqjibYU",
"nips_2021_S2-j0ZegyrE",
"Gyi28WQtzPH",
"74hOvSzei9w",
"6LTBjuCGMgC",
"Ub8FqC2BT94",
"nips_2021_S2-j0ZegyrE",
"nips_2021_S2-j0ZegyrE",
"nips_2021_S2... |
nips_2021_utt-q6jW5_w | MAP Propagation Algorithm: Faster Learning with a Team of Reinforcement Learning Agents | Nearly all state-of-the-art deep learning algorithms rely on error backpropagation, which is generally regarded as biologically implausible. An alternative way of training an artificial neural network is through treating each unit in the network as a reinforcement learning agent, and thus the network is considered as a team of agents. As such, all units can be trained by REINFORCE, a local learning rule modulated by a global signal that is more consistent with biologically observed forms of synaptic plasticity. Although this learning rule follows the gradient of return in expectation, it suffers from high variance and thus the low speed of learning, rendering it impractical to train deep networks. We therefore propose a novel algorithm called MAP propagation to reduce this variance significantly while retaining the local property of the learning rule. Experiments demonstrated that MAP propagation could solve common reinforcement learning tasks at a similar speed to backpropagation when applied to an actor-critic network. Our work thus allows for the broader application of teams of agents in deep reinforcement learning.
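For context on the learning rule this abstract starts from (an illustration, not the paper's MAP propagation algorithm itself), here is a minimal sketch of Williams-style REINFORCE for a single Bernoulli-logistic unit; all names and the learning rate are illustrative assumptions.

```python
import numpy as np

def reinforce_unit_update(w, x, reward, lr=0.01, rng=np.random):
    # Bernoulli-logistic unit: fire with probability p = sigmoid(w . x);
    # the weight update is a purely local eligibility (a - p) * x scaled
    # by a global reward signal, following Williams' REINFORCE.
    p = 1.0 / (1.0 + np.exp(-w @ x))
    a = float(rng.random() < p)          # stochastic activation
    return w + lr * reward * (a - p) * x, a
```

The high variance of this per-unit estimator is precisely what the proposed MAP propagation is designed to reduce.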
| accept | This paper considers the problem of biologically plausible learning rules. For this, it looks at a local learning rule that uses REINFORCE to learn local updates for individual neurons. The main contribution is a novel variance reduction scheme for REINFORCE. While there were some fundamental questions raised on biologically plausible learning rules (which is an active area of research), the reviewers found this a significant contribution and the paper interesting. As such, I would recommend acceptance of this paper.
"ySOHz2XPLRf",
"1JaiGJg5H7y",
"0pyxsFyrrPB",
"7LyebE5JM7M",
"K1WjxW3JJsZ",
"TntFbxWqfb7",
"adW4dMKCX7R",
"JWfLVHLDqj",
"vdZxpRggpBZ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I think that author's for their response and pointing me to appendix F which indeed is helpful.\n\nI'm not sure I entirely agree with the argument against including other \"biologically plausible\" baselines.\n\nOverall, I was already positive about the paper so I'm leaving my scores unchanged.",
" We thank the... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
2,
1,
3,
4
] | [
"7LyebE5JM7M",
"TntFbxWqfb7",
"vdZxpRggpBZ",
"JWfLVHLDqj",
"adW4dMKCX7R",
"nips_2021_utt-q6jW5_w",
"nips_2021_utt-q6jW5_w",
"nips_2021_utt-q6jW5_w",
"nips_2021_utt-q6jW5_w"
] |
nips_2021_1GTpBZvNUrk | TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up | Yifan Jiang, Shiyu Chang, Zhangyang Wang | accept | This submission demonstrates that GANs with transformer-based architectures can learn to generate high-quality and high-resolution images, achieving strong results competitive with (or in some cases better than) convolution-based GANs on a variety of benchmark datasets.
Reviewers were generally in agreement that the paper is well-executed and does a nice job of tackling the scale-related challenges that arise from applying transformers to high resolution images. Although not all results are state-of-the-art, the paper makes use of very non-traditional architectures and as such doesn’t benefit as much from the tricks of the trade that have been developed and refined over several years of progress in convolutional GAN models, so this should not preclude acceptance. Another criticism that came up in review was the lack of results in the class-conditional setting (e.g. ImageNet). I would agree with the authors that it’s fair to save this for future work especially given the thoroughness of the evaluation in unconditional settings. Reviewers pointed out certain aspects of the method that weren’t clear and/or well-ablated in the original submission (e.g. what happens if transformers are used for only G and/or only D?) and the authors provided thorough responses including additional results/ablations. These results and clarifications should of course be included in the camera-ready version of the paper.
Given its strong image generation results using non-traditional architectures of current interest to much of the NeurIPS audience (with findings/ideas potentially useful beyond image generation), I recommend accepting the submission. | train | [
"8BdC5-9b9GZ",
"VuwMIHTvmY4",
"l1rO2UgJGNX",
"uGpW2MOH-ZJ",
"ND2-7WDdH5o",
"cVjIZvMTuu",
"W65TywK48BS",
"tjb1AyPlJM",
"Vzc7k0jhHJ",
"naOr0ehTWyO",
"THh1tBuDKH",
"ihckdu36jc1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThis work proposes TransGAN by replacing the CNN-based structure of both the generator and the discriminator in GANs with Transformer-based structure. To reduce the computing load, grid self-attention is proposed. Meanwhile, a relative position encoding scheme is introduced to the attention module to make the mo... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"nips_2021_1GTpBZvNUrk",
"ND2-7WDdH5o",
"cVjIZvMTuu",
"tjb1AyPlJM",
"8BdC5-9b9GZ",
"ihckdu36jc1",
"naOr0ehTWyO",
"THh1tBuDKH",
"nips_2021_1GTpBZvNUrk",
"nips_2021_1GTpBZvNUrk",
"nips_2021_1GTpBZvNUrk",
"nips_2021_1GTpBZvNUrk"
] |
nips_2021_YscYPF8bU13 | A Central Limit Theorem for Differentially Private Query Answering | Jinshuo Dong, Weijie Su, Linjun Zhang | accept | The authors theoretically analyze optimal privacy-utility tradeoffs when answering an n-dimensional query under differential privacy as n grows large, and provide new evidence pointing to the optimality of the Gaussian mechanism. Four expert reviewers voted accept, with scores 6, 7, 7, 8, and made overall very positive comments about the paper. Two reviewers raised questions about the relationship between this paper and [DRS21]. The authors clarified that this paper considers correlated noise, while [DRS21] considers only independent noise, which satisfied the concerns of one reviewer, who raised their score. This point is probably worth clarifying in the final revision. | train | [
"GOWKT-lw_9x",
"jk_RA-5uLd",
"yx6xwZLIUl8",
"wwNEnTPX2J0",
"Zm8D_QyETq",
"3DIlL7aHUt",
"7QtRZFRVJkc",
"UVoVMQ-1NPT",
"VFGrYoMiA0h",
"Tq_n5sZdWfp"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for the clarification.",
"This paper studies the privacy-accuracy trade-off of answering a single n-dimensional query with ell-2 sensitivity 1, under differential privacy. This problem has been studied and the optimal noise distribution the Gaussian, and its upper and lower error bounds match up to co... | [
-1,
8,
6,
7,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
1
] | [
"7QtRZFRVJkc",
"nips_2021_YscYPF8bU13",
"nips_2021_YscYPF8bU13",
"nips_2021_YscYPF8bU13",
"3DIlL7aHUt",
"yx6xwZLIUl8",
"wwNEnTPX2J0",
"jk_RA-5uLd",
"Tq_n5sZdWfp",
"nips_2021_YscYPF8bU13"
] |
nips_2021_Nfbe1usrgx4 | Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent | What is the information leakage of an iterative randomized learning algorithm about its training data, when the internal state of the algorithm is \emph{private}? How much is the contribution of each specific training epoch to the information leakage through the released model? We study this problem for noisy gradient descent algorithms, and model the \emph{dynamics} of R\'enyi differential privacy loss throughout the training process. Our analysis traces a provably \emph{tight} bound on the R\'enyi divergence between the pair of probability distributions over parameters of models trained on neighboring datasets. We prove that the privacy loss converges exponentially fast, for smooth and strongly convex loss functions, which is a significant improvement over composition theorems (which over-estimate the privacy loss by upper-bounding its total value over all intermediate gradient computations). For Lipschitz, smooth, and strongly convex loss functions, we prove optimal utility with a small gradient complexity for noisy gradient descent algorithms.
| accept | There was a lot of productive discussion after the initial reviews. All reviewers are happy with the results in the paper and find the paper an excellent contribution to the problem of differentially private learning.
In particular, instead of the standard approach which shows privacy loss composes over multiple rounds, this paper provides a highly interesting new analysis that shows the Renyi-DP function (up to some order of \alpha) converges exponentially to some fixed value, so that more rounds do not introduce additional privacy loss if we only release the last iterate. While the results apply only to cases where we make some of the nicest assumptions (e.g., strong convexity + strong smoothness), it is the first result of its kind and is significantly stronger compared to existing attempts to address this niche problem, i.e., privacy amplification by iteration.
Overall I think it's a good paper with original ideas that many readers will pick up.
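For readers unfamiliar with the object of the analysis, a minimal sketch of the noisy gradient descent iteration whose last iterate is released follows; this is an editorial illustration, with the step size and noise scale as placeholders (the paper's guarantees additionally depend on smoothness and strong-convexity constants).

```python
import numpy as np

def noisy_gd(grad, w0, eta, sigma, steps, seed=0):
    # Full-batch gradient descent with fresh Gaussian noise every step;
    # only the final iterate w is released as the trained model.
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - eta * grad(w) + sigma * rng.standard_normal(w.shape)
    return w
```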
| train | [
"XPtuHmkBYX",
"C2eRSz38y9p",
"khFY-_Ey0-Y",
"j67cDzzaWb1",
"SNH-_9tafa",
"dfZUvDTYrFa",
"QGGYlELQOU",
"SqPMEdfRBt9",
"mmKtd91sRKx",
"0dW_lvDG632",
"Gfws8AyJ-N",
"6MYSTmDYAB2",
"-fGX3vShKlc",
"iDd2zxRfhy0"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the feedbacks and suggestions. We will make sure to include all the discussions about restrictions of our assumptions in the main text or in the appendix, and we will fix the notation issues for future versions.",
" I'm satisfied with the authors response. I increased my score. Please make sure you i... | [
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"C2eRSz38y9p",
"0dW_lvDG632",
"nips_2021_Nfbe1usrgx4",
"dfZUvDTYrFa",
"nips_2021_Nfbe1usrgx4",
"6MYSTmDYAB2",
"SNH-_9tafa",
"SNH-_9tafa",
"SNH-_9tafa",
"khFY-_Ey0-Y",
"-fGX3vShKlc",
"SNH-_9tafa",
"nips_2021_Nfbe1usrgx4",
"nips_2021_Nfbe1usrgx4"
] |
nips_2021_n11B-1GmTJl | Data driven semi-supervised learning | We consider a novel data driven approach for designing semi-supervised learning algorithms that can effectively learn with only a small number of labeled examples. We focus on graph-based techniques, where the unlabeled examples are connected in a graph under the implicit assumption that similar nodes likely have similar labels. Over the past two decades, several elegant graph-based semi-supervised learning algorithms for inferring the labels of the unlabeled examples given the graph and a few labeled examples have been proposed. However, the problem of how to create the graph (which impacts the practical usefulness of these methods significantly) has been relegated to heuristics and domain-specific art, and no general principles have been proposed. In this work we present a novel data driven approach for learning the graph and provide strong formal guarantees in both the distributional and online learning formalizations. We show how to leverage problem instances coming from an underlying problem domain to learn the graph hyperparameters for commonly used parametric families of graphs that provably perform well on new instances from the same domain. We obtain low regret and efficient algorithms in the online setting, and generalization guarantees in the distributional setting. We also show how to combine several very different similarity metrics and learn multiple hyperparameters, our results hold for large classes of problems. We expect some of the tools and techniques we develop along the way to be of independent interest, for data driven algorithms more generally.
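To make the hyperparameter-learning setting concrete (an illustrative sketch, not the paper's algorithm), here is one parametric graph family of the kind the abstract refers to: a Gaussian-kernel similarity graph whose bandwidth `sigma` (and optional sparsification threshold) would be the quantities tuned across problem instances; all names are assumptions.

```python
import numpy as np

def gaussian_graph(X, sigma, thresh=None):
    # Dense Gaussian-kernel similarity graph over rows of X; `sigma`
    # (and the optional threshold) are the hyperparameters a data-driven
    # procedure would tune across related problem instances.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / sigma ** 2)
    np.fill_diagonal(W, 0.0)
    if thresh is not None:
        W = W * (W >= thresh)
    return W
```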
| accept | The reviewers generally consider this a strong theoretical submission that should be accepted despite a few shortcomings in the experiments. The author feedback clarified a few open questions and should be incorporated in the final version. The reviewers appreciated the solid theoretical work with impact to graph-based semi-supervised learning and beyond. | train | [
"PkFigWcdNh2",
"jMGVQ1v2Dy0",
"xIRd1liFg1D",
"-5jtcpvXWZW",
"bbeWMVx24Q",
"r1F-LVWq135",
"83YEcWFIiLH",
"PkhHA9y7Eeu",
"dXIA2NW0OTf",
"q7RaxGnS5kf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to learn the underlying graphs for graph-based semi-supervised learning problems. So far, graph-based SSL is usually grounded in KNN-like graphs where distances are computed according to some measure. The present paper now learns the parameters of the measures (kernels) as well as a threshold to de... | [
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6
] | [
2,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_n11B-1GmTJl",
"nips_2021_n11B-1GmTJl",
"PkhHA9y7Eeu",
"r1F-LVWq135",
"PkFigWcdNh2",
"q7RaxGnS5kf",
"dXIA2NW0OTf",
"jMGVQ1v2Dy0",
"nips_2021_n11B-1GmTJl",
"nips_2021_n11B-1GmTJl"
] |
nips_2021_8YSqxvRhi-Q | Meta-Learning via Learning with Distributed Memory | We demonstrate that efficient meta-learning can be achieved via end-to-end training of deep neural networks with memory distributed across layers. The persistent state of this memory assumes the entire burden of guiding task adaptation. Moreover, its distributed nature is instrumental in orchestrating adaptation. Ablation experiments demonstrate that providing relevant feedback to memory units distributed across the depth of the network enables them to guide adaptation throughout the entire network. Our results show that this is a successful strategy for simplifying meta-learning -- often cast as a bi-level optimization problem -- to standard end-to-end training, while outperforming gradient-based, prototype-based, and other memory-based meta-learning strategies. Additionally, our adaptation strategy naturally handles online learning scenarios with a significant delay between observing a sample and its corresponding label -- a setting in which other approaches struggle. Adaptation via distributed memory is effective across a wide range of learning tasks, ranging from classification to online few-shot semantic segmentation.
| accept | The paper adopts a simple approach to meta-learning: running a recurrent network through tasks and back-propagating to learn the adaptation. They find that simple deep convolutional LSTM networks (with details) outperform many of the architectures previously studied on standard benchmarks and, interestingly, also work in the classic supervised setting at a state-of-the-art level (as a single system).
On the positive side, the method is simple and the experiments are done carefully. The major drawback is that it is not tested on harder problems and more modern online learning settings. Pushing the system in this direction would provide a major improvement to the paper. Despite this, the current results are still sufficiently interesting. There are a few points in terms of the writing of the paper that should be improved, and I hope the authors will address them in the final version.
"guneoPm0W5",
"yb8w8md0bM",
"Hfeikn5YK5C",
"umZkla0ku7Q",
"Tj77AR2-DS",
"vXXkMYbZ8_8",
"VsIdYq7V92I",
"QjBThiIJp-3",
"6jBS_nRDVfn",
"cjCO3uUK6Z",
"cmOOToqmIBY",
"kmRN1MRP4zx",
"t9RMLE24lYv",
"4L_xu0a1WC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes to adapt Meta RNNs (memory-based meta-learning) by instantiating multiple layers of LSTMs where some of them are convolutional.\nThe authors refer to this as distributed memory due to each layer having its own LSTM-based memory.\nThey demonstrate good performance in few-shot learning and continu... | [
7,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
5,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_8YSqxvRhi-Q",
"cmOOToqmIBY",
"nips_2021_8YSqxvRhi-Q",
"cjCO3uUK6Z",
"nips_2021_8YSqxvRhi-Q",
"VsIdYq7V92I",
"QjBThiIJp-3",
"kmRN1MRP4zx",
"4L_xu0a1WC",
"Hfeikn5YK5C",
"guneoPm0W5",
"Tj77AR2-DS",
"nips_2021_8YSqxvRhi-Q",
"nips_2021_8YSqxvRhi-Q"
] |
nips_2021_0p0gt1Pn2Gv | Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling | Integrating physics models within machine learning models holds considerable promise toward learning robust models with improved interpretability and abilities to extrapolate. In this work, we focus on the integration of incomplete physics models into deep generative models. In particular, we introduce an architecture of variational autoencoders (VAEs) in which a part of the latent space is grounded by physics. A key technical challenge is to strike a balance between the incomplete physics and trainable components such as neural networks for ensuring that the physics part is used in a meaningful manner. To this end, we propose a regularized learning method that controls the effect of the trainable components and preserves the semantics of the physics-based latent variables as intended. We not only demonstrate generative performance improvements over a set of synthetic and real-world datasets, but we also show that we learn robust models that can consistently extrapolate beyond the training distribution in a meaningful manner. Moreover, we show that we can control the generative process in an interpretable manner.
| accept | The paper proposes to integrate a physics-based model within the Variational Auto-Encoder framework. Two of the reviewers are in favor of the paper and point out that the paper possesses many interesting features:
- The paper generalizes the integration of physical domain knowledge into ML models beyond the additive combination of learned components with physics ODEs.
- It applies physics integration in the context of deep generative modeling.
- It shows how to overcome the well-known problem of VAE decoders and latent spaces being overly flexible through regularization.
- The ideas in the paper are evaluated extensively across multiple experimental domains.
The third reviewer is inclided towards the rejection of the paper. However, the reviewer did not take part in the discussion and spent only an hour on the paper.
Overall, I find the points raised by the reviewers properly addressed by the authors in the rebuttal. Similarly, two reviewers seem to be satisfied with the rebuttal. Therefore, I am in favor of accepting the paper. | train | [
"2smmkrnl17U",
"cHPsf5jqStf",
"aZvIwTrSamf",
"ZCnsHmpKdey",
"x7WE7cR8Zy9",
"cDxgkpTKx8A",
"mCCEWQ4jK1H",
"1YaxVNfjqw_",
"_FNf9hriME",
"gLCKG5Yil26"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for checking our response. We are glad to hear it helped the clarity. Meanwhile, we are wondering if you still feel some concerns that keep the rating to be 6 (and not more). In the initial review, you commented as follows:\n\n> Overall, the authors propose an interesting and empirically succe... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"ZCnsHmpKdey",
"nips_2021_0p0gt1Pn2Gv",
"gLCKG5Yil26",
"_FNf9hriME",
"_FNf9hriME",
"gLCKG5Yil26",
"cHPsf5jqStf",
"_FNf9hriME",
"nips_2021_0p0gt1Pn2Gv",
"nips_2021_0p0gt1Pn2Gv"
] |
nips_2021_9PnKduzf-FT | Characterizing the risk of fairwashing | Fairwashing refers to the risk that an unfair black-box model can be explained by a fairer model through post-hoc explanation manipulation. In this paper, we investigate the capability of fairwashing attacks by analyzing their fidelity-unfairness trade-offs. In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., data points that are being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model. We also demonstrate that fairwashing attacks can transfer across black-box models, meaning that other black-box models can perform fairwashing without explicitly using their predictions. This generalization and transferability of fairwashing attacks imply that their detection will be difficult in practice. Finally, we propose an approach to quantify the risk of fairwashing, which is based on the computation of the range of the unfairness of high-fidelity explainers.
| accept | This is an experimental study on the phenomenon of "fairwashing", namely an explanation model making an unfair blackbox model appear fairer than it is. In particular, the authors show that fidelity of an explanation model is not necessarily an indication of "fairness-fidelity"; that is, an explanation model may be misleading with respect to a given fairness measure while scoring high on a fidelity metric. The study demonstrates this phenomenon with respect to various fairness metrics and explanation models.
The reviewers appreciated the clarity of the experimental setup and presentation. Moreover, the phenomenon studied here is clearly important and of interest to the ML community.
On the negative side, the submission does not attempt a formal/theoretical analysis of the phenomenon. Section 4 seems speculative in nature, empirically exploring the range of unfairness among explanation models of a given fidelity. It is shown that the relationship can differ across datasets, but no attempt to formally analyze this seems to have been made.
| train | [
"ozwzVPvJUW",
"i4o2w4vPHve",
"MgzdULdArXk",
"dX0PqfG2NP1",
"jw4MNsqKOU",
"R6UR5ma1xIr",
"xcy9ypYTQ94",
"6Djkue_xo5N",
"nux7Y0xaAb5",
"QkpOxQwQJmW",
"l-zndYAfdMy",
"WIVGVy81L5T",
"k1Y8mW2mv0q",
"EDWUKRW5pZe",
"_NDgf7Yfpv",
"L-2ldIOkHq",
"f5xJqAg6DkQ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your feedback.",
" We thank the reviewer for the response. Our responses to the questions are below.\n\n**Comment on calibration**\n\nWe thank the reviewer for the clarification. We believe that any measurable property can be “washed”. That is, having an explainer that exhibits positive values for... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"MgzdULdArXk",
"dX0PqfG2NP1",
"_NDgf7Yfpv",
"k1Y8mW2mv0q",
"xcy9ypYTQ94",
"nips_2021_9PnKduzf-FT",
"EDWUKRW5pZe",
"QkpOxQwQJmW",
"nips_2021_9PnKduzf-FT",
"WIVGVy81L5T",
"nips_2021_9PnKduzf-FT",
"nux7Y0xaAb5",
"f5xJqAg6DkQ",
"R6UR5ma1xIr",
"L-2ldIOkHq",
"nips_2021_9PnKduzf-FT",
"nips_... |
nips_2021_ejo1_Weiart | Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples | Model quantization is known as a promising method to compress deep neural networks, especially for inferences on lightweight mobile or edge devices. However, model quantization usually requires access to the original training data to maintain the accuracy of the full-precision models, which is often infeasible in real-world scenarios for security and privacy issues. A popular approach to perform quantization without access to the original data is to use synthetically generated samples, based on batch-normalization statistics or adversarial learning. However, the drawback of such approaches is that they primarily rely on random noise input to the generator to attain diversity of the synthetic samples. We find that this is often insufficient to capture the distribution of the original data, especially around the decision boundaries. To this end, we propose Qimera, a method that uses superposed latent embeddings to generate synthetic boundary supporting samples. For the superposed embeddings to better reflect the original distribution, we also propose using an additional disentanglement mapping layer and extracting information from the full-precision model. The experimental results show that Qimera achieves state-of-the-art performances for various settings on data-free quantization. Code is available at https://github.com/iamkanghyunchoi/qimera.
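A minimal PyTorch sketch of the superposed-latent-embedding idea described above (illustrative only; `emb`, the mixing weight, and the noise scale are assumptions, and the paper's disentanglement mapping layer is omitted):

```python
import torch

def superposed_latent(emb, c1, c2, alpha=0.5, noise_std=0.1):
    # Interpolate the learned embeddings of two class labels so a
    # conditional generator synthesizes samples near their decision
    # boundary; `emb` is assumed to be an nn.Embedding over labels.
    z = alpha * emb(c1) + (1.0 - alpha) * emb(c2)
    return z + noise_std * torch.randn_like(z)
```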
| accept | This paper addresses post-training data-free quantization to compress neural networks, an important domain of NN research. The reviewers found the approach, generating samples around the NN's decision boundaries (boundary supporting samples), and associated methods well motivated (though there were some clarification requests in the reviews, which the authors fulfilled). There were some concerns about similarities to prior work, both in motivations and methods: the authors agreed to include these and I encourage the authors to be generous in giving motivational credit to these works. The authors also provide additional experimental results in the rebuttal, which further demonstrated the effectiveness in the method. Therefore I recommend acceptance as a poster. | train | [
"o5Aeo6vdTiD",
"ZTk_TI7pK0u",
"vgpMVazJqYh",
"5iV2SIjVWGe",
"5uYJo1XNxeE",
"Ogll1UVwEG1",
"JBuhaclwtsq",
"RCBlNUXQ2W"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In the manuscript, the authors propose Qimera, which introduces superposed latent embeddings, disentanglement mapping, and extracted embedding initialization to generate boundary supporting samples.\nThese synthetic samples are useful for post-training data-free quantization.\nThe main contributions of the manuscr... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
5,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_ejo1_Weiart",
"5iV2SIjVWGe",
"nips_2021_ejo1_Weiart",
"RCBlNUXQ2W",
"vgpMVazJqYh",
"o5Aeo6vdTiD",
"nips_2021_ejo1_Weiart",
"nips_2021_ejo1_Weiart"
] |
nips_2021_8AgtfqiHUhs | Embedding Principle of Loss Landscape of Deep Neural Networks | Understanding the structure of loss landscape of deep neural networks (DNNs) is obviously important. In this work, we prove an embedding principle that the loss landscape of a DNN "contains" all the critical points of all the narrower DNNs. More precisely, we propose a critical embedding such that any critical point, e.g., local or global minima, of a narrower DNN can be embedded to a critical point/affine subspace of the target DNN with higher degeneracy and preserving the DNN output function. Note that, given any training data, differentiable loss function and differentiable activation function, this embedding structure of critical points holds. This general structure of DNNs is starkly different from other nonconvex problems such as protein-folding. Empirically, we find that a wide DNN is often attracted by highly-degenerate critical points that are embedded from narrow DNNs. The embedding principle provides a new perspective to study the general easy optimization of wide DNNs and unravels a potential implicit low-complexity regularization during the training. Overall, our work provides a skeleton for the study of loss landscape of DNNs and its implication, by which a more exact and comprehensive understanding can be anticipated in the near future.
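As a worked instance of the kind of critical embedding the abstract describes (a standard one-step neuron-splitting construction, stated here as an illustration rather than the paper's exact definition):

```latex
% One-step "splitting" embedding: copy hidden neuron k of a narrower
% network and split its outgoing weight a_k, which leaves the output
% unchanged for every input x,
\[
  a_k\,\sigma(w_k^\top x)
  = (\beta a_k)\,\sigma(w_k^\top x) + \big((1-\beta)\,a_k\big)\,\sigma(w_k^\top x),
  \qquad \beta \in \mathbb{R},
\]
% so a critical point of the narrow network maps to a one-parameter
% affine family of critical points of the wider network.
```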
| accept | This paper studies critical points in the loss landscape of deep neural networks and proves an "embedding principle", which the reviewers find novel and interesting. This could provide a framework towards deeper understanding of deep learning loss landscape. Most of the questions were raised regarding to the presentation and clarification. The reviewers are overall satisfied with the authors' responses and unanimously recommended acceptance. | train | [
"EGdOhs67KvG",
"qOEh_xQ45gU",
"OU7YEtNG0uZ",
"ReOb-mcbnE",
"IW0eFXSrq3w",
"_u2d9aGB2hE",
"4sRE4H6-S8k",
"mTK6MshPEsR",
"UBk427qnUYp",
"8ALo6cO4j0",
"0GNnDnT3Ve",
"N4gPO5YBVx-",
"axQ2csI1gKx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for responding to my reply. As mentioned, I am not an expert on the subfield, but reading the author reviewers and authors' responses I would like to maintain my score of 6, leaning towards accepting if I am a vote on the margin. Thank you!",
"The paper discusses the question o... | [
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"0GNnDnT3Ve",
"nips_2021_8AgtfqiHUhs",
"UBk427qnUYp",
"nips_2021_8AgtfqiHUhs",
"_u2d9aGB2hE",
"4sRE4H6-S8k",
"qOEh_xQ45gU",
"N4gPO5YBVx-",
"ReOb-mcbnE",
"nips_2021_8AgtfqiHUhs",
"axQ2csI1gKx",
"nips_2021_8AgtfqiHUhs",
"nips_2021_8AgtfqiHUhs"
] |
nips_2021_f5liPryFRoA | Adversarial Reweighting for Partial Domain Adaptation | Partial domain adaptation (PDA) has gained much attention due to its practical setting. The current PDA methods usually adapt the feature extractor by aligning the target and reweighted source domain distributions. In this paper, we experimentally find that the feature adaptation by the reweighted distribution alignment in some state-of-the-art PDA methods is not robust to the ``noisy'' weights of source domain data, leading to negative domain transfer on some challenging benchmarks. To tackle the challenge of negative domain transfer, we propose a novel Adversarial Reweighting (AR) approach that adversarially learns the weights of source domain data to align the source and target domain distributions, and the transferable deep recognition network is learned on the reweighted source domain data. Based on this idea, we propose a training algorithm that alternately updates the parameters of the network and optimizes the weights of source domain data. Extensive experiments show that our method achieves state-of-the-art results on the benchmarks of ImageNet-Caltech, Office-Home, VisDA-2017, and DomainNet. Ablation studies also confirm the effectiveness of our approach.
| accept | This paper tackles the problem of partial domain adaptation. After considering the reviews, author rebuttal, and discussions the reviewers remained split on their recommendations with one accept, one borderline accept, and one borderline reject. 2HKg found that the work contributed a “brand-new idea” combining re-weighting and adversarial learning and KSRV increased their recommendation from borderline reject to borderline accept post-rebuttal citing that the authors had “given enough results to demonstrate the effectiveness of the proposed model.” The AC has reviewed the weaknesses brought up by 3wdy and determined that the responses and additional experiments authors provided in the rebuttal are sufficient. The authors have in general provided many additional experiments and clarifying thoughts during the rebuttal and need to be sure to include these in the final paper. | test | [
"BzyECTISgNo",
"1_5OzYIU_tO",
"2xKSSDjcnlM",
"zLIwYbkF2Hz",
"OXLPXLBoxJ",
"1JEx3mCs0qp",
"1g1qg-PWU8R",
"q2iq1mDoHw",
"T8txbFge3He",
"Q8ZY5YqZ4Nn"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the comments again. We promise to include all those discussions in the paper.",
" Thanks for the comments again. We promise to include all those discussions in the paper.",
" Thank you for your comments.\n\nMost of my concerns are solved in the response.\nI also noticed that the responses to the ot... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
7,
5
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
4,
5
] | [
"2xKSSDjcnlM",
"zLIwYbkF2Hz",
"q2iq1mDoHw",
"1JEx3mCs0qp",
"nips_2021_f5liPryFRoA",
"OXLPXLBoxJ",
"Q8ZY5YqZ4Nn",
"T8txbFge3He",
"nips_2021_f5liPryFRoA",
"nips_2021_f5liPryFRoA"
] |
nips_2021_EEq6YUrDyfO | M-FAC: Efficient Matrix-Free Approximations of Second-Order Information | Elias Frantar, Eldar Kurtic, Dan Alistarh | accept | This paper uses a recursive Woodbury formula to maintain a low-rank approximation of the empirical Fisher information matrix, and uses this for parameter pruning and preconditioning in stochastic training. While this approach is a somewhat obvious application of the Woodbury formula, it appears to be the first time it has been seriously applied in the context of curvature matrix approximations for deep neural networks.
The reviewers found the paper to be well-written and thorough, and the method to be a (potentially) useful tool. Therefore I'm recommending acceptance.
However, I agree with many of the concerns raised by the reviewers, but am less convinced by the rebuttals than they were. The main one for me is that this method will not scale to large networks due to the need to store m parameter-shaped vectors. Relatedly, I found the claim made by the paper that this method has storage costs "linear in the dimension of the model" to be highly misleading, since m is not really a constant (and is anyway quite large even if it were just a constant). In reality, the storage costs of this method are a lot higher than those of pretty much every other method it is compared to.
I also don't view the optimization experiments as meaningful, and largely agree with the points raised by Reviewer JscG. It is known that preconditioning doesn't really accelerate training of residual networks at small/medium batch sizes (see https://arxiv.org/abs/1907.04164). This means that all optimizers will perform similarly on such problems, and any small differences in test accuracy are incidental and have nothing to do with faster optimization. It would be much more interesting to try these methods on problems where stronger optimizers are already known to make a substantial difference (or on ResNets at very large batch sizes as in https://arxiv.org/abs/1811.12019). I would also recommend that the authors remove the graph comparing training performance for the different optimizers, since they aren't even optimizing the same objective function. (And even if they were, it's not really meaningful to compare training accuracy curves using hyperparameters that were tuned for best final test accuracy.)
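To make the recursion concrete (an editorial sketch under stated assumptions, not the authors' optimized implementation), the following NumPy code applies the Sherman-Morrison/Woodbury update to compute an inverse-empirical-Fisher vector product from m per-sample gradients with an assumed damping term `lam`; M-FAC additionally blocks and parallelizes this computation.

```python
import numpy as np

def ihvp(grads, v, lam=1e-4):
    # Inverse-empirical-Fisher vector product via the Sherman-Morrison /
    # Woodbury recursion: F_0 = lam*I, F_k = F_{k-1} + (1/m) g_k g_k^T, so
    # F_k^{-1} x = F_{k-1}^{-1} x - c_k (u_k . x) u_k with
    # u_k = F_{k-1}^{-1} g_k and c_k = 1 / (m + g_k . u_k).
    m = len(grads)
    us, cs = [], []
    for g in grads:                      # O(m^2 d) one-time setup
        u = g / lam
        for uj, cj in zip(us, cs):
            u -= cj * np.dot(uj, g) * uj
        us.append(u)
        cs.append(1.0 / (m + np.dot(g, u)))
    out = v / lam                        # O(m d) per query vector
    for uj, cj in zip(us, cs):
        out -= cj * np.dot(uj, v) * uj
    return out
```

Note that the m stored vectors `us` are exactly the parameter-shaped vectors whose memory footprint the meta-review criticizes.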
| train | [
"xEnFgwayjRB",
"SSPrnqR-an6",
"3Wk0j6-f_CJ",
"HTXoErtzMw",
"sn-xT2F5Ikf",
"9n8NR75cZ65",
"VHOKSLdAOCU",
"iPEVZABQceu",
"8PvYGL3MKau",
"9CIRRTYPyxe",
"5qa8D1eavF",
"pa01OFYelQe",
"sl-mePKt56C",
"OhQIj80JjPW",
"-RnwkM6Rxrm",
"fbPlWmzg3xD",
"nxAEoiG3yqj"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper concentrates on the fast implementation of computing IHVPs (Inverse-Hessian Vector Products, and the expression here means \"Inverse-Empirical-Fisher Vector Products\") in training DNN, whose high time and memory costs remain a core problem. This paper is mainly based on the WoodFisher method. The WoodF... | [
6,
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"nips_2021_EEq6YUrDyfO",
"sn-xT2F5Ikf",
"VHOKSLdAOCU",
"nips_2021_EEq6YUrDyfO",
"8PvYGL3MKau",
"nips_2021_EEq6YUrDyfO",
"sl-mePKt56C",
"9CIRRTYPyxe",
"9CIRRTYPyxe",
"OhQIj80JjPW",
"nips_2021_EEq6YUrDyfO",
"xEnFgwayjRB",
"9n8NR75cZ65",
"HTXoErtzMw",
"nips_2021_EEq6YUrDyfO",
"nxAEoiG3yqj... |
nips_2021_nVwJse40s1 | Graph Adversarial Self-Supervised Learning | This paper studies a long-standing problem of learning the representations of a whole graph without human supervision. The recent self-supervised learning methods train models to be invariant to the transformations (views) of the inputs. However, designing these views requires the experience of human experts. Inspired by adversarial training, we propose an adversarial self-supervised learning (\texttt{GASSL}) framework for learning unsupervised representations of graph data without any handcrafted views. \texttt{GASSL} automatically generates challenging views by adding perturbations to the input, which are adversarially trained with respect to the encoder. Our method optimizes the min-max problem and utilizes a gradient accumulation strategy to accelerate the training process. Experimental results on ten graph classification datasets show that the proposed approach is superior to state-of-the-art self-supervised learning baselines, which are competitive with supervised models.
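A minimal PyTorch sketch of the min-max view generation described in this abstract (illustrative: a single FGSM-style inner step on the input features, with `loss_fn` and `eps` as assumptions rather than the paper's exact procedure):

```python
import torch

def adversarial_view(x, loss_fn, eps=1e-2):
    # One FGSM-style inner maximization step: perturb the input in the
    # direction that increases the self-supervised loss, producing a
    # "challenging view" the encoder is then trained to be invariant to.
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn(x + delta).backward()
    return (x + eps * delta.grad.sign()).detach()
```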
| accept | This paper investigates the problem of self-supervised learning in the context of graph representation learning. It proposes to adopt the technique of adversarial training to automatically augment a training set, and then devises the corresponding adaptation scheme to make adversarial training viable. The authors conducted thorough and insightful experiments (also supplementing some experimental results in the rebuttal) on several benchmark datasets.
Although adversarial training was originally proposed as a defensive algorithm aiming at increasing the robustness of a certain learning model, it has also been recognized as a variant of data augmentation or hard example mining in many other works. Moreover, employing the philosophy of adversarial training to help generalization has also been discussed in other fields like NLP and style transformation. Thus, it is reasonable to see that the same theory can be verified in the domain of graph learning. Of course, verifying this theory in a new domain like graph learning requires huge effort and insightful design, which constitutes the main contribution of this work. The proposed method is relatively simple but very effective on the evaluated tasks. In the rebuttal phase, the authors adequately answered the questions from the reviewers, including addressing the issues of writing clarity and evaluations in extra experimental settings. Eventually, the four reviewers reached a consensus to accept the paper. Therefore, the AC recommends acceptance as a poster for this submission.
"Qcj04Nx4df8",
"9DFt4Ro3DA5",
"bOnsaOpKreg",
"SY9B6XGSPB",
"FyhsveQyQx3",
"KbBAtQK1Tkz",
"yOzBpNU6D9",
"BaIELnFhgBL",
"KYL-HQU2Wsv",
"ws2aJ4p-_d0",
"OUUgDaMAQ2u",
"qzdhl3fq2PH",
"XQfIr13Y05f",
"p8CdpYSJVZw",
"8XyQLkcpCZe",
"l57a4hHvPQ1",
"_inHM70xJNm",
"z7BSkUQfvdC"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We sincerely thank you for your positive comments and constructive suggestions, which have greatly helped to improve the quality of the paper.",
"This paper introduces a novel way to improve unsupervised graph representation learning. The proposed method utilizes the adversarial attacking techniques with a teac... | [
-1,
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"bOnsaOpKreg",
"nips_2021_nVwJse40s1",
"p8CdpYSJVZw",
"yOzBpNU6D9",
"ws2aJ4p-_d0",
"nips_2021_nVwJse40s1",
"BaIELnFhgBL",
"KYL-HQU2Wsv",
"l57a4hHvPQ1",
"qzdhl3fq2PH",
"nips_2021_nVwJse40s1",
"XQfIr13Y05f",
"8XyQLkcpCZe",
"9DFt4Ro3DA5",
"OUUgDaMAQ2u",
"KbBAtQK1Tkz",
"z7BSkUQfvdC",
"... |
nips_2021_cAw860ncLRW | Anti-Backdoor Learning: Training Clean Models on Poisoned Data | Backdoor attack has emerged as a major security threat to deep neural networks (DNNs). While existing defense methods have demonstrated promising results on detecting or erasing backdoors, it is still not clear whether robust training methods can be devised to prevent the backdoor triggers being injected into the trained model in the first place. In this paper, we introduce the concept of \emph{anti-backdoor learning}, aiming to train \emph{clean} models given backdoor-poisoned data. We frame the overall learning process as a dual-task of learning the \emph{clean} and the \emph{backdoor} portions of data. From this view, we identify two inherent characteristics of backdoor attacks as their weaknesses: 1) the models learn backdoored data much faster than learning with clean data, and the stronger the attack the faster the model converges on backdoored data; 2) the backdoor task is tied to a specific class (the backdoor target class). Based on these two weaknesses, we propose a general learning scheme, Anti-Backdoor Learning (ABL), to automatically prevent backdoor attacks during training. ABL introduces a two-stage \emph{gradient ascent} mechanism for standard training to 1) help isolate backdoor examples at an early training stage, and 2) break the correlation between backdoor examples and the target class at a later training stage. Through extensive experiments on multiple benchmark datasets against 10 state-of-the-art attacks, we empirically show that ABL-trained models on backdoor-poisoned data achieve the same performance as they were trained on purely clean data. Code is available at \url{https://github.com/bboylyg/ABL}.
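A minimal sketch of the sign-flipped objective behind the second ABL stage described above (illustrative; the isolation mask and loss choice are assumptions, not the authors' exact implementation):

```python
import torch.nn.functional as F

def abl_loss(logits, targets, isolated):
    # Descend on samples believed clean, ascend (negated loss) on the
    # small isolated set to break the trigger-target correlation;
    # `isolated` is a boolean mask from the earlier low-loss stage.
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    signs = 1.0 - 2.0 * isolated.float()   # +1 for clean, -1 for isolated
    return (signs * per_sample).mean()
```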
| accept | I read the paper and considered the discussions carefully, especially that of the authors with 9SWb who brought up many thoughtful comments and criticisms. I believe that the concerns and criticisms have been largely addressed by the authors during the rebuttal, which I will discuss below.
I will now discuss the merits of the paper:
1. Well-established threat model. The threat scenario of classical (non-federated) training on a dataset poisoned such that inserting the backdoor during inference will result in the target class is a well-established scenario. It is also one of the simplest scenarios, which means the attack is more likely to be deployed in the real world.
2. Novelty. In my judgment and that of all the reviewers, the observations and methods presented in this paper are quite novel. Compared to other backdoor defense methods, which often aim to learn the backdoor pattern from among a large search space, I believe the easy-learning detection and backdoor unlearning methods are quite elegant, since they exploit what seems to be a fundamental weakness of backdoors (that to be strong they must be easy to learn) and can be incorporated directly into the training procedure (as opposed to a separate stage).
3. Results. The authors have done a comprehensive study on 3 datasets and 6 backdoor attacks, and included some well-established datasets/architectures for comparison with prior art (i.e. resnet18 on cifar, along with resnet34 upon the request of 9SWb). The authors covered 4 additional attacks upon the request of reviewer 9SWb who subsequently raised their score from 3 to 5.
4. The authors provided ample ablation studies to provide better insight into why their method works, which helps the readers obtain some learnings from this paper.
There were some concerns about different threat models (e.g. the adversary having control over more things than just a fraction of the dataset) and different training settings (federated learning). While interesting, these are outside the scope of the paper, and the current scope is sufficiently interesting.
I therefore recommend acceptance. | train | [
"cvkJQNBE4Q",
"kNzhMNoYdfb",
"Twg8iHrwQJ",
"kgtzImAYkES",
"0FNsV5TK9Op",
"yz62QUm9NFF",
"-Kh9snOHVUh",
"_y3GxhCYP1J",
"lprvJfWdwks",
"4ZcDWupXmLM",
"10JvloY399A",
"H6dNd_3Rvv7",
"yv8NnBtOJpO",
"vAoqH35ploW",
"GF_bRkvOES4",
"wk8APFLUtYk",
"wYNQZJa6Ub5",
"Pkdrxhg-bpQ",
"h7MjIQw22d"... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
"This paper observes that deep neural networks learn backdoored data faster than\nbenign samples. Based on this finding, the paper proposes Anti-Backdoor Learning\n(ABL), which can learn a benign model on poisoned datasets. Specifically, at the\nbeginning of model training, ABL maximizes the training loss gap betwe... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_cAw860ncLRW",
"nips_2021_cAw860ncLRW",
"kgtzImAYkES",
"4ZcDWupXmLM",
"10JvloY399A",
"c_ObrLiF1eH",
"_y3GxhCYP1J",
"H6dNd_3Rvv7",
"hMQpFQvf0JZ",
"JsaiHWWlSa8",
"j39-mUFIgb8",
"GF_bRkvOES4",
"cvkJQNBE4Q",
"cvkJQNBE4Q",
"wk8APFLUtYk",
"cvkJQNBE4Q",
"Pkdrxhg-bpQ",
"JsaiHWWlS... |
nips_2021_-nLW4nhdkO | Locally Most Powerful Bayesian Test for Out-of-Distribution Detection using Deep Generative Models | Several out-of-distribution (OOD) detection scores have been recently proposed for deep generative models because the direct use of the likelihood threshold for OOD detection has been shown to be problematic. In this paper, we propose a new OOD score based on a Bayesian hypothesis test called the locally most powerful Bayesian test (LMPBT). The LMPBT is locally most powerful in that the alternative hypothesis (the representative parameter for the OOD sample) is specified to maximize the probability that the Bayes factor exceeds the evidence threshold in favor of the alternative hypothesis provided that the parameter specified under the alternative hypothesis is in the neighborhood of the parameter specified under the null hypothesis. That is, under this neighborhood parameter condition, the test with the proposed alternative hypothesis maximizes the probability of correct detection of OOD samples. We also propose numerical strategies for more efficient and reliable computation of the LMPBT for practical application to deep generative models. Evaluations conducted of the OOD detection performance of the LMPBT on various benchmark datasets demonstrate its superior performance over existing OOD detection methods.
| accept | The paper proposes a new method for OOD detection using deep generative models based on Bayesian hypothesis testing, that they refer to as the locally most powerful Bayesian test (LMPBT). Overall, the reviewers found the paper well-motivated and the experiments support the key claims. During the discussion phase, reviewers ossS and 6JoX increased their score and recommended acceptance. Reviewers msUZ and Huvy leaned towards acceptance but raised some concerns in the initial review; after reading the author rebuttal, I think that the authors satisfactorily address most of these concerns.
I recommend acceptance and encourage the authors to incorporate the reviewer feedback in the final version.
Additional comment:
While the paper shows that they outperform some existing methods such as IC, LR, LLR, I believe there are stronger published results (e.g. DoSE https://arxiv.org/abs/2006.09273) that should probably be mentioned.
"D2rCutKnu8T",
"iqX7y54ESpq",
"cmuM7VAWUe_",
"b5x-q-v-pZt",
"AyVG0SL-HcX",
"IHPgLMkVkLp",
"Q3g2RqOIR5s",
"IY-GM2EpO7",
"57rwYmSlAn",
"o_MsmplhG4P",
"f0-4aZOr7s-",
"WH6Jc1HJngB",
"iqqC0od8ah",
"2DqEDk6hKEg"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" According to Proposition 1, only our method is LMPBT and thus maximizes the probability of OOD detection, whereas other methods (LLR, LR, and IC) do not. This is for sure as proven in Proposition 1. We will clarify this in the final version of our paper. Thank you for your comments that helped us clarify the impl... | [
-1,
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"cmuM7VAWUe_",
"nips_2021_-nLW4nhdkO",
"b5x-q-v-pZt",
"AyVG0SL-HcX",
"f0-4aZOr7s-",
"IY-GM2EpO7",
"nips_2021_-nLW4nhdkO",
"57rwYmSlAn",
"Q3g2RqOIR5s",
"2DqEDk6hKEg",
"iqX7y54ESpq",
"iqqC0od8ah",
"nips_2021_-nLW4nhdkO",
"nips_2021_-nLW4nhdkO"
] |
nips_2021_9CPc4EIr2t1 | Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks | Deep neural networks (DNNs) are well-known to be vulnerable to adversarial attacks, where malicious human-imperceptible perturbations are included in the input to the deep network to fool it into making a wrong classification. Recent studies have demonstrated that neural Ordinary Differential Equations (ODEs) are intrinsically more robust against adversarial attacks compared to vanilla DNNs. In this work, we propose a neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks (SODEF). By ensuring that the equilibrium points of the ODE solution used as part of SODEF are Lyapunov-stable, the ODE solution for an input with a small perturbation converges to the same solution as the unperturbed input. We provide theoretical results that give insights into the stability of SODEF as well as the choice of regularizers to ensure its stability. Our analysis suggests that our proposed regularizers force the extracted feature points to be within a neighborhood of the Lyapunov-stable equilibrium points of the SODEF ODE. SODEF is compatible with many defense methods and can be applied to any neural network's final regressor layer to enhance its stability against adversarial attacks.
| accept | This paper proposes a neural network classifier architecture, based on neural ODEs, for defending against adversarial attacks. Some of the reviewers had concerns about the experiments, but the experiments newly added in the rebuttal convinced them. All reviewers ultimately gave the paper positive support. Thus, I recommend accepting the paper; the authors should include the new experiments in the final version. | test | [
"3ZhS-5-DyLO",
"8jLrW_f4bVh",
"XHgAgWbxH5c",
"7nwFSo98Ch3",
"8s0A7m_HL_m",
"khqkOdCkLx",
"ju_90Sflfnw",
"epmgN_R0l3O",
"3JU3xFdagFS",
"86LZ8rYry9t",
"KyPyqdyrt2l",
"celqRnhGpRE",
"QKO-J-8cD-F",
"7AA8FwYHu9F",
"TcZ4jAa1bK2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a neural network classifier architecture based on neural ODEs that is designed to be robust to adversarial attacks. In the proposed method, the classes are represented by vectors, chosen to be maximally separated in the output space w.r.t. cosine similarity. The neural ODE system is then encour... | [
7,
-1,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_9CPc4EIr2t1",
"celqRnhGpRE",
"QKO-J-8cD-F",
"nips_2021_9CPc4EIr2t1",
"KyPyqdyrt2l",
"epmgN_R0l3O",
"nips_2021_9CPc4EIr2t1",
"86LZ8rYry9t",
"nips_2021_9CPc4EIr2t1",
"7AA8FwYHu9F",
"7nwFSo98Ch3",
"3ZhS-5-DyLO",
"TcZ4jAa1bK2",
"ju_90Sflfnw",
"nips_2021_9CPc4EIr2t1"
] |
nips_2021_wHoIjrT6MMb | Robust Compressed Sensing MRI with Deep Generative Priors | The CSGM framework (Bora-Jalal-Price-Dimakis'17) has shown that deep generative priors can be powerful tools for solving inverse problems. However, to date this framework has been empirically successful only on certain datasets (for example, human faces and MNIST digits), and it is known to perform poorly on out-of-distribution samples. In this paper, we present the first successful application of the CSGM framework on clinical MRI data. We train a generative prior on brain scans from the fastMRI dataset, and show that posterior sampling via Langevin dynamics achieves high quality reconstructions. Furthermore, our experiments and theory show that posterior sampling is robust to changes in the ground-truth distribution and measurement process. Our code and models are available at: \url{https://github.com/utcsilab/csgm-mri-langevin}.
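For illustration of the posterior-sampling step this abstract refers to (a sketch under stated assumptions, not the released code): one Langevin update combining the learned prior score with a data-consistency gradient. Here `A` is a dense matrix stand-in for the subsampled Fourier operator, `score` stands for the trained prior's score network, and the annealing schedule over noise levels is omitted.

```python
import torch

def langevin_posterior_step(x, score, y, A, sigma, alpha):
    # One update of x ~ p(x | y) for measurements y = A x + noise:
    # learned prior score plus the gradient of the Gaussian log-likelihood
    # ||y - A x||^2 / (2 sigma^2), with injected Langevin noise.
    data_grad = A.t() @ (y - A @ x) / sigma ** 2
    drift = score(x) + data_grad
    return x + alpha * drift + (2 * alpha) ** 0.5 * torch.randn_like(x)
```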
| accept | The authors provide the first demonstration that Compressed Sensing with Generative Priors can be a competitive approach for MRI reconstruction relative to end-to-end, L1, and untrained neural network methods. The authors additionally demonstrate that the method is more robust than baselines in the context of certain distribution shifts between training and inversion. The paper also provides novel theoretical guarantees about distributional robustness of posterior sampling approach.
There was some concern from one of the reviewers about consistency with the literature about the performance of baseline methods, likely due to training being on RSS images with fine-tuning on MVUE images. The authors should provide commentary about this issue, as discussed with the reviewers during the rebuttal.
The authors should add additional comments that the paper should not be used for medical purposes without subsequent study by medical professionals, as per the ethics review.
| val | [
"tsdxV3xeKYS",
"GlU8exokxVB",
"l0N8OEhUuhg",
"vPlo8569cQ-",
"OJ62g7qaLZk",
"wutgpDKrpqr",
"s2U55EBb9SX",
"n4b89n-vcXZ",
"_rz3GAEgdyw",
"Ik9OSsjpuoq",
"XABKnc_IV1",
"wLtxwq4K2ZV",
"JbcY2dbrYd3",
"zRsY-OopR4W",
"oqbk6Brs03",
"OOuOt0YdGMi",
"TtTpWxB-hXi",
"O5BiEsggdJe",
"s9IZmyH-H39... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",... | [
" Thank you for the responses provided to me and other reviewers. Some answers have helped me better understand the paper.\n\nConcern 1-2\nI understand the limitation of the space. \nConcern 3\nMy suggestion was emphasizing the results of the different contrast images in the Appendix. Thanks for the response and th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"tyeRP-wFydI",
"l0N8OEhUuhg",
"xlZP0IdLFap",
"qh0xo1T-Ysn",
"tyeRP-wFydI",
"2IOyvjSSi29",
"nips_2021_wHoIjrT6MMb",
"_rz3GAEgdyw",
"TtKu4NK8Z5",
"LuyaV6A6-tt",
"tyeRP-wFydI",
"qh0xo1T-Ysn",
"TtKu4NK8Z5",
"2IOyvjSSi29",
"OOuOt0YdGMi",
"nips_2021_wHoIjrT6MMb",
"TtKu4NK8Z5",
"nips_2021... |
nips_2021_s-NI4H4e3Rf | H-NeRF: Neural Radiance Fields for Rendering and Temporal Reconstruction of Humans in Motion | We present neural radiance fields for rendering and temporal (4D) reconstruction of humans in motion (H-NeRF), as captured by a sparse set of cameras or even from a monocular video. Our approach combines ideas from neural scene representation, novel-view synthesis, and implicit statistical geometric human representations, coupled using novel loss functions. Instead of learning a radiance field with a uniform occupancy prior, we constrain it by a structured implicit human body model, represented using signed distance functions. This allows us to robustly fuse information from sparse views and generalize well beyond the poses or views observed in training. Moreover, we apply geometric constraints to co-learn the structure of the observed subject -- including both body and clothing -- and to regularize the radiance field to geometrically plausible solutions. Extensive experiments on multiple datasets demonstrate the robustness and the accuracy of our approach, its generalization capabilities significantly outside a small training set of poses and views, and statistical extrapolation beyond the observed shape.
| accept | The submission has received 4 positive final ratings: 6, 7, 7, 8.
Overall, the reviewers were excited about the method and the idea of combining implicit representations with explicit priors, and also acknowledged the strong empirical results and solid presentation. The remaining questions and concerns were largely addressed in the rebuttal: the reviewers were mostly satisfied, but left some recommendations for further improvements (the authors are encouraged to follow them while preparing the camera-ready version).
The final recommendation is to accept as a spotlight. | train | [
"PQ5hI2k57BS",
"lPBRVjWYMCb",
"CcuD-GGczQW",
"XMLNPN-ZKsU",
"jtGe8kAoU81",
"gazt1wmtjUy",
"hXIrnHIX9Xk",
"4URd91s0XxU",
"RY-BQxwJfga",
"WG8O9lGAKxM",
"iTZoc-ujs9V",
"ZP7D1UHe_x",
"UlmRn5fW6j"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the reply! The authors addressed most of my concerns. I would like to raise my rating. \n",
"This paper presents a new NeRF based method for rendering and reconstruction of humans observed from sparse cameras. The main contribution is combining volumetric radiance fields and an implicit SDF for th... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"XMLNPN-ZKsU",
"nips_2021_s-NI4H4e3Rf",
"gazt1wmtjUy",
"jtGe8kAoU81",
"WG8O9lGAKxM",
"RY-BQxwJfga",
"UlmRn5fW6j",
"ZP7D1UHe_x",
"iTZoc-ujs9V",
"lPBRVjWYMCb",
"nips_2021_s-NI4H4e3Rf",
"nips_2021_s-NI4H4e3Rf",
"nips_2021_s-NI4H4e3Rf"
] |
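For reference, the volume-rendering quadrature shared by NeRF-style methods such as H-NeRF is a short computation; the snippet below shows only this generic compositing step, while H-NeRF's contribution of deriving and regularizing the density with an SDF-based body model is not shown.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Standard NeRF-style quadrature along one ray: sigmas (N,) densities
    at the samples, colors (N, 3), deltas (N,) segment lengths.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # T_i
    weights = trans * alphas                                        # w_i
    return (weights[:, None] * colors).sum(axis=0), weights

rgb, w = render_ray(np.array([0.0, 2.0, 50.0]), np.eye(3), np.full(3, 0.1))
# the dense third sample dominates, so rgb is close to its color (blue here)
```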
nips_2021_3ez9BSHTNT | DOBF: A Deobfuscation Pre-Training Objective for Programming Languages | Recent advances in self-supervised learning have dramatically improved the state of the art on a wide variety of tasks. However, research in language model pre-training has mostly focused on natural languages, and it is unclear whether models like BERT and its variants provide the best pre-training when applied to other modalities, such as source code. In this paper, we introduce a new pre-training objective, DOBF, that leverages the structural aspect of programming languages and pre-trains a model to recover the original version of obfuscated source code. We show that models pre-trained with DOBF significantly outperform existing approaches on multiple downstream tasks, providing relative improvements of up to 12.2% in unsupervised code translation, and 5.3% in natural language code search. Incidentally, we found that our pre-trained model is able to deobfuscate fully obfuscated source files, and to suggest descriptive variable names.
| accept | The reviewers appreciated the key idea presented in the paper of using a new pre-training objective DOBF to recover obfuscated variable names from programs, and its usefulness for several downstream tasks. Adding a new DAE-only baseline for comparison was also greatly appreciated. While there were still some concerns regarding fairness of comparisons across models, overall the recommendation is for acceptance. Hopefully the authors can incorporate the detailed feedback from reviews and add additional experiments in the final version. | train | [
"aXgKwjtdhj2",
"-A7ijXHXOR",
"tXZSEiPoh4A",
"ARcFy6HV0Y3",
"Njrsu6sbL3g",
"ETbFp96o2R",
"17kK3CkWuGp",
"O7mxgv8OrxL",
"7Dc7gjrfC8",
"uXRoFRmWcu",
"SrDun8rB2wv",
"BAvo3PNhVx_",
"WIIuoU2UR4T",
"JaE75bPGP8r"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a pre-training method for programming languages. The main idea is pre-training a seq2seq model to convert obfuscated functions back to their original forms. The method demonstrates performance improvement on multiple downstream tasks, e.g., code translation and code search. The proposed pre-tr... | [
5,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"nips_2021_3ez9BSHTNT",
"ARcFy6HV0Y3",
"uXRoFRmWcu",
"O7mxgv8OrxL",
"nips_2021_3ez9BSHTNT",
"SrDun8rB2wv",
"JaE75bPGP8r",
"WIIuoU2UR4T",
"aXgKwjtdhj2",
"O7mxgv8OrxL",
"Njrsu6sbL3g",
"nips_2021_3ez9BSHTNT",
"nips_2021_3ez9BSHTNT",
"nips_2021_3ez9BSHTNT"
] |
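A toy version of the deobfuscation objective described in the DOBF abstract: rename every identifier to a placeholder and keep the mapping as the recovery target. Real DOBF operates on parsed code and masks only a fraction of identifiers; this regex version is purely illustrative.

```python
import re

def obfuscate(code):
    """Toy identifier obfuscation in the spirit of DOBF: replace each
    distinct identifier with V0, V1, ... and return the mapping that a
    model pre-trained on this objective would have to recover.
    """
    keywords = {"def", "return", "if", "else", "for", "in", "print", "range"}
    mapping = {}
    for name in dict.fromkeys(re.findall(r"[A-Za-z_]\w*", code)):
        if name not in keywords:
            mapping[name] = f"V{len(mapping)}"
    for name, token in mapping.items():
        code = re.sub(rf"\b{name}\b", token, code)
    return code, mapping

obf, target = obfuscate("def add(first, second):\n    return first + second")
# obf    -> "def V0(V1, V2):\n    return V1 + V2"
# target -> {'add': 'V0', 'first': 'V1', 'second': 'V2'}
```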
nips_2021_apK65PUH0l9 | Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles | When a deep learning model is deployed in the wild, it can encounter test data drawn from distributions different from the training data distribution and suffer drop in performance. For safe deployment, it is essential to estimate the accuracy of the pre-trained model on the test data. However, the labels for the test inputs are usually not immediately available in practice, and obtaining them can be expensive. This observation leads to two challenging tasks: (1) unsupervised accuracy estimation, which aims to estimate the accuracy of a pre-trained classifier on a set of unlabeled test inputs; (2) error detection, which aims to identify mis-classified test inputs. In this paper, we propose a principled and practically effective framework that simultaneously addresses the two tasks. The proposed framework iteratively learns an ensemble of models to identify mis-classified data points and performs self-training to improve the ensemble with the identified points. Theoretical analysis demonstrates that our framework enjoys provable guarantees for both accuracy estimation and error detection under mild conditions readily satisfied by practical deep learning models. Along with the framework, we proposed and experimented with two instantiations and achieved state-of-the-art results on 59 tasks. For example, on iWildCam, one instantiation reduces the estimation error for unsupervised accuracy estimation by at least 70% and improves the F1 score for error detection by at least 4.7% compared to existing methods.
| accept | The paper proposed a novel approach that leverages ensembles and self-training for unsupervised accuracy estimation and error detection. All reviewers find the problem setup interesting and the paper well written, with theoretical justification (although relying on strong assumptions) and reasonable empirical support. There are some useful suggestions during the discussion phase, in particular for improving the experimental results.
After a few rounds of interaction during the discussion phase, the committee reached a consensus on the technical contributions: reviewers agree on the technical novelty of using ensembles and self-training to improve on model disagreement -- a subtle but distinct departure from the existing work on ProxyRisk, which supports the novelty of this work; it is worth mentioning that reviewers also note that the use of domain-invariant representations, a check model, and disagreement are in a similar spirit to prior work.
An initial disagreement among the committee was on the scope of the experiments: whether a modified version of ProxyRisk with an ensemble needed to be included. Although the modification suggested in the discussion phase (by Reviewer 6eDi) deviates from the ProxyRisk algorithm proposed in the original work, the committee agreed that this could be viewed as an additional ablation of the role of ensembling vs. self-training, which would have made the work stronger. The authors are encouraged to take such feedback into consideration when preparing a revision. | val | [
"SD9Olv-SyOj",
"xtCzbtRuK12",
"PuSdxbQ-yFH",
"RHqP_nBr_Sg",
"NjNxXenNqqA",
"X61BqJCdfvK",
"SrVMXQdY32_",
"UI9lu6OQZzR",
"M3p4D8Bq0t4",
"R_9qlhFQNew",
"ziqkEFzKzeK",
"-6EI7ZLTN-Y",
"msBCvHyS8P",
"2iXml5CCGlV",
"e8npTY_AlGg",
"j3KtRvX4Rc",
"PxJmr0Y3KUw"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for the update! We want to point out the similarities and differences between our work and proxy risk work, and we will include clarifications in the next version. Some of the following have already appeared in our responses to Reviewer 6eDi.\n\n**[Similarities]**\n\n1. Both proxy risk and our framework tr... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
"xtCzbtRuK12",
"nips_2021_apK65PUH0l9",
"UI9lu6OQZzR",
"NjNxXenNqqA",
"UI9lu6OQZzR",
"SrVMXQdY32_",
"UI9lu6OQZzR",
"msBCvHyS8P",
"j3KtRvX4Rc",
"nips_2021_apK65PUH0l9",
"nips_2021_apK65PUH0l9",
"e8npTY_AlGg",
"PxJmr0Y3KUw",
"xtCzbtRuK12",
"ziqkEFzKzeK",
"R_9qlhFQNew",
"nips_2021_apK65... |
nips_2021_f-ggKIDTu5D | Exploiting Chain Rule and Bayes' Theorem to Compare Probability Distributions | To measure the difference between two probability distributions, referred to as the source and target, respectively, we exploit both the chain rule and Bayes' theorem to construct conditional transport (CT), which is constituted by both a forward component and a backward one. The forward CT is the expected cost of moving a source data point to a target one, with their joint distribution defined by the product of the source probability density function (PDF) and a source-dependent conditional distribution, which is related to the target PDF via Bayes' theorem. The backward CT is defined by reversing the direction. The CT cost can be approximated by replacing the source and target PDFs with their discrete empirical distributions supported on mini-batches, making it amenable to implicit distributions and stochastic gradient descent-based optimization. When applied to train a generative model, CT is shown to strike a good balance between mode-covering and mode-seeking behaviors and strongly resist mode collapse. On a wide variety of benchmark datasets for generative modeling, substituting the default statistical distance of an existing generative adversarial network with CT is shown to consistently improve the performance. PyTorch code is provided.
| accept | Three reviewers recommend acceptance; one reviewer recommends rejection. The general sentiment is that the presented approach for comparing probability distributions with forward and backward CT is novel, and that the empirical results for generative adversarial model training are promising. However, there were some concerns about the lack of theoretical results and the inadequate discussion of the limitations of the proposed method. A revised paper should incorporate the revisions and clarifications brought up in the rebuttal. | train | [
"rLFZvQjc0Cd",
"XDOSd4lGFFr",
"PngBigfGH_G",
"JqXEXf74htb",
"LIy6KqmPXu",
"A--mSsUje-K",
"s5VteERquF5",
"at9Xt56wkI",
"CzCRSAJuZsI",
"vVvh_7CD0gg",
"WTQ9tARM3JR",
"m3zRYicDQW",
"yth28wstYsy",
"2YN85fyKyYs"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to use conditional transport (CT) to measure the difference between two probability distributions. By combining both forward CT and backward CT, the proposed measure shows promising results for generative adversarial model training. \n After the discussion phase, I increase my overall score fr... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"nips_2021_f-ggKIDTu5D",
"PngBigfGH_G",
"at9Xt56wkI",
"nips_2021_f-ggKIDTu5D",
"yth28wstYsy",
"JqXEXf74htb",
"rLFZvQjc0Cd",
"JqXEXf74htb",
"nips_2021_f-ggKIDTu5D",
"2YN85fyKyYs",
"yth28wstYsy",
"rLFZvQjc0Cd",
"nips_2021_f-ggKIDTu5D",
"nips_2021_f-ggKIDTu5D"
] |
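A minimal mini-batch sketch of the conditional-transport cost described above: the conditional transport plans are softmaxes of pairwise scores over one batch dimension, in the forward and backward directions respectively. The squared-Euclidean cost and the trivial distance-based navigator are illustrative choices, not the paper's exact parameterization.

```python
import torch

def ct_cost(x, y, navigator):
    """Mini-batch conditional transport: x (n, d) source batch, y (m, d)
    target batch; `navigator(x, y)` returns (n, m) pair scores.
    """
    cost = torch.cdist(x, y) ** 2               # (n, m) point-to-point cost
    logits = navigator(x, y)                    # (n, m) pair scores
    pi_fwd = torch.softmax(logits, dim=1)       # each source point -> targets
    pi_bwd = torch.softmax(logits, dim=0)       # each target point -> sources
    forward = (pi_fwd * cost).sum(dim=1).mean()
    backward = (pi_bwd * cost).sum(dim=0).mean()
    return 0.5 * (forward + backward)

loss = ct_cost(torch.randn(128, 2), torch.randn(128, 2),
               navigator=lambda a, b: -torch.cdist(a, b))
```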
nips_2021_tCB-SCt5wWG | Actively Identifying Causal Effects with Latent Variables Given Only Response Variable Observable | In many real tasks, it is generally desired to study the causal effect on a specific target (response variable) only, with no need to identify the full causal effects involving all variables. In this paper, we attempt to identify such effects via a few active interventions where only the response variable is observable. This task is challenging because the causal graph is unknown and there may even exist latent confounders. To learn the structure necessary for identifying the effects, we provide a graphical characterization that allows us to efficiently estimate all possible causal effects in a partially mixed ancestral graph (PMAG) via the generalized back-door criterion. The characterization guides learning a local structure with the interventional data. Theoretical analysis and empirical studies validate the effectiveness and efficiency of our proposed approach.
| accept | The paper explores a causal setting where we are interested in understanding the impact of possible interventions on a single response variable, using observational data (which might suffer from latent confounding) and limited interventional data. The limit on the interventional data is that we only observe the response variable, and nothing else. This work extends several other papers that have dealt with similar setting, with the important distinction of allowing latent (i.e. hidden) confounding in the observational data.
The paper is a theory paper, and the reviewers agreed that it contains a novel contribution and that it is technically correct.
The main concern of the reviewers was about the applicability of the specific scenario the authors propose; however, the overall decision is that the theoretical contribution is important enough. Furthermore, I believe it is often difficult to know in advance which learning scenarios or techniques will end up being useful. Since the work is both rigorous and novel, the recommendation is to accept the paper.
The reviewers made some thoughtful suggestions in their reviews and the discussion period, and I encourage the authors to incorporate some of the conclusions in their revised version. | train | [
"oFVEC4RlEf",
"F58fcQFeOLh",
"PhsTvydbQjp",
"hVG27cZrjAc",
"JTGB-EVPvJ",
"vR4XUzq-1U2",
"_p6mTQTBnzO",
"0hOCrYyMfEA",
"Gu7dU9hZIYk",
"2TMITw6cOcL",
"pEm24Nf30n_",
"NvzBuV0mnVu",
"ALBvC6unOal",
"5VwQgCn5G8N",
"2ydWv84cM-J",
"d3P7oTbMaA3",
"bk4lCzPWrv",
"j6c0HQiC5jI",
"tGtKZ1DFWaa"... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"... | [
"Based on the ancestral graph framework, the authors tackle the problem of identifying an interventional distribution $f (Y | do (X))$ by only observing the outcome $Y$ after each possible intervention. This work extended previous work by Wang et al. assuming no latent variables on the graph. As most of us know, ... | [
7,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"nips_2021_tCB-SCt5wWG",
"JTGB-EVPvJ",
"_p6mTQTBnzO",
"nips_2021_tCB-SCt5wWG",
"tGtKZ1DFWaa",
"Gu7dU9hZIYk",
"0hOCrYyMfEA",
"vR4XUzq-1U2",
"2TMITw6cOcL",
"j6c0HQiC5jI",
"NvzBuV0mnVu",
"9KLUzb9ZVSy",
"nips_2021_tCB-SCt5wWG",
"bk4lCzPWrv",
"hVG27cZrjAc",
"0D1PEIEeNyP",
"ALBvC6unOal",
... |
nips_2021_9QwPhXWmuRp | Interventional Sum-Product Networks: Causal Inference with Tractable Probabilistic Models | While probabilistic models are an important tool for studying causality, doing so suffers from the intractability of inference. As a step towards tractable causal models, we consider the problem of learning interventional distributions using sum-product networks (SPNs) that are over-parameterized by gate functions, e.g., neural networks. Providing an arbitrarily intervened causal graph as input, effectively subsuming Pearl's do-operator, the gate function predicts the parameters of the SPN. The resulting interventional SPNs are motivated and illustrated by a structural causal model themed around personal health. Our empirical evaluation against competing methods from both generative and causal modelling demonstrates that interventional SPNs indeed are both expressive and causally adequate.
| accept | A few of the reviewers felt that the paper, in its current form, is too vague and would benefit from a major revision. For instance, they found the claim about how SPNs relate to causal inference not adequately developed. Unfortunately, the rebuttal did not resolve the questions the reviewers raised. In light of this, the discussions, and the reviews, I agree that the current version of the paper is not ready for publication. | train | [
"BUrIULBlIGo",
"0_GSgjupqGm",
"aUfnG6qO5Y",
"Mk-22QvSXna",
"Iyir3x3oAJH",
"eYfzTyAm3oG",
"whoPSgr_P2q",
"3VKc48q9AG"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your very positive review, $\\color{purple}{\\text{reviewer mLY8}}$!\n\n* > $\\color{purple}R:$ \"I think the paper could have significant impact, and could open new avenues for research\"\n\n We feel delighted by this assessment which we hope does hold for the future and we also strongly agree wit... | [
-1,
-1,
-1,
-1,
2,
5,
3,
7
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"3VKc48q9AG",
"whoPSgr_P2q",
"eYfzTyAm3oG",
"Iyir3x3oAJH",
"nips_2021_9QwPhXWmuRp",
"nips_2021_9QwPhXWmuRp",
"nips_2021_9QwPhXWmuRp",
"nips_2021_9QwPhXWmuRp"
] |
nips_2021_fLnsj7fpbPI | PettingZoo: Gym for Multi-Agent Reinforcement Learning | This paper introduces the PettingZoo library and the accompanying Agent Environment Cycle ("AEC") games model. PettingZoo is a library of diverse sets of multi-agent environments with a universal, elegant Python API. PettingZoo was developed with the goal of accelerating research in Multi-Agent Reinforcement Learning ("MARL"), by making work more interchangeable, accessible and reproducible akin to what OpenAI's Gym library did for single-agent reinforcement learning. PettingZoo's API, while inheriting many features of Gym, is unique amongst MARL APIs in that it's based around the novel AEC games model. We argue, in part through case studies on major problems in popular MARL environments, that the popular game models are poor conceptual models of the games commonly used with MARL, that they promote severe bugs that are hard to detect, and that the AEC games model addresses these problems.
| accept | I recommend this paper to be accepted. It provides a novel multi-agent reinforcement library with a corresponding games model. While the scientific novelty appears to be limited, the practical utility seems to be great for the community and such open-source libraries are explicitly encouraged in the Call for Papers. | train | [
"8W_SGozA6V3",
"-JB_obLJp77",
"bKoMGTKrjZL",
"kvJoMLsIat",
"hZxNMZ5sBvp",
"z0wuqIqP4n",
"ZIIPpAWjfWH",
"vOjT-GimBj6"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces PettingZoo, a Python library for multi-agent RL tasks. It also discusses the API choices of said library, and how they might favoriably compare to other approaches available. The paper is highly readable and relatively clearly written, with only minor mistypes and glitches (examples: a missin... | [
5,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_fLnsj7fpbPI",
"bKoMGTKrjZL",
"kvJoMLsIat",
"8W_SGozA6V3",
"ZIIPpAWjfWH",
"vOjT-GimBj6",
"nips_2021_fLnsj7fpbPI",
"nips_2021_fLnsj7fpbPI"
] |
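The AEC interaction loop at the heart of PettingZoo's API looks roughly as follows. The environment version suffix and the exact tuple returned by `last()` have changed across releases (newer versions split `done` into termination/truncation flags), so treat the specifics as indicative of the paper-era API rather than current.

```python
# Basic AEC loop, per the PettingZoo documentation of the paper's era.
from pettingzoo.butterfly import pistonball_v4

env = pistonball_v4.env()
env.reset()
for agent in env.agent_iter():          # cycles through agents one at a time
    observation, reward, done, info = env.last()
    action = None if done else env.action_spaces[agent].sample()
    env.step(action)                    # acts only for `agent`
env.close()
```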
nips_2021_HbViCqfbd7 | Parametric Complexity Bounds for Approximating PDEs with Neural Networks | Tanya Marwah, Zachary Lipton, Andrej Risteski | accept | Recently, neural networks (NN) have had great success in approximating solutions of partial differential equations (PDEs). One key observation is that the NN approach to PDEs does not seem to suffer from the curse of dimensionality. Hence, it is important to understand the strengths and limitations of this approach.
The authors take on a particular class of PDEs (i.e., linear elliptic PDEs with Dirichlet boundary conditions) and provide a theoretical characterization of how many parameters are needed to approximate their solutions to within a desired accuracy. They identify that "small" networks suffice, where the number of parameters depends polynomially on the dimension and linearly on the number of parameters required to express the PDE. The theoretical analysis is non-trivial and the paper provides a great path forward in this vein of PDE research.
While the initial scores of the work were below the threshold, the rebuttal was effective in clarifying the concerns of the reviewers, in particular mooting a counter-example proposed by a reviewer, confusion about the application of the gradient operators (due to the final activations), and the way in which the NNs are constructed by the authors (i.e., growing the network in each iteration). As a result, the scores improved uniformly, and I thank both the authors and the reviewers for their efforts.
| val | [
"939EDMddQH",
"EXI3mEW9qzs",
"GVxDaMmhtbV",
"GCRkjQeCAz",
"625uE0604uI",
"DPOjeZH9MZ",
"fVRZozT5Mkv",
"Av3VyBBS1t",
"DwvmzTZpVSy",
"kluxfyY8oCz",
"huE7qb0YP7",
"fwYtGAwzHrm",
"5bUbg2luFN",
"RIh-0x06iA6",
"9jQbZpKvE8l",
"IN2Mgjb-L0n",
"Hm-OO3Vh_eh",
"XmWcgMUgCO",
"KARFoM8ESc",
"... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_rev... | [
" Since my correctness concerns have been fully resolved, I have increased my rating. I think the revisions pointed out by the other reviewers will also improve the clarity of the manuscript. I would like to thank the authors for their careful and thoughtful replies. ",
"The authors attempt to bound a neural ne... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
5
] | [
"EXI3mEW9qzs",
"nips_2021_HbViCqfbd7",
"0nrR8P926vu",
"625uE0604uI",
"EXI3mEW9qzs",
"EXI3mEW9qzs",
"kluxfyY8oCz",
"TiG5HR3IPXV",
"nips_2021_HbViCqfbd7",
"huE7qb0YP7",
"5bUbg2luFN",
"DwvmzTZpVSy",
"fwYtGAwzHrm",
"EXI3mEW9qzs",
"IN2Mgjb-L0n",
"0i_GleYJa6",
"DwvmzTZpVSy",
"KARFoM8ESc"... |
nips_2021_USq7LP5pnDH | Learning-to-learn non-convex piecewise-Lipschitz functions | We analyze the meta-learning of the initialization and step-size of learning algorithms for piecewise-Lipschitz functions, a non-convex setting with applications to both machine learning and algorithms. Starting from recent regret bounds for the exponential forecaster on losses with dispersed discontinuities, we generalize them to be initialization-dependent and then use this result to propose a practical meta-learning procedure that learns both the initialization and the step-size of the algorithm from multiple online learning tasks. Asymptotically, we guarantee that the average regret across tasks scales with a natural notion of task-similarity that measures the amount of overlap between near-optimal regions of different tasks. Finally, we instantiate the method and its guarantee in two important settings: robust meta-learning and multi-task data-driven algorithm design.
| accept | The paper gives an online algorithm for learning piecewise-Lipschitz functions under certain assumptions regarding the dispersion of the points of discontinuity. This follows recent work on data-driven algorithm design and online learning. | val | [
"chgDJlWCrC",
"hyiHmvc7df_",
"UkUTxxW1GcM",
"2yp1O4ClGmF",
"57ISz9g7Xrr",
"9IyEcKfXB3n"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your positive review. We hope to address your concerns below. In particular we thank the reviewer for recommendations for additional experiments that help further demonstrate empirical usefulness of our algorithms, for which strong theoretical guarantees have been obtained. The experiments (in the m... | [
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
3,
4,
3
] | [
"9IyEcKfXB3n",
"57ISz9g7Xrr",
"2yp1O4ClGmF",
"nips_2021_USq7LP5pnDH",
"nips_2021_USq7LP5pnDH",
"nips_2021_USq7LP5pnDH"
] |
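The base learner generalized in the abstract above, the exponential forecaster, is easy to state over a discretized parameter grid. The paper meta-learns the prior (initialization) and the step size `lam` across tasks; the single-task sketch below uses a uniform prior and made-up losses.

```python
import numpy as np

def exponential_forecaster(losses, lam, prior=None, rng=None):
    """losses: (T, K) array, losses[t, k] = loss of grid point k in round t
    (revealed after playing).  Plays p_t(k) proportional to
    prior(k) * exp(-lam * cumulative_loss(k)).
    """
    rng = rng or np.random.default_rng(0)
    T, K = losses.shape
    prior = np.full(K, 1.0 / K) if prior is None else prior
    cum, total = np.zeros(K), 0.0
    for t in range(T):
        w = prior * np.exp(-lam * cum)
        k = rng.choice(K, p=w / w.sum())
        total += losses[t, k]
        cum += losses[t]
    return total

losses = np.random.default_rng(1).random((100, 50))
regret = exponential_forecaster(losses, lam=0.5) - losses.sum(axis=0).min()
```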
nips_2021_sNKpWhzEDWS | Uncertain Decisions Facilitate Better Preference Learning | Existing observational approaches for learning human preferences, such as inverse reinforcement learning, usually make strong assumptions about the observability of the human's environment. However, in reality, people make many important decisions under uncertainty. To better understand preference learning in these cases, we study the setting of inverse decision theory (IDT), a previously proposed framework where a human is observed making non-sequential binary decisions under uncertainty. In IDT, the human's preferences are conveyed through their loss function, which expresses a tradeoff between different types of mistakes. We give the first statistical analysis of IDT, providing conditions necessary to identify these preferences and characterizing the sample complexity—the number of decisions that must be observed to learn the tradeoff the human is making to a desired precision. Interestingly, we show that it is actually easier to identify preferences when the decision problem is more uncertain. Furthermore, uncertain decision problems allow us to relax the unrealistic assumption that the human is an optimal decision maker but still identify their exact preferences; we give sample complexities in this suboptimal case as well. Our analysis contradicts the intuition that partial observability should make preference learning more difficult. It also provides a first step towards understanding and improving preference learning methods for uncertain and suboptimal humans.
| accept | This paper analyzes the inverse decision theory task of recovering the loss function of a decision maker making (observational) decisions under uncertainty. The paper's surprising insight is that uncertainty of the decision maker can enable better loss function recovery. This is supported with sample complexity bounds. There were some initial concerns among the reviewers about clarity in distinguishing "clear" vs. "uncertain" decisions and the positioning of the work among some other papers, but the author(s) was/were able to alleviate those concerns and the reviewers are satisfied with the intended direction of the final version of the paper. The reviewers discussed the lack of experiments in the paper, concluding that experiments demonstrating the analyzed benefits---particularly with suboptimal decision makers---would enhance the paper, but that the paper was a significant contribution without additional experiments and worthy of acceptance. | val | [
"7A70lPX6B2v",
"II2sm8e7B_W",
"WcXMKJjFpST",
"sWt3LCjlhxO",
"y6B_mKRmC7R",
"LpscqDjkycP",
"juX8AMeFXy2",
"oJRHb7HUhJp",
"fYNCXO1KS6s"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors study the problem of learning the preferences of a human decision-maker from past decisions. The setting is that feature–decision pairs (X, Y) are drawn from a known distribution $\\mathcal{D}$. Initially the authors assume the Bayes-optimal case, where the decision-maker perfectly knows P(Y | X), and ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_sNKpWhzEDWS",
"LpscqDjkycP",
"sWt3LCjlhxO",
"y6B_mKRmC7R",
"fYNCXO1KS6s",
"oJRHb7HUhJp",
"7A70lPX6B2v",
"nips_2021_sNKpWhzEDWS",
"nips_2021_sNKpWhzEDWS"
] |
nips_2021_a7APmM4B9d | Decision Transformer: Reinforcement Learning via Sequence Modeling | We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
| accept | After an involved discussion on several topics regarding experimental practices and relation to prior art, the reviewers settled on a range of positive scores. The cited strengths of the paper include the potential impact of introducing modern sequence modeling tools for RL problems, both in the offline-RL domain of the paper's experiments and potentially in more areas in the future. I personally found several areas where further technical detail and discussion would be desirable in the final version:
A) Expand your analysis of the method's limitations. Note the requests of reviewer dbvQ, but additionally, consider the balance and fairness in your text between identifying the "deadly triad" of deep TD and your speculations on the performance of transformers for the task. The empirical comparison shows DT is strong but not overwhelming, and therefore we may find that several of the same, or new, problems specific to modeling the relations of reward/return/state/actions over time are present. Help the reader understand the limits you expect for your method, and help the community know when to use this and where to push new methods next.
B) While the training procedure is well documented, some reviewers mentioned the lack of detail about using your model at test/inference time. Even in offline-RL benchmarks, one must execute evaluation rollouts where novel states can be encountered and the future return is not available. I'd like to see a full description of the algorithm to perform these rollouts with your model (perhaps in picture form analogous to Fig 1).
| train | [
"NJcKkbk-QB5",
"b9PI0KxSIvh",
"7AAuR86IMuS",
"lMN6nUzycjr",
"zryrh8XdhgV",
"3dtvkDVSAZD",
"vHA7htDA2F",
"e9u1y2nEc87",
"uAytgieC7TG",
"E3QH5-CeDiz",
"nOIoPXaaGtC",
"_st1xK4wItF",
"2xjq8xjE9qB",
"MQaS0B2X9AQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the additional experiments on hyper-parameter tuning, they are quite informative. For offline model selection, you could refer to RLUnplugged and use their naive baseline (might be too late for this paper but it would strengthen the paper if you could include in the camera ready). ",
"This paper prop... | [
-1,
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
9,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"_st1xK4wItF",
"nips_2021_a7APmM4B9d",
"zryrh8XdhgV",
"e9u1y2nEc87",
"3dtvkDVSAZD",
"uAytgieC7TG",
"nips_2021_a7APmM4B9d",
"E3QH5-CeDiz",
"b9PI0KxSIvh",
"vHA7htDA2F",
"MQaS0B2X9AQ",
"2xjq8xjE9qB",
"nips_2021_a7APmM4B9d",
"nips_2021_a7APmM4B9d"
] |
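The evaluation rollout that point B of the meta-review asks to see spelled out is, at its core, a return-conditioning loop. In the sketch below, `model.predict` (mapping the recent return-to-go/state/action token history to a next action) and the Gym-style `env` are hypothetical stand-ins; the grounded mechanics are the target-return prompt and the per-step decrement of the return-to-go by the observed reward.

```python
def rollout(model, env, target_return, max_steps=1000, context=20):
    """Return-conditioned evaluation rollout for a Decision
    Transformer-style model (sketch under the assumptions above)."""
    state = env.reset()
    rtg, states, actions = [target_return], [state], []
    total_reward = 0.0
    for _ in range(max_steps):
        action = model.predict(rtg[-context:], states[-context:],
                               actions[-context:])   # causal, last K tokens
        state, reward, done, _ = env.step(action)
        total_reward += reward
        actions.append(action)
        rtg.append(rtg[-1] - reward)   # condition on remaining desired return
        states.append(state)
        if done:
            break
    return total_reward
```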
nips_2021_OU4LL1qP3Dg | Probability Paths and the Structure of Predictions over Time | In settings ranging from weather forecasts to political prognostications to financial projections, probability estimates of future binary outcomes often evolve over time. For example, the estimated likelihood of rain on a specific day changes by the hour as new information becomes available. Given a collection of such probability paths, we introduce a Bayesian framework -- which we call the Gaussian latent information martingale, or GLIM -- for modeling the structure of dynamic predictions over time. Suppose, for example, that the likelihood of rain in a week is 50%, and consider two hypothetical scenarios. In the first, one expects the forecast to be equally likely to become either 25% or 75% tomorrow; in the second, one expects the forecast to stay constant for the next several days. A time-sensitive decision-maker might select a course of action immediately in the latter scenario, but may postpone their decision in the former, knowing that new information is imminent. We model these trajectories by assuming predictions update according to a latent process of information flow, which is inferred from historical data. In contrast to general methods for time series analysis, this approach preserves important properties of probability paths such as the martingale structure and appropriate amount of volatility and better quantifies future uncertainties around probability paths. We show that GLIM outperforms three popular baseline methods, producing better estimated posterior probability path distributions measured by three different metrics. By elucidating the dynamic structure of predictions over time, we hope to help individuals make more informed choices.
| accept | The authors formulate a new problem of predicting the probability paths of a forecaster over time as it obtains new evidence, and a new algorithm for doing so, based on the fact that forecasts are expected to satisfy a martingale property. Reviewers agreed the problem setting and method were novel, interesting, and technically well executed, and gave scores of 5, 6, 6, 6. There were no major critiques raised, though Reviewer mTu3 wrote a long and insightful review that raised some questions. The meta-reviewer is unsure about the eventual scope of application for this problem and method, but overall finds that the ideas are fresh, interesting, and well executed, so should be of interest to the NeurIPS community. | test | [
"dzkSu86QNzX",
"GVU7O_i8hgr",
"kEok90W-aRf",
"KppSOYGWfUa",
"AlHMBXK7uon"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the detailed review and thoughtful suggestions. In response to the specific questions and concerns raised:\n\n1. MMFE indeed guarantees that the martingale property is satisfied in most cases where the Y_t are not bounded. However, many sampled paths generated by MMFE go beyond the [0, 1... | [
-1,
6,
6,
5,
6
] | [
-1,
4,
3,
4,
3
] | [
"KppSOYGWfUa",
"nips_2021_OU4LL1qP3Dg",
"nips_2021_OU4LL1qP3Dg",
"nips_2021_OU4LL1qP3Dg",
"nips_2021_OU4LL1qP3Dg"
] |
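A toy instance of the martingale structure that GLIM is designed to preserve (illustrative, not the fitted GLIM model): if the forecast is the posterior probability of a terminal event Y = 1{Z_T > 0} given a Gaussian random walk observed up to time t, then p_t = Phi(Z_t / sqrt(T - t)) is a martingale. GLIM additionally learns how much latent information arrives per step, whereas the flow here is uniform.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
T, n_paths = 20, 5000
Z = np.cumsum(rng.standard_normal((n_paths, T)), axis=1)   # latent random walk
t = np.arange(1, T)
paths = norm.cdf(Z[:, : T - 1] / np.sqrt(T - t))           # (n_paths, T-1)
# Martingale check: increments should average to ~0 at every time step.
max_drift = np.abs(np.diff(paths, axis=1).mean(axis=0)).max()
print(f"max |E[p_(t+1) - p_t]| = {max_drift:.4f}")
```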
nips_2021_GUD7rNkaWKr | Deep Extended Hazard Models for Survival Analysis | Unlike standard prediction tasks, survival analysis requires modeling right censored data, which must be treated with care. While deep neural networks excel in traditional supervised learning, it remains unclear how to best utilize these models in survival analysis. A key question asks which data-generating assumptions of traditional survival models should be retained and which should be made more flexible via the function-approximating capabilities of neural networks. Rather than estimating the survival function targeted by most existing methods, we introduce a Deep Extended Hazard (DeepEH) model to provide a flexible and general framework for deep survival analysis. The extended hazard model includes the conventional Cox proportional hazards and accelerated failure time models as special cases, so DeepEH subsumes the popular Deep Cox proportional hazard (DeepSurv) and Deep Accelerated Failure Time (DeepAFT) models. We additionally provide theoretical support for the proposed DeepEH model by establishing consistency and convergence rate of the survival function estimator, which underscore the attractive feature that deep learning is able to detect low-dimensional structure of data in high-dimensional space. Numerical experiments also provide evidence that the proposed methods outperform existing statistical and deep learning approaches to survival analysis.
| accept | This paper has been reviewed by four knowledgeable referees resulting in one accept, two marginal accept and one marginal reject recommendations. Most of the criticism centers on limited novelty and utility of the theoretical portion of this work. Yet, the proposed method appears to stand on its own, yet it has been evaluated on a relatively low dimensional data only. The authors have engaged in discussions with the reviewers that helped resolve some but not all of the stated concerns. All things considered, I recommend this paper for acceptance if space permits its inclusion in the conference agenda. The authors should strive to incorporate all constructive recommendations from the reviewers in the finał camera ready version of their paper. | train | [
"HODzX_yH8ma",
"VCmG5sF0I6D",
"kDPDpE-9AYr",
"esgJ0qAZy7",
"-bQcRS1p5Jr",
"xScznGYY4L8",
"evLKOu9xi--",
"Co1UmgCPMqJ",
"N-yPs4i0tBO",
"dM1WNOaoqQu",
"i-8o1HHvd8"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. Of course, even without that theoretic analysis, most researchers would probably have used a network with only a few hidden layers. Also, this use of O( log n) was an assumption, not the result of the analysis. While I realize the theorem claims that this is {\\bf sufficient} for th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"VCmG5sF0I6D",
"kDPDpE-9AYr",
"dM1WNOaoqQu",
"i-8o1HHvd8",
"dM1WNOaoqQu",
"N-yPs4i0tBO",
"Co1UmgCPMqJ",
"nips_2021_GUD7rNkaWKr",
"nips_2021_GUD7rNkaWKr",
"nips_2021_GUD7rNkaWKr",
"nips_2021_GUD7rNkaWKr"
] |
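For concreteness, the extended hazard (EH) model underlying DeepEH is commonly written as lambda(t | x) = lambda0(t * exp(g1(x))) * exp(g2(x)), which reduces to Cox PH when g1 = 0 and to AFT when g1 = g2. The sketch below shows only this structure with g1, g2 as networks; the baseline hazard, the censored likelihood, and the network sizes are assumptions/omissions for brevity.

```python
import torch
import torch.nn as nn

g1 = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
g2 = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

def hazard(t, x, lambda0):
    # lambda(t | x) = lambda0(t * exp(g1(x))) * exp(g2(x))
    a, b = g1(x).squeeze(-1), g2(x).squeeze(-1)
    return lambda0(t * torch.exp(a)) * torch.exp(b)

# toy call with a constant baseline hazard
h = hazard(torch.rand(5), torch.randn(5, 10), lambda0=torch.ones_like)
```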
nips_2021__aJnkoYKj6s | TNASP: A Transformer-based NAS Predictor with a Self-evolution Framework | Predictor-based Neural Architecture Search (NAS) continues to be an important topic because it aims to mitigate the time-consuming search procedure of traditional NAS methods. A promising performance predictor determines the quality of final searched models in predictor-based NAS methods. Most existing predictor-based methodologies train model-based predictors under a proxy dataset setting, which may suffer from accuracy decline and generalization problems, mainly due to their poor ability to represent the spatial topology information of graph-structured data. Besides the poor encoding of spatial topology information, these works did not take advantage of temporal information such as historical evaluations during training. Thus, we propose a Transformer-based NAS performance predictor, associated with a Laplacian matrix based positional encoding strategy, which better represents topology information and achieves better performance than previous state-of-the-art methods on NAS-Bench-101, NAS-Bench-201, and DARTS search space. Furthermore, we also propose a self-evolution framework that can fully utilize temporal information as guidance. This framework iteratively involves the evaluations of previously predicted results as constraints into the current optimization iteration, thus further improving the performance of our predictor. Such a framework is model-agnostic and thus can enhance performance on various backbone structures for the prediction task. Our proposed method helped us rank 2nd among all teams in CVPR 2021 NAS Competition Track 2: Performance Prediction Track.
| accept | The authors apply transformer with Laplacian matrix positional encoding as a predictor in predictor based neural architecture search. They also introduce a self-evolution framework to incorporate new data points to improve the performance. They test the model on standard NAS and DARTS benchmarks (collected set of architectures with their performances) and achieve some of the best results.
Overall the paper is sound. The main issues raised by the reviewers were whether the paper is sufficiently novel to be accepted to NeurIPS, and partly the small size of the NAS and DARTS benchmarks as an evaluation for neural architecture search. There were also many questions about relations to other works and references that were missing from the paper; however, the authors clarified many of the differences from previous works in the rebuttal. | train | [
"Y7V629II6sG",
"l5O0rE_GUua",
"f3I-RhWfoo3",
"rphwQGc4UWV",
"4a-ZvirV3tc",
"8HGqy4KNmhB",
"gH9Aoy0BVP1",
"VSAt1rnKYqM",
"QAftVLBCb6a",
"Qubdt9dsUj",
"rdKTJFRnUpR",
"mPg6D9-7UWb"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer's comments and appreciate that the merit of our method is well recognized by the reviewer. \nThe improvements of paper presentations and reformulations can be made straightforwardly by incorporating our responses and all reviewers' comments into our original paper, which can be done in a... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"l5O0rE_GUua",
"gH9Aoy0BVP1",
"4a-ZvirV3tc",
"nips_2021__aJnkoYKj6s",
"8HGqy4KNmhB",
"rphwQGc4UWV",
"mPg6D9-7UWb",
"rdKTJFRnUpR",
"Qubdt9dsUj",
"nips_2021__aJnkoYKj6s",
"nips_2021__aJnkoYKj6s",
"nips_2021__aJnkoYKj6s"
] |
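The Laplacian positional encoding mentioned in the TNASP abstract is a short computation: take the eigenvectors of the graph Laplacian with the smallest eigenvalues as per-node position vectors and add them to the operation embeddings fed to the transformer. Details such as normalization and sign fixing vary by implementation, so the sketch below is one plausible variant, not the paper's exact code.

```python
import numpy as np

def laplacian_pe(adj, k):
    """k smallest-eigenvalue eigenvectors of the (symmetrized) graph
    Laplacian L = D - A, used as (n_nodes, k) positional features."""
    a = np.maximum(adj, adj.T)                 # symmetrize for a real spectrum
    lap = np.diag(a.sum(axis=1)) - a
    vals, vecs = np.linalg.eigh(lap)
    return vecs[:, np.argsort(vals)[:k]]

# Example: a 4-node cell with edges 0->1, 0->2, 1->3, 2->3.
adj = np.zeros((4, 4))
adj[[0, 0, 1, 2], [1, 2, 3, 3]] = 1
pe = laplacian_pe(adj, k=2)
```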
nips_2021_TmKQ_XeezEB | Automorphic Equivalence-aware Graph Neural Network | Distinguishing the automorphic equivalence of nodes in a graph plays an essential role in many scientific domains, e.g., computational biology and social network analysis. However, existing graph neural networks (GNNs) fail to capture such an important property. To make GNN aware of automorphic equivalence, we first introduce a localized variant of this concept --- ego-centered automorphic equivalence (Ego-AE). Then, we design a novel variant of GNN, i.e., GRAPE, that uses learnable AE-aware aggregators to explicitly differentiate the Ego-AE of each node's neighbors with the aid of various subgraph templates. While the design of subgraph templates can be hard, we further propose a genetic algorithm to automatically search them from graph data. Moreover, we theoretically prove that GRAPE is expressive in terms of generating distinct representations for nodes with different Ego-AE features, which fills in a fundamental gap of existing GNN variants. Finally, we empirically validate our model on eight real-world graph datasets, including social networks, e-commerce co-purchase networks, and citation networks, and show that it consistently outperforms existing GNNs. The source code is publicly available at https://github.com/tsinghua-fib-lab/GRAPE.
| accept | This paper presents a new GNN architecture that addresses automorphic equivalence among nodes. The idea is novel and effective, and the results demonstrate the superiority of the proposed approach. The reviewers liked the idea while raising concerns about some theoretical claims, the datasets, and the selection of subgraph templates. The authors did a great job in the rebuttal and clarified some claims that had caused misunderstanding. In the end, the reviewers reached a consensus to accept this paper. | train | [
"lqan4KqJwr-",
"GV6AL5fsseJ",
"QYgWAhVrlp",
"oiO2S17vgxs",
"A3VJtz029fr",
"-w_p8geoVuX",
"mz5rXqKzHJ",
"zjuZNwde14",
"uR6QU8fx0l",
"HsbG5PTqy9b",
"jt09T6XB95H",
"Qmamu6ZdNkE",
"QKoUzj_5dYK",
"UlPPdpPj7KZ",
"3k7QrPhNjIc"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for the response. It answers some of my questions. However, I still decided to hold my original evaluation for this paper.",
"Overall this work contributes novel ideas from multiple aspects from introducing the new concept of GNNs being automorphic equivalence-aware to further push the expressiveness of ... | [
-1,
7,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
2,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"QKoUzj_5dYK",
"nips_2021_TmKQ_XeezEB",
"UlPPdpPj7KZ",
"nips_2021_TmKQ_XeezEB",
"HsbG5PTqy9b",
"nips_2021_TmKQ_XeezEB",
"zjuZNwde14",
"uR6QU8fx0l",
"Qmamu6ZdNkE",
"nips_2021_TmKQ_XeezEB",
"nips_2021_TmKQ_XeezEB",
"nips_2021_TmKQ_XeezEB",
"nips_2021_TmKQ_XeezEB",
"nips_2021_TmKQ_XeezEB",
... |
nips_2021_fNKwtwJHjx | Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems | Recently, there has been much interest in studying the convergence rates of without-replacement SGD, and proving that it is faster than with-replacement SGD in the worst case. However, these works ignore or do not provide tight bounds in terms of the problem's geometry, including its condition number. Perhaps surprisingly, we prove that when the condition number is taken into account, without-replacement SGD \emph{does not} significantly improve on with-replacement SGD in terms of worst-case bounds, unless the number of epochs (passes over the data) is larger than the condition number. Since many problems in machine learning and other areas are both ill-conditioned and involve large datasets, this indicates that without-replacement does not necessarily improve over with-replacement sampling for realistic iteration budgets. We show this by providing new lower and upper bounds which are tight (up to log factors), for quadratic problems with commuting quadratic terms, precisely quantifying the dependence on the problem parameters.
| accept | The writing in this paper is clear. It is well-motivated and informs theory being actively developed in its area. The paper's lower bound accounts for the condition number where previous ones did not, and in doing so significantly strengthens the known conditions under which the community can hope to guarantee, in general, that without-replacement SGD outperforms standard SGD.
Three of four reviewers support accepting this paper. One reviewer's recommendation is just below the acceptance bar, but without a particularly strong argument against acceptance. Some of the criticism from this reviewer is around wording in presentation and motivation. I encourage the authors to consider these as suggestions in their final revisions.
Lower bounds are valuable in a community's pursuit of theory and are technically challenging to establish. Several constructive comments from reviewers have been acknowledged by the authors and I believe will improve the writing and clarity further. Together with sufficiently many positive reviews, I recommend it for acceptance. | train | [
"0eg-9gNh9gk",
"AXCrRaMPy8C",
"eNhsnfdzukk",
"ztKEO9LVF0N",
"JxzBE_q7m9o",
"6zKQtcGqJ3p",
"9yvorfho1Rd",
"RJ91ha8SWOM",
"Vr2j8jqZagg",
"Lk7QJLbZEIG",
"-ZLm-q7IRxn",
"f3euVyMR7Bg",
"STR0X1ipgaM",
"bWtpIjYS5or",
"HBAZMYUwrV0"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper compares the efficiency of using different variants of stochastic gradient descent to find the minimum of a function that is the sum of multiple convex functions. It focuses on the case where the minimum of the function is dramatically flatter in some directions than in others. More specifically, it com... | [
6,
5,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_fNKwtwJHjx",
"nips_2021_fNKwtwJHjx",
"HBAZMYUwrV0",
"nips_2021_fNKwtwJHjx",
"f3euVyMR7Bg",
"RJ91ha8SWOM",
"Lk7QJLbZEIG",
"Lk7QJLbZEIG",
"AXCrRaMPy8C",
"-ZLm-q7IRxn",
"Vr2j8jqZagg",
"ztKEO9LVF0N",
"0eg-9gNh9gk",
"HBAZMYUwrV0",
"nips_2021_fNKwtwJHjx"
] |
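The two sampling schemes compared in the abstract above are easy to contrast in a tiny simulation on a sum of 1-d quadratics f_i(x) = a_i (x - b_i)^2 / 2. The paper's separation concerns the interplay of condition number and epoch count; this snippet only sets up the comparison, and the hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, epochs, eta = 50, 20, 0.01
a = rng.uniform(0.1, 10.0, n)                 # spread of curvatures
b = rng.standard_normal(n)
x_opt = (a * b).sum() / a.sum()               # minimizer of the average loss

def run(order_fn):
    x = 0.0
    for _ in range(epochs):
        for i in order_fn():
            x -= eta * a[i] * (x - b[i])      # gradient of f_i at x
    return (x - x_opt) ** 2

err_rr = run(lambda: rng.permutation(n))      # without replacement (reshuffle)
err_wr = run(lambda: rng.integers(0, n, n))   # with replacement
```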
nips_2021_wxBGz3ScBBo | Analytic Study of Families of Spurious Minima in Two-Layer ReLU Neural Networks: A Tale of Symmetry II | Yossi Arjevani, Michael Field | accept | This paper gives a detailed characterization of spurious local minima for 2-layer ReLU networks, a model that has received substantial theoretical interest. The paper achieves this using a powerful symmetry-breaking technique, and the results explain some of the observations made in previous works. Overall, the paper is a very solid contribution to the understanding of the optimization landscape of neural networks, and the symmetry-breaking technique may be useful in more general settings. | train | [
"W9urGLH6lDU",
"FToo-Tyn5iX",
"EUbWN6eoKV",
"r0W17o3-8Wa",
"p1GWw4CGLfM",
"2Ue6BdwzFd",
"QxTBT1GWnI2",
"kM3JLTjFErH",
"lbJ-v-S_yMS",
"R1enheJGvZ2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Under a student-teacher setup with unit Gaussian input, this submission studies the Hessian of a finite-width two-layer ReLU network at critical points of the (population) MSE loss, and characterizes the spectrum and loss at different minima. The starting observation is a symmetry-breaking property based on which ... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"nips_2021_wxBGz3ScBBo",
"kM3JLTjFErH",
"nips_2021_wxBGz3ScBBo",
"W9urGLH6lDU",
"nips_2021_wxBGz3ScBBo",
"lbJ-v-S_yMS",
"R1enheJGvZ2",
"EUbWN6eoKV",
"nips_2021_wxBGz3ScBBo",
"nips_2021_wxBGz3ScBBo"
] |
nips_2021_2r6F9duQ6o5 | CAM-GAN: Continual Adaptation Modules for Generative Adversarial Networks | We present a continual learning approach for generative adversarial networks (GANs), by designing and leveraging parameter-efficient feature map transformations. Our approach is based on learning a set of global and task-specific parameters. The global parameters are fixed across tasks whereas the task-specific parameters act as local adapters for each task, and help in efficiently obtaining task-specific feature maps. Moreover, we propose an element-wise addition of residual bias in the transformed feature space, which further helps stabilize GAN training in such settings. Our approach also leverages task similarities based on the Fisher information matrix. Leveraging this knowledge from previous tasks significantly improves the model performance. In addition, the similarity measure also helps reduce the parameter growth in continual adaptation and helps to learn a compact model. In contrast to the recent approaches for continually-learned GANs, the proposed approach provides a memory-efficient way to perform effective continual data generation. Through extensive experiments on challenging and diverse datasets, we show that the feature-map-transformation approach outperforms state-of-the-art methods for continually-learned GANs, with substantially fewer parameters. The proposed method generates high-quality samples that can also improve the generative-replay-based continual learning for discriminative tasks.
| accept | The paper addresses continual learning for GANs with two major novel techniques: 1) learnable task-specific adapters to combat catastrophic forgetting, 2) a task-similarity measure based on the Fisher information matrix, accelerated by computational approximations. Although the method appears a bit ad hoc, the experimental results show effective performance, and the rebuttal has addressed the bulk of the concerns (some of which are quite insightful). I suggest that the authors thoroughly incorporate the rebuttal into the paper. Overall, it is a neat and practical approach that is a good addition to the proceedings. | train | [
"h6ULHE-id0s",
"doF6jpF-NLa",
"HoX0XfBp_t6",
"MqB6MTO7PvT",
"P_AvCdP8yCb",
"TKkVGi9gxey",
"D7Eipvgb2z-",
"F4CyBU6D5-",
"3zp8jh6op8f",
"LO31zkTPcoj",
"WRnfB9Gxwf",
"WzZfRXIoC4A",
"yY9u7x1bdNM",
"fquSPXkxXCw"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Hello Authors,\n\nThank you for the detailed response. I went through the comments, and it's good to know that we agree on most aspects of the paper that are strong and that needed improvement. \n\nThe response addresses my major concerns. ",
" Thanks for the authors' detailed responses.\nI agree that the compr... | [
-1,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
3,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"WRnfB9Gxwf",
"WzZfRXIoC4A",
"nips_2021_2r6F9duQ6o5",
"LO31zkTPcoj",
"D7Eipvgb2z-",
"nips_2021_2r6F9duQ6o5",
"F4CyBU6D5-",
"3zp8jh6op8f",
"TKkVGi9gxey",
"yY9u7x1bdNM",
"fquSPXkxXCw",
"HoX0XfBp_t6",
"nips_2021_2r6F9duQ6o5",
"nips_2021_2r6F9duQ6o5"
] |
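The adapter idea described in the CAM-GAN abstract can be sketched compactly: a lightweight per-task transform of a frozen (global) layer's feature maps plus a learnable residual bias. The exact placement and parameterization follow the paper, which this sketch simplifies; the layer sizes are assumptions for the demo.

```python
import torch
import torch.nn as nn

class TaskAdapter(nn.Module):
    """Illustrative per-task adapter: a 1x1 conv feature-map transform
    with a learnable residual bias, trained while the global conv weights
    stay fixed across tasks."""
    def __init__(self, channels):
        super().__init__()
        self.transform = nn.Conv2d(channels, channels, kernel_size=1)
        self.bias = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, h):
        return self.transform(h) + self.bias + h    # adapted feature map

global_conv = nn.Conv2d(64, 64, 3, padding=1)
for p in global_conv.parameters():                  # global weights frozen
    p.requires_grad_(False)
adapter = TaskAdapter(64)                           # only these train per task
h = adapter(global_conv(torch.randn(2, 64, 32, 32)))
```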
nips_2021_QgNAUqQLh4 | Structured Dropout Variational Inference for Bayesian Neural Networks | Approximate inference in Bayesian deep networks exhibits a dilemma of how to yield high fidelity posterior approximations while maintaining computational efficiency and scalability. We tackle this challenge by introducing a novel variational structured approximation inspired by the Bayesian interpretation of Dropout regularization. Concretely, we focus on the inflexibility of the factorized structure in Dropout posterior and then propose an improved method called Variational Structured Dropout (VSD). VSD employs an orthogonal transformation to learn a structured representation on the variational Gaussian noise with plausible complexity, and consequently induces statistical dependencies in the approximate posterior. Theoretically, VSD successfully addresses the pathologies of previous Variational Dropout methods and thus offers a standard Bayesian justification. We further show that VSD induces an adaptive regularization term with several desirable properties which contribute to better generalization. Finally, we conduct extensive experiments on standard benchmarks to demonstrate the effectiveness of VSD over state-of-the-art variational methods on predictive accuracy, uncertainty estimation, and out-of-distribution detection.
| accept | This paper proposes structural variational dropout for BNNs.
Reviewers think the proposed methodology is novel and the paper's theoretical contribution is significant, in the sense that it addresses a well-known theoretical issue of variational dropout & MC-dropout. Given that MC-dropout is quite often used in practice, such a theoretical contribution is useful.
Still, the paper has the problem that some reviewers are concerned about the experiments using implementations that differ from the version used in the theoretical analysis. So in revision, either this difference needs to be explained (if it is a confusion), or the practical approach needs to be justified better, so that it does not have the infinite-KL issues of MC-dropout. | train | [
"Hz6-zntFYO1",
"ljs8dE2TR2",
"lCA37mntphG",
"uk5A_314q9g",
"cwAK2IqJ0Fj",
"1LRp_rGUjsL",
"w1_eY-L_j4Y",
"xJMVu2AipsH",
"x0bpyS1p2-",
"-iGja_OB_QY",
"hdqYAu560GR",
"_AUBG_qsrgr",
"VsntLb3Wo2I",
"AWB_rbw1EUH",
"wNkOvRFHm-e",
"1aXUP01gOtj",
"3YAL3_YFeS6",
"IAj38KNwv6d",
"1t2yt2Q_96Z... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_r... | [
" We would like to thank the reviewer for acknowledging the contributions of our paper and changing the score.\n\nRegarding your comment for the experimental descriptions, we will clarify in Sections 4.1 & 4.2 as in our response to your Question-7. \n\nRegarding the roles of hierarchical prior, we will include the ... | [
-1,
-1,
5,
7,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
5,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"ljs8dE2TR2",
"AWB_rbw1EUH",
"nips_2021_QgNAUqQLh4",
"nips_2021_QgNAUqQLh4",
"x0bpyS1p2-",
"uk5A_314q9g",
"3YAL3_YFeS6",
"-iGja_OB_QY",
"nips_2021_QgNAUqQLh4",
"hdqYAu560GR",
"_AUBG_qsrgr",
"wNkOvRFHm-e",
"uk5A_314q9g",
"3YAL3_YFeS6",
"IAj38KNwv6d",
"1t2yt2Q_96Z",
"lCA37mntphG",
"x... |
nips_2021_lkYOOQIcC0L | Neural Relightable Participating Media Rendering | Learning neural radiance fields of a scene has recently allowed realistic novel view synthesis of the scene, but they are limited to synthesizing images under the original fixed lighting condition. Therefore, they are not flexible for the eagerly desired tasks like relighting, scene editing and scene composition. To tackle this problem, several recent methods propose to disentangle reflectance and illumination from the radiance field. These methods can cope with solid objects with opaque surfaces but participating media are neglected. Also, they take into account only direct illumination or at most one-bounce indirect illumination, and thus suffer from energy loss due to ignoring the high-order indirect illumination. We propose to learn neural representations for participating media with a complete simulation of global illumination. We estimate direct illumination via ray tracing and compute indirect illumination with spherical harmonics. Our approach avoids computing the lengthy indirect bounces and does not suffer from energy loss. Our experiments on multiple scenes show that our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods, and it can generalize to deal with solid objects with opaque surfaces as well. [A hypothetical code sketch of the spherical-harmonics ingredient follows this record.]
| accept | The reviews were split: 3 reviewers gave 7 and one gave 4. Those who supported acceptance acknowledged the novelty of the paper, as the method can jointly estimate volume density, scattering albedo and lighting in an unsupervised manner, and can capture more accurate global illumination of participating media. The reviewer who gave 4 had concerns that the proposed method's use of a network to predict the spherical-harmonics approximation of indirect illumination/multiple scattering does not necessarily generalize well; there was a lengthy discussion between the authors and this reviewer, who was ultimately not convinced. The AC agrees with the majority that the paper presented an interesting extension of previous "relightable NeRF" work to participating media. The novelty and results warrant acceptance. | train | [
"c9u2Vc_jywH",
"Rk2D0JbRE77",
"uJT3wuuHoG5",
"cCI8VWiyfj",
"zkC280o8dl8",
"aIaBK7dkDaU",
"3UyPidbJt2y",
"I8mVI_azx3W",
"rpxQCSYleRA",
"cX1PGWhSw8m",
"uSImicqjS4",
"KWQhfiC23Dj",
"R0EWCGok6x",
"u2Rqo1F8NbW",
"Yq8wKppw6Zf",
"ZHLNHxISCk",
"QHzDUkMpJM",
"kxTdwTg0WQa",
"G4tGrPVeRjY"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
"The authors present a neural implicit representation for participating media. This requires (i) extending neural radiance field approaches that are limited to a fixed lighting and therefore less flexible regarding relighting and scene editing, or (ii) overcoming methods for disentangling reflectance and illuminati... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"nips_2021_lkYOOQIcC0L",
"uJT3wuuHoG5",
"zkC280o8dl8",
"aIaBK7dkDaU",
"3UyPidbJt2y",
"KWQhfiC23Dj",
"rpxQCSYleRA",
"uSImicqjS4",
"cX1PGWhSw8m",
"R0EWCGok6x",
"u2Rqo1F8NbW",
"G4tGrPVeRjY",
"kxTdwTg0WQa",
"c9u2Vc_jywH",
"QHzDUkMpJM",
"nips_2021_lkYOOQIcC0L",
"nips_2021_lkYOOQIcC0L",
... |
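The relightable-media record above computes indirect illumination with spherical harmonics. As a hedged illustration of just that ingredient (not the paper's method), the sketch below evaluates the real spherical-harmonics basis up to degree 2 and reconstructs an RGB radiance value from coefficients that, in the paper's setting, would be predicted by a network; here they are random stand-ins.

```python
import numpy as np

def sh_basis_deg2(d):
    """Real spherical-harmonics basis (9 terms, degrees 0-2) at unit vector d."""
    x, y, z = d
    return np.array([
        0.282095,                        # l=0
        0.488603 * y,                    # l=1, m=-1
        0.488603 * z,                    # l=1, m=0
        0.488603 * x,                    # l=1, m=1
        1.092548 * x * y,                # l=2, m=-2
        1.092548 * y * z,                # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),  # l=2, m=0
        1.092548 * x * z,                # l=2, m=1
        0.546274 * (x * x - y * y),      # l=2, m=2
    ])

def indirect_radiance(coeffs, d):
    """Indirect radiance toward direction d from per-channel SH coefficients."""
    return coeffs @ sh_basis_deg2(d)     # coeffs: (3, 9) = RGB x SH terms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coeffs = rng.normal(size=(3, 9))     # stand-in for a network's SH output
    d = np.array([0.0, 0.0, 1.0])        # query direction (must be unit length)
    print("RGB indirect radiance:", indirect_radiance(coeffs, d))
```

Representing the incoming indirect light as a handful of SH coefficients per point is what lets such a method avoid tracing many indirect bounces: the smooth, low-frequency part of multiple scattering is summarized once and then evaluated cheaply for any direction.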