paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2021_3-F0-Zpcrno | Streaming Belief Propagation for Community Detection | The community detection problem requires clustering the nodes of a network into a small number of well-connected ‘communities’. There has been substantial recent progress in characterizing the fundamental statistical limits of community detection under simple stochastic block models. However, in real-world applications, the network structure is typically dynamic, with nodes that join over time. In this setting, we would like a detection algorithm to perform only a limited number of updates at each node arrival. While standard voting approaches satisfy this constraint, it is unclear whether they exploit the network information optimally. We introduce a simple model for networks growing over time which we refer to as the streaming stochastic block model (StSBM). Within this model, we prove that voting algorithms have fundamental limitations. We also develop a streaming belief-propagation (STREAMBP) approach, for which we prove optimality in certain regimes. We validate our theoretical findings on synthetic and real data.
| accept | Each reviewer has a positive opinion of this submission, which deals with a relevant problem. I recommend acceptance. | train | [
"KMORYD9HHGJ",
"nL5NEezxbAh",
"6Ly5OIy6c_x",
"92uEKcwP470",
"yh3jvIoyXVX",
"dsOHQyljlNj",
"BTWCJcfF2DQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the very detailed responses. This answers my questions about the technical contribution. For this I am keeping my current positive evaluation.",
" We thank the reviewer for careful reading and helpful comments.\n\n“Strength: The problem considered in the paper is well motivated. To me the proposed re... | [
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"nL5NEezxbAh",
"BTWCJcfF2DQ",
"yh3jvIoyXVX",
"dsOHQyljlNj",
"nips_2021_3-F0-Zpcrno",
"nips_2021_3-F0-Zpcrno",
"nips_2021_3-F0-Zpcrno"
] |
nips_2021_fj6rFciApc | The staircase property: How hierarchical structure can guide deep learning | This paper identifies a structural property of data distributions that enables deep neural networks to learn hierarchically. We define the ``staircase'' property for functions over the Boolean hypercube, which posits that high-order Fourier coefficients are reachable from lower-order Fourier coefficients along increasing chains. We prove that functions satisfying this property can be learned in polynomial time using layerwise stochastic coordinate descent on regular neural networks -- a class of network architectures and initializations that have homogeneity properties. Our analysis shows that for such staircase functions and neural networks, the gradient-based algorithm learns high-level features by greedily combining lower-level features along the depth of the network. We further back our theoretical results with experiments showing that staircase functions are learnable by more standard ResNet architectures with stochastic gradient descent. Both the theoretical and experimental results support the fact that the staircase property has a role to play in understanding the capabilities of gradient-based learning on regular networks, in contrast to general polynomial-size networks that can emulate any Statistical Query or PAC algorithm, as recently shown.
| accept | This paper studies hierarchical structures that can be efficiently learned by neural networks to formalize the intuition that deep learning works by building feature hierarchies. It was shown that a class of sparsely connected networks trained in a neuron-wise and layer-wise manner can learn hierarchical boolean functions as defined in the paper. As all the reviewers agree, the paper contains strong theoretical results towards this direction and I recommend acceptance.
However, please note that the writing and notation can be improved to make the paper more accessible for a wider audience. Please take into account the updated reviews when preparing the final version to accommodate the requested changes including the necessary clarifications and typos pointed out by the reviewers.
An additional typo/notation issue: In section 3.2, the regularized loss \ell_R is defined as the population loss. However, in the gradient calculation step of Algorithm 2, \ell_R is evaluated at a batch of data points. This seems like a notational error. | train | [
"kMPWiVSlRzv",
"s-VXp4idyr",
"z6c-wCIUvBJ",
"ESmmhUG7el0",
"a2BzXJgymF",
"1OZhx9dQxJ",
"bFa-koRxz8A",
"t6FnF72E8hW",
"8MkZEx1bGn2",
"fNP_dFGjy-O",
"FNG_ID-_Hcf",
"RDThtF1fQzx",
"ABeAP7txfg2"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper discussed the learnability of functions by neural networks.\n\nThe paper introduces a \"staircase function\": a boolean function composed of monomials of increasing order such that higher order monomials are recursively composed of lower-order monomials. The authors argue that this is a prototype of a hie... | [
7,
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_fj6rFciApc",
"z6c-wCIUvBJ",
"RDThtF1fQzx",
"nips_2021_fj6rFciApc",
"8MkZEx1bGn2",
"nips_2021_fj6rFciApc",
"t6FnF72E8hW",
"1OZhx9dQxJ",
"ESmmhUG7el0",
"nips_2021_fj6rFciApc",
"kMPWiVSlRzv",
"ABeAP7txfg2",
"nips_2021_fj6rFciApc"
] |
nips_2021_TRDAFiwDq8A | MagNet: A Neural Network for Directed Graphs | The prevalence of graph-based data has spurred the rapid development of graph neural networks (GNNs) and related machine learning algorithms. Yet, despite the many datasets naturally modeled as directed graphs, including citation, website, and traffic networks, the vast majority of this research focuses on undirected graphs. In this paper, we propose MagNet, a GNN for directed graphs based on a complex Hermitian matrix known as the magnetic Laplacian. This matrix encodes undirected geometric structure in the magnitude of its entries and directional information in their phase. A charge parameter attunes spectral information to variation among directed cycles. We apply our network to a variety of directed graph node classification and link prediction tasks showing that MagNet performs well on all tasks and that its performance exceeds all other methods on a majority of such tasks. The underlying principles of MagNet are such that it can be adapted to other GNN architectures.
| accept | The paper proposes a new GNN for directed graphs using magnetic Laplacian. Specifically, the authors use a complex operator encoding edge direction via phase. This is an interesting approach, which was appreciated by the reviewers. The authors provided detailed responses that addressed the reviewers' comments in a satisfactory manner. The AC recommends acceptance.
| train | [
"RHcg59lE5kH",
"0ULusubYJLe",
"iE5MkUS37D",
"HKIvRCwDhYx",
"Sb6mCp4p5v",
"ghvXJ8E-eId"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper considers the design of graph neural networks for directed graphs. In particular, it uses the magnetic Laplacian, a complex-valued Hermitian matrix that encodes edge directions via a phase parameter, as a matrix from which convolutional filters are formed. By defining this matrix, the authors demonstrat... | [
7,
6,
7,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"nips_2021_TRDAFiwDq8A",
"nips_2021_TRDAFiwDq8A",
"nips_2021_TRDAFiwDq8A",
"RHcg59lE5kH",
"0ULusubYJLe",
"iE5MkUS37D"
] |
nips_2021_oE5lMpPRm0 | Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning | For deployment, neural architecture search should be hardware-aware, in order to satisfy the device-specific constraints (e.g., memory usage, latency and energy consumption) and enhance the model efficiency. Existing methods on hardware-aware NAS collect a large number of samples (e.g., accuracy and latency) from a target device and either build a lookup table or a latency estimator. However, such an approach is impractical in real-world scenarios as there exist numerous devices with different hardware specifications, and collecting samples from such a large number of devices will require prohibitive computational and monetary cost. To overcome such limitations, we propose Hardware-adaptive Efficient Latency Predictor (HELP), which formulates the device-specific latency estimation problem as a meta-learning problem, such that we can estimate the latency of a model's performance for a given task on an unseen device with a few samples. To this end, we introduce novel hardware embeddings to embed any devices considering them as black-box functions that output latencies, and meta-learn the hardware-adaptive latency predictor in a device-dependent manner, using the hardware embeddings. We validate the proposed HELP for its latency estimation performance on unseen platforms, on which it achieves high estimation performance with as few as 10 measurement samples, outperforming all relevant baselines. We also validate end-to-end NAS frameworks using HELP against ones without it, and show that it largely reduces the total time cost of the base NAS method, in latency-constrained settings.
| accept | This is a timely paper. It addresses a relevant problem using novel meta-learning methods and obtains strong results.
All reviewers were in favour of acceptance.
There was a long internal discussion with dozens of posts about the comparison with BRP-NAS. The authors first misleadingly claimed that the results shown in the paper are consistent with the ones in the BRP-NAS paper, but they are clearly not (with a huge difference in Spearman correlation of roughly 0.8 vs. 0.99!). During the review process, the authors' code for running BRP-NAS was checked and found to be fine, so that is not the reason for the performance difference.
The most likely explanations for the different performance of BRP-NAS are (1) the data may be measured differently and/or (2) BRP-NAS may need different hyperparameters for new data. Concerning the latency measurement methodology, the BRP-NAS paper described its measurement methodology in detail: they pruned NAS-Bench-201 graphs and optimized them before running them on the device, then discarded the first few measurements and averaged multiple runs. The current paper doesn't give full details about its measurement pipeline, and I strongly encourage the authors to add more details. In the further discussion, the authors indeed explained that they use less controlled (and arguably, more realistic) noise than in the BRP-NAS paper, which reduces the predictability.
The authors also added a convincing comparison to BRP-NAS on BRP-NAS' own benchmark LatBench.
An orthogonal concern that was brought up in the reviewer discussion was that Table 5 is overclaiming: OFA + HELP obviously does *not* take a total time of 26 seconds, as OFA takes very substantial time to construct the one-shot model. I strongly encourage the authors to fix this by reporting the total time spent, in order to present their contribution realistically.
Another concern that came up was that authors were given an additional page this year and encouraged to use this page to discuss the broader impact and societal impact; the authors instead deferred that discussion to the appendix, which could be seen as cheating the page limit. I strongly encourage them to include a discussion in the main paper for the final version and move some less important details to the appendix instead.
Overall, the work addresses an important and relevant problem; the use of meta-learning for NAS is much underexplored and thus very timely. I am therefore clearly in favour of acceptance. | train | [
"-Drq5ad2N6Y",
"-iPyJsdLhZm",
"69EYlebhXy1",
"SXgTt6FbqLH",
"YRxy9qoc6dJ",
"XFD4oI7iU2",
"eWQLYWkf6Mh",
"0q1ahFHlB9g",
"BQRFnrH4bIW",
"0IjqrMG-zET",
"AQjFeWtFtB_",
"K2guRN4iGd",
"6EBXard72Jo",
"DSe__29DtvC",
"1f-FV5E3ZFk",
"K7YfmDmuApD",
"GffTkqyHESA",
"p5HJQLnHEZY"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
" **Comment** The big-O analysis correction that you mentioned looks good. It should address the concerns I raised in my original review.\n\n- We are happy to address the reviewer’s concern. We will update the paper with this correction version.\n\n**Comment** The correlation table you provided in your response is ... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"-iPyJsdLhZm",
"K2guRN4iGd",
"0q1ahFHlB9g",
"nips_2021_oE5lMpPRm0",
"XFD4oI7iU2",
"DSe__29DtvC",
"nips_2021_oE5lMpPRm0",
"GffTkqyHESA",
"0IjqrMG-zET",
"nips_2021_oE5lMpPRm0",
"1f-FV5E3ZFk",
"p5HJQLnHEZY",
"0IjqrMG-zET",
"SXgTt6FbqLH",
"K7YfmDmuApD",
"0IjqrMG-zET",
"nips_2021_oE5lMpPR... |
nips_2021_YOc9i6-NrQk | Topological Relational Learning on Graphs | Graph neural networks (GNNs) have emerged as a powerful tool for graph classification and representation learning. However, GNNs tend to suffer from over-smoothing problems and are vulnerable to graph perturbations. To address these challenges, we propose a novel topological neural framework of topological relational inference (TRI) which allows for integrating higher-order graph information into GNNs and for systematically learning a local graph structure. The key idea is to rewire the original graph by using the persistent homology of the small neighborhoods of the nodes and then to incorporate the extracted topological summaries as the side information into the local algorithm. As a result, the new framework enables us to harness both the conventional information on the graph structure and information on higher order topological properties of the graph. We derive theoretical properties on stability of the new local topological representation of the graph and discuss its implications on the graph algebraic connectivity. The experimental results on node classification tasks demonstrate that the new TRI-GNN outperforms all 14 state-of-the-art baselines on 6 out of 7 graphs and exhibits higher robustness to perturbations, yielding up to 10\% better performance under noisy scenarios.
| accept | While the paper initially obtained quite mixed reviews and some reviewers flagged ethical concerns, a substantial amount of constructive discussion among the authors and the expert reviewers (both technical and ethical) eventually clarified many issues. Taking into account all changes and clarifications mentioned (and promised) by the authors, the revised manuscript will be a valuable contribution to the field. I do, however, encourage the authors to take all comments seriously and adjust the manuscript accordingly (especially regarding notation, presentation, choice of hyperparameters, social impact, and an honest discussion of limitations).
| train | [
"ll5cTeNtD9_",
"ofmNtrdKQzD",
"ARKHo1cnS5",
"i7ijILcPTTs",
"IWBhMicm06",
"8W8DzzMw9N0",
"uGMI8XgaRx",
"vexOEWxo2-2",
"eyvYl0kB6VK",
"TKstIyRKQpo",
"r4-rJ1tNZLb",
"_dHCTR7mzm",
"16anhVZt9CS",
"NDwVVE7pDeG",
"iJnH6HLdNt",
"JL8s3f8qQck",
"tIiXHBzCbQ",
"gGcR3VuXAp2",
"XseITfHQCJW",
... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_re... | [
" Thank you very much for your positive feedback and also for the very motivating discussion throughout the review period! \n\nYes, we will add such a plot to the final version (we cannot add any plots to our responses on openreview unless we create an external link and it will also take some time to prepare the pl... | [
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"ARKHo1cnS5",
"nips_2021_YOc9i6-NrQk",
"uGMI8XgaRx",
"8W8DzzMw9N0",
"nips_2021_YOc9i6-NrQk",
"zwd1k8H7qUm",
"vexOEWxo2-2",
"eyvYl0kB6VK",
"r4-rJ1tNZLb",
"r4-rJ1tNZLb",
"wZGBTd_1DTo",
"nWiypJN3WV",
"nWiypJN3WV",
"tIiXHBzCbQ",
"JL8s3f8qQck",
"nips_2021_YOc9i6-NrQk",
"nips_2021_YOc9i6-N... |
nips_2021_Pye1c7itBu | Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks | In recent years, several results in the supervised learning setting suggested that classical statistical learning-theoretic measures, such as VC dimension, do not adequately explain the performance of deep learning models, which prompted a slew of work in the infinite-width and iteration regimes. However, there is little theoretical explanation for the success of neural networks beyond the supervised setting. In this paper we argue that, under some distributional assumptions, classical learning-theoretic measures can sufficiently explain generalization for graph neural networks in the transductive setting. In particular, we provide a rigorous analysis of the performance of neural networks in the context of transductive inference, specifically by analysing the generalisation properties of graph convolutional networks for the problem of node classification. While VC-dimension does result in trivial generalisation error bounds in this setting as well, we show that transductive Rademacher complexity can explain the generalisation properties of graph convolutional networks for stochastic block models. We further use the generalisation error bounds based on transductive Rademacher complexity to demonstrate the role of graph convolutions and network architectures in achieving smaller generalisation error and provide insights into when the graph structure can help in learning. The findings of this paper could renew the interest in studying generalisation in neural networks in terms of learning-theoretic measures, albeit in specific problems.
| accept | This paper analyzed the generalization gap of GNNs based on statistical learning theory. First, this paper gives an upper bound of the generalization gap by using the VC dimension of the model, which is characterized by the rank of the adjacency matrix. However, it is shown that such a bound is vacuous. To overcome this issue, the authors give a tighter Transductive Rademacher Complexity (TRC) which is bounded by norms of $S$ and $SX$. They gave detailed discussions about the derived bound in some specific situations. They also discussed benefit of residual connections which reduce the generalization gap.
Overall, this paper is well written, and the problems that the paper is analyzing are indeed important. This paper gives a detailed analysis of the generalization ability of graph neural networks, which is valuable to the community.
One of the biggest weaknesses of this study is that evaluating TRC is not new, because it has already been addressed by Oono and Suzuki (2020). Unfortunately, this paper neither cites Oono and Suzuki (2020) nor discusses its relation to that work. On the other hand, there are several interesting insights that were not indicated by Oono and Suzuki (2020). In that sense, this paper has novelty.
In summary, this paper gives an instructive theoretical analysis for GNNs and it can be accepted by NeurIPS.
On the other hand, I definitely recommend that the authors cite Oono and Suzuki (2020) and discuss the relation to it.
"vfGeFa8Gb5b",
"25vEPc_Zcag",
"Qxt_BiKzLMh",
"aX7pAYZZsbh",
"w09Bwd3eHvB",
"eqEmYC3Mds9",
"LVOnYCDw-F2",
"eKX15Ie5A7",
"eWdlnVDy7Tx",
"-CD0_62KLhY",
"NeYTGmwJqTl",
"zBieWIt5_2u",
"2_4VROTmdrT"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper analyzed the generalization performance of GNNs based on statistical learning theory. First, this paper derived the VC dimension-based generalization error bounds. It showed that graph structures affected the upper bound of the generalization performance gap via the rank of the adjacency matrix (and its... | [
5,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_Pye1c7itBu",
"LVOnYCDw-F2",
"nips_2021_Pye1c7itBu",
"NeYTGmwJqTl",
"eKX15Ie5A7",
"eWdlnVDy7Tx",
"-CD0_62KLhY",
"2_4VROTmdrT",
"zBieWIt5_2u",
"vfGeFa8Gb5b",
"Qxt_BiKzLMh",
"nips_2021_Pye1c7itBu",
"nips_2021_Pye1c7itBu"
] |
nips_2021_Rt5mjXAqHrY | Federated Linear Contextual Bandits | Ruiquan Huang, Weiqiang Wu, Jing Yang, Cong Shen | accept | During discussion, all reviewers agreed that though the motivation for the precise setting is unclear, the paper makes good technical contributions and so I am recommending the paper for acceptance. However, I ask the authors to incorporate a discussion on the assumptions of each client serving one user and possible extension to potential new users entering the system. | train | [
"W5egKqrOUsX",
"tu8fgymq2Rc",
"DsHaGPAhi1_",
"A4aKu5G0mOw",
"a8wtjDf3ox6",
"2eGrLbgqZdG",
"7UYysA9PPBF",
"Pd_Sq7DKrMM",
"Uz9kS1DTEth",
"n9B0jYMjibP",
"FWdzWt3Oi9m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies a federated setup where each client faces a stochastic contextual bandit. The parameters of the bandits are coupled across the clients. The clients share their local estimates with the server, which aggregates them and shares them back with the clients. The clients are heterogeneous and the server leverage... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"nips_2021_Rt5mjXAqHrY",
"2eGrLbgqZdG",
"Uz9kS1DTEth",
"Pd_Sq7DKrMM",
"7UYysA9PPBF",
"n9B0jYMjibP",
"FWdzWt3Oi9m",
"W5egKqrOUsX",
"nips_2021_Rt5mjXAqHrY",
"nips_2021_Rt5mjXAqHrY",
"nips_2021_Rt5mjXAqHrY"
] |
nips_2021_rTxCRLXRtk9 | Least Square Calibration for Peer Reviews | Peer review systems such as conference paper review often suffer from the issue of miscalibration. Previous works on peer review calibration usually only use the ordinal information or assume simplistic reviewer scoring functions such as linear functions. In practice, applications like academic conferences often rely on manual methods, such as open discussions, to mitigate miscalibration. It remains an important question to develop algorithms that can handle different types of miscalibrations based on available prior knowledge. In this paper, we propose a flexible framework, namely \emph{least square calibration} (LSC), for selecting top candidates from peer ratings. Our framework provably performs perfect calibration from noiseless linear scoring functions under mild assumptions, yet also provides competitive calibration results when the scoring function is from broader classes beyond linear functions and with arbitrary noise. On our synthetic dataset, we empirically demonstrate that our algorithm consistently outperforms the baseline which selects top papers based on the highest average ratings.
| accept | This paper addresses miscalibration in peer review. Miscalibration can cause problems with the fairness of the peer review process, but so far not much has been achieved in terms of theoretical algorithms that also have practical appeal. Some previous papers study simple linear models and another study considers arbitrary or adversarial miscalibration. This paper fills this void by considering general classes of miscalibration models. It proposes an interesting "LSC" method, provides theoretical results to back it up, and simulations to investigate it further. The rebuttal, in response to some reviewers' comments, also provides experiments on real-world data, where the proposed algorithms work well. This is a good contribution to the field and I recommend acceptance.
Important: In their camera ready version, the authors should incorporate the various points discussed with the reviewers or with me during the review process. We hope that this review process has helped the authors improve the paper.
Summary of reviews:
- Reviewer o3ak provides a detailed review, regards the paper as novel and interesting
- Reviewer zLc7 makes some initial criticisms in their initial review. The rebuttal addresses the objective parts of these criticisms but the reviewer did not show up after the rebuttal (I had sent multiple pings).
- Reviewer PoXu provides a detailed review. They make several criticisms in their initial review, and the objective parts of these criticisms are addressed in a 3-way discussion between me, the authors, and the reviewer.
- Reviewer Emh1's review has only minor points of concern
| train | [
"KHsDwX-JePd",
"N5YO22_8Gxb",
"pVHTFDGKJ11",
"tj2lvvQZvM4",
"R7Qcauu6ajD",
"QZCobjeOHjA",
"XzphPw2vx4t",
"hTpGXsViEZK",
"WgUj1eQFF4y",
"VPDTFXmw5B_",
"JjSDCQWWtyz",
"OYkOGTh44Wo",
"nzqrp_igtsc",
"bCLF31OpGU",
"ztRcxWUJNAg"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply! We are glad that our response resolves most of your confusion. The reason that the performance of QP degrades more than other models under noisy setting is the following. QP uses maximum likelihood estimate to learn the parameters of reviewers’ linear scoring function with no regularization... | [
-1,
7,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"pVHTFDGKJ11",
"nips_2021_rTxCRLXRtk9",
"VPDTFXmw5B_",
"R7Qcauu6ajD",
"QZCobjeOHjA",
"hTpGXsViEZK",
"nips_2021_rTxCRLXRtk9",
"WgUj1eQFF4y",
"XzphPw2vx4t",
"N5YO22_8Gxb",
"nips_2021_rTxCRLXRtk9",
"bCLF31OpGU",
"ztRcxWUJNAg",
"nips_2021_rTxCRLXRtk9",
"nips_2021_rTxCRLXRtk9"
] |
nips_2021_tqQ-8MuSqm | Scaling Up Exact Neural Network Compression by ReLU Stability | We can compress a rectifier network while exactly preserving its underlying functionality with respect to a given input domain if some of its neurons are stable. However, current approaches to determine the stability of neurons with Rectified Linear Unit (ReLU) activations require solving or finding a good approximation to multiple discrete optimization problems. In this work, we introduce an algorithm based on solving a single optimization problem to identify all stable neurons. Our approach is on median 183 times faster than the state-of-art method on CIFAR-10, which allows us to explore exact compression on deeper (5 x 100) and wider (2 x 800) networks within minutes. For classifiers trained under an amount of L1 regularization that does not worsen accuracy, we can remove up to 56% of the connections on the CIFAR-10 dataset. The code is available at the following link, https://github.com/yuxwind/ExactCompression .
| accept | All the reviewers have agreed that the submission is original and interesting with clear/practical contributions to the domain. The authors’ detailed and informative responses also addressed most of the major concerns, and some of the reviewers accordingly increased their scores. Given the overwhelmingly positive opinions, I am recommending acceptance. | train | [
"meNWEYAR4a5",
"y5-bRJJ4-e-",
"FqReEGzToaN",
"wIJBu64xev",
"U_kg6e-wFom",
"BWZs6brQ-Y-",
"Ei5ZPAsM88",
"vtkTpfyXYM",
"a0CQ7aEvFHT",
"TsKLi1-NmGd",
"vo-iF3p-Wdd",
"M7dvVVfl3Pt",
"TI-P-sbX90b",
"NKXJq8d-l-F",
"WeOj5ATMFVs",
"VFxzoSjkQI",
"_Zn73i7UXKq"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method to solve the problem of exact compression of a neural network, where the network is decreased in size as much as possible, with the result being functionally equivalent to the original network over the input domain. The proposed method identifies stably active and inactive neurons and... | [
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_tqQ-8MuSqm",
"FqReEGzToaN",
"wIJBu64xev",
"NKXJq8d-l-F",
"nips_2021_tqQ-8MuSqm",
"vtkTpfyXYM",
"vtkTpfyXYM",
"M7dvVVfl3Pt",
"nips_2021_tqQ-8MuSqm",
"_Zn73i7UXKq",
"M7dvVVfl3Pt",
"U_kg6e-wFom",
"nips_2021_tqQ-8MuSqm",
"meNWEYAR4a5",
"VFxzoSjkQI",
"nips_2021_tqQ-8MuSqm",
"ni... |
nips_2021_AzmEMstdf3o | Passive attention in artificial neural networks predicts human visual selectivity | Developments in machine learning interpretability techniques over the past decade have provided new tools to observe the image regions that are most informative for classification and localization in artificial neural networks (ANNs). Are the same regions similarly informative to human observers? Using data from 79 new experiments and 7,810 participants, we show that passive attention techniques reveal a significant overlap with human visual selectivity estimates derived from 6 distinct behavioral tasks including visual discrimination, spatial localization, recognizability, free-viewing, cued-object search, and saliency search fixations. We find that input visualizations derived from relatively simple ANN architectures probed using guided backpropagation methods are the best predictors of a shared component in the joint variability of the human measures. We validate these correlational results with causal manipulations using recognition experiments. We show that images masked with ANN attention maps were easier for humans to classify than control masks in a speeded recognition experiment. Similarly, we find that recognition performance in the same ANN models was likewise influenced by masking input images using human visual selectivity maps. This work contributes a new approach to evaluating the biological and psychological validity of leading ANNs as models of human vision: by examining their similarities and differences in terms of their visual selectivity to the information contained in images.
| accept | This paper is a real tour-de-force, containing "data from 78 new experiments and 6,610 participants" and "6 distinct behavioral tasks including visual discrimination, spatial localization, recognizability, free-viewing, cued-object search and saliency search fixations." Though this paper is different than the majority of NeurIPS submissions, I think the breadth of the experimentation, and the relation to current trends in DL make it a great contribution and a breath of fresh air. I think it will be of interest to NeurIPS attendees, could spark some interesting discussion and inspire future work. | train | [
"JPgsxfWh9V3",
"z5eDWrtTi4O",
"eQ6VWa1QgHf",
"8iaOzvgsVff",
"-Rsog0FuSlS",
"Cpmhhtw4n5v",
"JSJanZtvWJf",
"-4OE0x0hrH",
"cOsUMOlrJ6i",
"hLsrdwEhA6"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
" We thank the reviewer for the many comments and constructive criticism. We agree that expanding the discussion to address all the issues that were raised here, as well as including all the additional checks in the final paper will greatly improve the work. ",
" We thank the reviewer for the response to our rebu... | [
-1,
-1,
-1,
6,
-1,
9,
7,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
-1,
5,
3,
-1,
-1,
-1
] | [
"JSJanZtvWJf",
"-Rsog0FuSlS",
"Cpmhhtw4n5v",
"nips_2021_AzmEMstdf3o",
"cOsUMOlrJ6i",
"nips_2021_AzmEMstdf3o",
"nips_2021_AzmEMstdf3o",
"JSJanZtvWJf",
"8iaOzvgsVff",
"Cpmhhtw4n5v"
] |
nips_2021_ephWA7KaWmD | GRIN: Generative Relation and Intention Network for Multi-agent Trajectory Prediction | Learning the distribution of future trajectories conditioned on the past is a crucial problem for understanding multi-agent systems. This is challenging because humans make decisions based on complex social relations and personal intents, resulting in highly complex uncertainties over trajectories. To address this problem, we propose a conditional deep generative model that combines advances in graph neural networks. The prior and recognition model encodes two types of latent codes for each agent: an inter-agent latent code to represent social relations and an intra-agent latent code to represent agent intentions. The decoder is carefully devised to leverage the codes in a disentangled way to predict multi-modal future trajectory distribution. Specifically, a graph attention network built upon inter-agent latent code is used to learn continuous pair-wise relations, and an agent's motion is controlled by its latent intents and its observations of all other agents. Through experiments on both synthetic and real-world datasets, we show that our model outperforms previous work in multiple performance metrics. We also show that our model generates realistic multi-modal trajectories.
| accept | This work explores two types of latent variables for multi-agent trajectory forecasting, an inter-agent latent code representing social relations and an intra-agent latent code representing agent intentions. Scores for this paper were initially split between rejection and accept. Additional clarifications and (ablation) experiments provided during the rebuttal caused multiple reviewers to increase their scores. While the experiments here are not extensive, they were perceived as being sufficient by the reviewers. After the rebuttal period all reviewers recommend accepting this paper.
The AC recommends acceptance. | train | [
"BkSViIBSXEN",
"tuFqMp3XpnA",
"1Yvo7ls_pKq",
"eRKCnD-hDY6",
"tQcg1hlZzIT",
"4ZOzX8lLkJV",
"VM_J1DwBRr",
"JvRSf8ntpjq",
"atB8O6XFNc9",
"KAJVIxbXxN5",
"Ce9UEOjqBz",
"8ukzsrg97Uv",
"ut0ws4etht5"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" The authors addressed most of my concerns with the rebuttal and I am happy to see that they fixed a bug and improved their results. I ask that these clarifications and new results be carefully added to the final manuscript.",
" We thank the reviewer for the suggestion. We present the results of the suggested ex... | [
-1,
-1,
6,
-1,
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
5,
-1,
-1,
4,
5,
-1,
-1,
-1,
-1,
-1,
3
] | [
"8ukzsrg97Uv",
"JvRSf8ntpjq",
"nips_2021_ephWA7KaWmD",
"atB8O6XFNc9",
"KAJVIxbXxN5",
"nips_2021_ephWA7KaWmD",
"nips_2021_ephWA7KaWmD",
"Ce9UEOjqBz",
"1Yvo7ls_pKq",
"4ZOzX8lLkJV",
"VM_J1DwBRr",
"ut0ws4etht5",
"nips_2021_ephWA7KaWmD"
] |
nips_2021_AnJUTpZiiWD | Instance-Dependent Partial Label Learning | Partial label learning (PLL) is a typical weakly supervised learning problem, where each training example is associated with a set of candidate labels among which only one is true. Most existing PLL approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels. However, this assumption is not realistic since the candidate labels are always instance-dependent. In this paper, we consider instance-dependent PLL and assume that each example is associated with a latent label distribution constituted by the real number of each label, representing the degree to each label describing the feature. The incorrect label with a high degree is more likely to be annotated as the candidate label. Therefore, the latent label distribution is the essential labeling information in partially labeled examples and worth being leveraged for predictive model training. Motivated by this consideration, we propose a novel PLL method that recovers the label distribution as a label enhancement (LE) process and trains the predictive model iteratively in every epoch. Specifically, we assume the true posterior density of the latent label distribution takes on the variational approximate Dirichlet density parameterized by an inference model. Then the evidence lower bound is deduced for optimizing the inference model and the label distributions generated from the variational posterior are utilized for training the predictive model. Experiments on benchmark and real-world datasets validate the effectiveness of the proposed method. Source code is available at https://github.com/palm-ml/valen.
| accept | This paper is the first to address partial label learning in the setting where the partial label depends on the feature vector. The authors introduce a principled approach to solve this problem and demonstrate some state-of-the-art empirical performance. This paper will be well-cited in the PLL literature and serve as an important benchmark for future research. The reviewers make a handful of comments that should be reflected in the final version. A pertinent missing reference is Katz-Samuels et al., Decontamination of Mutual Contamination Models, JMLR 2019. | test | [
"IEUupr3CIq",
"RiYcTzR8wb2",
"dMMe-B3lhbq",
"r7WMouYLD38",
"AzyFOqO6Vnq",
"JCvtdavw7Ws"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the problem of feature-dependent partial label learning, which is an interesting weakly supervised learning problem where the candidate labels of each instance are feature-dependent. Accordingly, the first attempt to feature-dependent partial label learning with latent label distribution is prop... | [
8,
-1,
-1,
-1,
7,
7
] | [
5,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_AnJUTpZiiWD",
"IEUupr3CIq",
"JCvtdavw7Ws",
"AzyFOqO6Vnq",
"nips_2021_AnJUTpZiiWD",
"nips_2021_AnJUTpZiiWD"
] |
nips_2021_RYcgfqmAOHh | Deep Learning with Label Differential Privacy | The Randomized Response (RR) algorithm is a classical technique to improve robustness in survey aggregation, and has been widely adopted in applications with differential privacy guarantees. We propose a novel algorithm, Randomized Response with Prior (RRWithPrior), which can provide more accurate results while maintaining the same level of privacy guaranteed by RR. We then apply RRWithPrior to learn neural networks with label differential privacy (LabelDP), and show that when only the label needs to be protected, the model performance can be significantly improved over the previous state-of-the-art private baselines. Moreover, we study different ways to obtain priors, which when used with RRWithPrior can additionally improve the model performance, further reducing the accuracy gap between private and non-private models. We complement the empirical results with theoretical analysis showing that LabelDP is provably easier than protecting both the inputs and labels.
| accept | The paper presents an algorithm for learning ML models under label DP. While label DP provides weaker protection than regular DP, the authors make a convincing case that it may be sufficient for certain applications. The paper demonstrates that cleverly exploiting label DP allows significant increase in model utility compared to learning under full DP.
The reviews are divided with two reviewers recommending acceptance and two recommending rejection. However, the main arguments for rejection seem to be based on dismissal of label DP as a valid topic of research without clear supporting evidence.
As a result, I tend to recommend acceptance as the paper seems to yield significant improvements for a problem with at least some practical relevance.
While the paper includes very extensive bibliography, it seems that the authors have missed at least the following previous papers and methods for learning under label DP: "Differentially Private Regression with Gaussian Processes" and "Differentially Private Regression and Classification with Sparse Gaussian Processes". | train | [
"j5L3uOhnWxm",
"VmZCfz8X4fG",
"ACwOgWc7qNU",
"fniR56TcZP",
"s1gxBB-zhf",
"j8IwVYKxy3M",
"vBEUh3khnha",
"YKvTFGtC71K"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Other privacy-protection.**\nPlease note that since we are in a classification setting, adding Laplace/Gaussian noise to labels is not feasible. The work of Wang & Xu considers label privacy only for linear regression. Neither the work of Phan et al. nor the work of Zhang et al. concerns label privacy, as far... | [
-1,
-1,
-1,
-1,
4,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"j8IwVYKxy3M",
"YKvTFGtC71K",
"vBEUh3khnha",
"s1gxBB-zhf",
"nips_2021_RYcgfqmAOHh",
"nips_2021_RYcgfqmAOHh",
"nips_2021_RYcgfqmAOHh",
"nips_2021_RYcgfqmAOHh"
] |
nips_2021_m4rb1Rlfdi | Semialgebraic Representation of Monotone Deep Equilibrium Models and Applications to Certification | Tong Chen, Jean B. Lasserre, Victor Magron, Edouard Pauwels | accept | The paper got 4 "Marginally above the acceptance threshold"s and 1 "Good paper, accept", all with relatively high confidences. The author rebuttals mostly addressed the concerns from the reviewers. Reviewers UG93, kdsN, SqFE, and ENDm were all in favor of accepting the paper in their post-rebuttal opinions. Thus the AC recommended acceptance. Nonetheless, some issues remain, e.g., scalability of the experiments, a bit overselling, etc. The AC also found that the "certification" problem, the core topic of the paper, was not clearly defined (lines 29-30 may allude to, but still in a vague manner). Hope these issues could be addressed in the revision. | train | [
"a2zIx83uBl",
"Dd5ni8OyMh",
"GTUyLvdBTXT",
"nAs6eleKAwx",
"g3CDER0rLAh",
"pEI3ZBpjZ0A",
"rvEYMoyRxOR",
"1lgq0TJ_G1l",
"PyzMvExxMrV",
"pRfGWcEv1Z3",
"eBQztGfW0We",
"jUVk0QKyxfS",
"eCnjGzr5o_E",
"SfrS6WJtAzV",
"753iRcfR8eh"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Authors,\n\nThank you for reply.\nHaving read the other reviews as well as your responses I don't see any major concerns and retain my initial score.\n\nBest,\nReviewer ENDm",
" The rebuttal has addressed my concerns. After reading other reviews and rebuttals, I'd like to see this paper accepted.",
" Dea... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
3,
3
] | [
"PyzMvExxMrV",
"GTUyLvdBTXT",
"SfrS6WJtAzV",
"eCnjGzr5o_E",
"eBQztGfW0We",
"SfrS6WJtAzV",
"eBQztGfW0We",
"753iRcfR8eh",
"eCnjGzr5o_E",
"jUVk0QKyxfS",
"nips_2021_m4rb1Rlfdi",
"nips_2021_m4rb1Rlfdi",
"nips_2021_m4rb1Rlfdi",
"nips_2021_m4rb1Rlfdi",
"nips_2021_m4rb1Rlfdi"
] |
nips_2021_3S0z0IjWkyl | The Role of Global Labels in Few-Shot Classification and How to Infer Them | Few-shot learning is a central problem in meta-learning, where learners must quickly adapt to new tasks given limited training data. Recently, feature pre-training has become a ubiquitous component in state-of-the-art meta-learning methods and is shown to provide significant performance improvement. However, there is limited theoretical understanding of the connection between pre-training and meta-learning. Further, pre-training requires global labels shared across tasks, which may be unavailable in practice. In this paper, we show why exploiting pre-training is theoretically advantageous for meta-learning, and in particular the critical role of global labels. This motivates us to propose Meta Label Learning (MeLa), a novel meta-learning framework that automatically infers global labels to obtain robust few-shot models. Empirically, we demonstrate that MeLa is competitive with existing methods and provide extensive ablation experiments to highlight its key properties.
| accept | The main issues amongst reviewers were 1. A lack of compelling scenarios without global labels, 2. Experiments are only on small benchmarks (in today’s environment) in which global labels are all well-defined. The main suggestion was to apply the technique to the meta-dataset benchmark, and the authors have provided some initial results that are indeed promising, however very preliminary. Other issues like performance with data augmentation, class imbalance (robustness in general) have been satisfactorily addressed in the discussion period.
The reviewers tend to agree that a more in-depth look at the case with no global labels is a topic for future research. Even still, there is quite a bit of work to be done to incorporate the new results/feedback from the discussion period, and the reviewers remain neutral-to-positive. I do think that the hypothesis and associated performance is compelling enough to be of interest to the community. As a last suggestion, I think it’s worth briefly citing and discussing the paper “Unsupervised Learning Via Meta-Learning” by Hsu et al., ICLR 2019. The approaches are somewhat different, but related enough to be worth mentioning. | train | [
"VyH6bq_g0TJ",
"kcxD9hc0KK",
"hGpWsqHhcmK",
"fyaIX8xa4M_",
"Umy1bgLq7q9",
"yypjlfYrkpK",
"7u4qwWDxRuH",
"Ds0PXGB6eSP",
"QNz1L35uWrd",
"G2_6nH2fuR_",
"E_BGH_Rz_HL",
"YnZr-kMrm-B",
"qes6_l66z_",
"2Re0hCjFi-D",
"Eb2W5uFdKP9",
"BBevCrW4rP",
"sXxV1WG2wq",
"58scBESm2RC",
"6bFz33wGzLw",... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"a... | [
" We thank the reviewer for the detailed feedback and for raising the score. It's much appreciated. We will incorporate your suggestions in our paper.",
" We thank the reviewer for the feedback and raising the score. It's much appreciated.",
" I increase the score a bit. But, remember that, the proposed model i... | [
-1,
-1,
-1,
5,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1
] | [
"yypjlfYrkpK",
"hGpWsqHhcmK",
"E_BGH_Rz_HL",
"nips_2021_3S0z0IjWkyl",
"nips_2021_3S0z0IjWkyl",
"G2_6nH2fuR_",
"nips_2021_3S0z0IjWkyl",
"QNz1L35uWrd",
"sXxV1WG2wq",
"qes6_l66z_",
"YnZr-kMrm-B",
"XVO6yXcrQbW",
"IkGM7zbPMWc",
"Umy1bgLq7q9",
"6bFz33wGzLw",
"sXxV1WG2wq",
"nips_2021_3S0z0I... |
nips_2021_D7bPRxNt_AP | NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction | We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR [Niemeyer et al., 2020] and IDR [Yariv et al., 2020], require foreground mask as supervision, easily get trapped in local minima, and therefore struggle with the reconstruction of objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF [Mildenhall et al., 2020] and its variants, use volume rendering to produce a neural scene representation with robustness of optimization, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult because there are not sufficient surface constraints in the representation. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e. bias) for surface reconstruction, and therefore propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
| accept | The paper presented a novel idea of combining volumetric rendering with 3D surface representation (SDF) and achieved SOTA results. All the reviewers liked the paper and gave very constructive comments. Please include the results on novel view synthesis, evaluation of SDF quality, details of training & inference time in the final version. Also, please carefully proof read the paper. | train | [
"BS5uFVAUyYp",
"6KiECcdvske",
"NMg9t-uG-xs",
"xom_uAgTimP",
"sXOJe_dpjhk",
"ehvXWDxMVmJ",
"DIRD4kRt2EP",
"Dt8IvwLpXGz",
"cCLL9t3nVSP"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a novel multi-view reconstruction model that couples an implicit SDF representation (as in IDR [31]) with an unbiased volumetric rendering function (as in NeRF [20]). The volumetric rendering function is deliberately \"de-biased\" by redefining the opacity values $\\alpha_i$ such that the weigh... | [
8,
-1,
-1,
-1,
-1,
-1,
7,
8,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"nips_2021_D7bPRxNt_AP",
"DIRD4kRt2EP",
"Dt8IvwLpXGz",
"BS5uFVAUyYp",
"cCLL9t3nVSP",
"nips_2021_D7bPRxNt_AP",
"nips_2021_D7bPRxNt_AP",
"nips_2021_D7bPRxNt_AP",
"nips_2021_D7bPRxNt_AP"
] |
nips_2021_z9Xs6T0y9Eg | Improved Guarantees for Offline Stochastic Matching via new Ordered Contention Resolution Schemes | Brian Brubach, Nathaniel Grammel, Will Ma, Aravind Srinivasan | accept | All of the reviewers liked this paper and felt that it should be accepted. | train | [
"gdjo2BBiCUl",
"3ugGdB5GdjQ",
"GMcr76anMWX",
"p9-J4eFvAww",
"q_hMYqAWeW7",
"E-vOJJBRMm",
"McZzEnE9El2",
"tdfKLSetoD6",
"CJim9spdoFu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"They give an improved 0.38 approximation algorithm for a basic stochastic matching problem. They also get improvements for a few other variants. One notable result is improving a recent 0.33 approximation in [Hikima et al] to 0.63. This paper obtains improved approximation algorithms for a number of stochastic ma... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
8,
7
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_z9Xs6T0y9Eg",
"McZzEnE9El2",
"nips_2021_z9Xs6T0y9Eg",
"gdjo2BBiCUl",
"CJim9spdoFu",
"tdfKLSetoD6",
"GMcr76anMWX",
"nips_2021_z9Xs6T0y9Eg",
"nips_2021_z9Xs6T0y9Eg"
] |
nips_2021_iEEAPq3TUEZ | UFC-BERT: Unifying Multi-Modal Controls for Conditional Image Synthesis | Conditional image synthesis aims to create an image according to some multi-modal guidance in the forms of textual descriptions, reference images, and image blocks to preserve, as well as their combinations. In this paper, instead of investigating these control signals separately, we propose a new two-stage architecture, UFC-BERT, to unify any number of multi-modal controls. In UFC-BERT, both the diverse control signals and the synthesized image are uniformly represented as a sequence of discrete tokens to be processed by Transformer. Different from existing two-stage autoregressive approaches such as DALL-E and VQGAN, UFC-BERT adopts non-autoregressive generation (NAR) at the second stage to enhance the holistic consistency of the synthesized image, to support preserving specified image blocks, and to improve the synthesis speed. Further, we design a progressive algorithm that iteratively improves the non-autoregressively generated image, with the help of two estimators developed for evaluating the compliance with the controls and evaluating the fidelity of the synthesized image, respectively. Extensive experiments on a newly collected large-scale clothing dataset M2C-Fashion and a facial dataset Multi-Modal CelebA-HQ verify that UFC-BERT can synthesize high-fidelity images that comply with flexible multi-modal controls.
| accept | The paper proposed a method to unify any number of multi-modal control signal inputs for conditional image synthesis. The key thing was a transformer model that could convert a variable number of multi-modal control inputs to the discrete latent space of a VQGAN. All the reviewers rated the paper above the bar, with one reviewer upgraded the score from 6 to 7 after the rebuttal. The meta-reviewer agreed with the assessment and concluded the paper is above the bar. Please include the reviewer feedback in the updated manuscript in the final version. | val | [
"MqC_vP8mXPF",
"_8swLX2p38l",
"6UnP7ATx9rJ",
"HnCUEKkxIae",
"HhF-Jk9PHT",
"GQk67JsvLb5",
"TYbYRqeLZSH",
"1p_2kspo851",
"xZlksXe7Lul",
"Rr3L55beSv",
"BnHgrOdREsQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you. If the precision and recall results are to be included in the paper, these hyperparameters should also be reported. If for other evaluations these parameters were adjusted (e.g. FID, CLIP score, ...), this should be stated as well as the sensitivity of the models with respect to these parameters. In ad... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
5,
4
] | [
"_8swLX2p38l",
"6UnP7ATx9rJ",
"HnCUEKkxIae",
"HhF-Jk9PHT",
"xZlksXe7Lul",
"nips_2021_iEEAPq3TUEZ",
"GQk67JsvLb5",
"Rr3L55beSv",
"BnHgrOdREsQ",
"nips_2021_iEEAPq3TUEZ",
"nips_2021_iEEAPq3TUEZ"
] |
nips_2021_9BvDIW6_qxZ | Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies | Reinforcement learning (RL) for continuous control typically employs distributions whose support covers the entire action space. In this work, we investigate the colloquially known phenomenon that trained agents often prefer actions at the boundaries of that space. We draw theoretical connections to the emergence of bang-bang behavior in optimal control, and provide extensive empirical evaluation across a variety of recent RL algorithms. We replace the normal Gaussian by a Bernoulli distribution that solely considers the extremes along each action dimension - a bang-bang controller. Surprisingly, this achieves state-of-the-art performance on several continuous control benchmarks - in contrast to robotic hardware, where energy and maintenance cost affect controller choices. Since exploration, learning, and the final solution are entangled in RL, we provide additional imitation learning experiments to reduce the impact of exploration on our analysis. Finally, we show that our observations generalize to environments that aim to model real-world challenges and evaluate factors to mitigate the emergence of bang-bang solutions. Our findings emphasise challenges for benchmarking continuous control algorithms, particularly in light of potential real-world applications.
| accept | After reading each other's reviews and the authors' feedback, the reviewers discussed the merits and flaws of the paper.
The authors' answers have been appreciated by the reviewers, who reached a consensus about accepting this paper.
I want to congratulate the authors and invite them to modify their paper following the reviewers' suggestions. | train | [
"RYjcnlDa1Qx",
"BH8bPDdQGs",
"2ODEjFjF8GI",
"3DzA44uH6Mf",
"Ja5YqdKb_yU",
"TbxXpMfSRmm",
"sPSw0oRAHSB",
"RK7jfFVYLHM",
"VaKNYJ790ns"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors addressed enough of my concerns to raise my score. The problem is important and the experiments are valuable. The paper will be stronger with the suggested clarifications.",
"The authors analyze the emergence of implicit bang-bang (action saturation) control behavior in Deep RL. They claim three con... | [
-1,
6,
-1,
-1,
-1,
-1,
8,
7,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"Ja5YqdKb_yU",
"nips_2021_9BvDIW6_qxZ",
"VaKNYJ790ns",
"RK7jfFVYLHM",
"BH8bPDdQGs",
"sPSw0oRAHSB",
"nips_2021_9BvDIW6_qxZ",
"nips_2021_9BvDIW6_qxZ",
"nips_2021_9BvDIW6_qxZ"
] |
nips_2021_PnpS7_SlNZi | Improving Generalization in Meta-RL with Imaginary Tasks from Latent Dynamics Mixture | The generalization ability of most meta-reinforcement learning (meta-RL) methods is largely limited to test tasks that are sampled from the same distribution used to sample training tasks. To overcome the limitation, we propose Latent Dynamics Mixture (LDM) that trains a reinforcement learning agent with imaginary tasks generated from mixtures of learned latent dynamics. By training a policy on mixture tasks along with original training tasks, LDM allows the agent to prepare for unseen test tasks during training and prevents the agent from overfitting the training tasks. LDM significantly outperforms standard meta-RL methods in test returns on the gridworld navigation and MuJoCo tasks where we strictly separate the training task distribution and the test task distribution.
| accept | This paper proposes to learn an ensemble of latent dynamics models and provide a mixture of them as input to a reward decoder, which allows the entire framework to generate imaginary tasks (i.e., imaginary reward functions). By training the agent on the generated tasks, the agent can generalize better to out-of-distribution tasks on grid-world and MuJoCo environments. The idea of generating imaginary reward functions using a mixture of latent dynamics model is novel, though the overall framework builds upon VariBad. Although there was an initial concern around the significance of the results (e.g., lack of baselines, challenging environments), the additional results during the rebuttal period addressed most of the questions raised by the reviewers. However, the question about how the proposed framework can extrapolate so well and how the mixture hyperparameters ($\beta$) affects interpolation/extrapolation performance remains the same, which may need to be discussed in the main paper rather than in the appendix. Given that the main result is still solid and consistent, I'd recommend acceptance for this paper. | train | [
"XUgq3LOrO-k",
"I_Nl3IRik0G",
"haNIEaCSXfD",
"bWF1sCGZY90",
"izkNOqaM9Dm",
"wZFz5zqV6SC",
"X2zBT9LKIh",
"EcOCDXYhSbn",
"40rrMycj2c7",
"7D6_IxpxZT",
"gRb5IlCZ8Y",
"jfeWO3fBuB",
"kU9nbwRKZ89",
"W1Mabdd2nIF"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear reviewer 1a3x,\n\nThank you for your effort to review our work.\n\nWe would like to kindly remind you to check our author response if you have not already. We would appreciate it if you could inform us whether our response successfully addressed your concerns. Even a short statement (why our response did or ... | [
-1,
7,
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
4,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"W1Mabdd2nIF",
"nips_2021_PnpS7_SlNZi",
"nips_2021_PnpS7_SlNZi",
"wZFz5zqV6SC",
"nips_2021_PnpS7_SlNZi",
"X2zBT9LKIh",
"izkNOqaM9Dm",
"W1Mabdd2nIF",
"7D6_IxpxZT",
"nips_2021_PnpS7_SlNZi",
"I_Nl3IRik0G",
"haNIEaCSXfD",
"W1Mabdd2nIF",
"nips_2021_PnpS7_SlNZi"
] |
nips_2021_lVBu4PqM9HU | Localization with Sampling-Argmax | Soft-argmax operation is commonly adopted in detection-based methods to localize the target position in a differentiable manner. However, training the neural network with soft-argmax makes the shape of the probability map unconstrained. Consequently, the model lacks pixel-wise supervision through the map during training, leading to performance degradation. In this work, we propose sampling-argmax, a differentiable training method that imposes implicit constraints to the shape of the probability map by minimizing the expectation of the localization error. To approximate the expectation, we introduce a continuous formulation of the output distribution and develop a differentiable sampling process. The expectation can be approximated by calculating the average error of all samples drawn from the output distribution. We show that sampling-argmax can seamlessly replace the conventional soft-argmax operation on various localization tasks. Comprehensive experiments demonstrate the effectiveness and flexibility of the proposed method. Code is available at https://github.com/Jeff-sjtu/sampling-argmax
| accept | This paper received mixed ratings, with one reviewer recommending acceptance and three rejection. The paper was thoroughly discussed by the reviewers but the authors' feedback did not convince the negative reviewers to recommend acceptance. In particular, the reviewers' main concerns include the motivation behind the proposed sampling strategy and its theoretical justification, the lack of comparison with some baselines (although the authors provided some of these comparisons in their feedback, the reviewers remained unconvinced), and the significance of the improvements obtained in some experiments. Based on this, we therefore believe that this paper is not ready for acceptance to NeurIPS but encourage the authors to revise it based on the feedback and resubmit to a future venue. | val | [
"aSObUfYS7S",
"eLfIyx7TWt0",
"EJRkfQ335P3",
"QId7-fRVmpn",
"nzUA7P_MfMv",
"DEaxrHqYfg_",
"M8691lKEldO",
"GLItwvN6LNu",
"QFo3NW2E0V",
"Mah700dDDJa"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a sampling-based soft argmax formulation that is differentiable. The method focuses on the fact that the conventional softmax algorithm focuses on having the expectation of a heatmap to be in a desired location, which may not be what one desires. Instead, the paper proposes to consider the expec... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_lVBu4PqM9HU",
"EJRkfQ335P3",
"nzUA7P_MfMv",
"Mah700dDDJa",
"QFo3NW2E0V",
"GLItwvN6LNu",
"aSObUfYS7S",
"nips_2021_lVBu4PqM9HU",
"nips_2021_lVBu4PqM9HU",
"nips_2021_lVBu4PqM9HU"
] |
nips_2021_QX32YlxrQJc | Improved Regularization and Robustness for Fine-tuning in Neural Networks | A widely used algorithm for transfer learning is fine-tuning, where a pre-trained model is fine-tuned on a target task with a small amount of labeled data. When the capacity of the pre-trained model is much larger than the size of the target data set, fine-tuning is prone to overfitting and "memorizing" the training labels. Hence, an important question is to regularize fine-tuning and ensure its robustness to noise. To address this question, we begin by analyzing the generalization properties of fine-tuning. We present a PAC-Bayes generalization bound that depends on the distance traveled in each layer during fine-tuning and the noise stability of the fine-tuned model. We empirically measure these quantities. Based on the analysis, we propose regularized self-labeling---the interpolation between regularization and self-labeling methods, including (i) layer-wise regularization to constrain the distance traveled in each layer; (ii) self label-correction and label-reweighting to correct mislabeled data points (that the model is confident) and reweight less confident data points. We validate our approach on an extensive collection of image and text data sets using multiple pre-trained model architectures. Our approach improves baseline methods by 1.76% (on average) for seven image classification tasks and 0.75% for a few-shot classification task. When the target data set includes noisy labels, our approach outperforms baseline methods by 3.56% on average in two noisy settings.
| accept | Most reviewers and myself expressed a positive opinion on the paper following the discussion phase (which greatly helped reach a decision), and its interest to the NeurIPS community and significance of the results. The reviewers however asked for a number of clarifications and I would like to stress that the authors *must* revise significantly their initial submission according to the reviews and the discussion. | train | [
"nkuYMai07Rs",
"hwIF9W8ldaK",
"umTJXPTi_fs",
"eZMG9fi0M6F",
"8Xvmx1weeno",
"9BZmymTOtdP",
"0VRVBlm0Ku6",
"ycZsBQayFE7",
"5m6OToDy51",
"IV_7ABYv6bM",
"cX1nsNImU2x",
"wkMTyaZwItv",
"CKR-N6e7P0Z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nThe authors present a new analysis of PAC-Bayes generalization in the finetuning with regularization setting and analyze it’s properties in terms of layer wise L2 distances and noise robustness. They propose a new modified algorithm for finetuning based on having different constraints on different layers as wel... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_QX32YlxrQJc",
"nips_2021_QX32YlxrQJc",
"9BZmymTOtdP",
"8Xvmx1weeno",
"IV_7ABYv6bM",
"0VRVBlm0Ku6",
"5m6OToDy51",
"nkuYMai07Rs",
"hwIF9W8ldaK",
"CKR-N6e7P0Z",
"wkMTyaZwItv",
"nips_2021_QX32YlxrQJc",
"nips_2021_QX32YlxrQJc"
] |
nips_2021_5Ya8PbvpZ9 | BARTScore: Evaluating Generated Text as Text Generation | A wide variety of NLP applications, such as machine translation, summarization, and dialog, involve text generation. One major challenge for these applications is how to evaluate whether such generated texts are actually fluent, accurate, or effective. In this work, we conceptualize the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models. The general idea is that models trained to convert the generated text to/from a reference output or the source text will achieve higher scores when the generated text is better. We operationalize this idea using BART, an encoder-decoder based pre-trained model, and propose a metric BARTScore with a number of variants that can be flexibly applied in an unsupervised fashion to evaluation of text from different perspectives (e.g. informativeness, fluency, or factuality). BARTScore is conceptually simple and empirically effective. It can outperform existing top-scoring metrics in 16 of 22 test settings, covering evaluation of 16 datasets (e.g., machine translation, text summarization) and 7 different perspectives (e.g., informativeness, factuality). Code to calculate BARTScore is available at https://github.com/neulab/BARTScore, and we have released an interactive leaderboard for meta-evaluation at http://explainaboard.nlpedia.ai/leaderboard/task-meval/ on the ExplainaBoard platform, which allows us to interactively understand the strengths, weaknesses, and complementarity of each metric.
| accept | This paper proposes a framework for evaluation of text generation (BARTScore) by modeling evaluation as a text generation problem, built on top of the BART text generation algorithm. Similar to BERTScore, which is based on BERT, the metric proposed in this paper requires no training. BARTScore is also shown to improve with using textual prompts and with fine-tuning on downstream domain tasks, showing good results across the board for evaluating various generation tasks including MT and summarization, approaching evaluation metrics such as COMET which are supervised on human assessments.
All reviewers feel positively about this paper. While the underlying idea is not particularly creative, this is a solid paper which proposes a simple and effective approach for evaluation of text generation with convincing results and which is likely to be impactful. | train | [
"1M6tu4spLz2",
"wtaGepTGP6y",
"pC_X4_ZFf-",
"KjCVPgwobj-",
"_1T1OwT46k0",
"1Hh5WZwfh_O",
"rTM_By2NP7",
"mrEFIa6MGTb",
"n_ziNDgVJtw",
"wrjRaZlWprQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper tries to propose a new generation-based BARTScore to evaluate text generation problems. The authors want to fill the gap between pretrainining objectives and the down stream feature extractors. The main idea is that BART can achieve better score when the generation results are better. The BART Score is... | [
8,
-1,
8,
6,
-1,
6,
-1,
-1,
-1,
-1
] | [
3,
-1,
5,
4,
-1,
4,
-1,
-1,
-1,
-1
] | [
"nips_2021_5Ya8PbvpZ9",
"n_ziNDgVJtw",
"nips_2021_5Ya8PbvpZ9",
"nips_2021_5Ya8PbvpZ9",
"rTM_By2NP7",
"nips_2021_5Ya8PbvpZ9",
"KjCVPgwobj-",
"pC_X4_ZFf-",
"1M6tu4spLz2",
"1Hh5WZwfh_O"
] |
nips_2021_Eec8D4UNceq | An analysis of Ermakov-Zolotukhin quadrature using kernels | We study a quadrature, proposed by Ermakov and Zolotukhin in the sixties, through the lens of kernel methods. The nodes of this quadrature rule follow the distribution of a determinantal point process, while the weights are defined through a linear system, similarly to the optimal kernel quadrature. In this work, we show how these two classes of quadrature are related, and we prove a tractable formula of the expected value of the squared worst-case integration error on the unit ball of an RKHS of the former quadrature. In particular, this formula involves the eigenvalues of the corresponding kernel and leads to improving on the existing theoretical guarantees of the optimal kernel quadrature with determinantal point processes.
| accept | This paper identifies connections between the literatures on kernel quadrature and determinantal point processes and uses these connections to perform theoretical analysis for a quadrature method due to Ermakov and Zolotukhin. The reviewers agreed on the correctness and the extent of the novelty of the paper (somewhat incremental), but strongly disagreed about its significance, with one reviewer arguing that the Ermakov-Zolotukhin method is not widely used and thus unimportant, and one reviewer championing the paper as being of theoretical interest. I believe the balance is somewhere in between these extremes - this paper makes a small theoretical contribution, but it is nevertheless one that will interest a subset of the participants at NeurIPS. | train | [
"coP1TcbPc1J",
"a8ZbmPmGMx",
"n-UR9bRY3zh",
"pvtj-iAelcN",
"Eq5sydWLGog",
"OgGYFXyRNr",
"ZMvTYwi1A0V",
"flc262wT33C",
"TD27rOYcI-J",
"QUq1UwhK94"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nWe thank the reviewer for the additional feedback.\n\nIt is true that many methods for choosing the nodes of KQ already exist. However, these methods are tailored for specific RKHSs and can hardly be generalized for other RKHSs. Now, DPPs offer a principled methodology to design kernel quadrature nodes, althoug... | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
1,
2,
4
] | [
"n-UR9bRY3zh",
"nips_2021_Eec8D4UNceq",
"a8ZbmPmGMx",
"flc262wT33C",
"QUq1UwhK94",
"TD27rOYcI-J",
"a8ZbmPmGMx",
"nips_2021_Eec8D4UNceq",
"nips_2021_Eec8D4UNceq",
"nips_2021_Eec8D4UNceq"
] |
nips_2021_43mzkbz_QKG | Towards Understanding Why Lookahead Generalizes Better Than SGD and Beyond | Pan Zhou, Hanshu Yan, Xiaotong Yuan, Jiashi Feng, Shuicheng Yan | accept | A well written paper with solid theoretical contributions; well received by the reviewers. The authors also did a very good job of giving long and detailed responses to any questions and concerns. | train | [
"MDFmuUmjjCG",
"hPCTQlzdrrZ",
"B1n5ysdlErM",
"FdTnxAkGg5",
"RWg_kP8kWEf",
"dMGQAoiN77z",
"4nkcUJxJiw",
"d6VTCQS1fq",
"hu-kM7iUO68",
"ZpxArXsftE8"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors analyze the Lookahead optimizer (Zhang et al.) algorithm, showing that it has nice bounds on the excess risk error compared to SGD on strongly convex problems and noncovex problems which satisfy the PL condition. They also propose a locally-regularized version of the Lookahead algorithm which has some ... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_43mzkbz_QKG",
"FdTnxAkGg5",
"nips_2021_43mzkbz_QKG",
"d6VTCQS1fq",
"dMGQAoiN77z",
"4nkcUJxJiw",
"ZpxArXsftE8",
"B1n5ysdlErM",
"MDFmuUmjjCG",
"nips_2021_43mzkbz_QKG"
] |
nips_2021_CRPNhlp4jM | Online Market Equilibrium with Application to Fair Division | Computing market equilibria is a problem of both theoretical and applied interest. Much research to date focuses on the case of static Fisher markets with full information on buyers' utility functions and item supplies. Motivated by real-world markets, we consider an online setting: individuals have linear, additive utility functions; items arrive sequentially and must be allocated and priced irrevocably. We define the notion of an online market equilibrium in such a market as time-indexed allocations and prices which guarantee buyer optimality and market clearance in hindsight. We propose a simple, scalable and interpretable allocation and pricing dynamics termed as PACE. When items are drawn i.i.d. from an unknown distribution (with a possibly continuous support), we show that PACE leads to an online market equilibrium asymptotically. In particular, PACE ensures that buyers' time-averaged utilities converge to the equilibrium utilities w.r.t. a static market with item supplies being the unknown distribution and that buyers' time-averaged expenditures converge to their per-period budget. Hence, many desirable properties of market equilibrium-based fair division such as envy-freeness, Pareto optimality, and the proportional-share guarantee are also attained asymptotically in the online setting. Next, we extend the dynamics to handle quasilinear buyer utilities, which gives the first online algorithm for computing first-price pacing equilibria. Finally, numerical experiments on real and synthetic datasets show that the dynamics converges quickly under various metrics.
| accept | I decided to accept the paper since it presents an interesting problem and provides a novel solution with a surprising connection to the dual averaging technique.
In the final version, please make sure to clearly and explicitly discuss the bounds and their dependence on the problem parameters.
| test | [
"qUBo5sbiPOM",
"JEQbvyB89k",
"qzljMYxuFfs",
"Y4KZp2EubCW",
"4PcKxx43y2G",
"WHKKqhBumB6",
"02jEknF54PS",
"Nz5rDtURkU_",
"etgMSa9m5hp",
"M4852ErwF1X",
"QHIr4fm0K1L"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"=Paper summary=\nThe authors study an online Fisher market. Unlike classic Fisher markets, here the items arrive one by one in an online fashion and must be immediately and irrevocably allocated to a set of static agents. The online market equilibrium notion introduced is a pair of sequences of (step-wise) allocat... | [
7,
6,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_CRPNhlp4jM",
"nips_2021_CRPNhlp4jM",
"WHKKqhBumB6",
"02jEknF54PS",
"nips_2021_CRPNhlp4jM",
"etgMSa9m5hp",
"etgMSa9m5hp",
"JEQbvyB89k",
"4PcKxx43y2G",
"qUBo5sbiPOM",
"nips_2021_CRPNhlp4jM"
] |
nips_2021_J8JXWSlgvyY | Dynamic Resolution Network | Deep convolutional neural networks (CNNs) are often of sophisticated design with numerous learnable parameters for the accuracy reason. To alleviate the expensive costs of deploying them on mobile devices, recent works have made huge efforts for excavating redundancy in pre-defined architectures. Nevertheless, the redundancy on the input resolution of modern CNNs has not been fully investigated, i.e., the resolution of input image is fixed. In this paper, we observe that the smallest resolution for accurately predicting the given image is different using the same neural network. To this end, we propose a novel dynamic-resolution network (DRNet) in which the input resolution is determined dynamically based on each input sample. Wherein, a resolution predictor with negligible computational costs is explored and optimized jointly with the desired network. Specifically, the predictor learns the smallest resolution that can retain and even exceed the original recognition accuracy for each image. During the inference, each input image will be resized to its predicted resolution for minimizing the overall computation burden. We then conduct extensive experiments on several benchmark networks and datasets. The results show that our DRNet can be embedded in any off-the-shelf network architecture to obtain a considerable reduction in computational complexity. For instance, DR-ResNet-50 achieves similar performance with an about 34% computation reduction, while gaining 1.4% accuracy increase with 10% computation reduction compared to the original ResNet-50 on ImageNet. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/DRNet.
| accept | The manuscript has been reviewed by four experienced reviewers, all of whom, after reading the rebuttal provided by the authors, agree that the manuscript meets the bar of NeurIPS and thus should be presented to a large audience. The AC also agrees that the proposed approach is novel, supported by sufficient empirical evaluations, and hence recommends acceptance. | train | [
"2wSPnj1AOA3",
"YEEOWLo6UqB",
"NCCMfUVHNBA",
"L-aai8ibEah",
"Gawh9yk3xHc",
"jpWASDvmg0P",
"anbNjRapAPB",
"kY7KRFarv2",
"9T6MMlHzUCz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"A dynamic resolution network is proposed in this submission. There is a computationally efficient predictor which selects the suitable resolution for the large classification CNN model. The predictor generates hard decision of resolutions. The usage of Gumbel softmax trick enables the gradient to propagate to the ... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
7,
6
] | [
5,
-1,
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_J8JXWSlgvyY",
"Gawh9yk3xHc",
"nips_2021_J8JXWSlgvyY",
"9T6MMlHzUCz",
"NCCMfUVHNBA",
"2wSPnj1AOA3",
"kY7KRFarv2",
"nips_2021_J8JXWSlgvyY",
"nips_2021_J8JXWSlgvyY"
] |
nips_2021_fyL9HD-kImm | Gauge Equivariant Transformer | Attention mechanism has shown great performance and efficiency in a lot of deep learning models, in which relative position encoding plays a crucial role. However, when introducing attention to manifolds, there is no canonical local coordinate system to parameterize neighborhoods. To address this issue, we propose an equivariant transformer to make our model agnostic to the orientation of local coordinate systems (\textit{i.e.}, gauge equivariant), which employs multi-head self-attention to jointly incorporate both position-based and content-based information. To enhance expressive ability, we adopt regular field of cyclic groups as feature fields in intermediate layers, and propose a novel method to parallel transport the feature vectors in these fields. In addition, we project the position vector of each point onto its local coordinate system to disentangle the orientation of the coordinate system in ambient space (\textit{i.e.}, global coordinate system), achieving rotation invariance. To the best of our knowledge, we are the first to introduce gauge equivariance to self-attention, thus name our model Gauge Equivariant Transformer (GET), which can be efficiently implemented on triangle meshes. Extensive experiments show that GET achieves state-of-the-art performance on two common recognition tasks.
| accept | This paper develops the tools necessary to construct Gauge-equivariant transformers -- and to apply self-attention to manifolds or meshes with a focus on 2D smooth and orientable manifolds.
The presented methods can be applied to a wide variety of problems.
This paper is clearly written and technically sound. The experiments clearly show the strength of the proposed method (e.g. in comparison to MeshCNNs for shape classification).
Experimental results (Tables 1, 2) also support the idea that directly incorporating equivariances and invariances into the model's architecture is more efficient than using data-augmentation. | val | [
"0GnitVTbJkH",
"O085Y7qazTc",
"3R4jPxIhtH2",
"u8NJErtDA3m",
"eQ7IGECSJCQ",
"I47k08zvrmq",
"01tXPkZS-tw",
"m7LY7p6RDDi",
"q1azqrLyzmO",
"dbY8aTNgRuQ",
"GR5a392pPJ",
"S7OLz3xOzJh",
"K-FSw4M9OTC",
"-3cw9ht9ism",
"BwNlTIcvLQE",
"zQy89KnHNFV"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your support of our work and the improvement of the score. ",
"The paper presents a gauge equivariant transformer for geometric learning. The gauge equivariance aims to solve the orientation ambiguity when doing the convolution or local aggregation operations within a neighborhood. Specifically, this... | [
-1,
6,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"q1azqrLyzmO",
"nips_2021_fyL9HD-kImm",
"01tXPkZS-tw",
"m7LY7p6RDDi",
"dbY8aTNgRuQ",
"nips_2021_fyL9HD-kImm",
"m7LY7p6RDDi",
"q1azqrLyzmO",
"O085Y7qazTc",
"zQy89KnHNFV",
"BwNlTIcvLQE",
"-3cw9ht9ism",
"I47k08zvrmq",
"nips_2021_fyL9HD-kImm",
"nips_2021_fyL9HD-kImm",
"nips_2021_fyL9HD-kIm... |
nips_2021_1oVNAVhJ4GP | Unsupervised Object-Based Transition Models For 3D Partially Observable Environments | We present a slot-wise, object-based transition model that decomposes a scene into objects, aligns them (with respect to a slot-wise object memory) to maintain a consistent order across time, and predicts how those objects evolve over successive frames. The model is trained end-to-end without supervision using transition losses at the level of the object-structured representation rather than pixels. Thanks to the introduction of our novel alignment module, the model deals properly with two issues that are not handled satisfactorily by other transition models, namely object persistence and object identity. We show that the combination of an object-level loss and correct object alignment over time enables the model to outperform a state-of-the-art baseline, and allows it to deal well with object occlusion and re-appearance in partially observable environments.
| accept | Reviewers agreed that this is a solid paper that deserves acceptance. Authors are highly encouraged to address the key comments reported by reviewers as well as to implement all the improvements (as indicated by authors in the rebuttal) in the final camera-ready version. | train | [
"L_32YZJUK4k",
"uvCHQeKepPv",
"2Rs1L_5aXXk",
"QFgo97r-AIY",
"3ycNUcqUYA",
"gtaaVCOREX4",
"6tpYuwrpnNk",
"UFjSu_Jl9bw",
"KKH9Dqv4fn8",
"CA9N4H8LTC",
"ukkIayZkod",
"zknxU362Abt",
"jpS5GQLO9QG",
"NB0VWjHtJ07",
"oU_Q3zsNg94",
"AXjQXfEDk2r",
"36o47XdTAy7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper focuses on decomposing the scene into objects and predicting how these objects evolve in successive frames. This method is based on MONet and improves object alignment by proposing AlignNet. Compared with OP3, the quantitative results show better performance. I thank the authors for their efforts to re... | [
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
8,
7
] | [
4,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"nips_2021_1oVNAVhJ4GP",
"AXjQXfEDk2r",
"QFgo97r-AIY",
"3ycNUcqUYA",
"6tpYuwrpnNk",
"nips_2021_1oVNAVhJ4GP",
"UFjSu_Jl9bw",
"zknxU362Abt",
"AXjQXfEDk2r",
"oU_Q3zsNg94",
"L_32YZJUK4k",
"gtaaVCOREX4",
"NB0VWjHtJ07",
"nips_2021_1oVNAVhJ4GP",
"nips_2021_1oVNAVhJ4GP",
"nips_2021_1oVNAVhJ4GP... |
nips_2021_xLExSzfIDmo | Robust Contrastive Learning Using Negative Samples with Diminished Semantics | Unsupervised learning has recently made exceptional progress because of the development of more effective contrastive learning methods. However, CNNs are prone to depend on low-level features that humans deem non-semantic. This dependency has been conjectured to induce a lack of robustness to image perturbations or domain shift. In this paper, we show that by generating carefully designed negative samples, contrastive learning can learn more robust representations with less dependence on such features. Contrastive learning utilizes positive pairs which preserve semantic information while perturbing superficial features in the training images. Similarly, we propose to generate negative samples in a reversed way, where only the superfluous instead of the semantic features are preserved. We develop two methods, texture-based and patch-based augmentations, to generate negative samples. These samples achieve better generalization, especially under out-of-domain settings. We also analyze our method and the generated texture-based samples, showing that texture features are indispensable in classifying particular ImageNet classes and especially finer classes. We also show that the model bias between texture and shape features favors them differently under different test settings.
| accept | This paper copes with generating negative samples for contrastive learning, separating the signal from texture and shapes. The idea has been judged sufficiently novel and sound. The experiments are convincing, and the additional results presented in the rebuttal improved the empirical validation further. I recommend this paper for acceptance, and recommend the authors take into account all reviewer’s comments for preparing the final version of the manuscript. | train | [
"7ei2zzeDgl5",
"QwglmrutfHn",
"FZjg47k_G7A",
"Kz0F9SDz3_Z",
"CUHX30CwzdE",
"UEGLzUi22K",
"WDSQKI99-y",
"yg8dvFMgj_x",
"gEK-5jtLJoI",
"teS0nWF2zGL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes to augment current contrastive self-supervised learning (SSL) methods such as BYOL and MoCo with additional negative samples that are augmented so that they remove semantics from the original image, thus forcing the learning of more robust representations that focus less on superficial image fea... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_xLExSzfIDmo",
"nips_2021_xLExSzfIDmo",
"CUHX30CwzdE",
"nips_2021_xLExSzfIDmo",
"QwglmrutfHn",
"7ei2zzeDgl5",
"teS0nWF2zGL",
"gEK-5jtLJoI",
"nips_2021_xLExSzfIDmo",
"nips_2021_xLExSzfIDmo"
] |
nips_2021_P9ld0c4dwUF | General Low-rank Matrix Optimization: Geometric Analysis and Sharper Bounds | Haixiang Zhang, Yingjie Bi, Javad Lavaei | accept | The paper provides improved analysis for both symmetric and asymmetric low-rank optimization problems via the Burer-Monteiro factorization approach. Specifically, based on the assumption that the problem satisfies the restricted isometry property (RIP), this paper provides much tighter RIP constants for ensuring the following three cases: (1) non-existence of spurious second-order critical points, (2) strict saddle property, and (3) the existence of a counterexample that has spurious second-order critical points. On the other hand, reviewers pointed out that the current presentation of the main results (like Theorem 4) appears too complicated and could be simplified and improved. Also, the analysis cannot be applied to other low-rank optimization problems (e.g., phase retrieval) that have no RIP.
Overall, all reviewers appreciate the technical contribution of this paper and agree that the merits outweigh the pitfalls, so I recommend accept. Nevertheless, please modify the paper accordingly to improve the presentation and highlight the limitation of RIP-based analysis in the introduction while mentioning some practical problems (e.g., quantum state tomography) that obey RIP. | test | [
"nENB1ftkV0P",
"l0zzV3RIa0o",
"BOF6XviINWN",
"l7a30AkcSGI",
"ZVDg2DYGJMo",
"YAlqk6B1Ul",
"wkUN2o7Omvt",
"UpS1cD-5VHh",
"apmZ5e_B4yx",
"hScyxu1Aoun",
"b5r7OJi6IM6",
"zZ-a4RW0agc"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies low-rank matrix optimization, in particular solving $\\underset{M\\in\\mathbb{R}^{m\\times n}}{\\min} f(M)$ under a rank constraint $\\mathrm{rank}(M) \\leq r$, where $f$ is a convex function. This captures a lot of problems like matrix completion, phase retrieval, etc. \n\nThe authors assume th... | [
7,
-1,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
-1,
5,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_P9ld0c4dwUF",
"UpS1cD-5VHh",
"nips_2021_P9ld0c4dwUF",
"b5r7OJi6IM6",
"nips_2021_P9ld0c4dwUF",
"wkUN2o7Omvt",
"hScyxu1Aoun",
"zZ-a4RW0agc",
"nENB1ftkV0P",
"ZVDg2DYGJMo",
"BOF6XviINWN",
"nips_2021_P9ld0c4dwUF"
] |
nips_2021_Arn2E4IRjEB | Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation | This paper is about the problem of learning a stochastic policy for generating an object (like a molecular graph) from a sequence of actions, such that the probability of generating an object is proportional to a given positive reward for that object. Whereas standard return maximization tends to converge to a single return-maximizing sequence, there are cases where we would like to sample a diverse set of high-return solutions. These arise, for example, in black-box function optimization when few rounds are possible, each with large batches of queries, where the batches should be diverse, e.g., in the design of new molecules. One can also see this as a problem of approximately converting an energy function to a generative distribution. While MCMC methods can achieve that, they are expensive and generally only perform local exploration. Instead, training a generative policy amortizes the cost of search during training and yields to fast generation. Using insights from Temporal Difference learning, we propose GFlowNet, based on a view of the generative process as a flow network, making it possible to handle the tricky case where different trajectories can yield the same final state, e.g., there are many ways to sequentially add atoms to generate some molecular graph. We cast the set of trajectories as a flow and convert the flow consistency equations into a learning objective, akin to the casting of the Bellman equations into Temporal Difference methods. We prove that any global minimum of the proposed objectives yields a policy which samples from the desired distribution, and demonstrate the improved performance and diversity of GFlowNet on a simple domain where there are many modes to the reward function, and on a molecule synthesis task.
| accept | After an extensive back-and-forth discussion, all reviewers felt positively about the paper and its contribution, and lean towards acceptance.
However, there were issues regarding clarity in the submitted draft — this has been largely addressed through clarifications from the authors, but it is essential that the discussion here is incorporated into the final version of the manuscript (and in particular, the responses to reviewer 1AuD). | train | [
"hkUmH4sA2s5",
"7Ics0M3uQU",
"PoecrPjgff",
"8Nru-7n-La4",
"jidMF9faS9l",
"tmdwFFn6sWJ",
"wZvAWPatGj",
"v1b96JaaNU0",
"AqbjKa9qIDf",
"7NONjpghZQ",
"CirHj3PFbi2",
"noh7ArVqa4r",
"OU08eKZrXDF"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers generative processes for which state transitions form a DAG. The authors view such a generative process as a flow in a graph—with a single source and sinks connected to final objects. The authors set objects' probabilities being proportional to the reward function and train a generative model b... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3
] | [
"nips_2021_Arn2E4IRjEB",
"OU08eKZrXDF",
"nips_2021_Arn2E4IRjEB",
"AqbjKa9qIDf",
"tmdwFFn6sWJ",
"wZvAWPatGj",
"7NONjpghZQ",
"hkUmH4sA2s5",
"noh7ArVqa4r",
"OU08eKZrXDF",
"PoecrPjgff",
"nips_2021_Arn2E4IRjEB",
"nips_2021_Arn2E4IRjEB"
] |
nips_2021_MjNFN44NbZm | Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning | Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai | accept | The reviewers all agreed that this paper provides important theoretical contributions to an important area: sample efficient online learning (fine-tuning), using first offline training. The primary concern is about the practicality of the algorithm and the lack of empirical validation. As a paper focused on theory, however, this is not a concern; such work can naturally be done as a follow-up. As suggested by a reviewer, I highly encourage the authors to use the additional space in the final version to discuss how such an algorithm could be implemented, to make it easier to follow-up on this work. | train | [
"wgyp2xWJNLy",
"PxyCIOpNVG6",
"2lGekKoa-uF",
"zJAhKKVwb4",
"aVn8SHwcUPL",
"ULA50xriaX",
"To6CsLXGXz5",
"2i9IclPNmvF"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a theoretical analysis of off-policy and on-policy RL algorithms, and proposes a new hybrid algorithm that learns a policy in the following way:\n\n- The beginning of episodes, time-steps 1 to h*, is used to learn offline, from samples collected by a behavior policy.\n- The end of episodes, time... | [
7,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"nips_2021_MjNFN44NbZm",
"ULA50xriaX",
"wgyp2xWJNLy",
"2i9IclPNmvF",
"To6CsLXGXz5",
"nips_2021_MjNFN44NbZm",
"nips_2021_MjNFN44NbZm",
"nips_2021_MjNFN44NbZm"
] |
nips_2021_MYs3AVBLeY8 | Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation | Weakly supervised semantic segmentation produces pixel-level localization from class labels; however, a classifier trained on such labels is likely to focus on a small discriminative region of the target object. We interpret this phenomenon using the information bottleneck principle: the final layer of a deep neural network, activated by the sigmoid or softmax activation functions, causes an information bottleneck, and as a result, only a subset of the task-relevant information is passed on to the output. We first support this argument through a simulated toy experiment and then propose a method to reduce the information bottleneck by removing the last activation function. In addition, we introduce a new pooling method that further encourages the transmission of information from non-discriminative regions to the classification. Our experimental evaluations demonstrate that this simple modification significantly improves the quality of localization maps on both the PASCAL VOC 2012 and MS COCO 2014 datasets, exhibiting a new state-of-the-art performance for weakly supervised semantic segmentation.
| accept | The submission proposes an improved method for computing class activation maps, which are then used in weakly (image-level label) supervised segmentation. The main problem with class activation maps is that they often only highlight a small portion of an object. Addressing this issue is the main focus of the paper, which does so with a different classification loss and a different global pooling layer. The reviewers were unanimous in their opinion that the paper is above the bar for acceptance at NeurIPS, and appreciated the extensive experiments and utility of the setting. | train | [
"hpqL_xFQJj",
"8M_8w5nEKqv",
"cM7_k10NWsV",
"57vzVSOf-tX",
"PGg_qyi8LyM",
"5CV2NQUwVkF",
"RZt765ERMi",
"G7EvsVYb7Eb",
"ywXte6qQcb_",
"fXQFkOlOVok",
"1C13IdbeYX",
"1Gts3rQ__KT"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate authors' responses to my comments. Some of my concerns have been addressed and I keep the initial rating.",
"This work addresses a limitation of CAMs (Class Activation Mappings) in the context of weakly supervised semantic segmentation (W3S) taking the information bottleneck perspective. The reaso... | [
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"ywXte6qQcb_",
"nips_2021_MYs3AVBLeY8",
"5CV2NQUwVkF",
"nips_2021_MYs3AVBLeY8",
"RZt765ERMi",
"57vzVSOf-tX",
"1Gts3rQ__KT",
"8M_8w5nEKqv",
"1C13IdbeYX",
"nips_2021_MYs3AVBLeY8",
"nips_2021_MYs3AVBLeY8",
"nips_2021_MYs3AVBLeY8"
] |
nips_2021_jqjsLUrB8F | SGD: The Role of Implicit Regularization, Batch-size and Multiple-epochs | Ayush Sekhari, Karthik Sridharan, Satyen Kale | accept | This paper presents interesting results refining the relationship between SGD and GD, and also implicit regularization. The reviewers are all in support, and I request the authors carefully consider the reviewers' suggestions in their revisions. | train | [
"jnprVVXFMor",
"DbaL4mf0WuX",
"mawzO4dYQNH",
"OxGtc8MkVPI",
"3QY3sHtH-R5",
"cF0PLOu2sy",
"A9B64IbcTov",
"8Q8a9134hC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies the generalization error bound of multi-epoch, small-batch, SGD for learning over-parameterized models. In particular, the authors compare the generalization performance of SGD to GD and regularized empirical risk minimization (RERM). For RERM, the authors show that there is an SCO problem for w... | [
6,
7,
7,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_jqjsLUrB8F",
"nips_2021_jqjsLUrB8F",
"nips_2021_jqjsLUrB8F",
"mawzO4dYQNH",
"8Q8a9134hC",
"DbaL4mf0WuX",
"jnprVVXFMor",
"nips_2021_jqjsLUrB8F"
] |
nips_2021_MwFdqFRxIF0 | AC-GC: Lossy Activation Compression with Guaranteed Convergence | Parallel hardware devices (e.g., graphics processor units) have limited high-bandwidth memory capacity. This negatively impacts the training of deep neural networks (DNNs) by increasing runtime and/or decreasing accuracy when reducing model and/or batch size to fit this capacity. Lossy compression is a promising approach to tackling memory capacity constraints, but prior approaches rely on hyperparameter search to achieve a suitable trade-off between convergence and compression, negating runtime benefits. In this paper we build upon recent developments on Stochastic Gradient Descent convergence to prove an upper bound on the expected loss increase when training with compressed activation storage. We then express activation compression error in terms of this bound, allowing the compression rate to adapt to training conditions automatically. The advantage of our approach, called AC-GC, over existing lossy compression frameworks is that, given a preset allowable increase in loss, significant compression without significant increase in error can be achieved with a single training run. When combined with error-bounded methods, AC-GC achieves 15.1x compression with an average accuracy change of 0.1% on text and image datasets. AC-GC functions on any model composed of the layers analyzed and, by avoiding compression rate search, reduces overall training time by 4.6x over SuccessiveHalving.
| accept | While there was a discrepancy between the overall scores initially, the authors' rebuttal letter clarified several concerns of the reviewers, which resulted in all the scores being positive. Hence, I recommend an acceptance. Please implement all the changes suggested by the reviewers in the camera-ready version. | test | [
"wOBIQHwtu1s",
"1tQ2H4kh-9",
"r3mxpGyWavQ",
"kuCF5uNpJmZ",
"RiDBehZm45j",
"CgWzw8FMfG",
"BNwKnnpkrws",
"f3LUi4FmF3G",
"wHviSZ_ALP",
"1Y8lpcYcURP",
"5dY_Y7MUDbP",
"pmR70YWlbG6"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
" Thank you! We are happy to have addressed your comments and will include the suggested changes in the next version of the paper.",
" We are glad we resolved your comments. We will emphasize the assumptions about loss conditioning and convexity in the introduction.",
" Thank you and we are pleased that our res... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
-1,
6,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
5,
3,
-1,
4,
-1,
-1,
-1
] | [
"kuCF5uNpJmZ",
"RiDBehZm45j",
"f3LUi4FmF3G",
"5dY_Y7MUDbP",
"pmR70YWlbG6",
"nips_2021_MwFdqFRxIF0",
"nips_2021_MwFdqFRxIF0",
"1Y8lpcYcURP",
"nips_2021_MwFdqFRxIF0",
"wHviSZ_ALP",
"BNwKnnpkrws",
"CgWzw8FMfG"
] |
nips_2021_x2TMPhseWAW | Label Noise SGD Provably Prefers Flat Global Minimizers | Alex Damian, Tengyu Ma, Jason D. Lee | accept | This paper studies the convergence of SGD in the presence of label noise. The authors show that SGD with label noise converges to a stationary point of a certain regularized loss function, where the regularization depends on the amount of noise, batch size, and the stepsize. The assumptions on size of regularization and starting points are somewhat restrictive, but the paper increases the understanding of generalization for this set of problems. | train | [
"Sk0FQUeIIi",
"gZUyqQX32uI",
"WKq6zd8AmH4",
"O0ZrePYkwRr",
"Q0GQmLXJ42Y",
"QiUYEvTrxkB",
"WF8jEEmgeqh",
"H3Oak3YaFu6",
"i5ocMVgwKHX",
"-cwU52aKs8h",
"QJq4W40VOgR",
"nAnd7AYnT_P",
"QwDqRvgMMmw",
"SDP-72yWZtz",
"8Rk2lifnoz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their response. After reading the other reviews and responses I still believe that this paper is above the bar for NeurIPS and vote for acceptance. \n\n---\n\nMinor comment: I am fine with adding a remark about $\\delta$. ",
"## Post-rebuttal \nI thank the authors for th... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"nAnd7AYnT_P",
"nips_2021_x2TMPhseWAW",
"Q0GQmLXJ42Y",
"nips_2021_x2TMPhseWAW",
"QiUYEvTrxkB",
"QwDqRvgMMmw",
"H3Oak3YaFu6",
"i5ocMVgwKHX",
"QJq4W40VOgR",
"SDP-72yWZtz",
"gZUyqQX32uI",
"8Rk2lifnoz",
"O0ZrePYkwRr",
"nips_2021_x2TMPhseWAW",
"nips_2021_x2TMPhseWAW"
] |
nips_2021_k9iBo3RmCFd | Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks | (Non-)robustness of neural networks to small, adversarial pixel-wise perturbations, and as more recently shown, to even random spatial transformations (e.g., translations, rotations) entreats both theoretical and empirical understanding. Spatial robustness to random translations and rotations is commonly attained via equivariant models (e.g., StdCNNs, GCNNs) and training augmentation, whereas adversarial robustness is typically achieved by adversarial training. In this paper, we prove a quantitative trade-off between spatial and adversarial robustness in a simple statistical setting. We complement this empirically by showing that: (a) as the spatial robustness of equivariant models improves by training augmentation with progressively larger transformations, their adversarial robustness worsens progressively, and (b) as the state-of-the-art robust models are adversarially trained with progressively larger pixel-wise perturbations, their spatial robustness drops progressively. Towards achieving Pareto-optimality in this trade-off, we propose a method based on curriculum learning that trains gradually on more difficult perturbations (both spatial and adversarial) to improve spatial and adversarial robustness simultaneously.
| accept | This work studies tradeoffs between robustness to translations (spatial robustness) and robustness to worst-case lp-perturbations (adversarial robustness). First, the authors mathematically construct distributions for which tradeoffs between these two forms of robustness provably exist. Then the authors present a series of empirical studies showing that the tradeoff occurs in practice, and methods towards achieving Pareto optimality. Reviewers were overall supportive of acceptance, noting that the empirical experiments presented were particularly interesting and well executed. There was a long discussion during the rebuttal period regarding Theorem 2. For the distribution in question, a tradeoff between spatial and adversarial robustness follows as a direct consequence of a tradeoff between adversarial robustness and standard accuracy (which has already been shown theoretically for some distributions in prior work). Reviewers agreed that the original text did not make it clear that this weaker bound followed from the standard accuracy-robustness tradeoff, or how Theorem 2 strengthened this weaker bound. However, even with this issue, reviewers agreed that the experimental results were arguably interesting enough to warrant publication. After follow-up discussions between the AC and the authors, the authors have agreed to rework this section to make the weaker bound explicit (with better discussion of how it follows from the standard accuracy / adversarial robustness tradeoff) while highlighting how their construction strengthens this result. With these changes in mind, the paper seems ready for publication. | train | [
"xg0FmY0g48",
"cMOP0BgF-dq",
"t9w4yDvOaeD",
"p0vJwoLwt5",
"m2crw7nSMDi",
"b6PhEEAXd5C",
"VAGYHq8WPXz",
"aoeVMqU3U95",
"nTAQiw28el",
"U76uXcTWpcZ",
"dRH4491zGv7",
"acNcHWP-Yii",
"436DCEFz-_B",
"0k8dCXy60Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" The additional information our distribution brings is that it allows simultaneous adversarial robustness vs. accuracy trade-offs for multiple distributions (e.g., 0, 90, 180, 270 degree rotations of the original distribution, resp.) that are sufficiently different from one another (i.e., accuracy close to 1 on an... | [
-1,
7,
5,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
3,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"p0vJwoLwt5",
"nips_2021_k9iBo3RmCFd",
"nips_2021_k9iBo3RmCFd",
"m2crw7nSMDi",
"b6PhEEAXd5C",
"436DCEFz-_B",
"nips_2021_k9iBo3RmCFd",
"U76uXcTWpcZ",
"nips_2021_k9iBo3RmCFd",
"VAGYHq8WPXz",
"0k8dCXy60Q",
"cMOP0BgF-dq",
"t9w4yDvOaeD",
"nips_2021_k9iBo3RmCFd"
] |
nips_2021_09D1DIEHrJO | Universal Off-Policy Evaluation | When faced with sequential decision-making problems, it is often useful to be able to predict what would happen if decisions were made using a new policy. Those predictions must often be based on data collected under some previously used decision-making rule. Many previous methods enable such off-policy (or counterfactual) estimation of the expected value of a performance measure called the return. In this paper, we take the first steps towards a 'universal off-policy estimator' (UnO)---one that provides off-policy estimates and high-confidence bounds for any parameter of the return distribution. We use UnO for estimating and simultaneously bounding the mean, variance, quantiles/median, inter-quantile range, CVaR, and the entire cumulative distribution of returns. Finally, we also discuss UnO's applicability in various settings, including fully observable, partially observable (i.e., with unobserved confounders), Markovian, non-Markovian, stationary, smoothly non-stationary, and discrete distribution shifts.
| accept | The paper studies off-policy evaluation through estimates and high-confidence bounds on any parameter of the return distribution, such as its mean, variance, IQR, etc. The authors make use of importance sampling, an often-used tool in observational studies, proving that it leads to unbiased and consistent estimation of the cumulative distribution of rewards. This is later extended to provide high-confidence bounds on the CDF, and for parameters of the CDF. The implications of various commonly made assumptions are discussed and the estimators are evaluated empirically.
The reviews were predominantly positive, arguing for acceptance with one exception. One of the main limitations raised by reviewer j4b9 was that the return distribution is assumed to be discrete. The authors clarified in their response that this limitation may be removed. Additional clarifications were made by the authors which imply only small modifications to the manuscript. | train | [
"4iDt3PnvRAN",
"Vevax0YeCjW",
"dgD42prS0cw",
"zf3rkrQCi4",
"2J3jgMKhl7T",
"EU7uQ8ZmG3",
"Vnmi4Cnv6o3",
"5HKKQGeSyrC",
"7UHIvYdbpYS",
"w88roCWhLZe"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces UnO, a framework for estimating confidence bounds for many parameters of the reward in an off-policy setting and with finite state-action spaces. In this work, first an estimator for the CDF of the return is provided, which makes use of an importance sampling correction. From this estimator, i... | [
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
9,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_09D1DIEHrJO",
"nips_2021_09D1DIEHrJO",
"zf3rkrQCi4",
"Vnmi4Cnv6o3",
"Vevax0YeCjW",
"w88roCWhLZe",
"4iDt3PnvRAN",
"7UHIvYdbpYS",
"nips_2021_09D1DIEHrJO",
"nips_2021_09D1DIEHrJO"
] |
nips_2021_avgEPdjT2Uz | A Non-commutative Extension of Lee-Seung's Algorithm for Positive Semidefinite Factorizations | Yong Sheng Soh, Antonios Varvitsiotis | accept | I agree with the reviewers that the paper is not quite ready for publication. The paper explores a promising direction, and I encourage the authors to consider submitting this work to a future conference. However, I suggest the authors address the following issues before another round of submission:
1. Move a big part of the introduction (where you describe connections with other problems) to the supplementary material to open up space for your own work.
2. Provide run-time analysis for different algorithms and compare them.
3. Given that the paper is mainly empirical, it is important to provide a fair comparison with existing algorithms.
I am sorry that the outcome is not what the authors expected it to be and I hope this outcome does not discourage the authors from pursuing this research direction further.
| test | [
"tuCsUS02Pi7",
"xtWxDw2mJhZ",
"icAGxHV1BA3",
"xEmsVnhUCX",
"BDkZtM9nVT",
"vVpTxkUCCAb"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors proposed an improved algorithm for positive semidefinite (PSD) factorization, where a wider range of matrices can be used for ensuring the updates remain PSD in the alternating steps. The newly proposed algorithm is a superset for the previous Lee-Seung’s Algorithm. And the superiority of the proposed ... | [
4,
-1,
-1,
-1,
4,
5
] | [
2,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_avgEPdjT2Uz",
"tuCsUS02Pi7",
"vVpTxkUCCAb",
"BDkZtM9nVT",
"nips_2021_avgEPdjT2Uz",
"nips_2021_avgEPdjT2Uz"
] |
nips_2021_hqDb8d65Vfh | Efficiently Identifying Task Groupings for Multi-Task Learning | Multi-task learning can leverage information learned by one task to benefit the training of other tasks. Despite this capacity, naively training all tasks together in one model often degrades performance, and exhaustively searching through combinations of task groupings can be prohibitively expensive. As a result, efficiently identifying the tasks that would benefit from training together remains a challenging design question without a clear solution. In this paper, we suggest an approach to select which tasks should train together in multi-task learning models. Our method determines task groupings in a single run by training all tasks together and quantifying the effect to which one task's gradient would affect another task's loss. On the large-scale Taskonomy computer vision dataset, we find this method can decrease test loss by 10.0% compared to simply training all tasks together while operating 11.6 times faster than a state-of-the-art task grouping method.
| accept | This paper proposes a method for grouping tasks to achieve stronger multi-task learning performance. All four reviewers believe this is a well-written paper that should be accepted. The authors should revise their final version based on the reviewers' comments. | train | [
"tiFf7XecbSE",
"6DJ-oT7-xEY",
"AeSjbCoMrJu",
"olqv9n_1ohH",
"y7v22koCqe1",
"9z-f8U6-nJz",
"gEQLkmbUuhT",
"A1FRCBFSJDO",
"X1-uZ-F5xyM",
"5gdHzzJC47",
"kCFFjeWeUeP",
"LRcnTGycmPI"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely appreciate you leaving this final note. The runtime for STL in Figure 2 is the runtime to train all 9 single task models. We were mistaken to not differentiate this method from the task grouping methods (TAG, CS, HOA). \n\nGiven your comments, we now see that comparing the training times of non-task ... | [
-1,
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"6DJ-oT7-xEY",
"gEQLkmbUuhT",
"nips_2021_hqDb8d65Vfh",
"nips_2021_hqDb8d65Vfh",
"A1FRCBFSJDO",
"nips_2021_hqDb8d65Vfh",
"kCFFjeWeUeP",
"LRcnTGycmPI",
"AeSjbCoMrJu",
"olqv9n_1ohH",
"nips_2021_hqDb8d65Vfh",
"nips_2021_hqDb8d65Vfh"
] |
nips_2021_aUuTEEcyY_ | Instance-Conditioned GAN | Generative Adversarial Networks (GANs) can generate near photo realistic images in narrow domains such as human faces. Yet, modeling complex distributions of datasets such as ImageNet and COCO-Stuff remains challenging in unconditional settings. In this paper, we take inspiration from kernel density estimation techniques and introduce a non-parametric approach to modeling distributions of complex datasets. We partition the data manifold into a mixture of overlapping neighborhoods described by a datapoint and its nearest neighbors, and introduce a model, called instance-conditioned GAN (IC-GAN), which learns the distribution around each datapoint. Experimental results on ImageNet and COCO-Stuff show that IC-GAN significantly improves over unconditional models and unsupervised data partitioning baselines. Moreover, we show that IC-GAN can effortlessly transfer to datasets not seen during training by simply changing the conditioning instances, and still generate realistic images. Finally, we extend IC-GAN to the class-conditional case and show semantically controllable generation and competitive quantitative results on ImageNet; while improving over BigGAN on ImageNet-LT. Code and trained models to reproduce the reported results are available at https://github.com/facebookresearch/ic_gan.
| accept | Two of the reviewers strongly support the paper and find the idea original. Another reviewer thought that the ability to perform good transfer learning is also of interest to the community. I agree with their observations and recommend acceptance. | train | [
"mcd4YY1fXVE",
"UJI5JxcFLn0",
"7-AwuiKi4LB",
"JYm9ThlMwb",
"mbXkdUK3c9C",
"0gRrXMyyeLg",
"CuxUUcov0Up",
"6VZvsy9O5zz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you!\n\nI think the transfer learning stuff sounds really interesting. I maintain my rating.",
" Thank you for the response and effort in making the paper clearer and more specific. I will maintain my original score as it is a good paper.",
" We would like to thank the reviewer for their insightful feed... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"7-AwuiKi4LB",
"JYm9ThlMwb",
"0gRrXMyyeLg",
"CuxUUcov0Up",
"6VZvsy9O5zz",
"nips_2021_aUuTEEcyY_",
"nips_2021_aUuTEEcyY_",
"nips_2021_aUuTEEcyY_"
] |
nips_2021_tn6vqNUJaEW | DeepSITH: Efficient Learning via Decomposition of What and When Across Time Scales | Extracting temporal relationships over a range of scales is a hallmark of human perception and cognition---and thus it is a critical feature of machine learning applied to real-world problems. Neural networks are either plagued by the exploding/vanishing gradient problem in recurrent neural networks (RNNs) or must adjust their parameters to learn the relevant time scales (e.g., in LSTMs). This paper introduces DeepSITH, a deep network comprising biologically-inspired Scale-Invariant Temporal History (SITH) modules in series with dense connections between layers. Each SITH module is simply a set of time cells coding what happened when with a geometrically-spaced set of time lags. The dense connections between layers change the definition of what from one layer to the next. The geometric series of time lags implies that the network codes time on a logarithmic scale, enabling the DeepSITH network to learn problems requiring memory over a wide range of time scales. We compare DeepSITH to LSTMs and other recent RNNs on several time series prediction and decoding tasks. DeepSITH achieves results comparable to state-of-the-art performance on these problems and continues to perform well even as the delays are increased.
| accept | The manuscript focuses on the problem of learning across long (and varied) temporal intervals. It introduces a novel, biologically-inspired architecture called DeepSITH (Scale Invariant Temporal History) that is based on a set of modules. The modules are interconnected and respond to their inputs with a geometrically spaced set of time constants. The particular arrangement allows the architecture to have a long memory of the far past, and also maintain a detailed memory of the recent past.
The essential idea behind the architecture --- of flexible modules that permit the representation of a scale-invariant history --- is well motivated by section 1.1 and corresponding equations. The method is shown to perform as well as, or better than, existing RNN-style networks (coRNN, LMU, LSTM) on a range of tasks: from sequential MNIST to chaotic time series prediction to a new "hateful-8" delayed prediction task. This is true both in terms of final performance and speed of learning (where the method is frequently faster than most of the other architectures).
Multiple reviewers remarked on the novelty of the work, its technical soundness, and on its organization and writing. All of these aspects, as well as the presentation of the empirical results are well done.
Reviewers were compelled to raise their score by a strong rebuttal that squared the work with existing literature, e.g. deep reservoir networks and the computational neuroscience literature. One weakness noted by Reviewer [rewh] is that the manuscript does not compare to modern Transformer architectures. This would give a fuller sense of the utility of the method for modern applications. At the same time, reviewers were impressed by the ties to biology and by the minimal requirement for hyperparameter tuning required by the method.
Based on the above, I would recommend the manuscript for acceptance as a poster. It falls short, I believe, of being spotlight material because it is missing some comparisons that would better situate its results in the modern applied literature. It is nevertheless a well motivated and written manuscript that presents a thorough exploration of a novel RNN-like architecture with interesting ties to biology.
| train | [
"fJ5UX4sqBk4",
"LeLFxcK1zKJ",
"JD_0NCJpxvm",
"ENH2RDaB3Jb",
"FebmDqLZsRA",
"BEA3Jt03Aal",
"UfLmvYiIKR5",
"EXTcQcVNTUO",
"gql3nbYRTBH",
"sWopjL7_wjj",
"HOaIuw1QJI6",
"W3y-Os2_Qq-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I would like to thank the authors for their response. My main concern was the why use DeepSITH this was addressed in author response \"that it does not require iterative search of the hyperparameter space\". Based on this I am happy to change my score from 5->6",
"In this paper, the authors propose a neural ar... | [
-1,
8,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
4,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"ENH2RDaB3Jb",
"nips_2021_tn6vqNUJaEW",
"gql3nbYRTBH",
"nips_2021_tn6vqNUJaEW",
"nips_2021_tn6vqNUJaEW",
"HOaIuw1QJI6",
"W3y-Os2_Qq-",
"LeLFxcK1zKJ",
"LeLFxcK1zKJ",
"ENH2RDaB3Jb",
"FebmDqLZsRA",
"nips_2021_tn6vqNUJaEW"
] |
nips_2021_44EMx-dkQU | A Gaussian Process-Bayesian Bernoulli Mixture Model for Multi-Label Active Learning | Weishi Shi, Dayou Yu, Qi Yu | accept | This paper proposes a Gaussian process Bayesian-Bernoulli mixture model and its variational inference algorithm for multi-label active learning, and shows its usefulness through experiments including ablation studies. This paper has three positive reviews and one negative one. In response to the authors' feedback, no consensus was reached in the discussion among the reviewers. An important point of discussion is whether the proposed mixture model is novel and valid. At least for the latter, the experimental comparison shows its superiority over traditional active learning on multi-label classification datasets. As for the former, as the reviewer pointed out, the use of the Bernoulli mixture model as the label distribution is natural and not a particularly novel idea. The novelty of the individual techniques is subtle, but the modeling as a whole is reasonable and constitutes a contribution to the advancement of the research field. | train | [
"XvKNST6Epu5",
"wt_WmvRXL6T",
"8gQsvr9sta7",
"ezmjKrPhG6Z",
"HTNaMWu_X-r",
"OGBda7ejdZ",
"NSmGHKhez3d",
"TlUqMHXzOPg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Multi-label learning is an important yet challenging task in machine learning, particularly in extreme settings, where the label space is very large. Despite that, quantifying the prediction uncertainty in multi-label learning for active learning is even harder, as labels can be correlated. The paper presents a ... | [
5,
7,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
2,
-1,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_44EMx-dkQU",
"nips_2021_44EMx-dkQU",
"TlUqMHXzOPg",
"XvKNST6Epu5",
"NSmGHKhez3d",
"wt_WmvRXL6T",
"nips_2021_44EMx-dkQU",
"nips_2021_44EMx-dkQU"
] |
nips_2021_7EFdodSWee4 | Differentially Private Empirical Risk Minimization under the Fairness Lens | Differential Privacy (DP) is an important privacy-enhancing technology for private machine learning systems. It allows to measure and bound the risk associated with an individual participation in a computation. However, it was recently observed that DP learning systems may exacerbate bias and unfairness for different groups of individuals. This paper builds on these important observations and sheds light on the causes of the disparate impacts arising in the problem of differentially private empirical risk minimization. It focuses on the accuracy disparity arising among groups of individuals in two well-studied DP learning methods: output perturbation and differentially private stochastic gradient descent. The paper analyzes which data and model properties are responsible for the disproportionate impacts, why these aspects are affecting different groups disproportionately, and proposes guidelines to mitigate these effects. The proposed approach is evaluated on several datasets and settings.
| accept | The paper studies the impact of various DP algorithms on fairness. The paper is well written and easy to follow. I encourage the authors to incorporate reviewer suggestions and make the analysis more rigorous in the final version of the paper, e.g., change Theorem 1 to include higher-order error terms via O() notation. | test | [
"P8NLpaiJP3N",
"l7nuBgODrIX",
"aBrzW-pT9Nc",
"HyE6PeoweFi",
"_cau4kqcPCi",
"LQltFIw_Jz9",
"-_0LLoespEX",
"vAcjCW_w57u",
"QbI54skc29",
"T6RlwiiXwuk"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper investigates the underlying causes of the disparate impact produced by differentially private machine learning methods. To that end, the paper considers the notion of excessive risk gap, which measures the difference between the population-level and subgroup risks. The paper explores how two different d... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_7EFdodSWee4",
"HyE6PeoweFi",
"nips_2021_7EFdodSWee4",
"vAcjCW_w57u",
"LQltFIw_Jz9",
"QbI54skc29",
"T6RlwiiXwuk",
"aBrzW-pT9Nc",
"P8NLpaiJP3N",
"nips_2021_7EFdodSWee4"
] |
nips_2021_j6KoGtzPYa | A Unified View of cGANs with and without Classifiers | Conditional Generative Adversarial Networks (cGANs) are implicit generative models which allow to sample from class-conditional distributions. Existing cGANs are based on a wide range of different discriminator designs and training objectives. One popular design in earlier works is to include a classifier during training with the assumption that good classifiers can help eliminate samples generated with wrong classes. Nevertheless, including classifiers in cGANs often comes with a side effect of only generating easy-to-classify samples. Recently, some representative cGANs avoid the shortcoming and reach state-of-the-art performance without having classifiers. Somehow it remains unanswered whether the classifiers can be resurrected to design better cGANs. In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs. We start by using the decomposition of the joint probability distribution to connect the goals of cGANs and classification as a unified framework. The framework, along with a classic energy model to parameterize distributions, justifies the use of classifiers for cGANs in a principled manner. It explains several popular cGAN variants, such as ACGAN, ProjGAN, and ContraGAN, as special cases with different levels of approximations, which provides a unified view and brings new insights to understanding cGANs. Experimental results demonstrate that the design inspired by the proposed framework outperforms state-of-the-art cGANs on multiple benchmark datasets, especially on the most challenging ImageNet. The code is available at https://github.com/sian-chen/PyTorch-ECGAN.
| accept | This paper proposes a framework for thinking about several existing class-conditional GAN approaches in a unified way, which also leads to a new suggested algorithm. The algorithm outperforms existing approaches on TinyImageNet, and some initial results are shown in the rebuttal that it also does so on ImageNet. I think the framework is sensible and, from what we can tell, the new algorithm is probably an improvement over existing ones. Please ensure that you address the various concerns raised by reviewers, as well as adding the new results, in your final version. | train | [
"WjQMLFCYT6u",
"rcpvMl6ODiA",
"8XGkmO7yIHU",
"CrkGcqbqQLU",
"HE_K8k4zSzJ",
"XDKITmJQd-",
"AoLWfe97McU",
"hWSvm-dPRI",
"i8YHUDVh2r4",
"LU5OcwR-zuC",
"HT1UicQV8Y8",
"MZosLNYHMv",
"oY7YWLM9Pa9",
"kP24RKy5KE",
"CiO7WkGGH9g"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Area Chair,\n\nThank you for asking. The numbers that you request for ECGAN and BigGAN are shown in the table below. They strengthen our conclusion that ECGAN is able to enhance BigGAN, by improving Inception Score from 13.097 to 32.187 and FID from 64.661 to 32.596. Intra-FID on ImageNet is very time-consum... | [
-1,
-1,
6,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
3,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"rcpvMl6ODiA",
"HT1UicQV8Y8",
"nips_2021_j6KoGtzPYa",
"nips_2021_j6KoGtzPYa",
"nips_2021_j6KoGtzPYa",
"CiO7WkGGH9g",
"CrkGcqbqQLU",
"HE_K8k4zSzJ",
"kP24RKy5KE",
"8XGkmO7yIHU",
"nips_2021_j6KoGtzPYa",
"nips_2021_j6KoGtzPYa",
"nips_2021_j6KoGtzPYa",
"nips_2021_j6KoGtzPYa",
"nips_2021_j6KoG... |
nips_2021_HKtsGW-lNbw | Online and Offline Reinforcement Learning by Planning with a Learned Model | Learning efficiently from small amounts of data has long been the focus of model-based reinforcement learning, both for the online case when interacting with the environment, and the offline case when learning from a fixed dataset. However, to date no single unified algorithm could demonstrate state-of-the-art results for both settings.In this work, we describe the Reanalyse algorithm, which uses model-based policy and value improvement operators to compute improved training targets for existing data points, allowing for efficient learning at data budgets varying by several orders of magnitude. We further show that Reanalyse can also be used to learn completely without environment interactions, as in the case of Offline Reinforcement Learning (Offline RL). Combining Reanalyse with the MuZero algorithm, we introduce MuZero Unplugged, a single unified algorithm for any data budget, including Offline RL. In contrast to previous work, our algorithm requires no special adaptations for the off-policy or Offline RL settings. MuZero Unplugged sets new state-of-the-art results for Atari in the standard 200 million frame online setting as well as in the RL Unplugged Offline RL benchmark.
| accept | The authors have done a good job responding to the reviewers' questions. The reviewers are in consensus that the paper is a worthy contribution to the empirical aspects of offline reinforcement learning.
The authors are encouraged to include some discussion in relation to the recent advances in the theory of offline RL, e.g., those from the recent workshops:
1. ICML RL Theory workshop: https://lyang36.github.io/icml2021_rltheory/#papers
2. NeurIPS'20 Offline RL workshop: https://offline-rl-neurips.github.io/
and the references therein. Some of the papers there might provide interesting theoretical insight into why model-based approaches are the way to go for offline RL.
| train | [
"EJIUooFDiA",
"rnlE9hZPHwW",
"qxabLN7QdBS",
"jlZblUTpO9O",
"q0GTNGrNQo",
"B3lT1iYeuqN",
"gdoZoSF9IY_",
"HmQuFJe7cNu",
"BaNrYWEvYGD",
"NuBJT31STwc",
"JARWvD84Isi"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the reply! I am happy to keep the positive score for this paper.",
"In this paper the authors focus on the Reanalyze algorithm introduced in MuZero [1] and show its applicability to improving sample efficiency, as well as the offline/batch RL setting with no modification to the underlying algorith... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"gdoZoSF9IY_",
"nips_2021_HKtsGW-lNbw",
"B3lT1iYeuqN",
"nips_2021_HKtsGW-lNbw",
"JARWvD84Isi",
"rnlE9hZPHwW",
"NuBJT31STwc",
"BaNrYWEvYGD",
"nips_2021_HKtsGW-lNbw",
"nips_2021_HKtsGW-lNbw",
"nips_2021_HKtsGW-lNbw"
] |
nips_2021_jHAAHg8T7Nx | Stochastic Multi-Armed Bandits with Control Variates | This paper studies a new variant of the stochastic multi-armed bandits problem where auxiliary information about the arm rewards is available in the form of control variates. In many applications like queuing and wireless networks, the arm rewards are functions of some exogenous variables. The mean values of these variables are known a priori from historical data and can be used as control variates. Leveraging the theory of control variates, we obtain mean estimates with smaller variance and tighter confidence bounds. We develop an improved upper confidence bound based algorithm named UCB-CV and characterize the regret bounds in terms of the correlation between rewards and control variates when they follow a multivariate normal distribution. We also extend UCB-CV to other distributions using resampling methods like Jackknifing and Splitting. Experiments on synthetic problem instances validate performance guarantees of the proposed algorithms.
| accept | The committee has appreciated the involved and detailed response to their questions and comments and agreed that this work should be accepted. One reviewer raised the question of the knowledge of the variance of the control variate $\sigma_{w,i}^2$ and would encourage the authors to investigate and discuss this question in their final version. | train | [
"QOyLey-ygw1",
"2izJKxZm9Zc",
"adLeQ5K0Zx",
"JxK1HjDZ0qw",
"9M4uoesa8dK",
"OduXei00Cpa",
"SOCBB7Ij9T",
"E4p68P3moS6",
"bIJQzBj3-yN",
"3YVZ3tVUX3I",
"-MGdmf4kwF",
"DN6iLJSxqnU"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper considers low regret learning in stochastic multi-armed bandits in the presence of control variables. For jointly Gaussian reward and control variables, and linear control variates, the authors show a factor 1-ρ^2 reduction in the regret. The paper focuses on reducing regret in multiarmed bandits using... | [
7,
-1,
-1,
-1,
-1,
7,
-1,
6,
7,
-1,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
4,
-1,
3,
3,
-1,
-1,
-1
] | [
"nips_2021_jHAAHg8T7Nx",
"adLeQ5K0Zx",
"3YVZ3tVUX3I",
"9M4uoesa8dK",
"DN6iLJSxqnU",
"nips_2021_jHAAHg8T7Nx",
"bIJQzBj3-yN",
"nips_2021_jHAAHg8T7Nx",
"nips_2021_jHAAHg8T7Nx",
"QOyLey-ygw1",
"E4p68P3moS6",
"OduXei00Cpa"
] |
nips_2021_cVwc7IHWEWi | Near-Optimal No-Regret Learning in General Games | Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich | accept | This paper resolves a fundamental question on individual regret of optimistic Hedge in a multi-player general-sum game setting, which was left open since the work of Syrgkanis et al. 2015. Despite the recent progress by Chen & Peng, 2020 which shows T^{1/6} regret, to achieve polylog(T) regret this paper proposes significantly new ideas, which we believe could be highly beneficial for the community. All reviewers are excited about this result. | train | [
"mxH8tHm1KHg",
"9KlwHUTsmu",
"lc6zRaNRPa",
"yf5qmFXSAR",
"UFnhRsUJO8k",
"D31uYSY9iFo",
"nRN3ekedPe",
"mKxQ8OJJnq",
"E7DpdRwh7Qh"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed response. I no longer have concerns about the correctness of the result, and have increased my score accordingly.",
"This paper analyzes the regret of Optimistic Hedge in multi-player general-sum games. The paper shows that the algorithm attains polylog($T$) regret for each player, w... | [
-1,
8,
-1,
-1,
-1,
-1,
9,
10,
8
] | [
-1,
2,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"yf5qmFXSAR",
"nips_2021_cVwc7IHWEWi",
"E7DpdRwh7Qh",
"9KlwHUTsmu",
"mKxQ8OJJnq",
"nRN3ekedPe",
"nips_2021_cVwc7IHWEWi",
"nips_2021_cVwc7IHWEWi",
"nips_2021_cVwc7IHWEWi"
] |
nips_2021_UKBok7bHMQm | Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration | Our work reveals a structured shortcoming of the existing mainstream self-supervised learning methods. Whereas self-supervised learning frameworks usually take the prevailing perfect instance level invariance hypothesis for granted, we carefully investigate the pitfalls behind. Particularly, we argue that the existing augmentation pipeline for generating multiple positive views naturally introduces out-of-distribution (OOD) samples that undermine the learning of the downstream tasks. Generating diverse positive augmentations on the input does not always pay off in benefiting downstream tasks. To overcome this inherent deficiency, we introduce a lightweight latent variable model UOTA, targeting the view sampling issue for self-supervised learning. UOTA adaptively searches for the most important sampling region to produce views, and provides viable choice for outlier-robust self-supervised learning approaches. Our method directly generalizes to many mainstream self-supervised learning approaches, regardless of the loss's nature contrastive or not. We empirically show UOTA's advantage over the state-of-the-art self-supervised paradigms with evident margin, which well justifies the existence of the OOD sample issue embedded in the existing approaches. Especially, we theoretically prove that the merits of the proposal boil down to guaranteed estimator variance and bias reduction. Code is available: https://github.com/ssl-codelab/uota.
| accept | There was a robust discussion amongst the reviewers about the merits of this work. There was some disagreement as to whether this work's potential impact on the field of semi-supervised learning is big enough. I decided to side with the reviewers that argued that this contribution could in fact be important. The reviewers have unanimously suggested that the new experimental results be included in the final version, as well as improved flow & readability. | train | [
"fP_4bsE7fsS",
"7XbUT1HI952",
"I3EzMsM3WHG",
"6XyQ6nhBC0",
"bo9I5j1CzUb",
"eq3xXISOaiI",
"t6Zy335GHe-",
"MCb9W05tFMj",
"99MebAaGMTq",
"0z7l5rZwjTU",
"12-PIZAxK-i",
"-LNrtK9gz75",
"HbZfhoEfIYk",
"c2NsWuYfSBh",
"8Nnnw9tTb-8",
"E1dK6uW9Mj",
"tfP0KX2LmqI"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for your reply. We feel sorry that our previous response did not completely clear up your concerns. While we might not be able to change your position at this time, we still find it necessary to share our thoughts as final remarks.\n\nFirstly, your suggested UOTA+[a] formulation has been shown effecti... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5
] | [
"7XbUT1HI952",
"HbZfhoEfIYk",
"tfP0KX2LmqI",
"eq3xXISOaiI",
"nips_2021_UKBok7bHMQm",
"0z7l5rZwjTU",
"MCb9W05tFMj",
"8Nnnw9tTb-8",
"nips_2021_UKBok7bHMQm",
"bo9I5j1CzUb",
"bo9I5j1CzUb",
"bo9I5j1CzUb",
"tfP0KX2LmqI",
"E1dK6uW9Mj",
"99MebAaGMTq",
"nips_2021_UKBok7bHMQm",
"nips_2021_UKBo... |
nips_2021__9-Lsdf191P | Improving Anytime Prediction with Parallel Cascaded Networks and a Temporal-Difference Loss | Although deep feedforward neural networks share some characteristics with the primate visual system, a key distinction is their dynamics. Deep nets typically operate in serial stages wherein each layer completes its computation before processing begins in subsequent layers. In contrast, biological systems have cascaded dynamics: information propagates from neurons at all layers in parallel but transmission occurs gradually over time, leading to speed-accuracy trade offs even in feedforward architectures. We explore the consequences of biologically inspired parallel hardware by constructing cascaded ResNets in which each residual block has propagation delays but all blocks update in parallel in a stateful manner. Because information transmitted through skip connections avoids delays, the functional depth of the architecture increases over time, yielding anytime predictions that improve with internal-processing time. We introduce a temporal-difference training loss that achieves a strictly superior speed-accuracy profile over standard losses and enables the cascaded architecture to outperform state-of-the-art anytime-prediction methods. The cascaded architecture has intriguing properties, including: it classifies typical instances more rapidly than atypical instances; it is more robust to both persistent and transient noise than is a conventional ResNet; and its time-varying output trace provides a signal that can be exploited to improve information processing and inference.
| accept | This paper presents a novel ANN architecture (which the authors term "cascaded" networks) inspired by the real brain. In brief, this architecture introduces delays in the propagation between computational blocks, such that skip connections serve to provide a rapid inference, whereas over time, the effective depth of the architecture increases. They use this architecture for anytime prediction, and develop a temporal difference based training algorithm that encourages the networks to find rapid solutions to the inference problem when possible. They show that this approach provides some interesting and desirable properties, such as different "reaction times" to prototypical versus atypical samples and greater robustness.
The initial review scores were a mix. Nonetheless, the reviewers were generally positive, although there were some concerns about the generality and potential benefits of the approach for neuromorphic systems. However, after author responses and discussion, though there was still some divergence in scores, the reviewers largely agreed that this paper makes a sufficiently interesting and worthwhile contribution to the field for acceptance. | train | [
"PUi5Fi0Z-4E",
"EBcbqAFsKTz",
"I32yCOw9EIQ",
"1r8QtnW1AW5",
"xx5IVysDAzv",
"F8rsYla-9Rd",
"_27WIuTAjn",
"BVlRabhD04w"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for taking the time to provide us with helpful feedback and for considering our response. We will certainly update the paper to clarify the runtime issue as well as other points you raised.",
"Feedforward networks can be converted into networks with temporal cascaded dynamics by introducing a propagat... | [
-1,
7,
-1,
6,
-1,
-1,
-1,
8
] | [
-1,
4,
-1,
5,
-1,
-1,
-1,
3
] | [
"I32yCOw9EIQ",
"nips_2021__9-Lsdf191P",
"_27WIuTAjn",
"nips_2021__9-Lsdf191P",
"BVlRabhD04w",
"1r8QtnW1AW5",
"EBcbqAFsKTz",
"nips_2021__9-Lsdf191P"
] |
nips_2021_bGXIX-CVzrq | Identifiable Generative models for Missing Not at Random Data Imputation | Real-world datasets often have missing values associated with complex generative processes, where the cause of the missingness may not be fully observed. This is known as missing not at random (MNAR) data. However, many imputation methods do not take into account the missingness mechanism, resulting in biased imputation values when MNAR data is present. Although there are a few methods that have considered the MNAR scenario, their model's identifiability under MNAR is generally not guaranteed. That is, model parameters can not be uniquely determined even with infinite data samples, hence the imputation results given by such models can still be biased. This issue is especially overlooked by many modern deep generative models. In this work, we fill in this gap by systematically analyzing the identifiability of generative models under MNAR. Furthermore, we propose a practical deep generative model which can provide identifiability guarantees under mild assumptions, for a wide range of MNAR mechanisms. Our method demonstrates a clear advantage for tasks on both synthetic data and multiple real-world scenarios with MNAR data.
| accept |
The paper is on imputing missing variables when they are missing not at random (MNAR). All reviewers agree that the paper makes a valuable contribution on a topic that is important but often overlooked. This is a clear accept.
In the camera-ready version, please incorporate the reviewers' feedback. Moreover, during discussion, reviewers brought up the idea of illustrating assumptions A1-A3 on a simple example, which I would indeed recommend doing (e.g. in the appendix).
| val | [
"HDdB3r4oyx",
"7-wQq-Q9uM",
"uCSQ9U41U3v",
"ggFsjGIh_6w",
"6gUTT2O3ez0",
"ETZLs--n03",
"6Wt3lerM0g",
"homtJZtnwa4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies identifiability of generative models for Missing Not at Random (MNAR) data. MNAR is the most general missing mechanism, where the the missingness can depend on both observed and unobserved data. Most prior imputation techniques work under MCAR or MAR assumptions, and there are few models to hand... | [
7,
6,
8,
-1,
-1,
-1,
-1,
7
] | [
4,
5,
5,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_bGXIX-CVzrq",
"nips_2021_bGXIX-CVzrq",
"nips_2021_bGXIX-CVzrq",
"7-wQq-Q9uM",
"uCSQ9U41U3v",
"HDdB3r4oyx",
"homtJZtnwa4",
"nips_2021_bGXIX-CVzrq"
] |
nips_2021_DUy-qLzqvlU | DNN-based Topology Optimisation: Spatial Invariance and Neural Tangent Kernel | We study the Solid Isotropic Material Penalization (SIMP) method with a density field generated by a fully-connected neural network, taking the coordinates as inputs. In the large width limit, we show that the use of DNNs leads to a filtering effect similar to traditional filtering techniques for SIMP, with a filter described by the Neural Tangent Kernel (NTK). This filter is however not invariant under translation, leading to visual artifacts and non-optimal shapes. We propose two embeddings of the input coordinates, which lead to (approximate) spatial invariance of the NTK and of the filter. We empirically confirm our theoretical observations and study how the filter size is affected by the architecture of the network. Our solution can easily be applied to any other coordinates-based generation method.
| accept | This paper considers the application of deep neural networks to the "SIMP" method for topology optimization. It proposes a novel approach for doing this and analyzes it using NTK theory.
This application is well outside the field of expertise of the reviewers, or indeed probably everyone associated with NeurIPS, including myself. That being said, the reviewers seem to think this is a good well-written contribution, with theoretical insights that might generalize beyond this particular application. | train | [
"y08wBegxoCU",
"JXjTmI-84qg",
"KT1ODMH7Rou",
"pdTwuEv5g9j",
"t7ac34eXShY",
"xZ2xapSTBlY",
"HL1kiynvvn",
"tHsZ64kd3Q-"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" In the proof we emphasize the fact that $a_i = \\dot{\\sigma}(x_i + \\bar{b}(X))$, where $\\sigma$ is the sigmoid function. \n\nFurthermore we changed the end of the proof to this to make it clearer:\n\nEigenvalues: We already know that $0$ is an eigenvalue with multiplicity $1$. So let $u \\neq 0$ in $\\mathbb{R... | [
-1,
-1,
6,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
2,
4
] | [
"JXjTmI-84qg",
"pdTwuEv5g9j",
"nips_2021_DUy-qLzqvlU",
"HL1kiynvvn",
"tHsZ64kd3Q-",
"KT1ODMH7Rou",
"nips_2021_DUy-qLzqvlU",
"nips_2021_DUy-qLzqvlU"
] |
nips_2021_Ghk0AJ8XtVx | Baleen: Robust Multi-Hop Reasoning at Scale via Condensed Retrieval | Multi-hop reasoning (i.e., reasoning across two or more documents) is a key ingredient for NLP models that leverage large corpora to exhibit broad knowledge. To retrieve evidence passages, multi-hop models must contend with a fast-growing search space across the hops, represent complex queries that combine multiple information needs, and resolve ambiguity about the best order in which to hop between training passages. We tackle these problems via Baleen, a system that improves the accuracy of multi-hop retrieval while learning robustly from weak training signals in the many-hop setting. To tame the search space, we propose condensed retrieval, a pipeline that summarizes the retrieved passages after each hop into a single compact context. To model complex queries, we introduce a focused late interaction retriever that allows different parts of the same query representation to match disparate relevant passages. Lastly, to infer the hopping dependencies among unordered training passages, we devise latent hop ordering, a weak-supervision strategy in which the trained retriever itself selects the sequence of hops. We evaluate Baleen on retrieval for two-hop question answering and many-hop claim verification, establishing state-of-the-art performance.
| accept | The paper proposes a multi-hop reasoning system called Baleen based on the idea of iterative retrieval. It includes three main ideas: 1) condensed retrieval, which summarizes the documents at each hop; 2) focused late interaction, which ranks the top-k scores and only includes those for later computation; 3) latent hop ordering, which learns to order the passages. The method is evaluated on two datasets, HotpotQA and HoVer. Both experiments demonstrate the proposed method achieves significantly better performance than baselines.
All reviewers agree on the quality of the work. The performance is solid. The paper is clear and easy to follow. The authors may want to address the questions raised by reviewers. In addition, the reference part is rather nonstandard. Please cite the correct source, i.e. the official publication should be cited if it is published, instead of the arxiv version. | val | [
"vq0eyWGCQ5f",
"EbJoDB0LVf",
"084YS9IUVIV",
"kSkm3TxUS_m",
"q05sWzkZL0W",
"c6_8ORQbEBz",
"tkafkIzt8Bg",
"z0UWNjxWag",
"PJKyfUHiRy5",
"S41zqgA9Ax6"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We would like to make a small correction to our response to reviewer **EV88**. On HotPotQA, Baleen requires 52 ELECTRA invocations. Our response to reviewer **EV88** originally accounted for 20 invocations only (i.e., 10+10 for the two hops, which was our original implementation, instead of currently 10 for the f... | [
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
5,
5
] | [
"c6_8ORQbEBz",
"084YS9IUVIV",
"nips_2021_Ghk0AJ8XtVx",
"nips_2021_Ghk0AJ8XtVx",
"PJKyfUHiRy5",
"kSkm3TxUS_m",
"S41zqgA9Ax6",
"084YS9IUVIV",
"nips_2021_Ghk0AJ8XtVx",
"nips_2021_Ghk0AJ8XtVx"
] |
nips_2021_HCkJQJyoHN | Local Hyper-Flow Diffusion | Kimon Fountoulakis, Pan Li, Shenghao Yang | accept | The paper extends the local flow diffusion framework of [26] to the submodular hypergraph setting,. The reviewers and AC agree that this is a solid theoretical contribution, even though it does not significantly depart from the graph analysis. More importantly, the reviewers and AC agree that the authors provide extensive and convincing experimental results. These constitute the main strength of this submission. | train | [
"Nic6ICUDmhd",
"fFrNxIT6XF",
"if3bX5Zl-Dc",
"Z1Gjels_BSs",
"l14TTzKE6uD",
"Jop43-vfOmI",
"L3h_RWIP7tQ",
"j-JQLe_4ZvY",
"Z2sbRqjPtO",
"d-u7EkXu6Vs",
"LYgJy1jhzHq"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer xXJk, we hope that our response addressed your questions. Please feel free to let us know if you have any further suggestions or reservations. We wish to thank you again for your time and insightful comments.",
" Dear Reviewer Qb7P, we hope that our response addressed your questions. Please feel f... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Jop43-vfOmI",
"L3h_RWIP7tQ",
"nips_2021_HCkJQJyoHN",
"j-JQLe_4ZvY",
"d-u7EkXu6Vs",
"LYgJy1jhzHq",
"Z2sbRqjPtO",
"if3bX5Zl-Dc",
"nips_2021_HCkJQJyoHN",
"nips_2021_HCkJQJyoHN",
"nips_2021_HCkJQJyoHN"
] |
nips_2021_hhkyM3ib9e6 | Permuton-induced Chinese Restaurant Process | Masahiro Nakano, Yasuhiro Fujiwara, Akisato Kimura, Takeshi Yamada, naonori ueda | accept | The reviewers all agreed that this paper should be accepted. Please read through the reviews and responses and make sure to include all suggested changes in the camera ready version. One area to pay special attention to when preparing the camera ready is clarity. For example, this paper contains quite technical definitions of permutations/partitions and their relationships. It would be very useful to the more general reader to provide many more illustrative examples (in the appendix, if space is limiting).
As a side note, the reviewers greatly appreciated the authors' enthusiastic engagement during the discussion period -- well done! | test | [
"Y8271rxtgO",
"3Ip-jbG9JIo",
"c4kA1Djudma",
"7FQzw4n9M9",
"iEBBomFfvQl",
"rCie8qXDs1p",
"lDokZJqAO0y",
"oQtU0owlgFk",
"k8JtmgD7hnG",
"9iL383dzGTY",
"gTi0HBFafVE",
"jj4CEsso9H3"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate author's response which resolved most of the questions I raised. I keep my score intact.",
" We would like to thank you again for your careful reading of our paper and very positive feedback. In particular, we are happy that you liked the idea of introducing permuton to model Bayesian nonparametric... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3,
4
] | [
"rCie8qXDs1p",
"9iL383dzGTY",
"7FQzw4n9M9",
"jj4CEsso9H3",
"gTi0HBFafVE",
"k8JtmgD7hnG",
"oQtU0owlgFk",
"nips_2021_hhkyM3ib9e6",
"nips_2021_hhkyM3ib9e6",
"nips_2021_hhkyM3ib9e6",
"nips_2021_hhkyM3ib9e6",
"nips_2021_hhkyM3ib9e6"
] |
nips_2021_U5Af9S_RcI0 | Faster Algorithms and Constant Lower Bounds for the Worst-Case Expected Error | Jonah Brown-Cohen | accept | This paper continues a line of investigation initiated by Chen, Valiant, and Valiant in NeurIPS'20. That work proposed a new model of statistical estimation for worst-case data that is randomly collected and gave a polynomial time algorithm minimizes the expected error. The current work provides a significantly faster algorithm for this problem, matching the guarantees of the previous work, when the data is bounded in $\ell_{\infty}$-norm. Moreover, additional (new) results are obtained in the current work, e.g., for the setting that the data is bounded in $\ell_2$-norm. At the technical level, the proposed algorithms rely on some version of online gradient descent, as opposed to black-box convex optimization in the previous work. With the exception of one reviewer (who was unconvinced about the model itself, i.e., the prior NeurIPS'20 work), the reviewers ranked this paper above the acceptance threshold. | val | [
"KcQG1n-XLjV",
"y6fuD_w0XGB",
"fVDXfTpH4_d",
"fvbdsPUjHSw",
"2a39iTKp03",
"GvPIZADBim",
"6ki7LuIMMrg",
"mpV9etQ72-U",
"a3Mzdk8Nq0j"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" With profuse apologies for the delay I wanted to thank the authors very much for their response to my clarification questions. The whole discussion was very helpful. In particular the paragraph about how the reductions of the [CVV'20] SDP to a standard SDP result in having $O(m)$ constraints whereas the SDP in th... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
7,
4
] | [
-1,
3,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"GvPIZADBim",
"nips_2021_U5Af9S_RcI0",
"2a39iTKp03",
"a3Mzdk8Nq0j",
"mpV9etQ72-U",
"y6fuD_w0XGB",
"nips_2021_U5Af9S_RcI0",
"nips_2021_U5Af9S_RcI0",
"nips_2021_U5Af9S_RcI0"
] |
nips_2021_LkNBNOut0oD | On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources | Domain adaptation (DA) benefits from the rigorous theoretical works that study its insightful characteristics and various aspects, e.g., learning domain-invariant representations and its trade-off. However, it seems not the case for the multiple source DA and domain generalization (DG) settings which are remarkably more complicated and sophisticated due to the involvement of multiple source domains and potential unavailability of target domain during training. In this paper, we develop novel upper-bounds for the target general loss which appeal us to define two kinds of domain-invariant representations. We further study the pros and cons as well as the trade-offs of enforcing learning each domain-invariant representation. Finally, we conduct experiments to inspect the trade-off of these representations for offering practical hints regarding how to use them in practice and explore other interesting properties of our developed theory.
| accept | This paper develops novel upper bounds for the target risk in the multiple source domain adaptation/generalization settings. Based on the theoretical results, the authors further study how to learn proper domain invariant representations. The proposed theory provides new insights and offers practical hints on multiple source domain adaptation. I thus recommend acceptance of this paper.
Nevertheless, there are some concerns that need to be addressed in the final version. The main concern is the comparison with existing theoretical results and methods in Zhao et al. 2018 and Hoffman et al. 2018. The authors have well addressed these in the rebuttal, and I hope the authors could carefully revise the paper to incorporate these discussions.
| train | [
"YAki7HcAFHO",
"GyppX7DBErN",
"ebnkXeKMP2-",
"dSr2wdOIZ4a",
"usKl6RwMWd",
"rddsmqox5bz",
"Ryks_smtsIk",
"FJAwC5Rmdpp",
"xuqCOEAJFD",
"M7cWCD0FH9"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer C95H,\n\nThank you for your time helping to review and improve the paper.\n\nAs the discussion period is nearing an end, thus we wonder if you can spend some time going over our newest replies, to see if they successfully answered your questions or not. This is also to give us a decent amount of tim... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"FJAwC5Rmdpp",
"nips_2021_LkNBNOut0oD",
"xuqCOEAJFD",
"GyppX7DBErN",
"FJAwC5Rmdpp",
"xuqCOEAJFD",
"M7cWCD0FH9",
"nips_2021_LkNBNOut0oD",
"nips_2021_LkNBNOut0oD",
"nips_2021_LkNBNOut0oD"
] |
nips_2021_bvzhvNPZlqG | You Never Cluster Alone | Recent advances in self-supervised learning with instance-level contrastive objectives facilitate unsupervised clustering. However, a standalone datum is not perceiving the context of the holistic cluster, and may undergo sub-optimal assignment. In this paper, we extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data subjected to the same cluster contribute to a unified representation that encodes the context of each data group. Contrastive learning with this representation then rewards the assignment of each datum. To implement this vision, we propose twin-contrast clustering (TCC). We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one. On one hand, with the corresponding assignment variables being the weight, a weighted aggregation along the data points implements the set representation of a cluster. We further propose heuristic cluster augmentation equivalents to enable cluster-level contrastive learning. On the other hand, we derive the evidence lower-bound of the instance-level contrastive objective with the assignments. By reparametrizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps. Extensive experiments show that TCC outperforms the state-of-the-art on benchmarked datasets.
| accept | The paper proposes a novel extension to contrastive learning, using both cluster level and instance level tasks in representation learning. The bulk of the reviewers found this method interesting and well-motivated, and from my perspective as well using this additional structure is a good direction in contrastive learning / representation learning. The authors felt their work should be viewed independently from SwAV, but the reviewers clearly saw the strong overlap between what the two models do. The reviewers added SwAV as a baseline, and their model performed favorably on their benchmark tasks. However, I encourage the authors to be generous on the connection between their work and SwAV in particular in the final draft, while the downstream applications are similar, I believe the motivations are very similar. Finally, the reviewers requested more benchmarks, such as TinyImagenet, and this satisfied the reviewers. Therefore I recommend acceptance as a poster.
A note that I do believe that [a] "Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation" should be included in the final draft. I do not believe that the significance of the submitted work should be judge by [a], as it is quite new and could have feasibly fallen under the radar with no ill intent. But I believe both papers are doing similar things and as such this paper could do the community a service by including results in [a] and providing this as additional context to their results. I strongly encourage the authors to do this. | test | [
"isiWcUMRwR",
"qPdgDlkhxfB",
"QaH0_av-aZD",
"ra9gxv-ZBY",
"538AzfHZcHs",
"yN4MqFTSz0",
"yrQyr56Tb5v",
"XZYOtgfTg1H",
"H_dg6-vbore",
"JurrYTgNGF_",
"_Bfvrtp3EI",
"s7KSnosk9oa",
"hAyHqsLs0V",
"Xo_H9ZtgQTj",
"MAiu2sMFppv",
"OinWNezFiKm",
"uZQTJcgsAFB",
"rizvChSRrVZ",
"f0SxWGWs_FZ",
... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" We deeply appreciate the efforts of **R#VWHL**, thank you!",
" I thank to authors for a passionate discussion and I appreciate their hard work. \n\nI had a quite detailed discussion with the authors (can be seen above). I believe our discussions were broad but also fruitful. I acknowledge that we have small nu... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"qPdgDlkhxfB",
"ra9gxv-ZBY",
"nips_2021_bvzhvNPZlqG",
"538AzfHZcHs",
"yN4MqFTSz0",
"XZYOtgfTg1H",
"H_dg6-vbore",
"yrQyr56Tb5v",
"Xo_H9ZtgQTj",
"hAyHqsLs0V",
"s7KSnosk9oa",
"uZQTJcgsAFB",
"OinWNezFiKm",
"MAiu2sMFppv",
"rizvChSRrVZ",
"Mb14z_UvQ4r",
"URPl9f1IjO",
"QaH0_av-aZD",
"vU7... |
nips_2021_jK9Hy4qJsB | Dynamic COVID risk assessment accounting for community virus exposure from a spatial-temporal transmission model | COVID-19 pandemic has caused unprecedented negative impacts on our society, including further exposing inequity and disparity in public health. To study the impact of socioeconomic factors on COVID transmission, we first propose a spatial-temporal model to examine the socioeconomic heterogeneity and spatial correlation of COVID-19 transmission at the community level. Second, to assess the individual risk of severe COVID-19 outcomes after a positive diagnosis, we propose a dynamic, varying-coefficient model that integrates individual-level risk factors from electronic health records (EHRs) with community-level risk factors. The underlying neighborhood prevalence of infections (both symptomatic and pre-symptomatic) predicted from the previous spatial-temporal model is included in the individual risk assessment so as to better capture the background risk of virus exposure for each individual. We design a weighting scheme to mitigate multiple selection biases inherited in EHRs of COVID patients. We analyze COVID transmission data in New York City (NYC, the epicenter of the first surge in the United States) and EHRs from NYC hospitals, where time-varying effects of community risk factors and significant interactions between individual- and community-level risk factors are detected. By examining the socioeconomic disparity of infection risks and interaction among the risk factors, our methods can assist public health decision-making and facilitate better clinical management of COVID patients.
| accept | This paper presents novel ML approaches to examining the socioeconomic disparity of infection risks and interaction among the risk factors, to assist public health decision-making and facilitate better clinical management of COVID patients. This study is highly relevant to the NeurIPS community. As one reviewer states, this work pushes the boundary of merging the epidemiological analysis with socio-economic factors and is quite ground breaking in this respect, with the potential for huge societal impact. The paper is methodologically solid and the authors have presented a strong rebuttal and discussion with the reviewers | train | [
"E5cIokDuYdV",
"kSOTQmyjz8J",
"pegGQQhcqrA",
"S3oFDZ5hXLs",
"27mlM249kL-",
"Du-oEuhfpYN",
"83wYSDNB3S",
"BSZwBIdvLSi",
"AaLjQ8326pb",
"eP6BdXpIrAx",
"nCYy1iGxBEx",
"lhEyLDo-1xU",
"FzWdXfq8lA"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This work proposes (a) a community-transmission model, and (b) an individual risk-assessment model for COVID-19. The model is general enough to be applicable to any infectious disease. The model for (b) is enriched by inferred variables from (a) by (i) using them as covariates and (ii) addressing the selection bia... | [
7,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_jK9Hy4qJsB",
"AaLjQ8326pb",
"Du-oEuhfpYN",
"27mlM249kL-",
"lhEyLDo-1xU",
"nCYy1iGxBEx",
"nips_2021_jK9Hy4qJsB",
"nips_2021_jK9Hy4qJsB",
"eP6BdXpIrAx",
"E5cIokDuYdV",
"83wYSDNB3S",
"FzWdXfq8lA",
"nips_2021_jK9Hy4qJsB"
] |
nips_2021_kpDf5AW_Dlc | Dueling Bandits with Adversarial Sleeping | Aadirupa Saha, Pierre Gaillard | accept | The reviewers are unanimous that this is an interesting problem. While there were some initial concerns about the analysis and relation to other algorithms, this has been cleared up in the response and discussion. Please take the reviewers minor comments into consideration when preparing the final revision. Note also the confusions of the reviewers on certain issues explained in the response will likely also be confusing to other readers, so the final revision should take special care to clarify these potential pitfalls.
| train | [
"mfG2mDzmovK",
"86CoV6sUCKI",
"T12G2fKTl0I",
"wDYY-FkGj2n",
"7V5gmv-f9Pv",
"ShmsMAmI9E",
"wYwHhSRB9S",
"e19EtQHkd3x",
"t3YkrihMVzi",
"n19TSvmeDI5",
"nBnFw0J1Ccb",
"d2Ukrplhze",
"whshcN25_eQ",
"DrvYVGGsT1",
"ZmSbEP_RFU"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Reviewer ASCB,\n\nGlad that the concern is clarified. Many thanks for confirming and re-considering the scores. We will certainly detail our regret bounds for SlDB-UCB more clearly in the update, including the dependency on delta, the high probability and the corresponding expected regret bound. Thanks for y... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"T12G2fKTl0I",
"nips_2021_kpDf5AW_Dlc",
"wDYY-FkGj2n",
"7V5gmv-f9Pv",
"ShmsMAmI9E",
"d2Ukrplhze",
"t3YkrihMVzi",
"nips_2021_kpDf5AW_Dlc",
"n19TSvmeDI5",
"nBnFw0J1Ccb",
"DrvYVGGsT1",
"86CoV6sUCKI",
"ZmSbEP_RFU",
"e19EtQHkd3x",
"nips_2021_kpDf5AW_Dlc"
] |
nips_2021_wEOlVzVhMW_ | Beware of the Simulated DAG! Causal Discovery Benchmarks May Be Easy to Game | Simulated DAG models may exhibit properties that, perhaps inadvertently, render their structure identifiable and unexpectedly affect structure learning algorithms. Here, we show that marginal variance tends to increase along the causal order for generically sampled additive noise models. We introduce varsortability as a measure of the agreement between the order of increasing marginal variance and the causal order. For commonly sampled graphs and model parameters, we show that the remarkable performance of some continuous structure learning algorithms can be explained by high varsortability and matched by a simple baseline method. Yet, this performance may not transfer to real-world data where varsortability may be moderate or dependent on the choice of measurement scales. On standardized data, the same algorithms fail to identify the ground-truth DAG or its Markov equivalence class. While standardization removes the pattern in marginal variance, we show that data generating processes that incur high varsortability also leave a distinct covariance pattern that may be exploited even after standardization. Our findings challenge the significance of generic benchmarks with independently drawn parameters. The code is available at https://github.com/Scriddie/Varsortability.
| accept | A summary from one of the reviews:
"This paper shows that one needs to be cautious when rescaling your data prior to using a causal structure learning algorithms. The authors introduce the concept of varsortability that measures the agreement in how much the marginal variance tends to increase along the causal order. They show that standardization of your data can hurt performance in identifying the DAG (or its equivalence class) which can be explained by varsortability. The authors claim that this concept of varsortability also explains why even after standardization certain continuous structure learning algorithm perform well. The authors focuses on additive noise models and perform an extensive benchmark."
While the initial reviewer opinions were split, the eventual consensus on this paper is that it brings a valuable message of caution regarding developing and benchmarking causal inference methods with simulated data. | val | [
"_ct4wu6D_V",
"CCADWhvMx2",
"ZFLzCLmvEe5",
"Yx0ouMbz1nH",
"63HKlrKJA32",
"QIsC4ILXufo",
"UfKhfxEctIs",
"t00uuo-yBG",
"xWd2uHmIRu",
"JdYCAWifBJl",
"J5FnjFd6Zd2",
"orUDPs5UZOy",
"l9VjbwGwnYU",
"SIBiib8dmN",
"sFjjyJmtADu",
"uNBUHJEeCJ",
"MiTK2G00Tff",
"n-KxnUDOp8H",
"CDICEjYnfx"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_re... | [
" Dear Reviewer,\n\nWe thank you for your continued interest.\n\nWe are in fact unsure about the internal stage of the review process and whether reviewer assessments are concluded. Therefore, to minimize overhead for (S)ACs in handling our submission, we (propose to) focus on those aspects here that may be relevan... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"Yx0ouMbz1nH",
"ZFLzCLmvEe5",
"_ct4wu6D_V",
"63HKlrKJA32",
"UfKhfxEctIs",
"nips_2021_wEOlVzVhMW_",
"t00uuo-yBG",
"J5FnjFd6Zd2",
"nips_2021_wEOlVzVhMW_",
"xWd2uHmIRu",
"orUDPs5UZOy",
"l9VjbwGwnYU",
"SIBiib8dmN",
"MiTK2G00Tff",
"CDICEjYnfx",
"n-KxnUDOp8H",
"xWd2uHmIRu",
"nips_2021_wE... |
nips_2021_sesLIFMA9x9 | Automated Dynamic Mechanism Design | We study Bayesian automated mechanism design in unstructured dynamic environments, where a principal repeatedly interacts with an agent, and takes actions based on the strategic agent's report of the current state of the world. Both the principal and the agent can have arbitrary and potentially different valuations for the actions taken, possibly also depending on the actual state of the world. Moreover, at any time, the state of the world may evolve arbitrarily depending on the action taken by the principal. The goal is to compute an optimal mechanism which maximizes the principal's utility in the face of the self-interested strategic agent.We give an efficient algorithm for computing optimal mechanisms, with or without payments, under different individual-rationality constraints, when the time horizon is constant. Our algorithm is based on a sophisticated linear program formulation, which can be customized in various ways to accommodate richer constraints. For environments with large time horizons, we show that the principal's optimal utility is hard to approximate within a certain constant factor, complementing our algorithmic result. These results paint a relatively complete picture for automated dynamic mechanism design in unstructured environments. We further consider a special case of the problem where the agent is myopic, and give a refined efficient algorithm whose time complexity scales linearly in the time horizon. In the full version of the paper, we show that memoryless mechanisms, which are without loss of generality optimal in Markov decision processes without strategic behavior, do not provide a good solution for our problem, in terms of both optimality and computational tractability. 
Moreover, we present experimental results where our algorithms are applied to synthetic dynamic environments with different characteristics, which not only serve as a proof of concept for our algorithms, but also exhibit intriguing phenomena in dynamic mechanism design.
| accept | SUMMARY
The authors consider a dynamic mechanism design problem. In this problem a single agent interacts with a single principal over a time horizon of T rounds. The problem is as follows: There is a distribution over initial states. A start state is drawn from this distribution. In each round, the agent observes the state s and reports a state s'. The principal takes an action a based on the history of states and actions and the reported current state s'. For each state and action pair there is a distribution over next round states, from which the next state is drawn. Both principal and agent have possibly distinct valuation functions that operate on state-action pairs, and quasi-linear utility for money that can flow in either way.
The goal is to design a (possibly randomized) dynamic mechanism that maximizes the principal's utility subject to dynamic IC plus overall or dynamic IR.
The two main results are:
(1) When T is non-constant, it is NP hard to approximate the principal's maximum utility to within a factor of 7/8+eps (Theorem 1).
(2) There is a LP that finds the optimal principal's utility in time O(|S|^T,|A|^T,L) where S is the state space, A is the action space, and L is the number of bits needed to encode each input parameter (LP in Figure 1, Theorem 2).
The paper mentions a couple of other theoretical results including experiments, but all of these are deferred to the appendix.
The theoretical results include an algorithm that scales linearly in T for a myopic agent. A result that shows that memoryless mechanisms, which are optimal for MDPs, do not provide a good solution to the problem in the presence of strategic decisions. The experiments amongst others show that taking incentives into account matters and that optimal designs are remarkably robust against misaligned preferences.
Related work:
The main point of comparison for this paper is a paper by Papadimitriou et al. (SODA 2016 [24]), which considers a closely related mechanism design problem in which the principal's decision in each round is to allocate an item. They show that it is strongly NP hard to compute the optimal deterministic mechanism for a single bidder and two days; and give a LP based algorithm for computing the optimal randomized mechanism when the number of agents and types are both constant (See third comment below).
RECOMMENDATION:
All reviewers liked the motivating story and the theoretical results. Although somewhat related, the extension over Papadimitriou et al. seems significant enough to warrant publication in NeurIPS.
For the camera ready:
** I would strongly encourage to cut most of the space used for the LP-based approach (just mention the LP and the theorem with some discussion what the key ingredients are); then use the additional space to state the additional theoretical results and describe some of the experiments
** Also: I think it would be nice to compare in more detail where and why your results are different from Papadimitriou et al., both qualitatively and technically
** Discussion of [24]: You write that [24] give a poly-time LP-based algorithm "when the number of agents and the time horizon are both constant" - I couldn't parse their theorem (Theorem 8 in Section 5 of the arXiv version of that paper). It says "For any number of days D, and a constant number of independent bidders k, the optimal adaptive randomized auction can be found in time polynomial in the number of types and in the number of days."
(please double check that you cite it correctly)
** Related work: The deep learning approach to AMD should be credited to [13].
** Related work: For work on AMD through ML, please also cite:
Payment Rules through Discriminant Based Classifiers
P. Dutting, F. Fischer, P. Jirapinyo, J. K. Lai, B Lubin, D. C. Parkes
ACM EC'12 | train | [
"ogCdH756QA",
"t5nonynWlJc",
"UMkh55KOHCM",
"QxAPYAteKy",
"mYwpZRKazP",
"377bzi2AmH7",
"3w1V4nMynN9",
"5p055jRYErj",
"gpZZPtpC8PP"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors formulate and study dynamic mechanism design in an MDP-like setting. The motivation is principal-agent problems. The principal commits to a policy to choose actions given observations of states; the environment evolves according to some transitions dynamics based on these actions; the agent does not ac... | [
8,
7,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
2,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_sesLIFMA9x9",
"nips_2021_sesLIFMA9x9",
"377bzi2AmH7",
"ogCdH756QA",
"5p055jRYErj",
"t5nonynWlJc",
"gpZZPtpC8PP",
"nips_2021_sesLIFMA9x9",
"nips_2021_sesLIFMA9x9"
] |
nips_2021_WN1TaGjVC9U | A generative nonparametric Bayesian model for whole genomes | Generative probabilistic modeling of biological sequences has widespread existing and potential use across biology and biomedicine, particularly given advances in high-throughput sequencing, synthesis and editing. However, we still lack methods with nucleotide resolution that are tractable at the scale of whole genomes and that can achieve high predictive accuracy in theory and practice. In this article we propose a new generative sequence model, the Bayesian embedded autoregressive (BEAR) model, which uses a parametric autoregressive model to specify a conjugate prior over a nonparametric Bayesian Markov model. We explore, theoretically and empirically, applications of BEAR models to a variety of statistical problems including density estimation, robust parameter estimation, goodness-of-fit tests, and two-sample tests. We prove rigorous asymptotic consistency results including nonparametric posterior concentration rates. We scale inference in BEAR models to datasets containing tens of billions of nucleotides. On genomic, transcriptomic, and metagenomic sequence data we show that BEAR models provide large increases in predictive performance as compared to parametric autoregressive models, among other results. BEAR models offer a flexible and scalable framework, with theoretical guarantees, for building and critiquing generative models at the whole genome scale.
| accept | All four reviewers recommend accepting the submission, but three reviewers are not highly confident in their assessments and I am not highly confident in the other reviewer’s assessment; there is a disconnect between what this reviewer wrote and what rating this reviewer assigned. Therefore, I reviewed this submission carefully myself. The submission is well written, novel, and makes an important contribution to Bayesian modeling for genomic sequence data; I recommend accepting it. | train | [
"95kPtKEkt9N",
"e4o9kjnzfk6",
"s8hGSY3XNRS",
"I_EH7xn6jLC",
"W9m4wy3atBV",
"x5AwDpnPKe2",
"r5C_lrMrrKz",
"hY8sg7xIjUn",
"D5Tkj1yhfML",
"nAL939wsxsw",
"y_O-W654-pT",
"UVDxyGDK4ie",
"8zTfqKi4S6l"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their reply. In response to concern 4, we will add additional references to the most closely related uses of these AR models, in particular the compression literature discussed above with Reviewer MWYx.",
" Thank you for the detailed answer.\n\n1- I am not completely satisfied with the... | [
-1,
-1,
9,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"e4o9kjnzfk6",
"nAL939wsxsw",
"nips_2021_WN1TaGjVC9U",
"8zTfqKi4S6l",
"r5C_lrMrrKz",
"nips_2021_WN1TaGjVC9U",
"y_O-W654-pT",
"s8hGSY3XNRS",
"8zTfqKi4S6l",
"UVDxyGDK4ie",
"x5AwDpnPKe2",
"nips_2021_WN1TaGjVC9U",
"nips_2021_WN1TaGjVC9U"
] |
nips_2021_SFLSOd_hv-4 | Robust Predictable Control | Ben Eysenbach, Russ R. Salakhutdinov, Sergey Levine | accept | All 4 reviewers suggested acceptance of the paper and the authors clarified all open questions and concerns in their rebuttal. Therefore I am recommending acceptance of the paper. | test | [
"yjqYBXX2FTn",
"EqeDZiSBTAg",
"l-vmU8FPvl9",
"nySV003UuF5",
"ewgoUQ86dXg",
"tNjpa5PawWA",
"J-VrOVIm76Z",
"4asM5udOVLN",
"yzcUQU_gLhN",
"Oty48FAfVKF",
"D6CLNdnh8Qg",
"haOmuPpGPBU",
"pGCVvyhRbx"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your clarifications!",
" > Regarding detach, as long as the tools we are using to explain this concept are defined mathematically, it should be fine.\n\nWe will make sure to do that.",
" Thanks for your detailed explanations. \n\nRegarding `detach`, as long as the tools we are using to explain t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"tNjpa5PawWA",
"l-vmU8FPvl9",
"nySV003UuF5",
"4asM5udOVLN",
"pGCVvyhRbx",
"haOmuPpGPBU",
"D6CLNdnh8Qg",
"yzcUQU_gLhN",
"Oty48FAfVKF",
"nips_2021_SFLSOd_hv-4",
"nips_2021_SFLSOd_hv-4",
"nips_2021_SFLSOd_hv-4",
"nips_2021_SFLSOd_hv-4"
] |
nips_2021_QmxFsofRvW9 | Unsupervised Speech Recognition | Despite rapid progress in the recent past, current speech recognition systems still require labeled training data which limits this technology to a small fraction of the languages spoken around the globe. This paper describes wav2vec-U, short for wav2vec Unsupervised, a method to train speech recognition models without any labeled data. We leverage self-supervised speech representations to segment unlabeled audio and learn a mapping from these representations to phonemes via adversarial training. The right representations are key to the success of our method. Compared to the best previous unsupervised work, wav2vec-U reduces the phone error rate on the TIMIT benchmark from 26.1 to 11.3. On the larger English Librispeech benchmark, wav2vec-U achieves a word error rate of 5.9 on test-other, rivaling some of the best published systems trained on 960 hours of labeled data from only two years ago. We also experiment on nine other languages, including low-resource languages such as Kyrgyz, Swahili and Tatar.
| accept | This work was recommended for acceptance by all reviewers. They praised the thoroughness and replicability of the experimental setup, as well as the significance of the results. This work has the potential to strongly impact speech recognition for low-resource languages, and constitutes a step change in our ability to build practically usable speech recognition systems. | train | [
"XaPyp2Q1PBq",
"BgXWB9bCRb",
"thR7sNkOoxz",
"BVU5ShAM8JN",
"lKKp6AD7CWj",
"cJcQ8anuDky",
"Ly3sI52HO3k"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the response. My recommendation for the paper still stands.",
"This paper builds upon previous work on unsupervised speech recognition. This work introduces several improvements which are crucial to achieve impressive performance on a variety of datasets. These improvements are: using a ... | [
-1,
8,
-1,
-1,
-1,
8,
7
] | [
-1,
4,
-1,
-1,
-1,
4,
3
] | [
"BVU5ShAM8JN",
"nips_2021_QmxFsofRvW9",
"Ly3sI52HO3k",
"cJcQ8anuDky",
"BgXWB9bCRb",
"nips_2021_QmxFsofRvW9",
"nips_2021_QmxFsofRvW9"
] |
nips_2021_Y8YqrYeFftd | Robustness between the worst and average case | Several recent works in machine learning have focused on evaluating the test-time robustness of a classifier: how well the classifier performs not just on the target domain it was trained upon, but upon perturbed examples. In these settings, the focus has largely been on two extremes of robustness: the robustness to perturbations drawn at random from within some distribution (i.e., robustness to random perturbations), and the robustness to the worst case perturbation in some set (i.e., adversarial robustness). In this paper, we argue that a sliding scale between these two extremes provides a valuable additional metric by which to gauge robustness. Specifically, we illustrate that each of these two extremes is naturally characterized by a (functional) q-norm over perturbation space, with q=1 corresponding to robustness to random perturbations and q=\infty corresponding to adversarial perturbations. We then present the main technical contribution of our paper: a method for efficiently estimating the value of these norms by interpreting them as the partition function of a particular distribution, then using path sampling with MCMC methods to estimate this partition function (either traditional Metropolis-Hastings for non-differentiable perturbations, or Hamiltonian Monte Carlo for differentiable perturbations). We show that our approach provides substantially better estimates than simple random sampling of the actual “intermediate-q” robustness of both standard, data-augmented, and adversarially-trained classifiers, illustrating a clear tradeoff between classifiers that optimize different metrics. Code for reproducing experiments can be found at https://github.com/locuslab/intermediate_robustness.
| accept | The paper presents a new notion of robustness that interpolates between random perturbations and adversarial perturbations. The authors also present a MCMC based approach to evaluate the proposed robustness notion. All the reviewers agreed that this is an interesting and insightful notion of robustness and the results presented are above the bar for NeurIPS. There were concerns raised about the significance of the proposed definition, but the author response in this direction has been satisfactory. The authors should take into account the reviewers' comments before preparing the final version. | train | [
"tYYiJCPFvc",
"2M7PZbJUUa8",
"PSftnqVTRa_",
"vxgudcDFLta",
"QWMTj7FZ7uz",
"0Ya6Ysyiqm",
"q4BxK7rYSpG",
"AS5FmPwONxv",
"DKn4rHdfgbm"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer J5Fx,\n\nWe thank you again for your helpful and insightful review. Since the main question from your review was on better understanding our proposed intermediate-p robustness metric, we will be very grateful if you could check out [our response](https://openreview.net/forum?id=Y8YqrYeFftd¬eId=QW... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"AS5FmPwONxv",
"AS5FmPwONxv",
"nips_2021_Y8YqrYeFftd",
"0Ya6Ysyiqm",
"AS5FmPwONxv",
"PSftnqVTRa_",
"DKn4rHdfgbm",
"nips_2021_Y8YqrYeFftd",
"nips_2021_Y8YqrYeFftd"
] |
nips_2021_x6tV8QhHjs1 | Online Learning and Control of Complex Dynamical Systems from Sensory Input | Identifying an effective model of a dynamical system from sensory data and using it for future state prediction and control is challenging. Recent data-driven algorithms based on Koopman theory are a promising approach to this problem, but they typically never update the model once it has been identified from a relatively small set of observation, thus making long-term prediction and control difficult for realistic systems, in robotics or fluid mechanics for example. This paper introduces a novel method for learning an embedding of the state space with linear dynamics from sensory data. Unlike previous approaches, the dynamics model can be updated online and thus easily applied to systems with non-linear dynamics in the original configuration space. The proposed approach is evaluated empirically on several classical dynamical systems and sensory modalities, with good performance on long-term prediction and control.
| accept | From the SAC. This is an instance where the rebuttal and the discussion worked. While the original decision for this paper was to not accept, it is being raised to a recommended accept. The primary reason is the quality of the rebuttal, and the useful technical discussion between authors and reviewers that ensued that seems to have been revealing. To the authors: I trust that you will take all reviewer feedback into account and most importantly, that all of the things in your rebuttal and discussion that were promised will be done in the next version of the paper. | test | [
"JauNxqb49eT",
"IVaj4g0AfQ7",
"enbKA43lq6",
"9k9IBkTtwt7",
"riBYVqm033f",
"94fVaVojtj",
"lKiMJ7bu_YO",
"WIs-_Qj5olD",
"jQb0fqxULD",
"h3QnRJh1ql5",
"dnicaS5ai6W",
"4vhB5y8GK6n"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
" We thank the reviewer for his/her comments. We commit of course to adding a comprehensive description and discussion on the new 2D fluid dynamics experiments to the main paper if it is accepted. We are currently working on including control in this setting, similar to [6], but conclusive results are not ready yet... | [
-1,
-1,
6,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
3,
-1,
-1,
4,
5,
-1,
-1,
-1,
-1,
-1
] | [
"IVaj4g0AfQ7",
"dnicaS5ai6W",
"nips_2021_x6tV8QhHjs1",
"4vhB5y8GK6n",
"WIs-_Qj5olD",
"nips_2021_x6tV8QhHjs1",
"nips_2021_x6tV8QhHjs1",
"h3QnRJh1ql5",
"nips_2021_x6tV8QhHjs1",
"lKiMJ7bu_YO",
"enbKA43lq6",
"94fVaVojtj"
] |
nips_2021_zOngaSKrElL | Self-Supervised Bug Detection and Repair | Machine learning-based program analyses have recently shown the promise of integrating formal and probabilistic reasoning towards aiding software development. However, in the absence of large annotated corpora, training these analyses is challenging. Towards addressing this, we present BugLab, an approach for self-supervised learning of bug detection and repair. BugLab co-trains two models: (1) a detector model that learns to detect and repair bugs in code, (2) a selector model that learns to create buggy code for the detector to use as training data. A Python implementation of BugLab improves by 30% upon baseline methods on a test dataset of 2374 real-life bugs and finds 19 previously unknown bugs in open-source software.
| accept | The paper provides a novel approach to train a bug detector by co-training a bug injection procedure together with the bug detector. This is an interesting idea, and while the resulting bug detector has a high number of false positives, it was able to find new bugs in PyPI packages. (19 of 1000 reported bugs turned out to be real bugs).
The bug injection procedure is based on transformations that are meant to introduce bugs in the code at a known location; the transformations are hand-crafted, but the model learns where to apply them to make the bugs hard to find. The system also relies on semantics preserving transformations to introduce additional variety to the set of non-buggy programs. One limitation of the approach is that the bug introducing transformations may not actually be introducing bugs in all cases (for example, if they swap two variables that happen to be aliases of each other), and the semantics preserving transformation may not be semantics preserving (as one of the reviewers pointed out).
Overall, this is a strong paper; it presents a novel and interesting idea, and while there was a desire from some reviewers for additional baselines, the evaluation is still fairly convincing. One of the reviewers surfaced a paper that I think is relevant and should be cited ("Generating Adversarial Computer Programs using Optimized Obfuscations"), but that paper is quite different and does not detract from the novelty of this work.
| val | [
"T2lxXLtZ_--",
"owCi-4WNPzO",
"1H9NpRJ4KkJ",
"GoTCpnHjnkm",
"1l-iTtXnHCn",
"3uHkInROw5G",
"7TMcohVfsPd",
"uokunvYoEee",
"txs8Uob9F3k"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper describes a self-supervised ML-based bug fixer that uses a combination of a bug localization and repair model, with a bug injection model. The two models are identical in architecture: they select a program location to mutate, and then one of a few possible mutations and its parameters (e.g., swapping a... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_zOngaSKrElL",
"3uHkInROw5G",
"1l-iTtXnHCn",
"7TMcohVfsPd",
"txs8Uob9F3k",
"T2lxXLtZ_--",
"uokunvYoEee",
"nips_2021_zOngaSKrElL",
"nips_2021_zOngaSKrElL"
] |
nips_2021_Rz-hPxb6ODl | Faster Neural Network Training with Approximate Tensor Operations | We propose a novel technique for faster deep neural network training which systematically applies sample-based approximation to the constituent tensor operations, i.e., matrix multiplications and convolutions. We introduce new sampling techniques, study their theoretical properties, and prove that they provide the same convergence guarantees when applied to SGD training. We apply approximate tensor operations to single and multi-node training of MLP and CNN networks on MNIST, CIFAR-10 and ImageNet datasets. We demonstrate up to 66% reduction in the amount of computations and communication, and up to 1.37x faster training time while maintaining negligible or no impact on the final test accuracy.
| accept | This paper presents a method for approximating the matrix multiplication and convolution operations in neural networks. Theoretical results show that the gradient estimates are robust to the approximation applied in the backwards pass (but not necessarily the forwards pass). Experiments show that the number of computations can be reduced significantly without degrading test accuracy.
Reviewers generally feel like the paper is sound overall, without significant gaps in correctness or discussion of prior work. They had various specific comments which they feel were mostly addressed in the author response. However, the reviewers kept their scores at middling values, due to concerns about whether the method translates into significant wall clock gains, whether it outperforms other approximations, and whether the noise robustness will extend to other architectures. These are all valid concerns, but this approach seems like a worthwhile addition to the toolbox, and the authors have validated it in a reasonable variety of situations. I'd favor acceptance.
| train | [
"YQRGuya21iF",
"9-o7RHFhon8",
"n-13B6URnie",
"uGVVw5-X45G",
"ShINj0a5EXW",
"kSHj81FTqDs",
"ZwyGt5nEOzA",
"_tbQyTPKiEX",
"7myy8LcfNwo",
"aT4P5N2WXdQ",
"MDyXErCkWvR",
"GIx3asTjNld",
"ob5VWc80V7Z",
"LGP1HkcJuEq"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarifications. I will maintain my score.",
" I would like to thank the authors for addressing my concerns and clarifying questions. I remain with my original score.",
" We thank reviewer pWRo for the comments and updated score.\n\nDue to the 8 page limit we had to compromise between the paper ... | [
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"MDyXErCkWvR",
"aT4P5N2WXdQ",
"_tbQyTPKiEX",
"kSHj81FTqDs",
"nips_2021_Rz-hPxb6ODl",
"GIx3asTjNld",
"nips_2021_Rz-hPxb6ODl",
"7myy8LcfNwo",
"ZwyGt5nEOzA",
"LGP1HkcJuEq",
"ob5VWc80V7Z",
"ShINj0a5EXW",
"nips_2021_Rz-hPxb6ODl",
"nips_2021_Rz-hPxb6ODl"
] |
nips_2021_pZHGKM9mAp | Learning Interpretable Decision Rule Sets: A Submodular Optimization Approach | Rule sets are highly interpretable logical models in which the predicates for decision are expressed in disjunctive normal form (DNF, OR-of-ANDs), or, equivalently, the overall model comprises an unordered collection of if-then decision rules. In this paper, we consider a submodular optimization based approach for learning rule sets. The learning problem is framed as a subset selection task in which a subset of all possible rules needs to be selected to form an accurate and interpretable rule set. We employ an objective function that exhibits submodularity and thus is amenable to submodular optimization techniques. To overcome the difficulty arising from dealing with the exponential-sized ground set of rules, the subproblem of searching a rule is cast as another subset selection task that asks for a subset of features. We show it is possible to write the induced objective function for the subproblem as a difference of two submodular (DS) functions to make it approximately solvable by DS optimization algorithms. Overall, the proposed approach is simple, scalable, and likely to benefit from further research on submodular optimization. Experiments on real datasets demonstrate the effectiveness of our method.
| accept | The authors provide an interesting connection between an interpretable decision rule set and submodular optimization. Technically, they consider either a regularized submodular maximization framework or the difference of two submodular functions.
I found the formalization very interesting. Also, the authors had a successful rebuttal and a convincing new set of experiments. Regarding scaling up the experiments, the authors may look at a recent ICML paper on "Regularized Submodular Maximization at Scale" by Kazemi et al. Also, there is a related paper on interpretability using submodularity that the authors may consider looking into: "Streaming Weak Submodularity: Interpreting Neural Networks on the Fly" by Elenberg et al.
All in all, this is an interesting paper and I suggest acceptance. | train | [
"EYTXFg2x8Iz",
"oR3RbK9VvUA",
"iYV2k5UJbi0",
"0ftiGtsEc4Z",
"eSS5OS_Ifm5",
"iR366aYNmbT",
"W-atO3OeUMH",
"wEHEVc3r3jE",
"dk0a_qRj-2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper addresses learning rule set models for binary classification, ie combinations of logical rules on features to classify a positive class. Since learning rule sets requires selection of features over which the logical rules are applied and selection of the rules themselves, the subset selection problem is... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_pZHGKM9mAp",
"nips_2021_pZHGKM9mAp",
"0ftiGtsEc4Z",
"iR366aYNmbT",
"oR3RbK9VvUA",
"nips_2021_pZHGKM9mAp",
"dk0a_qRj-2",
"EYTXFg2x8Iz",
"nips_2021_pZHGKM9mAp"
] |
nips_2021_IKz9uYkf3vZ | Spatial-Temporal Super-Resolution of Satellite Imagery via Conditional Pixel Synthesis | High-resolution satellite imagery has proven useful for a broad range of tasks, including measurement of global human population, local economic livelihoods, and biodiversity, among many others. Unfortunately, high-resolution imagery is both infrequently collected and expensive to purchase, making it hard to efficiently and effectively scale these downstream tasks over both time and space. We propose a new conditional pixel synthesis model that uses abundant, low-cost, low-resolution imagery to generate accurate high-resolution imagery at locations and times in which it is unavailable. We show that our model attains photo-realistic sample quality and outperforms competing baselines on a key downstream task – object counting – particularly in geographic locations where conditions on the ground are changing rapidly.
| accept | Three of the four reviewers recommended accepting the paper , and one of them increased the score following the rebuttal. I am happy to accept it, I encourage the authors to include the additional material that they discussed in the rebuttal in the final version. | test | [
"G9X9pG9dkGY",
"pHyelHs9Sso",
"mhVceR1nc2_",
"Z2DLBriY_AP",
"J4sDpoxmvTS",
"NVBZ2BqmzyD",
"aXnCNZ8xvr",
"3Q1zI0G-5B"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The submission deals with high resolution satellite images, given a low resolution image. The method is trained by learning to predict a high resolution image given a low resolution one, adversarially, knowing the true high resolution. The point is to use LR imagery to predict HR at points in time where actual HR... | [
6,
-1,
-1,
-1,
-1,
8,
5,
7
] | [
5,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"nips_2021_IKz9uYkf3vZ",
"3Q1zI0G-5B",
"aXnCNZ8xvr",
"G9X9pG9dkGY",
"NVBZ2BqmzyD",
"nips_2021_IKz9uYkf3vZ",
"nips_2021_IKz9uYkf3vZ",
"nips_2021_IKz9uYkf3vZ"
] |
nips_2021_PlGSgjFK2oJ | On Memorization in Probabilistic Deep Generative Models | Recent advances in deep generative models have led to impressive results in a variety of application domains. Motivated by the possibility that deep learning models might memorize part of the input data, there have been increased efforts to understand how memorization arises. In this work, we extend a recently proposed measure of memorization for supervised learning (Feldman, 2019) to the unsupervised density estimation problem and adapt it to be more computationally efficient. Next, we present a study that demonstrates how memorization can occur in probabilistic deep generative models such as variational autoencoders. This reveals that the form of memorization to which these models are susceptible differs fundamentally from mode collapse and overfitting. Furthermore, we show that the proposed memorization score measures a phenomenon that is not captured by commonly-used nearest neighbor tests. Finally, we discuss several strategies that can be used to limit memorization in practice. Our work thus provides a framework for understanding problematic memorization in probabilistic generative models.
| accept | The paper introduces a memorization score for generative models which quantifies how much the log-probability of a given datapoint (under the trained model) depends on its presence in the training set, and proposes an efficient cross-validation-based estimator for it. The authors apply this metric to VAEs trained on several datasets and produce several surprising findings about memorization in such models.
This is an interesting and well written paper that sheds some light on an important problem. The approach is simple and sensible and the experimental results are quite thought-provoking. The main concerns the reviewers had about the paper were the lack of guidance about what constitutes a high memorization score and the lack of precision in the claim that memorization is different from overfitting. The authors are encouraged to address these when revising the paper, which has the potential to be an influential contribution to the field. | train | [
"MZ2UDH8lgYF",
"Laa_Y_Zbso6",
"AX_R4YB71iJ",
"vOrPVUQkgQz",
"2wrh_ktFDCx",
"caTR1M824u2",
"1s495e7o4_",
"67zDXtmGEJC",
"Di9YGZNDNu",
"3pgNfSVRyxC",
"b0Lg1gA_l_H",
"-5WQ32dLoLO",
"2gkCZXwppDz",
"QvZTozGGF_"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We thank the reviewer for the additional comments.\n\n> This alleviates my original concern slightly, but I don’t think that this distinction is as fundamental as suggested/claimed in the paper.\n\nWe don't think that we claim a fundamental distinction in our work, and as we mentioned above we agree that in some ... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1,
4
] | [
"Laa_Y_Zbso6",
"vOrPVUQkgQz",
"2wrh_ktFDCx",
"caTR1M824u2",
"3pgNfSVRyxC",
"2gkCZXwppDz",
"nips_2021_PlGSgjFK2oJ",
"nips_2021_PlGSgjFK2oJ",
"nips_2021_PlGSgjFK2oJ",
"QvZTozGGF_",
"67zDXtmGEJC",
"Di9YGZNDNu",
"1s495e7o4_",
"nips_2021_PlGSgjFK2oJ"
] |
nips_2021_xmx5rE9QP7R | You Are the Best Reviewer of Your Own Papers: An Owner-Assisted Scoring Mechanism | I consider the setting where reviewers offer very noisy scores for a number of items for the selection of high-quality ones (e.g., peer review of large conference proceedings) whereas the owner of these items knows the true underlying scores but prefers not to provide this information. To address this withholding of information, in this paper, I introduce the Isotonic Mechanism, a simple and efficient approach to improving on the imprecise raw scores by leveraging certain information that the owner is incentivized to provide. This mechanism takes as input the ranking of the items from best to worst provided by the owner, in addition to the raw scores provided by the reviewers. It reports adjusted scores for the items by solving a convex optimization problem. Under certain conditions, I show that the owner's optimal strategy is to honestly report the true ranking of the items to her best knowledge in order to maximize the expected utility. Moreover, I prove that the adjusted scores provided by this owner-assisted mechanism are indeed significantly more accurate than the raw scores provided by the reviewers. This paper concludes with several extensions of the Isotonic Mechanism and some refinements of the mechanism for practical considerations.
| accept | Researchers in machine learning and many other fields have regularly complained about the quality of reviews. This paper proposes a novel idea to mitigate the "noise" in the decisions in a peer-reviewed conference. The idea leverages the known fact from statistics and optimization that given a ranking of papers, isotonic optimization (finding the closest answer under the given ranking) can yield significant reduction in noise. The key (very nice) idea in this paper is to ask authors to provide a ranking of their own submissions. The scores given by the reviewers are then projected on these provided rankings. The paper shows that under certain assumptions, authors are incentivized to report their true perceived rankings of their authored papers, which is then expected to yield the noise-reducing benefits of isotonic optimization.
I have read the paper carefully myself, and the reviewers and I uniformly agree that this is a novel contribution to an important problem. Overall, this paper is a "novel but imperfect" paper. The assumptions here make the current proposal not-yet-practical. Moreover, the paper claims that the assumed conditions are "mild" whereas our assessment is that these assumptions are very strong. (More on this below.) However, I recommend acceptance due to the really fresh perspective offered by this paper to a problem of interest to the NeurIPS community. (I am aware of the typical bias against novel papers in peer review.)
Interestingly, the paper is also timely given the NeurIPS 2021 experiment of authors reporting the rankings of their papers (although NeurIPS 2021 is not using it for making decisions, unlike the proposal of this paper).
With this acceptance, please ensure the following three action items in the camera ready version:
(A) Please include a thorough discussion in the main text regarding the strong assumptions and resulting challenges. It is important to discuss them not only to ensure that the reader gets an accurate understanding of the work, but also to facilitate follow-up research towards taking this novel idea to practice.
(B) Please incorporate the reviewers' comments and the items promised in the rebuttal.
(C) Please see the comments at the bottom of this meta-review regarding *experiments* quantifying the *magnitude* of benefits, and act accordingly.
Based on the reviews as well as my reading of the paper, here is a summary of some issues that arise with the mechanism due to its strong assumptions.
- - -
1. Convex utilities: This can incentivize reporting of wrong ranking
The paper assumes convex utilities, without any supporting evidence. In my opinion and that of multiple reviewers, the utility for most authors will be quite non-convex with a step-like upward curve near the acceptance threshold. If the utility is non-convex then it incentivizes reporting of the wrong ranking. Here is an example. Suppose I am submitting two papers: one is I think a near-sure accept, and the other is just below bar. Then suppose I rank the worse paper as higher. In this case, under the isotonic method, the top paper's scores will reduce a bit and the worse paper's score will improve a bit. Then my top paper will still get accepted (and hence ranking wrongly didn't really affect me much for this paper) whereas the worse paper has a much higher chance of acceptance under the wrong ranking!
The claim "threshold utilities may not be a sensible choice for modeling the behavior of the user" made in the paper is unsubstantiated and hard to believe. Please remove such claims unless there is supporting evidence.
2. L2 error
It is known that isotonic optimization is good for L2 error. But why is the L2 error a good metric for peer review? In practice, making the accept/reject decision accurately will be considerably important but that is not captured in the L2 error metric analyzed here. Please clarify its purpose in the revision. It is ok to say that this is chosen for theoretical convenience, if that is the case.
3. Does not account for self-selection, and can incentivize submission of low quality papers
Suppose I have a paper that is below the bar (which in my opinion is worse off than all my other submissions) and I am not planning to submit it in the conventional setting (I don't care about this paper since it is not in good shape yet). However if the isotonic method is employed, then the following happens. I rank this paper as worst. If the ratings received are the lowest then it doesn't affect anything. However, if due to noise, it gets rated higher than other papers, then the isotonic method will increase the rating of my other papers, thereby increasing their chance of acceptance. Hence submitting a low quality paper can help me get my other papers accepted.
4. Not accounting for other strategic behavior: This gives undesirably high power to the reviewer
There are a number of issues of dishonest behavior by reviewers in peer review. One example is where a reviewer doesn't like an author or an author's line of research and so deliberately tries to reject their work. This method will give even more power to this dishonest reviewer. For instance, if this reviewer is reviewing that author's work, then giving a low score to that paper will not only reject this paper but under the isotonic mechanism, can also get other papers by this author rejected. As another example, if there is a reviewing ring, if an author has multiple submissions and knows that her strongest paper is being reviewed by people in her reviewing ring, then she can report that it is her weakest paper and presumably benefit from the highly positive reviews given to her now-"weakest" paper.
5. Exchangibility and authors' stress
Submitting papers is already quite stressful for authors, and the dependence of the decisions on these self reports may significantly add to the stress felt by authors. In particular, it can require authors to estimate the review process, and not just their own paper, which can be stressful. Also, what exactly should you ask authors? To rank papers in order of their perceived chances of acceptance? Or in terms of their scientific contribution? These two things need not be the same. The assumption on exchangeability of noise is unnatural, for instance, it is known that novel papers suffer from biases against novelty and interdisciplinary papers also suffer in the peer review process.
6. "Guest authorship"
The process of publishing in single blind venues has a problem of guest authorship, where a researcher is made an author to give increased visibility or chances of acceptance to the paper even though that researcher has made no real contribution. Will this mechanism further incentivize guest authorship -- somebody who can provide a ranking that can increase chances of a paper getting accepted?
- - -
The authors should revise the paper to make the shortcomings very clear. For instance, the abstract calls the conditions "mild", which does not represent the nature of assumptions accurately, and was disconcerting. Please clarify the nature of the assumptions here and elsewhere in the paper. The exchangeability condition on noise is also called "mild" in the paper, which as discussed above, is not. Please expand the discussion section to discuss these issues in the main text. Section 2.3 can be shortened to make room for this. It is important to discuss them not only to ensure that the reader gets an accurate understanding of the work, but also to facilitate follow-up research to make this novel idea practical.
Minor point: In my opinion, calling it "author-assisted" instead of "owner-assisted" will be more clear.
Other points raised in the discussions: Reviewers were concerned about the lack of experiments. I had a number of back-and-forth exchanges with the reviewers on this. We discussed that there is no real data available on authors' perceptions of their own papers and no ground truth on papers to validate. As for synthetic experiments, we discussed that there is a good amount of literature showing the magnitude of benefits of isotonic optimization, but the author should either cite literature empirically quantifying the magnitude of benefits of isotonic optimization (not just orderwise but in terms of actual values) or even better include synthetic simulations in the appendix to illustrate this. There was also some confusion where a reviewer misunderstood that the authors have to rank all submitted papers, and this confusion was clarified in a discussion between the AC and reviewer. | train | [
"a2bmZOOfcjS",
"oNouv4isMp0",
"15UKK4J6cmG",
"cHCBZcXfq4N",
"PjSNa6XhUPA",
"mrpkBM2Xh6v",
"Jsjd1-Nefhe",
"phA1lSzXIUX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper looks at the question of improving the process of peer review by allowing authors to submit a ranking of their own papers. The idea is to use this ranking from the authors to improve the noisy rankings coming from the reviewers. The authors show that one can formulate this problem as an isotonic regression... | [
6,
6,
8,
-1,
-1,
-1,
-1,
7
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_xmx5rE9QP7R",
"nips_2021_xmx5rE9QP7R",
"nips_2021_xmx5rE9QP7R",
"oNouv4isMp0",
"15UKK4J6cmG",
"phA1lSzXIUX",
"a2bmZOOfcjS",
"nips_2021_xmx5rE9QP7R"
] |
nips_2021_aF60hOEwHP | Garment4D: Garment Reconstruction from Point Cloud Sequences | Learning to reconstruct 3D garments is important for dressing 3D human bodies of different shapes in different poses. Previous works typically rely on 2D images as input, which however suffer from the scale and pose ambiguities. To circumvent the problems caused by 2D images, we propose a principled framework, Garment4D, that uses 3D point cloud sequences of dressed humans for garment reconstruction. Garment4D has three dedicated steps: sequential garments registration, canonical garment estimation, and posed garment reconstruction. The main challenges are two-fold: 1) effective 3D feature learning for fine details, and 2) capture of garment dynamics caused by the interaction between garments and the human body, especially for loose garments like skirts. To unravel these problems, we introduce a novel Proposal-Guided Hierarchical Feature Network and Iterative Graph Convolution Network, which integrate both high-level semantic features and low-level geometric features for fine details reconstruction. Furthermore, we propose a Temporal Transformer for smooth garment motions capture. Unlike non-parametric methods, the reconstructed garment meshes by our method are separable from the human body and have strong interpretability, which is desirable for downstream tasks. As the first attempt at this task, high-quality reconstruction results are qualitatively and quantitatively illustrated through extensive experiments. Codes are available at https://github.com/hongfz16/Garment4D.
| accept |
This paper proposes a framework to reconstruct 3D garment models from 3D point cloud sequences of dressed humans. Reviewers raised concerns regarding the robustness of the model to segmentation errors, limited real-world results, similarities to recent works, the requirement for body point clouds (under the garment), and the accuracy of the proposed method. The rebuttal submitted by the authors addressed these concerns by showing real-world results and the feasibility of obtaining body point clouds using the SMPL model. Though the relevance of the paper to a wide NeurIPS audience may still be in question, all reviewers are positive about the paper, and it is suggested for publication.
| test | [
"tIL-xz8pDbY",
"us3KLyACySl",
"XJ8_twHtMDu",
"BIhfa69S_ph",
"fSg0YAFn-Ar",
"fN2vlcmiInZ",
"L1JfUDA4544",
"A6kMMx9Al0",
"o2FwsV9xqlZ",
"OhB7M3VSYm1",
"eU6zFu7pIqQ",
"uZ7eOCzI5Ni",
"1xio8qUqru",
"kzf5AmxOBJz",
"A105Fj8ylq"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for the feedback. And yes, we will release the code upon paper acceptance.",
" Thanks for the rebuttal. The new results have addressed most of my concerns, so I will keep my original rating. One more question is: will the code be publicly released? The code should greatly facilitate the research of th... | [
-1,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"us3KLyACySl",
"A6kMMx9Al0",
"nips_2021_aF60hOEwHP",
"OhB7M3VSYm1",
"L1JfUDA4544",
"nips_2021_aF60hOEwHP",
"uZ7eOCzI5Ni",
"eU6zFu7pIqQ",
"uZ7eOCzI5Ni",
"1xio8qUqru",
"A105Fj8ylq",
"fN2vlcmiInZ",
"XJ8_twHtMDu",
"nips_2021_aF60hOEwHP",
"nips_2021_aF60hOEwHP"
] |
nips_2021_74RmfBweB60 | Fast Policy Extragradient Methods for Competitive Games with Entropy Regularization | This paper investigates the problem of computing the equilibrium of competitive games, which is often modeled as a constrained saddle-point optimization problem with probability simplex constraints. Despite recent efforts in understanding the last-iterate convergence of extragradient methods in the unconstrained setting, the theoretical underpinnings of these methods in the constrained settings, especially those using multiplicative updates, remain highly inadequate, even when the objective function is bilinear. Motivated by the algorithmic role of entropy regularization in single-agent reinforcement learning and game theory, we develop provably efficient extragradient methods to find the quantal response equilibrium (QRE)---which are solutions to zero-sum two-player matrix games with entropy regularization---at a linear rate. The proposed algorithms can be implemented in a decentralized manner, where each player executes symmetric and multiplicative updates iteratively using its own payoff without observing the opponent's actions directly. In addition, by controlling the knob of entropy regularization, the proposed algorithms can locate an approximate Nash equilibrium of the unregularized matrix game at a sublinear rate without assuming the Nash equilibrium to be unique. Our methods also lead to efficient policy extragradient algorithms for solving entropy-regularized zero-sum Markov games at a linear rate. All of our convergence rates are nearly dimension-free, which are independent of the size of the state and action spaces up to logarithm factors, highlighting the positive role of entropy regularization for accelerating convergence.
| accept | This paper treats the convergence of a regularized variant of the multiplicative weights and optimistic multiplicative weights update (MWU and OMWU respectively) in zero-sum games. A second part of the paper concerns the applications of these methods to zero-sum Markov games.
This paper was extensively discussed by the committee, and the reviewers identified both strong and weak points in the paper. On the positive side, the authors' linear convergence result for the regularized MWU/OMWU methods seems to be new (at the very least, the committee members were not aware of an equivalent result in the literature). On the negative side, the presentation and positioning of the paper left a lot to be desired, especially with regard to the similarity of the proposed regularized methods to other existing methods – and, in particular, running MWU/OMWU on a regularized game.
The main point of contention is as follows: if one considers an entropic regularization of the underlying game, it is immediate to see that the game's set of Nash equilibria is replaced by a unique quantal response equilibrium - or, rather, a logit equilibrium (as QRE are called in the context of entropic regularization; the term "Nash distribution" is also sometimes used). Thus, by focusing on QRE, the authors are essentially side-stepping the unique equilibrium requirement: they do not require equilibrium uniqueness, but they prove convergence to a perturbed equilibrium, not a Nash equilibrium. This point is crucial for the proper positioning of the paper, but it is not made clear by the authors.
Building on this, given that logit equilibria are _de facto_ interior, and given that the regularized game is strongly monotone, it is natural to expect a geometric rate of convergence for mirror descent / mirror-prox methods. [This, after all, is an entropic variant of the standard Tikhonov regularization approach.] Of course, the fact that this may be a "natural" result does not subtract from its merit: however, a much clearer presentation of the topic would be expected in order to position these results in the proper context. [Further compounding the issue is that the authors' algorithm is not _exactly_ MWU/OMWU ran on the regularized game but closely related to it - and, if anything, this raises the question of why the authors chose one variant over the other (especially since the former approach would seem simpler to analyze)]
Finally, regarding the applications to Markov games and $Q$-learning: one important limitation is that the paper operates in the "full-information" framework (with respect to individual player knowledge, not global one). While this assumption is meaningful from an "offline" viewpoint, it is harder to justify in the online setting where such algorithms are typically deployed (since only obtained payoffs are observed and counterfactual reasoning is not feasible in general). The authors touch on this issue briefly in the conclusions section, where they comment on the relevance of two-time-scale algorithms for the $Q$-learning problem: in this regard, the authors might want to check the 2005 paper of Leslie and Collins [1] and a follow-up by Coucheney et al. [2]. Both papers treat the problem of learning in normal-form games with entropic regularization and partial information, and the algorithms studied in both papers are very closely related to the regularized MWU algorithm studied by the authors; more to the point, [1] considers a two-time-scale variant that seems to do what the authors suggest in the conclusions section (modulo the extra-gradient part).
To sum up, the extent of the revision required to bring the paper's contributions into focus led the committee to the conclusion that the paper should go through another round of review before being considered again for publication. The decision to reject the current version was taken in this light, and I would strongly encourage the authors to resubmit a suitably revised version of the paper at the next opportunity once they have addressed the committee's concerns.
[1] D. S. Leslie and E. J. Collins, Individual Q-learning in normal form games, SIAM Journal on Control and Optimization 44 (2005), no. 2, 495–514.
[2] P. Coucheney, B. Gaujal, and P. Mertikopoulos, Penalty-regulated dynamics and robust learning procedures in games, Mathematics of Operations Research 40 (2015), no. 3, 611–633.
"JiPC4aNCbh7",
"JLjhMGbGMsa",
"oS8IIxAtAnk",
"DB9XREMtIi",
"Fkzdp8XMnCm",
"aKPRuW--g_O",
"7ad4NhC12Y",
"bAarApgpWhv",
"aZqyiIgj5uu",
"ILk2Stia8K9",
"fXLsiEvNBC",
"mHwkHMCLT_B",
"q6NNc42ed-a"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to leverage entropic regularization and optimistic-type methods to solve zero-sum (ZS) matrix games and ZS Markov games.\nThe authors begin with ZS matrix games on the simplex: they add entropic regularization, thus transforming this game into a strongly-concave/strongly-concave problem.\nSligh... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
1
] | [
"nips_2021_74RmfBweB60",
"bAarApgpWhv",
"DB9XREMtIi",
"Fkzdp8XMnCm",
"aKPRuW--g_O",
"aZqyiIgj5uu",
"q6NNc42ed-a",
"JiPC4aNCbh7",
"mHwkHMCLT_B",
"fXLsiEvNBC",
"nips_2021_74RmfBweB60",
"nips_2021_74RmfBweB60",
"nips_2021_74RmfBweB60"
] |
nips_2021_uY-XMIbyXec | Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data | There has been a recent surge of interest in designing Graph Neural Networks (GNNs) for semi-supervised learning tasks. Unfortunately this work has assumed that the nodes labeled for use in training were selected uniformly at random (i.e. are an IID sample). However in many real world scenarios gathering labels for graph nodes is both expensive and inherently biased -- so this assumption cannot be met. GNNs can suffer poor generalization when this occurs, by overfitting to superfluous regularities present in the training data. In this work we present a method, Shift-Robust GNN (SR-GNN), designed to account for distributional differences between biased training data and the graph's true inference distribution. SR-GNN adapts GNN models for the presence of distributional shifts between the nodes which have had labels provided for training and the rest of the dataset. We illustrate the effectiveness of SR-GNN in a variety of experiments with biased training datasets on common GNN benchmark datasets for semi-supervised learning, where we see that SR-GNN outperforms other GNN baselines by accuracy, eliminating at least (~40%) of the negative effects introduced by biased training data. On the largest dataset we consider, ogb-arxiv, we observe a 2% absolute improvement over the baseline and reduce 30% of the negative effects.
| accept | This paper studies the influence of biases in training data that are due to distributional shifts on the use of GNNs for semi-supervised learning. The key idea is to adopt the linearized GNN models and the Central Moment Discrepancy (CMD) between (biased) training and i.i.d. samples for regularization. This paper proposes a new method to tackle this bias issue, i.e., the Shift-robust GNN (SR-GNN). SR-GNN is shown to perform better than standard GNNs in the presence of distribution-related bias.
However, there exists some limitations as follows.
1) The practicality: does distributional shift really exist in real-world graph datasets, such as the Cora, Citeseer, PubMed, and ogb-arxiv datasets? Meanwhile, all of the experiments in this paper were conducted on post-intervention datasets, which may not occur in real life.
2) The relationship: Uneven labeling means that the training nodes are not labeled randomly or uniformly over the graph. Distributional shift means that there exists a gap between the training and test data distributions. Is there any relationship between the label shift and the distributional shift? Why does the deviation of the labels lead to the deviation of the training set?
3) The shift: how can the difference between the distributions of the training set and the test set be varied quantitatively? What is the specific value of the distribution difference for the datasets in Table 1 and Table 2? Does increasing alpha mean increasing the distribution gap? If your answer is yes, please give some theoretical or experimental proofs. If not, please add experiments similar to Figure 1, that is, showing that when the distribution difference changes from small to large, your proposed method can alleviate the negative correlation between performance and distribution difference to a certain extent.
This paper is a borderline case according to the average rating. While the reviewers had some concerns on the significance, the authors did a particularly good job in their rebuttal. Thus, all of us have agreed to marginally accept this paper for publication! Please include the additional experimental results in the next version.
"H7MF-g658n",
"jAUUCE0gZfh",
"lHWFVRonH6r",
"b5lpRvr3vZ",
"g0R__OtyP3K",
"4Qo2I5AX-vm",
"5TTHd17SoTy",
"xW5Ip6YpXHR",
"D6MVPz5RsCA",
"8uix2U7beo2",
"85lTfIAAueQ",
"ZEaYz7Kkhc6"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\nSorry for my late reply and thanks for the authors' extensive experiments.\nMy concerns are addressed well.\nBest",
" Dear Reviewer 6hbj,\n\nThanks again for your suggestions. We will definitely include the discussion and comparison between ours and DANN in the revised manuscript as well as the i... | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
4,
3,
3
] | [
"D6MVPz5RsCA",
"lHWFVRonH6r",
"g0R__OtyP3K",
"5TTHd17SoTy",
"ZEaYz7Kkhc6",
"nips_2021_uY-XMIbyXec",
"85lTfIAAueQ",
"8uix2U7beo2",
"4Qo2I5AX-vm",
"nips_2021_uY-XMIbyXec",
"nips_2021_uY-XMIbyXec",
"nips_2021_uY-XMIbyXec"
] |
nips_2021_CEkbBN_-Ja8 | RIM: Reliable Influence-based Active Learning on Graphs | Message passing is the core of most graph models such as Graph Convolutional Network (GCN) and Label Propagation (LP), which usually require a large amount of clean labeled data to smooth out the neighborhood over the graph. However, the labeling process can be tedious, costly, and error-prone in practice. In this paper, we propose to unify active learning (AL) and message passing towards minimizing labeling costs, e.g., making use of few and unreliable labels that can be obtained cheaply. We make two contributions towards that end. First, we open up a perspective by drawing a connection between AL enforcing message passing and social influence maximization, ensuring that the selected samples effectively improve the model performance. Second, we propose an extension to the influence model that incorporates an explicit quality factor to model label noise. In this way, we derive a fundamentally new AL selection criterion for GCN and LP--reliable influence maximization (RIM)--by considering quantity and quality of influence simultaneously. Empirical studies on public datasets show that RIM significantly outperforms current AL methods in terms of accuracy and efficiency.
| accept | After some of the reviewers' concerns were resolved by the author feedback, the reviewers unanimously suggest accepting the submission. They generally found the submission interesting, theoretically sound, and well-presented. The authors should carefully incorporate the reviewer comments, including their clarifications, into the final version.
| train | [
"g54hGOuNe2",
"RO_XFLlYdnk",
"GiSyrOw2wRl",
"xYseJQNo_SN",
"Z6NwaB629H",
"pDJVuHVrdvR",
"R-1OpeSuuDx",
"TSSD6K4eNpI",
"uMxnPJWMJsP",
"KL-NNCBRZXZ",
"KRnTE5jO7hG",
"jEsLt9BDry_"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your helpful and insightful reviews.\n\nAs shown in both Figure 2 and Figure 4(b) of the original paper, we have already demonstrated the advantage of influence quantity in a setting without label noise. \nBesides, we have also carefully explained the relationship between Eq.8 and Eq.9 in our previous... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"KL-NNCBRZXZ",
"TSSD6K4eNpI",
"nips_2021_CEkbBN_-Ja8",
"Z6NwaB629H",
"pDJVuHVrdvR",
"GiSyrOw2wRl",
"jEsLt9BDry_",
"KRnTE5jO7hG",
"KL-NNCBRZXZ",
"nips_2021_CEkbBN_-Ja8",
"nips_2021_CEkbBN_-Ja8",
"nips_2021_CEkbBN_-Ja8"
] |
nips_2021_QE_h_FgqdEu | Dynamical Wasserstein Barycenters for Time-series Modeling | Many time series can be modeled as a sequence of segments representing high-level discrete states, such as running and walking in a human activity application. Flexible models should describe the system state and observations in stationary ``pure-state'' periods as well as transition periods between adjacent segments, such as a gradual slowdown between running and walking. However, most prior work assumes instantaneous transitions between pure discrete states. We propose a dynamical Wasserstein barycentric (DWB) model that estimates the system state over time as well as the data-generating distributions of pure states in an unsupervised manner. Our model assumes each pure state generates data from a multivariate normal distribution, and characterizes transitions between states via displacement-interpolation specified by the Wasserstein barycenter. The system state is represented by a barycentric weight vector which evolves over time via a random walk on the simplex. Parameter learning leverages the natural Riemannian geometry of Gaussian distributions under the Wasserstein distance, which leads to improved convergence speeds. Experiments on several human activity datasets show that our proposed DWB model accurately learns the generating distribution of pure states while improving state estimation for transition periods compared to the commonly used linear interpolation mixture models.
| accept | The paper proposes a dynamical Wasserstein barycenter for modeling time series; observations are considered to be sampled from a Bures-Wasserstein barycenter of states with time-dependent mixing coefficients. The paper compares Wasserstein interpolation of states to linear interpolation of states.
Reviewers were positive about the paper in the sense that it demonstrated an advantage over a GMM model with linear interpolation. Several questions were raised on why the approach is restricted to the Bures barycenter, on going beyond the Gaussianity assumption, on possibly using other baselines such as neural networks, and on providing more experiments on more challenging benchmarks.
I think the paper has value as a new application of Wasserstein barycenters to time-series modeling, and the authors were transparent about the limitations of the work. Weak accept
"XbESp_sVryD",
"1XE1U-BV5zq",
"JN_USsNXJAj",
"4OUhmjxDYgT",
"hu8BCNOVE0",
"LoIJBT58cfH",
"b3ZHb2x_tQr",
"dzuQI_rcvVi",
"5SjIdZ9WufZ",
"7rXk5kpDiLV",
"GJdGY1vC05y",
"sVbBJ-GKZpl",
"pMwxRQnopg",
"cn4MnEZTKXV",
"XFfT_iDEJ3",
"3FTf8vkXqbF",
"GBCodvGU7Y",
"T2Z87hj8DHP"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" The table below provides the updated comparison between DWB and DeepSS models when accounting for the full MSR dataset (n=126). There is no major change in the discussion provided above given this expanded experiment. These results will be incorporated into the final version of our submission.\n\n| | DWB ($... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"hu8BCNOVE0",
"LoIJBT58cfH",
"b3ZHb2x_tQr",
"nips_2021_QE_h_FgqdEu",
"dzuQI_rcvVi",
"cn4MnEZTKXV",
"5SjIdZ9WufZ",
"pMwxRQnopg",
"XFfT_iDEJ3",
"nips_2021_QE_h_FgqdEu",
"nips_2021_QE_h_FgqdEu",
"GBCodvGU7Y",
"T2Z87hj8DHP",
"4OUhmjxDYgT",
"3FTf8vkXqbF",
"7rXk5kpDiLV",
"GJdGY1vC05y",
"... |
nips_2021_Qo6kYy4SBI- | RelaySum for Decentralized Deep Learning on Heterogeneous Data | In decentralized machine learning, workers compute model updates on their local data. Because the workers only communicate with a few neighbors without central coordination, these updates propagate progressively over the network. This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers. A key challenge, primarily in decentralized deep learning, remains the handling of differences between the workers' local data distributions. To tackle this challenge, we introduce the RelaySum mechanism for information propagation in decentralized learning. RelaySum uses spanning trees to distribute information exactly uniformly across all workers with finite delays depending on the distance between nodes. In contrast, the typical gossip averaging mechanism only distributes data uniformly asymptotically while using the same communication volume per step as RelaySum. We prove that RelaySGD, based on this mechanism, is independent of data heterogeneity and scales to many workers, enabling highly accurate decentralized deep learning on heterogeneous data.
| accept | The paper presents a new communication protocol for decentralized learning when the nodes have significant data heterogeneity. The reviewers all agreed about several of the positives of the current draft: good presentation, extensive experiments, interesting ideas. The main concern was the algorithmic novelty of the scheme (similar to tree-reduce, and gossip with delays), and the memory overhead proportional to the number of neighbors. However, it was agreed during discussions that this work carries enough technical depth to be interesting to the related communities (decentralized/fed learning). | test | [
"2dKC8RHB78c",
"fGIuiiB0fPZ",
"Wpt7X-9MesS",
"ZbHeziEwk4",
"8JB2OZ-v9zs",
"FwpwcD8zmR0",
"CW0kCcS6xe"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" I have read the author response and the other reviews.\n\nThere are certainly some limitations to the work. As pointed out in other reviews as well: 1) the novelty is somewhat hampered by the fact that RelaySGD is (analytically) equivalent to delayed gossip over all-to-all graphs, 2) RelaySGD incurs additional me... | [
-1,
6,
6,
-1,
-1,
-1,
6
] | [
-1,
4,
4,
-1,
-1,
-1,
4
] | [
"FwpwcD8zmR0",
"nips_2021_Qo6kYy4SBI-",
"nips_2021_Qo6kYy4SBI-",
"Wpt7X-9MesS",
"fGIuiiB0fPZ",
"CW0kCcS6xe",
"nips_2021_Qo6kYy4SBI-"
] |
nips_2021_scn3RYn1DYx | Transformers Generalize DeepSets and Can be Extended to Graphs & Hypergraphs | Jinwoo Kim, Saeyoon Oh, Seunghoon Hong | accept | The paper develops transformers for order permutation invariant data such as sets, graphs and hypergraphs. Overall, the reviewers and I have enjoyed reading the paper since moving transformers towards structured data is an important research direction. The experimental results presented in the rolling discussion on sharing queries and keys across all equivalence classes as well as the ones comparing with Hyper-SAGNN and S2G have to be included in the final version. They really showed the pros of the proposed approach. While making those changes the authors may also wish to add the clarifications posted in the rolling discussion.
| train | [
"HP47h__-aoi",
"H8wQ-pXVQHE",
"PGeUGjbPUvG",
"TbN_vVy1Cy6",
"psRoGr0VyaG",
"cCVninHjywj",
"hSyNgZAI34S",
"xTxDwbn4uCh",
"aL8w1J6pKla",
"usKWw8jLRs_",
"hRBK1V0sfoq",
"m4wnqC8U7bs",
"1E4qahoz7jw",
"5ICrK8SrOZl",
"y5SRdCjDrn8",
"6q8dyx8UcXa",
"s6V9xcFe8xr",
"vHjrjfRn4b",
"dQoICBr-F1... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"a... | [
" As additional baselines, we report performances of a second-order MLP (DeepSet for graphs) and a vanilla (first-order) Transformer in the PCQM4M-LSC dataset of Open Graph Benchmark. As vanilla Transformer operates on node features only, we additionally used Laplacian graph embeddings (Belkin et al., 2003, Dwivedi... | [
-1,
-1,
-1,
6,
-1,
8,
-1,
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
3,
-1,
4,
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"H8wQ-pXVQHE",
"PGeUGjbPUvG",
"usKWw8jLRs_",
"nips_2021_scn3RYn1DYx",
"vHjrjfRn4b",
"nips_2021_scn3RYn1DYx",
"QewooJ1hSEL",
"nips_2021_scn3RYn1DYx",
"dQoICBr-F1",
"hRBK1V0sfoq",
"y5SRdCjDrn8",
"nips_2021_scn3RYn1DYx",
"y5SRdCjDrn8",
"TbN_vVy1Cy6",
"m4wnqC8U7bs",
"xTxDwbn4uCh",
"6q8dy... |
nips_2021_Mj6MVmGyMDb | No Regrets for Learning the Prior in Bandits | We propose AdaTS, a Thompson sampling algorithm that adapts sequentially to bandit tasks that it interacts with. The key idea in AdaTS is to adapt to an unknown task prior distribution by maintaining a distribution over its parameters. When solving a bandit task, that uncertainty is marginalized out and properly accounted for. AdaTS is a fully-Bayesian algorithm that can be implemented efficiently in several classes of bandit problems. We derive upper bounds on its Bayes regret that quantify the loss due to not knowing the task prior, and show that it is small. Our theory is supported by experiments, where AdaTS outperforms prior algorithms and works well even in challenging real-world problems.
| accept | This paper studies a meta learning for bandits problem, using a hierarchical Bayesian formulation. It proposes the AdaTS algorithm, which can be instantiated to Gaussian {multi-armed, linear, combinatorial semi} bandits, and more general exponential family bandits.
Specialized to the Gaussian bandits setting, the proposed AdaTS algorithm improves over two baselines: (1) over the prior work of MetaTS, in its regret for learning the task prior parameter \mu_*, from \sqrt{mn^2} to \sqrt{mn}; and (2) over Thompson sampling without meta-learning, when the task prior is sufficiently concentrated and the meta prior is sufficiently "spread out". Empirical results clearly support the claims. One limitation of the current work is that it does not have theoretical results yet in the more general exponential family bandits setting (e.g. Bernoulli bandits).
"Mkz_ngpVtcE",
"HTz3NPLSYY5",
"MFYgxk6MWrD",
"WCQl3dCdlSO",
"j8-s5Y6lCLU",
"-0JLegqHJl",
"fsRiw0Es7qB",
"KHamSBCuNUY"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers the meta Thompson Sampling problem with hierarchical structure. In particular, paper considers a setting where there are m-bandits instances and within each instance, there are n-time periods as well K-arms. Here, within an instance, the parameters of the distribution corresponding to arms are ... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_Mj6MVmGyMDb",
"WCQl3dCdlSO",
"nips_2021_Mj6MVmGyMDb",
"KHamSBCuNUY",
"fsRiw0Es7qB",
"Mkz_ngpVtcE",
"nips_2021_Mj6MVmGyMDb",
"nips_2021_Mj6MVmGyMDb"
] |
nips_2021_A-RON3lv-aR | Encoding Robustness to Image Style via Adversarial Feature Perturbations | Adversarial training is the industry standard for producing models that are robust to small adversarial perturbations. However, machine learning practitioners need models that are robust to other kinds of changes that occur naturally, such as changes in the style or illumination of input images. Such changes in input distribution have been effectively modeled as shifts in the mean and variance of deep image features. We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce models that are robust to various unseen distributional shifts. We explore the relationship between these perturbations and distributional shifts by visualizing adversarial features. Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training. By fine-tuning neural networks on adversarial feature distributions, we observe improved robustness of networks to various unseen distributional shifts, including style variations and image corruptions. In addition, we show that our proposed adversarial feature perturbation can be complementary to existing image space data augmentation methods, leading to improved performance. The source code and pre-trained models are released at \url{https://github.com/azshue/AdvBN}.
| accept | After a thorough discussion (particularly with reviewer 4gjW) there are now four reviews that unanimously recommend acceptance.
Thus, I will also recommend acceptance. | train | [
"QpswnZOsTCi",
"HZ7LvIuhVnw",
"yLGMX4-nsBI",
"gRHW40x3qET",
"O2Lz7nfLKff",
"GnZduVn8JDM",
"4Opg8VIXksZ",
"9FCS4gbWvh",
"GuOXcj333NP",
"TM8aXpgGswm",
"FQsSUEeYLAx",
"tmldwMdGKyQ",
"BDfXUOuZhe9"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the problem of making predictive models robust to distributional shifts. This is achieved using an adversarial training strategy that adversarially perturbs batch norm statistics of a pre-trained model, which effectively captures variations in image style such as color, texture, etc. This worst-c... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"nips_2021_A-RON3lv-aR",
"yLGMX4-nsBI",
"O2Lz7nfLKff",
"GnZduVn8JDM",
"GuOXcj333NP",
"9FCS4gbWvh",
"FQsSUEeYLAx",
"BDfXUOuZhe9",
"QpswnZOsTCi",
"tmldwMdGKyQ",
"nips_2021_A-RON3lv-aR",
"nips_2021_A-RON3lv-aR",
"nips_2021_A-RON3lv-aR"
] |
nips_2021_bGfDnD7xo-v | Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms | We introduce the ``continuized'' Nesterov acceleration, a close variant of Nesterov acceleration whose variables are indexed by a continuous time parameter. The two variables continuously mix following a linear ordinary differential equation and take gradient steps at random times. This continuized variant benefits from the best of the continuous and the discrete frameworks: as a continuous process, one can use differential calculus to analyze convergence and obtain analytical expressions for the parameters; but a discretization of the continuized process can be computed exactly with convergence rates similar to those of Nesterov original acceleration. We show that the discretization has the same structure as Nesterov acceleration, but with random parameters. We provide continuized Nesterov acceleration under deterministic as well as stochastic gradients, with either additive or multiplicative noise. Finally, using our continuized framework and expressing the gossip averaging problem as the stochastic minimization of a certain energy function, we provide the first rigorous acceleration of asynchronous gossip algorithms.
| accept |
A fine paper that provides an original and refreshing perspective on one of the most enigmatic algorithmic cornerstones in convex optimization, namely, accelerated gradient descent. | train | [
"ikG3taYSiDa",
"rpFt5AOJpNh",
"IF2Bni0mL_S",
"v5ZEmnr-wtO",
"DC7YC_ejMY4",
"7mnCJ_cey-",
"EgAzwzDmggV",
"QS8KKCab2ip",
"BHoD1oCJovB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors introduce a novel idea to analyze accelerated methods from a continuized point of view, which allows to have a type of \"random learning rates\". In particular, this allows to be oblivious of the iteration in an asynchronous network, as long as you have a common synchronized clock for the nodes. The an... | [
7,
-1,
-1,
-1,
-1,
-1,
7,
9,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2021_bGfDnD7xo-v",
"IF2Bni0mL_S",
"BHoD1oCJovB",
"QS8KKCab2ip",
"ikG3taYSiDa",
"EgAzwzDmggV",
"nips_2021_bGfDnD7xo-v",
"nips_2021_bGfDnD7xo-v",
"nips_2021_bGfDnD7xo-v"
] |
nips_2021_W9250bXDgpK | Natural continual learning: success is a journey, not (just) a destination | Biological agents are known to learn many different tasks over the course of their lives, and to be able to revisit previous tasks and behaviors with little to no loss in performance. In contrast, artificial agents are prone to ‘catastrophic forgetting’ whereby performance on previous tasks deteriorates rapidly as new ones are acquired. This shortcoming has recently been addressed using methods that encourage parameters to stay close to those used for previous tasks. This can be done by (i) using specific parameter regularizers that map out suitable destinations in parameter space, or (ii) guiding the optimization journey by projecting gradients into subspaces that do not interfere with previous tasks. However, these methods often exhibit subpar performance in both feedforward and recurrent neural networks, with recurrent networks being of interest to the study of neural dynamics supporting biological continual learning. In this work, we propose Natural Continual Learning (NCL), a new method that unifies weight regularization and projected gradient descent. NCL uses Bayesian weight regularization to encourage good performance on all tasks at convergence and combines this with gradient projection using the prior precision, which prevents catastrophic forgetting during optimization. Our method outperforms both standard weight regularization techniques and projection based approaches when applied to continual learning problems in feedforward and recurrent networks. Finally, the trained networks evolve task-specific dynamics that are strongly preserved as new tasks are learned, similar to experimental findings in biological circuits.
| accept | The paper proposes a framework that unifies regularization-based techniques in CL and projection techniques. As other reviewers pointed out (e.g. v6dD) this is maybe one of the better formally motivated and better studied directions for addressing CL. And hence at a high level the final result might not seem surprising (e.g. that regularization methods can be seen as a trust region method, and then there is a deep connection between projection methods and these regularization methods).
However I want to stress that the final algorithm and its derivation are by no means trivial. And I think the improvement (at least in the theoretical understanding) of these methods is of great value for those focusing on such techniques. So from my perspective, in terms of novelty and significance I think the manuscript has provided enough of both. And while there might be some scalability worries, I think there is a significant subset of the CL community that will find this result very useful.
The main weakness of the work is the empirical exploration. While I believe the experiments are done carefully, and provide evidence for the efficacy of the algorithm (plus I welcome the new experiments on feedforward models), the scale of the empirical section is a bit weaker compared to an average paper on this topic. However, particularly considering the new results mentioned by the authors that should be integrated in the paper, I think they might just be sufficient to reach the requirement of the conference.
| test | [
"ZUxn77RZAgF",
"VlVOrY4PCH",
"xk4LiTn7emI",
"YzxzXMpo1ub",
"Laj6ISbxR_",
"zGLLMDW_953",
"jfWNUbCBazO",
"c1ufW7EiXLs",
"UC5frIov8t",
"NYGZK2doH_l",
"nEot4m8-z5z",
"UEWk39MgT8I",
"VbFTnvBJryT",
"vtWRLpoxpb8",
"8jdOIqCmuSo",
"mnMJJvf3vd",
"FL2qVCVQIZ-",
"duNlYhnNp31",
"H6jsxDAsAN",
... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
" After reading other reviewers' comments and the authors responses, I'd like to keep my rating 6 because the authors resolved my question before. However, it is somewhat unfortunate that experiments on large scale or vision dataset (e.g. CIFAR-100 or mini-ImageNet) was not carried out. Therefore, I decided to keep... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"duNlYhnNp31",
"nips_2021_W9250bXDgpK",
"YzxzXMpo1ub",
"FL2qVCVQIZ-",
"jfWNUbCBazO",
"nips_2021_W9250bXDgpK",
"mnMJJvf3vd",
"NYGZK2doH_l",
"nips_2021_W9250bXDgpK",
"H6jsxDAsAN",
"nips_2021_W9250bXDgpK",
"VbFTnvBJryT",
"vtWRLpoxpb8",
"8jdOIqCmuSo",
"jxMzsHd5RLb",
"zGLLMDW_953",
"RQa_K... |
nips_2021_PBctz6_47ug | Individual Privacy Accounting via a Rényi Filter | Vitaly Feldman, Tijana Zrnic | accept | This paper provides new results for (differential) privacy filters that ensure that the individual privacy loss does not exceed certain thresholds. This is an important question in differentially private machine learning, and the paper gives non-trivial technical contributions. During the discussion, reviewers raised some questions regarding the experiments. The authors should provide further details about their experimental setup in the next revision. | train | [
"br8gHP_LhAd",
"pAHUIJ62Kmc",
"l19bBqhhThf",
"hiudFm0X4_q",
"Oz5So8sEQeu",
"cpZEIfqXB5m",
"7xjm1SgczK8",
"_EpjActhtz",
"rVdnv5QE484",
"Bqb_5VX5PGH",
"Vn7hjEk3d0",
"55dpwcWt4lP",
"5Fh8uRIv-zc",
"o7-aRUhsB2F",
"mbz65mrs2x6",
"P9KHXcB_WOc"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your continued engagement, we appreciate your time. As we said in a previous response, we will gladly highlight and clarify our observations about SGD and GD (and support these observations with experimental data and code). Our experiments’ main focus was on showing that our conceptual tools and tec... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
4
] | [
"pAHUIJ62Kmc",
"l19bBqhhThf",
"hiudFm0X4_q",
"Oz5So8sEQeu",
"7xjm1SgczK8",
"nips_2021_PBctz6_47ug",
"55dpwcWt4lP",
"5Fh8uRIv-zc",
"cpZEIfqXB5m",
"P9KHXcB_WOc",
"mbz65mrs2x6",
"o7-aRUhsB2F",
"nips_2021_PBctz6_47ug",
"nips_2021_PBctz6_47ug",
"nips_2021_PBctz6_47ug",
"nips_2021_PBctz6_47u... |
nips_2021_wCrH0JBCFNm | Post-Training Quantization for Vision Transformer | Recently, transformer has achieved remarkable performance on a variety of computer vision applications. Compared with mainstream convolutional neural networks, vision transformers are often of sophisticated architectures for extracting powerful feature representations, which are more difficult to be developed on mobile devices. In this paper, we present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers. Basically, the quantization task can be regarded as finding the optimal low-bit quantization intervals for weights and inputs, respectively. To preserve the functionality of the attention mechanism, we introduce a ranking loss into the conventional quantization objective that aims to keep the relative order of the self-attention results after quantization. Moreover, we thoroughly analyze the relationship between quantization loss of different layers and the feature diversity, and explore a mixed-precision quantization scheme by exploiting the nuclear norm of each attention map and output feature. The effectiveness of the proposed method is verified on several benchmark models and datasets, which outperforms the state-of-the-art post-training quantization algorithms. For instance, we can obtain an 81.29% top-1 accuracy using DeiT-B model on ImageNet dataset with about 8-bit quantization. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/VT-PTQ.
| accept | This paper addresses an important problem for the efficient deployment of vision transformers in industrial environments, and its goal is significant in practical applications. However, reviewers are concerned that the novelty is limited and the method section is unclear; a large number of hyper-parameters are manually chosen, and the baselines and comparisons with other approaches are weak. This paper is recommended for rejection. | train | [
"Vu0_mCmHCF", "YnW8qnL01r", "zSb-nEuVh7d", "G1I5hBErAyD", "iTEDOni2-n5", "fFnX145byj6", "delsUQX1jNX", "iFVsYIEEZj_" ] | [
"author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" We would like to thank the reviewer for the helpful and valuable feedbacks.\n\n$\\textbf{Q1:}$ The manuscript proposes a post-training quantization for vision transformer. The task is urgently needed for deployment in industrial environments. The related work is cited adequately, but more analysis about the relat... | [
-1, -1, -1, -1, 4, 6, 6, 5 ] | [
-1, -1, -1, -1, 4, 4, 5, 4 ] | [
"fFnX145byj6", "iTEDOni2-n5", "iFVsYIEEZj_", "delsUQX1jNX", "nips_2021_wCrH0JBCFNm", "nips_2021_wCrH0JBCFNm", "nips_2021_wCrH0JBCFNm", "nips_2021_wCrH0JBCFNm" ] |
nips_2021_iHXQPrISusS | Unsupervised Part Discovery from Contrastive Reconstruction | The goal of self-supervised visual representation learning is to learn strong, transferable image representations, with the majority of research focusing on object or scene level. On the other hand, representation learning at part level has received significantly less attention. In this paper, we propose an unsupervised approach to object part discovery and segmentation and make three contributions. First, we construct a proxy task through a set of objectives that encourages the model to learn a meaningful decomposition of the image into its parts. Secondly, prior work argues for reconstructing or clustering pre-computed features as a proxy to parts; we show empirically that this alone is unlikely to find meaningful parts; mainly because of their low resolution and the tendency of classification networks to spatially smear out information. We suggest that image reconstruction at the level of pixels can alleviate this problem, acting as a complementary cue. Lastly, we show that the standard evaluation based on keypoint regression does not correlate well with segmentation quality and thus introduce different metrics, NMI and ARI, that better characterize the decomposition of objects into parts. Our method yields semantic parts which are consistent across fine-grained but visually distinct categories, outperforming the state of the art on three benchmark datasets. Code is available at the project page: https://www.robots.ox.ac.uk/~vgg/research/unsup-parts/.
| accept | This submission initially received mixed reviews (two weak rejects, one weak accept). After the rebuttal, the reviews remain mixed (two weak accepts, one weak reject). The AC has carefully read the paper, reviews, and rebuttals. The AC agrees with the concerns raised by the reviewers, including the definition of parts, the lack of discussion on methods using motion, and the missing discussion on the number of parts (K). The AC also believes this submission has made enough contributions to warrant acceptance, given its technical innovations, results, and analyses. In the camera ready, the authors should definitely discuss the related work on using motion to discover object parts, in particular those raised by reviewer ogkV:
[A]: Sabour, Sara, et al. "Unsupervised part representation by flow capsules." International Conference on Machine Learning. PMLR, 2021.
[B]: Bear, Daniel M., et al. "Learning physical graph representations from visual scenes." arXiv preprint arXiv:2006.12373 (2020). | train | [
"oLDqEAp21_z", "dl7oED11zPS", "fvZsXFL9dyP", "-jrxdzhlZAw", "BbG5KjevG7i", "vV4Z-QEcHju", "jJQKv0TB-RZ", "q6fntgvLQJa", "ZWDe0ZoAm0h", "j8R_9EGCjx9", "SjY4hVbyd0e" ] | [
"author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ] | [
" We thank the reviewer for raising their rating from 4 to 5 after our response. We would like to point out our response to reviewer ogkV with respect to selecting K. It is a natural hyper-parameter that is inherent to all unsupervised grouping algorithms (e.g. k-means). It controls the granularity of the predictio... | [
-1, -1, 6, -1, 5, -1, -1, -1, -1, -1, 7 ] | [
-1, -1, 3, -1, 4, -1, -1, -1, -1, -1, 4 ] | [
"vV4Z-QEcHju", "jJQKv0TB-RZ", "nips_2021_iHXQPrISusS", "ZWDe0ZoAm0h", "nips_2021_iHXQPrISusS", "q6fntgvLQJa", "j8R_9EGCjx9", "BbG5KjevG7i", "fvZsXFL9dyP", "SjY4hVbyd0e", "nips_2021_iHXQPrISusS" ] |
nips_2021_X0ein5pH4YJ | ASSANet: An Anisotropic Separable Set Abstraction for Efficient Point Cloud Representation Learning | Guocheng Qian, Hasan Hammoud, Guohao Li, Ali Thabet, Bernard Ghanem | accept | The paper presents a new variant of PointNet that's faster and more accurate. This paper received positive reviews and all reviewers recommended acceptance. The reviewers find many positive points including impressive empirical results. AC does not find grounds to overturn this consensus recommendation. The authors should incorporate the suggestions of the reviews when revising the paper for the camera ready version. | train | [
"Pq-8woSg45B", "h0JnOvfXOcO", "-Ty205cx1Kr", "5dl4u7FUUJ", "Gqqa4yrAUPZ", "z4koR35RIw", "ZPIkheqJCI6", "39TkbxxJrc", "RZ9V3x7d1Cc", "7PMDMXHPcR3", "_oGWpPUa8yZ", "0zw2InZ08X" ] | [
"official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ] | [
" Thanks the authors for the response. It addressed all my concerns. I am positive towards acceptance of this paper and keep my original rating.",
" We thank the reviewer for their continued constructive comments which are valued as they help make our paper more influential. \n\nQ1. [More visualization examples].... | [
-1, -1, 6, -1, -1, 9, -1, -1, -1, -1, 8, 7 ] | [
-1, -1, 5, -1, -1, 5, -1, -1, -1, -1, 3, 4 ] | [
"39TkbxxJrc", "Gqqa4yrAUPZ", "nips_2021_X0ein5pH4YJ", "-Ty205cx1Kr", "ZPIkheqJCI6", "nips_2021_X0ein5pH4YJ", "z4koR35RIw", "0zw2InZ08X", "-Ty205cx1Kr", "_oGWpPUa8yZ", "nips_2021_X0ein5pH4YJ", "nips_2021_X0ein5pH4YJ" ] |
nips_2021_Z8mLxlpSyrJ | An Empirical Investigation of Domain Generalization with Empirical Risk Minimizers | Recent work demonstrates that deep neural networks trained using Empirical Risk Minimization (ERM) can generalize under distribution shift, outperforming specialized training algorithms for domain generalization. The goal of this paper is to further understand this phenomenon. In particular, we study the extent to which the seminal domain adaptation theory of Ben-David et al. (2007) explains the performance of ERMs. Perhaps surprisingly, we find that this theory does not provide a tight explanation of the out-of-domain generalization observed across a large number of ERM models trained on three popular domain generalization datasets. This motivates us to investigate other possible measures—that, however, lack theory—which could explain generalization in this setting. Our investigation reveals that measures relating to the Fisher information, predictive entropy, and maximum mean discrepancy are good predictors of the out-of-distribution generalization of ERM models. We hope that our work helps galvanize the community towards building a better understanding of when deep networks trained with ERM generalize out-of-distribution.
| accept | The authors perform a large-scale empirical evaluation of how well empirical risk minimisers perform out-of-domain generalisation, and how well this performance is predicted by various properties of the source and target domains. They find that measures of domain discrepancy which have been used to bound domain adaptation error were not among the most predictive aspects, but that other measures showed good correlation with target error.
The reviewers all recognised the value of an empirical study like this, and were largely happy with how it was conducted. Reviewer Dvcv questioned the strong focus on the theory of Ben-David, and how it was a surprisingly poor predictor, when several more recent works have already pointed out the looseness in those early results. Additionally, several reviewers asked for increased clarity surrounding some of the results. Nevertheless, I believe this work is likely to lead to new theoretical insights and empirical results. | train | [
"80Rw2zZ8ci-", "-YiPqXPfqF", "IAeVUMRKb3", "l6gUERCeX7y", "fwxbDhJrqRq", "DVn3fIxVPA", "7bJgcfUaGoa", "-hVH9cm1QZ5", "ZaLoPDDopO2", "1EsUOI9zL8E", "E1x_bC4jB_w" ] | [
"official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" I have updated my ratings. ",
"This paper conducts a large-scale empirical study of ERM models tested in the domainBed pipeline, together with the empirical results tested, the authors also investigated the numerical values of several different measures, which allows the authors to further study the relationshi... | [
-1, 6, -1, -1, -1, -1, -1, -1, 7, 6, 7 ] | [
-1, 4, -1, -1, -1, -1, -1, -1, 3, 3, 3 ] | [
"IAeVUMRKb3", "nips_2021_Z8mLxlpSyrJ", "l6gUERCeX7y", "fwxbDhJrqRq", "-YiPqXPfqF", "E1x_bC4jB_w", "1EsUOI9zL8E", "ZaLoPDDopO2", "nips_2021_Z8mLxlpSyrJ", "nips_2021_Z8mLxlpSyrJ", "nips_2021_Z8mLxlpSyrJ" ] |
nips_2021_w-EabDtADg | Fair Sequential Selection Using Supervised Learning Models | We consider a selection problem where sequentially arrived applicants apply for a limited number of positions/jobs. At each time step, a decision maker accepts or rejects the given applicant using a pre-trained supervised learning model until all the vacant positions are filled. In this paper, we discuss whether the fairness notions (e.g., equal opportunity, statistical parity, etc.) that are commonly used in classification problems are suitable for the sequential selection problems. In particular, we show that even with a pre-trained model that satisfies the common fairness notions, the selection outcomes may still be biased against certain demographic groups. This observation implies that the fairness notions used in classification problems are not suitable for a selection problem where the applicants compete for a limited number of positions. We introduce a new fairness notion, ``Equal Selection (ES),'' suitable for sequential selection problems and propose a post-processing approach to satisfy the ES fairness notion. We also consider a setting where the applicants have privacy concerns, and the decision maker only has access to the noisy version of sensitive attributes. In this setting, we can show that the \textit{perfect} ES fairness can still be attained under certain conditions.
| accept | Reviewers all agreed that the paper formulates and partially addresses an interesting and practically relevant question: namely, how to define and guarantee an appropriate notion of fairness in sequential decision-making settings where a limited number of positions are available and diversity among the selected is an important consideration. However, among other suggestions for improvement, reviewers urge the authors to expand their discussion of the related work, the motivation behind the proposed fairness notion, and the post-processing approach. Assuming that the authors will add the necessary discussions and revise the paper to reflect the reviewers’ suggestions, I recommend acceptance. | train | [
"-iSOW-7FohG", "S840AVm_PJ", "Q8AmR6YsHH5", "9QC-prssalw", "0jV4XIMP6Lq", "gD9eYEp-0Ty", "gbyAxYivOuH", "5RFQX7ebn_u", "TkGnwzn0Ic1", "ECFsouME0w", "Jh6So1KRR3K", "OarRvoph2W1", "3DJIKyi0hGP", "2rxen6x16sk", "8ZgQxsC9E_5", "urp3f9ntX7T", "F8ZjUi7CvTT", "IQWlEUrfFW1" ] | [
"official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ] | [
"This paper studies how to \"fairly\" select a single candidate from a repeated i.i.d. classification perspective \"selection process\". They contribute a new fairness notion that is unique in the selection setting, however, it is not that different from previously contributed group fairness notions. They provide s... | [
6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ] | [
4, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ] | [
"nips_2021_w-EabDtADg", "Q8AmR6YsHH5", "9QC-prssalw", "gbyAxYivOuH", "OarRvoph2W1", "TkGnwzn0Ic1", "8ZgQxsC9E_5", "nips_2021_w-EabDtADg", "ECFsouME0w", "Jh6So1KRR3K", "2rxen6x16sk", "IQWlEUrfFW1", "-iSOW-7FohG", "3DJIKyi0hGP", "F8ZjUi7CvTT", "5RFQX7ebn_u", "nips_2021_w-EabDtADg",
"... |
nips_2021_-KU_e4Biu0 | Towards Sample-efficient Overparameterized Meta-learning | An overarching goal in machine learning is to build a generalizable model with few samples. To this end, overparameterization has been the subject of immense interest to explain the generalization ability of deep nets even when the size of the dataset is smaller than that of the model. While the prior literature focuses on the classical supervised setting, this paper aims to demystify overparameterization for meta-learning. Here we have a sequence of linear-regression tasks and we ask: (1) Given earlier tasks, what is the optimal linear representation of features for a new downstream task? and (2) How many samples do we need to build this representation? This work shows that surprisingly, overparameterization arises as a natural answer to these fundamental meta-learning questions. Specifically, for (1), we first show that learning the optimal representation coincides with the problem of designing a task-aware regularization to promote inductive bias. We leverage this inductive bias to explain how the downstream task actually benefits from overparameterization, in contrast to prior works on few-shot learning. For (2), we develop a theory to explain how feature covariance can implicitly help reduce the sample complexity well below the degrees of freedom and lead to small estimation error. We then integrate these findings to obtain an overall performance guarantee for our meta-learning algorithm. Numerical experiments on real and synthetic data verify our insights on overparameterized meta-learning.
| accept | This paper studies overparameterization for meta-learning in the setting of linear-regression tasks. It answers the questions about the optimal linear representation of features for a new downstream task and the number of samples needed to build this representation. The reviewers are quite diverse at the beginning and raised a number of questions, with one of the main criticisms being on the clarity of Theorem 1. The authors have done a good job at replying the reviews, and the reviewers also have done a good job at discussing the rebuttal. Finally, the originally negative reviewers are able to converge to the borderline and the positive reviewers maintained their scores. I propose acceptance of this paper due to the important contribution of demonstrating overparameterization to meta learning. However, I highly recommend the authors to take the comments from the reviewers and revise the paper carefully for its final version. It would make the paper even stronger if the revision can somehow address the following question raised in the discussion: it is expected the paper can provide either stronger empirical verifications (MNIST and CIFAR100 in the appendix are non-standard in FSL) or for theory beyond linear regression case. | train | [
"SFBuKvdqILo", "mPponQObQAc", "M7wZdnIk5ev", "2sPe2t7J974", "5elXOhbR8hi", "78zZ7htfgEL", "M5Zp8lBCS0", "UokAnR4jaq5", "ycvLY9IgZp2", "mgq3paRYM8f", "Y7_CL4g6NoV", "eNxNTtI3wlb", "OLWlezaOyhK" ] | [
"official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ] | [
"This paper studies overparametrized representation learning in meta-learning, i.e. representation dimensionality (R) is larger than number of samples ($n_2$) per task. It does so in the setting of linear representations, but goes beyond existing analyses that typically assume that the covariance of task regressors... | [
7, 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5 ] | [
3, 2, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3 ] | [
"nips_2021_-KU_e4Biu0", "nips_2021_-KU_e4Biu0", "nips_2021_-KU_e4Biu0", "OLWlezaOyhK", "nips_2021_-KU_e4Biu0", "mPponQObQAc", "5elXOhbR8hi", "ycvLY9IgZp2", "78zZ7htfgEL", "OLWlezaOyhK", "SFBuKvdqILo", "nips_2021_-KU_e4Biu0", "nips_2021_-KU_e4Biu0" ] |
nips_2021_RmydToMkEM | ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers | Husheng Han, Kaidi Xu, Xing Hu, Xiaobing Chen, Ling LIANG, Zidong Du, Qi Guo, Yanzhi Wang, Yunji Chen | accept | The authors develop a novel scalable robustness certification technique to certify robustness of deep networks to adversarial patches. The paper provides a theoretically and empirically interesting contribution to the literature on robustness and verification of deep networks.
In the initial reviews, several concerns were raised around the correctness of the proposed approach. These were adequately addressed in the rebuttal. The paper would be acceptable for publication provided that the authors make the revisions suggested by the reviewers during the rebuttal phase in the camera ready version.
Specific detailed feedback from the reviewers on how to improve the paper:
1. L254-255 in the main paper needs to be rephrased to indicate that only sliding windows that overlap with (corresponding to the benign image) do not affect prediction results for the image . Justification of why this is sufficient for certification needs to be included. This could include the figure shared by the authors in the rebuttal to illustrate that the top-k region of the attacked image cannot differ by more than a fixed amount from the top-k region of the original image. In addition, for the cases when the patch is occluded completely, the top-k region of the attacked image would be a subset of the top-k region of the benign image. This can be formally stated and proved.
2. It could be made more clear throughout the paper that this is a certified detection method rather than a certified prediction/ recovery method, perhaps in the title as well. It seems unfair to directly compare certified accuracy of recovery methods to detection methods as it’s easier to detect whether a patch is present than it is to predict the correct class in any patched image. A note on why this method cannot be used for certified prediction would also help the reader understand the same better.
3. Results on execution latency on ImageNet for PatchGuard++ (shared in the rebuttal) could be included
4. Details on how the top-k neurons are computed could be included
5. Adding detailed captions to tables and figures would improve the readability. The x-axis label for Fig.1(e) is missing
6. A discussion on false alarms and what is done to reduce this could be included.
7. Clearly define SINs and state the important properties of SINs, e.g., what is the SIN of the masked region?
8. Clearly and formally define the inference protocol. It seems to be first computing SINs then dynamically pruning the network to only pass the signals for top-k SIN neurons. The final inference protocol that takes windows into account runs multiple inferences independently --- the top-k SINs are independently computed for each masked input.
9. A detailed proof of Theorem 1 with more illustrations.
10. Training details for reproducibility. The current submission lacks many training details nor contains code implementation. Please make sure to include them especially the description of training details.
11. Compress Section 3.1 to gain space.
12. Authors should include the discussion on the drop in performance for low-resolution datasets and the guidelines for superficial layer selection.
13. Comparison of provable detection and provable recovery should not be placed in the main text. The comparison is meaningless due to the difference in settings.
14. The statement related to the algorithm is not very clear. I suggest that the authors clearly explain the algorithm in future revisions. In addition, Line 246-257 is copied from PatchGuard++, which is not allowed.
15. Reporting error bars of results (shared in the rebuttal)
16. A discussion on limitations could be included: sub-optimal performance on low-resolution datasets, a large drop in clean accuracy with large patch attacks, and limited generalization (only works for image CNNs due to exploiting localized low-level features). | train | [
"AqNDSbx7U-5", "O0ylfKXe5MU", "rukWBVuc3QH", "YoJZa2bbSrR", "yMgZ-8DS8qw", "Xel9fzCgn3O", "09MYQKcGIIq", "Bn7UmETsDx", "94cm6VVPN2i", "oRB6AD37ioE", "mt_UTGgdyx", "ktvZybx1jBl", "mgX_x9kkmU9", "CH1hMkCpL4s", "ohRJHx3a2WJ", "5LPFxK64Hik", "lv1_s1xbi12", "RtLKOMfogyL",
"bN3zLOvWPde... | [
"official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author",
"official_revi... | [
"In this paper, the authors propose a scalable certified defense framework to detect patch attacks. The authors empirically show that patch attacks rely on localized Superficial Important Neurons (SINs) to cause misclassification. By occluding image patches corresponding to these localized neurons, the predictions ... | [
6, -1, 6, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1
] | [
4, -1, 4, 5, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1
] | [
"nips_2021_RmydToMkEM", "yMgZ-8DS8qw", "nips_2021_RmydToMkEM", "nips_2021_RmydToMkEM", "5LPFxK64Hik", "09MYQKcGIIq", "oRB6AD37ioE", "94cm6VVPN2i", "ktvZybx1jBl", "RtLKOMfogyL", "nips_2021_RmydToMkEM", "mgX_x9kkmU9", "CH1hMkCpL4s", "ohRJHx3a2WJ", "MKF7zkV9mkj", "lv1_s1xbi12",
"bN3zLOv... |
nips_2021_wfiVgITyCC_ | Towards mental time travel: a hierarchical memory for reinforcement learning agents | Reinforcement learning agents often forget details of the past, especially after delays or distractor tasks. Agents with common memory architectures struggle to recall and integrate across multiple timesteps of a past event, or even to recall the details of a single timestep that is followed by distractor tasks. To address these limitations, we propose a Hierarchical Chunk Attention Memory (HCAM), that helps agents to remember the past in detail. HCAM stores memories by dividing the past into chunks, and recalls by first performing high-level attention over coarse summaries of the chunks, and then performing detailed attention within only the most relevant chunks. An agent with HCAM can therefore "mentally time-travel"--remember past events in detail without attending to all intervening events. We show that agents with HCAM substantially outperform agents with other memory architectures at tasks requiring long-term recall, retention, or reasoning over memory. These include recalling where an object is hidden in a 3D environment, rapidly learning to navigate efficiently in a new neighborhood, and rapidly learning and retaining new words. Agents with HCAM can extrapolate to task sequences much longer than they were trained on, and can even generalize zero-shot from a meta-learning setting to maintaining knowledge across episodes. HCAM improve agent sample efficiency, generalization, and generality (by solving tasks that previously required specialized architectures). Our work is a step towards agents that can learn, interact, and adapt in complex and temporally-extended environments.
| accept | - The proposed method tackles an important problem, is reasonable, and the experiments are well designed and executed.
- The major concerns from the reviewers (e.g., hyper parameter tuning, chunk size, TxXL results, etc.) are addressed well enough by the rebuttal.
- For reproducibility and accessibility, I suggest open-sourcing the code and making the environment available at your earliest convenience.
- I suggest accepting the paper. | train | [
"Wp16rO1Nzaq", "LXxBkakzVvQ", "bf_9TRW2Eqc", "QpJwALf6r_S", "8Alid0Mg_hP", "fmkM4YmpYSK", "T-9fR_9wNsE", "n_Avj9iGjb", "ToxqkQVsFr" ] | [
"author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ] | [
" Thank you for taking the time to consider our response. We hope that this additional study confirms that the improvement over TrXL is clear in the settings we considered, but note that this hasn't changed your perception of the work substantially. Is there anything else we could do to convince you? For example, i... | [
-1, 6, 6, -1, -1, -1, -1, -1, 8 ] | [
-1, 4, 4, -1, -1, -1, -1, -1, 4 ] | [
"bf_9TRW2Eqc", "nips_2021_wfiVgITyCC_", "nips_2021_wfiVgITyCC_", "fmkM4YmpYSK", "LXxBkakzVvQ", "bf_9TRW2Eqc", "ToxqkQVsFr", "LXxBkakzVvQ", "nips_2021_wfiVgITyCC_" ] |
nips_2021_eIdzV1-Jdwv | Beyond Tikhonov: faster learning with self-concordant losses, via iterative regularization | The theory of spectral filtering is a remarkable tool to understand the statistical properties of learning with kernels. For least squares, it allows to derive various regularization schemes that yield faster convergence rates of the excess risk than with Tikhonov regularization. This is typically achieved by leveraging classical assumptions called source and capacity conditions, which characterize the difficulty of the learning task. In order to understand estimators derived from other loss functions, Marteau-Ferey et al. have extended the theory of Tikhonov regularization to generalized self concordant loss functions (GSC), which contain, e.g., the logistic loss. In this paper, we go a step further and show that fast and optimal rates can be achieved for GSC by using the iterated Tikhonov regularization scheme, which is intrinsically related to the proximal point method in optimization, and overcomes the limitation of the classical Tikhonov regularization.
| accept | The topic of the submission is supervised learning in RKHSs (reproducing kernel Hilbert space) with generalized self-concordant (GSC; such as the logistic one) losses. Particularly, the authors raise the question if spectral filtering techniques can be leveraged to achieve faster/optimal convergence rates under source and capacity conditions in case of GSC losses when the regularization is changed from the classical Tikhonov one (as studied in [1] and suffering from saturation) to the iterative Tikhonov scheme (also known as proximal point algorithm). The authors answer this question affirmatively (Theorem 1) with extension to inexact proximal solvers (Proposition 2). The efficiency of the approach is illustrated on synthetic examples.
Kernel techniques are omnipresent in machine learning with a large number of successful applications. The reviewers agreed that the authors deliver a solid theoretical contribution in this direction in a well-written paper which can be of clear interest to the NeurIPS audience. | train | [
"6xkJriO1Nb", "k5z95r0on7", "yRCrnrDxVkT", "Ih6l2A0Htgw", "ObUPX33MDnM", "Y_B2xcnUx3", "-5EIU8IU7Q", "8Sj8Xw0EbFp", "mLZWUAjJyVv", "_Zor7BrI3BB", "k4susN7jnIb", "6c-WSzpOet", "m-tBsonHoUw", "fWfIq84vDV" ] | [
"official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ] | [
"### Update ###\n\nI have the author response. Many thanks for addressing my concerns.\n\nThis is a strong paper and I look forward to seeing it published.\n\n=============\n\nThis submission establishes convergence rates for the excess risk of _iterated Tikhonov regularization_ for generalized self-concordant func... | [
8, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 8, 8 ] | [
2, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 5 ] | [
"nips_2021_eIdzV1-Jdwv", "mLZWUAjJyVv", "_Zor7BrI3BB", "Y_B2xcnUx3", "nips_2021_eIdzV1-Jdwv", "-5EIU8IU7Q", "6c-WSzpOet", "k4susN7jnIb", "fWfIq84vDV", "m-tBsonHoUw", "6xkJriO1Nb", "ObUPX33MDnM", "nips_2021_eIdzV1-Jdwv", "nips_2021_eIdzV1-Jdwv" ] |
nips_2021_EaLBPnRtggY | Variational Bayesian Reinforcement Learning with Regret Bounds | Brendan O'Donoghue | accept | This paper proposes K-learning, a reinforcement learning exploration algorithm with Bayesian regret bounds. The paper is in a tabular setting that works for an interesting exponential epistemic-risk-seeking utility function. Although the regret bounds may not be tight, most of the reviewers believe that the contribution is sufficient for acceptance. The experiment part is also a plus. | train | [
"PqaT_YWW6FS", "rzvrTwmmZME", "9dDPoi2JAs", "tdlf7CzyIEw", "kLiZu_84M6I", "dimmzqUfFAk", "aIZQTnTBRMe", "cAYaR3jN-ga", "iSNK-rC7dT3", "Nx2JxwEvYxF", "8g4JqSm_HCC", "He-zuFEY94", "pe2S09vsWM", "8Rv-6grXk3a", "mdDRI0HZndt" ] | [
"official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" Thanks AC for leading discussion. The response addressed my concern and I'd like to keep my score as 6.",
" We would like to thank the reviewers for their thorough feedback, for engaging in the rebuttal, and for their original evaluation of the paper. This is much appreciated and we believe that the paper has g... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 6 ] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ] | [
"rzvrTwmmZME", "nips_2021_EaLBPnRtggY", "cAYaR3jN-ga", "kLiZu_84M6I", "dimmzqUfFAk", "iSNK-rC7dT3", "8Rv-6grXk3a", "mdDRI0HZndt", "pe2S09vsWM", "He-zuFEY94", "nips_2021_EaLBPnRtggY", "nips_2021_EaLBPnRtggY", "nips_2021_EaLBPnRtggY", "nips_2021_EaLBPnRtggY", "nips_2021_EaLBPnRtggY" ] |
nips_2021_RYL_709qe9Y | Logarithmic Regret from Sublinear Hints | Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit | accept | Reviewers all agree that this paper makes a solid contribution to advancing the understanding of online linear optimization with hints. Please do incorporate all the writing suggestions from the reviews into the final version. | train | [
"e8MnLJrMDyg", "H3KEGy1qsZT", "80xw7POlE2Y", "6KrL4gaHhfn", "m2bI79w6JgG", "D-SeJ5ZpMCv", "ZPJbd7Ii5g", "e1c4sLAJCl", "9fWnbNi4ML", "B9oB-XpfIy" ] | [
"official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" Thanks for your clarifications. I find this work interesting to say the least and after reading the other reviewers' comments and the authors' responses I feel convinced about my initial score and maintain it for the time being. ",
" Thanks for your detailed reply. I quite like this line of work. After reading ... | [
-1, -1, 6, -1, -1, -1, -1, 6, 7, 7 ] | [
-1, -1, 3, -1, -1, -1, -1, 3, 2, 3 ] | [
"D-SeJ5ZpMCv", "m2bI79w6JgG", "nips_2021_RYL_709qe9Y", "B9oB-XpfIy", "9fWnbNi4ML", "e1c4sLAJCl", "80xw7POlE2Y", "nips_2021_RYL_709qe9Y", "nips_2021_RYL_709qe9Y", "nips_2021_RYL_709qe9Y" ] |
nips_2021_Rnn8zoAkrwr | Independent mechanism analysis, a new concept? | Independent component analysis provides a principled framework for unsupervised representation learning, with solid theory on the identifiability of the latent code that generated the data, given only observations of mixtures thereof. Unfortunately, when the mixing is nonlinear, the model is provably nonidentifiable, since statistical independence alone does not sufficiently constrain the problem. Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process. We investigate an alternative path and consider instead including assumptions reflecting the principle of independent causal mechanisms exploited in the field of causality. Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process. This gives rise to a framework which we term independent mechanism analysis. We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation.
| accept | The authors propose a novel method for non-linear ICA. The idea is to constrain partial derivatives of the mixing function with respect to each source to be orthogonal. The authors connect this idea to the principle of independent mechanisms. The authors show that the constraint allows to deal with known counterexamples to identifiability in non-linear ICA. The method is evaluated on simulated toy examples. Reviewer hbzz praised the originality and significance of the work. Several reviewers (tDZV, ovUQ, 989y) criticized that the connection of IMA to other principles (such as algorithmic independence and information geometric causal inference) is vague or difficult to understand. In the responses, the authors expanded on the connections between the principles and proposed to de-emphasise the connection to algorithmic independence, while emphasizing the connection to information geometric causal inference. Several reviewers (VVZG, ovUQ, hbzz) agree that the paper is well-written. Overall, the authors have addressed the main concerns in the discussion. | train | [
"T5xDNTTTQti",
"1Uk8unnIF_2",
"egpnK-VX8Rt",
"cp55u1hwlsL",
"A6NQND1XW__",
"cokh4QrW1l2",
"E5iWBHEs7rH",
"QN2shdtW0nK",
"qLQMR4h9uy",
"URfEdj1k0GK",
"SScQQhZYY5L",
"KHyicZU_0vn",
"GBx1wC7Wsk",
"dIwg74k6NBg",
"fVC2Kp-5ubB",
"CVew-UL_SY",
"9euLLDmhvB8"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking the other reviews and the discussion into account in your revised evaluation. We will do our best to incorporate your feedback for the revised manuscript, and address the concerns you raised.\n\nIn our view, there are two aspects in the ICA problem, that is: (1) identifiability and (2) estima... | [
-1,
-1,
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"cp55u1hwlsL",
"URfEdj1k0GK",
"nips_2021_Rnn8zoAkrwr",
"KHyicZU_0vn",
"qLQMR4h9uy",
"QN2shdtW0nK",
"nips_2021_Rnn8zoAkrwr",
"SScQQhZYY5L",
"fVC2Kp-5ubB",
"9euLLDmhvB8",
"E5iWBHEs7rH",
"egpnK-VX8Rt",
"nips_2021_Rnn8zoAkrwr",
"CVew-UL_SY",
"nips_2021_Rnn8zoAkrwr",
"nips_2021_Rnn8zoAkrwr"... |
nips_2021_LY-o87_w_x4 | Momentum Centering and Asynchronous Update for Adaptive Gradient Methods | Juntang Zhuang, Yifan Ding, Tommy Tang, Nicha Dvornek, Sekhar C. Tatikonda, James Duncan | accept | I recommend acceptance. The work builds on top of existing work and clearly (even if moderately) improves some theoretical and empirical aspects.
I applaud the authors for their great communication with the reviewers; they were responsive during the discussion period. Their responses have been extensive and convincing. They have gone to great lengths to respond to all timely requests for extra experiments and hyperparameter tuning.
I acknowledge that last-minute requests for heavy experiments cannot be reasonably accommodated and I accept the provided experimental evidence as sufficient for the paper’s main claims. | train | [
"TgCzSdUThVQ",
"qiMY9yRRRl",
"Eii75mHPQ3",
"_hBqv1Bmwgl",
"2sRb1jVqNrs",
"5sjeUSasCnb",
"QkHJKCI1eaE",
"GFKhsPbzMm6",
"a9AWLppwCl",
"zurfdA8w1E1",
"3G1NsUu9q7B",
"GpHCuq0JL1",
"G-40k3752p8",
"-A9ab0WV4TT",
"gFDxGTl-s_A",
"x8wnbHg9lM",
"8ib9wzX1bd-",
"RMd-OZgarOw",
"uGGyjPTx17n",
... | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
... | [
" Thanks for the response, we address your concerns below.\n\n**A. Clarification on convergence rate**. \nThe convergence rate depends on a decayed learning rate schedule (essential for the proof, same as literature [1,2,3]), while the figures in submission use a constant learning rate, in this case the figure do... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"VguaTnPZuvQ",
"QkHJKCI1eaE",
"KoY3oHyXyJ3",
"KoY3oHyXyJ3",
"nips_2021_LY-o87_w_x4",
"T3Gi60h_c5",
"Eii75mHPQ3",
"_hBqv1Bmwgl",
"aCdgVmT_Zcu",
"5sjeUSasCnb",
"uGGyjPTx17n",
"nO7oNOjUMlV",
"-A9ab0WV4TT",
"gFDxGTl-s_A",
"nips_2021_LY-o87_w_x4",
"8ib9wzX1bd-",
"gFDxGTl-s_A",
"KG_zE853... |
nips_2021_PYcdGhnZQJh | Robustness via Uncertainty-aware Cycle Consistency | Unpaired image-to-image translation refers to learning inter-image-domain mapping without corresponding image pairs. Existing methods learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty, leading to performance degradation when encountering unseen perturbations at test time. To address this, we propose a novel probabilistic method based on Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC), which models the per-pixel residual by generalized Gaussian distribution, capable of modelling heavy-tailed distributions. We compare our model with a wide variety of state-of-the-art methods on various challenging tasks including unpaired image translation of natural images spanning autonomous driving, maps, facades, and also in the medical imaging domain consisting of MRI. Experimental results demonstrate that our method exhibits stronger robustness towards unseen perturbations in test data. Code is released here: https://github.com/ExplainableML/UncertaintyAwareCycleConsistency.
| accept | After discussion, the reviewers are all for accepting the work. It is well-motivated and presents its ideas well. There are some concerns regarding image artifacts (particularly, the color channels being distorted). The rebuttal does not really address this concern, as no prior work using very similar architectures suffers from this phenomenon. I highly recommend the authors look into resolving or clarifying why this happens, and incorporate this discussion into their paper. | train | [
"oKwGkH8Hiqx",
"SYuzf3zPzrN",
"er9pzHoebhW",
"knsccel3M0h",
"8YBtnMr3-6G",
"0WlxI8oawWG",
"EC7TB0iu_07",
"iDholkMfE8n"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for allowing us to clarify this point within the rebuttal period. We used the same generator architecture for all the methods for a fair comparison, i.e. we adopted the cascaded UNet based generator to ensure sample efficiency with a lower memory footprint. Compared to the generator in the v... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
2,
4,
5
] | [
"SYuzf3zPzrN",
"8YBtnMr3-6G",
"0WlxI8oawWG",
"iDholkMfE8n",
"EC7TB0iu_07",
"nips_2021_PYcdGhnZQJh",
"nips_2021_PYcdGhnZQJh",
"nips_2021_PYcdGhnZQJh"
] |