| paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (18 classes) | meta_review (string, 29-10k chars) | label (3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_dZEZu7zxJBF | Learning sparse features can lead to overfitting in neural networks | It is widely believed that the success of deep networks lies in their ability to learn a meaningful representation of the features of the data. Yet, understanding when and how this feature learning improves performance remains a challenge: for example, it is beneficial for modern architectures trained to classify images, whereas it is detrimental for fully-connected networks trained for the same task on the same data. Here we propose an explanation for this puzzle, by showing that feature learning can perform worse than lazy training (via random feature kernel or the NTK) as the former can lead to a sparser neural representation. Although sparsity is known to be essential for learning anisotropic data, it is detrimental when the target function is constant or smooth along certain directions of input space. We illustrate this phenomenon in two settings: (i) regression of Gaussian random functions on the $d$-dimensional unit sphere and (ii) classification of benchmark datasets of images. For (i), we compute the scaling of the generalization error with number of training points, and show that methods that do not learn features generalize better, even when the dimension of the input space is large. For (ii), we show empirically that learning features can indeed lead to sparse and thereby less smooth representations of the image predictors. This fact is plausibly responsible for deteriorating the performance, which is known to be correlated with smoothness along diffeomorphisms. | Accept | This paper studies different behaviors of feature learning and lazy training in the regression of Gaussian random functions and image classification. The main contribution includes insights into why feature learning in a particular setting is not robust (may overfit) compared to the lazy training regime for fully connected networks.
The reviewers pointed out several concerns about the presentation of the proofs, the different behaviors of FCN and CNN, etc., which have been appropriately addressed by the authors. Overall, all reviewers appreciate the contribution of this paper, so I recommend acceptance. | train | [
"hH5yf85nR9h0",
"HpbYTa5g5HR",
"kxrpdGUK2IyL",
"ZYN0X1vqLE_",
"t045Y-v8eXN",
"lyT51liWxDM",
"EwvZzJZ3HaO",
"E7n6JnZ-S95",
"_xmfj8Hhy28",
"mn3MG8g5kXS",
"6MjF0jXNLkn",
"J6LD_F7PRV_",
"U9bVQMKGk1A",
"IJPpuTA5m3L",
"r-weE8KnHf-"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the stimulating propositions.\n\nWe think that background noise would not solve the issue as two cases emerge: either the noise magnitude is small and we go back to the zero-noise case, or it is large enough to prevent the network from learning thus just affecting accuracy. Some prelimin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
3
] | [
"t045Y-v8eXN",
"EwvZzJZ3HaO",
"lyT51liWxDM",
"nips_2022_dZEZu7zxJBF",
"mn3MG8g5kXS",
"E7n6JnZ-S95",
"_xmfj8Hhy28",
"r-weE8KnHf-",
"IJPpuTA5m3L",
"U9bVQMKGk1A",
"J6LD_F7PRV_",
"nips_2022_dZEZu7zxJBF",
"nips_2022_dZEZu7zxJBF",
"nips_2022_dZEZu7zxJBF",
"nips_2022_dZEZu7zxJBF"
] |
nips_2022_dix1iktX7Qt | Multi-fidelity Monte Carlo: a pseudo-marginal approach | Markov chain Monte Carlo (MCMC) is an established approach for uncertainty quantification and propagation in scientific applications. A key challenge in applying MCMC to scientific domains is computation: the target density of interest is often a function of expensive computations, such as a high-fidelity physical simulation, an intractable integral, or a slowly-converging iterative algorithm. Thus, using an MCMC algorithm with an expensive target density becomes impractical, as these expensive computations need to be evaluated at each iteration of the algorithm. In practice, these computations are often approximated via a cheaper, low-fidelity computation, leading to bias in the resulting target density. Multi-fidelity MCMC algorithms combine models of varying fidelities in order to obtain an approximate target density with lower computational cost. In this paper, we describe a class of asymptotically exact multi-fidelity MCMC algorithms for the setting where a sequence of models of increasing fidelity can be computed that approximates the expensive target density of interest. We take a pseudo-marginal MCMC approach for multi-fidelity inference that utilizes a cheaper, randomized-fidelity unbiased estimator of the target fidelity constructed via random truncation of a telescoping series of the low-fidelity sequence of models. Finally, we discuss and evaluate the proposed multi-fidelity MCMC approach on several applications, including log-Gaussian Cox process modeling, Bayesian ODE system identification, PDE-constrained optimization, and Gaussian process parameter inference. | Accept | In "multi-fidelity" Monte Carlo, a sequence of estimators of increasing cost and quality (in the sense of approximating some, typically intractable, limiting model) is available. The authors define a target distribution on an extended space including both fidelity and model parameters. Then, a Markov chain which targets a distribution over this extended space can recover the marginal of interest.
All reviewers were positive about the work, and there seems to be a consensus that the paper is well-written and clear, that the method is applicable in a broad setting, and that it demonstrates an advantage on the tested benchmarks (albeit perhaps low-dimensional ones).
Reviewer ga3Z raised a valid concern about the mixing time, considering the nature of the unbiased estimation methods based on couplings described in Section 3. These methods achieve unbiased estimates at the expense of estimation variance and expected running time (time to coupling). The reviewer mentions that this algorithm can still suffer from the same mixing problems that annealing-based methods suffer from. Such limitations should be made explicit in the paper.
After the rebuttal, I feel that most concerns have been addressed and the paper can be a valuable contribution to NeurIPS; hence, I suggest acceptance.
| val | [
"Hlz7G5aFvQp",
"NOlKnv12qVO",
"e8iOjvx5zMp",
"suDzhxArEME",
"yCfe301gcQ5",
"9bjDwIEzt89",
"ScXH9hh4W",
"o8Mxw2wPfMY",
"NP8HIrlw-f",
"6ApBn89smJP",
"jWRKQ3FVDUd",
"UtLE0huSXEk",
"fe8eMmsrCgE"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response and further engagement in the discussion!\n\n**Tuning the fidelity K:**\n\nIn practice, there’s two types of biases: (1) the bias from a finite number of samples (which both methods will have), and (2) asymptotic bias (the fixed K methods will suffer from, whereas our method does not). Fo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"e8iOjvx5zMp",
"o8Mxw2wPfMY",
"yCfe301gcQ5",
"9bjDwIEzt89",
"6ApBn89smJP",
"jWRKQ3FVDUd",
"UtLE0huSXEk",
"fe8eMmsrCgE",
"nips_2022_dix1iktX7Qt",
"nips_2022_dix1iktX7Qt",
"nips_2022_dix1iktX7Qt",
"nips_2022_dix1iktX7Qt",
"nips_2022_dix1iktX7Qt"
] |
nips_2022_RczPtvlaXPH | Turbocharging Solution Concepts: Solving NEs, CEs and CCEs with Neural Equilibrium Solvers | Solution concepts such as Nash Equilibria, Correlated Equilibria, and Coarse Correlated Equilibria are useful components for many multiagent machine learning algorithms. Unfortunately, solving a normal-form game could take prohibitive or non-deterministic time to converge, and could fail. We introduce the Neural Equilibrium Solver which utilizes a special equivariant neural network architecture to approximately solve the space of all games of fixed shape, buying speed and determinism. We define a flexible equilibrium selection framework that is capable of uniquely selecting an equilibrium that minimizes relative entropy, or maximizes welfare. The network is trained without needing to generate any supervised training data. We show remarkable zero-shot generalization to larger games. We argue that such a network is a powerful component for many possible multiagent algorithms. | Accept | This paper introduces a Neural Network-based Equilibrium Solver which utilizes a special equivariant neural network architecture
to approximately predict NEs, CEs, and CCEs of normal-form games. Experiments show the effectiveness of the proposed methods across multiple datasets. All reviewers support the acceptance of this paper.
While I agree that the merits of this paper warrant acceptance, I'd also recommend the authors revise the final version regarding the theoretical complexity of finding an equilibrium. (1) Line 18 states that "solving for an equilibrium of a game can be computationally complex [9, 8]"; in fact, the cited intractability results only apply to finding a Nash equilibrium in multiplayer general-sum games. Finding a CE/CCE can always be done by LP, which is tractable and guaranteed to finish in polynomial time. (2) This paper emphasizes that prior methods may take a non-deterministic time to converge, while the method proposed in this paper gives determinism. However, it appears that the method proposed in this paper is not provided with guarantees to converge within a certain time (thus without determinism either). It would be better if the authors could clarify or modify the corresponding arguments.
| val | [
"qafSDMGow_j",
"6nPp6Lc0POh",
"VOLwqsL8GXa",
"2hmI1B9gcJ2",
"oyveI9vnC6",
"mJ-pMplUYX",
"vYkeHUtaeGk",
"50G7s-_utAB",
"XIQHTndijM3",
"zeKHN-iwmGex",
"FVujAJV5Os",
"CyZ6V6D5tM",
"YktZjCo4XM",
"oeOQJoXxi80",
"qjaq12HPtUi",
"hRMg7PIEN8A",
"4UjaaTk2vwn",
"QUbPi4nY8PU",
"7VSeZ815aK"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" I have found myself repeatedly coming back to thinking about this paper over the past several days. I think the insight and direction here is fundamentally important, though it is initially difficult to see because it originally seems very niche and is far from traditional approaches. While I do not know how wel... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"vYkeHUtaeGk",
"oyveI9vnC6",
"2hmI1B9gcJ2",
"CyZ6V6D5tM",
"qjaq12HPtUi",
"XIQHTndijM3",
"FVujAJV5Os",
"zeKHN-iwmGex",
"YktZjCo4XM",
"oeOQJoXxi80",
"oeOQJoXxi80",
"7VSeZ815aK",
"oeOQJoXxi80",
"QUbPi4nY8PU",
"4UjaaTk2vwn",
"nips_2022_RczPtvlaXPH",
"nips_2022_RczPtvlaXPH",
"nips_2022_... |
nips_2022_4B7azgAbzda | Learning dynamics of deep linear networks with multiple pathways | Not only have deep networks become standard in machine learning, they are increasingly of interest in neuroscience as models of cortical computation that capture relationships between structural and functional properties. In addition, they are a useful target of theoretical research into the properties of network computation. Deep networks typically have a serial or approximately serial organization across layers, and this is often mirrored in models that purport to represent computation in mammalian brains. There are, however, multiple examples of parallel pathways in mammalian brains. In some cases, such as the mouse, the entire visual system appears arranged in a largely parallel, rather than serial fashion. While these pathways may be formed by differing cost functions that drive different computations, here we present a new mathematical analysis of learning dynamics in networks that have parallel computational pathways driven by the same cost function. We use the approximation of deep linear networks with large hidden layer sizes to show that, as the depth of the parallel pathways increases, different features of the training set (defined by the singular values of the input-output correlation) will typically concentrate in one of the pathways. This result is derived analytically and demonstrated with numerical simulation. Thus, rather than sharing stimulus and task features across multiple pathways, parallel network architectures learn to produce sharply diversified representations with specialized and specific pathways, a mechanism which may hold important consequences for codes in both biological and artificial systems. | Accept | Reviewers agree that this is a sound and well-presented contribution. | test | [
"2t6QMgtQuL",
"Bhhk9JsjvMG",
"uBHHMGtWho",
"HX-likiCyPY",
"1wPO8bMLVXO",
"x-2m3hYB5UW",
"HVb0bIRjJh",
"IhtS8SwcaZu",
"7LhQcZxwsjX",
"tdD5bERt_2E",
"IQKjx2lMZJw",
"McpBTkhXBk4"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The fact that AlexNet learning two sets of features (color blobs versus gray-level Gabor) in the two GPUs is always very intriguing to me, and your result might actually provide an insight to what this is so. Thanks also for the extensive discussion on PCA. But how about ICA -- or Foldiak's blind source separati... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"7LhQcZxwsjX",
"IQKjx2lMZJw",
"tdD5bERt_2E",
"McpBTkhXBk4",
"IQKjx2lMZJw",
"tdD5bERt_2E",
"McpBTkhXBk4",
"IQKjx2lMZJw",
"tdD5bERt_2E",
"nips_2022_4B7azgAbzda",
"nips_2022_4B7azgAbzda",
"nips_2022_4B7azgAbzda"
] |
nips_2022_G5ADoRKiTyJ | Fine-tuning language models to find agreement among humans with diverse preferences | Recent work in large language modeling (LLMs) has used fine-tuning to align outputs with the preferences of a prototypical user. This work assumes that human preferences are static and homogeneous across individuals, so that aligning to a single "generic" user will confer more general alignment. Here, we embrace the heterogeneity of human preferences to consider a different challenge: how might a machine help people with diverse views find agreement? We fine-tune a 70 billion parameter LLM to generate statements that maximize the expected approval for a group of people with potentially diverse opinions. Human participants provide written opinions on thousands of questions touching on moral and political issues (e.g., "should we raise taxes on the rich?"), and rate the LLM's generated candidate consensus statements for agreement and quality. A reward model is then trained to predict individual preferences, enabling it to quantify and rank consensus statements in terms of their appeal to the overall group, defined according to different aggregation (social welfare) functions. The model produces consensus statements that are preferred by human users over those from prompted LLMs ($>70\%$) and significantly outperforms a tight fine-tuned baseline that lacks the final ranking step. Further, our best model's consensus statements are preferred over the best human-generated opinions ($>65\%$). We find that when we silently constructed consensus statements from only a subset of group members, those who were excluded were more likely to dissent, revealing the sensitivity of the consensus to individual contributions. These results highlight the potential to use LLMs to help groups of humans align their values with one another. | Accept | This paper tackles the interesting task of training a language model to generate a consensus statement that maximize the expected approval of a group of people with diverse opinions. The proposed approach that is based on human evaluations of LM-generated statements and training of a reward model, is compared with a set of baselines. Based on reviewers' suggestions, authors performed additional experiments with smaller models and included analysis of results.
SACs, please note that reviewer bGmJ changed their rating from 3: Reject to 6: Weak Accept verbally, but this change isn't reflected in the average rating. | train | [
"Z9KxFqcMz7",
"Bmi03_UZ5ia",
"BHT2OrymJP4",
"TrwdUte_TPi",
"4ksySI_Ggeu",
"KSF5w72jUMG",
"h3qVnfXIOOa",
"_4PQyzV3jAJ",
"FyRor5CmMDz",
"7UhaUqrzTi",
"WaU9gAXYNlp",
"_HcejMHSr0",
"f8A50V9PYWd",
"ppuq_Fivph4",
"DPHVid6dek"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the comprehensive reply! I think my concerns are adequately resolved.\n\nThanks for resolving my question on first-round training! I guess I missed that in my reading.\n\nIt is great to see GPT-2 can also work well on this task.\n\n\n",
" We wish to thank the reviewers again for helping us improve... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"f8A50V9PYWd",
"nips_2022_G5ADoRKiTyJ",
"DPHVid6dek",
"DPHVid6dek",
"DPHVid6dek",
"ppuq_Fivph4",
"ppuq_Fivph4",
"ppuq_Fivph4",
"f8A50V9PYWd",
"f8A50V9PYWd",
"_HcejMHSr0",
"nips_2022_G5ADoRKiTyJ",
"nips_2022_G5ADoRKiTyJ",
"nips_2022_G5ADoRKiTyJ",
"nips_2022_G5ADoRKiTyJ"
] |
nips_2022_7nypt7cjNL | Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference | A surprising phenomenon in modern machine learning is the ability of a highly overparameterized model to generalize well (small error on the test data) even when it is trained to memorize the training data (zero error on the training data). This has led to an arms race towards increasingly overparameterized models (c.f., deep learning). In this paper, we study an underexplored hidden cost of overparameterization: the fact that overparameterized models may be more vulnerable to privacy attacks, in particular the membership inference attack that predicts the (potentially sensitive) examples used to train a model. We significantly extend the relatively few empirical results on this problem by theoretically proving for an overparameterized linear regression model in the Gaussian data setting that membership inference vulnerability increases with the number of parameters. Moreover, a range of empirical studies indicates that more complex, nonlinear models exhibit the same behavior. Finally, we extend our analysis towards ridge-regularized linear regression and show in the Gaussian data setting that increased regularization also increases membership inference vulnerability in the overparameterized regime. | Accept | This paper considers the problem of membership inference. The authors propose to use the tools of random matrix theory in the asymptotic regime to analyze membership inference in the simple case of a linear model on Gaussian data. They start by deriving the explicit advantage of the attack in the asymptotic regime. Further derivations allow the author to analyze several interesting machine learning ingredients, starting from L2 regularization (ridge regression). There, the authors show that regularization has the counterintuitive effect of increasing the performance of membership inference attacks. The referees are leaning toward acceptance and I concur. | train | [
"n5oNjy2OIq_",
"6w0RAxIKZV6",
"yQwqzrYCHQ",
"fQy_ttZidnU",
"zoW-7Uus0Hb",
"duFK_WctALW",
"7Fk2erIytID",
"rsogH4NSt-r",
"ivjj5UGVvw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for taking time to answer all of my comments. They were very helpful.\n\nAssuming the authors would correct the interpretation of their result (not to claim that more parameters always means more vulnerability) I would stay supportive of this paper and think that it could be a good paper for NeurIPS.",
"... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"zoW-7Uus0Hb",
"7Fk2erIytID",
"ivjj5UGVvw",
"rsogH4NSt-r",
"7Fk2erIytID",
"nips_2022_7nypt7cjNL",
"nips_2022_7nypt7cjNL",
"nips_2022_7nypt7cjNL",
"nips_2022_7nypt7cjNL"
] |
nips_2022_bAE1y8wG-ng | Few-Shot Fast-Adaptive Anomaly Detection | The ability to detect anomalies has long been recognized as an inherent human ability, yet to date, practical AI solutions to mimic such capability have been lacking. This lack of progress can be attributed to several factors. To begin with, the distribution of ``abnormalities'' is intractable. Anything outside of a given normal population is by definition an anomaly. This explains why a large volume of work in this area has been dedicated to modeling the normal distribution of a given task followed by detecting deviations from it. This direction is, however, unsatisfying as it would require modeling the normal distribution of every task that comes along, which includes tedious data collection. In this paper, we report our work aiming to handle these issues. To deal with the intractability of the abnormal distribution, we leverage the Energy Based Model (EBM). EBMs learn to associate low energies to correct values and higher energies to incorrect values. At its core, the EBM employs Langevin Dynamics (LD) in generating these incorrect samples based on an iterative optimization procedure, alleviating the intractable problem of modeling the world of anomalies. Then, in order to avoid training an anomaly detector for every task, we utilize an adaptive sparse coding layer. Our intention is to design a plug-and-play feature that can be used to quickly update what is normal during inference time. Lastly, to avoid tedious data collection, the aforementioned update of the sparse coding layer needs to be achievable with just a few shots. Here, we employ a meta-learning scheme that simulates such a few-shot setting during training. We support our findings with strong empirical evidence. | Accept | The final consensus from three reviewers knowledgeable in the field was that the paper makes an interesting contribution in the area of anomaly detection. The empirical results were seen as particularly impressive, and the treatment of intermediate samples from Langevin dynamics as abnormal was also seen as offering some novelty. My own assessment is a bit more qualified than the reviewers: while the empirical results are certainly nice, the approach itself seems a somewhat complex combination of ideas from the literature. (This complexity was also noted in one of the reviews.) While we uphold the reviewers' verdict, we encourage the authors to spend a bit more time (perhaps in Sec 2) drawing out some higher-level insights of the proposed approach, and whether there might be simpler alternatives that could also work well.
"JTa4RbCSTkA",
"mPUmz25raqI",
"uLNo33HWgjmK",
"qSFqbuAeFof",
"kvzDQ60Qzle",
"YOghDJR6F_U",
"VqQvqzq9LC9",
"p0hp-dbwXyb",
"P4rUcBhf0Mg"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for updating the algorithm and clarification on co-variate shift",
" We appreciate reviewer's prompt feedback, and feel free to let us know if there are further questions. ",
" Thanks for the updates. I have updated my score based on the author response.",
" We thank the reviewer for their constructi... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"kvzDQ60Qzle",
"uLNo33HWgjmK",
"qSFqbuAeFof",
"P4rUcBhf0Mg",
"p0hp-dbwXyb",
"VqQvqzq9LC9",
"nips_2022_bAE1y8wG-ng",
"nips_2022_bAE1y8wG-ng",
"nips_2022_bAE1y8wG-ng"
] |
nips_2022_Vu-B0clPfq | Transformer Memory as a Differentiable Search Index | In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup. | Accept | This paper proposes a differentiable search index paradigm that transforms an IR problem to a generation problem of doc IDs. Reviewers generally find the paper novel and interesting. However, a few limitations are also pointed out. For example, the approach does not work with unseen docs. | train | [
"1nfmk3N4Cyl",
"0JGkLvLDf6ac",
"dgbWQAh55291",
"Q7792E5ZG7X",
"oOFPmKvfr1z",
"VHdIf4A8Znn",
"iQVucrhN_8",
"dKAr3iRnQN3",
"n5tdZDU3-4z"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for your helpful comments. We respond to your specific questions below, please see the overall comment for our general response. \n\n> Can the process of using doc-id to determine the document itself also be understood as a kind of index? How can this process be distinguished from the defini... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"n5tdZDU3-4z",
"dKAr3iRnQN3",
"iQVucrhN_8",
"VHdIf4A8Znn",
"nips_2022_Vu-B0clPfq",
"nips_2022_Vu-B0clPfq",
"nips_2022_Vu-B0clPfq",
"nips_2022_Vu-B0clPfq",
"nips_2022_Vu-B0clPfq"
] |
nips_2022_ZBlaix34YX | LOT: Layer-wise Orthogonal Training on Improving l2 Certified Robustness | Recent studies show that training deep neural networks (DNNs) with Lipschitz constraints are able to enhance adversarial robustness and other model properties such as stability. In this paper, we propose a layer-wise orthogonal training method (LOT) to effectively train 1-Lipschitz convolution layers via parametrizing an orthogonal matrix with an unconstrained matrix. We then efficiently compute the inverse square root of a convolution kernel by transforming the input domain to the Fourier frequency domain. On the other hand, as existing works show that semi-supervised training helps improve empirical robustness, we aim to bridge the gap and prove that semi-supervised learning also improves the certified robustness of Lipschitz-bounded models. We conduct comprehensive evaluations for LOT under different settings. We show that LOT significantly outperforms baselines regarding deterministic l2 certified robustness, and scales to deeper neural networks. Under the supervised scenario, we improve the state-of-the-art certified robustness for all architectures (e.g. from 59.04% to 63.50% on CIFAR-10 and from 32.57% to 34.59% on CIFAR-100 at radius $\rho=36/255$ for 40-layer networks). With semi-supervised learning over unlabelled data, we are able to improve state-of-the-art certified robustness on CIFAR-10 at $\rho=108/255$ from 36.04% to 42.39%. In addition, LOT consistently outperforms baselines on different model architectures with only 1/3 evaluation time. | Accept | This paper proposes a training framework named Layer-wise Orthogonal Training (LOT) that aims to train 1-Lipschitz convolution layers effectively and improves the certified robustness of Lipschitz-bounded models. The paper also proves that semi-supervised learning can benefit the robustness of Lipschitz-bounded models.
All reviewers agree that this work is new, and the empirical improvement is significant. I follow the majority and recommend acceptance. | train | [
"17R-Y1a3hoM",
"Ga-OGO6jJ-d",
"5qi0hgEwAm",
"Ja-J8BTWzC0",
"MitIwQGVi_a",
"wgvDQQjPFLT",
"nqiX7XIRApW",
"dClEM_wfZv",
"c6gVKZmNdLb",
"QVSbUL8Su8V",
"5DZstejo80l",
"5c4WJFv5j8"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing my concerns and showing the additional experiments!",
" Thanks for the detailed responses, with strengthened experiments.",
" We thank all the reviewers for their comments and valuable feedback. We have made the following major updates following the reviews to further improve our work.\n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"wgvDQQjPFLT",
"c6gVKZmNdLb",
"nips_2022_ZBlaix34YX",
"5c4WJFv5j8",
"5DZstejo80l",
"5DZstejo80l",
"QVSbUL8Su8V",
"QVSbUL8Su8V",
"QVSbUL8Su8V",
"nips_2022_ZBlaix34YX",
"nips_2022_ZBlaix34YX",
"nips_2022_ZBlaix34YX"
] |
nips_2022_wTp4KgVIJ5 | SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning | Federated min-max learning has received increasing attention in recent years thanks to its wide range of applications in various learning paradigms. Similar to the conventional federated learning for empirical risk minimization problems, communication complexity also emerges as one of the most critical concerns that affects the future prospect of federated min-max learning. To lower the communication complexity of federated min-max learning, a natural approach is to utilize the idea of infrequent communications (through multiple local updates) same as in conventional federated learning. However, due to the more complicated inter-outer problem structure in federated min-max learning, theoretical understandings of communication complexity for federated min-max learning with infrequent communications remain very limited in the literature. This is particularly true for settings with non-i.i.d. datasets and partial client participation. To address this challenge, in this paper, we propose a new algorithmic framework called \ul{s}tochastic \ul{s}ampling \ul{a}veraging \ul{g}radient \ul{d}escent \ul{a}scent ($\mathsf{SAGDA}$), which i) assembles stochastic gradient estimators from randomly sampled clients as control variates and ii) leverages two learning rates on both server and client sides. We show that $\mathsf{SAGDA}$ achieves a linear speedup in terms of both the number of clients and local update steps, which yields an $\mathcal{O}(\epsilon^{-2})$ communication complexity that is orders of magnitude lower than the state of the art. Interestingly, by noting that the standard federated stochastic gradient descent ascent (FSGDA) is in fact a control-variate-free special version of $\mathsf{SAGDA}$, we immediately arrive at an $\mathcal{O}(\epsilon^{-2})$ communication complexity result for FSGDA. Therefore, through the lens of $\mathsf{SAGDA}$, we also advance the current understanding on communication complexity of the standard FSGDA method for federated min-max learning. | Accept | While the reviewers have shown some concerns regarding presentations, the paper has substantial contribution. It is suggested that the authors improve their presentation taking the suggestions from the reviewers: this paper will gain from clarifying the contributions. Overall the novelty of min-max optimization coupled with interesting results warrants acceptance. | train | [
"njneg4sWCUA",
"CCt9Rgxpy9m",
"1mkDha8d4PC",
"VXx0c0qYI97",
"e7l0Jyptdw-",
"eE40BDujf3",
"-gzvfywbiFC",
"adJIWnrB69",
"xsXK1RwGmK",
"kLRb62Lmrzz",
"IVSeWGV6dp3"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks so much for going through our previous response carefully and raising your score! We will definitely incorporate your comments and suggestions in our revision. Thanks!\n\nBest,\nAuthors",
" I would like to thank the authors for their detailed rebuttal. Almost all my questions have been answered and I hop... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"CCt9Rgxpy9m",
"eE40BDujf3",
"VXx0c0qYI97",
"IVSeWGV6dp3",
"kLRb62Lmrzz",
"-gzvfywbiFC",
"adJIWnrB69",
"xsXK1RwGmK",
"nips_2022_wTp4KgVIJ5",
"nips_2022_wTp4KgVIJ5",
"nips_2022_wTp4KgVIJ5"
] |
nips_2022_L3uTDctm3s9 | Data Augmentation for Compositional Data: Advancing Predictive Models of the Microbiome | Data augmentation plays a key role in modern machine learning pipelines. While numerous augmentation strategies have been studied in the context of computer vision and natural language processing, less is known for other data modalities. Our work extends the success of data augmentation to compositional data, i.e., simplex-valued data, which is of particular interest in microbiology, geochemistry, and other applications. Drawing on key principles from compositional data analysis, such as the \emph{Aitchison geometry of the simplex} and subcompositions, we define novel augmentation strategies for this data modality. Incorporating our data augmentations into standard supervised learning pipelines results in consistent performance gains across a wide range of standard benchmark datasets. In particular, we set a new state-of-the-art for key disease prediction tasks including colorectal cancer, type 2 diabetes, and Crohn's disease. In addition, our data augmentations enable us to define a novel contrastive learning model, which improves on previous representation learning approaches for microbiome compositional data. | Accept |
This paper proposes three different, easy-to-implement methods for data augmentation when learning models on compositional data (where each feature lies in a potentially high-dimensional simplex). The basic idea is that for such data, it is important to create augmentations that respect the fact that the data lie within the simplex. There was consensus among the reviewers that this work should be accepted. The work is simple, interesting, and results in empirical improvements across various choices of models of outcomes. I think this work can have an impact for those building predictive models in applications of machine learning such as microbiome data.
| test | [
"HqeKBt_RdgP",
"eW2akvUf2t-",
"q9IteFeLrbT",
"fDQkzSqIk2b",
"lY-pbZoCLm7",
"7QduoPaV4Zp",
"DIbbgpfQsHQ",
"LMHmw4-SGMy",
"twb-gQTchFN",
"ukTU3EbjtMa"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for reviewing our updates and for your response. We agree that the contrastive learning section could be improved by contextualizing our experiments upfront; we shall explicitly state its goal as a proof-of-concept for contrastive learning on CoDa. We also agree with moving some of the implementation ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"eW2akvUf2t-",
"7QduoPaV4Zp",
"fDQkzSqIk2b",
"lY-pbZoCLm7",
"LMHmw4-SGMy",
"twb-gQTchFN",
"ukTU3EbjtMa",
"nips_2022_L3uTDctm3s9",
"nips_2022_L3uTDctm3s9",
"nips_2022_L3uTDctm3s9"
] |
nips_2022_5TfqL2gWdV9 | Invariance-Aware Randomized Smoothing Certificates | Building models that comply with the invariances inherent to different domains, such as invariance under translation or rotation, is a key aspect of applying machine learning to real world problems like molecular property prediction, medical imaging, protein folding or LiDAR classification. For the first time, we study how the invariances of a model can be leveraged to provably guarantee the robustness of its predictions. We propose a gray-box approach, enhancing the powerful black-box randomized smoothing technique with white-box knowledge about invariances. First, we develop gray-box certificates based on group orbits, which can be applied to arbitrary models with invariance under permutation and Euclidean isometries. Then, we derive provably tight gray-box certificates. We experimentally demonstrate that the provably tight certificates can offer much stronger guarantees, but that in practical scenarios the orbit-based method is a good approximation. | Accept | The paper presents the first study in the literature about the interplay of model invariance and robustness certification. While the approach is novel, the techniques are usual and the scope of the invariances considered are a bit narrow, which makes the paper somewhat borderline. Nevertheless, all reviewers recommend acceptance. | train | [
"b7HE5unirP",
"9vmShj_UEu",
"ZBILL7qbI6f",
"oHT5oHT3rak",
"1lBvXmwKCC",
"UCFBtVNUZPK",
"8AN4WQ_CldV",
"jrLRWplgwB1",
"kq4C4Xmtm87",
"qHa_YIu4KC6",
"HfMMU-7hd7G"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nThank you for your hard work in responding exhaustively to my review, and for updating the submission.\n\nI will raise my rating by a point after the rebuttal and reading the reviews of the other referees. I think this work can be a useful investigative contribution for future studies of this top... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"8AN4WQ_CldV",
"nips_2022_5TfqL2gWdV9",
"oHT5oHT3rak",
"HfMMU-7hd7G",
"UCFBtVNUZPK",
"qHa_YIu4KC6",
"jrLRWplgwB1",
"kq4C4Xmtm87",
"nips_2022_5TfqL2gWdV9",
"nips_2022_5TfqL2gWdV9",
"nips_2022_5TfqL2gWdV9"
] |
nips_2022_EZQnauHn-77 | Deep Compression of Pre-trained Transformer Models | Pre-trained transformer models have achieved remarkable success in natural language processing (NLP) and have recently become competitive alternatives to Convolution Neural Networks (CNN) and Recurrent Neural Networks (RNN) in vision and speech tasks, respectively. Due to excellent computational efficiency and scalability, transformer models can be trained on exceedingly large amounts of data; however, model sizes can grow tremendously. As high performance, large-scale, and pre-trained transformer models become available for users to download and fine-tune for customized downstream tasks, the deployment of these models becomes challenging due to the vast amount of operations and large memory footprint. To address this challenge, we introduce methods to deeply compress pre-trained transformer models across three major application domains: NLP, speech, and vision. Specifically, we quantize transformer backbones down to 4-bit and further achieve 50% fine-grained structural sparsity on pre-trained BERT, Wav2vec2.0 and Vision Transformer (ViT) models to achieve 16x compression while maintaining model accuracy. This is achieved by identifying the critical initialization for quantization/sparsity aware fine-tuning, as well as novel techniques including quantizers with zero-preserving format and scheduled dropout. These hardware-friendly techniques need only to be applied in the fine-tuning phase for downstream tasks; hence, are especially suitable for acceleration and deployment of pre-trained transformer models. | Accept | The paper presents quantization and sparsifying (50%) for transformers, and studies simple schemes for both quite extensively. The novelty is rather low but the value of the paper is in presenting results in 3 domains (NLP, ASR, image classification) and in paying attention to details of what matters to do quantization on pre-trained models with minimal loss of accuracy, as well as efficient implementation of those schemes. Overall, the paper is suitable for NeurIPS and I recommend acceptance. | train | [
"UlflM81mYWk",
"_AH9VjHMKAy",
"NPOQpMv1mb0",
"c_Jif9EZTAE",
"AlEHWovzsGN",
"3zWXYCJZpyh"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your review and thoughtful comments. We are revising our draft and will address your concerns and suggestions.\n \nTo answer your questions and comments:\n\n1). **Distillations**: We agree distillation is a powerful tool. In fact, some of the state-of-the-art works rely on distillation to ... | [
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
-1,
2,
2,
3
] | [
"3zWXYCJZpyh",
"AlEHWovzsGN",
"c_Jif9EZTAE",
"nips_2022_EZQnauHn-77",
"nips_2022_EZQnauHn-77",
"nips_2022_EZQnauHn-77"
] |
nips_2022_sn6BZR4WvUR | Chaotic Regularization and Heavy-Tailed Limits for Deterministic Gradient Descent | Recent studies have shown that gradient descent (GD) can achieve improved generalization when its dynamics exhibits a chaotic behavior. However, to obtain the desired effect, the step-size should be chosen sufficiently large, a task which is problem dependent and can be difficult in practice. In this study, we incorporate a chaotic component to GD in a controlled manner, and introduce \emph{multiscale perturbed GD} (MPGD), a novel optimization framework where the GD recursion is augmented with chaotic perturbations that evolve via an independent dynamical system. We analyze MPGD from three different angles: (i) By building up on recent advances in rough paths theory, we show that, under appropriate assumptions, as the step-size decreases, the MPGD recursion converges weakly to a stochastic differential equation (SDE) driven by a heavy-tailed L\'{e}vy-stable process. (ii) By making connections to recently developed generalization bounds for heavy-tailed processes, we derive a generalization bound for the limiting SDE and relate the worst-case generalization error over the trajectories of the process to the parameters of MPGD. (iii) We analyze the implicit regularization effect brought by the dynamical regularization and show that, in the weak perturbation regime, MPGD introduces terms that penalize the Hessian of the loss function. Empirical results are provided to demonstrate the advantages of MPGD. | Accept | A recent line of work on the role of stochasticity in ML suggests that variants of GD which use non-traditional step size schedules (a) may perform relatively well in certain settings (b) due to an implicit chaotic behavior. The present paper studies a variant of GD (MPGD) augmented with an explicit chaotic component, implemented by means of an external deterministic dynamical system, as a theoretical model for investigating these hypotheses. Recent results are shown to imply generalization bounds for the limiting stochastic process. Numerical results are provided for comparing the performance of MPGD to existing methods.
The reviewers have generally found the use of a GD variant with an explicit chaotic term, as well as the proposed analytic framework, interesting and appreciated the clarity and rigor of the results given in the paper. In later discussions, concerns regarding the relevance of the theoretical model to (a) and (b) above were raised by the reviewers, questioning more broadly the significance of MPGD and the respective limiting SDE to the general understanding of SGD/GD. All in all, I think this is a reasonable paper to accept if there is room. The authors are encouraged to revise the paper according to the important feedback given by the reviewers.
| train | [
"vnN2ATd7G7i",
"qah3tUNVbEF",
"P5NesZUrZ_Q",
"qTXsWlgaAWj",
"m5GlPlVps49",
"wwlUh7eUu4u",
"anF7o11KgYB",
"e5dobeS0TZ0",
"154PBnixGoK"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I think that adding this discussion to the paper will improve the clarity of the results.",
" We thank the reviewer for careful reading of the paper. As the reviewer correctly mentioned, we could not place the assumptions in the main text due to space constraints. However, in case our submission gets accepted, ... | [
-1,
-1,
-1,
-1,
-1,
8,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
3
] | [
"qah3tUNVbEF",
"154PBnixGoK",
"e5dobeS0TZ0",
"anF7o11KgYB",
"wwlUh7eUu4u",
"nips_2022_sn6BZR4WvUR",
"nips_2022_sn6BZR4WvUR",
"nips_2022_sn6BZR4WvUR",
"nips_2022_sn6BZR4WvUR"
] |
nips_2022_hw-n6BUmiyI | Task Discovery: Finding the Tasks that Neural Networks Generalize on | When developing deep learning models, we usually decide what task we want to solve then search for a model that generalizes well on the task. An intriguing question would be: what if, instead of fixing the task and searching in the model space, we fix the model and search in the task space? Can we find tasks that the model generalizes on? How do they look, or do they indicate anything? These are the questions we address in this paper.
We propose a task discovery framework that automatically finds examples of such tasks via optimizing a generalization-based quantity called agreement score. We demonstrate that one set of images can give rise to many tasks on which neural networks generalize well. These tasks are a reflection of the inductive biases of the learning framework and the statistical patterns present in the data, thus they can make a useful tool for analyzing the neural networks and their biases. As an example, we show that the discovered tasks can be used to automatically create ''adversarial train-test splits'' which make a model fail at test time, without changing the pixels or labels, but by only selecting how the datapoints should be split between the train and test sets. We end with a discussion on human-interpretability of the discovered tasks.
 | Accept | This paper contributes an interesting point of view: given a particular model, can we find tasks for which it will be good? To this end, the paper proposes an agreement score whose applications include finding generalizable sub-tasks and adversarial train-test splits. All reviewers were in favor of acceptance, generally agreeing that the paper provides theoretical and practical contributions.
"xcurwFZtx0O",
"Z4GCWVQQY_mE",
"q1KPkj-GuQF",
"MysKjD0myBB",
"474GkE7T5Gf",
"IJHtZMDI2c",
"T1kf5lJRlWj",
"OnCpCqDwtn",
"NOiNosOnZpr",
"GrzXUD6p79Y",
"KqTmHvmb-wY",
"RT5vTfd2sOHu",
"upILLVyDJJI",
"E6ZEvpF-xv",
"Im1StNnKAT",
"obzHW5bYoal"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the in depth replies. I believe that this area of research is very interesting and am definitely interested in seeing where it goes in the future.\n\nI maintain my original scores.",
" >How exactly is the AS being used to discover adversarial splits?\n\nAs described in Sec. 5.3, to construct an ad... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"NOiNosOnZpr",
"q1KPkj-GuQF",
"MysKjD0myBB",
"E6ZEvpF-xv",
"upILLVyDJJI",
"upILLVyDJJI",
"upILLVyDJJI",
"upILLVyDJJI",
"Im1StNnKAT",
"Im1StNnKAT",
"obzHW5bYoal",
"nips_2022_hw-n6BUmiyI",
"nips_2022_hw-n6BUmiyI",
"nips_2022_hw-n6BUmiyI",
"nips_2022_hw-n6BUmiyI",
"nips_2022_hw-n6BUmiyI"
... |
nips_2022_UmvSlP-PyV | Beyond neural scaling laws: beating power law scaling via data pruning | Widely observed neural scaling laws, in which error falls off as a power of the training set size, model size, or both, have driven substantial performance improvements in deep learning. However, these improvements through scaling alone require considerable costs in compute and energy. Here we focus on the scaling of error with dataset size and show both in theory and practice that we can break beyond power law scaling and reduce it to exponential scaling instead if we have access to a high-quality data pruning metric that ranks the order in which training examples should be discarded to achieve any pruned dataset size. We then test this new exponential scaling prediction with pruned dataset size empirically, and indeed observe better than power-law scaling performance on ResNets trained on CIFAR-10, SVHN, and ImageNet. Given the importance of finding high-quality pruning metrics, we perform the first large-scale benchmarking study of 9 different data pruning metrics on ImageNet. We find most existing high performing metrics scale poorly to ImageNet, while the best are computationally intensive and require labels for every image. We therefore developed a new simple, cheap and scalable self-supervised pruning metric that demonstrates comparable performance to the best supervised metrics. Overall, our work suggests that the discovery of good data-pruning metrics may provide a viable path forward to substantially improved neural scaling laws, thereby reducing the resource costs of modern deep learning. | Accept | The paper provides theory and experiments that power law scaling of error with respect to dataset size can be improved to "exponential" scaling by high-quality intelligent data pruning. The paper first provides a theoretical model to study the effect of pruning and show that it matches well in experiments. Beyond this, the paper also observes similar behavior on real datasets. The analysis demonstrates the possibility of beyond-power law scaling by utilizing pruning metrics in large scale settings.
Reviewers saw many strengths in the work: an "outstandingly good paper" with "thorough theoretical backing" and "great empirical design" `NsH6`; an "exciting paper for its bridge of phenomena and theory" with "significant insight and multiple important contributions to the field" `ycTR`; "interesting and novel insights" targeting an "important problem" which "evaluates its claims on a wide range of datasets…with strong theoretical foundation" `QtTy`; as well as "thorough evaluation" with a "wealth of empirical experiments" which will be "valuable to future work in the area" `D9bn`. The authors addressed questions and weaknesses raised by the reviewers during the very active author-reviewer discussion period. The AC encourages the authors to incorporate all the agreed suggested changes in the camera-ready version.
The AC agrees with the reviewers' sentiment that the study and insight will be important to the broad NeurIPS audience, where correctly characterizing the scaling behavior of neural networks is becoming even more important.
| train | [
"kpC1JdikmRy",
"wSzYmC3kbMi",
"0JFbp-YSC-",
"JCn30A8-FTU",
"wuCtx8oGuEP",
"Nm2XdrSbJ3d",
"jOXDaLbLshx",
"EvQlwmOF1f",
"i16tLa5mhxi",
"1nJZ1PUcTRN",
"QI6JDxty1_V",
"GjNPNz3ytJU",
"_XoGEbY3Sky",
"2iKMUSOmIw_",
"du9jE8tPNvJ",
"Pf0nMaufVRt",
"ERZZfc3Tkply",
"ut_A0oYwam",
"CMXzgMbZCdh... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"officia... | [
" Thanks again for all the valuable feedback - we will continue updating the manuscript as new results come in!",
" Thank you again for your comments - they have greatly improved the clarity of the manuscript - and for your enthusiasm regarding our work!",
" Thank you again for your comments and your recommenda... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
8,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
4
] | [
"Nm2XdrSbJ3d",
"1nJZ1PUcTRN",
"QI6JDxty1_V",
"ut_A0oYwam",
"nips_2022_UmvSlP-PyV",
"EvQlwmOF1f",
"i16tLa5mhxi",
"i16tLa5mhxi",
"_XoGEbY3Sky",
"ERZZfc3Tkply",
"QjsrHEro_bX",
"gdx2EhiGyY9",
"gdx2EhiGyY9",
"zSWvPjfZ6mA",
"zSWvPjfZ6mA",
"zSWvPjfZ6mA",
"g5A94siqbgz",
"zSWvPjfZ6mA",
"g... |
nips_2022_HjNn9oD_v47 | Unsupervised Learning for Combinatorial Optimization with Principled Objective Relaxation | Using machine learning to solve combinatorial optimization (CO) problems is challenging, especially when the data is unlabeled. This work proposes an unsupervised learning framework for CO problems. Our framework follows the standard relaxation-plus-rounding approach and adopts neural networks to parameterize the relaxed solutions so that simple back-propagation can train them end-to-end. Our key contribution is the observation that if the relaxed objective satisfies entry-wise concavity, a low optimization loss guarantees the quality of the obtained integral solutions. This observation significantly generalizes the applicability of the previous framework inspired by Erdos' probabilistic method (Karalias & Loukas, 2020). Our framework is particularly suitable to guide the design of objective models in the applications where the objectives are not given explicitly while requiring being modeled and learned first. We evaluate our framework by solving a synthetic graph optimization problem, and two real-world applications including resource allocation in circuit design and approximate computing. Our framework largely outperforms the baselines based on reinforcement learning and Gumbel-softmax tricks. | Accept | This paper proposes a new framework to train machine learning models to solve combinatorial optimization problems. Its key idea is to develop a relax-and-round solution-generating approach that allows backpropagation and enjoys theoretical guarantees for the rounding procedure. This approach generalizes the previous work (Karalia et al. 2020) to proxy CO settings where the objective function is expensive (and replaced by a proxy function).
The reviewers are all on the positive side of this paper. While the experiments are somewhat toy-like, all the reviewers appreciated the key idea and the theoretical results of this paper. I tend to agree and recommend acceptance for this paper.
However, I would strongly encourage the authors to improve the presentation in their final manuscript. Upon reading, the paper caused confusion about (1) the use of the terminology "unsupervised" and (2) its relationship to the prior work (Karalias & Loukas, 2020). Although these concerns have been resolved after a thorough discussion between reviewers and authors, I would be happy to see the corrections made in the final manuscript. | train | [
"5oOrKK8EmF2",
"W2XWPI8Fe0Q",
"Qivyvz2k_5n",
"7ruh2sxZ5os",
"8s3EtXUUD5F",
"S1KIhvG2Tcl",
"xQWgqEGC4lQ",
"NKKk9STlqe8",
"sapcRg5ha2E",
"thP3r7GGo03",
"ey2pqjGXf2q",
"fD17LLlmviR",
"ypSyWbDocsh",
"k97v904ZHna",
"ceeeGsdoo0e",
"ERESzAxLSmg",
"pwZ9FoscN-9",
"t6dHYF3IPO",
"i-I0h1u51O... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
... | [
" I appreciate the new results on pure CO, and I understand that the authors may not provide a revision due to limited time. Since there is no revised pdf, I am unsure some writing issues (such as \"unsupervised\" vs \"supervised\") will be adequately addressed in the final version. I raise the score by 1 in consid... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"7ruh2sxZ5os",
"pwZ9FoscN-9",
"8s3EtXUUD5F",
"S1KIhvG2Tcl",
"NKKk9STlqe8",
"f4-GKsKvLBs",
"NKKk9STlqe8",
"sapcRg5ha2E",
"thP3r7GGo03",
"ypSyWbDocsh",
"fD17LLlmviR",
"wF9qYp2omar",
"ceeeGsdoo0e",
"pwZ9FoscN-9",
"ERESzAxLSmg",
"TKQ9VLen5i7",
"wF9qYp2omar",
"rrM3CaStEYF",
"rrM3CaStE... |
nips_2022_dXiGWqBoxaD | GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale | Large language models have been widely adopted but require significant GPU memory for inference. We develop a procedure for Int8 matrix multiplication for feed-forward and attention projection layers in transformers, which cut the memory needed for inference by half while retaining full precision performance. With our method, a 175B parameter 16/32-bit checkpoint can be loaded, converted to Int8, and used immediately without performance degradation. This is made possible by understanding and working around properties of highly systematic emergent features in transformer language models that dominate attention and transformer predictive performance. To cope with these features, we develop a two-part quantization procedure, {\bf LLM.int8()}. We first use vector-wise quantization with separate normalization constants for each inner product in the matrix multiplication, to quantize most of the features. However, for the emergent outliers, we also include a new mixed-precision decomposition scheme, which isolates the outlier feature dimensions into a 16-bit matrix multiplication while still more than 99.9\% of values are multiplied in 8-bit. Using LLM.int8(), we show empirically it is possible to perform inference in LLMs with up to 175B parameters without any performance degradation. This result makes such models much more accessible, for example making it possible to use OPT-175B/BLOOM on a single server with consumer GPUs. We open source our software. | Accept | This paper provides a solid contribution to compressed ML by showing how to apply 8-bit arithmetic to large-scale transformers. All the reviewers had a borderline or positive impression of the paper, with agreement among the reviewers that the results would be interesting to
the community especially inasmuch as they represent an empirical exploration of compression on much larger scale models than previous studies. The result about the phase transition is particularly interesting, and may merit future work. While a couple of the reviewers assigned borderline scores, the weaknesses pointed out in the reviews mostly coalesced around questions of novelty and comparison to the literature: while I agree that novelty is important, I think that identifying a simple way to combine and adapt some subset of existing methods to work at new scales _is_ itself a novel contribution. (The authors should, of course, make sure to cite the additional papers mentioned by the reviewers as proposed in their author response.) And the author response does a good job of contextualizing the reviewers' concerns: there is no unaddressed weakness that would amount to a strong justification for rejection. At the same time, all reviewers seem to agree on the strengths of the paper, and this sort of result will be interesting and useful to the community. As such, I recommend acceptance. | train | [
"TEek07GWtE5",
"HHDNaU52K-u",
"ZVBljQ81zXE",
"DSv9pt0RRcj",
"hLR0bZscSat",
"o-rQcObkOxs",
"K2MHihUlp4P",
"aF6nBCcyA3U",
"ZYMTwxaIwAM",
"6VYS2LWEYlJ",
"X1e6TnuMJco",
"y0H9BZhwLTj",
"aN3dEbw07Jm",
"i5_gS49F0DQ"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your timely response and for highlighting that the experimental results that we show for very large models is rare and valuable.\n\nI think we were not quite clear that we would add these results to the current paper. By next version we meant the version which we would use if our paper would get acc... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"HHDNaU52K-u",
"K2MHihUlp4P",
"DSv9pt0RRcj",
"o-rQcObkOxs",
"6VYS2LWEYlJ",
"i5_gS49F0DQ",
"aN3dEbw07Jm",
"y0H9BZhwLTj",
"X1e6TnuMJco",
"X1e6TnuMJco",
"nips_2022_dXiGWqBoxaD",
"nips_2022_dXiGWqBoxaD",
"nips_2022_dXiGWqBoxaD",
"nips_2022_dXiGWqBoxaD"
] |
nips_2022_d-kvI4YdNu | The Mechanism of Prediction Head in Non-contrastive Self-supervised Learning | The surprising discovery of the BYOL method shows the negative samples can be replaced by adding the prediction head to the network. It is mysterious why even when there exist trivial collapsed global optimal solutions, neural networks trained by (stochastic) gradient descent can still learn competitive representations. In this work, we present our empirical and theoretical discoveries on non-contrastive self-supervised learning. Empirically, we find that when the prediction head is initialized as an identity matrix with only its off-diagonal entries being trainable, the network can learn competitive representations even though the trivial optima still exist in the training objective. Theoretically, we characterized the substitution effect and acceleration effect of the trainable, but identity-initialized prediction head. The substitution effect happens when learning the stronger features in some neurons can substitute for learning these features in other neurons through updating the prediction head. And the acceleration effect happens when the substituted features can accelerate the learning of other weaker features to prevent them from being ignored. These two effects enable the neural networks to learn diversified features rather than focus only on learning the strongest features, which is likely the cause of the dimensional collapse phenomenon. To the best of our knowledge, this is also the first end-to-end optimization guarantee for non-contrastive methods using nonlinear neural networks with a trainable prediction head and normalization. | Accept | This paper proposed a theoretical explanation for the open question of why BYOL-style (non-contrastive) self-supervised learning does not collapse to trivial solutions. This paper could be an important contribution to better understanding this widely observed but poorly understood phenomenon. Although there are still some issues raised by the reviewers that are not addressed during the rebuttal, it is generally agreed that the contribution of this paper still outweighs the issues. Therefore, I decide to accept this paper, but urge the authors to fully address the reviewers' comments in the final revision (e.g. missing Fig.3, clarification of Def. 3.1, etc.). | train | [
"cfhX3I537QJ",
"_ckygStxRr1",
"uWZb-xDNeIo",
"8-v3RQ43x9T",
"p4EFFf-tSfV",
"mEge9B1HfVD",
"6LtHohgpcZcs",
"NUo-IPepKYc",
"ByHk6X37PEp",
"W2hXPXYfudB",
"sXql4LcFh2",
"o2AVbdNzK9A"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their responses about the role of non-linearity and intuitions for the different stages. It would be interesting to check whether a linear network succeeds or fails under the setting being analyzed, since the author response suggests that the outcome is unclear (with failure being more lik... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
2,
4
] | [
"NUo-IPepKYc",
"6LtHohgpcZcs",
"mEge9B1HfVD",
"sXql4LcFh2",
"W2hXPXYfudB",
"o2AVbdNzK9A",
"sXql4LcFh2",
"ByHk6X37PEp",
"nips_2022_d-kvI4YdNu",
"nips_2022_d-kvI4YdNu",
"nips_2022_d-kvI4YdNu",
"nips_2022_d-kvI4YdNu"
] |
nips_2022_bdfJCeWDKUB | Learning low-dimensional generalizable natural features from retina using a U-net | Much of sensory neuroscience focuses on sensory features that are chosen by the experimenter because they are thought to be behaviorally relevant to the organism. However, it is not generally known what these features are in complex, natural scenes. This work focuses on using the retinal encoding of natural movies to determine the presumably behaviorally-relevant features that the brain represents. It is prohibitive to parameterize a natural movie and its respective retinal encoding fully. We use time within a natural movie as a proxy for the whole suite of features evolving across the scene. We then use a task-agnostic deep architecture, an encoder-decoder, to model the retinal encoding process and characterize its representation of ``time in the natural scene'' in a compressed latent space. In our end-to-end training, an encoder learns a compressed latent representation from a large population of salamander retinal ganglion cells responding to natural movies, while a decoder samples from this compressed latent space to generate the appropriate movie frame. By comparing latent representations of retinal activity from three movies, we find that the retina performs transfer learning to encode time: the precise, low-dimensional representation of time learned from one movie can be used to represent time in a different movie, with up to 17ms resolution. We then show that static textures and velocity features of a natural movie are synergistic. The retina simultaneously encodes both to establish a generalizable, low-dimensional representation of time in the natural scene. | Accept | The authors analyze the latent representation of visual features from natural movies in the salamander retina using a U-net. They train an encoder to learn a compressed latent representation from a large population of salamander retinal ganglion cells responding to natural movies, while a decoder samples from this compressed latent space to generate the appropriate movie frame. They characterize its representation of “time in the natural scene” in the latent space of their model.
Overall, the reviewers expressed a lot of interest in the topic and valued the novel application to salamander retinal data. There were some questions about the significance of the finding, and through the rebuttal period, the authors provided a number of experiments to compare against other variants of their baselines (and ablations), with some reviewers increasing their scores in favor of acceptance.
At the same time, there was concern that the U-Net architecture could potentially reconstruct the movie without any retinal data. Thus, it was not entirely clear whether the model was truly leveraging the retinal data to obtain meaningful outputs. Unfortunately, this concern was not fully addressed in the revision, leaving the reviewers overall with mixed views but the majority in favor of acceptance.
| train | [
"Yyy0JsyNRHL",
"EakLGf33dYb",
"BUrH7nrHRjY",
"PpxeL2h5zQH",
"BC-JeRbCkx",
"TU8nXigbfA",
"5BEZdMsDyH",
"Fja83Mz6lGR",
"iGW4T7nIyVB",
"9TnWGECFywW",
"FWJa6LVXmL2",
"LHLEnG-7cHC",
"2hmM1lNmjcG",
"VMnI4FsgIRN",
"JAvRfn_qedu2",
"jjCZBO82Fie",
"1VHNRYWfCCZ",
"lz8QSig4ey",
"1o4dwV7PsM2"... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for reading our reply. We apologize that there are still some misunderstanding on the primary aim and the neuroscience context of our paper. Unfortunately, we also find some of the comments not relevant for our paper or impossible within the current experiment limit for us to improve our work. \n\n1) \"a ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
2
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"EakLGf33dYb",
"2hmM1lNmjcG",
"BC-JeRbCkx",
"9TnWGECFywW",
"Fja83Mz6lGR",
"FWJa6LVXmL2",
"nips_2022_bdfJCeWDKUB",
"iGW4T7nIyVB",
"LHLEnG-7cHC",
"lz8QSig4ey",
"1VHNRYWfCCZ",
"VMnI4FsgIRN",
"JAvRfn_qedu2",
"jjCZBO82Fie",
"JhSIYt1dZi",
"5TOOH64_w5L",
"W4thjAcks8Z",
"1o4dwV7PsM2",
"n... |
nips_2022_IbBHnPyjkco | (De-)Randomized Smoothing for Decision Stump Ensembles | Tree-based models are used in many high-stakes application domains such as finance and medicine, where robustness and interpretability are of utmost importance. Yet, methods for improving and certifying their robustness are severely under-explored, in contrast to those focusing on neural networks. Targeting this important challenge, we propose deterministic smoothing for decision stump ensembles. Whereas most prior work on randomized smoothing focuses on evaluating arbitrary base models approximately under input randomization, the key insight of our work is that decision stump ensembles enable exact yet efficient evaluation via dynamic programming. Importantly, we obtain deterministic robustness certificates, even jointly over numerical and categorical features, a setting ubiquitous in the real world. Further, we derive an MLE-optimal training method for smoothed decision stumps under randomization and propose two boosting approaches to improve their provable robustness. An extensive experimental evaluation on computer vision and tabular data tasks shows that our approach yields significantly higher certified accuracies than the state-of-the-art for tree-based models. We release all code and trained models at https://github.com/eth-sri/drs. | Accept | There were substantial discussions around this paper and its contributions. The authors did a good job of explaining and interacting with reviewers (with, as a consequence, a substantial raise of scores). To prepare the camera-ready version, it is strongly suggested to include the material introduced at discussion time, including the experimental results (K1dy) and use the intuitive explanation provided for the dynamic programming approach (7Kxj) to revamp a section / paragraph on a high-level explanation of the approach. | test | [
"IRliIT1me_",
"oIyiY2NKiZj",
"3j2_E6Ii-i",
"uY3MA5J3Izf_",
"5AnsSm7UIqp",
"lflsCUQHeq7L",
"uDraMfBZvaZG",
"XTXMUu7nvzU",
"PdyUddlTzwW",
"HCSHvH_8xYq",
"rpDW6NCtQUx",
"zvP6hxjtmV",
"vKubyIhxnlH",
"c-_RQX9HgNj"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" $\\newcommand{Rt}{\\textcolor{green}{7Kxj}}$\n\nWe thank reviewer $\\Rt$ for engaging in the discussion, are happy that we could clarify most points already, and answer follow-up questions below:\n\n**Regarding Q2.3:**\n\n* Indeed, we meant Tables 17 and 18, apologies for the inconvenience.\n* (a and b) We agree ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
5
] | [
"oIyiY2NKiZj",
"HCSHvH_8xYq",
"uY3MA5J3Izf_",
"5AnsSm7UIqp",
"lflsCUQHeq7L",
"PdyUddlTzwW",
"c-_RQX9HgNj",
"nips_2022_IbBHnPyjkco",
"c-_RQX9HgNj",
"vKubyIhxnlH",
"zvP6hxjtmV",
"nips_2022_IbBHnPyjkco",
"nips_2022_IbBHnPyjkco",
"nips_2022_IbBHnPyjkco"
] |
nips_2022_7SEi-ISNni7 | Diffusion Visual Counterfactual Explanations | Visual Counterfactual Explanations (VCEs) are an important tool to understand the decisions of an image classifier. They are “small” but “realistic” semantic changes of the image changing the classifier decision. Current approaches for the generation of VCEs are restricted to adversarially robust models and often contain non-realistic artefacts, or are limited to image classification problems with few classes. In this paper, we overcome this by generating Diffusion Visual Counterfactual Explanations (DVCEs) for arbitrary ImageNet classifiers via a diffusion process. Two modifications to the diffusion process are key for our DVCEs: first, an adaptive parameterization, whose hyperparameters generalize across images and models, together with distance regularization and late start of the diffusion process, allow us to generate images with minimal semantic changes to the original ones but different classification. Second, our cone regularization via an adversarially robust model ensures that the diffusion process does not converge to trivial non-semantic changes, but instead produces realistic images of the target class which achieve high confidence by the classifier. | Accept | On the whole after the reviews and discussion, the reviewers are in agreement that this is an interesting method with some novel contributions and high-quality results, which should be of interest to the community. | train | [
"tC89cWMgS22",
"wt2VY0ey2Rc",
"POp1XIykpdO",
"U_Ef4P2NkDC",
"HOu3dJQoLkL",
"h316gkU9sJD",
"VPjo2Z0DJbR",
"jHxkdCYtbL",
"m0jAHj45XpZ",
"xCchB7n3PQN",
"tL8GB2k8SHv"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Authors have partially addressed my concerns and I am willing to maintain my score.",
" Thanks for the detailed responses! The new quantitative experiments are good to see, and I appreciate the effort in Appendix F. I will consider raising my score; I think the paper is improved and don't have additional questi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"h316gkU9sJD",
"POp1XIykpdO",
"tL8GB2k8SHv",
"xCchB7n3PQN",
"m0jAHj45XpZ",
"jHxkdCYtbL",
"nips_2022_7SEi-ISNni7",
"nips_2022_7SEi-ISNni7",
"nips_2022_7SEi-ISNni7",
"nips_2022_7SEi-ISNni7",
"nips_2022_7SEi-ISNni7"
] |
nips_2022_L8pZq2eRWvX | ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model | The need for interpretable models has fostered the development of self-explainable classifiers. Prior approaches are either based on multi-stage optimization schemes, impacting the predictive performance of the model, or produce explanations that are not transparent, trustworthy or do not capture the diversity of the data. To address these shortcomings, we propose ProtoVAE, a variational autoencoder-based framework that learns class-specific prototypes in an end-to-end manner and enforces trustworthiness and diversity by regularizing the representation space and introducing an orthonormality constraint. Finally, the model is designed to be transparent by directly incorporating the prototypes into the decision process. Extensive comparisons with previous self-explainable approaches demonstrate the superiority of ProtoVAE, highlighting its ability to generate trustworthy and diverse explanations, while not degrading predictive performance. | Accept | The paper is of good quality and presents an interesting approach to interpretability in a clear manner. Generally, the paper is very well written and due to extensive ablations and experiments offers many insights into the relevant aspects of the proposed approach. At the current stage the method might not be fully practical but might inspire further research down the line that overcomes said shortcomings. On the flip side, one drawback of the paper is relying on a variational auto-encoder, which itself doesn't work nicely as a generative model for complex (natural) datasets. There are not many results on real-world complex tasks in the paper, and adding them would definitely add value. While there are some shortcomings, I believe it is still valuable for the research community. The authors have provided clarifications for most major concerns in a convincing way. Overall, I suggest the acceptance of the paper. Please improve the final version with the content from reviewer responses. | train | [
"FgpFVtmonmD",
"-9kfYedLR3X",
"oXF1VPJwRnJ",
"z-EAYb5ufQ8",
"QTUdQBQXEB",
"LXH6eS_70Nkd",
"B3NRNrDqevH",
"phQgxKgr57",
"52QCg2joOk",
"MAdSqY1aYwp",
"zWTU9h7l-L7",
"gvZMb3X7-ZrJ",
"fCZ-gXOsG2B",
"ARQKwFCIsDk",
"7EZZcuWL4LH",
"D8yPIvZeHoY"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank all the reviewers for appreciating our proposed methodology as well as the importance of the work, the predictive performance of our method, and our thorough evaluation.\n\nFollowing the constructive suggestions and comments of the reviewers, we have revised our paper and included additiona... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2022_L8pZq2eRWvX",
"ARQKwFCIsDk",
"QTUdQBQXEB",
"LXH6eS_70Nkd",
"gvZMb3X7-ZrJ",
"52QCg2joOk",
"7EZZcuWL4LH",
"7EZZcuWL4LH",
"7EZZcuWL4LH",
"D8yPIvZeHoY",
"D8yPIvZeHoY",
"D8yPIvZeHoY",
"ARQKwFCIsDk",
"nips_2022_L8pZq2eRWvX",
"nips_2022_L8pZq2eRWvX",
"nips_2022_L8pZq2eRWvX"
] |
nips_2022_fDWNnSiHeka | Sketching based Representations for Robust Image Classification with Provable Guarantees | How do we provably represent images succinctly so that their essential latent attributes are correctly captured by the representation to as high level of detail as possible? While today's deep networks (such as CNNs) produce image embeddings they do not have any provable properties and seem to work in mysterious non-interpretable ways. In this work we theoretically study synthetic images that are composed of a union or intersection of several mathematically specified shapes using thresholded polynomial functions (for e.g. ellipses, rectangles). We show how to produce a succinct sketch of such an image so that the sketch “smoothly” maps to the latent-coefficients producing the different shapes in the image. We prove several important properties such as: easy reconstruction of the image from the sketch, similarity preservation (similar shapes produce similar sketches), being able to index sketches so that other similar images and parts of other images can be retrieved, being able to store the sketches into a dictionary of concepts and shapes so parts of the same or different images that refer to the same shape can point to the same entry in this dictionary of common shape attributes. | Accept | All reviewers are positive about this paper. After the rebuttal, the authors have addressed the reviewers' concerns well and improved the quality of the paper. So I suggest accepting this paper. | train | [
"sIzoGF77kPr",
"gPeLVCmbnz2",
"QaOb9sgdmwy",
"DdBPnRJbH50",
"WDFe_L5gN_",
"9SDZY-J3LRn"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your review and the feedback provided. We are glad that you found our theoretical results appealing. We try to address your concerns and questions below.\n\n1. Intuitive explanations behind the theoretical results: Thanks, we have added visual images to show the sketching in increasing cases of complex... | [
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
2,
3,
1
] | [
"9SDZY-J3LRn",
"WDFe_L5gN_",
"DdBPnRJbH50",
"nips_2022_fDWNnSiHeka",
"nips_2022_fDWNnSiHeka",
"nips_2022_fDWNnSiHeka"
] |
nips_2022_Vhd-jh9B8Hc | Active Ranking without Strong Stochastic Transitivity | Ranking from noisy comparisons is of great practical interest in machine learning. In this paper, we consider the problem of recovering the exact full ranking for a list of items under ranking models that do *not* assume the Strong Stochastic Transitivity property. We propose a $\delta$-correct algorithm, Probe-Rank, that actively learns the ranking of the items from noisy pairwise comparisons. We prove a sample complexity upper bound for Probe-Rank, which only depends on the preference probabilities between items that are adjacent in the true ranking. This improves upon existing sample complexity results that depend on the preference probabilities for all pairs of items. Probe-Rank thus outperforms existing methods over a large collection of instances that do not satisfy Strong Stochastic Transitivity.
Thorough numerical experiments in various settings are conducted, demonstrating that Probe-Rank is significantly more sample-efficient than the state-of-the-art active ranking method. | Accept | This is an interesting and solid paper in which the authors address the identification of a ranking in the setting of duelling bandits under the assumption that the underlying preference probabilities are weakly stochastic transitive. They introduce a novel algorithm which solves the problem in a delta-PAC manner and has an instance-wise sample complexity guarantee. All reviewers agree that this is a significant contribution. Questions and open points could be clarified in the discussion phase. | val | [
"7L2TZsiDraF",
"lwmG2EjIVh",
"_sEwyzZdlw9",
"05CO81LDfU9",
"JS1Lo-hsXFc",
"fnBJWO_TyrG",
"4uQVvSiR0kH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" ---\n**Q6:** \"From looking at its proof, it appears that you could slightly strengthen Thm.2 and provide a more detailed bound without O-notation, i.e., you could explicitly provide the constant that is hidden in the O-term. Maybe you could state this constant for the sake of completeness in the appendix?\"\n\n*... | [
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"lwmG2EjIVh",
"4uQVvSiR0kH",
"fnBJWO_TyrG",
"JS1Lo-hsXFc",
"nips_2022_Vhd-jh9B8Hc",
"nips_2022_Vhd-jh9B8Hc",
"nips_2022_Vhd-jh9B8Hc"
] |
nips_2022_Tfb73TeKnJ- | Cross-Linked Unified Embedding for cross-modality representation learning | Multi-modal learning is essential for understanding information in the real world. Jointly learning from multi-modal data enables global integration of both shared and modality-specific information, but current strategies often fail when observations from certain modalities are incomplete or missing for part of the subjects. To learn comprehensive representations based on such modality-incomplete data, we present a semi-supervised neural network model called CLUE (Cross-Linked Unified Embedding). Extending from multi-modal VAEs, CLUE introduces the use of cross-encoders to construct latent representations from modality-incomplete observations. Representation learning for modality-incomplete observations is common in genomics. For example, human cells are tightly regulated across multiple related but distinct modalities such as DNA, RNA, and protein, jointly defining a cell’s function. We benchmark CLUE on multi-modal data from single cell measurements, illustrating CLUE’s superior performance in all assessed categories of the NeurIPS 2021 Multimodal Single-cell Data Integration Competition. While we focus on analysis of single cell genomic datasets, we note that the proposed cross-linked embedding strategy could be readily applied to other cross-modality representation learning problems. | Accept | In this paper, the authors propose CLUE (Cross-Linked Unified Embedding) to construct multimodal representations from modality-incomplete datasets and apply CLUE to the single-cell data integration problems. The proposed method is simple yet effective and shows superior performance over state-of-the-art methods. All reviewers agree to accept the paper; I will also vote for acceptance. In the final version, I encourage the authors to improve the experimental section by addressing the reviewers' concerns. | train | [
"4iUZxqEAEg2",
"8YjF3ycaUB",
"-Chba5CKn0J",
"4onIgzBPywK",
"xDBSPzogFFY",
"rSTurO1HKqC",
"JFjjwWG1ZOW",
"bNjSvWFo_p6",
"x4syFDgduIQ"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer tEWF,\n\nwe thank you for the valuable comments and suggestions for our paper. We have provided corresponding responses to your questions, which we believe have addressed your concerns. Please let us know if you still have any unclear parts of our work. We appreciate your further feedback. Thank you... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
5
] | [
"bNjSvWFo_p6",
"x4syFDgduIQ",
"x4syFDgduIQ",
"bNjSvWFo_p6",
"JFjjwWG1ZOW",
"nips_2022_Tfb73TeKnJ-",
"nips_2022_Tfb73TeKnJ-",
"nips_2022_Tfb73TeKnJ-",
"nips_2022_Tfb73TeKnJ-"
] |
nips_2022_6QvmtRjWNRy | Subgroup Robustness Grows On Trees: An Empirical Baseline Investigation | Researchers have proposed many methods for fair and robust machine learning, but thorough empirical evaluation of their subgroup robustness is lacking. In this work, we address this gap in the context of tabular data, where sensitive subgroups are clearly-defined, real-world fairness problems abound, and prior works often fail to compare to state-of-the-art tree-based models. We conduct an empirical comparison of several previously-proposed methods for fair and robust learning alongside state-of-the-art tree-based methods and other baselines. Via experiments with more than 340,000 model configurations on eight datasets, we show that tree-based methods have strong subgroup robustness, even when compared to robustness- and fairness-enhancing methods. Moreover, the best tree-based models tend to show good performance over a range of metrics, while robust or group-fair models can show brittleness, with significant performance differences across different metrics for a fixed model. We also demonstrate that tree-based models show less sensitivity to hyperparameter configurations, and are less costly to train. Our work suggests that tree-based ensemble models make an effective baseline for tabular data, and are a sensible default when subgroup robustness is desired. | Accept | This work looks at sensitive subgroups' propensity to be incorrectly scored across both "traditional" tree-based models and other approaches from the recent literature, on tabular data. I believe Reviewer kdGJ's numeric score is lower than it should be (thank you, Reviewer kdGJ for a technically strong review and for participating in a back-and-forth with the authors), but I also agree with the concerns that were surfaced here. I also agree with the questioning of Reviewer WCHM w.r.t. domain generalization, and believe that this is more central to the story of the paper than is presently written. A borderline paper, I would encourage the authors to seriously consider both of those reviewers' threads in a camera-ready or next submission. | train | [
"XDZhT4txOb",
"RmOGot3aF-m",
"TSNgsf3pVZ",
"-Va7xTFJHw",
"k26MZw3LYc",
"jGNHKlxK6hm",
"axhtDe22aTp",
"ZkQAE2xuYLb",
"Yv_P2JYRiO7",
"wFktC78L1UR",
"HXyref_lbD",
"MqdE3mFqx2n",
"_4sydsUnPVe",
"5dqnQqrc5AZ",
"B6dfI8BUTq1",
"k568PDnm4Pb",
"K4f3WRZT8i3"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the interesting dialogue. Providing a strong empirical foundation for formulating and testing hypotheses in distribution shift is a key goal of our work, and we are glad that the work has stimulated this dialogue.\n\nWe agree that “In Search of Lost Domain Generalization” contains a very... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"RmOGot3aF-m",
"axhtDe22aTp",
"-Va7xTFJHw",
"k26MZw3LYc",
"HXyref_lbD",
"nips_2022_6QvmtRjWNRy",
"ZkQAE2xuYLb",
"wFktC78L1UR",
"K4f3WRZT8i3",
"k568PDnm4Pb",
"MqdE3mFqx2n",
"B6dfI8BUTq1",
"5dqnQqrc5AZ",
"nips_2022_6QvmtRjWNRy",
"nips_2022_6QvmtRjWNRy",
"nips_2022_6QvmtRjWNRy",
"nips_2... |
nips_2022_ZXoSAAlBnW8 | Recursive Reinforcement Learning | Recursion is the fundamental paradigm to finitely describe potentially infinite objects. As state-of-the-art reinforcement learning (RL) algorithms cannot directly reason about recursion, they must rely on the practitioner's ingenuity in designing a suitable "flat" representation of the environment. The resulting manual feature constructions and approximations are cumbersome and error-prone; their lack of transparency hampers scalability. To overcome these challenges, we develop RL algorithms capable of computing optimal policies in environments described as a collection of Markov decision processes (MDPs) that can recursively invoke one another. Each constituent MDP is characterized by several entry and exit points that correspond to input and output values of these invocations. These recursive MDPs (or RMDPs) are expressively equivalent to probabilistic pushdown systems (with call-stack playing the role of the pushdown stack), and can model probabilistic programs with recursive procedural calls. We introduce Recursive Q-learning---a model-free RL algorithm for RMDPs---and prove that it converges for finite, single-exit and deterministic multi-exit RMDPs under mild assumptions. | Accept | The paper has generated a lot of discussion, and on balance the reviewers appreciate its technical contributions but find that the paper would benefit from a more in-depth discussion of its relationship to hierarchical RL. | test | [
"7UZsraj_xm8",
"nOGDgfsqE-",
"7VsjjPXKDYJ",
"VArRMgNTVc-",
"6zHAk59YeoA",
"QCJCQXhFkaI",
"CDPxWQBS1DY",
"M2kjeEEUATK",
"Iy6zcgbH4QA",
"6Rl4X-kdwkF",
"bwjyZe12WVY",
"_hcyrkCMhh",
"e_U2xYJ9CR3",
"L9-ZK9tUFdh",
"_AO3z9hDRj",
"sqpsM-UXZmy",
"JBVmZG2wM17"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Yes, this would fail even in the 1-exit case due to the need to handle calling components specially.\n\nIn more detail: In the 1-exit setting, once one makes the observation that the stack information is not required, one may consider solving each individual component MDP separately, where the exit is treated as ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
1,
3
] | [
"7VsjjPXKDYJ",
"6Rl4X-kdwkF",
"QCJCQXhFkaI",
"e_U2xYJ9CR3",
"CDPxWQBS1DY",
"M2kjeEEUATK",
"bwjyZe12WVY",
"Iy6zcgbH4QA",
"JBVmZG2wM17",
"sqpsM-UXZmy",
"_hcyrkCMhh",
"_AO3z9hDRj",
"L9-ZK9tUFdh",
"nips_2022_ZXoSAAlBnW8",
"nips_2022_ZXoSAAlBnW8",
"nips_2022_ZXoSAAlBnW8",
"nips_2022_ZXoSA... |
nips_2022_uPdS_7pdA9p | Contrastive Adapters for Foundation Model Group Robustness | While large pretrained foundation models (FMs) have shown remarkable zero-shot classification robustness to dataset-level distribution shifts, their robustness to subpopulation or group shifts is relatively underexplored. We study this problem, and find that foundation models such as CLIP may not be robust to various group shifts. Across 9 robustness benchmarks, zero-shot classification with their embeddings results in gaps of up to 80.7 percentage points (pp) between average and worst-group accuracy. Unfortunately, existing methods to improve robustness require retraining, which can be prohibitively expensive on large foundation models. We also find that efficient ways to improve model inference (e.g. via adapters, lightweight networks that transform FM embeddings) do not consistently improve and can sometimes *hurt* group robustness compared to zero-shot. We therefore develop an adapter training strategy to effectively and efficiently improve FM group robustness. Our motivating observation is that while poor robustness results from groups in the same class being embedded far apart in the foundation model "embedding space," standard adapter training may not actually bring these points closer together. We thus propose contrastive adapting, which contrastively trains adapters to bring sample embeddings close to both their ground-truth class embeddings and same-class sample embeddings. Across the 9 robustness benchmarks, contrastive adapting consistently improves group robustness, raising worst-group accuracy by 8.5 to 56.0 pp over zero-shot. Our approach is also efficient, doing so without any FM finetuning and only a fixed set of FM embeddings. On popular benchmarks such as Waterbirds and CelebA, this leads to worst-group accuracy comparable to state-of-the-art methods, while only training <1% of the model parameters. | Accept | This paper received unanimous recommendations of acceptance from the reviewer. The authors did a good job addressing concerns from the reviewers, especially with the additional ablation studies to decouple the gains from other techniques such as SupCon. The AC agrees with the reviewers regarding the contribution of this paper and recommends acceptance. | test | [
"CSta_YW1bvC",
"M8sWbnYx2Pc",
"ScnhgPNQA6d",
"SunB38u3pIe",
"6ekySQLoaH",
"tB9gOyaz071",
"TQP4sU4v-eXs",
"UHt0N5tBUrh",
"Z13Ep5p72Cg",
"Vo6F_EhnxXb",
"5UyZB_dLUv3",
"SkArhbkOovE",
"TvwI4AmaF8_",
"Sng9uzcwXdH",
"203-6IVszW",
"SQUPxsd2F0M",
"YqfErGFRRJb",
"BhPSVcV72lY"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their efforts in addressing my questions. I am satisfied with the answers, and have gladly raised my score to 6. On the term of Foundation Model, I suggest the authors to change the term to avoid unnecessary controversy or diminishing the contribution of the manuscript, and then it will se... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"TvwI4AmaF8_",
"ScnhgPNQA6d",
"TQP4sU4v-eXs",
"tB9gOyaz071",
"nips_2022_uPdS_7pdA9p",
"UHt0N5tBUrh",
"5UyZB_dLUv3",
"Z13Ep5p72Cg",
"Vo6F_EhnxXb",
"BhPSVcV72lY",
"SkArhbkOovE",
"YqfErGFRRJb",
"Sng9uzcwXdH",
"SQUPxsd2F0M",
"nips_2022_uPdS_7pdA9p",
"nips_2022_uPdS_7pdA9p",
"nips_2022_uP... |
nips_2022_QLPzCpu756J | Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks | A striking observation about iterative magnitude pruning (IMP; Frankle et al. 2020) is that—after just a few hundred steps of dense training—the method can find a sparse sub-network that can be trained to the same accuracy as the dense network. However, the same does not hold at step 0, i.e. random initialization. In this work, we seek to understand how this early phase of pre-training leads to a good initialization for IMP both through the lens of the data distribution and the loss landscape geometry. Empirically we observe that, holding the number of pre-training iterations constant, training on a small fraction of (randomly chosen) data suffices to obtain an equally good initialization for IMP. We additionally observe that by pre-training only on "easy" training data, we can decrease the number of steps necessary to find a good initialization for IMP compared to training on the full dataset or a randomly chosen subset. Finally, we identify novel properties of the loss landscape of dense networks that are predictive of IMP performance, showing in particular that more examples being linearly mode connected in the dense network correlates well with good initializations for IMP. Combined, these results provide new insight into the role played by the early phase training in IMP. | Accept | This paper presents comprehensive experiments studying the role of data in finding lottery tickets in the early stage of training. All reviewers liked the paper and agreed that the paper has novel and insightful results worth sharing with the community. | test | [
"NEHkAI-3Gd",
"QTfZFV9MpA9",
"g53uvrVYUS",
"1tLtF8Y_hC1",
"4TOE6jrCt57",
"vYzAyEDRp2Xw",
"7ILk4rVkPip",
"GdLttOFs-Q",
"R9CqenWf6zg",
"d6Pg6bVs6YQ",
"ywQAK50ThJi",
"gl_PZFr7mEN",
"XRD4OCmA3VB",
"Nqa08ucuKk1"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the feedback. If accepted, we will use the extra space to move the material recommended by the reviewer from the appendix to the main text for added clarity.",
" Thanks a lot for your detailed response. I think my concerns have been addressed. I changed my score 5->6. ",
" We are eager to addre... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
9,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"1tLtF8Y_hC1",
"g53uvrVYUS",
"d6Pg6bVs6YQ",
"GdLttOFs-Q",
"Nqa08ucuKk1",
"Nqa08ucuKk1",
"XRD4OCmA3VB",
"gl_PZFr7mEN",
"ywQAK50ThJi",
"ywQAK50ThJi",
"nips_2022_QLPzCpu756J",
"nips_2022_QLPzCpu756J",
"nips_2022_QLPzCpu756J",
"nips_2022_QLPzCpu756J"
] |
nips_2022_iBBcRUlOAPR | An empirical analysis of compute-optimal large language model training | We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. We find that current large language models are significantly undertrained, a consequence of the recent focus on scaling language models whilst keeping the amount of training data constant. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, we find that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size the number of training tokens should also be doubled. We test this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4$\times$ more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, a 7% improvement over Gopher. | Accept | Four experts reviewed this paper and they all recommended acceptance. The paper finds that current Transformer-based large language models (LLMs) are significantly undertrained. This is likely to be of great interest to the AI/NLP community, as it challenges current practices and recommendations from prior work. The paper's main recommendation is that, given an increase of computation budget, model size and number of training tokens should be scaled equally. The claims of the paper are supported with an extensive amount of experimentation, including 400 language models, model sizes ranging from 70M to over 16B parameters, and amounts of data ranging from 5 to 500 billion tokens. Reviewers either listed no weaknesses or had most of their concerns addressed by the authors' responses. The main remaining limitation is that the authors couldn't release any code or data, but the work seems mostly reproducible from the paper (extensive methodological and experimental details are given in the appendix). | train | [
"XjQk_dgnLQw",
"C9gCwt_sYUJ",
"9eetMKuP4fY",
"FIJn4GjMhr",
"GmtPin6QRjr",
"j4eCIXN1sj1",
"0l5vVfF7kbc",
"r6SbeNf2lvt",
"PmInOr131Z4",
"fIBUHVR_fqc",
"XKQ6U26-VNe"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for including the new results in the Appendix D.1.\n\nThe discussion above addressed my major concerns. Thus, I am glad to increase Soundness score and overall rating. ",
" We have trained 5 different 1.1B models with different random data and included results in the Appendix. We have copied the text ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"C9gCwt_sYUJ",
"9eetMKuP4fY",
"0l5vVfF7kbc",
"XKQ6U26-VNe",
"fIBUHVR_fqc",
"PmInOr131Z4",
"r6SbeNf2lvt",
"nips_2022_iBBcRUlOAPR",
"nips_2022_iBBcRUlOAPR",
"nips_2022_iBBcRUlOAPR",
"nips_2022_iBBcRUlOAPR"
] |
nips_2022_H88qfUs3U2W | Dynamic Pricing with Monotonicity Constraint under Unknown Parametric Demand Model | We consider the Continuum Bandit problem where the goal is to find the optimal action under an unknown reward function, with an additional monotonicity constraint (or, "markdown" constraint) that requires that the action sequence be non-increasing. This problem faithfully models a natural single-product dynamic pricing problem, called "markdown pricing", where the objective is to adaptively reduce the price over a finite sales horizon to maximize expected revenues.
Jia et al '21 and Chen '21 independently showed a tight $T^{3/4}$ regret bound over $T$ rounds under *minimal* assumptions of unimodality and Lipschitzness in the reward (or, "revenue") function. This bound shows that the demand learning in markdown pricing is harder than unconstrained (i.e., without the monotonicity constraint) pricing under unknown demand which suffers regret only of the order of $T^{2/3}$ under the same assumptions (Kleinberg '04).
However, in practice the demand functions are usually assumed to have certain functional forms (e.g. linear or exponential), rendering the demand-learning easier and suggesting lower regret bounds. We investigate two fundamental questions, assuming the underlying demand curve comes from a given parametric family: (1) Can we improve the $T^{3/4}$ regret bound for markdown pricing, under extra assumptions on the functional forms of the demand functions? (2) Is markdown pricing still harder than unconstrained pricing, under these additional assumptions? To answer these, we introduce a concept called markdown dimension that measures the complexity of the parametric family and present tight regret bounds under this framework, thereby completely settling the aforementioned questions. | Accept | This paper focuses on an interesting problem, dynamic pricing.
The paper brings conceptually new ideas (the markdown dimension) and associated algorithms.
I have 2 concerns:
1) it would be better if the algorithms were adaptive to the aforementioned dimension.
2) it would have been better if the authors had followed the instructions (and especially not updated the rebuttal as the revised version).
Point 1 would be future work, while point 2 is ok since the pdf can still be found in the submission files.
As a consequence, I recommend acceptance
| train | [
"_Nkx6QaH_Q3",
"r6STVyuiZQv",
"-X3pq8Ky6KB",
"YzK1lGkTNm",
"x-3oiZhsiBW",
"r5NPl_MVix",
"zAhcDBNLNgZ",
"_6DO6wi-Jb2",
"ALW541opEUt",
"GlxLVp_tHZP",
"9FuFrDRFbL"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We have copied our response from the pdf file to the other reviewers' \"official comment\" section.",
" **We first apologize for submitting the entire rebuttal as one pdf file to the paper submission portal, which may have caused troubles for reviewers to notice. Our original rebuttal submission includes (1) a ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3,
3
] | [
"r5NPl_MVix",
"9FuFrDRFbL",
"ALW541opEUt",
"_6DO6wi-Jb2",
"zAhcDBNLNgZ",
"GlxLVp_tHZP",
"nips_2022_H88qfUs3U2W",
"nips_2022_H88qfUs3U2W",
"nips_2022_H88qfUs3U2W",
"nips_2022_H88qfUs3U2W",
"nips_2022_H88qfUs3U2W"
] |
nips_2022_TThSwRTt4IB | On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning | Rehearsal approaches enjoy immense popularity with Continual Learning (CL) practitioners. These methods collect samples from previously encountered data distributions in a small memory buffer; subsequently, they repeatedly optimize on the latter to prevent catastrophic forgetting. This work draws attention to a hidden pitfall of this widespread practice: repeated optimization on a small pool of data inevitably leads to tight and unstable decision boundaries, which are a major hindrance to generalization. To address this issue, we propose Lipschitz-DrivEn Rehearsal (LiDER), a surrogate objective that induces smoothness in the backbone network by constraining its layer-wise Lipschitz constants w.r.t. replay examples. By means of extensive experiments, we show that applying LiDER delivers a stable performance gain to several state-of-the-art rehearsal CL methods across multiple datasets, both in the presence and absence of pre-training. Through additional ablative experiments, we highlight peculiar aspects of buffer overfitting in CL and better characterize the effect produced by LiDER. Code is available at https://github.com/aimagelab/LiDER. | Accept | In order to solve the inherent problem of rehearsal-based continual learning methods (a problem in which a small pool of data from the previous tasks is repeatedly used for learning and hence we have a tight and unstable decision boundary), this paper proposes a method to provide smoothness of the backbone network by placing constraints on the Lipschitz constant of each layer. All reviewers unanimously recognized the strengths of this paper - sufficient performance improvement in the designed experiments and convincing insight/motivation, etc. However, some reviewers were concerned that the baselines or experimental settings are somewhat weak, and the authors partially resolved this through additional experiments during the discussion phase. Nevertheless, more diverse experimental results need to be added in the final version. In particular, this AC thinks that if experimental results for a wider range of buffer sizes are included in the final version, the experimental quality will be improved and the authors' insight can be better supported.
| train | [
"WZyE8m3G2N",
"3Vo68xoDVO",
"hzCWGlHIkfJ",
"qOiC9xpReu",
"Jrseljtuf6r",
"_57G65DhB5",
"0aDNfGdbOEw",
"lxv4DuTsBm",
"GtdD4K4_cCC",
"nyve3ZvJELP",
"vQ8bFcxY9hu",
"Rlt9j9ugum",
"Gqf4j05RQPk",
"_9q6hARU2RN",
"09BBmYVfavP"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are truly thankful to oByL for taking the time to read through our response and rethink their initial evaluation. \n\nRegarding the concern raised about the performance of competitors in our response, we remark that we follow a Class-Incremental (CIL) evaluation protocol, in which all evaluated tasks share th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"3Vo68xoDVO",
"qOiC9xpReu",
"nips_2022_TThSwRTt4IB",
"Jrseljtuf6r",
"_57G65DhB5",
"Gqf4j05RQPk",
"lxv4DuTsBm",
"09BBmYVfavP",
"nyve3ZvJELP",
"Rlt9j9ugum",
"_9q6hARU2RN",
"nips_2022_TThSwRTt4IB",
"nips_2022_TThSwRTt4IB",
"nips_2022_TThSwRTt4IB",
"nips_2022_TThSwRTt4IB"
] |
nips_2022_yoBaCtx_a3 | An Algorithm for Learning Switched Linear Dynamics from Data | We present an algorithm for learning switched linear dynamical systems in discrete time from noisy observations of the system's full state or output. Switched linear systems use multiple linear dynamical modes to fit the data within some desired tolerance. They arise quite naturally in applications to robotics and cyber-physical systems. Learning switched systems from data is an NP-hard problem that is nearly identical to the $k$-linear regression problem of fitting $k > 1$ linear models to the data. A direct mixed-integer linear programming (MILP) approach yields time complexity that is exponential in the number of data points. In this paper, we modify the problem formulation to yield an algorithm that is linear in the size of the data while remaining exponential in the number of state variables and the desired number of modes. To do so, we combine classic ideas from the ellipsoidal method for solving convex optimization problems, and well-known oracle separation results in non-smooth optimization. We demonstrate our approach on a set of microbenchmarks and a few interesting real-world problems. Our evaluation suggests that the benefits of this algorithm can be made practical even against highly optimized off-the-shelf MILP solvers. | Accept | This was a borderline paper and after discussion with the reviewers, we have decided to accept the paper. The paper currently has many typos and the approach requires more comparison with other approaches for learning switched systems--- though these can be addressed in the final revision. Overall, the paper addresses a relatively open question from a machine learning viewpoint and the results are interesting and the approach is novel. | train | [
"rmFZQRLLrf6",
"RuoqL_zEctK",
"_JJBXpJh3PR",
"7H2dT99brjL",
"vOOWgZdggek",
"xN93v_GZ-U4",
"SnCmzWeIC8t",
"qJR3MAJ2ws",
"WBnvWU9TW3",
"dA4tuMR-RCu",
"YrxuQjuSXoh",
"gvM1DIQP3l5"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the answers. I have updated and submitted my final comments.",
" Thank you for the comment. We agree that the technique _as presented in the paper_, will not efficiently address the application you have in mind to modeling robotic systems with contact forces. This is because the technique in the pap... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"RuoqL_zEctK",
"_JJBXpJh3PR",
"WBnvWU9TW3",
"SnCmzWeIC8t",
"gvM1DIQP3l5",
"YrxuQjuSXoh",
"dA4tuMR-RCu",
"WBnvWU9TW3",
"nips_2022_yoBaCtx_a3",
"nips_2022_yoBaCtx_a3",
"nips_2022_yoBaCtx_a3",
"nips_2022_yoBaCtx_a3"
] |
nips_2022_NENo__bExYu | Make Some Noise: Reliable and Efficient Single-Step Adversarial Training | Recently, Wong et al. (2020) showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks. Experimentally they showed that simply adding a random perturbation prior to FGSM (RS-FGSM) could prevent CO. However, Andriushchenko & Flammarion (2020) observed that RS-FGSM still leads to CO for larger perturbations, and proposed a computationally expensive regularizer (GradAlign) to avoid it. In this work, we methodically revisit the role of noise and clipping in single-step adversarial training. Contrary to previous intuitions, we find that using a stronger noise around the clean sample combined with \textit{not clipping} is highly effective in avoiding CO for large perturbation radii. We then propose Noise-FGSM (N-FGSM) that, while providing the benefits of single-step adversarial training, does not suffer from CO. Empirical analyses on a large suite of experiments show that N-FGSM is able to match or surpass the performance of previous state-of-the-art GradAlign while achieving 3$\times$ speed-up. | Accept | This paper enhances single-step adversarial training by adopting a much stronger noise for initialization. The initial concerns were mostly about missing ablations and misunderstandings/confusions, which were well addressed in the rebuttal. As a result, all reviewers unanimously agree to accept this submission.
In the final version, the authors should include all the clarifications and the additional empirical results provided in the rebuttal. | train | [
"MpTRUx_vBi3",
"rMjokmuysnD",
"6kJbENHldkK",
"PTWuE1Ze-QX",
"tDPS60x7JrG",
"n-O-rjE3aIhN",
"9O9-HeMVSnk",
"abBIutbL1Cdk",
"uGW8HbRKKwD",
"rsJdnFkl4Xf",
"YZrjCL82pHb",
"E9lsSFDs-1yr",
"1owglUe8_ix",
"djgx5Lc5MO_t",
"F5zQrBNxOc7",
"Ame7YPIM43",
"Z1X8FrzTZD8",
"ntcD_q0v3Jw",
"VWvsD2... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" Thank you for the comments and suggestions, they have definitely helped improve the paper.",
" Thank you for the comments and the discussion throughout the rebuttal. Your suggestions have definitely helped improve the paper.",
" Thanks for the reply. My concerns are well addressed in the rebuttal.",
" Thank... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
"PTWuE1Ze-QX",
"6kJbENHldkK",
"9O9-HeMVSnk",
"F5zQrBNxOc7",
"n-O-rjE3aIhN",
"ntcD_q0v3Jw",
"abBIutbL1Cdk",
"rsJdnFkl4Xf",
"rsJdnFkl4Xf",
"YZrjCL82pHb",
"E9lsSFDs-1yr",
"Z1X8FrzTZD8",
"nips_2022_NENo__bExYu",
"iCB51SmKeu",
"Ame7YPIM43",
"MIAJr5o65SE",
"QGcPa_Z32Ye",
"VWvsD2l9pAo",
... |
nips_2022_7WvNQz9SWH2 | Shape And Structure Preserving Differential Privacy | It is common for data structures such as images and shapes of 2D objects to be represented as points on a manifold. The utility of a mechanism to produce sanitized differentially private estimates from such data is intimately linked to how compatible it is with the underlying structure and geometry of the space. In particular, as recently shown, utility of the Laplace mechanism on a positively curved manifold, such as Kendall’s 2D shape space, is significantly influenced by the curvature. Focusing on the problem of sanitizing the Fr\'echet mean of a sample of points on a manifold, we exploit the characterization of the mean as the minimizer of an objective function comprised of the sum of squared distances and develop a K-norm gradient mechanism on Riemannian manifolds that favors values that produce gradients close to the zero of the objective function. For the case of positively curved manifolds, we describe how using the gradient of the squared distance function offers better control over sensitivity than the Laplace mechanism, and demonstrate this numerically on a dataset of shapes of corpus callosa. Further illustrations of the mechanism’s utility on a sphere and the manifold of symmetric positive definite matrices are also presented. | Accept | The reviewers agree that the paper should be accepted (albeit with a mix of acceptance levels). I agree. The presentation of the paper could be better. | test | [
"AzCNMNFqMMo",
"udAbf4RUXRc",
"QyJODwBcQW",
"ZinRNyo8g1U",
"CLlENgM1Cv9",
"ZdTY7rBt23",
"uvsDhYQ_zJ7",
"rlgcz2tmxX5",
"4eo9Pn5VtGa"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks. This clears up my concern on the main contribution of the paper.\n\nHowever, there is still one remaining concern: there is a gap between the theory and practice, as sampling with MH algorithm might not give us the privacy guarantee of the exponential mechanism (EM). How large is the constant in the conve... | [
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
3
] | [
"CLlENgM1Cv9",
"4eo9Pn5VtGa",
"rlgcz2tmxX5",
"uvsDhYQ_zJ7",
"ZdTY7rBt23",
"nips_2022_7WvNQz9SWH2",
"nips_2022_7WvNQz9SWH2",
"nips_2022_7WvNQz9SWH2",
"nips_2022_7WvNQz9SWH2"
] |
nips_2022_IRSyuxfYNb | Forward-Backward Latent State Inference for Hidden Continuous-Time semi-Markov Chains | Hidden semi-Markov Models (HSMM's) - while broadly in use - are restricted to a discrete and uniform time grid. They are thus not well suited to explain often irregularly spaced discrete event data from continuous-time phenomena. We show that non-sampling-based latent state inference used in HSMM's can be generalized to latent Continuous-Time semi-Markov Chains (CTSMC's). We formulate integro-differential forward and backward equations adjusted to the observation likelihood and introduce an exact integral equation for the Bayesian posterior marginals and a scalable Viterbi-type algorithm for posterior path estimates. The presented equations can be efficiently solved using well-known numerical methods. As a practical tool, variable-step HSMM's are introduced. We evaluate our approaches in latent state inference scenarios in comparison to classical HSMM's. | Accept | All of the reviewers agree that the work meets the NeurIPS standards, with the two lowest-scoring reviewers upping their recommendation from 4 to 5 on rebuttal. The work is described as "a fundamental and important problem" and "timely", "a good contribution".
Reviewer 6Ntb summarises the technical contribution:
"The paper generalizes latent state inference from HSMMs to continuous-time chains in latent space. In this case, the posterior is not simply proportional to the usual forward and backward probabilities. Instead, the transition random variables ("currents") are Markov. The authors take the limit of step size approaching 0 for these currents"
It's clear to me that the work has been communicated really well, since all of the reviewers were able to grasp the paper and there was very little misunderstanding in the discussions.
There were some recommendations from the reviewers - please ensure these are fixed in the camera-ready version.
| val | [
"DT1fVmqmxRp",
"79medEm4fYq",
"EQEAwIexaI1",
"JnqzF9SRDmx",
"Qd62o56J0",
"Y_VgwMBRhjc",
"DzFb8EwSCLV",
"OATyXHUMxD",
"CILnPtzso-T",
"-a5x09NajJH",
"s5xJH2-I18P",
"8YDf5SLrp7"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you for your assessment. Maybe it is an error on our side, but we cannot see the changed score in the author console. If what we can see is correct, we kindly ask you to adjust the score as advertised.",
" Thank you for the detailed response, which has clarified some things for me as a non-expert on th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
2
] | [
"79medEm4fYq",
"DzFb8EwSCLV",
"Qd62o56J0",
"Y_VgwMBRhjc",
"8YDf5SLrp7",
"s5xJH2-I18P",
"-a5x09NajJH",
"CILnPtzso-T",
"nips_2022_IRSyuxfYNb",
"nips_2022_IRSyuxfYNb",
"nips_2022_IRSyuxfYNb",
"nips_2022_IRSyuxfYNb"
] |
nips_2022_rJjJda5q0E | Lifting the Information Ratio: An Information-Theoretic Analysis of Thompson Sampling for Contextual Bandits | We study the Bayesian regret of the renowned Thompson Sampling algorithm in contextual bandits with binary losses and adversarially-selected contexts. We adapt the information-theoretic perspective of Russo and Van Roy [2016] to the contextual setting by considering a lifted version of the information ratio defined in terms of the unknown model parameter instead of the optimal action or optimal policy as done in previous works on the same setting. This allows us to bound the regret in terms of the entropy of the prior distribution through a remarkably simple proof, and with no structural assumptions on the likelihood or the prior. The extension to priors with infinite entropy only requires a Lipschitz assumption on the log-likelihood. An interesting special case is that of logistic bandits with $d$-dimensional parameters, $K$ actions, and Lipschitz logits, for which we provide a $\tilde{O}(\sqrt{dKT})$ regret upper-bound that does not depend on the smallest slope of the sigmoid link function. | Accept | The paper presents an analysis of the Bayesian regret of Thompson Sampling algorithm in contextual bandits under an adversarial context process. The authors express the regret as a function of the so-called lifted information ratio. This information theoretical quantity is a natural extension of those introduced by Russo and Van Roy. The analysis provides new regret upper bounds for logistic bandit, and comes with simpler and elegant proofs.
The reviewers easily reached a consensus on this paper. It is very well written, easy to follow, and provides very interesting contributions.
The discussion phase gave the opportunity for the authors to correct the related work section, and in particular to re-position the paper compared to papers pointed out during the review process.
| test | [
"CED-v6D9V2O",
"qoVqsMi9kO0",
"YBzcu_I3Z9k",
"cMMf1uuHakx",
"GyNCluqyvtx",
"MQsBbGKm_Lo",
"EV8-SFFI8f",
"5B_Zyv-zLky",
"4XNPsoMJ4cg",
"LQgcgp8jMzm"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for getting back to us!\nWe of course understand why you decided to lower your score --- it makes perfect sense given all the relevant literature we missed in the original version. We would appreciate though if you could take a quick look at the revision and confirm that the discussion of these related ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"cMMf1uuHakx",
"YBzcu_I3Z9k",
"EV8-SFFI8f",
"MQsBbGKm_Lo",
"LQgcgp8jMzm",
"4XNPsoMJ4cg",
"5B_Zyv-zLky",
"nips_2022_rJjJda5q0E",
"nips_2022_rJjJda5q0E",
"nips_2022_rJjJda5q0E"
] |
nips_2022_-zBN5sBzdvr | How Sampling Impacts the Robustness of Stochastic Neural Networks | Stochastic neural networks (SNNs) are random functions whose predictions are gained by averaging over multiple realizations.
Consequently, a gradient-based adversarial example is calculated based on one set of samples and its classification on another set.
In this paper we derive a sufficient condition for such a stochastic prediction to be robust against a given sample-based attack.
This allows us to identify the factors that lead to an increased robustness of SNNs and gives theoretical explanations for:
(i) the well-known observation that increasing the amount of samples drawn for the estimation of adversarial examples increases the attack's strength,
(ii) why increasing the number of samples during an attack can not fully reduce the effect of stochasticity,
(iii) why the sample size during inference does not influence the robustness, and
(iv) why a higher gradient variance and a shorter expected value of the gradient relate to a higher robustness.
Our theoretical findings give a unified view on the mechanisms underlying previously proposed approaches for increasing attack strengths or model robustness and are verified by an extensive empirical analysis. | Accept | All reviewers agree this paper studies an important problem and presents a principled analysis for robustness of SNNs. Empirical results, though limited, seem to be complementing the analysis well. One reviewer rated the paper negatively. I find their concerns to be well addressed in the rebuttal phase. Overall this is a borderline paper. I am suggesting acceptance and encourage authors to update the draft to address all the reviewers' comments. | train | [
"rBscQoZLF7U",
"O4ktt-ZiW3r",
"Nf-y__NLk1d",
"-qETPeIJw9Z",
"9IdDJi2-zR",
"e4_R47n37W",
"dm5osqZ1xDr",
"fBwz3PO7zQ3",
"TigO5bbPwpd",
"QBy_fg09w4j_",
"Ud9x_NQEoNZ",
"8mStCuZWv3D",
"NtV0XGn5MBl",
"7K-In09Ie6Y",
"_t1ddZhqluh",
"gHQmj4hTyv",
"D-p1NA-Lntr",
"M6qOxtSZjHD",
"KJIUsghjUXH... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" We would like to use the example of Chebyshev maximum margin classifier to clarify the left misunderstandings: If we assume the Chebyshev classifier $g(x)$ to be the mean of a stochastic classifier $f(x|\\theta)$, i.e. $\\mathbb{E}(f(x|\\theta)) = g(x)$, than the derivative of $g(x)$ is given by $\\frac{d}{dx} g(... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
4
] | [
"9IdDJi2-zR",
"-qETPeIJw9Z",
"NtV0XGn5MBl",
"fBwz3PO7zQ3",
"dm5osqZ1xDr",
"_t1ddZhqluh",
"TigO5bbPwpd",
"TigO5bbPwpd",
"QBy_fg09w4j_",
"Ud9x_NQEoNZ",
"7K-In09Ie6Y",
"nips_2022_-zBN5sBzdvr",
"Zyb6dURcXT9",
"KJIUsghjUXH",
"M6qOxtSZjHD",
"D-p1NA-Lntr",
"nips_2022_-zBN5sBzdvr",
"nips_2... |
nips_2022_AK6S9MZwM0 | Robust Reinforcement Learning using Offline Data | The goal of robust reinforcement learning (RL) is to learn a policy that is robust against the uncertainty in model parameters. Parameter uncertainty commonly occurs in many real-world RL applications due to simulator modeling errors, changes in the real-world system dynamics over time, and adversarial disturbances. Robust RL is typically formulated as a max-min problem, where the objective is to learn the policy that maximizes the value against the worst possible models that lie in an uncertainty set. In this work, we propose a robust RL algorithm called Robust Fitted Q-Iteration (RFQI), which uses only an offline dataset to learn the optimal robust policy. Robust RL with offline data is significantly more challenging than its non-robust counterpart because of the minimization over all models present in the robust Bellman operator. This poses challenges in offline data collection, optimization over the models, and unbiased estimation. In this work, we propose a systematic approach to overcome these challenges, resulting in our RFQI algorithm. We prove that RFQI learns a near-optimal robust policy under standard assumptions and demonstrate its superior performance on standard benchmark problems. | Accept | The reviewers are in general positive about the paper. The main concerns were about the assumptions used in the analysis. The AC is satisfied by the response from authors, and also thinks the assumptions are reasonable (standard in offline RL literature). The AC also thinks the setting studied in this paper is important. | train | [
"CZ2aVIvwLPS",
"Wybzs20V4K",
"pj4ePAvoUd4",
"a1bDVuuRYiB",
"A5XdSAZQspU",
"X8hjzfEfRbs",
"RytwIiPp1ZG",
"KhUhPPW9Vv",
"Qzab1Y70X4p",
"VwJ2l_g66_w",
"pYgbOMRC8v8",
"2P8LN6WtD8",
"9v1iz4Y55Qz",
"3yD2_RqcI7",
"C3Kl-GGLxIw",
"tM9zg0mJGjG",
"q6zBUFqlwwf"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for this detailed response, I am seeing your work less critical now (I wll update my score in the reviewer-AC discussion round).\n\nOne further remark that I want to place in this discussion in the hope, that it might help the scientific progress:\n\nYou state:\n` it is typ,ically difficult to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"pj4ePAvoUd4",
"a1bDVuuRYiB",
"A5XdSAZQspU",
"X8hjzfEfRbs",
"KhUhPPW9Vv",
"q6zBUFqlwwf",
"9v1iz4Y55Qz",
"Qzab1Y70X4p",
"tM9zg0mJGjG",
"pYgbOMRC8v8",
"C3Kl-GGLxIw",
"3yD2_RqcI7",
"nips_2022_AK6S9MZwM0",
"nips_2022_AK6S9MZwM0",
"nips_2022_AK6S9MZwM0",
"nips_2022_AK6S9MZwM0",
"nips_2022... |
nips_2022_IU3nj1tqwyY | Characterizing the Ventral Visual Stream with Response-Optimized Neural Encoding Models | Decades of experimental research based on simple, abstract stimuli has revealed the coding principles of the ventral visual processing hierarchy, from the presence of edge detectors in the primary visual cortex to the selectivity for complex visual categories in the anterior ventral stream. However, these studies are, by construction, constrained by their $\textit{a priori}$ hypotheses. Furthermore, beyond the early stages, precise neuronal tuning properties and representational transformations along the ventral visual pathway remain poorly understood. In this work, we propose to employ response-optimized encoding models trained solely to predict the functional MRI activation, in order to gain insights into the tuning properties and representational transformations in the series of areas along the ventral visual pathway. We demonstrate the strong generalization abilities of these models on artificial stimuli and novel datasets. Intriguingly, we find that response-optimized models trained towards the ventral-occipital and lateral-occipital areas, but not early visual areas, can recapitulate complex visual behaviors like object categorization and perceived image-similarity in humans. We further probe the trained networks to reveal representational biases in different visual areas and generate experimentally testable hypotheses. Our analyses suggest a shape-based processing along the ventral visual stream and provide a unified picture of multiple neural phenomena characterized over the last decades with controlled fMRI studies. | Accept | The authors leverage a large fMRI dataset to fit “hypothesis-agnostic”/data-driven encoding models to brain data (as opposed to task-optimized ones -- as done in most of the related work). While there is general agreement that this is not a groundbreaking advance (there is already similar work published), the reviewers agreed that the paper was technically strong (good generalization to held-out subjects, good generalization to OOD images, increased shape-bias, improved sample efficiency). The interpretation of results for neuroscience was rated as "top-notch". The paper received a rare unanimous clear accept. The AC recommend acceptance. | train | [
"gq16Cct8jfV",
"KkZYDHUXe5R",
"Ia4g8qr-Wg-",
"XRfqINHTIDV",
"uVlbwypGWbO",
"OErn6NYAcEI",
"nMESPbyfk_o",
"GnW9RKQay-sR",
"AWLZcxYVnJW",
"V6DRfMydiDx",
"woGV-wVAcV",
"mgX1eUO3w04",
"P7UpE_ObmVcy",
"7O8-764CURc",
"VQ6ApkpnLzE",
"LviymYQAZhJ",
"MW_4_T057G"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you, this added discussion addresses my follow-up question.",
" Thank you again for suggesting those baselines - they have considerably improved our manuscript. \nThank you for the suggestion in the follow-up. We agree that these works are very relevant to the present study and deserve a more detailed tre... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
"KkZYDHUXe5R",
"Ia4g8qr-Wg-",
"AWLZcxYVnJW",
"nMESPbyfk_o",
"OErn6NYAcEI",
"woGV-wVAcV",
"GnW9RKQay-sR",
"MW_4_T057G",
"V6DRfMydiDx",
"LviymYQAZhJ",
"mgX1eUO3w04",
"P7UpE_ObmVcy",
"7O8-764CURc",
"VQ6ApkpnLzE",
"nips_2022_IU3nj1tqwyY",
"nips_2022_IU3nj1tqwyY",
"nips_2022_IU3nj1tqwyY"
... |
nips_2022_5OWV-sZvMl | NOMAD: Nonlinear Manifold Decoders for Operator Learning | Supervised learning in function spaces is an emerging area of machine learning research with applications to the prediction of complex physical systems such as fluid flows, solid mechanics, and climate modeling. By directly learning maps (operators) between infinite dimensional function spaces, these models are able to learn discretization invariant representations of target functions. A common approach is to represent such target functions as linear combinations of basis elements learned from data. However, there are simple scenarios where, even though the target functions form a low dimensional submanifold, a very large number of basis elements is needed for an accurate linear representation. Here we present NOMAD, a novel operator learning framework with a nonlinear decoder map capable of learning finite dimensional representations of nonlinear submanifolds in function spaces. We show this method is able to accurately learn low dimensional representations of solution manifolds to partial differential equations while outperforming linear models of larger size. Additionally, we compare to state-of-the-art operator learning methods on a complex fluid dynamics benchmark and achieve competitive performance with a significantly smaller model size and training cost. | Accept | The focus of the submission is supervised learning with functional inputs and outputs. Particularly, the authors consider the
encoder-approximator-decoder architecture (2)-(3) to tackle this task. After discussing the limitations of linear decoders in this scheme (meant in L^2 and uniform sense; the latter is elaborated in Proposition 1), the authors present the nonlinear NOMAD architecture under the assumption of operator learning manifold hypothesis (12) which captures a low-dimensional output space condition. They demonstrate the efficiency of the approach compared to the DeepONet method relying on linear decoders, Fourier neural operators and LOCA (i) when using stochastic gradient descent on the empirical loss (1), (ii) on the learning problem of the antiderivative operator and learning the solution of two partial differential equations.
Functional data analysis is a fundamental area of machine learning with a large number of applications. The authors present a new method in this context which can be of definite interest to the community as it was assessed by the reviewers. | val | [
"yMOHk9F_Hl",
"g7Yofjg9c6a",
"Ewn0rSuFe1n",
"HSmeVSUNd9e",
"c0M1oEEW5o5",
"19cWvDHpT0i",
"cOuord3vjQM",
"fX1JlssMfy",
"dYp6Mb1nDiH",
"f9UvjM9FvC2",
"aMA1bZxlrQP",
"f6YjjZfePV9",
"2Aar--3ajgG",
"D7s2zyVruSe"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nDear colleagues,\n\nThank you for your response. Partially it addressed my concerns. However, some issues still slightly outweigh the merits of this paper, so I can not increase a score to a full extent. In particular,\n\n- although the authors proposed a new insight on how we can represent a model of a nonline... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
4,
4,
4
] | [
"cOuord3vjQM",
"nips_2022_5OWV-sZvMl",
"c0M1oEEW5o5",
"19cWvDHpT0i",
"D7s2zyVruSe",
"2Aar--3ajgG",
"f6YjjZfePV9",
"aMA1bZxlrQP",
"f9UvjM9FvC2",
"nips_2022_5OWV-sZvMl",
"nips_2022_5OWV-sZvMl",
"nips_2022_5OWV-sZvMl",
"nips_2022_5OWV-sZvMl",
"nips_2022_5OWV-sZvMl"
] |
nips_2022_LzbrVf-l0Xq | Implications of Model Indeterminacy for Explanations of Automated Decisions | There has been a significant research effort focused on explaining predictive models, for example through post-hoc explainability and recourse methods. Most of the proposed techniques operate upon a single, fixed, predictive model. However, it is well-known that given a dataset and a predictive task, there may be a multiplicity of models that solve the problem (nearly) equally well. In this work, we investigate the implications of this kind of model indeterminacy on the post-hoc explanations of predictive models. We show how it can lead to explanatory multiplicity, and we explore the underlying drivers. We show how predictive multiplicity, and the related concept of epistemic uncertainty, are not reliable indicators of explanatory multiplicity. We further illustrate how a set of models showing very similar aggregate performance on a test dataset may show large variations in their local explanations, i.e., for a specific input. We explore these effects for Shapley value based explanations on three risk assessment datasets. Our results indicate that model indeterminacy may have a substantial impact on explanations in practice, leading to inconsistent and even contradicting explanations. | Accept | This paper introduces the notion of explanation multiplicity. The authors first show how explanations can vary for models with similar performance on toy examples and theoretically in the linear case. They demonstrate that model multiplicity can have noticeable effects on the explanations using Shapley explanations. This paper puts forth a strong approach tackling a very important issue. So, all the reviewers concur that this paper should be accepted at this time. | train | [
"Q4UsPrLOrjr",
"8MYqdhqVmX",
"Zf0yVYLuHLE",
"FoDbWxCFnCk",
"PMuJLyq2N_-",
"kRQXQYkRhx8",
"aZC3rUZN2u",
"oFEgmHYqpk",
"1_UsBobvSn3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my questions. I appreciate you all improving the figures to make it clearer. ",
" Thanks for the author's response, and I appreciate the considerable effort in the research so far. Looking again and considering the response, I think this paper is in quite a good shape, so I raised my sc... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"kRQXQYkRhx8",
"FoDbWxCFnCk",
"oFEgmHYqpk",
"1_UsBobvSn3",
"oFEgmHYqpk",
"aZC3rUZN2u",
"nips_2022_LzbrVf-l0Xq",
"nips_2022_LzbrVf-l0Xq",
"nips_2022_LzbrVf-l0Xq"
] |
nips_2022_j0J9upqN5va | Single Model Uncertainty Estimation via Stochastic Data Centering | We are interested in estimating the uncertainties of deep neural networks, which play an important role in many scientific and engineering problems. In this paper, we present a striking new finding that an ensemble of neural networks with the same weight initialization, trained on datasets that are shifted by a constant bias gives rise to slightly inconsistent trained models, where the differences in predictions are a strong indicator of epistemic uncertainties. Using the neural tangent kernel (NTK), we demonstrate that this phenomena occurs in part because the NTK is not shift-invariant. Since this is achieved via a trivial input transformation, we show that this behavior can therefore be approximated by training a single neural network -- using a technique that we call $\Delta-$UQ -- that estimates uncertainty around prediction by marginalizing out the effect of the biases during inference. We show that $\Delta-$UQ's uncertainty estimates are superior to many of the current methods on a variety of benchmarks-- outlier rejection, calibration under distribution shift, and sequential design optimization of black box functions. Code for $\Delta-$UQ can be accessed at github.com/LLNL/DeltaUQ
| Accept | The paper proposes a method that allows single model uncertainty estimation by training a model with a random data augmentation. The proposed approach is simple and scalable. It is comparable to or better than deep ensemble in terms of NLL, ECE, and Brier score. The application to sequential optimization tasks presented in the paper looks interesting. All reviewers support accepting this paper. While there could be more theoretical support, I think this paper would be of wide interest to the NeurIPS community. If possible, accept as Spotlight. | val | [
"ZVDq1znNKK",
"kjS99Urd7Au",
"zMhJwmhz-i",
"Wt6ScCtcyA",
"K2D3IjFCPgt",
"eCXm3I45-Yc",
"IxSVjfpMJi-",
"IBBetTdik8h",
"bUOEqPkFkOW",
"Hgq6P-jzGJZ5",
"o64-JYS9Fq-",
"Rn1xmRrTLU",
"p75ZV2_CLn-",
"vyY5AEu51q4",
"xm5ze4QEbaAy",
"KOAtDOEnXTv",
"aj_5NM9RSYm",
"q0UKGFGS1wU",
"Ev5FiQnYTSn... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thank you for going over the rebuttal and updating your score. Since there is one more day left for the discussions, are there any specific questions that we can address?",
" Thank you for going over the rebuttal and voting for the acceptance of this work!",
" Thank you for going over the rebuttal and voting ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"K2D3IjFCPgt",
"bUOEqPkFkOW",
"IBBetTdik8h",
"IxSVjfpMJi-",
"eCXm3I45-Yc",
"Wwhbm5I8qzI",
"KOAtDOEnXTv",
"xm5ze4QEbaAy",
"o64-JYS9Fq-",
"nips_2022_j0J9upqN5va",
"Rn1xmRrTLU",
"qhiIQIjv2Kw",
"vyY5AEu51q4",
"Wwhbm5I8qzI",
"Ev5FiQnYTSn",
"aj_5NM9RSYm",
"q0UKGFGS1wU",
"nips_2022_j0J9up... |
nips_2022_YCniF6_3Jb | Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors | Deep learning is increasingly moving towards a transfer learning paradigm whereby large foundation models are fine-tuned on downstream tasks, starting from an initialization learned on the source task. But an initialization contains relatively little information about the source task, and does not reflect the belief that our knowledge of the source task should affect the locations and shape of optima on the downstream task.
Instead, we show that we can learn highly informative posteriors from the source task, through supervised or self-supervised approaches, which then serve as the basis for priors that modify the whole loss surface on the downstream task. This simple modular approach enables significant performance gains and more data-efficient learning on a variety of downstream classification and segmentation tasks, serving as a drop-in replacement for standard pre-training strategies. These highly informative priors also can be saved for future use, similar to pre-trained weights, and stand in contrast to the zero-mean isotropic uninformative priors that are typically used in Bayesian deep learning. | Accept | This work presents a Bayesian method for transfer learning using SWAG. Reviewers agree that this is well-motivated, it's novel and the proposed method is well done and works well. There are some concerns about the computational burden, but the authors claim that the proposed part adds about 1/7 total cost. I share some of the concerns with one of the reviewers regarding the fact that this method may be limited in usefulness to people who are experts, rather than the more general public. However, given that the method builds on prior art and adds a single hyperparameter, I feel it should be relatively easy for someone to actually use this, if they are interested in transfer learning. | train | [
"fLhithLT9id",
"yZkP_Jolu5",
"c0NIoyEGs3",
"MNZNXSonA1b",
"X2jUl-EW1e8",
"K2Cka2IIwpnG",
"_eQMi8QrcRy",
"PfIStR3PJZE",
"S1WTrKBnA8R",
"BfPL3LPM9m",
"lgDRCFBP5Vd",
"g47HtOPp6UN",
"a9XF1lmdob6",
"0zW5Eevj3uR",
"AxTwMa9-Wt6",
"9BXXsH9Nszr"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. \n\nWe have now gone through each figure, caption, and corresponding description, and we have corrected inconsistencies in our updated draft. \n\nWe are also happy that the new experiments strengthen the submission and that you will consider raising your score. ",
" Thank you for... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
4
] | [
"yZkP_Jolu5",
"K2Cka2IIwpnG",
"PfIStR3PJZE",
"PfIStR3PJZE",
"_eQMi8QrcRy",
"BfPL3LPM9m",
"9BXXsH9Nszr",
"S1WTrKBnA8R",
"AxTwMa9-Wt6",
"0zW5Eevj3uR",
"g47HtOPp6UN",
"a9XF1lmdob6",
"nips_2022_YCniF6_3Jb",
"nips_2022_YCniF6_3Jb",
"nips_2022_YCniF6_3Jb",
"nips_2022_YCniF6_3Jb"
] |
nips_2022_AODVskSug8 | A Theoretical View on Sparsely Activated Networks | Deep and wide neural networks successfully fit very complex functions today, but dense models are starting to be prohibitively expensive for inference. To mitigate this, one promising research direction is networks that activate a sparse subgraph of the network. The subgraph is chosen by a data-dependent routing function, enforcing a fixed mapping of inputs to subnetworks (e.g., the Mixture of Experts (MoE) paradigm in Switch Transformers). However, there is no theoretical grounding for these sparsely activated models. As our first contribution, we present a formal model of data-dependent sparse networks that captures salient aspects of popular architectures. Then, we show how to construct sparse networks that provably match the approximation power and total size of dense networks on Lipschitz functions. The sparse networks use much fewer inference operations than dense networks, leading to a faster forward pass. The key idea is to use locality sensitive hashing on the input vectors and then interpolate the function in subregions of the input space. This offers a theoretical insight into why sparse networks work well in practice. Finally, we present empirical findings that support our theory; compared to dense networks, sparse networks give a favorable trade-off between number of active units and approximation quality. | Accept | The paper provides a theoretical analysis of sparsely activated neural networks. They introduce LSH (local sensitive hashing) as a new routing function for theoretical analysis and proved a few results on representation power and inference time. One reviewer pointed out that the theoretical results are expected and do not provide much interesting insight, which I agree with. Nevertheless, this is one of the early papers that study sparsely activated networks and may serve as a starting point. I recommend acceptance.
| val | [
"mWuDBqft3xx",
"t9ZfaTXrYww",
"peKHPLytAB",
"WClIiZQItuk",
"7TVqcN--ec",
"eKjAm4dm2N4",
"nDjlMDa5v_C",
"EgBpQ8WrTZQ",
"o3WuZPcGnb",
"MWk3GqRMK6b",
"0fhOgOgbyXK",
"X0EXbaKyp8G",
"ClVOxrZX5yI",
"Gx3VjODzCA",
"6JUx58ec1ei",
"SF_FPaQTzPl",
"HwY4PbHEwLg",
"dSD3TyqYARq",
"_1x2JkFVl9V",... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"... | [
" Thank you for your patient response. The discussion period is about to end. In order to make the authors delightfully devote themselves to their other valuable work, I promise that I will respond to your urge to reconsider the rating. Please relax.\n\nBut I may not be able to give the final rating yet, as there w... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"peKHPLytAB",
"SF_FPaQTzPl",
"7TVqcN--ec",
"eKjAm4dm2N4",
"EgBpQ8WrTZQ",
"nDjlMDa5v_C",
"MWk3GqRMK6b",
"0fhOgOgbyXK",
"X0EXbaKyp8G",
"Gx3VjODzCA",
"ClVOxrZX5yI",
"dSD3TyqYARq",
"HPdnm2kaPfJ",
"_1x2JkFVl9V",
"dSD3TyqYARq",
"HwY4PbHEwLg",
"nips_2022_AODVskSug8",
"nips_2022_AODVskSug8... |
nips_2022_57ZKV2YuwjL | Deep Counterfactual Estimation with Categorical Background Variables | Referred to as the third rung of the causal inference ladder, counterfactual queries typically ask the "What if ?" question retrospectively. The standard approach to estimate counterfactuals resides in using a structural equation model that accurately reflects the underlying data generating process. However, such models are seldom available in practice and one usually wishes to infer them from observational data alone. Unfortunately, the correct structural equation model is in general not identifiable from the observed factual distribution. Nevertheless, in this work, we show that under the assumption that the main latent contributors to the treatment responses are categorical, the counterfactuals can be still reliably predicted.
Building upon this assumption, we introduce CounterFactual Query Prediction (CFQP), a novel method to infer counterfactuals from continuous observations when the background variables are categorical. We show that our method significantly outperforms previously available deep-learning-based counterfactual methods, both theoretically and empirically on time series and image data. Our code is available at https://github.com/edebrouwer/cfqp. | Accept | The authors develop a technique for estimating counterfactuals. Counterfactuals are generally not identifiable. This paper makes assumptions on the exogenous noise on the outcome to estimate counterfactuals. Namely, the paper assumes a purely exogenous continuous part and a discrete part. The reviewers were generally positive with the one negative reviewer noting in discussion that they still had concerns about novelty, but were swayed positive by the value the method could have on real world problems.
| train | [
"9r_OMOJhcyl",
"c2Im5AttqY",
"8vWmZhvs5E8",
"7VihPBFxY0T",
"Ezw52J4TLAX",
"_9YsXnQ6_4y",
"NE261jX7hi2",
"MmcrHcwnP_R",
"-TBRbvdr1N-",
"MEIxD8t1WGi",
"Rt_8qvt5CNb",
"PK38KyYOCSu",
"RBNMRZZfSs4A",
"25mClg0zoW",
"oukpWu6ruhX",
"ztV1sKEJqK",
"i8yRhKaZyC1",
"2AEQkSqhh_R"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for your feedback and for updating your score ! It's truly appreciated ! We are glad our response addressed your concerns.\n\nBest Regards,\n\nThe authors",
" Thanks for the well-written response, it addresses the concerns I had.",
" Dear Reviewer Md9L,\n\nThanks again for taking the time to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
3
] | [
"c2Im5AttqY",
"Ezw52J4TLAX",
"oukpWu6ruhX",
"ztV1sKEJqK",
"i8yRhKaZyC1",
"nips_2022_57ZKV2YuwjL",
"nips_2022_57ZKV2YuwjL",
"i8yRhKaZyC1",
"MEIxD8t1WGi",
"oukpWu6ruhX",
"2AEQkSqhh_R",
"25mClg0zoW",
"25mClg0zoW",
"ztV1sKEJqK",
"nips_2022_57ZKV2YuwjL",
"nips_2022_57ZKV2YuwjL",
"nips_202... |
nips_2022_mq-8p5pUnEX | Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning | Recurrent neural networks have a strong inductive bias towards learning temporally compressed representations, as the entire history of a sequence is represented by a single vector. By contrast, Transformers have little inductive bias towards learning temporally compressed representations, as they allow for attention over all previously computed elements in a sequence. Having a more compressed representation of a sequence may be beneficial for generalization, as a high-level representation may be more easily re-used and re-purposed and will contain fewer irrelevant details. At the same time, excessive compression of representations comes at the cost of expressiveness. We propose a solution which divides computation into two streams. A slow stream that is recurrent in nature aims to learn a specialized and compressed representation, by forcing chunks of $K$ time steps into a single representation which is divided into multiple vectors. At the same time, a fast stream is parameterized as a Transformer to process chunks consisting of $K$ time-steps conditioned on the information in the slow-stream. In the proposed approach we hope to gain the expressiveness of the Transformer, while encouraging better compression and structuring of representations in the slow stream. We show the benefits of the proposed method in terms of improved sample efficiency and generalization performance as compared to various competitive baselines for visual perception and sequential decision making tasks.
| Accept | This paper proposes a new architecture consisting of both a recurrent and transformer-based component. The reviewers found the performance of the model to be impressive across multiple domains, but were less convinced by some of the quasitechnical claims in the storytelling (even Reviewer p6Zz, who champions the paper expressed this concern). I encourage the authors to be a bit more disciplined about these claims in the camera ready version if accepted. Overall the authors engaged admirably in the discussion period providing a strong effort to improve the paper and digestible summaries of the original complaints and corresponding changes in the improved draft. | train | [
"K8HGKqlInQT",
"ldIWShWx5D6",
"ln5M-ecAd6F",
"3suuIO5ib5",
"06orEd4ErA",
"dPoa6N0Ha2B",
"HFBFZZhKVn",
"OVPApKzVMuq",
"TD61l-QX1-",
"AOLMXeSg2E1",
"wCA6OabrJf8",
"2aK4ZQHg6JL",
"rg_QHWPoU4a",
"prlZsseUgUW",
"D0r7Mwl5Nhv",
"1rqU0Sy5Bat",
"e8tknj73-Sc",
"zygRt8sJSnH",
"9Y9js5j7H749"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
... | [
" I have read your last reply and appreciate your quick response. \n\nI keep my current assessment, updated during the rebuttal phase.\n",
" **Reviewer EwzR** (original rating: 4; new rating: 4)\n\nThe reviewer raised detailed concerns in the initial review as well as in the discussion phase. We thank the reviewe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4
] | [
"prlZsseUgUW",
"ln5M-ecAd6F",
"nips_2022_mq-8p5pUnEX",
"prlZsseUgUW",
"dPoa6N0Ha2B",
"rNblfUQcAbN",
"q9ut5YUbyVR",
"TD61l-QX1-",
"AOLMXeSg2E1",
"wCA6OabrJf8",
"2aK4ZQHg6JL",
"q9ut5YUbyVR",
"rNblfUQcAbN",
"D0r7Mwl5Nhv",
"uefL_Cztt_s",
"q9ut5YUbyVR",
"zygRt8sJSnH",
"5zraS3TBo799",
... |
nips_2022_owDcdLGgEm | On the symmetries of the synchronization problem in Cryo-EM: Multi-Frequency Vector Diffusion Maps on the Projective Plane | Cryo-Electron Microscopy (Cryo-EM) is an important imaging method which allows high-resolution reconstruction of the 3D structures of biomolecules. It produces highly noisy 2D images by projecting a molecule's 3D density from random viewing directions. Because the projection directions are unknown, estimating the images' poses is necessary to perform the reconstruction. We focus on this task and study it under the group synchronization framework: if the relative poses of pairs of images can be approximated from the data, an estimation of the images' poses is given by the assignment which is most consistent with the relative ones.
In particular, by studying the symmetries of cryo-EM, we show that relative poses in the group O(2) provide sufficient constraints to identify the images' poses, up to the molecule's chirality. With this in mind, we improve the existing multi-frequency vector diffusion maps (MFVDM) method: by using O(2) relative poses, our method not only predicts the similarity between the images' viewing directions but also recovers their poses. Hence, we can leverage all input images in a 3D reconstruction algorithm by initializing the poses with our estimation rather than just clustering and averaging the input images. We validate the recovery capabilities and robustness of our method on randomly generated synchronization graphs and a synthetic cryo-EM dataset. | Accept | This paper formally studies the symmetries in group synchronization for pose estimation in cryo-EM. The main insight obtained by the authors is that the relative poses between images in O(2) provide sufficient constraints to identify image poses in SO(3). This insight leads to improved quality of the multi-frequency vector diffusion map algorithm for 3D reconstruction. A main weakness of the paper is its lack of a proof-of-concept on real-world data. This would have made a much stronger case for the utility of the proposed approach. The current work uses synthetic cryo-EM datasets, which assume a uniform distribution on the viewing angles. This assumption might not hold true for real-world datasets.
"rSj1N40YPh",
"uyMjD333fbe",
"XoKBw5jiTZB",
"JsqBA8BQBAd",
"Yss_3vlAAJw",
"8BjI3Q6tJO"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We agree with the reviewer that validating our method on real-world data is important.\nWe are currently working on some proof-of-concept experiments on a real dataset from EMPIAR, which we hope to include in the next version of the manuscript.\n\nRegarding the proof of sufficiency of $O(2)$ and the failure with ... | [
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
3,
2,
3
] | [
"8BjI3Q6tJO",
"Yss_3vlAAJw",
"JsqBA8BQBAd",
"nips_2022_owDcdLGgEm",
"nips_2022_owDcdLGgEm",
"nips_2022_owDcdLGgEm"
] |
nips_2022_XmK56zbGeCp | Towards Trustworthy Automatic Diagnosis Systems by Emulating Doctors' Reasoning with Deep Reinforcement Learning | The automation of the medical evidence acquisition and diagnosis process has recently attracted increasing attention in order to reduce the workload of doctors and democratize access to medical care. However, most works proposed in the machine learning literature focus solely on improving the prediction accuracy of a patient's pathology. We argue that this objective is insufficient to ensure doctors' acceptability of such systems. In their initial interaction with patients, doctors do not only focus on identifying the pathology a patient is suffering from; they instead generate a differential diagnosis (in the form of a short list of plausible diseases) because the medical evidence collected from patients is often insufficient to establish a final diagnosis. Moreover, doctors explicitly explore severe pathologies before potentially ruling them out from the differential, especially in acute care settings. Finally, for doctors to trust a system's recommendations, they need to understand how the gathered evidences led to the predicted diseases. In particular, interactions between a system and a patient need to emulate the reasoning of doctors. We therefore propose to model the evidence acquisition and automatic diagnosis tasks using a deep reinforcement learning framework that considers three essential aspects of a doctor's reasoning, namely generating a differential diagnosis using an exploration-confirmation approach while prioritizing severe pathologies. We propose metrics for evaluating interaction quality based on these three aspects. We show that our approach performs better than existing models while maintaining competitive pathology prediction accuracy. | Accept | **Technical Review and Decision**: This paper proposes four sets of rewards such that an RL maximizing the sum of those rewards (combined with the environment reward) to mimic doctors' behavior and increase the trust in the automatic differential diagnosis systems. The paper is well-written and the exchange between the reviewers and the authors have been constructive. There are multiple questions and the reviewers are convinced by the response. The authors should include the clarifications in the camera-ready version of the paper. While the methodological contributions are limited to a reward design, this paper qualifies as a good application paper.
**Ethics Review**: The ethical reviewers have identified that the authors need to elaborate more on the doctor consultation process and make it more transparent. I strongly suggest including a discussion of ethical concerns, as discussed in the ethical reviews. | val | [
"6aGfeLAhf4",
"v8VKR7HA0mn",
"6SAD3K6Puny",
"lQupJ_uW7GC",
"qMkPA9Tlpj",
"6lvLd582w_R",
"iE9z9B1uR3i",
"32laJAX_qXF",
"Vo58e0uMzT",
"ilHpQmwUOAu",
"Dek1XHGjKyf",
"smi4oA-_vu3",
"XZ8G2phNfze",
"KfI296Mh-3R",
"MT0umIWEnTt",
"oxfhjPgj9vj"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The paper raises a number of ethical issues including:\n1) doctors in different locations and cultures may be affected by other factors when deciding to trust the algorithm \n2) synthetic patient data may not represent real patients\n3) patients could overly trust the automated diagnosis system when it's no subst... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
3
] | [
"nips_2022_XmK56zbGeCp",
"lQupJ_uW7GC",
"ilHpQmwUOAu",
"qMkPA9Tlpj",
"iE9z9B1uR3i",
"nips_2022_XmK56zbGeCp",
"Dek1XHGjKyf",
"XZ8G2phNfze",
"nips_2022_XmK56zbGeCp",
"KfI296Mh-3R",
"MT0umIWEnTt",
"oxfhjPgj9vj",
"nips_2022_XmK56zbGeCp",
"nips_2022_XmK56zbGeCp",
"nips_2022_XmK56zbGeCp",
"n... |
nips_2022_yjWir-w3gki | Reinforcement Learning with Non-Exponential Discounting | Commonly in reinforcement learning (RL), rewards are discounted over time using an exponential function to model time preference, thereby bounding the expected long-term reward. In contrast, in economics and psychology, it has been shown that humans often adopt a hyperbolic discounting scheme, which is optimal when a specific task termination time distribution is assumed. In this work, we propose a theory for continuous-time model-based reinforcement learning generalized to arbitrary discount functions. This formulation covers the case in which there is a non-exponential random termination time. We derive a Hamilton–Jacobi–Bellman (HJB) equation characterizing the optimal policy and describe how it can be solved using a collocation method, which uses deep learning for function approximation. Further, we show how the inverse RL problem can be approached, in which one tries to recover properties of the discount function given decision data. We validate the applicability of our proposed approach on two simulated problems. Our approach opens the way for the analysis of human discounting in sequential decision-making tasks. | Accept | This paper studies various forms of discounting in continuous time, with a deep learning solution. One of the motivations for doing so is to broaden the range of inverse RL algorithms.
Overall, the reviewers appreciated the perspective taken in this paper and the proposed application of the idea in an IRL context. There was some discussion on whether actual human data experiments were necessary; I note that the experimental results are currently fairly preliminary. The use of the term "reinforcement learning" is also somewhat misleading given that this is closer to more traditional OR work, including the assumption that the MDP parameters are known. However, there was general agreement that this paper plays a useful role in bridging different fields and makes a good contribution.
The authors are encouraged to give a more complete discussion of how this work relates to other techniques such as preference elicitation. | val | [
"4KVmlLTZXhf",
"u6obhejLD0",
"-7ETMIwBeOf",
"_nJQgsn1nAb",
"599f5ALdG_",
"9dJeVzcm-i",
"SZvCZaxeNby",
"pDoiHvenoSf",
"9B58mnSwdhs",
"wMQKZ-7LpGD",
"ZC4OcvXYf1P",
"PoSH9f7F_rm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you, I found your comments are responsive and have increased my score to 5 (I am not comfortable raising it higher at the moment due to (a) limited understanding on my part, and (b) lack of revision currently, although this paper could very well be in 6-7 range). ",
" Thank you for your response. I apprec... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
4
] | [
"9dJeVzcm-i",
"SZvCZaxeNby",
"PoSH9f7F_rm",
"ZC4OcvXYf1P",
"ZC4OcvXYf1P",
"wMQKZ-7LpGD",
"9B58mnSwdhs",
"nips_2022_yjWir-w3gki",
"nips_2022_yjWir-w3gki",
"nips_2022_yjWir-w3gki",
"nips_2022_yjWir-w3gki",
"nips_2022_yjWir-w3gki"
] |
nips_2022_4LZo68TuF-4 | Algorithms and Hardness for Learning Linear Thresholds from Label Proportions | We study the learnability of linear threshold functions (LTFs) in the learning from label proportions (LLP) framework. In this, the feature-vector classifier is learnt from bags of feature-vectors and their corresponding observed label proportions which are satisfied by (i.e., consistent with) some unknown LTF. This problem has been investigated in recent work (Saket21) which gave an algorithm to produce an LTF that satisfies at least $(2/5)$-fraction of a satisfiable collection of bags, each of size $\leq 2$, by solving and rounding a natural SDP relaxation. However, this SDP relaxation is specific to at most $2$-sized bags and does not apply to bags of larger size.
In this work we provide a fairly non-trivial SDP relaxation of a non-quadratic formulation for bags of size $3$. We analyze its rounding procedure using novel matrix decomposition techniques to obtain an algorithm which outputs an LTF satisfying at least $(1/12)$-fraction of the bags of size $\leq 3$. We also apply our techniques to bags of size $q \geq 4$ to provide a $\Omega\left(1/q\right)$-approximation guarantee for a weaker notion of satisfiability. We include comparative experiments on simulated data demonstrating the applicability of our algorithmic techniques.
From the complexity side we provide a hardness reduction to produce instances with bags of any constant size $q$. Our reduction proves the NP-hardness of satisfying more than $({1}/{q}) + o(1)$ fraction of a satisfiable collection of such bags using as hypothesis any function of constantly many LTFs, showing thereby that the problem is harder to approximate as the bag size $q$ increases. Using a strengthened analysis, for $q=2$ we obtain a $({4}/{9}) +o(1)$ hardness factor for this problem, improving upon the $({1}/{2}) + o(1)$ factor shown by Saket21.
| Accept | All of the reviewers found the theoretical results in this paper novel and significant. In particular, the main contribution of the paper, which is the new SDP relaxation, appears to be non-trivial and interesting. However, there remain concerns about readability of the paper, as outlined by one of the reviewers, and we request that the authors put some effort into addressing them. | train | [
"9tOZ57ZHH99",
"05sY5kwFaQc",
"7gQVP5H5e3u",
"pJuJQCILoLh",
"onRQ7kXAhxT",
"SgwG2OdSU1M",
"2yhLcdf062W",
"pfwEq8H6fnL",
"ak0EjzOHe66",
"YqN4d6sV-Je"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer RxJW,\n\nDid the author response address your concerns? \nIf yes, then please acknowledge this in your review (or by responding to the author comments). If not, then please ask the authors a clarifying question during the author-reviewer discussion period (which lasts until this Tuesday Aug 9).\n\nT... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"pfwEq8H6fnL",
"7gQVP5H5e3u",
"2yhLcdf062W",
"SgwG2OdSU1M",
"pfwEq8H6fnL",
"YqN4d6sV-Je",
"ak0EjzOHe66",
"nips_2022_4LZo68TuF-4",
"nips_2022_4LZo68TuF-4",
"nips_2022_4LZo68TuF-4"
] |
nips_2022_w6tBOjPCrIO | MoCoDA: Model-based Counterfactual Data Augmentation | The number of states in a dynamic process is exponential in the number of objects, making reinforcement learning (RL) difficult in complex, multi-object domains. For agents to scale to the real world, they will need to react to and reason about unseen combinations of objects. We argue that the ability to recognize and use local factorization in transition dynamics is a key element in unlocking the power of multi-object reasoning. To this end, we show that (1) known local structure in the environment transitions is sufficient for an exponential reduction in the sample complexity of training a dynamics model, and (2) a locally factored dynamics model provably generalizes out-of-distribution to unseen states and actions. Knowing the local structure also allows us to predict which unseen states and actions this dynamics model will generalize to. We propose to leverage these observations in a novel Model-based Counterfactual Data Augmentation (MoCoDA) framework. MoCoDA applies a learned locally factored dynamics model to an augmented distribution of states and actions to generate counterfactual transitions for RL. MoCoDA works with a broader set of local structures than prior work and allows for direct control over the augmented training distribution. We show that MoCoDA enables RL agents to learn policies that generalize to unseen states and actions. We use MoCoDA to train an offline RL agent to solve an out-of-distribution robotics manipulation task on which standard offline RL algorithms fail. | Accept | The paper suggests to improve sample efficiency and out-of-distribution generalization in RL by learning locally factored world models, and use these models to generate counterfactual data to train on. The key assumption is that the environment model is the right model to factorize (as opposed to, say a policy or value function) and that this model will generalize out of distribution when performing the relevant interventions. All reviewers were in agreement the paper was well written and presented an interesting idea with sound empirical verification. Several comments pointed to an unclear definition of 'counterfactual' used in the paper (as the authors point out, it means different things depending on whether adopting a potential outcome or DAG framework) - please make sure this is clear in the final version, as well as a clear explanation of the distinction with CODA. | train | [
"nEks9MDSRIM",
"zLRJJqmRrWO",
"EjYbTQWy8Q",
"amBgJVW-CQ0",
"JWJzxouC3W3",
"GUqZPWnJA2n",
"3ZjJnAGf7a",
"J5nQ93HOaUM",
"E7GAipNJycL",
"pEhmm4jz8ae",
"jfFOqyFrUg",
"JwZHzkINsST",
"RkCBML_1Me"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad to hear that the response helped to clarify some questions you had in the initial review, and we appreciate the increased score.\n\nSetting aside the scores and reviews, we definitely agree notions of causality and counterfactuals can be quite slippery at times, are used to mean a variety of things in... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"zLRJJqmRrWO",
"JwZHzkINsST",
"amBgJVW-CQ0",
"JWJzxouC3W3",
"RkCBML_1Me",
"JwZHzkINsST",
"jfFOqyFrUg",
"pEhmm4jz8ae",
"nips_2022_w6tBOjPCrIO",
"nips_2022_w6tBOjPCrIO",
"nips_2022_w6tBOjPCrIO",
"nips_2022_w6tBOjPCrIO",
"nips_2022_w6tBOjPCrIO"
] |
nips_2022_rYkGxHPnCIf | Learning Optimal Flows for Non-Equilibrium Importance Sampling | Many applications in computational sciences and statistical inference require the computation of expectations with respect to complex high-dimensional distributions with unknown normalization constants, as well as the estimation of these constants. Here we develop a method to perform these calculations based on generating samples from a simple base distribution, transporting them by the flow generated by a velocity field, and performing averages along these flowlines. This non-equilibrium importance sampling (NEIS) strategy is straightforward to implement and can be used for calculations with arbitrary target distributions. On the theory side, we discuss how to tailor the velocity field to the target and establish general conditions under which the proposed estimator is a perfect estimator with zero-variance. We also draw connections between NEIS and approaches based on mapping a base distribution onto a target via a transport map. On the computational side, we show how to use deep learning to represent the velocity field by a neural network and train it towards the zero variance optimum. These results are illustrated numerically on benchmark examples (with dimension up to 10), where after training the velocity field, the variance of the NEIS estimator is reduced by up to 6 orders of magnitude lower than that of a vanilla estimator. We also compare the performances of NEIS with those of Neal’s annealed importance sampling (AIS). | Accept | This paper studies the properties of a non-equilibrium importance sampling method for unnormalized densities. The reviewers unanimously agreed the paper is well-written and the contribution is novel. Several reviewers expressed concerns on the practicality of the algorithm and that the empirical results do not adequately show the benefits of the method. Overall, the reviewers felt that the rebuttal did not adequately address their concerns.
After discussion, the reviewers agreed that the paper is a worthwhile contribution and unanimously recommended acceptance. The final revision of the paper should address the limitations of the method more explicitly and tone down the claims about the empirical contribution and the method’s practicality. Another reviewer expressed that the paper needs to address the issue that in practice, numerical integration is required to solve ODEs and can lead to longer running times. Please revise the paper carefully based on the reviewers' feedback.
| train | [
"US0FjmvT0GB",
"ux9R71ueuxx",
"8L9-fie7pW",
"E-FIrTnOuL",
"glXkfNyxdXxo",
"R5Cshkk7PUW",
"uFbHvDJhVjg",
"-D7zBSrl_WN",
"vYO_HW8sA8w",
"TlcemdLzyIF",
"jDNFKg6ZnwG",
"dH8vh2e7CU",
"SpmyArYIzid6"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Clearly, many simpler MCMC algorithms exist that have wide (but not full) applicability in statistics and other fields. However, for us, one important motivation to the development of adaptive NEIS comes from its appealing and robust theoretical properties, a rare feat amongst importance sampling strategies that ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
2
] | [
"ux9R71ueuxx",
"vYO_HW8sA8w",
"nips_2022_rYkGxHPnCIf",
"uFbHvDJhVjg",
"-D7zBSrl_WN",
"TlcemdLzyIF",
"jDNFKg6ZnwG",
"dH8vh2e7CU",
"SpmyArYIzid6",
"nips_2022_rYkGxHPnCIf",
"nips_2022_rYkGxHPnCIf",
"nips_2022_rYkGxHPnCIf",
"nips_2022_rYkGxHPnCIf"
] |
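The NEIS record above repeatedly references Neal's annealed importance sampling (AIS) as its baseline. For readers who want that baseline stated precisely, the LaTeX sketch below writes out the standard AIS construction and its unbiasedness property; this is textbook background on AIS, not the paper's NEIS estimator, and the symbols (normalized base $p_0$, unnormalized target $\gamma$, schedule $\beta_k$, kernels $T_k$) are generic notation chosen for this note rather than the paper's.

```latex
% Annealed importance sampling (Neal, 2001): estimate Z = \int \gamma(x)\,dx by
% bridging a tractable, normalized base density p_0 and the unnormalized target
% \gamma through intermediate densities f_k, then averaging importance weights w.
\begin{align}
  f_k(x) &= p_0(x)^{\,1-\beta_k}\,\gamma(x)^{\,\beta_k},
    \qquad 0 = \beta_0 < \beta_1 < \dots < \beta_K = 1, \\
  w &= \prod_{k=1}^{K} \frac{f_k(x_{k-1})}{f_{k-1}(x_{k-1})},
    \qquad x_0 \sim p_0, \quad x_k \sim T_k(\cdot \mid x_{k-1}),
\end{align}
% where each MCMC kernel T_k leaves f_k invariant (k = 1, ..., K-1).
% Then E[w] = Z, so the empirical mean of w over independent runs is an unbiased
% estimator of the normalization constant; the variance of this estimator is what
% NEIS-style methods aim to drive toward zero.
```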
nips_2022_FlWdTyUznCc | Residual Multiplicative Filter Networks for Multiscale Reconstruction | Coordinate networks like Multiplicative Filter Networks (MFNs) and BACON offer some control over the frequency spectrum used to represent continuous signals such as images or 3D volumes. Yet, they are not readily applicable to problems for which coarse-to-fine estimation is required, including various inverse problems in which coarse-to-fine optimization plays a key role in avoiding poor local minima. We introduce a new coordinate network architecture and training scheme that enables coarse-to-fine optimization with fine-grained control over the frequency support of learned reconstructions. This is achieved with two key innovations. First, we incorporate skip connections so that structure at one scale is preserved when fitting finer-scale structure. Second, we propose a novel initialization scheme to provide control over the model frequency spectrum at each stage of optimization. We demonstrate how these modifications enable multiscale optimization for coarse-to-fine fitting to natural images. We then evaluate our model on synthetically generated datasets for the problem of single-particle cryo-EM reconstruction. We learn high-resolution multiscale structures, on par with the state of the art. Project webpage: https://shekshaa.github.io/ResidualMFN/.
| Accept | The paper studies Multiplicative Filter Networks, which are coordinate neural networks in which each layer applies a multiplicative (Hadamard product) filter and a sinusoidal nonlinearity. The paper shows how introducing residual connections and initializing appropriately can lead to networks where the frequency content of the image separates over layers. This leads to a learned version of classical “coarse-to-fine” reconstruction methods, which the paper terms Residual Multiplicative Filter Networks. The paper illustrates its proposals with experiments on image approximation and on cryo-EM reconstruction.
Reviewers found that the paper presents a simple idea, which can be easily adopted whenever a coarse-to-fine reconstruction is desired, and as such is likely to see follow-up work. The main questions concerned the necessity of a coarse-to-fine approach in applications where one ultimately seeks a reconstruction at just a single scale, and the cryo-EM experiments, which show good performance compared to a baseline when the coordinate network model is integrated into a larger system. Overall, the reviewers found that the paper presents a natural modification to MFNs which improves both their interpretability and applicability in inverse problems in imaging.
| val | [
"sKdH4_kSrfC",
"FxLIgQNo2t_",
"EAK7-P6aE7CK",
"oMRZZtXRxMA",
"gZ3_FO61t2Z",
"9fwcFOkpAQ",
"AqFnG3g-UT9",
"EgLvuk7mB8A",
"KidkbyFBzZM"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > In the revised version of the paper, we will add more background on coordinate networks to the introduction to make the work more accessible.\n\nThank you for this. I believe this will make the paper more self-contained and easy to understand for researchers who are outside of this sub-field. \n\n> We agree tha... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"FxLIgQNo2t_",
"KidkbyFBzZM",
"oMRZZtXRxMA",
"EgLvuk7mB8A",
"9fwcFOkpAQ",
"AqFnG3g-UT9",
"nips_2022_FlWdTyUznCc",
"nips_2022_FlWdTyUznCc",
"nips_2022_FlWdTyUznCc"
] |
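The Residual MFN record above assumes familiarity with Multiplicative Filter Networks. As background, the LaTeX sketch below states the basic MFN recursion of Fathony et al. (2021) that the paper modifies; the residual skip connections and the spectrum-controlling initialization described in the abstract are deliberately not reproduced here, and the symbols are generic rather than the paper's own notation.

```latex
% Multiplicative Filter Network (Fathony et al., 2021), the architecture the record
% above builds on. Each layer takes the Hadamard product (\circ) of a sinusoidal
% filter of the input coordinates x with a linear map of the previous features, so
% the output is a sum of sinusoids whose frequency content grows with depth:
\begin{align}
  z^{(1)} &= g\bigl(x; \theta^{(1)}\bigr), \\
  z^{(i+1)} &= \bigl(W^{(i)} z^{(i)} + b^{(i)}\bigr) \circ g\bigl(x; \theta^{(i+1)}\bigr),
    \qquad i = 1, \dots, k-1, \\
  f(x) &= W^{(k)} z^{(k)} + b^{(k)},
    \qquad g\bigl(x; \theta^{(i)}\bigr) = \sin\bigl(\omega^{(i)} x + \phi^{(i)}\bigr).
\end{align}
% The paper's contribution adds skip connections across layers and an initialization
% that controls which frequencies each stage of optimization can represent.
```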
nips_2022_XVfOai2ytN1 | On global convergence of ResNets: From finite to infinite width using linear parameterization | Overparameterization is a key factor used to explain the global convergence of gradient descent (GD) for neural networks in the absence of convexity. Besides the well-studied lazy regime, infinite-width (mean-field) analysis has been developed for shallow networks, using convex optimization techniques. To bridge the gap between the lazy and mean-field regimes, we study Residual Networks (ResNets) in which the residual block has linear parameterization while still being nonlinear. Such ResNets admit both infinite depth and width limits, encoding residual blocks in a Reproducing Kernel Hilbert Space (RKHS). In this limit, we prove a local Polyak-Lojasiewicz inequality. Thus, every critical point is a global minimizer and a local convergence result of GD holds, retrieving the lazy regime. In contrast with other mean-field studies, it applies to both parametric and non-parametric cases under an expressivity condition on the residuals. Our analysis leads to a practical and quantified recipe: starting from a universal RKHS, Random Fourier Features are applied to obtain a finite-dimensional parameterization satisfying our expressivity condition with high probability. | Accept | The paper presents a convergence analysis for ResNets in a certain asymptotic regime, for which the authors are able to establish a local Polyak-Lojasiewicz inequality. The analysis sheds new light on the convergence of neural network training. It is a technically sound paper and merits acceptance to the conference. | train | [
"peLH0JqHFa",
"UFLVK8ahZqb",
"THKH6qGB6uf",
"jqQvl8iMbM",
"eeixXld848F",
"gYSJCKS65HN",
"82o-wU7Z1u",
"sFF5t237UP6",
"pNrrrQOB0hu",
"FSNcNme9TVr",
"ctbVcKMthyU",
"-anNn-eS5-f",
"5WVgiWu93uM",
"WC4oML3XwH",
"e-_1BRk1kJX"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank you for the update and your support. ",
" The paper have nine content page, all the other stuff are in the appendix\nThis can be simply fixed by split the pdf. It's very rude to reject a paper by this means.\n\nI hope the ac can ignore this rude reviewer.",
" We would like to thank you ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
1,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"UFLVK8ahZqb",
"jqQvl8iMbM",
"eeixXld848F",
"eeixXld848F",
"nips_2022_XVfOai2ytN1",
"-anNn-eS5-f",
"nips_2022_XVfOai2ytN1",
"e-_1BRk1kJX",
"e-_1BRk1kJX",
"e-_1BRk1kJX",
"e-_1BRk1kJX",
"5WVgiWu93uM",
"nips_2022_XVfOai2ytN1",
"nips_2022_XVfOai2ytN1",
"nips_2022_XVfOai2ytN1"
] |
nips_2022_49TS-pwQWBa | Learning Robust Dynamics through Variational Sparse Gating | Learning world models from their sensory inputs enables agents to plan for actions by imagining their future outcomes. World models have previously been shown to improve sample-efficiency in simulated environments with few objects, but have not yet been applied successfully to environments with many objects. In environments with many objects, often only a small number of them are moving or interacting at the same time. In this paper, we investigate integrating this inductive bias of sparse interactions into the latent dynamics of world models trained from pixels. First, we introduce Variational Sparse Gating (VSG), a latent dynamics model that updates its feature dimensions sparsely through stochastic binary gates. Moreover, we propose a simplified architecture Simple Variational Sparse Gating (SVSG) that removes the deterministic pathway of previous models, resulting in a fully stochastic transition function that leverages the VSG mechanism. We evaluate the two model architectures in the BringBackShapes (BBS) environment that features a large number of moving objects and partial observability, demonstrating clear improvements over prior models. | Accept | The reviewers agreed this work is well written and the set of experiments are good. However, a general concern was that the interpretation / explanation of this method should be improved. The rebuttal seems to have addressed these points to a good degree and we urge the authors to revise the work to include the further explanations / experiments and analysis in the final version. | train | [
"p1pbnJjj3s",
"lglx-NtSoe",
"_KDG9ow265f",
"nrMW47UlMKg",
"qKvCdNGZiCi",
"6IlQNPmppGe",
"cbMkBhSw0X",
"u0P-D_zDxoB",
"PNaAYBj9MBQX",
"yo5iDS8N3uX",
"nTk2FtRx-dZ",
"6ITPZOJy3I",
"FZ1jjAqamsi",
"N8N2ilfZDbZ",
"dH8uFfsPY16",
"rKBOu2AgXl2"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 4uKx,\n\n> Comparison to ensemble-based methods, e.g., plan2explore: \n\nAs discussed in Table 1 and Section 3.8 in APD (https://arxiv.org/abs/2009.01791), exploration using ensemble-based methods like Plan2Explore and using SVSG will fall under two different categories. Planning using methods like ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"lglx-NtSoe",
"qKvCdNGZiCi",
"PNaAYBj9MBQX",
"u0P-D_zDxoB",
"nTk2FtRx-dZ",
"6ITPZOJy3I",
"nips_2022_49TS-pwQWBa",
"yo5iDS8N3uX",
"rKBOu2AgXl2",
"dH8uFfsPY16",
"N8N2ilfZDbZ",
"FZ1jjAqamsi",
"nips_2022_49TS-pwQWBa",
"nips_2022_49TS-pwQWBa",
"nips_2022_49TS-pwQWBa",
"nips_2022_49TS-pwQWBa... |
nips_2022_7yJMZwhIC2k | A Theoretical Framework for Inference Learning | Backpropagation (BP) is the most successful and widely used algorithm in deep learning. However, the computations required by BP are challenging to reconcile with known neurobiology. This difficulty has stimulated interest in more biologically plausible alternatives to BP. One such algorithm is the inference learning algorithm (IL). IL trains predictive coding models of neural circuits and has achieved equal performance to BP on supervised and auto-associative tasks. In contrast to BP, however, the mathematical foundations of IL are not well-understood. Here, we develop a novel theoretical framework for IL. Our main result is that IL closely approximates an optimization method known as implicit stochastic gradient descent (implicit SGD), which is distinct from the explicit SGD implemented by BP. Our results further show how the standard implementation of IL can be altered to better approximate implicit SGD. Our novel implementation considerably improves the stability of IL across learning rates, which is consistent with our theory, as a key property of implicit SGD is its stability. We provide extensive simulation results that further support our theoretical interpretations and find IL achieves quicker convergence when trained with mini-batch size one while performing competitively with BP for larger mini-batches when combined with Adam. | Accept | This paper presents an interesting connection between stochastic gradient descent by backpropagation and the "inference learning" algorithm for predictive coding. The key result is that inference learning approximates _implicit_ gradient descent, rather than explicit SGD as normally implemented. The implicit methods perform comparably to standard methods, and they may be of interest to computational neuroscientists interested in biologically plausible learning rules.
In addition to addressing the reviewers' concerns, I would encourage the authors to improve the exposition around Eqs. 1 and 2. The stated equalities require a few lines of calculus to derive, and you could spare the reader the trouble. | train | [
"iYVLdzyURej",
"pj-C-5-rRet",
"7iQ-8WactGH",
"iTLvWRIJ9VN",
"KHizi18wD25",
"ud7PcwfFEVZ",
"jof6b3juoE",
"aun7VMkESHm",
"i82DJb1sVg9"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank reviewer an3x for their clarification and are happy to hear they found our response compelling. As for the reviewer's question, we have not tested IL updates against implicit SGD updates that exploit automatic differentiation, but we think that would be a useful comparison for us to do in future work. We... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"7iQ-8WactGH",
"ud7PcwfFEVZ",
"iTLvWRIJ9VN",
"i82DJb1sVg9",
"aun7VMkESHm",
"jof6b3juoE",
"nips_2022_7yJMZwhIC2k",
"nips_2022_7yJMZwhIC2k",
"nips_2022_7yJMZwhIC2k"
] |
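The inference-learning record above rests on the difference between explicit SGD (the gradient evaluated at the current iterate) and implicit SGD (the gradient evaluated at the next iterate, i.e., a proximal step). The Python sketch below illustrates that difference on a toy 1-D quadratic loss where the implicit update has a closed form; it only demonstrates why implicit updates stay stable at large learning rates and is not code from, or a reimplementation of, the paper's IL algorithm; all names and constants are invented for the example.

```python
# Explicit vs. implicit SGD on the 1-D quadratic loss L(w) = 0.5 * a * (w - b)**2.
# Explicit step: w_{t+1} = w_t - lr * L'(w_t) = w_t - lr * a * (w_t - b)
# Implicit step: w_{t+1} = w_t - lr * L'(w_{t+1}), which for a quadratic solves to
#                w_{t+1} = (w_t + lr * a * b) / (1 + lr * a)

def explicit_step(w, a, b, lr):
    return w - lr * a * (w - b)

def implicit_step(w, a, b, lr):
    # Solve w_new = w - lr * a * (w_new - b) for w_new (closed form for quadratics).
    return (w + lr * a * b) / (1 + lr * a)

def run(step_fn, a=10.0, b=3.0, lr=0.5, w0=0.0, steps=20):
    w = w0
    for _ in range(steps):
        w = step_fn(w, a, b, lr)
    return w

if __name__ == "__main__":
    # With lr * a = 5 > 2, explicit gradient descent diverges on this quadratic,
    # while the implicit update still converges to the minimizer b = 3.
    print("explicit:", run(explicit_step))   # oscillates with rapidly growing magnitude
    print("implicit:", run(implicit_step))   # approaches 3.0
```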
nips_2022_fRWwcgfXXZ | Deep Learning Methods for Proximal Inference via Maximum Moment Restriction | The No Unmeasured Confounding Assumption is widely used to identify causal effects in observational studies. Recent work on proximal inference has provided alternative identification results that succeed even in the presence of unobserved confounders, provided that one has measured a sufficiently rich set of proxy variables, satisfying specific structural conditions. However, proximal inference requires solving an ill-posed integral equation. Previous approaches have used a variety of machine learning techniques to estimate a solution to this integral equation, commonly referred to as the bridge function. However, prior work has often been limited by relying on pre-specified kernel functions, which are not data adaptive and struggle to scale to large datasets. In this work, we introduce a flexible and scalable method based on a deep neural network to estimate causal effects in the presence of unmeasured confounding using proximal inference. Our method achieves state of the art performance on two well-established proximal inference benchmarks. Finally, we provide theoretical consistency guarantees for our method. | Accept | Reviewers agreed that the paper proposes a new and valuable method for proximal causal inference with a solid theoretical analysis. The reviewers pointed out several ways in which the theory might be improved, some of which have already been undertaken by the authors, while some remain open. A potential drawback is that the authors were originally unaware of highly related work (especially Cui et al. 2020 and Kallus et al. 2021); while the authors have now addressed this work, I suggest that in the final version they add further details about the differences between their results and those of the above papers.
Having said all that, we all view the contribution positively and believe its merits outweigh its drawbacks. | train | [
"DWQG9bxFrM7",
"mw0BzPQfw-V",
"Z359NtjU3V",
"YCphufv-b_8",
"8OmvA49AlNQ",
"84e98xCH3_i",
"AnEctt3IfBM",
"Yc-coiysaX7",
"v2JuwZ8wi2",
"eJoKoJ84N_",
"J7FDYJjFQIB",
"KQQCNV-YK6q",
"oAKdhS7oKP",
"9br_tsMNnuI",
"auErm9OHfa",
"UYEFmcnq1So",
"0e5b1mfyRlW"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" To clarify, \"one step in the argument is plausible yet missing\" refers to \"obtaining those formal bounds [on Rademacher complexities] would be involved, but probably doable with fast rates\".\n\nI will see if I can find some references whose combination might give the desired result.\n\nThanks for your hard wo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"mw0BzPQfw-V",
"Z359NtjU3V",
"v2JuwZ8wi2",
"UYEFmcnq1So",
"auErm9OHfa",
"nips_2022_fRWwcgfXXZ",
"Yc-coiysaX7",
"0e5b1mfyRlW",
"eJoKoJ84N_",
"J7FDYJjFQIB",
"UYEFmcnq1So",
"auErm9OHfa",
"9br_tsMNnuI",
"nips_2022_fRWwcgfXXZ",
"nips_2022_fRWwcgfXXZ",
"nips_2022_fRWwcgfXXZ",
"nips_2022_fR... |
nips_2022_uytgM9N0vlR | Does GNN Pretraining Help Molecular Representation? | Extracting informative representations of molecules using graph neural networks (GNNs) is crucial in AI-driven drug discovery. Recently, the graph research community has been trying to replicate the success of self-supervised pretraining in natural language processing, with several successes claimed. However, we find that the benefit brought by self-supervised pretraining on small molecular data can be negligible in many cases. We conduct thorough ablation studies on the key components of GNN pretraining, including pretraining objectives, data splitting methods, input features, pretraining dataset scales, and GNN architectures, to see how they affect the accuracy of the downstream tasks. Our first important finding is that self-supervised graph pretraining does not always have statistically significant advantages over non-pretraining methods in many settings. Secondly, although noticeable improvement can be observed with additional supervised pretraining, the improvement may diminish with richer features or more balanced data splits. Thirdly, hyper-parameters could have larger impacts on the accuracy of downstream tasks than the choice of pretraining tasks, especially when the scales of downstream tasks are small. Finally, we provide our conjecture that the complexity of some pretraining methods on small molecules might be insufficient, followed by empirical evidence on different pretraining datasets. | Accept | The reviewers were split about this paper: on one hand they appreciated the sensitivity analyses and surprising findings, on the other they were concerned about the overall contribution of the work. After going through it and the discussion, I have decided to vote to accept given the clear and convincing author response. I urge the authors to take all of the reviewers' suggested changes into account (if not already done so). Once done, this paper will be a nice addition to the conference! | train | [
"T5RL1hN9gv",
"OSG7jKv1oK8",
"kbvx0x4Rvsu",
"vmjAMEKGvxC",
"CF2ACnAqHSy",
"bvCqTF_qr6",
"kdT3rbjXKOB",
"vdRDmtjGwZ3",
"UkTjWOA-XFf",
"3MZFRERyiw",
"Rndw48tvwJ",
"hQYQhCGr42U",
"p_YbYSkoSlo",
"-HyW1VM7bz",
"mtaM7YAYvaj",
"Nn-OX-2_S6P",
"L_UleM5VAFE",
"dURNeynEakz",
"g3VELZD-4dZ",
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"... | [
" I will raise the rating to 4.",
" Thank you for the response and I maintain the recommendation of accept.",
" We sincerely thank you for the valuable time you spent on improving our paper in the busy rebuttal time! Thanks for your useful and constructive comments. With your inputs, our paper can improve signi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
3,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5,
4
] | [
"kbvx0x4Rvsu",
"g3VELZD-4dZ",
"CF2ACnAqHSy",
"bvCqTF_qr6",
"bvCqTF_qr6",
"vdRDmtjGwZ3",
"hQYQhCGr42U",
"Rndw48tvwJ",
"mtaM7YAYvaj",
"nips_2022_uytgM9N0vlR",
"Nn-OX-2_S6P",
"L_UleM5VAFE",
"-HyW1VM7bz",
"_TBHVScy6Tw",
"_oJjKxJWkWi",
"xHBi5HNpwX",
"iBxqJ5l0NyX",
"nips_2022_uytgM9N0vlR... |
nips_2022_CMcptt6nFaQ | Structure-Aware Image Segmentation with Homotopy Warping | Besides per-pixel accuracy, topological correctness is also crucial for the segmentation of images with fine-scale structures, e.g., satellite images and biomedical images. In this paper, by leveraging the theory of digital topology, we identify pixels in an image that are critical for topology. By focusing on these critical pixels, we propose a new \textbf{homotopy warping loss} to train deep image segmentation networks for better topological accuracy. To efficiently identify these topologically critical pixels, we propose a new algorithm exploiting the distance transform. The proposed algorithm, as well as the loss function, naturally generalize to different topological structures in both 2D and 3D settings. The proposed loss function helps deep nets achieve better performance in terms of topology-aware metrics, outperforming state-of-the-art structure/topology-aware segmentation methods. | Accept | The paper proposes a topology-aware learning objective for semantic segmentation models based upon warping masks. The loss is used for training satellite data and medical segmentation datasets and provides benefits in these domains. Reviewers acknowledge that the approach is simple, intuitive, has thorough empirical evaluation and ablations. Reviewers note a few presentation issues, e.g. related to terminology. These must be fixed in a final revision of the paper. Overall all reviewers vote for acceptance and so do I. | train | [
"4ElPuKji35C",
"tMEUu9kmYXb",
"Rf0dPVVswk_",
"NIDX5126YUx",
"1pWsj1Xp6pa",
"atuoRm_XcB-",
"SfoRQsHTvnY",
"k3FI5O6ePoW",
"lPeVhsBIvdp",
"bcBmj9Bd2Yq",
"jVWsUlHExwJ",
"JmIn9j8Dq_j",
"JDPXhw_VeSV",
"V2sets02lUZ",
"Lh_c_Opp0bK",
"eQmWP6-zyp",
"hCGmqQ_JBt2",
"o8dDHZoysm2",
"VAZv3DQYD4... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" Dear Reviewer RT7y,\n\nThanks very much for your positive feedback! We'll make the modifications accordingly in the final version.\n\nBest,\n\nAuthors of Paper #11428",
" Dear Reviewer 6qMj,\n\nThanks very much for your confirmation and suggestions! We'll include a separate limitation section/paragraph in the f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"1pWsj1Xp6pa",
"NIDX5126YUx",
"atuoRm_XcB-",
"eQmWP6-zyp",
"SfoRQsHTvnY",
"hCGmqQ_JBt2",
"k3FI5O6ePoW",
"Lh_c_Opp0bK",
"Lh_c_Opp0bK",
"jVWsUlHExwJ",
"JmIn9j8Dq_j",
"JDPXhw_VeSV",
"V2sets02lUZ",
"F9M9-kjX7MY",
"mRD_xuF-GDr",
"GRrzwUVeb4d",
"VAZv3DQYD4T",
"nips_2022_CMcptt6nFaQ",
"... |
nips_2022_kCtnkLv-_W0 | Enhanced Meta Reinforcement Learning via Demonstrations in Sparse Reward Environments | Meta reinforcement learning (Meta-RL) is an approach wherein the experience gained from solving a variety of tasks is distilled into a meta-policy. The meta-policy, when adapted over only a small (or just a single) number of steps, is able to perform near-optimally on a new, related task. However, a major challenge to adopting this approach to solve real-world problems is that they are often associated with sparse reward functions that only indicate whether a task is completed partially or fully. We consider the situation where some data, possibly generated by a sub-optimal agent, is available for each task. We then develop a class of algorithms entitled Enhanced Meta-RL via Demonstrations (EMRLD) that exploit this information---even if sub-optimal---to obtain guidance during training. We show how EMRLD jointly utilizes RL and supervised learning over the offline data to generate a meta-policy that demonstrates monotone performance improvements. We also develop a warm started variant called EMRLD-WS that is particularly efficient for sub-optimal demonstration data. Finally, we show that our EMRLD algorithms significantly outperform existing approaches in a variety of sparse reward environments, including that of a mobile robot. | Accept | The authors propose EMRLD algorithms, which use potentially suboptimal demonstrations to perform meta-RL in environments where rewards are sparse. The algorithm is illustrated well in Point2D navigation toy examples to illustrate how it solves a multi-task goal reaching environment on both suboptimal and optimal data. Empirical results on twowheeled and halfcheetah forward-backward are compelling, and I appreciated the real-world experiments on Turtlebot. All reviewers have voted to accept. | train | [
"Gi0Ko9UEqm",
"C4T4EeoC4a4",
"U38QxcuTeZ7",
"rrwkacnwNTP",
"EAJJ_jCIH-",
"FoGXVij39JO",
"zWRC4ye2eLW",
"o_37ojNT05",
"dm6YuSsQ3xu"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarifications and additional experiments! I have kept my positive rating.",
" I thank the authors for their thorough reply. Most of my questions and comments were addressed to my satisfaction. I would have preferred the discussion of limitations to be in the main paper, but understand space l... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"rrwkacnwNTP",
"U38QxcuTeZ7",
"dm6YuSsQ3xu",
"o_37ojNT05",
"zWRC4ye2eLW",
"nips_2022_kCtnkLv-_W0",
"nips_2022_kCtnkLv-_W0",
"nips_2022_kCtnkLv-_W0",
"nips_2022_kCtnkLv-_W0"
] |
nips_2022_U4BUMoVTrB2 | DOPE: Doubly Optimistic and Pessimistic Exploration for Safe Reinforcement Learning | Safe reinforcement learning is extremely challenging--not only must the agent explore an unknown environment, it must do so while ensuring no safety constraint violations. We formulate this safe reinforcement learning (RL) problem using the framework of a finite-horizon Constrained Markov Decision Process (CMDP) with an unknown transition probability function, where we model the safety requirements as constraints on the expected cumulative costs that must be satisfied during all episodes of learning. We propose a model-based safe RL algorithm that we call Doubly Optimistic and Pessimistic Exploration (DOPE), and show that it achieves an objective regret $\tilde{O}(|\mathcal{S}|\sqrt{|\mathcal{A}| K})$ without violating the safety constraints during learning, where $|\mathcal{S}|$ is the number of states, $|\mathcal{A}|$ is the number of actions, and $K$ is the number of learning episodes. Our key idea is to combine a reward bonus for exploration (optimism) with a conservative constraint (pessimism), in addition to the standard optimistic model-based exploration. DOPE is not only able to improve the objective regret bound, but also shows a significant empirical performance improvement as compared to earlier optimism-pessimism approaches. | Accept | I went through the paper, reviews and responses. This is a borderline paper with reasonable theoretical analysis but weak experimental results. Lack of comparisons to practical safe RL algorithms is not an advantage.
I tend to accept. I'm also ok with a borderline reject. | train | [
"29MvdgOYl6",
"MVGCscJFq_R",
"B8GXtlrPUuKK",
"5_TU0IDb7pK",
"96X00qDmEEG",
"T6_xFODnne",
"JTsR59oDTk",
"QJ7CMsF8kOC",
"AznZRV8f6fj",
"DDWxmSpjamn"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the reply! ",
" We apologize for not explicitly responding to that question in our response. We had tried to immediately address those questions that we thought were of greatest importance to the reviewer. To answer this question, please note that DOPE can operate with any baseline pol... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"T6_xFODnne",
"B8GXtlrPUuKK",
"JTsR59oDTk",
"96X00qDmEEG",
"DDWxmSpjamn",
"AznZRV8f6fj",
"QJ7CMsF8kOC",
"nips_2022_U4BUMoVTrB2",
"nips_2022_U4BUMoVTrB2",
"nips_2022_U4BUMoVTrB2"
] |
nips_2022_ITXgYOFi8b | Incentivizing Combinatorial Bandit Exploration | Consider a bandit algorithm that recommends actions to self-interested users in a recommendation system. The users are free to choose other actions and need to be incentivized to follow the algorithm's recommendations. While the users prefer to exploit, the algorithm can incentivize them to explore by leveraging the information collected from the previous users. All published work on this problem, known as incentivized exploration, focuses on small, unstructured action sets and mainly targets the case when the users' beliefs are independent across actions. However, realistic exploration problems often feature large, structured action sets and highly correlated beliefs. We focus on a paradigmatic exploration problem with structure: combinatorial semi-bandits. We prove that Thompson Sampling, when applied to combinatorial semi-bandits, is incentive-compatible when initialized with a sufficient number of samples of each arm (where this number is determined in advance by the Bayesian prior). Moreover, we design incentive-compatible algorithms for collecting the initial samples.
| Accept | This paper investigates the problem of incentivized exploration in the combinatorial semi-bandits setting. The reviewers are overall positive about the paper. The main concern of the paper is that the contribution is incremental given the number of prior works on incentivizing exploration. The authors' responses have more explicitly addressed this concern, and we encourage the authors to incorporate their responses into the paper. There have also been issues brought up about the lack of experiments, but I agree with the authors that given the main contribution of this work is theoretical, having empirical evaluations is a plus but not required. Overall, I believe the contribution of the paper outweighs the concerns and would therefore recommend acceptance. | train | [
"AdtGmTl5bHU",
"aVyNKUkpseR",
"Qzt5r5xnWn",
"TItJzLhRjIq",
"mhTCTrRHLQl",
"F5uAZvl67G",
"jycKn0IE3oT",
"NAAhYZe-r82",
"xVg1rSeyZxx",
"X5vLAuvnSq2",
"BdClT2UTH6L"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have no further questions.",
" Thanks for following up!\n\nRe: arbitrary correlation matrix across atoms\n\nWe need independence to apply Harris inequality. But this is a great question! We discussed replacing Harris inequality with FKG inequality, which admits weaker assumptions. We succeeded in pushing thes... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
3
] | [
"aVyNKUkpseR",
"Qzt5r5xnWn",
"TItJzLhRjIq",
"BdClT2UTH6L",
"X5vLAuvnSq2",
"xVg1rSeyZxx",
"NAAhYZe-r82",
"nips_2022_ITXgYOFi8b",
"nips_2022_ITXgYOFi8b",
"nips_2022_ITXgYOFi8b",
"nips_2022_ITXgYOFi8b"
] |
nips_2022_LsWxgJZpRl | Near-optimal Distributional Reinforcement Learning towards Risk-sensitive Control | We consider finite episodic Markov decision processes aiming at the entropic risk measure (ERM) of return for risk-sensitive control. We identify two properties of the ERM that enable risk-sensitive distributional dynamic programming. We propose two novel distributional reinforcement learning (DRL) algorithms, including a model-free one and a model-based one, that implement optimism through two different schemes. We prove that both of them attain a $\tilde{\mathcal{O}}(\frac{\exp(|\beta| H)-1}{|\beta|H}H\sqrt{HS^2AT})$ regret upper bound, where $S$ is the number of states, $A$ the number of actions, $H$ the time horizon and $T$ the number of total time steps. It matches RSVI2 proposed in \cite{fei2021exponential} with a much simpler regret analysis. To the best of our knowledge, this is the first regret analysis of DRL, which theoretically verifies the efficacy of DRL for risk-sensitive control. Finally, we improve the existing lower bound by proving a tighter bound of $\Omega(\frac{\exp(\beta H/6)-1}{\beta H}H\sqrt{SAT})$ for the $\beta>0$ case, which recovers the tight lower bound $\Omega(H\sqrt{SAT})$ in the risk-neutral setting. | Reject | While this work provides interesting insights into distributional reinforcement learning for risk-sensitive control, it is unclear how beneficial these results are given the closely related work [22] (on the same problem with the same regret bound). The authors mentioned in the rebuttal that their algorithm may motivate the design of similar algorithms for other risk measures, but no concrete discussion or examples were provided. We believe that the paper would benefit from another round of revision to properly address these issues and make its contributions more convincing. | train | [
"Il_Jo2gakCA",
"GGOhiOuCD4",
"NiYao9bpEfG",
"R1tlYCKh6Yi",
"U0SS1gKAd63",
"SDHKI2QnCi2",
"qL76K6e-_sL",
"kH9XUdfDZmW",
"d0-FW04xsS",
"pLwVs1EP56",
"y9kaBKSCWhO",
"hSFF4U8z8MA",
"6qNKVYCYgfo",
"QknJAJxacpC",
"gnByhT0mhP"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer SdUB,\n\nWe have uploaded a revised version of the paper, in which the revised part is highlighted on the red front. We hope that the revised paper could better address your concern. Feel free to reply if you have any questions.\n\n",
" Dear reviewer CsUD,\n\nWe have uploaded a revised version of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"gnByhT0mhP",
"QknJAJxacpC",
"6qNKVYCYgfo",
"hSFF4U8z8MA",
"SDHKI2QnCi2",
"gnByhT0mhP",
"kH9XUdfDZmW",
"QknJAJxacpC",
"6qNKVYCYgfo",
"y9kaBKSCWhO",
"hSFF4U8z8MA",
"nips_2022_LsWxgJZpRl",
"nips_2022_LsWxgJZpRl",
"nips_2022_LsWxgJZpRl",
"nips_2022_LsWxgJZpRl"
] |
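The risk-sensitive DRL record above is built around the entropic risk measure (ERM) of the return. For reference, the LaTeX sketch below gives the standard ERM definition and its small-$\beta$ expansion; this is the textbook definition rather than anything algorithm-specific from the paper, with $X$ standing for the random return and $\beta$ for the risk parameter.

```latex
% Entropic risk measure of a random return X with risk parameter \beta \neq 0:
\begin{equation}
  U_\beta(X) \;=\; \frac{1}{\beta}\,\log \mathbb{E}\bigl[e^{\beta X}\bigr].
\end{equation}
% For small |\beta|, a Taylor expansion gives
%   U_\beta(X) \approx \mathbb{E}[X] + \tfrac{\beta}{2}\,\mathrm{Var}(X),
% so \beta > 0 rewards variance (risk-seeking), \beta < 0 penalizes it (risk-averse),
% and \beta \to 0 recovers the risk-neutral objective \mathbb{E}[X], which is
% consistent with the abstract's remark that the lower bound collapses to
% \Omega(H\sqrt{SAT}) in the risk-neutral setting.
```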
nips_2022_IQIY2LASzYx | A Simple Decentralized Cross-Entropy Method | The Cross-Entropy Method (CEM) is commonly used for planning in model-based reinforcement learning (MBRL), where a centralized approach is typically utilized to update the sampling distribution based only on the results of the top-$k$ operation on samples. In this paper, we show that such a centralized approach makes CEM vulnerable to local optima, thus impairing its sample efficiency. To tackle this issue, we propose Decentralized CEM (DecentCEM), a simple but effective improvement over classical CEM, by using an ensemble of CEM instances running independently from one another, each performing a local improvement of its own sampling distribution. We provide both theoretical and empirical analysis to demonstrate the effectiveness of this simple decentralized approach. We empirically show that, compared to the classical centralized approach using either a single or even a mixture of Gaussian distributions, our DecentCEM finds the global optimum much more consistently and thus improves the sample efficiency. Furthermore, we plug our DecentCEM into the planning problem of MBRL and evaluate our approach in several continuous control environments, with comparison to the state-of-the-art CEM-based MBRL approaches (PETS and POPLIN). Results show a sample efficiency improvement from simply replacing the classical CEM module with our DecentCEM module, while only sacrificing a reasonable amount of computational cost. Lastly, we conduct ablation studies for more in-depth analysis. | Accept | This paper proposes a parallelized version of the classic cross-entropy optimization method, using an ensemble of CEM instances running independently from one another, each performing a local improvement of its own sampling distribution. Both a theoretical and empirical analysis are provided to demonstrate the effectiveness of this simple decentralized approach. The reviewers find the paper to be overall well-presented, and appreciate the fact that the proposed method is simple but effective. Consequently, this work can be utilized in any CEM-based model-based reinforcement learning algorithm. | train | [
"eS_MzWWn119",
"cMbtDRsEWwm",
"Bku8WB9tVtW",
"3G99S-BKJSH",
"FJcUPgo3Vnz",
"_e9cG6Kzt8T",
"LVm4683vagw",
"KX4SgzeiDLv",
"fWvRCfHQf-",
"0_EFLUi81B7",
"cB7ZSV6DkWO",
"-yJYA9W8kdZ",
"x6sFyuDZ0O",
"nFZLLQakyhD",
"98obJFRocNl"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nThank you for reading our rebuttal and for raising the score.",
" Dear Reviewer, \n\nThank you for reading our rebuttal and for providing further feedback.\n\n`Re the reward function`: we agree that there are \"many non-deterministic reward function settings in real applications\" and that lea... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"Bku8WB9tVtW",
"3G99S-BKJSH",
"KX4SgzeiDLv",
"0_EFLUi81B7",
"_e9cG6Kzt8T",
"fWvRCfHQf-",
"nips_2022_IQIY2LASzYx",
"98obJFRocNl",
"nFZLLQakyhD",
"x6sFyuDZ0O",
"-yJYA9W8kdZ",
"nips_2022_IQIY2LASzYx",
"nips_2022_IQIY2LASzYx",
"nips_2022_IQIY2LASzYx",
"nips_2022_IQIY2LASzYx"
] |
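The record above contrasts classical (centralized) CEM with an ensemble of independently updated CEM instances. The Python sketch below implements plain Gaussian CEM on a toy multimodal 1-D objective and then runs several independent instances from different initializations, keeping the best result; it is a minimal illustration of the decentralized idea, not the paper's DecentCEM planner, and all function names, initializations, and hyperparameters are invented for this example.

```python
import numpy as np

def cem(objective, mu, sigma, iters=30, pop=64, elite=8, rng=None):
    """Plain Gaussian cross-entropy method: refit (mu, sigma) to the top-k samples."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=pop)
        top = samples[np.argsort(objective(samples))[-elite:]]  # keep the k best (maximization)
        mu, sigma = top.mean(), top.std() + 1e-6
    return mu

def decentralized_cem(objective, inits, **kwargs):
    """Run one independent CEM instance per initial mean and keep the best solution found."""
    candidates = [cem(objective, mu0, sigma=2.0, **kwargs) for mu0 in inits]
    return max(candidates, key=objective)

if __name__ == "__main__":
    # Toy multimodal objective: global maximum near x = 5, local maximum near x = -5.
    f = lambda x: np.exp(-(x - 5.0) ** 2) + 0.5 * np.exp(-(x + 5.0) ** 2)
    print("single CEM started at -6:", cem(f, mu=-6.0, sigma=2.0))
    # The single instance typically collapses onto the local optimum near -5,
    # while the ensemble of instances below recovers the global optimum near 5.
    print("ensemble of CEM instances:", decentralized_cem(f, inits=[-6.0, 0.0, 6.0]))
```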
nips_2022_e2gRdexoTZf | Structural Analysis of Branch-and-Cut and the Learnability of Gomory Mixed Integer Cuts | The incorporation of cutting planes within the branch-and-bound algorithm, known as branch-and-cut, forms the backbone of modern integer programming solvers. These solvers are the foremost method for solving discrete optimization problems and thus have a vast array of applications in machine learning, operations research, and many other fields. Choosing cutting planes effectively is a major research topic in the theory and practice of integer programming. We conduct a novel structural analysis of branch-and-cut that pins down how every step of the algorithm is affected by changes in the parameters defining the cutting planes added to the input integer program. Our main application of this analysis is to derive sample complexity guarantees for using machine learning to determine which cutting planes to apply during branch-and-cut. These guarantees apply to infinite families of cutting planes, such as the family of Gomory mixed integer cuts, which are responsible for the main breakthrough speedups of integer programming solvers. We exploit geometric and combinatorial structure of branch-and-cut in our analysis, which provides a key missing piece for the recent generalization theory of branch-and-cut. | Accept | There is general agreement that this is a strong paper that should be accepted. I agree with the discussion. Nothing much to add. | train | [
"L80lJvrWAUu",
"ZgZkwyak18",
"B1N6VZTFbt7",
"yFiPd7Ld1tk",
"amxjx3Uf-93",
"6nxYpdIK22",
"RW1hC4wA9mw",
"PICheQ5u-zl",
"l4gWGQCkVX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply and thorough response. I understand and agree with your point of providing some geometric intuition. Perhaps my concern is that I felt some sentences were vague in the proofs and paper compared to more theoretical & fundamental Math Prog. works (the example I provided, \"with less of gene... | [
-1,
-1,
-1,
-1,
-1,
8,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
4
] | [
"ZgZkwyak18",
"l4gWGQCkVX",
"PICheQ5u-zl",
"RW1hC4wA9mw",
"6nxYpdIK22",
"nips_2022_e2gRdexoTZf",
"nips_2022_e2gRdexoTZf",
"nips_2022_e2gRdexoTZf",
"nips_2022_e2gRdexoTZf"
] |
nips_2022_AiNrnIrDfD9 | Score-Based Generative Models Detect Manifolds | Score-based generative models (SGMs) need to approximate the scores $\nabla \log p_t$ of the intermediate distributions as well as the final distribution $p_T$ of the forward process. The theoretical underpinnings of the effects of these approximations are still lacking. We find precise conditions under which SGMs are able to produce samples from an underlying (low-dimensional) data manifold $\mathcal{M}$. This assures us that SGMs are able to generate the "right kind of samples". For example, taking $\mathcal{M}$ to be the subset of images of faces, we provide conditions under which the SGM robustly produces an image of a face, even though the relative frequencies of these images might not accurately represent the true data generating distribution.
Moreover, this analysis is a first step towards understanding the generalization properties of SGMs: Taking $\mathcal{M}$ to be the set of all training samples, our results provide a precise description of when the SGM memorizes its training data. | Accept | This paper presents a theoretical analysis of score-based generative models (SGMs) [diffusion models]. Specifically, the paper theoretically studies the effect of approximations used by SGM [1. approximating p_T by µ_{prior} and 2. approximating ∇log(p_t) by a neural network], which currently lacks a solid understanding. The paper presents conditions that assures SGMs can sample from the underlying data manifold and also analyzes conditions under which an SGM memorizes the training data (the latter relates to understanding the generalization properties of SGMs).
Besides technical discussions and clarifications during the rebuttal period, the authors overhauled the introduction section and also added some experiments with CIFAR-10 dataset to support their theory, both of which were requested by the reviewers to enhance the paper. Reviewers were satisfied with the responses and the improvements in the revision. In concordance with them, I believe the paper provides a solid theoretical contribution to our understanding of SGMs and recommend accept.
| train | [
"s8zOOPI1Nxz",
"M0cOAmPhRC",
"i5PuHIk5GYE",
"c30yu_7o7cU",
"S2NaK3hM2g",
"ewM84_6LQYB",
"wrjCuryB35",
"t0QdMB6Q8Kr",
"dysml-424I",
"MBflNhjmgWq",
"X55cqKj9vMl",
"TXfa1yMqJjg",
"3HKRBn1aZ7j",
"nbCUtZAA89T",
"vnLnwNy88T",
"uXTWY2bNj_Y",
"78tZDOvgs2"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are not sure we understood the question right, but will try to answer it. If the question persists, feel free to ask again.\n\nMost of the time we talk about either $t$ and $\\hat{t}$ (Point 1 in our last answer) or $s_\\theta$ and $t$ (Point 2 in our last answer.\n\nWe do not make any real claims about $\\hat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
2
] | [
"M0cOAmPhRC",
"i5PuHIk5GYE",
"c30yu_7o7cU",
"S2NaK3hM2g",
"ewM84_6LQYB",
"TXfa1yMqJjg",
"t0QdMB6Q8Kr",
"dysml-424I",
"3HKRBn1aZ7j",
"nbCUtZAA89T",
"78tZDOvgs2",
"uXTWY2bNj_Y",
"vnLnwNy88T",
"nips_2022_AiNrnIrDfD9",
"nips_2022_AiNrnIrDfD9",
"nips_2022_AiNrnIrDfD9",
"nips_2022_AiNrnIrD... |
nips_2022_FDmIo6o09H | Environment Diversification with Multi-head Neural Network for Invariant Learning | Neural networks are often trained with empirical risk minimization; however, it has been shown that a shift between training and testing distributions can cause unpredictable performance degradation. On this issue, a research direction, invariant learning, has been proposed to extract causal features insensitive to the distributional changes. This work proposes EDNIL, an invariant learning framework containing a multi-head neural network to absorb data biases. We show that this framework does not require prior knowledge about environments or strong assumptions about the pre-trained model. We also reveal that the proposed algorithm has theoretical connections to recent studies discussing properties of variant and invariant features. Finally, we demonstrate that models trained with EDNIL are empirically more robust against distributional shifts. | Accept | This work presents a novel environment-free invariant learning method that uses an auxiliary network to learn environment-specific features, from which environment inferences can be derived. The method is composed of two jointly learned models, that take care of the environment identification, the learning of the invariant representations, and the label predictions, produced by a multi-headed neural network. The proposed model is compared to different alternative models from the literature of the field, in different challenging benchmarks, and the results show that it closely achieves the best possible invariant learning performance.
After some initial discussions, all reviewers agreed that this work is ready for publication, as the work addresses an important problem, presents good empirical results, and will be of significant interest to the community.
| train | [
"VOYIOu8RnkC",
"EAuPSNhnxyz",
"St0-INQhtt",
"Zm8yQfoOPbQ",
"knyhjs3ZFfM",
"3Zn4mPObbtJ",
"TBjaBTW2tvH",
"Qm6xGJi6hNi",
"oYGECRffRe0",
"UyXba1k6sLX",
"uqOT8uDVDmT",
"fMfRvN-Fb6r",
"eMQr4tWa8q9",
"G6UGmfraL9",
"YsbZdE5IgTi",
"FnWpxXNQQG6",
"f4iHIsm-mGk",
"0GBvTERfUgQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks - this feels like a more well-rounded societal impact discussion now.",
" We thank Reviewer SNzy for reading our revised paper.\n\nWe would like to clarify that Equation 6 indeed aims at maximizing $H(Y | X_v) - H(Y | X_v, \\mathcal{E}_\\text{learn})$ instead of minimizing $H(Y | X_v, \\mathcal{E}_\\text... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
3
] | [
"TBjaBTW2tvH",
"St0-INQhtt",
"knyhjs3ZFfM",
"3Zn4mPObbtJ",
"oYGECRffRe0",
"uqOT8uDVDmT",
"Qm6xGJi6hNi",
"UyXba1k6sLX",
"FnWpxXNQQG6",
"f4iHIsm-mGk",
"YsbZdE5IgTi",
"f4iHIsm-mGk",
"FnWpxXNQQG6",
"0GBvTERfUgQ",
"nips_2022_FDmIo6o09H",
"nips_2022_FDmIo6o09H",
"nips_2022_FDmIo6o09H",
"... |
nips_2022_wFwSFojKu6D | UnfoldML: Cost-Aware and Uncertainty-Based Dynamic 2D Prediction for Multi-Stage Classification | Machine Learning (ML) research has focused on maximizing the accuracy of predictive tasks. ML models, however, are increasingly more complex, resource intensive, and costlier to deploy in resource-constrained environments. These issues are exacerbated for prediction tasks with sequential classification on progressively transitioned stages with a “happens-before” relation between them. We argue that it is possible to “unfold” a monolithic single multi-class classifier, typically trained for all stages using all data, into a series of single-stage classifiers. Each single-stage classifier can be cascaded gradually from cheaper to more expensive binary classifiers that are trained using only the necessary data modalities or features required for that stage. UnfoldML is a cost-aware and uncertainty-based dynamic 2D prediction pipeline for multi-stage classification that enables (1) navigation of the accuracy/cost tradeoff space, (2) reducing the spatio-temporal cost of inference by orders of magnitude, and (3) early prediction on proceeding stages. UnfoldML achieves orders of magnitude better cost in clinical settings, while detecting multi-stage disease development in real time. It achieves within 0.1% accuracy of the highest-performing multi-class baseline, while saving close to 20X on the spatio-temporal cost of inference and predicting disease onset earlier (by 3.5hrs). We also show that UnfoldML generalizes to image classification, where it can predict different levels of labels (from coarse to fine) given different levels of abstraction of an image, saving close to 5X cost with as little as 0.4% accuracy reduction. | Accept | The paper presents a method for transforming a multi-class multimodal classifier into a multi-stage classifier, increasing the efficiency at runtime and only using the necessary modalities for prediction. Most reviewers agree that this is an important problem for the community and has a high potential for impact. Reviewer f34b raised a question about uncertainty quantification, which the authors clarified successfully. The authors also introduced an appendix for policy selection and an explanation of model predictions at the reviewer's request -- as these are tangential to the paper, the effort is appreciated. The authors also answered the questions about the optimization problem asked by reviewer cok9 - the reviewer did not comment on the author response; however, the answers seem pertinent.
A remaining issue is the lack of theoretical analysis of the work, though the experiments do support the authors' claims about the cost reduction. Reviewers 5CL8 and qEBg issued positive comments w.r.t. the paper's strengths, specifically the real-world experiments, the reduction in cost, and the exploration of the cost-AUC trade-off space. | train | [
"b3GgtkEMZS",
"vU4nQ9NvXss",
"cKduRly2ad1",
"pHFpaQ3hQde",
"XYgPfUAvYN",
"CpeKLipz-FJ",
"pXFeNRNLg_N",
"2b6FrxOnKEe",
"mOCc34OKXKi",
"-wjA3d4i898",
"CJk4I5wdwuY8",
"gX3WaepRjC2",
"WO1NXgZYS",
"ZKOha1vBRK",
"PjsndNEHRfN",
"Ja67EEiALK0",
"eig0FCw9Pt1",
"FB-2lcVwYw1",
"R4tJI_BlfCa"
... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" p.s. if you're planning to increase the score, could you please do that? We don't see it reflected on the scoreboard on our side. Thanks.",
" We have carefully defined the evaluation metrics, i.e., the cost-AUC tradeoff quantified by the area under convex hull in Table 1 and early-hour prediction in Table 2, in... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
2
] | [
"vU4nQ9NvXss",
"cKduRly2ad1",
"PjsndNEHRfN",
"ZKOha1vBRK",
"-wjA3d4i898",
"mOCc34OKXKi",
"FB-2lcVwYw1",
"eig0FCw9Pt1",
"gX3WaepRjC2",
"Ja67EEiALK0",
"nips_2022_wFwSFojKu6D",
"R4tJI_BlfCa",
"FB-2lcVwYw1",
"eig0FCw9Pt1",
"Ja67EEiALK0",
"nips_2022_wFwSFojKu6D",
"nips_2022_wFwSFojKu6D",
... |
nips_2022_JJCnsgk4OIS | Learning Enhanced Representation for Tabular Data via Neighborhood Propagation | Prediction over tabular data is an essential and fundamental problem in many important downstream tasks. However, existing methods either take a data instance of the table independently as input or do not fully utilize the multi-row features and labels to directly change and enhance the target data representations. In this paper, we propose to 1) construct a hypergraph from relevant data instance retrieval to model the cross-row and cross-column patterns of those instances, and 2) perform message Propagation to Enhance the target data instance representation for Tabular prediction tasks. Specifically, our specially-designed message propagation step benefits from 1) the fusion of label and features during propagation, and 2) locality-aware multiplicative high-order interaction between features. Experiments on two important tabular prediction tasks validate the superiority of the proposed PET model against other baselines. Additionally, we demonstrate the effectiveness of the model components and the feature enhancement ability of PET via various ablation studies and visualizations. The code is available at https://github.com/KounianhuaDu/PET. | Accept | This paper proposes (PET), an approach to classifying rows in tabular data using retrieval methods and hypergraph neural networks to make predictions. The key ideas are
- use information retrieval techniques to find similar rows to each row that needs to be labeled.
- connect the similar rows in a hypergraph structure.
- learn a representation over the hypergraph structure with graph neural networks.
Experiments show that PET can significantly outperform multiple state-of-the-art methods on two tasks. Ablations also validate the design and each component of PET. During the review process, the authors added additional experiments addressing many of the reviewers' open concerns.
The reviewers agreed that the paper is very well written, presents a significantly useful method, and that while PET builds on pieces that have been developed separately, it combines them in an interesting way. Reviewers also felt that the work is likely to be of interest to the wider graph neural network community and has the potential to influence future work. | val | [
"nW8QhGAUVI9",
"OsPP67a-bW5",
"I41Mefy6TlX",
"xk6MRelWRpb",
"NLO-dUQj_Nf",
"_uRuYo0ApP",
"VrIZtycDdIL",
"OwRAeS1kbav",
"RzxC6pZUIri",
"eaH-rq2dfV",
"tQ5vzfQ1CRT",
"1LwM2wG76Se",
"7zuKuU8UFSD"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your valuable feedback, which we appreciate a lot.\nWe have added the descriptions of the hyperparameters and the complexity analysis in the revised paper. \n\nIn addition, for your information, we offer the exact runtime for each stage as below.\n*All the experiments were on one Tesla T4 instance. The... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"OsPP67a-bW5",
"OwRAeS1kbav",
"_uRuYo0ApP",
"7zuKuU8UFSD",
"7zuKuU8UFSD",
"7zuKuU8UFSD",
"1LwM2wG76Se",
"1LwM2wG76Se",
"tQ5vzfQ1CRT",
"tQ5vzfQ1CRT",
"nips_2022_JJCnsgk4OIS",
"nips_2022_JJCnsgk4OIS",
"nips_2022_JJCnsgk4OIS"
] |
nips_2022_7vDt4_ulNyB | FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation | The ability to estimate epistemic uncertainty is often crucial when deploying machine learning in the real world, but modern methods often produce overconfident, uncalibrated uncertainty predictions. A common approach to quantify epistemic uncertainty, usable across a wide class of prediction models, is to train a model ensemble. In a naive implementation, the ensemble approach has high computational cost and high memory demand. This challenges in particular modern deep learning, where even a single deep network is already demanding in terms of compute and memory, and has given rise to a number of attempts to emulate the model ensemble without actually instantiating separate ensemble members. We introduce FiLM-Ensemble, a deep, implicit ensemble method based on the concept of Feature-wise Linear Modulation (FiLM). That technique was originally developed for multi-task learning, with the aim of decoupling different tasks. We show that the idea can be extended to uncertainty quantification: by modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity, and consequently well-calibrated estimates of epistemic uncertainty, with low computational overhead in comparison. Empirically, FiLM-Ensemble outperforms other implicit ensemble methods, and it comes very close to the upper bound of an explicit ensemble of networks (sometimes even beating it), at a fraction of the memory cost. | Accept | This paper constructs implicit ensembles by adding multiple affine transformations to batch normalization layers. The proposed approach is elegant and complements existing implicit ensemble techniques. There are some concerns that the experimental evaluation does not include larger models or datasets - which is important because other implicit ensemble models (e.g. MIMO) exhibit sensitivity to model/dataset size. It would also be useful to discuss how this method could be used in conjunction with pre-trained models. Nevertheless, the idea is well-presented, simple, and effective, and therefore it will be of interest to the NeurIPS community. | train | [
"ZHkqmJkzEV",
"gVt6EtKG6K",
"T4wET3ju2r",
"yLgE0eGXBzi",
"v9r9eaj-K6c",
"xII0b0wt15",
"zwUO24TM7mU",
"AOnBPQGKfL9",
"stlVnXOGCZ",
"5OQDCfY1Pv",
"cgX7rQfTtNU",
"7Cb0U0JVhm9M",
"haZTG1l_w_hB",
"415TqY66v9E",
"i6KGJ2iKvgA",
"naCQbljJvHo",
"na9owMBC72d"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your constructive comments and please find the individual answers to your raised points below:\n\n\n**This is not true:** *Table 2 essentially claims the opposite of the existing literature, i.e., all previous methods perform worse than even a single network.*\n\nPlease see some results from existi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"gVt6EtKG6K",
"haZTG1l_w_hB",
"xII0b0wt15",
"zwUO24TM7mU",
"AOnBPQGKfL9",
"5OQDCfY1Pv",
"stlVnXOGCZ",
"cgX7rQfTtNU",
"na9owMBC72d",
"i6KGJ2iKvgA",
"naCQbljJvHo",
"nips_2022_7vDt4_ulNyB",
"415TqY66v9E",
"nips_2022_7vDt4_ulNyB",
"nips_2022_7vDt4_ulNyB",
"nips_2022_7vDt4_ulNyB",
"nips_2... |
nips_2022_Ry9iNlpUy1- | Maximizing Revenue under Market Shrinkage and Market Uncertainty | A shrinking market is a ubiquitous challenge faced by various industries. In this paper we formulate the first formal model of shrinking markets in multi-item settings, and study how mechanism design and machine learning can help preserve revenue in an uncertain, shrinking market. Via a sample-based learning mechanism, we prove the first guarantees on how much revenue can be preserved by truthful multi-item, multi-bidder auctions (for limited supply) when only a random unknown fraction of the population participates in the market. We first present a general reduction that converts any sufficiently rich auction class into a randomized auction robust to market shrinkage. Our main technique is a novel combinatorial construction called a winner diagram that concisely represents all possible executions of an auction on an uncertain set of bidders. Via a probabilistic analysis of winner diagrams, we derive a general possibility result: a sufficiently rich class of auctions always contains an auction that is robust to market shrinkage and market uncertainty. Our result has applications to important practically-constrained settings such as auctions with a limited number of winners. We then show how to efficiently learn an auction that is robust to market shrinkage by leveraging practically-efficient routines for solving the winner determination problem. | Accept | The reviews are all positive. The reviewers agree that the paper studies a fundamental problem with nice insights and interesting techniques, and the paper is well-written. | train | [
"2qdnVeia30",
"WlPVxCS_XYb",
"qS1T0tpynlX",
"5B-JA4p3HA3",
"YYP8ZpzfWAq",
"aR-ofhpGT-E",
"5wpGpE0N5b6",
"0OREJsG0wzd",
"CF_KMAO6_Ef",
"oJBLYB_R8Ck"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. It addressed my concerns.",
" Thanks for the detailed reply! All my questions are addressed.",
" Thank you for your review! We respond to your specific comments and questions below.\n\n> “The assumption that the valuations for all players are known is kind of restricted.”\n\nWe agree... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1,
3
] | [
"qS1T0tpynlX",
"5B-JA4p3HA3",
"oJBLYB_R8Ck",
"CF_KMAO6_Ef",
"0OREJsG0wzd",
"5wpGpE0N5b6",
"nips_2022_Ry9iNlpUy1-",
"nips_2022_Ry9iNlpUy1-",
"nips_2022_Ry9iNlpUy1-",
"nips_2022_Ry9iNlpUy1-"
] |
nips_2022_H3Gv7XEGzYV | FP8 Quantization: The Power of the Exponent | When quantizing neural networks for efficient inference, low-bit integers are the go-to format for efficiency. However, low-bit floating point numbers have an extra degree of freedom, assigning some bits to work on an exponential scale instead. This paper in-depth investigates this benefit of the floating point format for neural network inference. We detail the choices that can be made for the FP8 format, including the important choice of the number of bits for the mantissa and exponent, and show analytically in which settings these choices give better performance. Then we show how these findings translate to real networks, provide an efficient implementation for FP8 simulation, and a new algorithm that enables the learning of both the scale parameters and number of exponent bits in the FP8 format. Our chief conclusion is that when doing post-training quantization for a wide range of networks, the FP8 format is better than INT8 in terms of accuracy, and the choice of the number of exponent bits is driven by the severity of outliers in the network. We also conduct experiments with quantization-aware training where the difference in formats disappears as the network is trained to reduce the effect of outliers. | Accept | This paper had mixed reviews.
One very positive expert reviewer (8) pointed out this paper used a rigorous approach to showing that FP8 can outperform INT8 in inference, which I agree is very interesting and useful.
Another reviewer gave borderline acceptance (5), and I did not find any remaining concerns following the authors' response.
One reviewer gave borderline reject (4), but I did not find any remaining coherent major concerns following the authors' rebuttal. Also, this reviewer seemed less experienced, so I down-weighted this reviewer's score.
Another reviewer gave borderline reject (4) with the following remaining concerns:
(1) "I think this scheme does not show advantages over the existing work. "
But I don't think this is true, since as far as I know previous work did not show such an advantage of FP over INT.
(2) "The practical application of the algorithm in this paper will bring extra overhead. "
I agree the authors should give more details here (especially for flexible), but at least for the flex bias method, the extra overhead seems quite reasonable (as this is similar to the standard method used for INT), so I'm not sure what is the issue.
(3) "it is a very intuitive view that we should adopt a format with more exponent bits on the data with a large distribution range".
But the authors' response correctly said this is not true (as a uniform distribution would be better represented using INT, no matter what its range), and is not what they are saying.
Therefore, I think the reviewer had some errors in understanding here, and so I down-weighted this reviewer's score.
Also, the following paper seems relevant:
A Block Minifloat Representation for Training Deep Neural Networks, ICLR 2022 | train | [
"SrFh5w_zfdq",
"idd61eDkHgZ",
"hCOYyhZwWD",
"owJzgFLgylP",
"bcG5sy00FD3",
"yuIpxiuGKg",
"BYYi8l4jUc",
"8wdV_7474xO",
"hrZ0MuiseVM",
"szCUXverwpm",
"I9ND4osWmx4",
"uXKo8nXhdJY",
"nDSwYlXQ8yH",
"CQRxXEL9-T",
"GrfRXUN8wtz",
"60VDeKEkrLF"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. I assume the original paper (as well as the appendix) does not provide this part. Can you give out the comparison comprehensively (against FP32/FP8/INT8) in terms of the performance?\n\n2. I already identify the issue in a previous reply. As I identified, I consider the comparison between INT8 and FP8 is essen... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"idd61eDkHgZ",
"bcG5sy00FD3",
"owJzgFLgylP",
"hrZ0MuiseVM",
"I9ND4osWmx4",
"BYYi8l4jUc",
"8wdV_7474xO",
"60VDeKEkrLF",
"GrfRXUN8wtz",
"CQRxXEL9-T",
"nDSwYlXQ8yH",
"nips_2022_H3Gv7XEGzYV",
"nips_2022_H3Gv7XEGzYV",
"nips_2022_H3Gv7XEGzYV",
"nips_2022_H3Gv7XEGzYV",
"nips_2022_H3Gv7XEGzYV"... |
nips_2022_WG3vmsteqR_ | The Neural Covariance SDE: Shaped Infinite Depth-and-Width Networks at Initialization | The logit outputs of a feedforward neural network at initialization are conditionally Gaussian, given a random covariance matrix defined by the penultimate layer. In this work, we study the distribution of this random matrix. Recent work has shown that shaping the activation function as network depth grows large is necessary for this covariance matrix to be non-degenerate. However, the current infinite-width-style understanding of this shaping method is unsatisfactory for large depth: infinite-width analyses ignore the microscopic fluctuations from layer to layer, but these fluctuations accumulate over many layers.
To overcome this shortcoming, we study the random covariance matrix in the shaped infinite-depth-and-width limit. We identify the precise scaling of the activation function necessary to arrive at a non-trivial limit, and show that the random covariance matrix is governed by a stochastic differential equation (SDE) that we call the Neural Covariance SDE. Using simulations, we show that the SDE closely matches the distribution of the random covariance matrix of finite networks. Additionally, we recover an if-and-only-if condition for exploding and vanishing norms of large shaped networks based on the activation function. | Accept | There is a clear consensus to accept this manuscript. The results are impressive, and have a nice theoretical orientation that will allow the results to have continued impact as the field advances. There are some minor errors by the authors in the discussions, which are worth the authors being aware of before submitting their final version. In particular, they state that:
"The authors of [2,3] considered a bounded activation function in the infinite-width limit with large depth, in which case their variance
$V_\ell^{\alpha\alpha}$ always converged to a finite fixed point when depth is large [2, eq. 3]. Their chaotic and ordered phases are then defined by the behaviour of the correlation fixed point [2, eq. 5], which in turn determines the behaviour of the gradient [2, eq. 16].
In our case, shaping the activation leads to an unbounded function, and consequently the variance $V_\ell^{\alpha\alpha}$ is not always bounded - even if we take the same limit as [2] (but shaping depends on depth instead like the DKS/TAT papers), in which case we get an ODE with finite time explosion. Intuitively, if we drop the Brownian motion from eq. 18 and consider $dX_t = b X_t (X_t - 1)\,dt$, which is the logistic ODE, and has a finite time explosion if $X_0 > 1$ and $b > 0$. At the same time, due to shaping, our correlation $\rho_t^{\alpha\beta}$ will actually be able to avoid the fixed point (i.e., non-degenerate). So the gradient will be well behaved from the perspective of correlations (we are in the critical regime defined by [2]), but it may still explode due to variances exploding.
References
Novak, R., Xiao, L., Lee, J., Bahri, Y., Yang, G., Hron, J., Abolafia, D.A., Pennington, J. and Sohl-Dickstein, J., 2018. Bayesian deep convolutional networks with many channels are gaussian processes. arXiv preprint arXiv:1810.05148. https://arxiv.org/pdf/1810.05148.pdf
Schoenholz, S.S., Gilmer, J., Ganguli, S. and Sohl-Dickstein, J., 2016. Deep information propagation. arXiv preprint arXiv:1611.01232. https://arxiv.org/pdf/1611.01232.pdf
Yang, G. and Schoenholz, S., 2017. Mean field residual networks: On the edge of chaos. Advances in neural information processing systems, 30. https://arxiv.org/pdf/1712.08969.pdf
"
And while [2] states they consider bounded activations, it is not used or necessary and is not used in [3] or subsequent more recent work that discusses the edge of chaos further; see for instance: Activation function design for deep networks: linearity and effective initialisation by Murray et al. and On the impact of the activation function on deep neural networks training by Hayou et al.
| train | [
"j8R6aHdO50",
"HUP6aWf6YG7",
"kw-ajUJz-tF",
"taaWuCrx9P-u",
"aC5bl6n2iYr",
"98ERxtC23mt",
"ZSZCuXga0Wh",
"HAugpDhuy3k",
"r7-p3OFtl2s",
"LYf6VbA9aA2",
"jjf2GogXDy4",
"1n0tXnjkUBj",
"iWoP880D9Ex",
"00JhUqh791d"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for clarifying all my questions. I believe including these clarifying discussions into the main text will improve the paper for the NeurIPS readers. \n\nMy score remains the same and I am happy to champion the paper for acceptance. ",
" Thank you for responding to my comments and for the clarificatio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2
] | [
"LYf6VbA9aA2",
"HAugpDhuy3k",
"aC5bl6n2iYr",
"98ERxtC23mt",
"00JhUqh791d",
"iWoP880D9Ex",
"iWoP880D9Ex",
"1n0tXnjkUBj",
"jjf2GogXDy4",
"jjf2GogXDy4",
"nips_2022_WG3vmsteqR_",
"nips_2022_WG3vmsteqR_",
"nips_2022_WG3vmsteqR_",
"nips_2022_WG3vmsteqR_"
] |
nips_2022_QSNoFvdIL41 | Toward Understanding Privileged Features Distillation in Learning-to-Rank | In learning-to-rank problems, a \textit{privileged feature} is one that is available during model training, but not available at test time. Such features naturally arise in merchandised recommendation systems; for instance, "user clicked this item" as a feature is predictive of "user purchased this item" in the offline data, but is clearly not available during online serving. Another source of privileged features is those that are too expensive to compute online but feasible to be added offline. \textit{Privileged features distillation} (PFD) refers to a natural idea: train a "teacher" model using all features (including privileged ones) and then use it to train a "student" model that does not use the privileged features.
In this paper, we first study PFD empirically on three public ranking datasets and an industrial-scale ranking problem derived from Amazon's logs. We show that PFD outperforms several baselines (no-distillation, pretraining-finetuning, self-distillation, and generalized distillation) on all these datasets. Next, we analyze why and when PFD performs well via both empirical ablation studies and theoretical analysis for linear models. Both investigations uncover an interesting non-monotone behavior: as the predictive power of a privileged feature increases, the performance of the resulting student model initially increases but then decreases. We show the reason for the later decreasing performance is that a very predictive privileged teacher produces predictions with high variance, which lead to high variance student estimates and inferior testing performance. | Accept | There is a consensus that the insights on the distillation of privileged information presented in the paper are interesting (e.g., possibility of distillation even if the privileged information is independent from x, non-monotonicity of the impact of privileged information vs correlation with the target feature), which is why the paper is recommended for acceptance.
Note that even after the rebuttal, several of the main weaknesses remain,
- it is not clear why the paper focuses on "learning to rank" (apart from the original motivation of the authors), since the claims seem to hold as well in classification or regression
- the value of the theoretical analysis is limited, because it seems the authors considered the easiest setup where the phenomena illustrated in the experiment could be proved. In particular, they study linear least-square regression, which doesn't match any of their experiments.
- no novelty in terms of methods
In the end, the paper is borderline on the side of acceptance because the insights are significant enough. | train | [
"CwhfzrK69sM",
"2Y3S0w_rW3T",
"vNcu0jDbt4",
"F1Qve00iHD2",
"JOj4jVjo04h",
"QWwFt6rvy4U",
"bub-PBGjEKx",
"rmTOCB4ZhGf",
"D3mV5ctQTsS",
"iVbbAfQFKO-"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your feedback! It makes sense to me. \nI keep my current score. ",
" Thank you for the detailed comments and suggestions! For the concerns raised in the review:\n\n**[Limited novelty]** While we found out that the notion of Privileged Features Distillation did exist previously, it was demonstrated as... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"F1Qve00iHD2",
"bub-PBGjEKx",
"iVbbAfQFKO-",
"D3mV5ctQTsS",
"rmTOCB4ZhGf",
"bub-PBGjEKx",
"nips_2022_QSNoFvdIL41",
"nips_2022_QSNoFvdIL41",
"nips_2022_QSNoFvdIL41",
"nips_2022_QSNoFvdIL41"
] |
nips_2022_7YwwfU3DqKI | Outlier-Robust Sparse Estimation via Non-Convex Optimization | We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints, with a focus on the fundamental tasks of robust sparse mean estimation and robust sparse PCA. We develop novel and simple optimization formulations for these problems such that any approximate stationary point of the associated optimization problem yields a near-optimal solution for the underlying robust estimation task. As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks. The obtained algorithms are simple, practical, and succeed under broader distributional assumptions compared to prior work. | Accept | This paper proposed a non-convex formulation for outlier robust sparse estimation which is an extension of existing work. The key point is to construct new objective functions to capture sparsity and at the same time satisfy the nice properties identified for nonconvex optimization formulations in the past for high dimensional parameter estimation in the non-sparse setting. The paper is well written and the math is solid. | train | [
"ygI4MYumGaN",
"hyatsbfqN2r",
"Sh7MLMpT58o",
"C6d6LRI_4mh",
"u5x4jGYWQU",
"5kL_yUHqrOd"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking the time to review our paper.\n\nWe have identified the following references on applications of robust (sparse) PCA in computer vision (and the references therein). We will add a discussion on these related works.\n\n[1] FD Torre and MJ Black. Robust Principal Component Analysis for Computer ... | [
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
4,
4,
2
] | [
"5kL_yUHqrOd",
"u5x4jGYWQU",
"C6d6LRI_4mh",
"nips_2022_7YwwfU3DqKI",
"nips_2022_7YwwfU3DqKI",
"nips_2022_7YwwfU3DqKI"
] |
nips_2022_jF7u0APnGOv | Neural Abstractions | We present a novel method for the safety verification of dynamical models, which uses neural networks to represent abstractions of the dynamics. Neural networks have extensively been used before as approximators; in this work, we make a step further and use them for the first time as abstractions. We synthesise a neural network so as to approximate a nonlinear model
whilst ensuring an arbitrarily tight, formally certified bound on the approximation error,
using counterexample-guided inductive synthesis. We show that this produces a neural ODE with non-deterministic disturbances
that constitutes a formal abstraction of the concrete system under analysis. This guarantees a fundamental property:
if the abstract system is safe, i.e., free from any initialised trajectory that reaches an undesirable state, then the concrete system is also safe.
By using neural ODEs with ReLU activation functions as abstractions, we cast the safety verification problem for nonlinear dynamical models into that of hybrid automata with affine dynamics, which we verify using SpaceEx. We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models. We additionally demonstrate that it is effective on models that do not exhibit local Lipschitz continuity, which are out of reach to the existing technologies. | Accept | This was a borderline paper. The overall idea of using neural networks as abstractions is sound and novel. However, the evaluation is not thorough, there is no theoretical analysis, and concerns about scalability remain. After calibrating across my pile, I am recommending acceptance. Please make sure to incorporate the reviewers' feedback in the final version of the paper. | train | [
"UpPI5J1XoA2",
"S-nsEF6KiDS",
"AeKl_1kx_1n",
"nk_Xt8R0QBmc",
"9EZUcFwHxo9",
"Fg2Q7ImSaHL",
"rZTlmgbqSsK",
"N1GqpR6OilD-",
"BIiDT7e51AG",
"lBpLXPE_cy",
"LHiTpJzrep",
"PzwHIYGu2bs",
"iIPoFPYt9l",
"eUaYeAB4yei",
"b4BsGNmHGL",
"Dx9vxoMViYo"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the explanation and additional experiments. I will update my score.",
" Q1. Our choice of using SpaceEx justified by the fact that SpaceEx is the state of the art in verification of hybrid automata with linear dynamics; conversely, Flow* is the state of the art in verification of nonlinear systems. N... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"AeKl_1kx_1n",
"nk_Xt8R0QBmc",
"9EZUcFwHxo9",
"LHiTpJzrep",
"iIPoFPYt9l",
"BIiDT7e51AG",
"N1GqpR6OilD-",
"LHiTpJzrep",
"PzwHIYGu2bs",
"nips_2022_jF7u0APnGOv",
"Dx9vxoMViYo",
"b4BsGNmHGL",
"eUaYeAB4yei",
"nips_2022_jF7u0APnGOv",
"nips_2022_jF7u0APnGOv",
"nips_2022_jF7u0APnGOv"
] |
nips_2022_thgItcQrJ4y | Analyzing Sharpness along GD Trajectory: Progressive Sharpening and Edge of Stability | Recent findings demonstrate that modern neural networks trained by full-batch gradient descent typically enter a regime called Edge of Stability (EOS). In this regime, the sharpness, i.e., the maximum Hessian eigenvalue, first increases to the value 2/(step size) (the progressive sharpening phase) and then oscillates around this value (the EOS phase).
This paper aims to analyze the GD dynamics and the sharpness along the optimization trajectory.
Our analysis naturally divides the GD trajectory into four phases depending on the change in the sharpness value. We empirically identify the norm of output layer weight as an interesting indicator of the sharpness dynamics. Based on this empirical observation, we attempt to theoretically and empirically explain the dynamics of various key quantities that lead to the change of the sharpness in each phase of EOS. Moreover, based on certain assumptions, we provide a theoretical proof of the sharpness behavior in the EOS regime in two-layer fully-connected linear neural networks. We also discuss some other empirical findings and the limitation of our theoretical results. | Accept | While there is a rather large gap between the reviewers' scores, all the reviewers agreed that the paper is novel and the contributions are significant, especially given the little knowledge about the EoS phenomenon. While one of the reviewers raised important concerns about the appropriateness of the assumptions, I do believe that in a field without a rich enough literature, initial theoretical results with potentially strong assumptions are still valuable. Hence I am recommending an acceptance for the paper.
Please implement all the changes that have been requested by the reviewers. On the other hand please avoid using gender pronouns like "he" when addressing the reviewers. | val | [
"XL1GsDIpG5",
"rD5rYcMZfh",
"_Hki7lsQ5cO",
"38kD_KFhfek",
"-KGO_Pdct-c",
"64_hy1reWIG",
"o-mBhu4vku",
"Hvx2TXKMypd",
"TFo2c4L0ww",
"wKY3wKPS_ON",
"UvNgamhpZ1-",
"VRPrY_nOpJP"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your careful reading of our responses, and thanks a lot for your further thoughtful comments. We would like to further clarify our contribution and weakness. \n\n>\"In regards to analyzing the last layer weight $\\|A\\|$ as a proxy for sharpness, I believe that this could be emphasized further in th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"rD5rYcMZfh",
"o-mBhu4vku",
"-KGO_Pdct-c",
"TFo2c4L0ww",
"Hvx2TXKMypd",
"UvNgamhpZ1-",
"UvNgamhpZ1-",
"VRPrY_nOpJP",
"wKY3wKPS_ON",
"nips_2022_thgItcQrJ4y",
"nips_2022_thgItcQrJ4y",
"nips_2022_thgItcQrJ4y"
] |
nips_2022_seYcx6CqPe | Template based Graph Neural Network with Optimal Transport Distances | Current Graph Neural Networks (GNN) architectures generally rely on two important components: node features embedding through message passing, and aggregation with a specialized form of pooling. The structural (or topological) information is implicitly taken into account in these two steps. We propose in this work a novel point of view, which places distances to some learnable graph templates at the core of the graph representation. This distance embedding is constructed thanks to an optimal transport distance: the Fused Gromov-Wasserstein (FGW) distance, which encodes simultaneously feature and structure dissimilarities by solving a soft graph-matching problem. We postulate that the vector of FGW distances to a set of template graphs has a strong discriminative power, which is then fed to a non-linear classifier for final predictions. Distance embedding can be seen as a new layer, and can leverage on existing message passing techniques to promote sensible feature representations. Interestingly enough, in our work the optimal set of template graphs is also learnt in an end-to-end fashion by differentiating through this layer. After describing the corresponding learning procedure, we empirically validate our claim on several synthetic and real life graph classification datasets, where our method is competitive or surpasses kernel and GNN state-of-the-art approaches. We complete our experiments by an ablation study and a sensitivity analysis to parameters. | Accept | In this paper, the authors introduced a new GNN layer to represent a graph by the distances to template graphs. They used the OT (FGW, Fused Gromov-Wasserstein) distance as the metric. They showed good performance on several benchmark datasets. Overall, the paper is very well written and motivated. The description of the methods and the presentation of the experiments are convincing. We also thank the authors for the careful and detailed rebuttal. This is a pleasant paper to read.
| train | [
"xLkmgOkY-b",
"4tuliXApot",
"2PODECK-bju",
"F3x2z6AE89C",
"kmi7BOiFkxB",
"8hOyzlJKwWW",
"2WvqhYQFI-t",
"qWRtY1lxX00",
"Lc0rp8Fba9",
"8RldkDCWI6p",
"h18TAS1Vfn4"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed discussion. Overall, my concerns have been addressed adequately. The runtime increase of TFGW seems acceptable compared to its performance gain, and I agree that the framework can benefit greatly from GPU support in the future. The results on GAT also look encouraging, and I think the i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"4tuliXApot",
"2WvqhYQFI-t",
"F3x2z6AE89C",
"kmi7BOiFkxB",
"h18TAS1Vfn4",
"8RldkDCWI6p",
"qWRtY1lxX00",
"Lc0rp8Fba9",
"nips_2022_seYcx6CqPe",
"nips_2022_seYcx6CqPe",
"nips_2022_seYcx6CqPe"
] |
nips_2022_xqYGGRt7kM | Single-pass Streaming Lower Bounds for Multi-armed Bandits Exploration with Instance-sensitive Sample Complexity | Motivated by applications to process massive datasets, we study streaming algorithms for pure exploration in Stochastic Multi-Armed Bandits (MABs). This problem was first formulated by Assadi and Wang [STOC 2020] as follows: A collection of $n$ arms with unknown rewards are arriving one by one in a stream, and the algorithm is only allowed to store a limited number of arms at any point. The goal is to find the arm with the largest reward while minimizing the number of arm pulls (sample complexity) and the maximum number of stored arms (space complexity). Assuming $\Delta_{[2]}$ is known, Assadi and Wang designed an algorithm that uses a memory of just one arm and still achieves the sample complexity of $O(n/\Delta_{[2]}^2)$ which is worst-case optimal even for non-streaming algorithms; here $\Delta_{[i]}$ is the gap between the rewards of the best and the $i$-th best arms.
In this paper, we extended this line of work to stochastic MABs in the streaming model with the instance-sensitive sample complexity, i.e. the sample complexity of $O(\sum_{i=2}^{n} \frac{1}{\Delta_{[i]}^2}\log\log{(\frac{1}{\Delta_{[i]}})})$, similar in spirit to Karnin et.al. [ICML 2013] and Jamieson et.al. [COLT 2014] in the classical setting. We devise strong negative results under this setting: our results show that any streaming algorithm under a single pass has to use either asymptotically higher sample complexity than the instance-sensitive bound, or a memory of $\Omega(n)$ arms, even if the parameter $\Delta_{[2]}$ is known. In fact, the lower bound holds under much stronger assumptions, including the random order streams or the knowledge of all gap parameters $\{\Delta_{[i]}\}_{i=2}^n$. We complement our lower bounds by proposing a new algorithm that uses a memory of a single arm and achieves the instance-optimal sample complexity when all the strong assumptions hold simultaneously.
Our results are developed based on a novel arm-trapping lemma. This generic complexity result shows that any algorithm to trap the index of the best arm among $o(n)$ indices (but not necessarily to find it) has to use $\Theta(n/\Delta_{[2]}^2)$ sample complexity. This result is not restricted to the streaming setting, and to the best of our knowledge, this is the first result that captures the sample-space trade-off for `trapping' arms in multi-armed bandits, and it can be of independent interest. | Accept | This paper studies the pure exploration in the streaming MAB model. The main message it delivers is that any single pass streaming algorithm must either have a large sample complexity or store a linear number of arms. The reviewers agreed that the results are interesting and that the analysis in the paper is novel and technically strong. Some questions were raised regarding the requirement of knowing Delta_[2] in the algorithms and the high dependence on 1/Delta_[2] in the upper bounds. These questions have been largely addressed by the authors' responses. There are still some questions regarding the motivation of the streaming bandit model; it will be nice if the authors can give some real-world applications for this model in the next version. | train | [
"_Y840Z2oD3",
"Xux7lorfeo",
"dZhH43pV6tK",
"swJuwidLGK",
"aEcufYieLEJ",
"2W66AtcS_P6",
"5FsJ25_XXe",
"bbU40FhAELX",
"oMImIEKLS5w",
"1v1xBekjuD4",
"HvrWfODvEKH",
"OfUhR6x1geV",
"A8cC3DHzp9u",
"FvEsDBnXUWD"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the additional comments and suggestions. Regarding the last question, we don't think the algorithm works with an overestimated $\\hat{\\Delta}$. This setting is called the $\\epsilon$-best arm problem in the literature (where $\\epsilon$ is the suggested $\\hat{\\Delta}$ parameter -- only a change of n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"Xux7lorfeo",
"dZhH43pV6tK",
"aEcufYieLEJ",
"2W66AtcS_P6",
"bbU40FhAELX",
"5FsJ25_XXe",
"HvrWfODvEKH",
"FvEsDBnXUWD",
"A8cC3DHzp9u",
"OfUhR6x1geV",
"nips_2022_xqYGGRt7kM",
"nips_2022_xqYGGRt7kM",
"nips_2022_xqYGGRt7kM",
"nips_2022_xqYGGRt7kM"
] |
nips_2022_G4GpqX4bKAH | Embrace the Gap: VAEs Perform Independent Mechanism Analysis | Variational autoencoders (VAEs) are a popular framework for modeling complex data distributions; they can be efficiently trained via variational inference by maximizing the evidence lower bound (ELBO), at the expense of a gap to the exact (log-)marginal likelihood. While VAEs are commonly used for representation learning, it is unclear why ELBO maximization would yield useful representations, since unregularized maximum likelihood estimation cannot invert the data-generating process. Yet, VAEs often succeed at this task. We seek to elucidate this apparent paradox by studying nonlinear VAEs in the limit of near-deterministic decoders. We first prove that, in this regime, the optimal encoder approximately inverts the decoder---a commonly used but unproven conjecture---which we refer to as self-consistency. Leveraging self-consistency, we show that the ELBO converges to a regularized log-likelihood. This allows VAEs to perform what has recently been termed independent mechanism analysis (IMA): it adds an inductive bias towards decoders with column-orthogonal Jacobians, which helps recovering the true latent factors. The gap between ELBO and log-likelihood is therefore welcome, since it bears unanticipated benefits for nonlinear representation learning. In experiments on synthetic and image data, we show that VAEs uncover the true latent factors when the data generating process satisfies the IMA assumption. | Accept | ## MetaReview
**Summary**: The submission examines why VAEs learn useful representations despite the non-identifiability of the latent variables (in a nonlinear data generating process). To do so, they analyze the amortization gap from the perspective of independent mechanism analysis (IMA). The main contribution is to show that VAEs with near-deterministic decoders do not converge to the marginal likelihood but to an IMA-regularized likelihood. Previous work has shown that this regularization enforces a column-orthogonality condition on the Jacobian of the decoder, which aids in recovering the true latent factors when the data generating process satisfies the IMA assumption. The analysis focuses on near-deterministic decoders, for which the paper shows that the optimal encoder approximately inverts the decoder. This result formalizes an existing hypothesis and is necessary for deriving the convergence of the ELBO to the IMA-regularized likelihood. Synthetic experiments empirically support theoretical results.
**Strengths**: Reviewers [tFG1] and [UUBT] appreciate the analysis of the amortization gap from the perspective of independent mechanism analysis, finding it both new and interesting. Reviewer [8jNM] appreciates the comprehensive theoretical analysis and detailed explanation, as well as the clear and transparent contextualization relative to related work (although reviewer [UUBT] finds that related work is only sparingly discussed in main text)
**Weaknesses**: While the reviewers were overall appreciative of this submission, they also expressed concerns.
Reviewer [tFG1] notes that γ is always fixed in experiments, whereas it is typically optimized in VAEs. A comparison to a case with trainable γ would therefore be desirable. The reviewer also notes that it is unclear how big the gap is between ELBO and ELBO*.
Reviewer [UUBT] expresses concern that the significance of the results is somewhat overstated. In particular, it has been known that a linear-Gaussian VAE recovers the principal components and that the decoder therefore has orthogonal Jacobian columns (Dai et al 2018, Dai and Wipf 2019), and that nonlinear Gaussian VAEs are also regularized towards decoders with orthogonal Jacobian columns. While no results for the precise setting of this work have been presented, this may reflect the restrictiveness of the setting, which assumed that the data distribution has full support, along with an isotropic encoder (note: the authors clarified in their response that the encoder is in fact not isotropic).
Reviewer [8jNM] finds there is insufficient discussion of practical significance. In this context, it would be helpful to understand what degree of IMA violation would be acceptable to still recover the true factors. The reviewer further finds that the paper is dense and the appendix is long. The paper might be clearer if it focused more on one part of the story.
**Author Reviewer Discussion**: The authors provided detailed responses to all reviewers, including additional analysis of the role of the γ hyperparameter in response to [tFG1], additional discussion of the work by (Dai and Wipf 2019) in response to [UUBT], and provided various clarifications to reviewer [8jNM]. Reviewer [UUBT] raised their score 5->6 and reviewer [tFG1] raised their score 6->7.
**Reviewer AC Discussion**: Reviewers [8jNM] and [tFG1] affirmed that they are happy for this paper to appear.
**Overall Recommendation**: The AC is satisfied with the level of examination and discussion that has taken place and will follow the recommendation of the reviewers. This is a relatively clear accept. | train | [
"ozkOhsSMzk4",
"j9CNCm4OOOn",
"jDZnPr863j",
"gW_MrNJXQL6",
"3ZJW-EfpwFv",
"-2uoA-oiWD",
"u6Ao3GWs6JK",
"OTeKiCvu7ws",
"e5HjrkhGHiH",
"HEiCjwhLTxx",
"sbBPnYiYafp",
"BZ5akjyHZKp",
"M6MeIhMvzpY",
"fAPgDJ5n48r",
"pBw5dfhFS76",
"RAn8HdYVsbb",
"17BhzYOaQ-6"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank **Reviewer 8jNM** for the discussion and the additional comments regarding the empirical covariances.\n\n## Empirical covariance\n\n**Reviewer 8jNM:** _Regarding the empirical covariance: yes, I meant the covariance calculated from the observed data. I agree that the variational posterior is factorized b... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"gW_MrNJXQL6",
"3ZJW-EfpwFv",
"u6Ao3GWs6JK",
"OTeKiCvu7ws",
"M6MeIhMvzpY",
"u6Ao3GWs6JK",
"pBw5dfhFS76",
"e5HjrkhGHiH",
"HEiCjwhLTxx",
"17BhzYOaQ-6",
"BZ5akjyHZKp",
"M6MeIhMvzpY",
"RAn8HdYVsbb",
"nips_2022_G4GpqX4bKAH",
"nips_2022_G4GpqX4bKAH",
"nips_2022_G4GpqX4bKAH",
"nips_2022_G4G... |
nips_2022_r0bjBULkyz | Active Bayesian Causal Inference | Causal discovery and causal reasoning are classically treated as separate and consecutive tasks: one first infers the causal graph, and then uses it to estimate causal effects of interventions. However, such a two-stage approach is uneconomical, especially in terms of actively collected interventional data, since the causal query of interest may not require a fully-specified causal model. From a Bayesian perspective, it is also unnatural, since a causal query (e.g., the causal graph or some causal effect) can be viewed as a latent quantity subject to posterior inference—quantities that are not of direct interest ought to be marginalized out in this process, thus contributing to our overall uncertainty. In this work, we propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning, i.e., for jointly inferring a posterior over causal models and queries of interest. In our approach to ABCI, we focus on the class of causally-sufficient nonlinear additive Gaussian noise models, which we model using Gaussian processes. To capture the space of causal graphs, we use a continuous latent graph representation, allowing our approach to scale to practically relevant problem sizes. We sequentially design experiments that are maximally informative about our target causal query, collect the corresponding interventional data, update our beliefs, and repeat. Through simulations, we demonstrate that our approach is more data-efficient than existing methods that only focus on learning the full causal graph. This allows us to accurately learn downstream causal queries from fewer samples, while providing well-calibrated uncertainty estimates of the quantities of interest. | Accept | This paper proposes a Bayesian active learning framework for integrated causal discovery and reasoning. In the framework, one sequentially designs experiments that are maximally informative about a target causal query, collect the corresponding interventional data, update the beliefs, and repeat. Through simulations, the authors have demonstrated that the approach is more data-efficient than existing methods that only focus on learning the full causal graph. This allows one to accurately learn downstream causal queries from fewer samples. All the reviews are positive. | train | [
"9yJaZ26VWYN",
"tqfqFV6lVFC",
"S4-E1rT7MKuv",
"UU7RDnK9PNR",
"K8SoUqtyaYG",
"aPXv_MzYXrZ",
"sPFYgn8yoNg",
"HSyKrbf-xD",
"V4dO1cwo-0v",
"hyIqEyGdBvB",
"8E7Zazn9Ol",
"-mc9FEf0D4",
"bDxoCE7wFE"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for engaging with our work and for increasing their score.\n\nWe answer their additional questions below.\n\n> If $\\alpha$ is not annealed to $\\infty$, then according to Eqn (6) in the DiBS paper, it seems that the distribution will have support over both cyclic and acyclic graphs. If some... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
9,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4
] | [
"tqfqFV6lVFC",
"aPXv_MzYXrZ",
"bDxoCE7wFE",
"bDxoCE7wFE",
"-mc9FEf0D4",
"8E7Zazn9Ol",
"8E7Zazn9Ol",
"hyIqEyGdBvB",
"nips_2022_r0bjBULkyz",
"nips_2022_r0bjBULkyz",
"nips_2022_r0bjBULkyz",
"nips_2022_r0bjBULkyz",
"nips_2022_r0bjBULkyz"
] |
nips_2022_KJemAi9fymT | Learning Dense Object Descriptors from Multiple Views for Low-shot Category Generalization | A hallmark of the deep learning era for computer vision is the successful use of large-scale labeled datasets to train feature representations for tasks ranging from object recognition and semantic segmentation to optical flow estimation and novel view synthesis of 3D scenes. In this work, we aim to learn dense discriminative object representations for low-shot category recognition without requiring any category labels. To this end, we propose Deep Object Patch Encodings (DOPE), which can be trained from multiple views of object instances without any category or semantic object part labels. To train DOPE, we assume access to sparse depths, foreground masks and known cameras to obtain pixel-level correspondences between views of an object, and use this to formulate a self-supervised learning task to learn discriminative object patches. We find that DOPE can directly be used for low-shot classification of novel categories using local-part matching, and is competitive with and outperforms supervised and self-supervised learning baselines. | Accept | **Summary**: This paper aims to learn representative features of objects (without category labels) from multi-view images. A Deep Object Patch Encodings (DOPE) framework is proposed, which leverages sparse depths, foreground masks, and camera poses (from COLMAP) to obtain correspondence across different views and performs self-supervised learning from these views. DOPE works well on the low-shot classification of novel categories.
**Strengths**: The overall idea is interesting, novel, and effective. It explicitly leverages 3D geometry in multi-view images for self-supervised learning. The performance gain is notable. The experimental design is solid. The paper is generally well written.
**Weaknesses**: DOPE relies on ground-truth foreground masks and the estimated camera poses by COLMAP. The compared baselines are not sufficient (mostly before 2021). Some technical parts and claims/definitions are not clear.
**Recommendation**: The paper receives positive ratings and reviews in general. After rebuttal, most of the reviewers’ concerns are addressed (e.g., additional experiments) and the paper clearly has strengths. The AC thus suggests acceptance. The AC strongly suggests that the authors incorporate their rebuttal (e.g., responses to Reviewer 3MZa; change the title) into their camera-ready version. | train | [
"bWHEzZ8c5B",
"awL5I58K8I",
"SkU-tVHQZ1",
"IyNvqlsTEfX",
"Lt_B8qrWmn",
"H5Fh0R97vYK",
"qlvDBVwQ5TW",
"0TJNsCMBASM",
"CE9irHrPyzw",
"chhoxt3BXK4",
"HclmLANujoO",
"Qp-kL74983",
"a5_WOrT0hUE",
"oiHIMaQSZaiI",
"yMr7A5xqLB",
"UzC39YHEEZA",
"ViR4b7OEOAE",
"oxyeXLzr7Ki",
"MwSv3frBshb",
... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" The rebuttal solves my concerns well. I keep my previous rating to borderline accept this paper.",
" I think this sufficiently addresses my concerns. The paper is considerably improved post rebuttal. I will update my rating.",
" Following the reviewer's suggestion, we will include the multi object results in ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"qlvDBVwQ5TW",
"SkU-tVHQZ1",
"IyNvqlsTEfX",
"oiHIMaQSZaiI",
"a5_WOrT0hUE",
"Qp-kL74983",
"oxyeXLzr7Ki",
"MwSv3frBshb",
"MwSv3frBshb",
"MwSv3frBshb",
"On18SlBTImv",
"aGOAChi-fjd",
"aGOAChi-fjd",
"aGOAChi-fjd",
"aGOAChi-fjd",
"aGOAChi-fjd",
"nips_2022_KJemAi9fymT",
"nips_2022_KJemAi9... |
nips_2022_ZaDlbaahOqG | VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives | Many past works aim to improve visual reasoning in models by supervising feature importance (estimated by model explanation techniques) with human annotations such as highlights of important image regions. However, recent work has shown that performance gains from feature importance (FI) supervision for Visual Question Answering (VQA) tasks persist even with random supervision, suggesting that these methods do not meaningfully align model FI with human FI. In this paper, we show that model FI supervision can meaningfully improve VQA model accuracy as well as performance on several Right-for-the-Right-Reason (RRR) metrics by optimizing for four key model objectives: (1) accurate predictions given limited but sufficient information (Sufficiency); (2) max-entropy predictions given no important information (Uncertainty); (3) invariance of predictions to changes in unimportant features (Invariance); and (4) alignment between model FI explanations and human FI explanations (Plausibility). Our best performing method, Visual Feature Importance Supervision (VISFIS), outperforms strong baselines on benchmark VQA datasets in terms of both in-distribution and out-of-distribution accuracy. While past work suggests that the mechanism for improved accuracy is through improved explanation plausibility, we show that this relationship depends crucially on explanation faithfulness (whether explanations truly represent the model’s internal reasoning). Predictions are more accurate when explanations are plausible and faithful, and not when they are plausible but not faithful. Lastly, we show that, surprisingly, RRR metrics are not predictive of out-of-distribution model accuracy when controlling for a model’s in-distribution accuracy, which calls into question the value of these metrics for evaluating model reasoning. | Accept | 2 out of 3 reviewers are positive about the paper and recommend accept after the author feedback.
Importantly the paper contributes and the reviewers value
- the idea and design to make VQA models right for the right reasons
- the experimental validation, including comparison to the recent work Singla et al. [47].
- new RRR-metrics
- analysis of the metrics, including exposing the limitations of their use for estimating OOD performance
The authors have addressed several of the concerns of the reviewers, and I recommend acceptance under the expectation that the authors will revise the paper as promised in the author response, including
- adding a plot for a random shuffle model to address reviewer JZyx's concern.
- adding clarifications provided in the author response and discussion with the reviewers to the paper/supplement wherever possible | train | [
"EG4Nl8JUxLQ",
"AvXF6hWkSD",
"eW161XXYMNW",
"5dC80FQx2rm",
"7nSnzXMLaKc",
"JKe5QOkxRU7",
"hnbDS-D-6oM",
"kgl9JJH4PGO",
"_WO4wjEubrG",
"XWhdeWsNuq",
"g7m--Y-mpvT",
"SC95Vi-SOS",
"blIIvEPqlu",
"qhnTIF5tq-e"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > What I mean is that say you have a VisFIS model with all your objectives, the explanation are classified into 3 categories. You then train a random shuffle model and collect the results corresponding to the question id in the 3 categories according to the VisFIS model and draw the graph. \n\nGotcha, thanks for ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5
] | [
"AvXF6hWkSD",
"hnbDS-D-6oM",
"5dC80FQx2rm",
"XWhdeWsNuq",
"_WO4wjEubrG",
"nips_2022_ZaDlbaahOqG",
"qhnTIF5tq-e",
"blIIvEPqlu",
"SC95Vi-SOS",
"SC95Vi-SOS",
"nips_2022_ZaDlbaahOqG",
"nips_2022_ZaDlbaahOqG",
"nips_2022_ZaDlbaahOqG",
"nips_2022_ZaDlbaahOqG"
] |
nips_2022_Jz-kcwIJqB | Data augmentation for efficient learning from parametric experts | We present a simple, yet powerful data-augmentation technique to enable data-efficient learning from parametric experts for reinforcement and imitation learning. We focus on what we call the policy cloning setting, in which we use online or offline queries of an expert or expert policy to inform the behavior of a student policy. This setting arises naturally in a number of problems, for instance as variants of behavior cloning, or as a component of other algorithms such as DAGGER, policy distillation or KL-regularized RL. Our approach, augmented policy cloning (APC), uses synthetic states to induce feedback-sensitivity in a region around sampled trajectories, thus dramatically reducing the environment interactions required for successful cloning of the expert. We achieve highly data-efficient transfer of behavior from an expert to a student policy for high-degrees-of-freedom control problems. We demonstrate the benefit of our method in the context of several existing and widely used algorithms that include policy cloning as a constituent part. Moreover, we highlight the benefits of our approach in two practically relevant settings (a) expert compression, i.e. transfer to a student with fewer parameters; and (b) transfer from privileged experts, i.e. where the expert has a different observation space than the student, usually including access to privileged information. | Accept | This paper studies an interesting problem, and overall the reviewers agreed the exposition and validation are sufficient. We encourage the authors to consider the issues raised by the reviewers and further improve the work in the final version.
| val | [
"x7oJsRxscyd",
"4h0FVXYMP3E",
"kfUbk44l-KU",
"yyExSGx2fIk",
"X6WaEZaow-R",
"DYKBJg_7qt",
"rmVQJxP179w",
"oVdQ5fnwCT4",
"51qcyuDvnB",
"TsjWxdlY8Fh",
"kJnqi9zJRF",
"FnW8xB9n5fR",
"qmsubTaF3mK",
"jV11WGGDy1",
"hexaJy2NLFW",
"uH-ABUcfOfW"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response and clarifications. The author clarifications about the dagger/BC naming in the paper as well as the peg insertion experiments on shorter trajectories in the supplemental section serve to strengthen the paper, but not sufficiently to increase my score since I'm already in fa... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"FnW8xB9n5fR",
"kfUbk44l-KU",
"X6WaEZaow-R",
"DYKBJg_7qt",
"TsjWxdlY8Fh",
"oVdQ5fnwCT4",
"nips_2022_Jz-kcwIJqB",
"51qcyuDvnB",
"jV11WGGDy1",
"uH-ABUcfOfW",
"hexaJy2NLFW",
"qmsubTaF3mK",
"nips_2022_Jz-kcwIJqB",
"nips_2022_Jz-kcwIJqB",
"nips_2022_Jz-kcwIJqB",
"nips_2022_Jz-kcwIJqB"
] |
nips_2022_GaLgQ5_CZwB | A theory of weight distribution-constrained learning | A central question in computational neuroscience is how structure determines function in neural networks. Recent large-scale connectomic studies have started to provide a wealth of structural information such as the distribution of excitatory/inhibitory cell and synapse types as well as the distribution of synaptic weights in the brains of different species. The emerging high-quality large structural datasets raise the question of what general functional principles can be gleaned from them. Motivated by this question, we developed a statistical mechanical theory of learning in neural networks that incorporates structural information as constraints. We derived an analytical solution for the memory capacity of the perceptron, a basic feedforward model of supervised learning, with constraint on the distribution of its weights. Interestingly, the theory predicts that the reduction in capacity due to the constrained weight-distribution is related to the Wasserstein distance between the cumulative distribution function of the constrained weights and that of the standard normal distribution. To test the theoretical predictions, we use optimal transport theory and information geometry to develop an SGD-based algorithm to find weights that simultaneously learn the input-output task and satisfy the distribution constraint. We show that training in our algorithm can be interpreted as geodesic flows in the Wasserstein space of probability distributions. Given a parameterized family of weight distributions, our theory predicts the shape of the distribution with optimal parameters. We apply our theory to map out the experimental parameter landscape for the estimated distribution of synaptic weights in mammalian cortex and show that our theory’s prediction for optimal distribution is close to the experimentally measured value. We further developed a statistical mechanical theory for teacher-student perceptron rule learning and ask for the best way for the student to incorporate prior knowledge of the rule (i.e., the teacher). Our theory shows that it is beneficial for the learner to adopt different prior weight distributions during learning, and shows that distribution-constrained learning outperforms unconstrained and sign-constrained learning. Our theory and algorithm provide novel strategies for incorporating prior knowledge about weights into learning, and reveal a powerful connection between structure and function in neural networks. | Accept | Reviewers appreciate the novel weight-distribution-constrained algorithm, although reservations about its potential impact in comp neuro or ML remain. | train | [
"ufVpLZrBLkO",
"7HMXo7XbrrN",
"1Adiu6aenbf",
"pZXS35WcOvg",
"o3-dZc2P8OY",
"breoU0CLEPy",
"FOPbvnUBLe",
"NIMcG49NXhe",
"cclhVMi7Kwc",
"HboKtamI16M",
"uxvLNswDCiL",
"yAnMWhVI57j",
"nXNlXwQ3ZQF",
"TIqM0HYfQeV",
"vQazy60F-WI",
"7EO7OD_cSl",
"_RcMveE7PL",
"YGuNj87GtFJ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I agree this is best left for future work.\n\nAfter more thought, I believe the results in this work are ultimately of interest to the theoretical neuroscience community, so I have increased my score by 1 point.",
" Thanks for the interesting suggestions. \n1. Robustness-in our setting robustness to noise is de... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
2
] | [
"7HMXo7XbrrN",
"NIMcG49NXhe",
"breoU0CLEPy",
"o3-dZc2P8OY",
"yAnMWhVI57j",
"uxvLNswDCiL",
"HboKtamI16M",
"cclhVMi7Kwc",
"YGuNj87GtFJ",
"_RcMveE7PL",
"7EO7OD_cSl",
"vQazy60F-WI",
"TIqM0HYfQeV",
"nips_2022_GaLgQ5_CZwB",
"nips_2022_GaLgQ5_CZwB",
"nips_2022_GaLgQ5_CZwB",
"nips_2022_GaLgQ... |
nips_2022_VM7u8ecLrZV | Average Sensitivity of Euclidean k-Clustering | Given a set of $n$ points in $\mathbb{R}^d$, the goal of Euclidean $(k,\ell)$-clustering is to find $k$ centers that minimize the sum of the $\ell$-th powers of the Euclidean distance of each point to the closest center. In practical situations, the clustering result must be stable against points missing in the input data so that we can make trustworthy and consistent decisions. To address this issue, we consider the average sensitivity of Euclidean $(k,\ell)$-clustering, which measures the stability of the output in total variation distance against deleting a random point from the input data. We first show that a popular algorithm \textsc{$k$-means++} and its variant called \textsc{$D^\ell$-sampling} have low average sensitivity. Next, we show that any approximation algorithm for Euclidean $(k,\ell)$-clustering can be transformed to an algorithm with low average sensitivity while almost preserving the approximation guarantee. As byproducts of our results, we provide several algorithms for consistent $(k,\ell)$-clustering and dynamic $(k,\ell)$-clustering in the random-order model, where the input points are randomly permuted and given in an online manner. The goal of the consistent setting is to maintain a good solution while minimizing the number of changes to the solution during the process, and that of the dynamic setting is to maintain a good solution while minimizing the (amortized) update time. | Accept | I agree with the reviewers that the topic is essential, the notion of average sensitivity on Euclidean $k$-clustering is interesting, and that the paper addresses an important problem that is very relevant for the inevitably noisy data in the big data era.
As the reviewers complained, the "dynamic data" section is a bit misleading compared to previous work, and I suggest removing it.
Please also add the discussion from the rebuttal to the paper, or at least to the supp. material. | train | [
"7ek8yIHTOAA",
"zdzqc5GYTa8",
"OmkM6DNJSz",
"r52p8AaIVr",
"uwPmJQyoMu0",
"4ogFeDKJjrr",
"UUzIB_FWCNb"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the explanation.\nIn my opinion, the discussion you provided on dynamic clustering prior works and other techniques for dimension reduction should be added to the paper.\nFollowing the author responses, I will maintain my original score.",
" Thank you very much for the careful reading and for prov... | [
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"zdzqc5GYTa8",
"UUzIB_FWCNb",
"4ogFeDKJjrr",
"uwPmJQyoMu0",
"nips_2022_VM7u8ecLrZV",
"nips_2022_VM7u8ecLrZV",
"nips_2022_VM7u8ecLrZV"
] |
nips_2022_UavQ9HYye6n | Visual correspondence-based explanations improve AI robustness and human-AI team accuracy | Explaining artificial intelligence (AI) predictions is increasingly important and even imperative in many high-stake applications where humans are the ultimate decision-makers. In this work, we propose two novel architectures of explainable image classifiers that first explain, and then predict (as opposed to post-hoc explanation methods). Our models first rank the training-set images by their distance with the query in an image-level deep feature space. And then, we re-rank the top-50 shortlisted candidates using patch-wise similarity of 5 highest-similarity pairs of patches between the query and every candidate. On ImageNet, our models improve (by 1-4 points) the out-of-distribution accuracy on several datasets including Adversarial Patch and ImageNet-R while performing marginally worse (by 1-2 points) on ImageNet to the baselines (ResNet-50 pre-trained ImageNet). A consistent trend is observed on CUB. Via a large-scale, human study (~60 users per method per dataset) on ImageNet and CUB, we find our proposed correspondence-based explanations led to human-alone image classification accuracy and human-AI team accuracy that are consistently better than those of k-NN. Our correspondence-based explanations help users better correctly reject AI's wrong decisions than all other tested methods.
Interestingly, for the first time, we show that it is possible to achieve complementary human-AI team accuracy (i.e. that is higher than either AI-alone or human-alone), in both image classification tasks. | Accept | This paper proposes a classifier based on different refining strategies of the closest nearest neighbors using intermediate features, providing explanations in the form of example images. The authors demonstrate that it is possible to achieve complementary human-AI team accuracy in image classification. In general, the paper is clearly written and addresses an interesting problem. The reviewers point out limitations in terms of novelty, experimental design, and clarity. After the rebuttal and extensive discussion, most of the reviewers agree that the concerns are properly addressed, although some remain. Overall, the paper is interesting and provides some insights, but the authors are expected to make a thorough revision by considering the reviewers' comments. | train | [
"JVJPzq1HJqC",
"Zuh57uVgb6",
"7XwuRmKwwAo",
"wTSGVbJk8Ss",
"RoeBH1dtZP",
"s_uaXc9pcq-",
"LOQTXLSYTsI",
"Y3dtEZzO2gu",
"l6DaS0Ul_o",
"CZV9RomqVrb",
"1WGWbhx-uZn",
"9xeeTF0Sgf",
"WB9JEIsWR-b",
"p_CK5ON4zd-",
"2VUV0YU-Pb2",
"DzoD8O7wQpe"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response! I acknowledge that the paper has provided novel contributions in terms of performing evaluations of patch-based XAI classifiers on large-scale datasets such as ImageNet and conducting human studies on these classifiers, but I am hesitant to increase my rating because the technical cont... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"9xeeTF0Sgf",
"7XwuRmKwwAo",
"1WGWbhx-uZn",
"DzoD8O7wQpe",
"DzoD8O7wQpe",
"DzoD8O7wQpe",
"DzoD8O7wQpe",
"DzoD8O7wQpe",
"2VUV0YU-Pb2",
"p_CK5ON4zd-",
"p_CK5ON4zd-",
"WB9JEIsWR-b",
"nips_2022_UavQ9HYye6n",
"nips_2022_UavQ9HYye6n",
"nips_2022_UavQ9HYye6n",
"nips_2022_UavQ9HYye6n"
] |
nips_2022_OZKBReUF-wX | Meta-Reward-Net: Implicitly Differentiable Reward Learning for Preference-based Reinforcement Learning | Setting up a well-designed reward function has been challenging for many reinforcement learning applications. Preference-based reinforcement learning (PbRL) provides a new framework that avoids reward engineering by leveraging human preferences (i.e., preferring apples over oranges) as the reward signal. Therefore, improving the efficacy of data usage for preference data becomes critical. In this work, we propose Meta-Reward-Net (MRN), a data-efficient PbRL framework that incorporates bi-level optimization for both reward and policy learning. The key idea of MRN is to adopt the performance of the Q-function as the learning target. Based on this, MRN learns the Q-function and the policy in the inner level while updating the reward function adaptively according to the performance of the Q-function on the preference data in the outer level. Our experiments on robotic simulated manipulation tasks and locomotion tasks demonstrate that MRN outperforms prior methods in the case of few preference labels and significantly improves data efficiency, achieving state-of-the-art in preference-based RL. Ablation studies further demonstrate that MRN learns a more accurate Q-function compared to prior work and shows obvious advantages when only a small amount of human feedback is available. The source code and videos of this project are released at https://sites.google.com/view/meta-reward-net. | Accept | The reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field. Overall, the reviewers had a generally positive impression of this paper. One reviewer argued that the paper introduces a novel technique for learning a reward function from human preferences. The reviewer acknowledged that the method outperforms SOTA baselines, that it is well-motivated, and that it introduces important insights. As the main weakness, this reviewer pointed out that the paper analyses of the method's limitations show that it works best w.r.t. baselines when there is little available feedback. They encourage the authors to include a more explicit discussion on limitations such as this. Another reviewer had a less favorable view of this work and argued that the key ideas in this paper have been proposed and widely used in computer vision. They had two main technical questions, to which the authors responded. The reviewer, however, remained concerned about whether, e.g., using Q-values could introduce bias and if the method lacked sufficient theoretical analyses. Two other reviewers had more positive views of this paper. One of them argued that the paper introduced a novel algorithm and that its experiments showed that it outperforms baselines both in sample efficiency and final policy quality. The authors responded to the technical questions made by this reviewer, and the reviewer said they were satisfied with the authors' rebuttal. Finally, a fourth reviewer also acknowledged that this method is novel and that it was thoroughly evaluated in simulation and compared to reasonable baselines, where it was shown to be faster/more efficient. This reviewer initially thought there was insufficient motivation for using this approach over alternative meta-learning techniques—in which case the paper's impact could be moderate. 
However, the authors addressed the reviewer's concerns/questions in detail and the reviewer seemed satisfied with the rebuttal, changing their score to Accept. Overall, it seems that most reviewers were positively impressed with the quality of this work. They look forward to an updated version of the paper that addresses the suggestions mentioned in their reviews and during the discussion phase. | train | [
"vPYq9jt1f2K",
"6O9eoZ6Ml16",
"fnN1kqMW-Z",
"mvqgRxn1m1F",
"QnSQFIcuTDk",
"1smp1ZiGjzy",
"oI75HWx_G_",
"ccmQICJw6ZU",
"INqiFZJgCer",
"zJsrrlQNNGI",
"eGW441VUOex",
"cqBJtGNf6Wx",
"f-HsVVXlMWP",
"dOZR08OG9P",
"bK-d6PRjoKV",
"J43ao_e-UQ3",
"hzcKJ73BPwN",
"0sbqXrT8bm1",
"N3nNEaH8AHJ"... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would first like to highlight the value of all reviews and the utility of discussing. In this context I would like to intervene this discussion.\n\n Specifically, I cannot understand what does reviewer 5HHg wants to see in support of Q1 and Q2 and why.\n\nThe authors have included a thorough evaluation and a co... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"6O9eoZ6Ml16",
"fnN1kqMW-Z",
"QnSQFIcuTDk",
"f-HsVVXlMWP",
"1smp1ZiGjzy",
"zJsrrlQNNGI",
"0sbqXrT8bm1",
"hzcKJ73BPwN",
"J43ao_e-UQ3",
"hzcKJ73BPwN",
"N3nNEaH8AHJ",
"N3nNEaH8AHJ",
"N3nNEaH8AHJ",
"0sbqXrT8bm1",
"J43ao_e-UQ3",
"nips_2022_OZKBReUF-wX",
"nips_2022_OZKBReUF-wX",
"nips_20... |
nips_2022_-jnE7sxuMm | Flowification: Everything is a normalizing flow | We develop a method that can be used to calculate the likelihood contribution of linear and convolutional layers allowing multi-layer perceptrons and convolutional networks to be converted into normalizing flows. We term this process flowification.
In some cases flowification requires the addition of uncorrelated noise to the model, but in the simplest case it requires no additional parameters. The technique we develop can be applied to a broad range of architectures, allowing them to be used for a wide range of tasks. Our models also allow existing density estimation techniques to be combined with high-performance feature extractors. In contrast to standard density estimation techniques that require specific architectures and specialized knowledge, our approach can leverage design knowledge from different domains and is a step closer to the realization of general-purpose architectures. We investigate the efficacy of linear and convolutional layers for the task of density estimation on standard datasets.
 | Accept | This paper proposes a straightforward framework for adapting a wide class of DNNs so that they can be used to build normalizing flows. The derived framework is neat and is mostly supported experimentally. While some reviewers pointed out that the links with SURVAEs could be stated more clearly in the initial submission and noted some weaknesses in the experimental part, the authors provided a convincing rebuttal. It seems that the contribution of this work is significant enough to lead to acceptance. | train | [
"aU7Ryh7CUsj",
"78HiTvEsFEf",
"M0U_-0-AjIk",
"gBSmboiEFj4",
"PCqHtIjGuM9",
"mT-wL18dJNx",
"caqiODPnvMi",
"FHP_7ihdYB",
"LfJ1HcMf8Oc",
"rGBxkYMPHo",
"BkQQCLzt74",
"HYwufW7zAGc",
"i4krDjlxj_9",
"7l45Cqv3zsv"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comments!\n\n*\"Will similar modifications improve the performance on the tabular datasets or is the lower performance still something of a mystery?\"*\n\nThe modification that helped in the case of image datasets has already been applied to the architectures producing these results. We do not... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"78HiTvEsFEf",
"caqiODPnvMi",
"PCqHtIjGuM9",
"LfJ1HcMf8Oc",
"mT-wL18dJNx",
"7l45Cqv3zsv",
"i4krDjlxj_9",
"HYwufW7zAGc",
"BkQQCLzt74",
"nips_2022_-jnE7sxuMm",
"nips_2022_-jnE7sxuMm",
"nips_2022_-jnE7sxuMm",
"nips_2022_-jnE7sxuMm",
"nips_2022_-jnE7sxuMm"
] |
nips_2022_K_LtkDGdonK | Asynchronous Actor-Critic for Multi-Agent Reinforcement Learning | Synchronizing decisions across multiple agents in realistic settings is problematic since it requires agents to wait for other agents to terminate and communicate about termination reliably. Ideally, agents should learn and execute asynchronously instead. Such asynchronous methods also allow temporally extended actions that can take different amounts of time based on the situation and action executed. Unfortunately, current policy gradient methods are not applicable in asynchronous settings, as they assume that agents synchronously reason about action selection at every time step. To allow asynchronous learning and decision-making, we formulate a set of asynchronous multi-agent actor-critic methods that allow agents to directly optimize asynchronous policies in three standard training paradigms: decentralized learning, centralized learning, and centralized training for decentralized execution. Empirical results (in simulation and hardware) in a variety of realistic domains demonstrate the superiority of our approaches in large multi-agent problems and validate the effectiveness of our algorithms for learning high-quality and asynchronous solutions. | Accept | All reviewers appreciated the clarity of writing, thorough evaluation and effectiveness of the proposed method. Some reviewers had concerns about the novelty of proposed ideas, however I feel these have been satisfactorily addressed in the authors' rebuttal. Thus, I recommend acceptance. | train | [
"-JBjBcpV811",
"MG5LDwaUpK4",
"6qgT8odLb1h",
"D21TfEU-qw6",
"Gk8EpLQoPE0",
"6BVHtHPxyVC",
"sA7cXEE1YN",
"O9MzlxEcjGZ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing the questions and comments. I still have some reservations about the novelty and contribution of this work and would stick with my score.",
" Thank you for addressing the questions and comments. I am convinced with the author's arguments and would stick with my score.",
" We thank the... | [
-1,
-1,
-1,
-1,
-1,
4,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"Gk8EpLQoPE0",
"D21TfEU-qw6",
"O9MzlxEcjGZ",
"sA7cXEE1YN",
"6BVHtHPxyVC",
"nips_2022_K_LtkDGdonK",
"nips_2022_K_LtkDGdonK",
"nips_2022_K_LtkDGdonK"
] |
nips_2022_OYqCR-f-dg | SQ Lower Bounds for Learning Single Neurons with Massart Noise | We study the problem of PAC learning a single neuron in the presence of Massart noise. Specifically, for a known activation function $f: \mathbb{R}\to \mathbb{R}$, the learner is given access to labeled examples $(\mathbf{x}, y) \in \mathbb{R}^d \times \mathbb{R}$, where the marginal distribution of $\mathbf{x}$ is arbitrary and the corresponding label $y$ is a Massart corruption of $f(\langle \mathbf{w}, \mathbf{x} \rangle)$. The goal of the learner is to output a hypothesis $h: \mathbb{R}^d \to \mathbb{R}$ with small squared loss. For a range of activation functions, including ReLUs, we establish super-polynomial Statistical Query (SQ) lower bounds for this learning problem. In more detail, we prove that no efficient SQ algorithm can approximate the optimal error within any constant factor. Our main technical contribution is a novel SQ-hard construction for learning $\{ \pm 1\}$-weight Massart halfspaces on the Boolean hypercube that is interesting on its own right.
 | Accept | The paper shows an SQ lower bound for learning a single neuron with a known activation function (a class of functions that includes the rectifier) under Massart noise. The paper makes a sufficiently novel contribution; however, I also agree with the various presentation issues raised by the two more critical reviews. The authors must address all of these adequately in the revision. | train | [
"EgKC9KI-N7",
"bJTtAxszSna",
"Ybk2Ezm5SES",
"E3tuea6QfKXG",
"OOQXRQhf78t",
"DyA47BclcIZ",
"EZkunRxnrps",
"9-KYh6nh7FT5",
"dxo8VyqcyHw",
"s7TWkbuB3lN",
"z11ULl_wIhh",
"tYRWreTQ7O",
"xrmxNPePL_5"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We agree with the reviewer that, in principle, the significance of a research work can be considered a matter of personal taste.\nIn this instance, however, we would like to make the following general points:\n\n1) Historically speaking, the study of PAC learning (in the context of both boolean-valued and real-va... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
4
] | [
"Ybk2Ezm5SES",
"EZkunRxnrps",
"E3tuea6QfKXG",
"OOQXRQhf78t",
"tYRWreTQ7O",
"nips_2022_OYqCR-f-dg",
"xrmxNPePL_5",
"z11ULl_wIhh",
"s7TWkbuB3lN",
"nips_2022_OYqCR-f-dg",
"nips_2022_OYqCR-f-dg",
"nips_2022_OYqCR-f-dg",
"nips_2022_OYqCR-f-dg"
] |
nips_2022_-5rFUTO2NWe | Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation | Current work in object-centric learning has been motivated by developing learning algorithms that infer independent and symmetric entities from the perceptual input. This often requires the use of iterative refinement procedures that break symmetries among equally plausible explanations for the data, but most prior works differentiate through the unrolled refinement process, which can make optimization exceptionally challenging. In this work, we observe that such iterative refinement methods can be made differentiable by means of the implicit function theorem, and develop an implicit differentiation approach that improves the stability and tractability of training such models by decoupling the forward and backward passes. This connection enables us to apply recent advances in optimizing implicit layers to not only improve the stability and optimization of the slot attention module in SLATE, a state-of-the-art method for learning entity representations, but do so with constant space and time complexity in backpropagation and only one additional line of code. | Accept | The paper proposes to treat object-centric models with iterative refinement procedures as fixed point operations and optimize them using implicit differentiation.
Overall, the reviewers find that the contribution of the paper is somewhat novel, although similar ideas have been presented in prior work in different contexts (supervised settings). Only one reviewer was more negative before the rebuttal, eventually increasing their score after discussion with the authors.
I, therefore, recommend acceptance and encourage the authors to address the comments raised by the reviewers in the final version. | train | [
"MKIbg0k1i0J",
"mm1k7QWugPm",
"eJgV_RCkBAE",
"1XBZwwb9W2",
"h34oR-5kwSi",
"blKiw3_hjMO",
"pxJDG_ERyuA",
"wivLzeCGU_u",
"Sg638IULHas",
"llYLXktOaDLf",
"n8W4Fy63gFi",
"K9pZCQ6yntM",
"7LJ8J2Tihxh",
"Q2EHev2OeTl",
"8hpdV3PYUq1",
"Et2zNog6ocX",
"RytjmUjWqmz",
"X6KOrtmnkvi"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank Reviewer STEd for their constructive feedback, which has raised many helpful points of investigation for future iterations of work for improving object-centric models with implicit differentiation. ",
" You did a good job defending the paper, so I’m raising my score to 5. Overall please consider the po... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"mm1k7QWugPm",
"eJgV_RCkBAE",
"h34oR-5kwSi",
"blKiw3_hjMO",
"llYLXktOaDLf",
"pxJDG_ERyuA",
"wivLzeCGU_u",
"K9pZCQ6yntM",
"7LJ8J2Tihxh",
"Q2EHev2OeTl",
"RytjmUjWqmz",
"X6KOrtmnkvi",
"Et2zNog6ocX",
"8hpdV3PYUq1",
"nips_2022_-5rFUTO2NWe",
"nips_2022_-5rFUTO2NWe",
"nips_2022_-5rFUTO2NWe"... |
nips_2022_VeXBywV9FV | Operator Splitting Value Iteration | We introduce new planning and reinforcement learning algorithms for discounted MDPs that utilize an approximate model of the environment to accelerate the convergence of the value function. Inspired by the splitting approach in numerical linear algebra, we introduce \emph{Operator Splitting Value Iteration} (OS-VI) for both Policy Evaluation and Control problems. OS-VI achieves a much faster convergence rate when the model is accurate enough. We also introduce a sample-based version of the algorithm called OS-Dyna. Unlike the traditional Dyna architecture, OS-Dyna still converges to the correct value function in the presence of model approximation error. | Accept | All reviewers appreciated the strong theoretical contribution of the paper. The idea was evaluated to be very innovative and offers very good theoretical guarantees. The paper is well written and, while the contribution is mainly theoretical, it also offers a basic evaluation of the presented idea. I follow the reviewers' recommendation. | train | [
"9h-wVJyMYdY",
"uYtT2AempEAy",
"wyip4jn_x1",
"8K6Wr6pregt",
"bFHqwzHNrGi",
"mBAZf6L1qy7x",
"7i08qjoclE5",
"uh5u2grKCJP",
"urHM5sfGMa4",
"49f99oT0N-S",
"hjziohKr2-6"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their thorough response as well as their efforts to revise the paper to make it more clear. I have increased my score by 1.",
" Thank you for your responses and the clarification on the handling of continuous state spaces. The update to the appendix are appreciated as well.",
" **Q: ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"wyip4jn_x1",
"8K6Wr6pregt",
"mBAZf6L1qy7x",
"bFHqwzHNrGi",
"hjziohKr2-6",
"49f99oT0N-S",
"urHM5sfGMa4",
"nips_2022_VeXBywV9FV",
"nips_2022_VeXBywV9FV",
"nips_2022_VeXBywV9FV",
"nips_2022_VeXBywV9FV"
] |
nips_2022_K8JngctQ2Tu | Discovering and Overcoming Limitations of Noise-engineered Data-free Knowledge Distillation | Distillation in neural networks using only the samples randomly drawn from a Gaussian distribution is possibly the most straightforward solution one can think of for the complex problem of knowledge transfer from one network (teacher) to the other (student). If successfully done, it can eliminate the requirement of teacher's training data for knowledge distillation and avoid often arising privacy concerns in sensitive applications such as healthcare. There have been some recent attempts at Gaussian noise-based data-free knowledge distillation, however, none of them offer a consistent or reliable solution. We identify the shift in the distribution of hidden layer activation as the key limiting factor, which occurs when Gaussian noise is fed to the teacher network instead of the accustomed training data. We propose a simple solution to mitigate this shift and show that for vision tasks, such as classification, it is possible to achieve a performance close to the teacher by just using the samples randomly drawn from a Gaussian distribution. We validate our approach on CIFAR10, CIFAR100, SVHN, and Food101 datasets. We further show that in situations of sparsely available original data for distillation, the proposed Gaussian noise-based knowledge distillation method can outperform the distillation using the available data with a large margin. Our work lays the foundation for further research in the direction of noise-engineered knowledge distillation using random samples. | Accept | Reviewers agree that the results presented in this paper are quite solid. While some reviewers would still like to see a wider range of experiments on different architectures, the consensus seems to be that this paper provides a solid proof of concept for a solution to a very difficult problem. | train | [
"ARctDLZHTRD",
"WcFjLDrPsX4R",
"D8_whdUeszD",
"iI4u-hgcYWr",
"y-6mPp3KM8N",
"8xubBhO8Wlg",
"lhwn-WUQuX",
"arB8A_LNlru",
"3t4gephp913"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The response has addressed my concerns and I think this paper offers insightful inputs to the field. I reside on positive side and keep my score.",
" We thank all the reviewers again for their comments. We hope our responses have been able to address the concerns and looking forward to the feedback on our respo... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iI4u-hgcYWr",
"8xubBhO8Wlg",
"3t4gephp913",
"arB8A_LNlru",
"lhwn-WUQuX",
"nips_2022_K8JngctQ2Tu",
"nips_2022_K8JngctQ2Tu",
"nips_2022_K8JngctQ2Tu",
"nips_2022_K8JngctQ2Tu"
] |
nips_2022_tIZtD2kZ6zx | Drawing out of Distribution with Neuro-Symbolic Generative Models | Learning general-purpose representations from perceptual inputs is a hallmark of human intelligence. For example, people can write out numbers or characters, or even draw doodles, by characterizing these tasks as different instantiations of the same generic underlying process---compositional arrangements of different forms of pen strokes. Crucially, learning to do one task, say writing, implies reasonable competence at another, say drawing, on account of this shared process. We present Drawing out of Distribution (DooD), a neuro-symbolic generative model of stroke-based drawing that can learn such general-purpose representations. In contrast to prior work, DooD operates directly on images, requires no supervision or expensive test-time inference, and performs unsupervised amortized inference with a symbolic stroke model that better enables both interpretability and generalization. We evaluate DooD on its ability to generalize across both data and tasks. We first perform zero-shot transfer from one dataset (e.g. MNIST) to another (e.g. Quickdraw), across five different datasets, and show that DooD clearly outperforms different baselines. An analysis of the learnt representations further highlights the benefits of adopting a symbolic stroke model. We then adopt a subset of the Omniglot challenge tasks, and evaluate its ability to generate new exemplars (both unconditionally and conditionally), and perform one-shot classification, showing that DooD matches the state of the art. Taken together, we demonstrate that DooD does indeed capture general-purpose representations across both data and task, and takes a further step towards building general and robust concept-learning systems.
| Accept | The paper presents DooD, an unsupervised approach for stroke-based generation of line drawings and handwritten characters, which the authors show outperforms other methods in generalization and interpretability. The reviewers agree that this is an innovative and useful contribution, and is well-expressed in the paper. Some concerns are raised related to the insights that one can draw from this domain to other forms of image generation; however, these objections are not fatal, in my opinion. Note that in a private communication with me, the only reviewer not recommending acceptance wrote: "I would keep my score, though I won't be disappointed if the paper will be accepted."
I recommend acceptance of the paper. | train | [
"H5o8h5b6og_",
"DEbJZ4IcfGHp",
"0hNG_bouvtZ",
"6Ld6pkYOWAC",
"vbMAhAKy-auD",
"noAergWsfyH",
"UFFTJ1g683F",
"n0L5P5m7yA5",
"0cUZRvNO3pg",
"FqjLlrCDEAi",
"SXAUOYM6g8s",
"zw4zkAingeU",
"y75xMWnYkWz",
"AXzrvokXPXQ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" To further elaborate on our previous comments, our baselines are motivated from the perspective of solving the full Omniglot challenge which is to learn everything about the domain that a human can. This *requires* a model to generate as this is a key characteristic of what humans do with writing/drawing. Given a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
2,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
3
] | [
"6Ld6pkYOWAC",
"0cUZRvNO3pg",
"SXAUOYM6g8s",
"zw4zkAingeU",
"y75xMWnYkWz",
"y75xMWnYkWz",
"y75xMWnYkWz",
"AXzrvokXPXQ",
"AXzrvokXPXQ",
"nips_2022_tIZtD2kZ6zx",
"nips_2022_tIZtD2kZ6zx",
"nips_2022_tIZtD2kZ6zx",
"nips_2022_tIZtD2kZ6zx",
"nips_2022_tIZtD2kZ6zx"
] |
nips_2022_0e0es11XAIM | Beyond Adult and COMPAS: Fair Multi-Class Prediction via Information Projection | We consider the problem of producing fair probabilistic classifiers for multi-class classification tasks. We formulate this problem in terms of ``projecting'' a pre-trained (and potentially unfair) classifier onto the set of models that satisfy target group-fairness requirements. The new, projected model is given by post-processing the outputs of the pre-trained classifier by a multiplicative factor. We provide a parallelizable, iterative algorithm for computing the projected classifier and derive both sample complexity and convergence guarantees. Comprehensive numerical comparisons with state-of-the-art benchmarks demonstrate that our approach maintains competitive performance in terms of accuracy-fairness trade-off curves, while achieving favorable runtime on large datasets. We also evaluate our method at scale on an open dataset with multiple classes, multiple intersectional groups, and over 1M samples. | Accept | I recommend acceptance for this paper due to the uniformly positive reviews, which emphasize the relevance and novelty of the proposed method for fair multiclass classification. The method is well-supported experimentally, and the authors introduce to the ML literature a new benchmark classification task. During the discussion period, many of the reviewers' concerns around baselines, related work, etc. were addressed. | train | [
"7jGZnQXXVr-",
"gCsxdy48k1gg",
"X5Lu62EVUR",
"_tg0Rgq08gf",
"sELI7gqv5tS",
"w1jMPodF3UY",
"p0_GVQy3ZkI",
"tMjvyUF7k3a",
"iJX9pNnnC8e",
"MM7e1g_pPqK",
"cn91YedKs_7",
"PZiw6pV7WWL",
"hXWDu5CPXc",
"4QtHBfmze",
"m0biLXScFuZz",
"7PJ_w-tr5eF",
"7kWEpd9Ln1d",
"qQwbV_mFGU7",
"l2PaJpIe0T4... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" Thank you for your positive feedback, we are glad that we addressed all of your questions. Thanks again, and have a great rest of your day!",
" Thank you for your positive feedback, we are glad that we addressed all of your concerns. We would sincerely appreciate it if you would kindly consider increasing your ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"PZiw6pV7WWL",
"MM7e1g_pPqK",
"_tg0Rgq08gf",
"tMjvyUF7k3a",
"p0_GVQy3ZkI",
"iJX9pNnnC8e",
"tMjvyUF7k3a",
"iJX9pNnnC8e",
"cn91YedKs_7",
"4QtHBfmze",
"qQwbV_mFGU7",
"7kWEpd9Ln1d",
"yUNE5b4EFhi",
"m0biLXScFuZz",
"7PJ_w-tr5eF",
"5WEQTHyKg12",
"_FtojnmG9x2",
"87iVtf06iG",
"nips_2022_0... |
nips_2022_lXuZaxEaI7 | Batch size-invariance for policy optimization | We say an algorithm is batch size-invariant if changes to the batch size can largely be compensated for by changes to other hyperparameters. Stochastic gradient descent is well-known to have this property at small batch sizes, via the learning rate. However, some policy optimization algorithms (such as PPO) do not have this property, because of how they control the size of policy updates. In this work we show how to make these algorithms batch size-invariant. Our key insight is to decouple the proximal policy (used for controlling policy updates) from the behavior policy (used for off-policy corrections). Our experiments help explain why these algorithms work, and additionally show how they can make more efficient use of stale data. | Accept | The paper provides a practical approach for making policy optimization methods (e.g., PPO) batch-size invariant. Batch-size invariance allows for achieving the same algorithmic behaviour when different computational resources are available (here the trade-off is the batch size). Almost all the reviewers consider the paper interesting and sound. The paper may be of large interest to practitioners and it may allow a much simpler scaling/reproduction of existing algorithms/experiments. | train | [
"ez8AaDYRcn3",
"a34-aCErdP",
"LptDKebpTqag",
"m9m6KW_ins-",
"ngf2vP001bt",
"Z8l14jbzaTN",
"jLrQZOGnUaT",
"jfNEgXLSiCu",
"rLZRJLfem7",
"NRVkI4KiM6M",
"Y1JZirSE3dU",
"IMmpLdqVKa"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I will keep the score as is.\n",
" Thanks for the authors' responses. I agree that the two batch invariants should be separately analyzed, my main concern is that the second batch invariants lack some theoretical guarantees so the whole method is not particularly appealing to me. I ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"jfNEgXLSiCu",
"ngf2vP001bt",
"jLrQZOGnUaT",
"Z8l14jbzaTN",
"IMmpLdqVKa",
"Y1JZirSE3dU",
"NRVkI4KiM6M",
"rLZRJLfem7",
"nips_2022_lXuZaxEaI7",
"nips_2022_lXuZaxEaI7",
"nips_2022_lXuZaxEaI7",
"nips_2022_lXuZaxEaI7"
] |
nips_2022_y--ZUTfbNB | Faster Linear Algebra for Distance Matrices | The distance matrix of a dataset $X$ of $n$ points with respect to a distance function $f$ represents all pairwise distances between points in $X$ induced by $f$. Due to their wide applicability, distance matrices and related families of matrices have been the focus of many recent algorithmic works. We continue this line of research and take a broad view of algorithm design for distance matrices with the goal of designing fast algorithms, which are specifically tailored for distance matrices, for fundamental linear algebraic primitives. Our results include efficient algorithms for computing matrix-vector products for a wide class of distance matrices, such as the $\ell_1$ metric for which we get a linear runtime, as well as an $\Omega(n^2)$ lower bound for any algorithm which computes a matrix-vector product for the $\ell_{\infty}$ case, showing a separation between the $\ell_1$ and the $\ell_{\infty}$ metrics. Our upper bound results in conjunction with recent works on the matrix-vector query model have many further downstream applications, including the fastest algorithm for computing a relative error low-rank approximation for the distance matrix induced by $\ell_1$ and $\ell_2^2$ functions and the fastest algorithm for computing an additive error low-rank approximation for the $\ell_2$ metric, in addition to applications for fast matrix multiplication among others. We also give algorithms for constructing distance matrices and show that one can construct an approximate $\ell_2$ distance matrix in time faster than the bound implied by the Johnson-Lindenstrauss lemma. | Accept | The authors propose fast algorithms for exact matrix-vector computations when the matrices are pairwise distance matrices (not strictly metrics). This leads to fast algorithms for matrix multiplication when one factor is a distance matrix. Experimental results are included, as well as lower and upper bounds.
As the reviewers wrote, the paper is very well written and easy to follow. All the arguments in the main body seem sound.
I agree with the authors that this is a fundamental problem in machine learning with many modern applications.
Please consider adding some text from Section B, as suggested by one of the reviewers. | train | [
"Fr6Bc1Fj4kC",
"9bCImHms0Y9",
"XRxz59YSGBA",
"_VjUhUNlTIm",
"G3l8Bh4aFy",
"a9dA8AWSVLY",
"YHoMuECc83",
"Jj3ahYA3nqW",
"7Fxcm4sAGB0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I still believe linear algebra papers, even when relevant to ML applications, receive best reviews via professional linear algebra journals. But your evidence of precedence is appreciated.",
" Dear Reviewer hXqz,\n\nDid we address all your concerns satisfactorily, namely your main... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"_VjUhUNlTIm",
"_VjUhUNlTIm",
"7Fxcm4sAGB0",
"Jj3ahYA3nqW",
"YHoMuECc83",
"nips_2022_y--ZUTfbNB",
"nips_2022_y--ZUTfbNB",
"nips_2022_y--ZUTfbNB",
"nips_2022_y--ZUTfbNB"
] |