paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_SL4SwMNnwIe | Acceleration in Distributed Sparse Regression | We study acceleration for distributed sparse regression in {\it high-dimensions}, which allows the parameter size to exceed and grow faster than the sample size. When applicable, existing distributed algorithms employing acceleration perform poorly in this setting, theoretically and numerically. We propose a new accelerated distributed algorithm suitable for high-dimensions. The method couples a suitable instance of accelerated Nesterov's proximal gradient with consensus and gradient-tracking mechanisms, aiming at estimating locally the gradient of the empirical loss while enforcing agreement on the local estimates. Under standard assumptions on the statistical model and tuning parameters, the proposed method is proved to globally converge at {\it linear} rate to an estimate that is within the {\it statistical precision} of the model. The iteration complexity scales as $\mathcal{O}(\sqrt{\kappa})$, while the communications per iteration are at most $\widetilde{\mathcal{O}}(\log m/(1-\rho))$,
where $\kappa$ is the restricted condition number of the empirical loss, $m$ is the number of agents, and $\rho\in (0,1)$ measures the network connectivity. As a by-product of our design, we also report an accelerated method for high-dimensional estimations over master-worker architectures, which is of independent interest and compares favorably with existing works. | Accept | The paper provides novel guarantees for the well-studied distributed sparse regression problem. Their theoretical results improve upon the state of the art and extend to settings that many previous results could not handle. From a technical perspective, their result builds upon previous frameworks, but also requires a number of new, novel ideas. The paper does have some downsides; as mentioned previously, some of their ideas do build quite strongly off of previous work, and the presentation of the paper is quite dense, as several reviewers noted. However, the consensus of the reviewers overall is that the technical contribution of the paper is above the bar for acceptance, and would be of interest to the distributed optimization community. | train | [
"VjocaR2oWQd",
"WQSaDCSgQUR",
"WaYFzzOHNJt",
"-MitqNhiwN",
"BB91LGNPyL",
"X3fZ1b4Or1n",
"3gF74L-uzvI",
"7nnUZoZ9O0G",
"6fVUncOskJ2",
"1pHCnTSLShs",
"UlPLHEYq-93",
"6zJkg2-hT2e",
"vmMfJ5miyzF",
"f7ZOlYkAThM",
"M8yJR9ZlJAW",
"IcDujwYnWOn",
"R0zQH6ERSME",
"gOd0AqsWjfY",
"lc3dXuzpHeE... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for finding the time to check our replies.",
" Yes, I will increase my score to 7.",
" Can you please let us know if our answers are satisfactory and address your concerns? There are only few hours left to keep up with the discussion and update the score to reflect the final outcome of the discussion. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
3
] | [
"WQSaDCSgQUR",
"WaYFzzOHNJt",
"7nnUZoZ9O0G",
"X3fZ1b4Or1n",
"6zJkg2-hT2e",
"3gF74L-uzvI",
"vmMfJ5miyzF",
"6fVUncOskJ2",
"R0zQH6ERSME",
"UlPLHEYq-93",
"lc3dXuzpHeE",
"gOd0AqsWjfY",
"IcDujwYnWOn",
"M8yJR9ZlJAW",
"nips_2022_SL4SwMNnwIe",
"nips_2022_SL4SwMNnwIe",
"nips_2022_SL4SwMNnwIe",... |
nips_2022_ztcfHweENtU | Distributive Justice as the Foundational Premise of Fair ML: Unification, Extension, and Interpretation of Group Fairness Metrics | Group fairness metrics are an established way of assessing the fairness of prediction-based decision-making systems. However, these metrics are still insufficiently linked to philosophical theories, and their moral meaning is often unclear. We propose a general framework for analyzing the fairness of decision systems based on theories of distributive justice, encompassing different established "patterns of justice" that correspond to different normative positions. We show that the most popular group fairness metrics can be interpreted as special cases of our approach. Thus, we provide a unifying and interpretative framework for group fairness metrics that reveals the normative choices associated with each of them and that allows understanding their moral substance. At the same time, we provide an extension of the space of possible fairness metrics beyond the ones currently discussed in the fair ML literature. Our framework also allows overcoming several limitations of group fairness metrics that have been criticized in the literature, most notably (1) that they are parity-based, i.e., that they demand some form of equality between groups, which may sometimes be harmful to marginalized groups, (2) that they only compare decisions across groups, but not the resulting consequences for these groups, and (3) that the full breadth of the distributive justice literature is not sufficiently represented. | Reject | The paper grounds several fairness notions used in machine learning in principles of distributive justice. The stated motivation is to understand the normative choices behind each and to combat the shortcoming of some of these notions. The main concerns of the reviews were that this grounding is very limited in terms of its scope and there is little actionable insight that follows. Furthermore, many of the connections have already been acknowledged in the literature, e.g. [11, 29]. Philosophical underpinnings of the sciences are very important, as they can help advance both the questions we ask and the answers we offer. The effort of this paper is thus appreciated. However, as it falls somewhat short of advancing either the philosophy or the science, it may be of limited significance to the community. To garner better appreciation of their work, the authors are advised to elaborate on how their grounding could guide the field (e.g., How could one make algorithmic fairness choices in light of this perspective? Have there been instances where the wrong choice was made (algorithmically) relative to the stated intent (normatively)? Are there limitations to this perspective, perhaps in terms of assumptions that should be challenged? etc.) | val | [
"3XwR14zIFf",
"AZin02PrZEG",
"2kVXNSwVhC",
"6iWcmo43vm",
"OqvGvpOuxEB",
"v8myW6SRewt",
"XGkGgxBnTEE"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the answer. Please include what is not covered in the final version of the paper.",
" We thank the reviewer for the questions and comments and we address each concern below.\n\n1. We agree with the reviewer’s view that group fairness notions are an important red flag that can be used to audit a predi... | [
-1,
-1,
-1,
-1,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"6iWcmo43vm",
"XGkGgxBnTEE",
"OqvGvpOuxEB",
"v8myW6SRewt",
"nips_2022_ztcfHweENtU",
"nips_2022_ztcfHweENtU",
"nips_2022_ztcfHweENtU"
] |
nips_2022_6scShPCpdDu | Embed and Emulate: Learning to estimate parameters of dynamical systems with uncertainty quantification | This paper explores learning emulators for parameter estimation with uncertainty estimation of high-dimensional dynamical systems. We assume access to a computationally complex simulator that inputs a candidate parameter and outputs a corresponding multi-channel time series. Our task is to accurately estimate a range of likely values of the underlying parameters. Standard iterative approaches necessitate running the simulator many times, which is computationally prohibitive. This paper describes a novel framework for learning feature embeddings of observed dynamics jointly with an emulator that can replace high-cost simulators. Leveraging a contrastive learning approach, our method exploits intrinsic data properties within and across parameter and trajectory domains. On a coupled 396-dimensional multiscale Lorenz 96 system, our method significantly outperforms a typical parameter estimation method based on predefined metrics and a classical numerical simulator, and with only 3.5% of the baseline's computation time. Ablation studies highlight the potential of explicitly designing learned emulators for parameter estimation by leveraging contrastive learning. | Accept | All reviewers agree that the paper proposes an interesting approach that aims at efficiently solving the inverse problem of stochastic simulators. Although some reviewers raised technical concerns in their initial reviews, those have largely been resolved by the authors' responses. Thus, although there are some points that should be modified from the current form, I think we can expect the authors to revise the paper for the camera-ready to reflect the discussion. Based on this, I recommend acceptance of this paper. | test | [
"UKRj-PL2T8",
"OLgFqe5fDS7",
"4H9_PRFRa0_",
"d0co7-0B52-",
"dEzEi7EFWam",
"Xzh_8V1wNSG",
"pjFse8C_YVc",
"PRE8RV72kn3",
"wa-ZMhgd9lG",
"NrEK_GZ8gWm",
"jGS21nmODd",
"HJXACHwCbI1",
"dxTXa-bo6Dz",
"-ej0qfSMxhk",
"iIubL3baYIA"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for taking the time to read our rebuttal carefully. We really appreciate your thoughtful and constructive feedback.\n\nThank you,\n\nAuthors",
" I would like to thank the authors for the diligent effort in replying to all reviews, including mine. In particular, I believe the comparison with stat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"d0co7-0B52-",
"NrEK_GZ8gWm",
"PRE8RV72kn3",
"nips_2022_6scShPCpdDu",
"Xzh_8V1wNSG",
"pjFse8C_YVc",
"-ej0qfSMxhk",
"wa-ZMhgd9lG",
"iIubL3baYIA",
"jGS21nmODd",
"HJXACHwCbI1",
"dxTXa-bo6Dz",
"nips_2022_6scShPCpdDu",
"nips_2022_6scShPCpdDu",
"nips_2022_6scShPCpdDu"
] |
nips_2022_QEODRZ7j3L_ | DGD^2: A Linearly Convergent Distributed Algorithm For High-dimensional Statistical Recovery | We study linear regression from data distributed over a network of agents (with no master node) under high-dimensional scaling, which allows the ambient dimension to grow faster than the sample size. We propose a novel decentralization of the projected gradient algorithm whereby agents iteratively update their local estimates by a “double-mixing” mechanism, which suitably combines averages of iterates and gradients of neighbouring nodes. Under standard assumptions on the statistical model and network connectivity, the proposed method enjoys global linear convergence up to the statistical precision of the model. This improves on guarantees of (plain) DGD algorithms, whose iteration complexity grows undesirably with the ambient dimension. Our technical contribution is a novel convergence analysis that resembles (albeit different) algorithmic stability arguments extended to high-dimensions and distributed setting, which is of independent interest. | Accept | This paper studies the problem of high-dimensional linear regression in a decentralized setting. A distributed algorithm, incorporating so-called "double mixing" with decentralized gradient descent, is proposed that converges linearly to statistical precision in the regime where the problem dimension is much larger than the number of samples. The paper is well-written, clearly positions the contributions with respect to previous work, and provides significant results. The reviewers are also unanimous in recommending that the work be accepted, and recognizing the contribution as significant.
The reviewers found the post-rebuttal discussion to be very informative, and we recommend that the authors expand on the following points when preparing the camera ready manuscript:
* Adding some discussion of communication overhead associated with double mixing
* The tradeoff between double mixing (parameters and gradients) vs. additional rounds of communication to cope with poor conditioning
* Discussion of potential challenges extending the results to the stochastic gradient setting
| train | [
"q2Hi1R412Jr",
"WMmPHyZBlzI",
"-SZE2nrlpg_",
"l2k_wEvlIs_",
"G3pCCWHI6dR",
"tCQet1_fDyH",
"KzYOeU4eddM",
"mRWfOiO-oQh",
"8RWvZ2OfHql",
"rLFuGsBSVxo"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The reviewer would like to thank the authors for their detailed and insightful responses which have satisfactorily addressed the reviewer's concerns. The reviewer finds this work valuable and thinks that it might spark some sebsequent works towards improving communication efficiency. The reviewer is thus happy to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"8RWvZ2OfHql",
"KzYOeU4eddM",
"tCQet1_fDyH",
"rLFuGsBSVxo",
"tCQet1_fDyH",
"8RWvZ2OfHql",
"mRWfOiO-oQh",
"nips_2022_QEODRZ7j3L_",
"nips_2022_QEODRZ7j3L_",
"nips_2022_QEODRZ7j3L_"
] |
nips_2022_R5yl-ySZR0U | Draft-and-Revise: Effective Image Generation with Contextual RQ-Transformer | Although autoregressive models have achieved promising results on image generation, their unidirectional generation process prevents the resultant images from fully reflecting global contexts. To address the issue, we propose an effective image generation framework of \emph{Draft-and-Revise} with \emph{Contextual RQ-transformer} to consider global contexts during the generation process. As a generalized VQ-VAE, RQ-VAE first represents a high-resolution image as a sequence of discrete code stacks. After code stacks in the sequence are randomly masked, Contextual RQ-Transformer is trained to infill the masked code stacks based on the unmasked contexts of the image. Then, we propose the two-phase decoding, Draft-and-Revise, for Contextual RQ-Transformer to generate an image, while fully exploiting the global contexts of the image during the generation process. Specifically, in the \emph{draft} phase, our model first focuses on generating diverse images despite rather low quality. Then, in the \emph{revise} phase, the model iteratively improves the quality of images, while preserving the global contexts of generated images. In experiments, our method achieves state-of-the-art results on conditional image generation. We also validate that the Draft-and-Revise decoding can achieve high performance by effectively controlling the quality-diversity trade-off in image generation. | Accept | A new transformer method for image generation is discussed. Reviewers appreciated the results but raised concerns regarding exposition, some questionable ablations, limited novelty and relation to prior work (MaskGIT). The rebuttal was able to address some concerns. In a discussion reviewers generally kept their rating but raised concerns regarding novelty and ablations again. AC thinks the paper just barely made the cut and strongly encourages authors to further improve the ablations in the camera ready version to further strengthen the paper. AC also recommended senior ACs to look at this decision and possibly revise. | val | [
"abgZpTasDVv",
"T5LEwiKwBp-",
"0YmBHULgCG8",
"94CfH8LjLd",
"zp3K7hl3wsU",
"lj7kKfhDM2o",
"xFkVuioVwkI",
"nhQZe62x73P",
"TKPjeuNuTlX"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the reviewers’ constructive comments and sincerely respond to all the questions and concerns.\nIn addition, we have uploaded the revised version of our submission and highlighted the changes during the rebuttal period to address the reviews’ comments.\nIn the author's responses, the line numbers (e.... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"nips_2022_R5yl-ySZR0U",
"TKPjeuNuTlX",
"nhQZe62x73P",
"xFkVuioVwkI",
"lj7kKfhDM2o",
"nips_2022_R5yl-ySZR0U",
"nips_2022_R5yl-ySZR0U",
"nips_2022_R5yl-ySZR0U",
"nips_2022_R5yl-ySZR0U"
] |
nips_2022_pyLFJ9TBZw | Generative multitask learning mitigates target-causing confounding | We propose generative multitask learning (GMTL), a simple and scalable approach to causal representation learning for multitask learning. Our approach makes a minor change to the conventional multitask inference objective, and improves robustness to target shift. Since GMTL only modifies the inference objective, it can be used with existing multitask learning methods without requiring additional training. The improvement in robustness comes from mitigating unobserved confounders that cause the targets, but not the input. We refer to them as \emph{target-causing confounders}. These confounders induce spurious dependencies between the input and targets. This poses a problem for conventional multitask learning, due to its assumption that the targets are conditionally independent given the input. GMTL mitigates target-causing confounding at inference time, by removing the influence of the joint target distribution, and predicting all targets jointly. This removes the spurious dependencies between the input and targets, where the degree of removal is adjustable via a single hyperparameter. This flexibility is useful for managing the trade-off between in- and out-of-distribution generalization. Our results on the Attributes of People and Taskonomy datasets reflect an improved robustness to target shift across four multitask learning methods. | Accept | The decision is to accept the paper.
The paper presents an interesting perspective on confounding in the marginal distribution of task targets in multi-task learning settings, under an anti-causal prediction and no-unobserved-confounders assumption. The paper suggests an inference time strategy for eliminating some of the influence of the marginal correlation between task labels by subtracting off their joint log-probability from an inference-time objective. For convenience (and usability with current MTL training pipelines), the strategy employs an additional factorization in the posterior over tasks. The authors demonstrate effectiveness on several datasets.
The reviewers agreed that this was a fresh perspective on this problem, but also noted some limitations that could deserve more discussion in the paper. In particular, it would be useful for the authors to give an example (even a toy example) where the factorization really does hurt compared to an oracle that had access to the joint p(y, y' | x), which would motivate more fundamental research on the MTL side, and give end-users a sense of whether the method was appropriate for their problem.
Less pressingly, additional commentary on the particular modeling assumption about downstream domains that is being made with the alpha parameterization of the objective would also be useful; here, too, it would be useful to have a toy example where this simplification might also yield sub-optimal results, or at least a note that there may be other interpolation schemes between DMTL and GMTL that would work better. Basically, framing this as _a_ sensible solution rather than _the_ solution would be helpful to point other researchers toward directions for future work.
Despite some of these reviewer concerns, I think there is enough here that the community would find this paper stimulating and useful. | train | [
"L2ByJxvKRZu",
"YcPaj_Lx668",
"RPHhf67tai2",
"WSPEJFP-PxD",
"X6zTVygko39",
"uWAl_Tf3R2v",
"rbMtGjc9T3u",
"fECLXCwS07d",
"x-FitjxxZUT",
"yQ071W9vUvl",
"q_Q1-vRu9TE",
"g0iIa1L1LTn"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer, could you confirm whether we addressed your concerns? If yes, please consider increasing our score.",
" Thank you for your response. Under our causal graph, $Y$ and $Y’$ are indeed conditionally dependent given $X$. When viewed in isolation, us allowing $p(y, y’ | x)$ to factorize appears to viol... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"rbMtGjc9T3u",
"X6zTVygko39",
"WSPEJFP-PxD",
"x-FitjxxZUT",
"fECLXCwS07d",
"nips_2022_pyLFJ9TBZw",
"g0iIa1L1LTn",
"q_Q1-vRu9TE",
"yQ071W9vUvl",
"nips_2022_pyLFJ9TBZw",
"nips_2022_pyLFJ9TBZw",
"nips_2022_pyLFJ9TBZw"
] |
nips_2022_1mFfKXYMg5a | Minimax Optimal Online Imitation Learning via Replay Estimation | Online imitation learning is the problem of how best to mimic expert demonstrations, given access to the environment or an accurate simulator. Prior work has shown that in the \textit{infinite} sample regime, exact moment matching achieves value equivalence to the expert policy. However, in the \textit{finite} sample regime, even if one has no optimization error, empirical variance can lead to a performance gap that scales with $H^2 / N_{\text{exp}}$ for behavioral cloning and $H / N_{\text{exp}}$ for online moment matching, where $H$ is the horizon and $N_{\text{exp}}$ is the size of the expert dataset. We introduce the technique of ``replay estimation'' to reduce this empirical variance: by repeatedly executing cached expert actions in a stochastic simulator, we compute a smoother expert visitation distribution estimate to match. In the presence of general function approximation, we prove a meta theorem reducing the performance gap of our approach to the \textit{parameter estimation error} for offline classification (i.e. learning the expert policy). In the tabular setting or with linear function approximation, our meta theorem shows that the performance gap incurred by our approach achieves the optimal $\widetilde{O} \left( \min\left( H^{3/2} / N_{\text{exp}}, H / \sqrt{N_{\text{exp}}} \right) \right)$ dependency, under significantly weaker assumptions compared to prior work, Rajaraman et al. (2021). We implement multiple instantiations of our approach on several continuous control tasks and find that we are able to significantly improve policy performance across a variety of dataset sizes. | Accept | Everyone in the review committee sees this as a strong paper with a novel and practically useful replay estimation technique. Solid theoretical contribution with a meta theorem that can deal with general function approximation and weaker assumptions. Also strong empirical results on various benchmarks. During rebuttal, authors were able to address most of the technical comments. Great work! | train | [
"WEB_-kW4VeL",
"O5rWHZQFwUZ",
"EZ1qUWqh5z",
"HJ9-Yap-uHkS",
"S39YNMKwfa",
"XQZQ4nkHABQ",
"6OqWxfRFYr_",
"2kv8jvJx_YH",
"NiLOjKNt8XJ",
"7Jec8pq8rKT",
"87LNBcdKC6q",
"ZZRPj1YtOs",
"uD5Dt_fmKzd",
"JIvFem2x2K",
"W2qkKGB8IGq",
"r15l4eBMWcF"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for the detailed response. Most of my concerns are well addressed. I increased my score from 5 to 7. ",
" Thanks for the clarification. I appreciate the experiment setup in this paper. But, I believe current empirical results are not sufficient to support some claims. Here are my suggestio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"S39YNMKwfa",
"EZ1qUWqh5z",
"HJ9-Yap-uHkS",
"XQZQ4nkHABQ",
"2kv8jvJx_YH",
"6OqWxfRFYr_",
"NiLOjKNt8XJ",
"ZZRPj1YtOs",
"7Jec8pq8rKT",
"87LNBcdKC6q",
"r15l4eBMWcF",
"W2qkKGB8IGq",
"JIvFem2x2K",
"nips_2022_1mFfKXYMg5a",
"nips_2022_1mFfKXYMg5a",
"nips_2022_1mFfKXYMg5a"
] |
nips_2022_yTJze_xm-u6 | Variational Model Perturbation for Source-Free Domain Adaptation | We aim for source-free domain adaptation, where the task is to deploy a model pre-trained on source domains to target domains. The challenges stem from the distribution shift from the source to the target domain, coupled with the unavailability of any source data and labeled target data for optimization. Rather than fine-tuning the model by updating the parameters, we propose to perturb the source model to achieve adaptation to target domains. We introduce perturbations into the model parameters by variational Bayesian inference in a probabilistic framework. By doing so, we can effectively adapt the model to the target domain while largely preserving the discriminative ability. Importantly, we demonstrate the theoretical connection to learning Bayesian neural networks, which proves the generalizability of the perturbed model to target domains. To enable more efficient optimization, we further employ a parameter sharing strategy, which substantially reduces the learnable parameters compared to a fully Bayesian neural network.
Our model perturbation provides a new probabilistic way for domain adaptation which enables efficient adaptation to target domains while maximally preserving knowledge in source models. Experiments on several source-free benchmarks under three different evaluation settings verify the effectiveness of the proposed variational model perturbation for source-free domain adaptation. | Accept | This paper proposes a novel probabilistic framework for source-free domain adaptation, in which the source model serves as the invariant part (mean) while a perturbation (variance) is applied to the source model parameters to derive the target model that accounts for the target-specific distribution. All four reviewers provided detailed and constructive comments, which were well taken into account by the authors in their revision and rebuttal. After discussion, all reviewers were positive about the paper. AC agreed with the reviewers that this paper introduces a novel, solid, and parameter-simple approach to source-free domain adaptation with comprehensive empirical evaluation for several settings, which will be of wide interest to the community. A further comment of AC is that the connection to Shai Ben-David et al.'s seminal bound is rather off-topic to this paper, making the discussion subject to flaws --- there is no formal modeling of the source and target data distributions that is required by the bound, while the bound cannot describe domain relatedness in terms of model parameters. So I suggest that the authors remove this part to make the paper more convincing. | train | [
"KpifzvRo_Fy",
"YiW_pzJrK_m",
"GTtkQ6sAwQF",
"E8S0UD0ARSQ",
"NkLqbjL3sr",
"Wzr1lmuqljQ",
"0l-9vJk55225",
"BibdVaooCS4",
"6KI7BVmvTbTI",
"BS_UxOd4dsk",
"EuMMoFSSyNe",
"tcXWd0ZSyZw",
"4aeyTi9QNwW",
"1fZfQl3kOz3",
"LwWVsY_U4Aa"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the efforts made by the authors on the additional works and clarifications. Most of my concerns have been addressed. Thank you.",
" We appreciate your positive reviews and support. We are glad that the rebuttal and additional experiments adequately addressed your concerns. Finally, thanks for your ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"EuMMoFSSyNe",
"nips_2022_yTJze_xm-u6",
"BS_UxOd4dsk",
"6KI7BVmvTbTI",
"0l-9vJk55225",
"tcXWd0ZSyZw",
"tcXWd0ZSyZw",
"LwWVsY_U4Aa",
"LwWVsY_U4Aa",
"1fZfQl3kOz3",
"4aeyTi9QNwW",
"nips_2022_yTJze_xm-u6",
"nips_2022_yTJze_xm-u6",
"nips_2022_yTJze_xm-u6",
"nips_2022_yTJze_xm-u6"
] |
nips_2022_JyTT03dqCFD | The Neural Testbed: Evaluating Joint Predictions |
Predictive distributions quantify uncertainties ignored by point estimates. This paper introduces The Neural Testbed: an open source benchmark for controlled and principled evaluation of agents that generate such predictions. Crucially, the testbed assesses agents not only on the quality of their marginal predictions per input, but also on their joint predictions across many inputs. We evaluate a range of agents using a simple neural network data generating process.
Our results indicate that some popular Bayesian deep learning agents do not fare well with joint predictions, even when they can produce accurate marginal predictions. We also show that the quality of joint predictions drives performance in downstream decision tasks. We find these results are robust across a wide range of generative models, and highlight the practical importance of joint predictions to the community. | Accept | Getting a reasonable estimation of joint predictions is crucial for many uncertainty estimation tasks. The paper proposed a set of benchmarks for predicting joint probabilities of the outputs over a few input examples. The proposed synthetic tasks are easy to deploy and test on most Bayesian methods, including Bayesian neural networks.
Uncertainty estimation is one of the fundamental challenges for modern machine learning algorithms. Many downstream application areas in reinforcement learning, active learning, and safety require a model to assess its prediction confidence. Yet, unlike the standard classification tasks, there is a lack of benchmark datasets to evaluate the performance of uncertainty methods. The strengths of this paper are:
1) It develops a suite of benchmarks, although synthetic and toyish, to allow a quantitative study of the joint predictions of existing machine learning methods. The proposed benchmark allows researchers to study uncertainty estimation without invoking any downstream application in RL or active learning.
2) The work bridges the gap between benchmarks on marginal predictions, such as Riquelme et al. "Deep Bayesian bandits showdown", and the heavy machinery of exploration tasks in RL.
The weakness of the current submission is a lack of clarity in the writing, as pointed out by one of the reviewers. Many experimental hyperparameters are omitted from the main text; including them would help readers understand the benchmark details and design choices. Also, the benchmarks' simplicity is a glaring limitation, and it is unclear whether the choice of generative model could generalize to high-dimensional problems.
Given the scarcity of other benchmarks for uncertainty estimation tasks, the strengths of this paper outweigh its weaknesses. | val | [
"j1b6UnmEtm",
"ohB57hrSZWe",
"Jf6BE9_8XE",
"a85ayJXptd6",
"b_I7G1pB-9e",
"XtZOBsv2chJ",
"mpUfWPjbwlw5",
"sfEjFmU-F8h",
"n7v_PfSmEiF",
"itrSd-EKPjj",
"8XoNAuW5XKl",
"NYES4pd9qAF",
"98D7uBVUJg",
"bjGKXG2L2nK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response for my feedback. Regarding my comment on limitations/assumptions: The main assumption I was referring to was around the environment epsilon distribution and its scalability aspect. This is also raised by other reviewers as the analyses are performed with simple models. Havin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
2,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"b_I7G1pB-9e",
"XtZOBsv2chJ",
"a85ayJXptd6",
"bjGKXG2L2nK",
"98D7uBVUJg",
"NYES4pd9qAF",
"8XoNAuW5XKl",
"itrSd-EKPjj",
"itrSd-EKPjj",
"nips_2022_JyTT03dqCFD",
"nips_2022_JyTT03dqCFD",
"nips_2022_JyTT03dqCFD",
"nips_2022_JyTT03dqCFD",
"nips_2022_JyTT03dqCFD"
] |
nips_2022_tzNWhvOomsK | Tight Lower Bounds on Worst-Case Guarantees for Zero-Shot Learning with Attributes | We develop a rigorous mathematical analysis of zero-shot learning with attributes. In this setting, the goal is to label novel classes with no training data, only detectors for attributes and a description of how those attributes are correlated with the target classes, called the class-attribute matrix. We develop the first non-trivial lower bound on the worst-case error of the best map from attributes to classes for this setting, even with perfect attribute detectors. The lower bound characterizes the theoretical intrinsic difficulty of the zero-shot problem based on the available information---the class-attribute matrix---and the bound is practically computable from it. Our lower bound is tight, as we show that we can always find a randomized map from attributes to classes whose expected error is upper bounded by the value of the lower bound. We show that our analysis can be predictive of how standard zero-shot methods behave in practice, including which classes will likely be confused with others. | Accept | The paper provides a lower bound on the error attainable by a zero-shot learning method in terms of a linear program involving the class-attribution matrix which is provided as side-information. All the reviewers agree that the analysis is novel and studies an important problem. The main concerns are that: a. the problem itself is studied in a highly constrained setup, b. hard to understand the exact insights from the result, and how it could be used to further the state of the art.
In particular, it would be important to tone down some of the claims, and clarify to the reader that the claims hold only in the context of the considered problem setup. | train | [
"bJg99TfT0zP",
"RWPgaFzNCdH",
"nAjlFI2Clm",
"3c2xNTiZ0rI",
"gu8XXRdRoaJ",
"gMxyUM-FbP1",
"wDhf13cc9k",
"XSl0hviXZ3S",
"Xaahi2v--R1",
"veJBjLdk7i",
"Z6GBd756G6c",
"0wB1cgbZ-kN"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed response.\n\nI feel that the majority of my concerns have been addressed. I apologize for my late response; I needed some time to read over the provided reference [Romera-Paredes & Torr, 2015]. Several reviewers, including myself, were concerned by the relevance of the result to the \... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"RWPgaFzNCdH",
"nAjlFI2Clm",
"Z6GBd756G6c",
"0wB1cgbZ-kN",
"veJBjLdk7i",
"wDhf13cc9k",
"Xaahi2v--R1",
"nips_2022_tzNWhvOomsK",
"nips_2022_tzNWhvOomsK",
"nips_2022_tzNWhvOomsK",
"nips_2022_tzNWhvOomsK",
"nips_2022_tzNWhvOomsK"
] |
nips_2022_Wl1ZIgMqLlq | Deep Ensembles Work, But Are They Necessary? | Ensembling neural networks is an effective way to increase accuracy, and can often match the performance of individual larger models. This observation poses a natural question: given the choice between a deep ensemble and a single neural network with similar accuracy, is one preferable over the other? Recent work suggests that deep ensembles may offer distinct benefits beyond predictive power: namely, uncertainty quantification and robustness to dataset shift. In this work, we demonstrate limitations to these purported benefits, and show that a single (but larger) neural network can replicate these qualities. First, we show that ensemble diversity, by any metric, does not meaningfully contribute to an ensemble's ability to detect out-of-distribution (OOD) data, but is instead highly correlated with the relative improvement of a single larger model. Second, we show that the OOD performance afforded by ensembles is strongly determined by their in-distribution (InD) performance, and - in this sense - is not indicative of any "effective robustness." While deep ensembles are a practical way to achieve improvements to predictive power, uncertainty quantification, and robustness, our results show that these improvements can be replicated by a (larger) single model. | Accept | This paper challenges a widely held view in deep learning: that deep ensembles are always superior to single models when it comes to uncertainty quantification and robustness. The authors convincingly show that a single big model may do equally well, and that much of the benefit of ensembles over single models of the same size seems to derive more from their improved accuracy than from their diversity. The reviewers thoroughly investigated the authors' claims and engaged in discussion. All reviewers are convinced that the main claim of the paper is mostly correct, and that it is a very interesting finding. I therefore strongly recommend accepting this paper. | train | [
"VV8bKG9ayqK",
"q7C4H13Mnmh",
"JUShIneD0y_",
"DIFWVJGVUZ",
"8gp_ZTQFeS",
"gLgZwjQxShdN",
"jxi9GL38bcB",
"53R8-ycVfkH",
"PVb44d2TSqqL",
"-xcv_dXl2qb",
"K1CPU0QnLIC",
"cKvXI7Sbg2p",
"fXsViyUXgB4",
"nbvH97FIaCp",
"eCKsdf4kGf0",
"2MRBjLk8nQW",
"xMWklnRnJHx"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much! We agree that the above points will be of interest to readers, and that addressing them in the discussion will improve our paper. We will be sure to include them in our final manuscript. ",
" Thank you for the clarifications. I think that adding the above mentioned points to the discussion ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
3
] | [
"q7C4H13Mnmh",
"-xcv_dXl2qb",
"gLgZwjQxShdN",
"jxi9GL38bcB",
"53R8-ycVfkH",
"cKvXI7Sbg2p",
"K1CPU0QnLIC",
"PVb44d2TSqqL",
"xMWklnRnJHx",
"2MRBjLk8nQW",
"eCKsdf4kGf0",
"nbvH97FIaCp",
"nips_2022_Wl1ZIgMqLlq",
"nips_2022_Wl1ZIgMqLlq",
"nips_2022_Wl1ZIgMqLlq",
"nips_2022_Wl1ZIgMqLlq",
"n... |
nips_2022_CIYF4tpQzgK | Recursive Reasoning in Minimax Games: A Level $k$ Gradient Play Method | Despite the success of generative adversarial networks (GANs) in generating visually appealing images, they are notoriously challenging to train. In order to stabilize the learning dynamics in minimax games, we propose a novel recursive reasoning algorithm: Level $k$ Gradient Play (Lv.$k$ GP) algorithm. Our algorithm does not require sophisticated heuristics or second-order information, as do existing algorithms based on predictive updates. We show that as k increases, Lv.$k$ GP converges asymptotically towards an accurate estimation of players' future strategy.
Moreover, we justify that Lv.$\infty$ GP naturally generalizes a line of provably convergent game dynamics which rely on predictive updates. Furthermore, we provide its local convergence property in nonconvex-nonconcave zero-sum games and global convergence in bilinear and quadratic games. By combining Lv.$k$ GP with Adam optimizer, our algorithm shows a clear advantage in terms of performance and computational overhead compared to other methods. Using a single Nvidia RTX3090 GPU and 30 times fewer parameters than BigGAN on CIFAR-10, we achieve an FID of 10.17 for unconditional image generation within 30 hours, allowing GAN training on common computational resources to reach state-of-the-art performance. | Accept | This paper proposes a novel recursive reasoning algorithm for minimax games, in which players try to anticipate their opponent's next round move instead of reacting to the current round. Importantly, this is achieved without requiring expensive second order information. Reviewers found the paper clearly written and well motivated, addressing an important problem. The work appears novel, and there is good experimental evidence that the algorithm delivers on its promises. | train | [
"inYLDsTWQf3",
"coJe-1eBPH",
"W7ZwFRAXU_4",
"QrkAN85heF0",
"gV0DkEvNnO",
"Rx3Be8_W3EF",
"-dH698JgpJr",
"cE4-Lr5KEn",
"oe60kAVtv02",
"T1uwoqQ0Wnx",
"iNIqrpOoMzc",
"LlnNgqA4hX_",
"uIxBqGjIGVJ",
"17UkVEouz23",
"h6QGqGCycoV",
"zgiL5hnFb8H",
"jFR4F1A0QD6",
"qPyc-Zl7Xdj",
"InxmxEhGpOw"... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
... | [
" Dear Reviewer Rboc,\n\nAs participants of Neurips 2022, we sincerely thank you for your candid comments and helpful suggestions. We really appreciate your consistent communication with us. We believe that you have done a great job as a reviewer. \n\nAs the author of this paper, we will try our best to address you... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"coJe-1eBPH",
"W7ZwFRAXU_4",
"cE4-Lr5KEn",
"gRaihGAxiKi",
"cE4-Lr5KEn",
"cE4-Lr5KEn",
"cE4-Lr5KEn",
"oe60kAVtv02",
"T1uwoqQ0Wnx",
"LlnNgqA4hX_",
"uIxBqGjIGVJ",
"uIxBqGjIGVJ",
"17UkVEouz23",
"h6QGqGCycoV",
"VT_v9koC-3u",
"siuTfDK7Bk0",
"qPyc-Zl7Xdj",
"InxmxEhGpOw",
"PcVAZgCLvNQ",
... |
nips_2022_ErUlLrGaVEU | The Privacy Onion Effect: Memorization is Relative | Machine learning models trained on private datasets have been shown to leak their private data. Recent work has found that the average data point is rarely leaked---it is often the outlier samples that are subject to memorization and, consequently, leakage. We demonstrate and analyze an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable to a privacy attack exposes a new layer of previously-safe points to the same attack. We perform several experiments that are consistent with this hypothesis. For example, we show that for membership inference attacks, when the layer of easiest-to-attack examples is removed, another layer below becomes easy-to-attack. The existence of this effect has various consequences. For example, it suggests that proposals to defend against memorization without training with rigorous privacy guarantees are unlikely to be effective. Further, it suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users. | Accept | Reviewers found the paper to be thought-provoking and relevant, and felt it would be of broad interest to the community.
| train | [
"JFA5SWd7h_T",
"0VmZWc-0ht1",
"ehQNqidlCU",
"Gc1k0jKmIh9",
"scutkmLDJy",
"i2i5r_NvloW",
"50VqEpxE6yG",
"C2HHA8VzjED9",
"JuR0Q69HDio",
"fIAwEgdUZ6t",
"QhrzrIN-6wA",
"QE7Dj4Q-GMy",
"9IKFfwlJdGK"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your responses. I will keep my positive score.",
" We can add a histogram like this to the appendix. In the meantime, here are some percentiles:\n\nmin: 0.499\n\n10%: 0.506\n\n20%: 0.511\n\n30%: 0.523\n\n40%: 0.545\n\nmedian: 0.577\n\n60%: 0.620\n\n70%: 0.674\n\n80%: 0.744\n\n90%: 0.833\n\nmax: 0.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"C2HHA8VzjED9",
"ehQNqidlCU",
"JuR0Q69HDio",
"scutkmLDJy",
"i2i5r_NvloW",
"9IKFfwlJdGK",
"QE7Dj4Q-GMy",
"QhrzrIN-6wA",
"fIAwEgdUZ6t",
"nips_2022_ErUlLrGaVEU",
"nips_2022_ErUlLrGaVEU",
"nips_2022_ErUlLrGaVEU",
"nips_2022_ErUlLrGaVEU"
] |
nips_2022_1LmgISIDZJ | Learning to Sample and Aggregate: Few-shot Reasoning over Temporal Knowledge Graphs | In this paper, we investigate a realistic but underexplored problem, called few-shot temporal knowledge graph reasoning, that aims to predict future facts for newly emerging entities based on extremely limited observations in evolving graphs. It offers practical value in applications that need to derive instant new knowledge about new entities in temporal knowledge graphs (TKGs) with minimal supervision. The challenges mainly come from the few-shot and time shift properties of new entities. First, the limited observations associated with them are insufficient for training a model from scratch. Second, the potentially dynamic distributions from the initially observable facts to the future facts ask for explicitly modeling the evolving characteristics of new entities. We correspondingly propose a novel Meta Temporal Knowledge Graph Reasoning (MetaTKGR) framework. Unlike prior work that relies on rigid neighborhood aggregation schemes to enhance low-data entity representation, MetaTKGR dynamically adjusts the strategies of sampling and aggregating neighbors from recent facts for new entities, through temporally supervised signals on future facts as instant feedback. Besides, such a meta temporal reasoning procedure goes beyond existing meta-learning paradigms on static knowledge graphs that fail to handle temporal adaptation with large entity variance. We further provide a theoretical analysis and propose a temporal adaptation regularizer to stabilize the meta temporal reasoning over time. Empirically, extensive experiments on three real-world TKGs demonstrate the superiority of MetaTKGR over eight state-of-the-art baselines by a large margin. | Accept | This paper studies few-shot temporal knowledge graph reasoning. The task looks to predict future facts for newly emerging entities based on limited observations in evolving graphs. The authors propose MetaTKGR, a meta-learning framework for reasoning over temporal knowledge graphs.
The reviewers agree that the proposed method is reasonable and sound, the experiments are thorough, and the results provide valuable insights for future work. The concerns and questions raised by the reviewers are properly addressed by the authors' response. | train | [
"BGDD6eoBR95",
"PzfmA0IQgXG",
"uSh98jj5f5t",
"5bPgG5l1DJr",
"22H3TIIeJ5T",
"8MnAwaPPhqW",
"KAtzUAYjXcf",
"GYXdCe5yVQh",
"q8DUcCr8ibA",
"R-CsuTrPXAK",
"2Hh904Kc4cO",
"W6N8OJjOYxU",
"hj9zBUChOA_"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank the reviewer oR3L for providing valuable comments to improve our paper. Hopefully, you had a chance to go over our responses. Please let us know if we have addressed all your concerns and questions or not. We look forward to a fruitful discussion and your replies, which are highly valuable to u... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"22H3TIIeJ5T",
"KAtzUAYjXcf",
"GYXdCe5yVQh",
"R-CsuTrPXAK",
"8MnAwaPPhqW",
"hj9zBUChOA_",
"W6N8OJjOYxU",
"q8DUcCr8ibA",
"2Hh904Kc4cO",
"nips_2022_1LmgISIDZJ",
"nips_2022_1LmgISIDZJ",
"nips_2022_1LmgISIDZJ",
"nips_2022_1LmgISIDZJ"
] |
nips_2022_KXybrIUJnya | A Character-Level Length-Control Algorithm for Non-Autoregressive Sentence Summarization | Sentence summarization aims at compressing a long sentence into a short one that keeps the main gist, and has extensive real-world applications such as headline generation. In previous work, researchers have developed various approaches to improve the ROUGE score, which is the main evaluation metric for summarization, whereas controlling the summary length has not drawn much attention. In our work, we address a new problem of explicit character-level length control for summarization, and propose a dynamic programming algorithm based on the Connectionist Temporal Classification (CTC) model. Results show that our approach not only achieves higher ROUGE scores but also yields more complete sentences. | Accept | This paper addresses sentence compression by controlling the output by the number of characters. The proposal relies
on a non-autoregressive generation model interfaced with a dynamic programming algorithm where lengths are divided into buckets
for efficient inference. The proposed algorithm has advantages over previous methods, both computationally and in terms
of results. The authors have addressed the reviewers' comments, and honestly discussed shortcomings and limitations.
It is also worth pointing out that they have even conducted a human evaluation study to assess the completeness of the summaries
produced by their length-controlled model. I would urge the authors to evaluate additional aspects of the output such as content, and
faithfulness to the input (i.e., are there hallucinations). Also the comparison should include an AR system and the gold upper bound. | train | [
"2jiImhE3u5Z",
"ymJxzuOkngK",
"ZnIi2ppmnRK",
"gGyrmmtA5L",
"E53Sl9wB2yP",
"3g862LibCZv",
"0gVCSLmz1Xc",
"ZsJV0iMxB_m",
"vb1p5MlOIt",
"N3XGjiv0JX",
"XGPXDrjKVS8"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the efforts made in the rebuttal. I have carefully read the response. They address a large part of my concerns and I do not have further questions. I really appreciate your honest discussion with NAUS and the convincing comparison with NAUS. \n\nRegarding my initial comments, I wish to clarify that the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"3g862LibCZv",
"nips_2022_KXybrIUJnya",
"XGPXDrjKVS8",
"N3XGjiv0JX",
"vb1p5MlOIt",
"0gVCSLmz1Xc",
"ZsJV0iMxB_m",
"nips_2022_KXybrIUJnya",
"nips_2022_KXybrIUJnya",
"nips_2022_KXybrIUJnya",
"nips_2022_KXybrIUJnya"
] |
nips_2022_KwwBBSzQgRX | Optimal Rates for Regularized Conditional Mean Embedding Learning | We address the consistency of a kernel ridge regression estimate of the conditional mean embedding (CME), which is an embedding of the conditional distribution of $Y$ given $X$ into a target reproducing kernel Hilbert space $\mathcal{H}_Y$. The CME allows us to take conditional expectations of target RKHS functions, and has been employed in nonparametric causal and Bayesian inference.
We address the misspecified setting, where the target CME is
in the space of Hilbert-Schmidt operators acting from an input interpolation space between $\mathcal{H}_X$ and $L_2$, to $\mathcal{H}_Y$. This space of operators is shown to be isomorphic to a newly defined vector-valued interpolation space. Using this isomorphism, we derive a novel and adaptive statistical learning rate for the empirical CME estimator under the misspecified setting. Our analysis reveals that our rates match the optimal $O(\log n / n)$ rates without assuming $\mathcal{H}_Y$ to be finite dimensional. We further establish a lower bound on the learning rate, which shows that the obtained upper bound is optimal. | Accept | There is a wide consensus among reviewers, after the discussion period, that this submission has strong and novel results where a clear minimax optimality is established for the conditional mean embedding problem.
For the camera ready version, the authors are strongly encouraged to clearly implement all revisions mentioned in the discussion related to the correctness of the lower bound. | train | [
"kb9woSjlglE",
"mtZsSqETFki",
"FE8xY_7ihew",
"M0_cyWOQ0Ny",
"DYdtRqh2Sx",
"aPoAi213J2n",
"blBTW6QEkG",
"sAd2EBipcz1",
"lqUBRPRQCj",
"dKaiod9Z5OC"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I'd like to thank the authors for addressing mine and other reviewers' concerns. I believe the paper makes a significant contribution to the theory of kernel mean embeddings in misspecified settings and should be accepted.",
" We thank the reviewer for their time and effort in reading our manuscript. We agree t... | [
-1,
-1,
-1,
-1,
-1,
-1,
9,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
2
] | [
"FE8xY_7ihew",
"dKaiod9Z5OC",
"lqUBRPRQCj",
"sAd2EBipcz1",
"blBTW6QEkG",
"nips_2022_KwwBBSzQgRX",
"nips_2022_KwwBBSzQgRX",
"nips_2022_KwwBBSzQgRX",
"nips_2022_KwwBBSzQgRX",
"nips_2022_KwwBBSzQgRX"
] |
nips_2022_7WuCttgNQ79 | Scaling Multimodal Pre-Training via Cross-Modality Gradient Harmonization | Self-supervised pre-training recently demonstrates success on large-scale multimodal data, and state-of-the-art contrastive learning methods often enforce the feature consistency from cross-modality inputs, such as video/audio or video/text pairs. Despite its convenience to formulate and leverage in practice, such cross-modality alignment (CMA) is only a weak and noisy supervision, since two modalities can be semantically misaligned even they are temporally aligned. For example, even in the (often adopted) instructional videos, a speaker can sometimes refer to something that is not visually present in the current frame; and the semantic misalignment would only be more unpredictable for the raw videos collected from unconstrained internet sources. We conjecture that might cause conflicts and biases among modalities, and may hence prohibit CMA from scaling up to training with larger and more heterogeneous data. This paper first verifies our conjecture by observing that, even in the latest VATT pre-training using only narrated videos, there exist strong gradient conflicts between different CMA losses within the same sample triplet (video, audio, text), indicating them as the noisy source of supervision. We then propose to harmonize such gradients during pre-training, via two techniques: (i) cross-modality gradient realignment: modifying different CMA loss gradients for one sample triplet, so that their gradient directions are in more agreement; and (ii) gradient-based curriculum learning: leveraging the gradient conflict information on an indicator of sample noisiness, to develop a curriculum learning strategy to prioritize training with less noisy sample triplets. Applying those gradient harmonization techniques to pre-training VATT on the HowTo100M dataset, we consistently improve its performance on different downstream tasks. Moreover, we are able to scale VATT pre-training to more complicated non-narrative Youtube8M dataset to further improve the state-of-the-arts. | Accept | Authors present a mechanism to improve triple cross-modality alignment between video-text / video-audio with a shared encoding backbone. The basic idea is to measure the cosine similarity between the gradients coming from video-text and video-audio. When the gradients conflict, the authors postulate that this is caused by mis-matched data. In order to address, the gradients can either be re-aligned, or this information can be used for curriculum learning to filter noisy samples.
Authors study on 6 datasets, modifying VATT with their approach, and demonstrate consistent improvements in performance.
Pros:
- [R] Well written (disagreement) and easy to follow.
- [R] Interesting method with clear impact.
- [R] Experimental evaluation is comprehensive
- [R] Method can scale up to noisier data (YouTube8M) to demonstrate even further gains in performance.
Cons:
- [AC] Authors assume that when gradients conflict, this correlates with noisy data. But no study was performed to confirm this. The method may improve performance on end tasks, but the motivation as to why it improves performance is only based on assumptions, and no data. Authors could greatly improve the paper by doing a study on a random sample of conflicting gradients to confirm that the data samples in those situations are misaligned more than when the gradients are aligned.
- [R] Only applies to modality agnostic single backbone setting. Authors should make this more clear in the paper. Authors have addressed this concern.
- [R] Writing could be improved.
Unanimous reviewer ratings on accept. In light of the utility of the approach, the measured improvements in performance, and reviewer comments, AC recommends accept. AC, however, recommends to authors to reframe their writing to emphasize that they *assume* gradient misalignment correlates to noisy samples, or perform an experiment to confirm that this is the case.
AC Rating: Accept | train | [
"kdCSa8jYqH",
"S6CSrlcAcA",
"YjJCJviRxkS",
"Am7u2XGWcmq",
"F5U2UaGP9Cq",
"1gUMYTJYRv8",
"fS7XDYMlCtx",
"8DWahYTDYO"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the acknowledgement, we will add additional results on MS-CLIP [r1] in the final version of the paper, as soon as they have the pre-training code released.",
" Thanks for the clarification and insights. Look forward to seeing these additional results in the final version of the paper. (Note that MS-C... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"S6CSrlcAcA",
"F5U2UaGP9Cq",
"fS7XDYMlCtx",
"8DWahYTDYO",
"1gUMYTJYRv8",
"nips_2022_7WuCttgNQ79",
"nips_2022_7WuCttgNQ79",
"nips_2022_7WuCttgNQ79"
] |
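The cross-modality gradient harmonization record above hinges on measuring the cosine similarity between the gradients of the video-text and video-audio CMA losses and re-aligning them when they conflict. The exact re-alignment rule is not given in the abstract, so the sketch below uses a PCGrad-style projection as a stand-in assumption; the function names and toy gradients are illustrative, not taken from the paper's code.

```python
import numpy as np

def gradient_conflict(g_vt, g_va):
    """Cosine similarity between two flattened loss gradients; negative means conflict."""
    denom = np.linalg.norm(g_vt) * np.linalg.norm(g_va) + 1e-12
    return float(np.dot(g_vt, g_va) / denom)

def realign(g_vt, g_va):
    """PCGrad-style surgery (an assumption, not necessarily the paper's rule):
    if the two gradients conflict, drop from each the component along the other."""
    if gradient_conflict(g_vt, g_va) >= 0.0:
        return g_vt, g_va  # already in agreement, leave untouched
    g_vt_new = g_vt - (np.dot(g_vt, g_va) / (np.dot(g_va, g_va) + 1e-12)) * g_va
    g_va_new = g_va - (np.dot(g_va, g_vt) / (np.dot(g_vt, g_vt) + 1e-12)) * g_vt
    return g_vt_new, g_va_new

# Toy gradients for one (video, audio, text) triplet.
g_video_text = np.array([1.0, 0.2, -0.5])
g_video_audio = np.array([-0.8, 0.1, 0.6])
print(gradient_conflict(g_video_text, g_video_audio))  # negative, i.e. conflicting
print(realign(g_video_text, g_video_audio))
```

The same conflict score can serve the second technique named in the abstract: triplets whose CMA gradients disagree strongly can be down-weighted or deferred in a curriculum.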
nips_2022_PbKa0yApPq5 | A permutation-free kernel two-sample test | The kernel Maximum Mean Discrepancy~(MMD) is a popular multivariate distance metric between distributions. The usual kernel-MMD test statistic (for two-sample testing) is a degenerate U-statistic under the null, and thus it has an intractable limiting null distribution. Hence, the standard approach for designing a level-$(1-\alpha)$ two-sample test using this statistic involves selecting the rejection threshold as the $(1-\alpha)$-quantile of the permutation distribution. The resulting nonparametric test has finite-sample validity but suffers from large computational cost, since the test statistic must be recomputed for every permutation.
We propose the cross-MMD, a new quadratic time MMD test statistic based on sample-splitting and studentization. We prove that under mild assumptions, it has a standard normal limiting distribution under the null. Importantly, we also show that the resulting test is consistent against any fixed alternative, and when using the Gaussian kernel, it has minimax rate-optimal power against local alternatives. For large sample-sizes, our new cross-MMD provides a significant speedup over the MMD, for only a slight loss in power. | Accept | The paper proposes a variant of MMD test, which is asymptotically normal. The proposed method is computed easily with data-splitting and studentization. The paper provides a solid theoretical study about the consistency and minimax optimal rate for local alternatives. The experimental results also show favorable results.
The proposed simple method for obtaining the asymptotic normality is a significant advance in the topic of MMD, which is a popular statistic for comparing two distributions but suffers from the complicated asymptotics in the previous variants. The theoretical study is solid. We believe the work is worth being presented in NeurIPS. | train | [
"uXhnas6ZbDl",
"7eI5Nxql6M",
"8sRaQuuNWsi",
"bawJUAgsblb",
"3laxjw5k8Tc",
"ZKpen9k8r86",
"LoOpZStu8Z",
"v0_mXHHGCBG",
"nn2blWK3Dq",
"-FbWtaC3bR",
"-EFNxczMd28"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors have done a thorough job of responding to my comments and questions as well as the other reviewers. I have no other concerns at this time.",
" We thank the reviewers for their feedback. We have made the following changes to the manuscript:\n\n1. We have added all the additional experimental results,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"3laxjw5k8Tc",
"nips_2022_PbKa0yApPq5",
"bawJUAgsblb",
"-EFNxczMd28",
"-FbWtaC3bR",
"nn2blWK3Dq",
"v0_mXHHGCBG",
"nips_2022_PbKa0yApPq5",
"nips_2022_PbKa0yApPq5",
"nips_2022_PbKa0yApPq5",
"nips_2022_PbKa0yApPq5"
] |
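The cross-MMD record above obtains a standard normal null distribution through sample splitting and studentization. The abstract does not spell the statistic out, so the following is one plausible reading under a Gaussian kernel: estimate a kernel witness on the second halves of the two samples, evaluate it on the first halves, and studentize the resulting mean difference. The function names, the half-half split, and the bandwidth choice are assumptions; consult the paper for the exact definition.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * bandwidth^2)) for all pairs."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def cross_mmd_statistic(x, y, bandwidth=1.0):
    """Studentized cross-MMD (sketch): split both samples in half, evaluate the witness
    built from the second halves on the first halves, studentize the mean difference."""
    x1, x2 = np.array_split(x, 2)
    y1, y2 = np.array_split(y, 2)
    # Witness values f(z) = mean_j k(z, x2_j) - mean_j k(z, y2_j), evaluated on the first halves.
    fx = gaussian_kernel(x1, x2, bandwidth).mean(1) - gaussian_kernel(x1, y2, bandwidth).mean(1)
    fy = gaussian_kernel(y1, x2, bandwidth).mean(1) - gaussian_kernel(y1, y2, bandwidth).mean(1)
    diff = fx.mean() - fy.mean()
    se = np.sqrt(fx.var(ddof=1) / len(fx) + fy.var(ddof=1) / len(fy))
    return diff / se  # approximately N(0, 1) under the null for the paper's statistic

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(400, 2))
y = rng.normal(0.5, 1.0, size=(400, 2))
stat = cross_mmd_statistic(x, y)
print(stat > 1.645)  # one-sided level-0.05 rejection using the standard normal quantile
```

No permutations are needed here, which is the source of the quadratic-time speedup the abstract advertises over the permutation-calibrated MMD test.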
nips_2022_edgCBcwZxgd | Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis | Advanced deep neural networks (DNNs), designed by either human or AutoML algorithms, are growing increasingly complex. Diverse operations are connected by complicated connectivity patterns, e.g., various types of skip connections. Those topological compositions are empirically effective and observed to smooth the loss landscape and facilitate the gradient flow in general. However, it remains elusive to derive any principled understanding of their effects on the DNN capacity or trainability, and to understand why or in which aspect one specific connectivity pattern is better than another. In this work, we theoretically characterize the impact of connectivity patterns on the convergence of DNNs under gradient descent training in fine granularity. By analyzing a wide network's Neural Network Gaussian Process (NNGP), we are able to depict how the spectrum of an NNGP kernel propagates through a particular connectivity pattern, and how that affects the bound of convergence rates. As one practical implication of our results, we show that by a simple filtration of "unpromising" connectivity patterns, we can trim down the number of models to evaluate, and significantly accelerate the large-scale neural architecture search without any overhead. | Accept | This paper studies the relationship between connectivity of a deep network and its convergence, both theoretically and empirically. The paper also studies simpler metrics such as effective depth and width to guide the architecture search. Overall this is an impressive theoretical paper supported by empirical evidences.
All three reviewers find the paper a valuable contribution to an important theoretical problem in deep learning. After reading the rebuttals, Reviewer rAbP recommended accepting this paper in its current form. Reviewer D7qw felt that all the concerns had been well addressed, and increased the score by one. Reviewer 6D9f agreed with the authors' response. | test | [
"ejF7iQ68CWI",
"JIUTq2wHY9m",
"YSgj65oHV2",
"P9oN1BnZeAe",
"5tJE63-tYGJ",
"RLPmCTZqFPP",
"eSypwYKAp2",
"xaBfOGhiNVq",
"7Dzu6DbY6iR",
"WCawzWJWC9p",
"gVGp7pN9uj"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your time and response! We will include your suggestions in our camera ready!",
" We greatly appreciate your further comments!\n\nWe agree that convergence is a part of the whole picture of a network's property. Especially, a network's generalization and functional complexity are also important as... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"YSgj65oHV2",
"P9oN1BnZeAe",
"5tJE63-tYGJ",
"eSypwYKAp2",
"gVGp7pN9uj",
"WCawzWJWC9p",
"7Dzu6DbY6iR",
"nips_2022_edgCBcwZxgd",
"nips_2022_edgCBcwZxgd",
"nips_2022_edgCBcwZxgd",
"nips_2022_edgCBcwZxgd"
] |
nips_2022_5aZ8umizItU | Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers | Complex time-varying systems are often studied by abstracting away from the dynamics of individual components to build a model of the population-level dynamics from the start. However, when building a population-level description, it can be easy to lose sight of each individual and how they contribute to the larger picture. In this paper, we present a novel transformer architecture for learning from time-varying data that builds descriptions of both the individual as well as the collective population dynamics. Rather than combining all of our data into our model at the onset, we develop a separable architecture that operates on individual time-series first before passing them forward; this induces a permutation-invariance property and can be used to transfer across systems of different size and order. After demonstrating that our model can be applied to successfully recover complex interactions and dynamics in many-body systems, we apply our approach to populations of neurons in the nervous system. On neural activity datasets, we show that our model not only yields robust decoding performance, but also provides impressive performance in transfer across recordings of different animals without any neuron-level correspondence. By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding. | Accept | This paper introduces the multi-stage Embedded Interaction Transformer which models individual channels of systems with multiple interacting elements and then models their interactions. The approach is applied to simulated systems to recover known interaction-dynamics as well as neural datasets, revealing transferability of some model parameters across animals. Essentially the model is structured to solve a supervised regression/classification problem where the input corresponds to many (possibly a variable number of) timeseries and the timeseries channels reflect observations of interacting elements.
Reviewers generally found the paper clear, proposing a simple innovation, and applied to an interesting problem class involving neural data. Two of the reviewers expressed some concerns about the degree of novelty in the proposed approach. The authors responded to this point directly and while they didn't totally satisfy reviewer 6wnv, this reviewer still updated their rating from a 4 to a 5.
In my own assessment, aligned with the less enthusiastic reviewers, I found the technical contribution somewhat incremental, but clearly enough described. And I found the evaluation somewhat limited (consistent with reviewer 7m6L), given the moderate magnitudes of improvement, but adequate. That the EIT generalizes better than other approaches does increase its potential impact and may inspire future follow-up work. Given the distribution of reviewer ratings, I'm willing to endorse the paper. | train | [
"_Yh_0eA2ttV",
"ufHshP68VbV",
"J8BVObQ8pyZ",
"uu-ErKlVfYi",
"oXg3rra3ydt",
"KtSVIRTvHJ4V",
"8SoFv7oEKSg",
"vnqCjIeL7b",
"DG42PljQ-LA",
"p6ewxQerXc",
"ys6D7SKwAbW",
"OPh0NVPmnEr",
"bMiaXpzs1jI"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed answers!\n\nRegarding the problem formulation and some presentation issues, the authors mostly resolve the concern. The novelty of the work is also clarified through the answer and the general response.\n\nAlthough the author claims the novelty in terms of the general framework, especially... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"uu-ErKlVfYi",
"bMiaXpzs1jI",
"OPh0NVPmnEr",
"ys6D7SKwAbW",
"ys6D7SKwAbW",
"p6ewxQerXc",
"p6ewxQerXc",
"DG42PljQ-LA",
"nips_2022_5aZ8umizItU",
"nips_2022_5aZ8umizItU",
"nips_2022_5aZ8umizItU",
"nips_2022_5aZ8umizItU",
"nips_2022_5aZ8umizItU"
] |
nips_2022_11nMVZK0WYM | Pruning has a disparate impact on model accuracy | Network pruning is a widely-used compression technique that is able to significantly scale down overparameterized models with minimal loss of accuracy. This paper shows that pruning may create or exacerbate disparate impacts. The paper sheds light on the factors that cause such disparities, suggesting that differences in gradient norms and distance to the decision boundary across groups are responsible for this critical issue. It analyzes these factors in detail, providing both theoretical and empirical support, and proposes a simple, yet effective, solution that mitigates the disparate impacts caused by pruning. | Accept | The paper hits two important angles in current ML: first, offering theoretical predictions about the behavior of networks in the context of pruning, and second, providing guidance on how differences in data distributions will be affected by pruning. Examples include face image analysis with labels for race; these should be treated carefully so as not to pass on errors.
Ethics reviewers would prefer that face recognition not be the central example, but serve more as a supporting example. If that modification is possible within the NeurIPS timeframe, should the paper be accepted, I would like to see that happen, but I would not necessarily hold back on publication for that sole reason. At the very minimum there should be a discussion of potential harms of disparate quality and availability in facial recognition technology, and how the current work relates to those. For example, do we *want* facial recognition to work well? Can we spell out the possible harms of disparate quality, such as increased risk of false identification?
The reviewers have engaged with author comments and have specific recommendations for improvements in clarity and evaluation strength. | train | [
"Yb4amFxUh6",
"LukBK12y3g3",
"6yUwkT1M2j1",
"hyuZVtaEqHO",
"e-JxVkIFOJd",
"H_IYkxrab4",
"W2Eug5l-5TL",
"w_aHedsNm8Q",
"nW8aaMLl5en",
"Su9IKjckXUw",
"L-TmvxnCYmx",
"k-6OC8Dfum0",
"Vg72vxrnnP",
"gWP6jh2t78R",
"wZ3DmuwOtG",
"wsXRBJuL7sRs",
"q7eAVodPgOx",
"Mo-4YxVFlhX",
"gh6enpj1rfA"... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" My guess is that ethics review was requested for this paper on the basis of using face recognition datasets to train an ethnicity classifier - this is honestly a pretty risky decision on the part of the authors and I would recommend not doing this.\nAlthough the point the authors are making is good, that the prun... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2022_11nMVZK0WYM",
"6yUwkT1M2j1",
"e-JxVkIFOJd",
"H_IYkxrab4",
"W2Eug5l-5TL",
"gWP6jh2t78R",
"q7eAVodPgOx",
"Mo-4YxVFlhX",
"q7eAVodPgOx",
"k-6OC8Dfum0",
"wZ3DmuwOtG",
"Vg72vxrnnP",
"gh6enpj1rfA",
"Mo-4YxVFlhX",
"q7eAVodPgOx",
"nips_2022_11nMVZK0WYM",
"nips_2022_11nMVZK0WYM",
... |
nips_2022_oNWqs_JRcDD | Efficient Non-Parametric Optimizer Search for Diverse Tasks | Efficient and automated design of optimizers plays a crucial role in full-stack AutoML systems. However, prior methods in optimizer search are often limited by their scalability, generalizability, or sample efficiency. With the goal of democratizing research and application of optimizer search, we present the first efficient, scalable and generalizable framework that can directly search on the tasks of interest. We first observe that optimizer updates are fundamentally mathematical expressions applied to the gradient. Inspired by the innate tree structure of the underlying math expressions, we re-arrange the space of optimizers into a super-tree, where each path encodes an optimizer. This way, optimizer search can be naturally formulated as a path-finding problem, allowing a variety of well-established tree traversal methods to be used as the search algorithm. We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection that leverage the characteristics of optimizer update rules to further boost the sample efficiency. We provide a diverse set of tasks to benchmark our algorithm and demonstrate that, with only 128 evaluations, the proposed framework can discover optimizers that surpass both human-designed counterparts and prior optimizer search methods. Our code is publicly available at https://github.com/ruocwang/enos. | Accept | Two of the reviewers were positive about this paper and thought the work would be of interest to the community. One reviewer felt the paper lacked clarity, but the other reviewers disagreed. I think there will be interest in the optimizer search for deep learning that this paper presents and feel it should appear in NeurIPS. | train | [
"bKFk60MZVV4",
"fwJkaNOUoc",
"dDzLGREAYxA",
"X40N9hUKxnn",
"oEztIhSGr8F",
"BCdhU_y2P0W",
"52Qszrru3At",
"rtFtuL4IVm3",
"tzLwcDbM8eB",
"1anPpoFrHIs",
"8bQokMSD4mOZ",
"fi_mLXG5wwP",
"8qBv6kO8GE",
"DS7XTKJE04",
"90TfphOFc7",
"vHWedxjFsq",
"QEKUvfx-04z",
"yde0QRJ33m7",
"VaHxTshByCO"
... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for going through the response and for raising the evaluation. We are very glad that your questions and concerns are properly addressed. If any further questions come up, please feel free to let us know, and we are more than happy to discuss them with you.",
" I would like to thank the aut... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"fwJkaNOUoc",
"90TfphOFc7",
"oEztIhSGr8F",
"oEztIhSGr8F",
"fi_mLXG5wwP",
"nips_2022_oNWqs_JRcDD",
"rtFtuL4IVm3",
"8bQokMSD4mOZ",
"VaHxTshByCO",
"VaHxTshByCO",
"VaHxTshByCO",
"yde0QRJ33m7",
"QEKUvfx-04z",
"QEKUvfx-04z",
"QEKUvfx-04z",
"nips_2022_oNWqs_JRcDD",
"nips_2022_oNWqs_JRcDD",
... |
nips_2022_OFJSAMwskM | Triangulation candidates for Bayesian optimization | Bayesian optimization involves "inner optimization" over a new-data acquisition criterion which is non-convex/highly multi-modal, may be non-differentiable, or may otherwise thwart local numerical optimizers. In such cases it is common to replace continuous search with a discrete one over random candidates. Here we propose using candidates based on a Delaunay triangulation of the existing input design. We detail the construction of these "tricands" and demonstrate empirically how they outperform both numerically optimized acquisitions and random candidate-based alternatives, and are well-suited for hybrid schemes, on benchmark synthetic and real simulation experiments. | Accept | Overall, I think this paper makes a pretty interesting contribution. I think that one of the fundamental problems of Bayesian optimization that is often swept aside (at least in the literature if not in software) is the fact that--with many acquisition functions--it doesn't really solve global optimization as a problem, but merely shifts that problem around to a much less expensive one (e.g., optimizing the cheap acquisition function rather than the cheap objective function). Despite this, extremely simple procedures like restarted gradient descent methods and discretization are pretty common place, and very little progress has been made or even attempted on this problem in recent years. The authors approach here seems pretty reasonable based on a relatively agreeable intuition.
With that said, I do think it would be very useful for the authors in the camera ready version to address Reviewer czZM's final comments in their last remark. I agree that 2*d initializations for multi-start optimization is extremely small for most problems, especially since modern software can support optimizing from each initialization in batch mode parallel pretty efficiently. Given that multi-start optimization is arguably the predominant method used at least in full software implementations, it's probably worth including a comparison at a variety of initialization budgets, up to *much* larger ones than this. | train | [
"7MbdqWCgS8",
"d_whleFXrE",
"V52P0AIbA2K",
"RueoJV9VemD",
"yT28Fn6Okcm",
"_uODKJbOGU5",
"0xjysuA3gN0",
"vr-sE7SztlT",
"CX7IIwiv-tU",
"SF41gU4NCB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the rebuttal and the feedback from the other reviewers. I'm satisfied with most of the responses except for (1) as I don't think anyone would use so few initialization points given that, e.g., EI is well-known to be numerically zero in large parts of the domain. For example, Ax (which relies on BoTorc... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
4,
4,
5
] | [
"RueoJV9VemD",
"_uODKJbOGU5",
"SF41gU4NCB",
"CX7IIwiv-tU",
"vr-sE7SztlT",
"0xjysuA3gN0",
"nips_2022_OFJSAMwskM",
"nips_2022_OFJSAMwskM",
"nips_2022_OFJSAMwskM",
"nips_2022_OFJSAMwskM"
] |
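The triangulation-candidates record above builds its discrete candidate set from a Delaunay triangulation of the already-evaluated design points. A minimal sketch of the interior part of that construction, using simplex centroids via scipy, is shown below; the paper's full "tricands" set also covers the boundary of the domain and other details not stated in the abstract, and the placeholder acquisition here is deliberately not EI.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulation_candidates(X):
    """Interior candidate points for acquisition maximization:
    one candidate per Delaunay simplex (its centroid) of the current design X (n x d)."""
    tri = Delaunay(X)
    return X[tri.simplices].mean(axis=1)   # shape (#simplices, d)

rng = np.random.default_rng(1)
X = rng.uniform(size=(20, 2))              # existing evaluated inputs in [0, 1]^2
cands = triangulation_candidates(X)

# A candidate-based "inner optimization": evaluate the acquisition only on `cands`.
acq = lambda Z: -np.linalg.norm(Z - 0.5, axis=1)   # placeholder acquisition, not EI
x_next = cands[np.argmax(acq(cands))]
print(x_next)
```

The appeal over uniform random candidates is that centroids sit between existing evaluations, which is where exploratory acquisition criteria tend to place mass.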
nips_2022_9XWHdVCynhp | Rashomon Capacity: A Metric for Predictive Multiplicity in Classification | Predictive multiplicity occurs when classification models with statistically indistinguishable performances assign conflicting predictions to individual samples. When used for decision-making in applications of consequence (e.g., lending, education, criminal justice), models developed without regard for predictive multiplicity may result in unjustified and arbitrary decisions for specific individuals. We introduce a new metric, called Rashomon Capacity, to measure predictive multiplicity in probabilistic classification. Prior metrics for predictive multiplicity focus on classifiers that output thresholded (i.e., 0-1) predicted classes. In contrast, Rashomon Capacity applies to probabilistic classifiers, capturing more nuanced score variations for individual samples. We provide a rigorous derivation for Rashomon Capacity, argue its intuitive appeal, and demonstrate how to estimate it in practice. We show that Rashomon Capacity yields principled strategies for disclosing conflicting models to stakeholders. Our numerical experiments illustrate how Rashomon Capacity captures predictive multiplicity in various datasets and learning models, including neural networks. The tools introduced in this paper can help data scientists measure and report predictive multiplicity prior to model deployment. | Accept | The paper presents a new metric for “predictive multiplicity”, which is the tendency of different models from a hypothesis class with similar overall performance to make different predictions on individual samples; predictive multiplicity is relevant to fairness and interpretability of ML models. The paper also presents analysis and algorithms for computing the metric.
The three reviewers generally agreed in their characterization of the paper: the high-level goal was well motivated and timely, the paper was very well written, and technically solid. They raised concerns/questions about motivation and connection to other ideas (e.g., ensembles, calibration), as well as specific suggestions for the experiments and writing. The authors made substantive changes in response to suggestions (especially those of Reviewer hCae) and wrote very detailed responses to questions. Overall, the remaining hesitation from reviewers centers on the significance and usefulness of these ideas in practice. This left the paper as borderline in its ratings. To the meta-reviewer (who also looked at the paper), some skepticism about whether this is the final solution for characterizing the reliability of ML model predictions is certainly warranted; however, the paper appears to be a solid contribution to a nascent area that provides a starting point and is likely to provoke discussion and follow-on work.
| train | [
"ESLqpaabkSc",
"2P3qhMTel8i",
"vgoRfVlIEP",
"lKDum-07OgF",
"5z0gJ1YoM0R",
"DWAl9KPcLyp",
"36iSpYkHmw",
"iWIgmAlvRc2",
"x_CYY3Q9GQQ",
"RYNR06ZZcxt",
"FJjm41mzErC",
"r2og_I8fvun",
"jNIS3IJkzmt",
"tTOjEm_JqEI",
"7aVpd10Dzw",
"yJIZYueFQ7n",
"rmXrV0wBJlm",
"pSwQ6psiiQi",
"GoeUsREaoQf"... | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"offic... | [
" Thank you once more for your comments and time. We're writing since we are now nearing the end of the author-reviewer discussion period.\nIf you have additional comments, please let us know soon and we would be happy to try to answer before the deadline. If we have addressed your questions, we would appreciate it... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2022_9XWHdVCynhp",
"vgoRfVlIEP",
"lKDum-07OgF",
"yJIZYueFQ7n",
"DWAl9KPcLyp",
"36iSpYkHmw",
"x_CYY3Q9GQQ",
"M90fvGzVQjN",
"7aVpd10Dzw",
"HsSW6_4Mtg4",
"KNnb0vbaNrW",
"2eNEKn9_JzE",
"tTOjEm_JqEI",
"7aVpd10Dzw",
"HsSW6_4Mtg4",
"rmXrV0wBJlm",
"pSwQ6psiiQi",
"GoeUsREaoQf",
"KNn... |
nips_2022_8rfYWE3nyXl | Are All Losses Created Equal: A Neural Collapse Perspective | While cross entropy (CE) is the most commonly used loss function to train deep neural networks for classification tasks, many alternative losses have been developed to obtain better empirical performance. Among them, which one is the best to use is still a mystery, because there seem to be multiple factors affecting the answer, such as properties of the dataset, the choice of network architecture, and so on. This paper studies the choice of loss function by examining the last-layer features of deep networks, drawing inspiration from a recent line of work showing that the global optimal solution of CE and mean-square-error (MSE) losses exhibits a Neural Collapse phenomenon. That is, for sufficiently large networks trained until convergence, (i) all features of the same class collapse to the corresponding class mean and (ii) the means associated with different classes are in a configuration where their pairwise distances are all equal and maximized. We extend such results and show through global solution and landscape analyses that a broad family of loss functions including commonly used label smoothing (LS) and focal loss (FL) exhibits Neural Collapse. Hence, all relevant losses (i.e., CE, LS, FL, MSE) produce equivalent features on training data. In particular, based on the unconstrained feature model assumption, we provide either the global landscape analysis for LS loss or the local landscape analysis for FL loss and show that the (only!) global minimizers are neural collapse solutions, while all other critical points are strict saddles whose Hessians exhibit negative curvature directions either in the global scope for LS loss or in the local scope for FL loss near the optimal solution. The experiments further show that Neural Collapse features obtained from all relevant losses (i.e., CE, LS, FL, MSE) lead to largely identical performance on test data as well, provided that the network is sufficiently large and trained until convergence. | Accept | This paper demonstrates that the neural collapse phenomenon - first observed under cross entropy and MSE losses - occurs with a broad family of other loss functions. The authors include a theoretical justification of this phenomenon as well as experiments exploring its implications. Overall, this paper makes an important contribution to the study of loss functions and raises interesting questions about neural network training. To strengthen the paper, I encourage the authors to incorporate the reviewer’s suggestions for additional experiments (ablation studies and additional datasets). Nevertheless, this paper is well suited for publication at NeurIPS and will be of interest to the neural network community. | train | [
"oWjsld-E9x",
"1Gr0BPxQm4T",
"QgEufDZmSJ4",
"Tvxp63stXWW",
"ScCb8oUwEl-",
"TmW3jqmtQW9",
"p3KGGT3L3RX7",
"6toNez0zYA2",
"2rCw5gsk39i",
"Foh-cxnwamD",
"ysKH21XeM_o"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the appreciation and support of our work. We also appreciate the reviewer’s constructive suggestions in the review, which we will incorporate in our camera-ready version.",
" We thank the reviewer for the appreciation of our efforts and recognition of our work in the response, and... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"QgEufDZmSJ4",
"Tvxp63stXWW",
"6toNez0zYA2",
"TmW3jqmtQW9",
"ysKH21XeM_o",
"Foh-cxnwamD",
"6toNez0zYA2",
"2rCw5gsk39i",
"nips_2022_8rfYWE3nyXl",
"nips_2022_8rfYWE3nyXl",
"nips_2022_8rfYWE3nyXl"
] |
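The neural-collapse record above predicts that, at convergence, last-layer features (i) collapse to their class means and (ii) those means form an equal-angle, maximally separated configuration. A simple numerical check of both properties, in the spirit of the usual NC diagnostics rather than the exact trace statistic used in that literature, is sketched below; the function name and the simplified within/between ratio are my own.

```python
import numpy as np

def collapse_diagnostics(H, y):
    """H: (n, d) last-layer features, y: (n,) integer labels.
    Returns (within-class variability relative to between-class variability,
             max deviation of pairwise class-mean cosines from the simplex-ETF value)."""
    classes = np.unique(y)
    mu = np.stack([H[y == c].mean(0) for c in classes])   # class means
    mu_g = H.mean(0)                                       # global mean
    within = np.mean([np.sum((H[y == c] - mu[i]) ** 2) / max(len(H[y == c]), 1)
                      for i, c in enumerate(classes)])
    between = np.mean(np.sum((mu - mu_g) ** 2, axis=1))
    nc1 = within / (between + 1e-12)                       # -> 0 under collapse (i)
    M = mu - mu_g
    Mn = M / (np.linalg.norm(M, axis=1, keepdims=True) + 1e-12)
    cos = Mn @ Mn.T
    K = len(classes)
    off = cos[~np.eye(K, dtype=bool)]
    nc2 = np.max(np.abs(off + 1.0 / (K - 1)))              # -> 0 for a simplex ETF (ii)
    return nc1, nc2

# Toy example: nearly collapsed features, so both diagnostics come out close to 0.
rng = np.random.default_rng(0)
y = rng.integers(0, 4, size=2000)
H = np.eye(4)[y] * 5.0 + 0.01 * rng.normal(size=(2000, 4))
print(collapse_diagnostics(H, y))
```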
nips_2022_ST5ZUlz_3w | Interventions, Where and How? Experimental Design for Causal Models at Scale | Causal discovery from observational and interventional data is challenging due to limited data and non-identifiability which introduces uncertainties in estimating the underlying structural causal model (SCM). Incorporating these uncertainties and selecting optimal experiments (interventions) to perform can help to identify the true SCM faster. Existing methods in experimental design for causal discovery from limited data either rely on linear assumptions for the SCM or select only the intervention target. In this paper, we incorporate recent advances in Bayesian causal discovery into the Bayesian optimal experimental design framework, which allows for active causal discovery of nonlinear, large SCMs, while selecting both the target and the value to intervene with. We demonstrate the performance of the proposed method on synthetic graphs (Erdos-Rènyi, Scale Free) for both linear and nonlinear SCMs as well as on the \emph{in-silico} single-cell gene regulatory network dataset, DREAM. | Accept | The paper proposes a Bayesian optimization strategy for causal discovery under causal sufficiency and additive noise. The main point is to choose interventions that maximize mutual information between parameters and observations. The procedure combines techniques from several existing methods. The authors have successfully addressed questions raised by reviewer PxXf about related work. Several reviewers praise the writing (SUP9,3PzT,25Ya). Overall, this is a strong paper, with atomic interventions as the main limitation (see reviewers 25Ya,h91K) | train | [
"NpUiUNjqy5w",
"b7LOQByTKzj",
"luHuYOGXL-8",
"5MK0YF2_9Q",
"yyGfO4DzkY",
"5KMNex9mSSF",
"asdulNUeTAQ",
"Z8jYIQGLRTP",
"goIdwjGEUc2",
"dA5Rm-n4KPG",
"0R956k1MVbI",
"HuvJI7mQ4JH",
"BYHOGct34Pe",
"-WDuV8iCl0Y",
"WPPcGzM-0p",
"8gWe5b9yEUj",
"pmn1h65hayw",
"K_HS5g0eemn"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I sincerely thank the authors for the clarification of the notation and the use of the GP model. This significantly answers my questions. I maintain my current score.",
" I think the authors did a great job in the rebuttal and answered most of my (and other reviewers') concerns (including the similarity with pr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
8,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3,
3,
2
] | [
"goIdwjGEUc2",
"dA5Rm-n4KPG",
"0R956k1MVbI",
"nips_2022_ST5ZUlz_3w",
"nips_2022_ST5ZUlz_3w",
"pmn1h65hayw",
"pmn1h65hayw",
"WPPcGzM-0p",
"K_HS5g0eemn",
"-WDuV8iCl0Y",
"BYHOGct34Pe",
"8gWe5b9yEUj",
"nips_2022_ST5ZUlz_3w",
"nips_2022_ST5ZUlz_3w",
"nips_2022_ST5ZUlz_3w",
"nips_2022_ST5ZUl... |
nips_2022_r5rzV51GZx | CoPur: Certifiably Robust Collaborative Inference via Feature Purification | Collaborative inference leverages diverse features provided by different agents (e.g., sensors) for more accurate inference. A common setup is where each agent sends its embedded features instead of the raw data to the Fusion Center (FC) for joint prediction. In this setting, we consider the inference-time attacks when a small fraction of agents are compromised. The compromised agent either does not send embedded features to the FC, or sends arbitrarily embedded features. To address this, we propose a certifiably robust COllaborative inference framework via feature PURification (CoPur), by leveraging the block-sparse nature of adversarial perturbations on the feature vector, as well as exploring the underlying redundancy across the embedded features (by assuming the overall features lie on an underlying lower dimensional manifold). We theoretically show that the proposed feature purification method can robustly recover the true feature vector, despite adversarial corruptions and/or incomplete observations. We also propose and test an untargeted distributed feature-flipping attack, which is agnostic to the model, training data, label, as well as the features held by other agents, and is shown to be effective in attacking state-of-the-art defenses. Experiments on ExtraSensory and NUS-WIDE datasets show that CoPur significantly outperforms existing defenses in terms of robustness against targeted and untargeted adversarial attacks. | Accept | This paper focuses on improving the robustness of the model for collaborative inference. A pre-processing-based defense method is proposed against inference phase attacks. Both theoretical analyses and empirical evaluations are provided to demonstrate the effectiveness of the proposed method.
In the review process, mixed reviews were received. Four reviewers are positive about this paper, while one reviewer is negative. Reviewers appreciate the strengths of the paper: (1) the idea is interesting; (2) the pre-processing strategy does not require much computational cost; (3) the theoretical analysis is detailed and solid; (4) empirical evaluations are overall convincing.
One reviewer who is negative about this paper points out major concerns: (1) theoretical results on the sparsity; (2) empirical evaluations on the sparsity; (3) problems in the presentation of the paper. Major concerns have been addressed during the rebuttal. We suggest the authors carefully merge the rebuttal responses into the final version. | test | [
"UVoJfpeCURO",
"S33JrWXBlwH",
"6V_PhxaTkT",
"XbzCX49hd90",
"NJUfG8oifb",
"4IkaRVED8y",
"jMr075usH-v",
"vWyCGRjaUgo",
"8PDbx39kwfT",
"kvxit1i7p9N",
"NvoUvpcnFL",
"IoWMn77lf-N",
"U3FqVQ7TgImA",
"v-YMvyOihD",
"8bdrZANsJ8",
"b5QsepLxS1n",
"OcnYtglr8zu",
"WvzhdtzcEh",
"HtuyaR6mxhK",
... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_... | [
" Thanks for your feedback on our response! We would appreciate if you can further consider the merit of a certifiable defense to solve a trendy distributed/collaborative inference problem, as well as the impact of theoretical innovation on separating low-dim manifold and block-sparse structure. We believe our work... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4,
2
] | [
"S33JrWXBlwH",
"NJUfG8oifb",
"HtuyaR6mxhK",
"OcnYtglr8zu",
"_TbbzbOh5gC",
"9wxtj0SQ9sQ",
"vWyCGRjaUgo",
"U3FqVQ7TgImA",
"_TbbzbOh5gC",
"_TbbzbOh5gC",
"9wxtj0SQ9sQ",
"HtuyaR6mxhK",
"WvzhdtzcEh",
"WvzhdtzcEh",
"WvzhdtzcEh",
"OcnYtglr8zu",
"nips_2022_r5rzV51GZx",
"nips_2022_r5rzV51GZx... |
nips_2022_wZEfHUM5ri | CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion | Masked Image Modeling (MIM) has recently been established as a potent pre-training paradigm. A pretext task is constructed by masking patches in an input image, and this masked content is then predicted by a neural network using visible patches as sole input. This pre-training leads to state-of-the-art performance when finetuned for high-level semantic tasks, e.g. image classification and object detection. In this paper we instead seek to learn representations that transfer well to a wide variety of 3D vision and lower-level geometric downstream tasks, such as depth prediction or optical flow estimation. Inspired by MIM, we propose an unsupervised representation learning task trained from pairs of images showing the same scene from different viewpoints. More precisely, we propose the pretext task of cross-view completion where the first input image is partially masked, and this masked content has to be reconstructed from the visible content and the second image. In single-view MIM, the masked content often cannot be inferred precisely from the visible portion only, so the model learns to act as a prior influenced by high-level semantics. In contrast, this ambiguity can be resolved with cross-view completion from the second unmasked image, on the condition that the model is able to understand the spatial relationship between the two images. Our experiments show that our pretext task leads to significantly improved performance for monocular 3D vision downstream tasks such as depth estimation. In addition, our model can be directly applied to binocular downstream tasks like optical flow or relative camera pose estimation, for which we obtain competitive results without bells and whistles, i.e., using a generic architecture without any task-specific design. | Accept | Combining masked image modeling with cross-view completion, the paper develops a self-supervised pretext task appropriate for learning visual representations for downstream 3D tasks. After the author response and discussion period, all four reviewers give positive ratings. The pretext task design is novel, well-motivated for 3D representation learning, and shown to be experimentally effective on downstream 3D vision tasks. | train | [
"p5RPUJytiJy",
"8L-5d8eG5Gx",
"Yq_Wfr1QuYM",
"HM-ByatK5s1",
"Ras_Cft3e69",
"UOOqKNl4y21",
"U_8QbS9fYUq",
"oWI-nUQt1u",
"a59WF243HbS",
"Ygl-miXegTs",
"Lo59Jy_kUie",
"VugKGmF0MUA"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nThank you for your answer. \n\nWe agree that the domain shift can be greater with a fully virtual simulation than with data rendered from real world 3D scans or extremely photorealistic settings. Still, we believe that, if the most interesting part of the signal (the geometry) is present in a given setup (fully... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"8L-5d8eG5Gx",
"HM-ByatK5s1",
"VugKGmF0MUA",
"Lo59Jy_kUie",
"Ygl-miXegTs",
"Ygl-miXegTs",
"a59WF243HbS",
"nips_2022_wZEfHUM5ri",
"nips_2022_wZEfHUM5ri",
"nips_2022_wZEfHUM5ri",
"nips_2022_wZEfHUM5ri",
"nips_2022_wZEfHUM5ri"
] |
nips_2022_F2mhzjHkQP | On the SDEs and Scaling Rules for Adaptive Gradient Algorithms | Approximating Stochastic Gradient Descent (SGD) as a Stochastic Differential Equation (SDE) has allowed researchers to enjoy the benefits of studying a continuous optimization trajectory while carefully preserving the stochasticity of SGD. Analogous study of adaptive gradient methods, such as RMSprop and Adam, has been challenging because there were no rigorously proven SDE approximations for these methods. This paper derives the SDE approximations for RMSprop and Adam, giving theoretical guarantees of their correctness as well as experimental validation of their applicability to common large-scale vision and language settings. A key practical result is the derivation of a square root scaling rule to adjust the optimization hyperparameters of RMSprop and Adam when changing batch size, and its empirical validation in deep learning settings. | Accept | The paper derives SDE approximations for the two optimization algorithms RMSprop and Adam. The authors have addressed the concerns raised by the reviewers during the rebuttal period. All the reviewers agreed that the paper should be accepted at NeurIPS 2022. Please incorporate the reviewers’ suggestions in their detailed reviews and revise the final version of the paper accordingly. | val | [
"SAeAwTgYrN9",
"VtzN3BwIM4C",
"MU9lHnz2p7",
"ZwrzSpP6_a",
"VzWGcZiAQSt",
"iA8H6uIUa2Q",
"kQZ1Fpd40aO",
"8FowhhQGdZ-",
"p2oYn5WCgVv",
"Ia0BLShl_Dj",
"XPLaJVWQRv_",
"zUy67XYhnZb"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for clarifying the question!\n\n**Can your analysis be applied to the variable variance of the gradient (due to the change of batch size), and if so, can we formally show the correctness of the LR schedulers in large-batch training?**\n\nThe LR schedulers in the two papers the reviewer mentioned change LR ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"VtzN3BwIM4C",
"ZwrzSpP6_a",
"kQZ1Fpd40aO",
"zUy67XYhnZb",
"XPLaJVWQRv_",
"Ia0BLShl_Dj",
"p2oYn5WCgVv",
"nips_2022_F2mhzjHkQP",
"nips_2022_F2mhzjHkQP",
"nips_2022_F2mhzjHkQP",
"nips_2022_F2mhzjHkQP",
"nips_2022_F2mhzjHkQP"
] |
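The practical output of the SDE record above is a square-root scaling rule for RMSprop/Adam hyperparameters when the batch size changes. The abstract only names the rule; the helper below encodes one common reading of it, in which the learning rate scales with the square root of the batch-size factor and the averaging horizons shrink so that 1 - beta scales linearly. The epsilon adjustment in particular is an assumption to verify against the paper before use.

```python
def sqrt_scale_adam_hparams(lr, beta1, beta2, eps, kappa):
    """Square-root scaling rule (as read from the abstract; verify against the paper):
    when the batch size is multiplied by kappa, scale the learning rate by sqrt(kappa)
    and shrink the exponential-averaging horizons so 1 - beta grows linearly in kappa."""
    return {
        "lr": lr * kappa ** 0.5,
        "beta1": 1.0 - kappa * (1.0 - beta1),
        "beta2": 1.0 - kappa * (1.0 - beta2),
        "eps": eps / kappa ** 0.5,   # assumed adjustment, not stated in the abstract
    }

# Example: doubling the batch size from 256 to 512 (kappa = 2).
print(sqrt_scale_adam_hparams(lr=3e-4, beta1=0.9, beta2=0.999, eps=1e-8, kappa=2.0))
```

This mirrors how the well-known linear scaling rule is used for SGD, but with the square-root dependence that the SDE analysis for adaptive methods suggests.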
nips_2022_6Nh0D44tRAz | Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees | We study the problem of representation learning in stochastic contextual linear bandits. While the primary concern in this domain is usually to find \textit{realizable} representations (i.e., those that allow predicting the reward function at any context-action pair exactly), it has been recently shown that representations with certain spectral properties (called \textit{HLS}) may be more effective for the exploration-exploitation task, enabling \textit{LinUCB} to achieve constant (i.e., horizon-independent) regret. In this paper, we propose \textsc{BanditSRL}, a representation learning algorithm that combines a novel constrained optimization problem to learn a realizable representation with good spectral properties with a generalized likelihood ratio test to exploit the recovered representation and avoid excessive exploration. We prove that \textsc{BanditSRL} can be paired with any no-regret algorithm and achieve constant regret whenever an \textit{HLS} representation is available. Furthermore, \textsc{BanditSRL} can be easily combined with deep neural networks and we show how regularizing towards \textit{HLS} representations is beneficial in standard benchmarks. | Accept | All reviewers are in agreement that this paper provides better algorithms to learn good representation from a set of realizable representation in contextual linear bandits. The proposed algorithm is deemed to be novel and general. Accept.
In the final version, please release the code for reproducibility. The authors are encouraged to include the detailed comparison with LEADER given in the rebuttal. | train | [
"b-jTnE0RqTSI",
"3MeIjpyBByG",
"i0C4B_TF7HI",
"Q8MVCpblgaF",
"Mx0EJqhusxl",
"F2MF26QX5h",
"hIJnHtk1klF"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the response and my concerns are addressed. My rating will remain unchanged.",
" Thank you for the review and the positive feedback.\n\n**Point 1 (dataset-based experiments):** While cleaning the code after the deadline, we realized that the experiments based on datasets (i.e. statlog, magic, covert... | [
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"i0C4B_TF7HI",
"hIJnHtk1klF",
"F2MF26QX5h",
"Mx0EJqhusxl",
"nips_2022_6Nh0D44tRAz",
"nips_2022_6Nh0D44tRAz",
"nips_2022_6Nh0D44tRAz"
] |
nips_2022_dTTKMy00PTJ | Universal Rates for Interactive Learning | Consider the task of learning an unknown concept from a given concept class; to what extent does interacting with a domain expert accelerate the learning process? It is common to measure the effectiveness of learning algorithms by plotting the "learning curve", that is, the decay of the error rate as a function of the algorithm's resources (examples, queries, etc). Thus, the overarching question in this work is whether (and which kind of) interaction accelerates the learning curve. Previous work in interactive learning focused on uniform bounds on the learning rates which only capture the upper envelope of the learning curves over families of data distributions. We thus formalize our overarching question within the distribution dependent framework of universal learning, which aims to understand the performance of learning algorithms on every data distribution, but without requiring a single upper bound which applies uniformly to all distributions. Our main result reveals a fundamental trichotomy of interactive learning rates, thus providing a complete characterization of universal interactive learning. As a corollary we deduce a strong affirmative answer to our overarching question, showing that interaction is beneficial. Remarkably, we show that in important cases such benefits are realized with label queries, that is, by active learning algorithms. On the other hand, our lower bounds apply to arbitrary binary queries and, hence, they hold in any interactive learning setting. | Accept | This paper provides a characterization of learning rates for interactive learning in the universal learning framework. All reviewers praised the novelty and quality of this submission. | train | [
"Sxk93mhFsno",
"Ul0kQutoQVZ",
"Bmk6E9VSQcd",
"OJZIlvTXTepS",
"kyK-Sk87zZI_",
"OJLy2W67ySdF",
"xMTc-9obQGL",
"vFdOLFOlCp",
"FF8rCKB2xXd"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarifying comments. These are indeed nice directions for future research.",
" > *What are the precise technical differences between your proofs and those of [BHM+21] for the passive setting?*\n\nWe now explain the technical differences between our work and [BHM+21] in detail.\n\n* Upper bound... | [
-1,
-1,
-1,
-1,
-1,
-1,
9,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"OJZIlvTXTepS",
"Bmk6E9VSQcd",
"FF8rCKB2xXd",
"vFdOLFOlCp",
"xMTc-9obQGL",
"nips_2022_dTTKMy00PTJ",
"nips_2022_dTTKMy00PTJ",
"nips_2022_dTTKMy00PTJ",
"nips_2022_dTTKMy00PTJ"
] |
nips_2022_5g7l7EJoZT | Wavelet Feature Maps Compression for Image-to-Image CNNs | Convolutional Neural Networks (CNNs) are known for requiring extensive computational resources, and quantization is among the best and most common methods for compressing them. While aggressive quantization (i.e., less than 4-bits) performs well for classification, it may cause severe performance degradation in image-to-image tasks such as semantic segmentation and depth estimation. In this paper, we propose Wavelet Compressed Convolution (WCC)---a novel approach for high-resolution activation maps compression integrated with point-wise convolutions, which are the main computational cost of modern architectures. To this end, we use an efficient and hardware-friendly Haar-wavelet transform, known for its effectiveness in image compression, and define the convolution on the compressed activation map. We experiment with various tasks that benefit from high-resolution input. By combining WCC with light quantization, we achieve compression rates equivalent to 1-4bit activation quantization with relatively small and much more graceful degradation in performance. Our code is available at https://github.com/BGUCompSci/WaveletCompressedConvolution. | Accept | The paper proposes a method for compressing feature maps in convolutional neural networks to reduce the computational cost. The method is tailored to image-to-image networks, where existing compression schemes do not work well.
After the rebuttal, all reviewers support the publication of the manuscript. The reviewers note that the problem setting is important (i.e., compression for image-to-image networks, where existing compression schemes do not work well as identified by the paper under review), and that the proposed method works well and is interesting. The reviewers also identified a few weaknesses that for the most part have been cleared up by the author's response. I, therefore, recommend accepting the paper.
| train | [
"6a-bboTYOi",
"V-2OIZ80sOG",
"nu3sTTLhkcZ",
"Zm9NUE8VibB",
"jKHOAby4P2",
"wWDr68pvgi7",
"KbBYmBGpYIW",
"8sRtzEatDb3",
"MxVbWHYGOwS",
"oBE3VHcRAtH",
"fr0LxbAMQFJ",
"mCVbegDf8g",
"gDsM_NpWUvd",
"TN-urr9d_pR"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. It's unfortunate that the results from Nagel 2021 couldn't be reproduced, but it's also a common and understandable situation. \n\n2. Using the Haar wavelet due to a focus on computational complexity makes sense, and Appendix F does give a sense of the mIoU gains achievable with more complex wavelets. Adding a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"MxVbWHYGOwS",
"KbBYmBGpYIW",
"8sRtzEatDb3",
"nips_2022_5g7l7EJoZT",
"oBE3VHcRAtH",
"TN-urr9d_pR",
"gDsM_NpWUvd",
"mCVbegDf8g",
"fr0LxbAMQFJ",
"nips_2022_5g7l7EJoZT",
"nips_2022_5g7l7EJoZT",
"nips_2022_5g7l7EJoZT",
"nips_2022_5g7l7EJoZT",
"nips_2022_5g7l7EJoZT"
] |
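The WCC record above compresses high-resolution activation maps with a hardware-friendly Haar transform before the point-wise convolution. The snippet below shows only a one-level orthonormal 2D Haar decomposition of a single-channel map; how the sub-bands are truncated, quantized, and convolved is the paper's contribution and is not reproduced here.

```python
import numpy as np

def haar2d_level1(x):
    """One level of the orthonormal 2D Haar transform of a (H, W) map with even H, W.
    Returns (LL, LH, HL, HH) sub-bands, each of shape (H//2, W//2)."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]; c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-pass band: coarse, energy-compacting
    lh = (a - b + c - d) / 2.0   # detail band: differences across columns
    hl = (a + b - c - d) / 2.0   # detail band: differences across rows
    hh = (a - b - c + d) / 2.0   # detail band: diagonal differences
    return ll, lh, hl, hh

rng = np.random.default_rng(0)
feat = rng.normal(size=(64, 64))             # a single-channel activation map
ll, lh, hl, hh = haar2d_level1(feat)
# Orthonormality check: energy is preserved across the four sub-bands.
print(np.allclose((feat ** 2).sum(), sum((s ** 2).sum() for s in (ll, lh, hl, hh))))
```

Because most of the energy of smooth feature maps lands in the LL band, dropping or coarsely quantizing the detail bands is what yields the compression-rate versus accuracy trade-off discussed in the record.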
nips_2022_98TSEoHOoQE | AutoMTL: A Programming Framework for Automating Efficient Multi-Task Learning | Multi-task learning (MTL) jointly learns a set of tasks by sharing parameters among tasks. It is a promising approach for reducing storage costs while improving task accuracy for many computer vision tasks. The effective adoption of MTL faces two main challenges. The first challenge is to determine what parameters to share across tasks to optimize for both memory efficiency and task accuracy. The second challenge is to automatically apply MTL algorithms to an arbitrary CNN backbone without requiring time-consuming manual re-implementation and significant domain expertise. This paper addresses the challenges by developing the first programming framework AutoMTL that automates efficient MTL model development for vision tasks. AutoMTL takes as inputs an arbitrary backbone convolutional neural network (CNN) and a set of tasks to learn, and automatically produces a multi-task model that achieves high accuracy and small memory footprint simultaneously. Experiments on three popular MTL benchmarks (CityScapes, NYUv2, Tiny-Taskonomy) demonstrate the effectiveness of AutoMTL over state-of-the-art approaches as well as the generalizability of AutoMTL across CNNs. AutoMTL is open-sourced and available at https://github.com/zhanglijun95/AutoMTL. | Accept | All reviewers are positive about the paper. The AC concurs. | train | [
"iyOWkATsmT",
"LrrBkD-rKf_",
"a83CbW4VCl",
"eKqd9Uwn9GD",
"-leqAACVxuc",
"1svNqrJy5o4",
"aDGjiQx39rn",
"hBZ1L5ebDX",
"h02IOaMYcD2",
"6P0zXWmbfbu",
"8juUJRpHOqI",
"HfH8AYUPM9u",
"LPLM8LlOh2d",
"Q5JFdycRO9Z",
"tMfhXQC22SI",
"XM_A19g3xCb",
"mYBcR9Up_RB",
"fU1bxtl8533"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your suggestions! We've revised our related work section to include more literature and a part on MTL optimization methods. The new version of our submission has been uploaded.",
" Thank you for your response and most of my concerns are addressed. However, the related work section is not satis... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"LrrBkD-rKf_",
"8juUJRpHOqI",
"LPLM8LlOh2d",
"fU1bxtl8533",
"mYBcR9Up_RB",
"XM_A19g3xCb",
"fU1bxtl8533",
"mYBcR9Up_RB",
"XM_A19g3xCb",
"nips_2022_98TSEoHOoQE",
"HfH8AYUPM9u",
"fU1bxtl8533",
"mYBcR9Up_RB",
"tMfhXQC22SI",
"XM_A19g3xCb",
"nips_2022_98TSEoHOoQE",
"nips_2022_98TSEoHOoQE",... |
nips_2022_g05fHAvNeXx | What You See is What You Get: Principled Deep Learning via Distributional Generalization | Having similar behavior at training time and test time—what we call a “What You See Is What You Get” (WYSIWYG) property—is desirable in machine learning. Models trained with standard stochastic gradient descent (SGD), however, do not necessarily have this property, as their complex behaviors such as robustness or subgroup performance can differ drastically between training and test time. In contrast, we show that Differentially-Private (DP) training provably ensures the high-level WYSIWYG property, which we quantify using a notion of distributional generalization. Applying this connection, we introduce new conceptual tools for designing deep-learning methods by reducing generalization concerns to optimization ones: to mitigate unwanted behavior at test time, it is provably sufficient to mitigate this behavior on the training data. By applying this novel design principle, which bypasses “pathologies” of SGD, we construct simple algorithms that are competitive with SOTA in several distributional-robustness applications, significantly improve the privacy vs. disparate impact trade-off of DP-SGD, and mitigate robust overfitting in adversarial training. Finally, we also improve on theoretical bounds relating DP, stability, and distributional generalization. | Accept | The consensus amongst the reviewers is that the connection between DP and DG, and the resulting WYSIWYG framework for providing generalization guarantees for suitably trained models is both of theoretical, and as the paper demonstrates, potentially practical interest. The paper does have some downsides, in particular, that the connection between DP and DG (and similar ideas, including information-theoretic bounds based on conditional mutual information) have been floating around in the literature, and these connections seem to have been somewhat missed by the authors. However, overall, based on the merit of the theoretical framework, and the interesting experimental results, I recommend that this paper be accepted. | train | [
"g04D0a6dXmg",
"_yBsL_KN5d",
"SPxUjCWzWKUm",
"yUpWfOYTirD",
"odh1C1S6rACs",
"GxpM3JoSmtsM",
"4gKM7hHD5xM",
"HWx5e7IMxs4",
"EKnVtUeH21R",
"M2jaVdqaPJF",
"naFUHFTXVTt",
"bLBUrOySXT4"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you again for your thoughtful review and comments. With the author-reviewer period ending soon, we just wanted to reach out and see if any of the reviewers had any comments back to our rebuttal. We are looking for feedback on whether the points made in the reviews have now been addressed. We are happy to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
5
] | [
"nips_2022_g05fHAvNeXx",
"SPxUjCWzWKUm",
"odh1C1S6rACs",
"EKnVtUeH21R",
"M2jaVdqaPJF",
"naFUHFTXVTt",
"bLBUrOySXT4",
"nips_2022_g05fHAvNeXx",
"nips_2022_g05fHAvNeXx",
"nips_2022_g05fHAvNeXx",
"nips_2022_g05fHAvNeXx",
"nips_2022_g05fHAvNeXx"
] |
nips_2022_mUeMOdJ2IJp | Subspace Recovery from Heterogeneous Data with Non-isotropic Noise | Recovering linear subspaces from data is a fundamental and important task in statistics and machine learning. Motivated by heterogeneity in Federated Learning settings, we study a basic formulation of this problem: the principal component analysis (PCA), with a focus on dealing with irregular noise. Our data come from $n$ users with user $i$ contributing data samples from a $d$-dimensional distribution with mean $\mu_i$. Our goal is to recover the linear subspace shared by $\mu_1,\ldots,\mu_n$ using the data points from all users, where every data point from user $i$ is formed by adding an independent mean-zero noise vector to $\mu_i$. If we only have one data point from every user, subspace recovery is information-theoretically impossible when the covariance matrices of the noise vectors can be non-spherical, necessitating additional restrictive assumptions in previous work. We avoid these assumptions by leveraging at least two data points from each user, which allows us to design an efficiently-computable estimator under non-spherical and user-dependent noise. We prove an upper bound for the estimation error of our estimator in general scenarios where the number of data points and amount of noise can vary across users, and prove an information-theoretic error lower bound that not only matches the upper bound up to a constant factor, but also holds even for spherical Gaussian noise. This implies that our estimator does not introduce additional estimation error (up to a constant factor) due to irregularity in the noise. We show additional results for a linear regression problem in a similar setup. | Accept | This paper studies the problem of performing subspace recovery (i.e. PCA) with heterogenous and non-isotropic noise. In particular there are $n$ users who each get samples drawn from a $d$ dimensional distribution with mean $\mu_i$. Furthermore the means lie in a $k$ dimensional subspace, and the goal is to estimate it. The main catch is that while they require the noise to be subgaussian, they make no assumption on it being isotropic or homogenous across the different users. When each user gets only one sample, the problem is impossible. But when each user gets two samples, they give a simple estimator based on appropriately chosen $U$-statistics and bound its estimation error. Moreover they show that this bound is optimal up to constant factors. This is the first work to study PCA in a federated setting. It is a clean problem, with an elegant and complete solution. | train | [
"SJ9-41WBgJ_",
"-zKH_wTWNO",
"LB9-807Do4e",
"aLqHCaP7Ec_",
"wzY5_HVTKp9",
"iNrd9Fs41dj",
"mxSfEfgLdjt",
"AxFKKfPcC0N",
"QfLKTlK4HXF"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed response. A main concern in my original review was on the (technical) presentation of the results; e.g., how the assumptions have been structured, how the proof sketches have been distilled, etc. Without a revised version I am unable to comment on this aspect. However, I will change my... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"LB9-807Do4e",
"mxSfEfgLdjt",
"QfLKTlK4HXF",
"AxFKKfPcC0N",
"iNrd9Fs41dj",
"nips_2022_mUeMOdJ2IJp",
"nips_2022_mUeMOdJ2IJp",
"nips_2022_mUeMOdJ2IJp",
"nips_2022_mUeMOdJ2IJp"
] |
nips_2022_XFnDhcEH9FF | Hyperbolic Embedding Inference for Structured Multi-Label Prediction | We consider a structured multi-label prediction problem where the labels are organized under implication and mutual exclusion constraints. A major concern is to produce predictions that are logically consistent with these constraints. To do so, we formulate this problem as an embedding inference problem where the constraints are imposed onto the embeddings of labels by geometric construction. Particularly, we consider a hyperbolic Poincaré ball model in which we encode labels as Poincaré hyperplanes that work as linear decision boundaries. The hyperplanes are interpreted as convex regions such that the logical relationships (implication and exclusion) are geometrically encoded using the insideness and disjointedness of these regions, respectively. We show theoretical groundings of the method for preserving logical relationships in the embedding space. Extensive experiments on 12 datasets show 1) significant improvements in mean average precision; 2) lower number of constraint violations; 3) an order of magnitude fewer dimensions than baselines. | Accept | The reviews of this paper are uniformly positive. The novelty is the handling of exclusion edges which expands on previous work. On the negative side the improvements seem small and do not solidly establish the value of the hyperbolic hyperplanes. But the reviewers liked the paper and I recommend acceptance.
| train | [
"cay3rLNiFCC",
"Rh-giCvW5ns",
"DuP7uPbCOp3",
"-cbbD-YmdRj",
"kYcPoD2phyf",
"3vcge5selmfU",
"FgKZrd9tn6",
"q8okOll-qO2",
"IIFCPj0NmLy",
"rHUExJsuCRB",
"kJR1SrDkpHj",
"SQGrn1UuxHE",
"Hm-e2REfBZN"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the further comments. However, the main issue of using center of ball as parameterization is not w.r.t the implementation only, but rather about the presentation. Because we have to restrict the center point of ball to be located outside of the Poincare Ball, which make the presentation ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
3
] | [
"DuP7uPbCOp3",
"-cbbD-YmdRj",
"IIFCPj0NmLy",
"q8okOll-qO2",
"nips_2022_XFnDhcEH9FF",
"Hm-e2REfBZN",
"SQGrn1UuxHE",
"kJR1SrDkpHj",
"rHUExJsuCRB",
"nips_2022_XFnDhcEH9FF",
"nips_2022_XFnDhcEH9FF",
"nips_2022_XFnDhcEH9FF",
"nips_2022_XFnDhcEH9FF"
] |
nips_2022_lxsL16YeE2w | UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes | We introduce UViM, a unified approach capable of modeling a wide range of computer vision tasks. In contrast to previous models, UViM has the same functional form for all tasks; it requires no task-specific modifications which require extensive human expertise. The approach involves two components: (I) a base model (feed-forward) which is trained to directly predict raw vision outputs, guided by a learned discrete code and (II) a language model (autoregressive) that is trained to generate the guiding code. These components complement each other: the language model is well-suited to modeling structured interdependent data, while the base model is efficient at dealing with high-dimensional outputs. We demonstrate the effectiveness of UViM on three diverse and challenging vision tasks: panoptic segmentation, depth prediction and image colorization, where we achieve competitive and near state-of-the-art results. Our experimental results suggest that UViM is a promising candidate for a unified modeling approach in computer vision. | Accept | Three reviewers provided positive reviews which were further strengthened post discussion. They agreed that that the motivation was strong, the model was novel and the paper was well written. They appreciated the ablations provided by the authors and found the results compelling. The main concern by the reviewers was a missing experiment which was provided by the authors in their rebuttal, and was appreciated by multiple reviewers. In summary, the reviewers are unanimous in their support of this paper. I agree with their reviews and I recommend acceptance. | train | [
"a-N54Og3-CW",
"u6v4Zy40ApZ",
"MO5wZ8ukZWJ",
"GCP-TfSw3FL",
"jHZZz7BmI4",
"eZMt5vs9ytY",
"AmmmgUVbSY3",
"zujadL80crh",
"v08zbTTdim3",
"eOi8FQngAFT",
"rEliAPSWZ7W",
"uWdG5m0IIOB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Indeed, we agree the new experiments arising from the reviewer discussions are valuable additions, which we will include in the final paper version.",
" Thank you very much for the positive reaction to our additional experiment. We will include it in the final paper version, as the capacity comparison is a vali... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"jHZZz7BmI4",
"GCP-TfSw3FL",
"eZMt5vs9ytY",
"zujadL80crh",
"v08zbTTdim3",
"AmmmgUVbSY3",
"eOi8FQngAFT",
"rEliAPSWZ7W",
"uWdG5m0IIOB",
"nips_2022_lxsL16YeE2w",
"nips_2022_lxsL16YeE2w",
"nips_2022_lxsL16YeE2w"
] |
nips_2022_DTsCy9Lyj5- | BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach | Bilevel optimization (BO) is useful for solving a variety of important machine learning problems including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning.
Conventional BO methods need to differentiate through the low-level optimization process with implicit differentiation, which requires expensive calculations related to the Hessian matrix. There has been a recent quest for first-order methods for BO, but the methods proposed to date tend to be complicated and impractical for large-scale deep learning applications. In this work, we propose a simple first-order BO algorithm that depends only on first-order gradient information, requires no implicit differentiation, and is practical and efficient for large-scale non-convex functions in deep learning. We provide non-asymptotic convergence analysis of the proposed method to stationary points for non-convex objectives and present empirical results that show its superior practical performance. | Accept | There is general consensus among the reviewers that this paper is a valuable contribution to the bilevel optimization literature.
- The value function formulation is still relatively unexplored in bilevel optimization (although not completely new). Having a new paper developing this direction will be a nice addition to the literature.
- The paper seems well written and sound.
- The experiments, though they don't really assess the scalability of the approach, are illustrative and diverse.
We therefore recommend acceptance.
To the authors: please take into account the reviewer comments in the camera-ready paper. Please be careful of not overselling the contributions with superlatives like "powerful method". | val | [
"mRGdpd3iKsC",
"eny5q_mc2A5",
"Pj68zbXIz0",
"4eNa7iQDGJg",
"8iCUNPKk0d",
"M47n_mwnTb",
"hgp7ocsyOX",
"l1VYlhDx1W",
"XqC_GylFEZM",
"NEkvXbJYCKt",
"LiCTzGpxAk",
"bPhWsbnSH6D",
"3O6pGM0fpV6",
"NV6dxEZGV2z",
"Txjehbjg1rU",
"AoScuwzWUfH",
"KJUSrkBFyzu"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I have increased the score to 6.",
" We sincerely thank the reviewer for the reference and suggestion. Due to the limited time remaining, we could not systematically run hyperparameter search for SUSTAIN and MRVRBO on our problems using our implementation. Moreover, simply incorporating... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
5,
4
] | [
"eny5q_mc2A5",
"4eNa7iQDGJg",
"AoScuwzWUfH",
"M47n_mwnTb",
"XqC_GylFEZM",
"hgp7ocsyOX",
"NEkvXbJYCKt",
"3O6pGM0fpV6",
"bPhWsbnSH6D",
"KJUSrkBFyzu",
"AoScuwzWUfH",
"Txjehbjg1rU",
"NV6dxEZGV2z",
"nips_2022_DTsCy9Lyj5-",
"nips_2022_DTsCy9Lyj5-",
"nips_2022_DTsCy9Lyj5-",
"nips_2022_DTsCy... |
nips_2022_12nqqeQnDW7 | Coordinate Linear Variance Reduction for Generalized Linear Programming | We study a class of generalized linear programs (GLP) in a large-scale setting, which includes simple, possibly nonsmooth convex regularizer and simple convex set constraints. By reformulating (GLP) as an equivalent convex-concave min-max problem, we show that the linear structure in the problem can be used to design an efficient, scalable first-order algorithm, to which we give the name Coordinate Linear Variance Reduction (CLVR; pronounced ``clever''). CLVR yields improved complexity results for (GLP) that depend on the max row norm of the linear constraint matrix in (GLP) rather than the spectral norm. When the regularization terms and constraints are separable, CLVR admits an efficient lazy update strategy that makes its complexity bounds scale with the number of nonzero elements of the linear constraint matrix in (GLP) rather than the matrix dimensions. On the other hand, for the special case of linear programs, by exploiting sharpness, we propose a restart scheme for CLVR to obtain empirical linear convergence. Then we show that Distributionally Robust Optimization (DRO) problems with ambiguity sets based on both $f$-divergence and Wasserstein metrics can be reformulated as (GLPs) by introducing sparsely connected auxiliary variables. We complement our theoretical guarantees with numerical experiments that verify our algorithm's practical effectiveness, in terms of wall-clock time and number of data passes. | Accept | Overall all reviewers were positive about this paper and I tend to agree, but no reviewer felt particularly excited about the results. | train | [
"YqGLYOxlsaK",
"xVQ1gY6Edz6",
"uytiULtEz3",
"-vXyvnquDso",
"lR55-J0_Mbb",
"WVtYGiVRszE",
"3v9YlzBTqi2",
"5u_RkaSeRX",
"R728JBIPpD",
"HMGlPmMRhUg",
"XFNdtCw_1tw",
"0iEqJHwkK_",
"Pvl8jJgXxDy",
"HRfvIcVmo29",
"i4irAs1mEL",
"p1BDzxRxM98",
"PUAPyY1tqGn",
"BvtUPu6bzS1",
"nZo4YKaWiB",
... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"... | [
" Dear Reviewers,\n\nThank you for your constructive feedback which has greatly helped us improve our paper.\n\nWe wanted to provide a brief update regarding some of the requested additions related to the numerical experiments (all new material provided in Appendix D \"Experiment Details\"):\n\n* Based on the sugge... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"nips_2022_12nqqeQnDW7",
"uytiULtEz3",
"Pvl8jJgXxDy",
"lR55-J0_Mbb",
"R728JBIPpD",
"HRfvIcVmo29",
"HMGlPmMRhUg",
"0iEqJHwkK_",
"Pvl8jJgXxDy",
"nZo4YKaWiB",
"nZo4YKaWiB",
"nZo4YKaWiB",
"nZo4YKaWiB",
"nZo4YKaWiB",
"BvtUPu6bzS1",
"PUAPyY1tqGn",
"kIIQj-fPUV",
"t3Q-3gWo40",
"t3Q-3gWo4... |
nips_2022_2-CflpDkezH | Finding Correlated Equilibrium of Constrained Markov Game: A Primal-Dual Approach | Constrained Markov game is a fundamental problem that covers many applications, where multiple players compete with each other under behavioral constraints. The existing literature has proved the existence of Nash equilibrium for constrained Markov games, which turns out to be PPAD-complete and cannot be computed in polynomial time. In this work, we propose a surrogate notion of correlated equilibrium (CE) for constrained Markov games that can be computed in polynomial time, and study its fundamental properties. We show that the modification structure of CE of constrained Markov games is fundamentally different from that of unconstrained Markov games. Moreover, we prove that the corresponding Lagrangian function has zero duality gap. Based on this result, we develop the first primal-dual algorithm that provably converges to CE of constrained Markov games. In particular, we prove that both the duality gap and the constraint violation of the output policy converge at the rate $\mathcal{O}(\frac{1}{\sqrt{T}})$. Moreover, when adopting the V-learning algorithm as the subroutine in the primal update, our algorithm achieves an approximate CE with $\epsilon$ duality gap with the sample complexity $\mathcal{O}(H^9|\mathcal{S}||\mathcal{A}|^{2} \epsilon^{-4})$. | Accept | This paper proposes and examines a notion of correlated equilibrium for "constrained stochastic games", that is, stochastic games where the players seek to optimize their payoffs modulo guaranteeing a certain target.
The reviewers' initial concerns were addressed satisfactorily by the authors during the rebuttal phase, leading to a unanimous "accept" recommendation from the reviewers. After my own reading of the paper, I concur with this assessment: the paper treats an interesting and timely topic, and the results are both interesting and technically challenging. On a personal note, I would urge the authors to explain in more detail the notion of a "constrained" Markov game, as the terminology is not quite standard in game theory (where constraints typically have a different meaning than in MDPs); however, other than that, the reviews speak for themselves and I am also happy to recommend acceptance. | val | [
"5UaYCq91RQM",
"9QCbSmA1hT2",
"V7ez7s0TVW",
"Hz6xXgKkelc",
"PPJOwBvYsBr",
"xnhk7FITbcf",
"qpBZvnA7MB",
"d3yNzfGdV_A",
"mC_Gvaz9nkP",
"BpflVGhFIA6",
"CQWyzfhEqcO",
"cKPyRNrHjIk",
"9jhOniaMtWz",
"rMAPChOXvfw-",
"5ArXBEpYEqJ",
"LQXwW8BcScU",
"OYWkwQY49X8",
"2xDjwG_6bd4",
"Hi__gTunQe... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for the response. I think the paper is clearer now that it clearly states what constraints it can handle. I will update my score to a 6. \n\nOne last comment: I think one obstacle to the approach which you propose to handle resource consumption constraints is that Assumption 2 prevents you from setting $r(... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
2,
4
] | [
"PPJOwBvYsBr",
"mC_Gvaz9nkP",
"xnhk7FITbcf",
"qpBZvnA7MB",
"d3yNzfGdV_A",
"CQWyzfhEqcO",
"d3yNzfGdV_A",
"5ArXBEpYEqJ",
"BpflVGhFIA6",
"lLUQNH04B7A",
"yfikYhc5Sj",
"Hi__gTunQeN",
"Hi__gTunQeN",
"2xDjwG_6bd4",
"2xDjwG_6bd4",
"OYWkwQY49X8",
"nips_2022_2-CflpDkezH",
"nips_2022_2-CflpDk... |
nips_2022_rApvGord7j | Fair Bayes-Optimal Classifiers Under Predictive Parity | Increasing concerns about disparate effects of AI have motivated a great deal of work on fair machine learning. Existing works mainly focus on independence- and separation-based measures (e.g., demographic parity, equality of opportunity, equalized odds), while sufficiency-based measures such as predictive parity are much less studied. This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction among different protected groups. We prove that, if the overall performances of different groups vary only moderately, all fair Bayes-optimal classifiers under predictive parity are group-wise thresholding rules. Perhaps surprisingly, this may not hold if group performance levels vary widely; in this case, we find that predictive parity among protected groups may lead to within-group unfairness. We then propose an algorithm we call FairBayes-DPP, aiming to ensure predictive parity when our condition is satisfied. FairBayes-DPP is an adaptive thresholding algorithm that aims to achieve predictive parity, while also seeking to maximize test accuracy. We provide supporting experiments conducted on synthetic and empirical data. | Accept | The author considers fair Bayes-optimal classifiers under predictive parity. The authors show that under some sufficient conditions all such classifiers are groupwise threshold rules. They also show that this is not necessarily the case when the condition does not hold. Some empirical results are also provided. The reviewers have made several important suggestions; in particular the authors should provide better context as well as discuss related work using other sufficiency metrics. As the changes are mainly about prior work, I'm inclined to recommend acceptance, but the authors should absolutely implement all the changes they promised in the responses to the reviews. | train | [
"mnOcn1fOO01",
"bX-sYIqplEO",
"4gdRMoyqypx",
"y940Nz1EkCS",
"W6s2d9EZPc",
"PP6BThFnNN",
"Fu8fW94bxMn",
"GRM-fDBW8uP",
"KFCD8ytr6zY",
"xFl-kGI0B-7",
"NGzCDEg3txy",
"ZvhPImbcUMZ",
"9k1daA_AtDm",
"RDA5f39acyk",
"sLuTRtCoi5"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Referee,\n\nWe do appreciate your positive and very encouraging comments. We will add these helpful discussions in the final version to better validate our paper. \n\nBest,\n\nAuthors",
" \n$\\\\textbf{Concern 2}:$ I understand the difference from the multi-calibration. I thank the authors for the detaile... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"PP6BThFnNN",
"Fu8fW94bxMn",
"Fu8fW94bxMn",
"W6s2d9EZPc",
"NGzCDEg3txy",
"GRM-fDBW8uP",
"KFCD8ytr6zY",
"sLuTRtCoi5",
"RDA5f39acyk",
"9k1daA_AtDm",
"ZvhPImbcUMZ",
"nips_2022_rApvGord7j",
"nips_2022_rApvGord7j",
"nips_2022_rApvGord7j",
"nips_2022_rApvGord7j"
] |
nips_2022_EI1x5B1-o8M | First Hitting Diffusion Models for Generating Manifold, Graph and Categorical Data | We propose a family of First Hitting Diffusion Models (FHDM), deep generative models that generate data with a diffusion process that terminates at a random first hitting time. This yields an extension of the standard fixed-time diffusion models that terminate at a pre-specified deterministic time. Although standard diffusion models are designed for continuous unconstrained data, FHDM is naturally designed to learn distributions on continuous as well as a range of discrete and structure domains. Moreover, FHDM enables instance-dependent terminate time and accelerates the diffusion process to sample higher quality data with fewer diffusion steps. Technically, we train FHDM by maximum likelihood estimation on diffusion trajectories augmented from observed data with conditional first hitting processes (i.e., bridge) derived based on Doob's $h$-transform, deviating from the commonly used time-reversal mechanism.
We apply FHDM to generate data in various domains such as point cloud (general continuous distribution), climate and geographical events on earth (continuous distribution on the sphere), unweighted graphs (distribution of binary matrices), and segmentation maps of 2D images (high-dimensional categorical distribution). We observe considerable improvement compared with the state-of-the-art approaches in both quality and speed. | Accept | The paper introduces a new approach for generative modeling: a diffusion process is run until it first hits a target set, and then outputs the first point that is hit.
Three reviewers generally praised the originality, technical quality, and empirical results of the paper. They found the idea very interesting and novel, and technically sound. The numerical results were judged to be compelling and fair. One concern was clarity of exposition. There seemed to be two issues: (1) there were more typos and rough edges than expected, (2) more significantly, there was some difficulty in following all details of the main method given the notational complexity and significant amount of mathematical background on diffusion processes. Reviewer FUwB gave a number of concrete suggestions for improvement.
Reviewer wpmu had a negative overall opinion and critiqued the originality, quality, and clarity. On these issues: (1) the quality concern was based on a misunderstanding that was later resolved, (2) the originality concern does not seem justified to the meta-reviewer (it is based on a shared technical tool with a not-yet-published paper), and (3) the clarity concern is similar to those raised by other reviewers (especially FUwB). Overall, the meta-reviewer does not feel that the low score (3 = “reject”) was fully justified.
In summary, overall reviewers found the paper sound and novel, with the main area for improvement being clarity of exposition about diffusion processes; one reviewer considered originality a weakness, but the meta-reviewer did not find this position well justified.
| train | [
"-pmf9N-KzW",
"skKKDztaEGA",
"i3GiXtxXW5Kq",
"p_Mp6QE6Vmj",
"y9ueGpIrrLb",
"QuvTLE0Pg3Y",
"4uh7D3tlidn",
"l7Eqy0gI_T7",
"mExiR9uKMZc"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors, thanks for the comments.\nI think this work has received good scores from all other reviewers, and everyone agrees the idea is novel and with a lot of potential which has been shown empirically.\nI am still not fully convinced about the notation and exposition, but I will raise my score.\n\nCheers\n... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
4
] | [
"skKKDztaEGA",
"mExiR9uKMZc",
"l7Eqy0gI_T7",
"4uh7D3tlidn",
"QuvTLE0Pg3Y",
"nips_2022_EI1x5B1-o8M",
"nips_2022_EI1x5B1-o8M",
"nips_2022_EI1x5B1-o8M",
"nips_2022_EI1x5B1-o8M"
] |
nips_2022_Yq6g9xluV0 | Rapid Model Architecture Adaption for Meta-Learning | Network Architecture Search (NAS) methods have recently gathered much attention. They design networks with better performance and use a much shorter search time compared to traditional manual tuning. Despite their efficiency in model deployments, most NAS algorithms target a single task on a fixed hardware system. However, real-life few-shot learning environments often cover a great number of tasks ($T$) and deployments on a wide variety of hardware platforms ($H$).
The combinatorial search complexity $T \times H$ creates a fundamental search efficiency challenge if one naively applies existing NAS methods to these scenarios. To overcome this issue, we show, for the first time, how to rapidly adapt model architectures to new tasks in a \emph{many-task many-hardware} few-shot learning setup by integrating Model Agnostic Meta Learning (MAML) into the NAS flow. The proposed NAS method (H-Meta-NAS) is hardware-aware and performs optimisation in the MAML framework. MetaNAS shows a Pareto dominance compared to a variety of NAS and manual baselines in popular few-shot learning benchmarks with various hardware platforms and constraints. In particular, on the 5-way 1-shot Mini-ImageNet classification task, the proposed method outperforms the best manual baseline by a large margin ($5.21\%$ in accuracy) using $60\%$ less computation. | Accept | Reviewers agree that the proposed many-task many-hardware few-shot learning setup is well motivated and valuable in real-world applications. The main novelty of this paper lies in the meta-learning side, and their results reveal that metric-based MAML methods are not latency-friendly, and the community should look back into optimization-based MAML methods. Authors have devoted extensive engineering efforts into this paper, as recognized by several reviewers.
Previously raised concerns are mainly about the evaluation of overall novelty (e.g., similarity with OFA), possibly inaccurate layer-wise profiling due to layer/operation fusion, and missing comparison with more recent methods. Most of them have been well resolved in the author feedback; therefore, AC recommends acceptance. | train | [
"8hxMgoRafmM",
"TUyn-T6IZJe",
"xHZT7iC0GRJ",
"eWQkoF9sxEV",
"kvZ04jv4rAv",
"yyWNU3iDxia",
"qGV74ks5h2l",
"7gVI7GrE0yv",
"JOeOM08eJmg",
"VCd8g7Uj9Km",
"jWBzt6C0En7",
"wM6JL3nrP4",
"eLLgOdjCnHw",
"oY1YXILtlM",
"tNQSm_uo_xo",
"LcUSxkUSSw0",
"f4jRuGn0wfX"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nThe discussion period is ending soon. We appreciated your previous thoughtful review and made the suggested layout changes in our Method section. We also added an answer to your question in our previous response. \n\nPlease feel free to let us know if you have any further questions, so that we m... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"qGV74ks5h2l",
"7gVI7GrE0yv",
"kvZ04jv4rAv",
"wM6JL3nrP4",
"jWBzt6C0En7",
"LcUSxkUSSw0",
"f4jRuGn0wfX",
"oY1YXILtlM",
"tNQSm_uo_xo",
"f4jRuGn0wfX",
"LcUSxkUSSw0",
"tNQSm_uo_xo",
"oY1YXILtlM",
"nips_2022_Yq6g9xluV0",
"nips_2022_Yq6g9xluV0",
"nips_2022_Yq6g9xluV0",
"nips_2022_Yq6g9xluV... |
nips_2022_aZQJMVx8fk | Density-driven Regularization for Out-of-distribution Detection | Detecting out-of-distribution (OOD) samples is essential for reliably deploying deep learning classifiers in open-world applications. However, existing detectors relying on discriminative probability suffer from the overconfident posterior estimate for OOD data. Other reported approaches either impose strong unproven parametric assumptions to estimate OOD sample density or develop empirical detectors lacking clear theoretical motivations. To address these issues, we propose a theoretical probabilistic framework for OOD detection in deep classification networks, in which two regularization constraints are constructed to reliably calibrate and estimate sample density to identify OOD. Specifically, the density consistency regularization enforces the agreement between analytical and empirical densities of observable low-dimensional categorical labels. The contrastive distribution regularization separates the densities between in distribution (ID) and distribution-deviated samples. A simple and robust implementation algorithm is also provided, which can be used for any pre-trained neural network classifiers. To the best of our knowledge, we have conducted the most extensive evaluations and comparisons on computer vision benchmarks. The results show that our method significantly outperforms state-of-the-art detectors, and even achieves comparable or better performance than methods utilizing additional large-scale outlier exposure datasets. | Accept | This paper develops a method for improving out-of-distribution detection in deep learning based on a novel regularization term. There was significant variance in review scores with two championing the paper for acceptance and two borderline scores (7, 7, 5, 4) resulting in an aggregated score just above borderline accept. The reviewers arguing for acceptance found the method novel, the simplicity of the algorithm compelling and the experiments extensive and convincing. One reviewer was concerned that baseline comparisons provided in the paper seem less strong than reported in other work. Two reviewers questioned the mathematical derivations and some of the underlying assumptions of the paper.
That two reviewers are arguing for acceptance is a signal that the paper could be a useful contribution and interesting to the community. Since the experiments seem extensive and seem to demonstrate that the method consistently works well, and given that it is simple to implement, that seems to validate the underlying assumptions and it could provide a useful baseline. Therefore the recommendation is to accept the paper. Please make sure to address the remaining reviewer concerns in the final manuscript. | train | [
"SEz813olY0",
"QTTEsme6ULC",
"cWhSRa2va4",
"lYgrIHoerZq",
"0yh9r0BTdJa",
"YgUBMtpiU_k",
"4MyZrPXTN_v",
"7Fa1_pOFL11",
"kH9XUFp1VFc",
"QUEsC0jRLX5",
"-MoRFVcYoP-",
"RjJBEl_6_YW",
"vtoYmQw3YAr",
"h5HoBLrtDC",
"-R-EBN4hMFP",
"_-GPYDv-DJu",
"4MqtkzdxVwf",
"eMp9hw6fqDr",
"4asAkpY2vTo"... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you authors for your detailed response, especially clarifying some of the theoretical aspects which I had stated in my initial review. From a practical perspective, however, what is used as a proxy for $p(x)$ is indeed the sum of logits and it would seem from your argument that this sum of logits, when regu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
5
] | [
"-R-EBN4hMFP",
"srCYtYZHsjU",
"eMp9hw6fqDr",
"srCYtYZHsjU",
"4MyZrPXTN_v",
"4asAkpY2vTo",
"4asAkpY2vTo",
"4MqtkzdxVwf",
"4MqtkzdxVwf",
"4MqtkzdxVwf",
"srCYtYZHsjU",
"srCYtYZHsjU",
"eMp9hw6fqDr",
"eMp9hw6fqDr",
"eMp9hw6fqDr",
"eMp9hw6fqDr",
"nips_2022_aZQJMVx8fk",
"nips_2022_aZQJMVx... |
nips_2022_YeuBRKq_yZ- | Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions | We study the linear contextual bandit problem in the presence of adversarial corruption, where the reward at each round is corrupted by an adversary, and the corruption level (i.e., the sum of corruption magnitudes over the horizon) is $C\geq 0$. The best-known algorithms in this setting are limited in that they either are computationally inefficient or require a strong assumption on the corruption, or their regret is at least $C$ times worse than the regret without corruption. In this paper, to overcome these limitations, we propose a new algorithm based on the principle of optimism in the face of uncertainty. At the core of our algorithm is a weighted ridge regression where the weight of each chosen action depends on its confidence up to some threshold. We show that for both known $C$ and unknown $C$ cases, our algorithm with proper choice of hyperparameter achieves a regret that nearly matches the lower bounds. Thus, our algorithm is nearly optimal up to logarithmic factors for both cases. Notably, our algorithm achieves the near-optimal regret for both corrupted and uncorrupted cases ($C=0$) simultaneously. | Accept | The paper was generally well-received by the reviewers, who appreciated the contributions, especially the tightness of the bounds and the simplicity of the algorithm. The minor concerns raised in the original reviews were all addressed in the rebuttal, and eventually all reviewers agreed that the paper is suitable for publication at NeurIPS 2022. The authors are encouraged to take all the reviewers' comments into account when preparing the final version of the paper.
One remaining technical concern that needs to be clarified in the final version is the computational efficiency of the proposed algorithm, as raised by reviewers MF5t and Ccme. In their response to reviewer MF5t, the authors erroneously claimed that a linear optimization oracle suffices to implement their algorithm, even though the objective function maximized in the implementation is a nonlinear, strictly convex function. However, the challenge of solving such potentially intractable optimization problems is not unique to this particular algorithm, and indeed all OFUL / LinUCB style methods require solving similar problems --- see, e.g., the discussion in Section 19.3.1. in "Bandit Algorithms" by Lattimore and Szepesvari. The final version should correct the confusing claims about this computational issue, for example by simply updating footnote 4 in the current draft. | train | [
"De_qSCp0XxP",
"VA9Gsb2Lzv",
"DUHsAHq1-tL",
"2BaRhcEO9FQr",
"U-Iqy1KApd",
"zd43ejHP7vu",
"2nAuRY4oxB3N",
"y2-F5JgGGVZ",
"3lPj-eV3uZM",
"sHMNj-XUGG",
"23NBGAuqOTP",
"d76496zjzcW",
"0DfCWPOA90x",
"6kkcZPdWciu"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nSince the deadline for the author-reviewer discussion phase is fast approaching, we would like to follow up to hear your feedback on our response. In our rebuttal, we have addressed all your questions. According to your suggestion, we have added numerical experiments to validate our CW-OFUL algo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"23NBGAuqOTP",
"3lPj-eV3uZM",
"2BaRhcEO9FQr",
"U-Iqy1KApd",
"zd43ejHP7vu",
"2nAuRY4oxB3N",
"6kkcZPdWciu",
"0DfCWPOA90x",
"d76496zjzcW",
"23NBGAuqOTP",
"nips_2022_YeuBRKq_yZ-",
"nips_2022_YeuBRKq_yZ-",
"nips_2022_YeuBRKq_yZ-",
"nips_2022_YeuBRKq_yZ-"
] |
nips_2022_IE32oIlhXz | On the Generalization Power of the Overfitted Three-Layer Neural Tangent Kernel Model | In this paper, we study the generalization performance of overparameterized 3-layer NTK models. We show that, for a specific set of ground-truth functions (which we refer to as the "learnable set"), the test error of the overfitted 3-layer NTK is upper bounded by an expression that decreases with the number of neurons of the two hidden layers. Different from 2-layer NTK where there exists only one hidden-layer, the 3-layer NTK involves interactions between two hidden-layers. Our upper bound reveals that, between the two hidden-layers, the test error descends faster with respect to the number of neurons in the second hidden-layer (the one closer to the output) than with respect to that in the first hidden-layer (the one closer to the input). We also show that the learnable set of 3-layer NTK without bias is no smaller than that of 2-layer NTK models with various choices of bias in the neurons. However, in terms of the actual generalization performance, our results suggest that 3-layer NTK is much less sensitive to the choices of bias than 2-layer NTK, especially when the input dimension is large. | Accept | This paper studies the generalization error of three-layer relu neural networks, when only the middle layer weights are trained.
The focus is the regression setting, and the goal is to capture the hidden layer interactions. The paper aims to determine how the (hidden) layer interactions influence the double-descent curve. The generalization error bound established depends on the layer width in an interesting manner, which may shed light on understanding deeper networks outside of the kernel regime.
All reviewers rated this work above the bar. As such, I recommend accepting this paper.
There were a few parts that the reviewers found unclear/needs improvement. In particular, some of the clarifications made by the authors in their rebuttal can help make this paper more clear for its future readers.
| train | [
"ZTKgCV5Olgp",
"G8JX778vpMW",
"SLxhIfmPJJe",
"NiVctEiNA3r",
"FV3fAfkFjir",
"-32PhJL00d8",
"l_4sEq08yGu",
"p1YVLS8w_uD",
"MTuufxmu-cE",
"hKFjUgmeFWK",
"P0N6uyNkKLJ",
"4g8_wC8oJ4P",
"5ELUtsamv3Y",
"adhRVAbNtiS",
"P86aui5Incj",
"szkFSLCRh4o"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the comments.\n\n**\"Concerns about the high-level contribution\"**: The high-level contribution is that this is the first work that studies the generalization performance on the overfitted 3-layer NTK, especially with finite width. By providing an upper bound, we rigorously prove that the generalizati... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"G8JX778vpMW",
"4g8_wC8oJ4P",
"l_4sEq08yGu",
"FV3fAfkFjir",
"-32PhJL00d8",
"szkFSLCRh4o",
"P86aui5Incj",
"adhRVAbNtiS",
"adhRVAbNtiS",
"5ELUtsamv3Y",
"5ELUtsamv3Y",
"5ELUtsamv3Y",
"nips_2022_IE32oIlhXz",
"nips_2022_IE32oIlhXz",
"nips_2022_IE32oIlhXz",
"nips_2022_IE32oIlhXz"
] |
nips_2022_XiLasGufCM | Insights into Pre-training via Simpler Synthetic Tasks | Pre-training produces representations that are effective for a wide range of downstream tasks, but it is still unclear what properties of pre-training are necessary for effective gains. Notably, recent work shows that even pre-training on synthetic tasks can achieve significant gains in downstream tasks. In this work, we perform three experiments that iteratively simplify pre-training and show that the simplifications still retain much of its gains. First, building on prior work, we perform a systematic evaluation of three existing synthetic pre-training methods on six downstream tasks. We find the best synthetic pre-training method, LIME, attains an average of $67\%$ of the benefits of natural pre-training. Second, to our surprise, we find that pre-training on a simple and generic synthetic task defined by the set function achieves $65\%$ of the benefits, almost matching LIME. Third, we find that $39\%$ of the benefits can be attained by using merely the parameter statistics of synthetic pre-training. We release the source code at \url{https://github.com/felixzli/synthetic_pretraining}. | Accept | # Overview
This paper presents a fascinating research question: "What properties of pre-training are necessary for effective gains" in natural language processing settings? The paper also tackles it in a fascinating way: exploring several synthetic tasks in exacting detail. This is an empirical paper whose goal is to consolidate existing knowledge on the topic of synthetic pre-training and pose a scientific question about the nature of pre-training.
What does it look like for such a paper to be publishable? There are several things it _doesn't_ need to do. That includes:
* Setting a new SOTA for anything, synthetic pre-training or otherwise.
* Providing a theoretical account for empirical phenomena.
* Completely or definitively answer the provided research question, which is a massive undertaking that will require an entire research literature, not a single paper.
However, what such a paper does need to do is to be absolutely rigorous in the situations that it purports to address and to follow empirical best practices. That includes:
* Considering a comprehensive range of downstream tasks (that provides a basis for comparison between methods) and considering them across the full range of experiments.
* Reporting all data clearly in ways that fairly state what was discovered, even if some results are mixed or negative.
* Running relevant ablations to dig into the core research question.
* Consider new approaches to the problem inspired by those findings to test the predictive power of those findings.
# Reviews
## Reviewer 6Qmz
This reviewer raises a number of concerns about the paper, some of which I have discarded based on the criteria above. For example, I have ignored the following concerns since they set an unfair bar for a scientific paper taking on a vast question:
* _This paper does not give a clear answer to the core question raised in the introduction._
* _The design and usefulness of the simple synthetic tasks lack theoretical analysis._
* _The performances of almost all of the[ synthetic tasks] are limited (i.e., worse than the original.)_
* _This paper does not present any definitive or helpful conclusions for the future of language pre-training._
However, the reviewer raises several important methodological concerns that undermine confidence in the empirical findings of the paper:
* _The gain on natural language is very limited against on artificial language._ _The performance on SQuAD are significantly dropped._ (AC note: In general, I found describing the data as "recovering XX% of the change in accuracy from pre-training T5" to be a pretty misleading way to exaggerate relatively small improvements of a handful of percentage points over random initialization. If the results are already strong, there's no need to do this. It's not a reason to reject, but it made me fear that there were other things hidden in the paper that I didn't find. The big question for this reviewer seems to be "are these results good enough to suggest that something interesting has been found via these experiments?" It's in the eye of the beholder, and that's part of the challenge of getting in a paper like this.)
* _There seems to be an attempt to avoid the issue, making most experiments on a summarization task._
* The reviewer also suggests some fantastic follow-on experiments that would be great ablations or ways of evaluating the predictive power of the proposed findings.
In general, this reviewer questions the rigor of the evaluation of the pre-training methods studied. That's of paramount concern for a paper like this.
## Reviewer iZRj
This reviewer had several questions that were addressed in the author response. One question that remains unaddressed:
* _One part that nags at me is that most of the comparisons of the paper report relative performance along a linear scale._ (The AC shares this concern and found this frustrating. It overstates the quality of the results, since each marginal improvement in performance is difficult to accomplish. SQuAD scores are still dozens of percentage points worse than T5, for example. I think this may mislead readers - and possibly misled some of the reviewers - into thinking the results were better than they actually were.)
## Reviewer 9nVq
This reviewer had a couple of concerns. Most prominently:
* _Missing such explanations [for why these simpler methods work] may result in limited technical insight._ (I agree, but it's a tall order to do this definitively and that's an unfair bar for acceptance.)
## Reviewer tNV7
This reviewer was primarily concerned with the scale of the experiments. I also share this concern. Even a 200M parameter T5 model is relatively small all things considered. (I understand that the authors may not have enough compute available to go after bigger models with this range of pre-training and evaluation techniques, but - if that's a concern - then the overall topic of understanding T5 pre-training may not be the right problem for them to go after.) Analyzing how these techniques do as scale increases is an important aspect of the question the authors are taking on. Do these synthetic tasks top out at some point, while pretraining on natural language data continues to help? Does the gap keep increasing? These are big questions in a field that is completely preoccupied with all things "scale" right now, so it's a big missing piece of this paper in my view.
# Recommendation: Accept
I argue in favor of accepting this paper. Despite the range of flaws pointed out by the reviewers and this AC, this paper takes a productive scientific step toward understanding the properties of pre-training tasks that make pre-training effective. This paper is a great survey of the current state of the art and fills in many gaps in our knowledge about how successful pre-training is at various levels of complexity. Yes, there is much more work to do, and I hope these authors and other members of the community build on this work to do so.
# Please please PLEASE fix a few things!
With that said, there are a few things that I beg and implore the authors to do in the camera ready (in order of priority for me). Without doing these things, this paper may not serve as a solid enough foundation for future work that people can readily build on it, and the paper's impact will be dramatically less than it could be (scientifically and in terms of citations). This is for your own good!
1. Ensure that all experiments are evaluated on all fine-tuning tasks. At least provide that data in appendices. I know you were frustrated with Reviewer 6Qmz, but the reviewer was completely right on this and several other points.
2. Get rid of those annoying comparisons made about "percentage of gap closed." Just talk about percentage points on the actual task. It's confusing and misleading, and it overstates the efficacy of these methods. It doesn't matter if these methods are good or not: the important part is filling in gaps in our scientific knowledge. This bad way of describing the data isn't _quite_ enough to reject over on its own, but it seems to have annoyed the heck out of several of the reviewers and it _really_ annoyed the heck out of me.
3. Add the T5 200M results to the paper and add a discussion of how these results change as you scale up. Ideally, add one larger scale so you have three points and can start to see trends. This is a place where the "percentage of gap closed" metrics make more sense as a way to compare across scales.
4. Add GLUE. The world of NLP fine-tuning tasks is vast, and everyone has their favorites. Make sure everyone's favorite is there so nobody can complain.
# Notes to Authors
All of the reviewers agreed that this was an interesting scientific question, and I encourage the authors to continue building on this line of work. In addition, all of the reviewers did respond to the rebuttal, although several appear to have provided their private thoughts to me rather than broadcasting it (including Reviewer 6Qmz). I have taken those thoughts into account when putting together this metareview.
| train | [
"9_0eEi_ZpBv",
"WGMDj3LZd6X",
"K5p3qocAY1Q",
"M4FokBi_pao",
"K3AEl5ihMBl",
"Zwwev_25iMh",
"KOBQtUC3DYe",
"73gtgoGxnz3",
"BrW3Hl1R4Qe",
"2MPBpgTi2ETx",
"zxadCIAjHT",
"FJd-gBOv2py",
"bajkxXPiMjZ",
"Byz-cqg4vvj",
"C6_E362sdjj"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the feedback, I would like to make an apology for the later reply as I have been on a trip since last month. I promise I will read all feedbacks before the ddl and take necessary actions by considering the authors feedbacks if I cannot make a prompt enough reply during the discussion.\n\nAs much more m... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
5
] | [
"Zwwev_25iMh",
"C6_E362sdjj",
"Byz-cqg4vvj",
"bajkxXPiMjZ",
"FJd-gBOv2py",
"C6_E362sdjj",
"C6_E362sdjj",
"Byz-cqg4vvj",
"bajkxXPiMjZ",
"FJd-gBOv2py",
"nips_2022_XiLasGufCM",
"nips_2022_XiLasGufCM",
"nips_2022_XiLasGufCM",
"nips_2022_XiLasGufCM",
"nips_2022_XiLasGufCM"
] |
nips_2022_mhP6mHgrg1c | ORIENT: Submodular Mutual Information Measures for Data Subset Selection under Distribution Shift | Real-world machine-learning applications require robust models that generalize well to distribution shift settings, which is typical in real-world situations. Domain adaptation techniques aim to address this issue of distribution shift by minimizing the disparities between domains to ensure that the model trained on the source domain performs well on the target domain. Nevertheless, the existing domain adaptation methods are computationally very expensive. In this work, we aim to improve the efficiency of existing supervised domain adaptation (SDA) methods by using a subset of source data that is similar to target data for faster model training. Specifically, we propose ORIENT, a subset selection framework that uses the submodular mutual information (SMI) functions to select a source data subset similar to the target data for faster training. Additionally, we demonstrate how existing robust subset selection strategies, such as GLISTER, GRADMATCH, and CRAIG, when used with a held-out query set, fit within our proposed framework and demonstrate the connections with them. Finally, we empirically demonstrate that SDA approaches like d-SNE, CCSA, and standard Cross-entropy training, when employed together with ORIENT, achieve a) faster training and b) better performance on the target data. | Accept | The paper contributes to an important research direction: reducing the computing resources needed for training deep learning models while achieving state-of-the-art results. The proposed method is a principled strategy for domain adaptation problems. While based on existing notions, they are used cleverly. The diverse and extensive experiments show that relying on submodular mutual information to select target points is a promising strategy (although reporting the error bars more systematically would be most appreciated). The activation maps are a nice addition to commonly reported accuracy metrics.
For the reasons mentioned above, I recommend accepting the paper.
I strongly encourage the authors to consider the reviewer's discussion to provide an improved version. In addition to reviewers' suggestions for improvements, I would like to see in the revised version:
- Line 118: set operations $A \cup x$ written as $A \cup \lbrace x \rbrace$ ;
- Figures 3 and 4: error bars obtained by repeating the experiments with random data splits ;
- Line 278: the name of the missing author (H. Shimodaira) to reference [1] ;
- Section A.5: a clear explanation of how are computed the standard deviations reported in Tables 5,6,7,8. | train | [
"3UkBoQG9N1B",
"8QP3c5ZD_Rv",
"B_XOaoJWN_",
"NVB9Gr8f16",
"fsW1jBJT--e",
"XzZRLHK__QS",
"lMizrWvYQpq",
"E9I3w5ClG0dS",
"_yfZr2m7fD7",
"hKSAXtCgO40L",
"7oWw70kwnYk",
"NrCjoq3oK73",
"ewd_Rry8xa_",
"G755t0M9kQA",
"7DRhIN19NLf"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for engaging in the discussion and highlighting some salient points. We agree that the paper warrants a more thorough discussion of the stopping criterion being used, and we will add more discussion about early stopping in the revised version of the paper. \n\nTables 16-27 in the appendix pr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"NVB9Gr8f16",
"hKSAXtCgO40L",
"fsW1jBJT--e",
"E9I3w5ClG0dS",
"_yfZr2m7fD7",
"G755t0M9kQA",
"G755t0M9kQA",
"G755t0M9kQA",
"NrCjoq3oK73",
"ewd_Rry8xa_",
"7DRhIN19NLf",
"nips_2022_mhP6mHgrg1c",
"nips_2022_mhP6mHgrg1c",
"nips_2022_mhP6mHgrg1c",
"nips_2022_mhP6mHgrg1c"
] |
nips_2022_p_BVHgrvHD4 | An Information-Theoretic Framework for Deep Learning | Each year, deep learning demonstrate new and improved empirical results with deeper and wider neural networks. Meanwhile, with existing theoretical frameworks, it is difficult to analyze networks deeper than two layers without resorting to counting parameters or encountering sample complexity bounds that are exponential in depth. Perhaps it may be fruitful to try to analyze modern machine learning under a different lens. In this paper, we propose a novel information-theoretic framework with its own notions of regret and sample complexity for analyzing the data requirements of machine learning. We use this framework to study the sample complexity of learning from data generated by deep ReLU neural networks and deep networks that are infinitely wide but have a bounded sum of weights. We establish that the sample complexity of learning under these data generating processes is at most linear and quadratic, respectively, in network depth. | Accept | While some reviewers have expressed some criticism for the possibility that some assumptions might be unrealistic, all the reviewers commented on the refreshingly novel approach that could lead to new directions of research. Hence, while not perfect, this is an exciting paper that should be accepted at the conference. Please take into account the reviewers' comments in preparing the camera-ready version, in particular the comments on the limitations of the proposed approach. | train | [
"PJegSpa6GA",
"Ko4n_NXsHZG",
"I0k69ioMB0X5",
"Ydp60byhdw-",
"OioQq4eSe3",
"oGZOqUtZa-c",
"3e7pmwoy4WX",
"ijOwLjvM8PA",
"xnesyUfoQ-lg",
"9_gzQixi62P",
"3VSCZoJRhD-",
"o42_yD1mGdQ",
"hq0iRurxD-G",
"-5TjlZnTl-B"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I see. I believe having further discussions of these topics in the appendix would be fruitful.\n\nIn any case, my opinions have not been significantly changed. I believe this is a solid work with new insights. There are many questions left open and it remains to see how useful the framework can be, but I think it... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
5
] | [
"Ydp60byhdw-",
"xnesyUfoQ-lg",
"OioQq4eSe3",
"oGZOqUtZa-c",
"3e7pmwoy4WX",
"9_gzQixi62P",
"-5TjlZnTl-B",
"hq0iRurxD-G",
"o42_yD1mGdQ",
"3VSCZoJRhD-",
"nips_2022_p_BVHgrvHD4",
"nips_2022_p_BVHgrvHD4",
"nips_2022_p_BVHgrvHD4",
"nips_2022_p_BVHgrvHD4"
] |
nips_2022_9T0Bnap5-j7 | DeepFoids: Adaptive Bio-Inspired Fish Simulation with Deep Reinforcement Learning | Our goal is to synthesize realistic underwater scenes with various fish species in different fish cages, which can be utilized to train computer vision models to automate fish counting and sizing tasks. It is a challenging problem to prepare a sufficiently diverse labeled dataset of images from aquatic environments. We solve this challenge by introducing an adaptive bio-inspired fish simulation. The behavior of caged fish changes based on the species, size and number of fish, and the size and shape of the cage, among other variables. However, a method to autonomously achieve schooling behavior for caged fish did not exist. In this paper, we propose a method for achieving schooling behavior for any given combination of variables, using multi-agent deep reinforcement learning (DRL) in various fish cages in arbitrary environments. Furthermore, to visually reproduce the underwater scene in different locations and seasons, we incorporate a physically-based underwater simulation. | Accept | This paper proposes to build photorealistic 3D models and simulation of fish schools in fish cages, with individual fish represented by swimming agents that are trained using multi-agent deep RL with bio-inspired rewards. The authors use these trained models, tuned to different fish species, to generate photorealistic images that will be fed to image YOLO-based detectors and counters, for applications to ecology and sustainable fishing. The trained fish detectors and counters is finally evaluated on real recordings.
Reviewers praised the clarity of the paper (vrph, e9JA), the motivation (vrph, ZV3m, RkxH, e9JA), the successful application of deep RL to a real problem (ZV3m, e9JA), the quality of the results (ZV3m, e9JA).
Reviewer vrph noted that the sim2real evaluation was insufficient, but the authors provided more details about comparisons of fish trajectories and distributions in the appendix. Reviewer ZV3m noted that no baseline (including Boids or random fish placement) had been used to train the fish detector and counter, and that the sizing task was mentioned but not done. Reviewer RkxH deplored the lack of metrics to qualify the learned fish behaviour. Reviewer e9JA noted that the paper was a bit long, with essential details in the appendix.
Reviewers agree on high scores (5, 6, 6, 7) and therefore I would recommend this paper for acceptance.
Thank you, Sincerely, Area Chair | train | [
"BF-NSC8ig2",
"1qzu-C3QR9n",
"2UAjOr7J3Kk",
"3VsO6V121fF",
"n_0emcKNDum",
"gSG7UVrnker",
"p2I0s3QawMF",
"lxU1FWxARPK",
"PQmvPT0eV2",
"sUNaOKASBR3",
"tQQVPSTDyNu",
"8H4AkDMnhc-"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for taking the time to read our comments and being open-minded to hear them out. We really appreciate that you updated the final rating.",
" Thank you for taking the time out to respond to my review. Having considered your response and manuscript, I have decided to update my rating. ",
" We ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"1qzu-C3QR9n",
"p2I0s3QawMF",
"3VsO6V121fF",
"gSG7UVrnker",
"8H4AkDMnhc-",
"PQmvPT0eV2",
"tQQVPSTDyNu",
"sUNaOKASBR3",
"nips_2022_9T0Bnap5-j7",
"nips_2022_9T0Bnap5-j7",
"nips_2022_9T0Bnap5-j7",
"nips_2022_9T0Bnap5-j7"
] |
nips_2022_voV_TRqcWh | Poisson Flow Generative Models | We propose a new "Poisson flow" generative model~(PFGM) that maps a uniform distribution on a high-dimensional hemisphere into any data distribution. We interpret the data points as electrical charges on the $z=0$ hyperplane in a space augmented with an additional dimension $z$, generating a high-dimensional electric field (the gradient of the solution to Poisson equation). We prove that if these charges flow upward along electric field lines, their initial distribution in the $z=0$ plane transforms into a distribution on the hemisphere of radius $r$ that becomes uniform in the $r \to\infty$ limit. To learn the bijective transformation, we estimate the normalized field in the augmented space. For sampling, we devise a backward ODE that is anchored by the physically meaningful additional dimension: the samples hit the (unaugmented) data manifold when the $z$ reaches zero. Experimentally, PFGM achieves current state-of-the-art performance among the normalizing flow models on CIFAR-10, with an Inception score of $9.68$ and a FID score of $2.35$. It also performs on par with the state-of-the-art SDE approaches while offering $10\times $ to $20 \times$ acceleration on image generation tasks. Additionally, PFGM appears more tolerant of estimation errors on a weaker network architecture and robust to the step size in the Euler method. The code is available at https://github.com/Newbeeer/poisson_flow . | Accept | All the reviewers agreed that the paper is novel and interesting with significant contributions. While there were certain concerns regarding the experimentation, clarity, and mathematical rigor initially, the extensive rebuttal provided by the authors addressed most of the concerns, hence some reviewers increased their scores. Hence, I am happy to recommend an acceptance for the paper.
However, I must say that I find the phrase "record breaking" academically inappropriate, and I kindly request the authors to replace it with a more academic phrase, such as achieving state-of-the-art performance. | train | [
"CVIAqo_jTxW",
"1CdrJVf4Nhs",
"8ZBR2iGeO42",
"oUBfMRmhXil",
"OI975URnNWy",
"NVHfvNPyc22",
"WGQRrd2iL0i",
"n-opbEgLJII",
"ZSeyK0VUiBV",
"O_iBPGOEIt",
"VTJh3jat3mJ",
"AhszDnhbxPd",
"MIpJKQ95NwL",
"c2GIWcy00bh"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors:\n\nThank you for your response. \nI decide to increase my rating to weak accept after reading your new response.\n\nBest,\nReviewer RM6S",
" I would like to thank the authors for answering my and other reviewers' questions. I have read all these comments and responses. Within a limited period of r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"8ZBR2iGeO42",
"O_iBPGOEIt",
"oUBfMRmhXil",
"NVHfvNPyc22",
"NVHfvNPyc22",
"c2GIWcy00bh",
"n-opbEgLJII",
"ZSeyK0VUiBV",
"MIpJKQ95NwL",
"AhszDnhbxPd",
"nips_2022_voV_TRqcWh",
"nips_2022_voV_TRqcWh",
"nips_2022_voV_TRqcWh",
"nips_2022_voV_TRqcWh"
] |
nips_2022_4BoN6bk-FEz | On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood | We provide an efficient unified plug-in approach for estimating symmetric properties of distributions given $n$ independent samples. Our estimator is based on profile-maximum-likelihood (PML) and is sample optimal for estimating various symmetric properties when the estimation error $\epsilon \gg n^{-1/3}$. This result improves upon the previous best accuracy threshold of $\epsilon \gg n^{-1/4}$ achievable by polynomial time computable PML-based universal estimators \cite{ACSS20, ACSS20b}. Our estimator reaches a theoretical limit for universal symmetric property estimation as \cite{Han20} shows that a broad class of universal estimators (containing many well known approaches including ours) cannot be sample optimal for every $1$-Lipschitz property when $\epsilon \ll n^{-1/3}$. | Accept | This work makes theoretical contributions to a long line of work on efficient property estimation. Specifically, the paper closes the gap on the best achievable results via the approximate profile maximum likelihood (PML) approach and the previously known information theoretic lower bound. The reviewers agreed that the technical arguments are novel and non-trivial and the result meets the bar for acceptance. | train | [
"2R0I652N7ji",
"hREX0s0nQHx",
"hmDAGk1vBxiu",
"GN2TnIrbMMo",
"nqRQ8LOOSCd",
"NYLhWSW_7Lu"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time and valuable feedback. We appreciate your writing suggestions. We respond to specific points below.\n\n{\\bf Concerns related to writing}: Thank you for the feedback, in the final revision, we will add more background in the introduction and other areas to make the paper self-contained and... | [
-1,
-1,
-1,
8,
5,
5
] | [
-1,
-1,
-1,
4,
4,
2
] | [
"NYLhWSW_7Lu",
"nqRQ8LOOSCd",
"GN2TnIrbMMo",
"nips_2022_4BoN6bk-FEz",
"nips_2022_4BoN6bk-FEz",
"nips_2022_4BoN6bk-FEz"
] |
nips_2022_hXzOqPlXDwm | KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation | Relative positional embeddings (RPE) have received considerable attention since RPEs effectively model the relative distance among tokens and enable length extrapolation. We propose KERPLE, a framework that generalizes relative position embedding for extrapolation by kernelizing positional differences. We achieve this goal using conditionally positive definite (CPD) kernels, a class of functions known for generalizing distance metrics. To maintain the inner product interpretation of self-attention, we show that a CPD kernel can be transformed into a PD kernel by adding a constant offset. This offset is implicitly absorbed in the Softmax normalization during self-attention. The diversity of CPD kernels allows us to derive various RPEs that enable length extrapolation in a principled way. Experiments demonstrate that the logarithmic variant achieves excellent extrapolation performance on three large language modeling datasets. Our implementation and pretrained checkpoints are released at~\url{https://github.com/chijames/KERPLE.git}. | Accept | In the context of transformer models, this paper builds on previous studies (ALiBi embeddings) to improve relative positional embeddings so that they generalize better on long sequences.
The key contribution of the paper, compared to the ALiBi approach, is a non-linear bias (instead of a linear one), proportional to the distance between token positions n and m, added to the pre-softmax attention score for all position pairs n and m. A theoretical justification based on kernel computation is provided for these specific terms.
The general formulation of the kernelized version of positional embeddings is viewed as a significant contribution by all reviewers.
Reviewers had concerns regarding the experimental section, but these have been addressed during the rebuttal phase. | train | [
"htkPpqg7tOo",
"fNTrMjYAEi8",
"IHjWP3rNp9e",
"j3CIOEHGy0",
"e3gB-6Pjqir",
"Xlr2Lfrz7c_",
"e4fYeXSr7UV",
"eHYoQxP09T3",
"CQTAuSPcyF8",
"jwd7DnXjfc",
"gOOjvymdyEg",
"pRfWl0w9k9j",
"dx_lGtQvkNk",
"cn11_NChZ0",
"MkhaC3uTXdj",
"_8xdCfvyoHL",
"9tzPIM3sB3",
"I2091TJvcvO",
"yJc2anvlmP",
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"... | [
" Dear reviewer,\n\nAs the discussion window is going to close in 5 mins, we can only provide some quick responses below:\n\nOnly wonder to what curves on log-kernels plot final learnt params correspond: We will add this to the plot in the final revision.\n\nThus even with proper positional embedding maybe our atte... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
5
] | [
"fNTrMjYAEi8",
"gOOjvymdyEg",
"j3CIOEHGy0",
"eHYoQxP09T3",
"e4fYeXSr7UV",
"_8xdCfvyoHL",
"dx_lGtQvkNk",
"MkhaC3uTXdj",
"cn11_NChZ0",
"pRfWl0w9k9j",
"nips_2022_hXzOqPlXDwm",
"I2091TJvcvO",
"sdQsCPqS22I",
"XlMI6uR8M03",
"PLVjlTjrLal",
"8qX5KUTkkFC",
"JSUHIYhfZPk",
"K1vLocEEa9a",
"W... |
nips_2022_ah2gZLdT9u | Staggered Rollout Designs Enable Causal Inference Under Interference Without Network Knowledge | Randomized experiments are widely used to estimate causal effects across many domains. However, classical causal inference approaches rely on independence assumptions that are violated by network interference, when the treatment of one individual influences the outcomes of others. All existing approaches require at least approximate knowledge of the network, which may be unavailable or costly to collect. We consider the task of estimating the total treatment effect (TTE), the average difference between the outcomes when the whole population is treated versus when the whole population is untreated. By leveraging a staggered rollout design, in which treatment is incrementally given to random subsets of individuals, we derive unbiased estimators for TTE that do not rely on any prior structural knowledge of the network, as long as the network interference effects are constrained to low-degree interactions among neighbors of an individual. We derive bounds on the variance of the estimators, and we show in experiments that our estimator performs well against baselines on simulated data. Central to our theoretical contribution is a connection between staggered rollout observations and polynomial extrapolation. | Accept | The paper proposes a staggered rollout randomized design for estimating the total treatment effect with network data, without requiring the knowledge of the network. This is an interesting problem that appears in many real world settings. The paper has many strengths and few weaknesses. I believe that the weaknesses can be addressed based on the authors' response. I strongly advise the authors to revise the manuscript based on the received comments when submitting the final version. | train | [
"lmkj-_EbOjB",
"CIrJgfScdMF6",
"C1yqP6EerzD",
"XjaMDCi6j10",
"Ficb_zuf1KY",
"b9oRMx3-bOg",
"iSOYk9Vac-s",
"s6UyZI5bzaQ",
"yrMx7oEiHB5",
"8w5mjTew5h"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for answering my questions. I would like to keep my score after reading the authors' response and other reviewers' evaluations. Thanks!",
" I thank the authors for answering my questions. My evaluation of the paper remains the same.",
" ## Low polynomial degree assumption\nIn... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"C1yqP6EerzD",
"XjaMDCi6j10",
"8w5mjTew5h",
"yrMx7oEiHB5",
"s6UyZI5bzaQ",
"iSOYk9Vac-s",
"nips_2022_ah2gZLdT9u",
"nips_2022_ah2gZLdT9u",
"nips_2022_ah2gZLdT9u",
"nips_2022_ah2gZLdT9u"
] |
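The abstract above ties staggered rollout observations to polynomial extrapolation. The sketch below only illustrates that idea on a toy outcome model with low-degree neighborhood interference; the rollout probabilities, the simulated outcome function, and the use of numpy's polyfit are assumptions made for this illustration, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 5000, 2                              # population size; assumed interaction degree
p_stages = np.linspace(0.0, 0.5, beta + 1)     # rollout treatment probabilities (illustrative)

# Toy potential outcomes with low-degree neighborhood interference (illustrative only).
neighbors = rng.integers(0, n, size=(n, 4))    # 4 random "neighbors" per unit

def outcomes(z):
    frac_treated_nbrs = z[neighbors].mean(axis=1)
    return (1.0 + 0.8 * z + 1.5 * frac_treated_nbrs
            + 0.4 * z * frac_treated_nbrs + rng.normal(scale=0.1, size=n))

# Staggered rollout: the treated set only grows from one stage to the next.
u = rng.random(n)
stage_means = [outcomes((u < p).astype(float)).mean() for p in p_stages]

# Fit a degree-beta polynomial in p to the stage-wise mean outcomes,
# then extrapolate to p = 1 (everyone treated) and p = 0 (no one treated).
coef = np.polyfit(p_stages, stage_means, deg=beta)
tte_estimate = np.polyval(coef, 1.0) - np.polyval(coef, 0.0)
print(f"estimated total treatment effect: {tte_estimate:.3f}")   # expectation is 2.7 in this toy model
```

Note that no network knowledge is used at estimation time: only the stage-wise treatment probabilities and the observed mean outcomes enter the extrapolation.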
nips_2022_ajH17-Pb43A | AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning | Deep neural networks have seen great success in recent years; however, training a deep model is often challenging as its performance heavily depends on the hyper-parameters used. In addition, finding the optimal hyper-parameter configuration, even with state-of-the-art (SOTA) hyper-parameter optimization (HPO) algorithms, can be time-consuming, requiring multiple training runs over the entire dataset
for different possible sets of hyper-parameters. Our central insight is that using an informative subset of the dataset for model training runs involved in hyper-parameter optimization, allows us to find the optimal hyper-parameter configuration significantly faster. In this work, we propose AUTOMATA, a gradient-based subset selection framework for hyper-parameter tuning. We empirically evaluate the effectiveness of AUTOMATA in hyper-parameter tuning through several experiments on real-world datasets in the text, vision, and tabular domains. Our experiments show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times and speedups of 3×-30× while achieving comparable performance to the hyper-parameters found using the entire dataset. | Accept | This paper proposes AUTOMATA, an approach that uses GradMatch to select subsets of data in order to accelerate hyperparameter tuning. The reviewers all found the approach to be practical and empirically effective. There were concerns about the robustness to different subset sizes, particularly across different datasets, but the authors demonstrated that AUTOMATA works well across a number of settings during the rebuttal period. The remaining criticism largely revolves around the novelty of the approach, but the majority of the reviewers believe that this is a useful application of gradient-based subset selection.
| test | [
"h6edbSduA17",
"mhyi8qchT_L",
"NRJqtoDnz25",
"1p-FB1Hjfz4i",
"NQ2atM7r5a",
"04cEgDfpkEg",
"xAgACA9vSLw",
"aP6buMc0M1D",
"xI0QkwFdUT_",
"24AeBbp_qo5",
"w1lIDVOzdrH",
"6R_mojAAK3h"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed response. Basically, the author's response is convincing me. The difficulty of applying the data subset selection to hyperparameter optimization was made clear by the discussion based on the Spearman rank correlation. I appreciate that the authors added Figure 1 (b) regarding this resul... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"NQ2atM7r5a",
"04cEgDfpkEg",
"xAgACA9vSLw",
"xI0QkwFdUT_",
"xI0QkwFdUT_",
"6R_mojAAK3h",
"w1lIDVOzdrH",
"24AeBbp_qo5",
"nips_2022_ajH17-Pb43A",
"nips_2022_ajH17-Pb43A",
"nips_2022_ajH17-Pb43A",
"nips_2022_ajH17-Pb43A"
] |
nips_2022_nEJMdZd8cIi | projUNN: efficient method for training deep networks with unitary matrices | In learning with recurrent or very deep feed-forward networks, employing unitary matrices in each layer can be very effective at maintaining long-range stability. However, restricting network parameters to be unitary typically comes at the cost of expensive parameterizations or increased training runtime. We propose instead an efficient method based on rank-$k$ updates -- or their rank-$k$ approximation -- that maintains performance at a nearly optimal training runtime. We introduce two variants of this method, named Direct (projUNN-D) and Tangent (projUNN-T) projected Unitary Neural Networks, that can parameterize full $N$-dimensional unitary or orthogonal matrices with a training runtime scaling as $O(kN^2)$. Our method either projects low-rank gradients onto the closest unitary matrix (projUNN-T) or transports unitary matrices in the direction of the low-rank gradient (projUNN-D). Even in the fastest setting ($k=1$), projUNN is able to train a model's unitary parameters to reach comparable performances against baseline implementations. In recurrent neural network settings, projUNN closely matches or exceeds benchmarked results from prior unitary neural networks. Finally, we preliminarily explore projUNN in training orthogonal convolutional neural networks, which are currently unable to outperform state of the art models but can potentially enhance stability and robustness at large depth. | Accept | This paper provides two routines to replace gradient updates with low-rank unitary updates, and provides extensive technical discussion and experiments. Reviewers are uniformly positive, and I also voice similar praises, e.g., I too appreciate the extensive discussion in appendices A and B, and the detailed experiments in the later appendices. As such, it is easy to recommend acceptance, and I will push for this to receive at least a spotlight. Even so, I urge the authors to make careful revisions for remaining issues raised by the reviewers, and to perform a full pass of their own. | train | [
"wXiDoo8ZK3v",
"GJcrEu5lQYY",
"Pvr7lwOXp0Z",
"ORIuZ0UYixG",
"OqQb548z9gC",
"M0mTBqsFewb",
"zwAIBLdqLmY",
"NUrpnjLQ5yW",
"n85nIkpKW19"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed responses. Indeed, it will be interesting to analyze and identify potential practical applications.",
" We thank the reviewer for their feedback and their positive comments about our work. \n\n**Reviewer**: “Unfortunately the CNN code was not available in time for the review” \\\n**Re... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
9,
8
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"OqQb548z9gC",
"n85nIkpKW19",
"NUrpnjLQ5yW",
"zwAIBLdqLmY",
"M0mTBqsFewb",
"nips_2022_nEJMdZd8cIi",
"nips_2022_nEJMdZd8cIi",
"nips_2022_nEJMdZd8cIi",
"nips_2022_nEJMdZd8cIi"
] |
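The projUNN abstract above describes stepping a unitary matrix along a rank-k gradient and returning to the closest unitary matrix. The snippet below sketches that generic idea with a naive full SVD (the standard polar-decomposition projection); it does not implement the paper's O(kN^2) rank-k routine, and the matrix size, rank, and learning rate are illustrative assumptions.

```python
import numpy as np

def closest_unitary(a):
    """Closest unitary/orthogonal matrix to `a` in Frobenius norm (polar factor via SVD)."""
    w, _, vh = np.linalg.svd(a)
    return w @ vh

rng = np.random.default_rng(0)
n, k, lr = 64, 1, 0.1

u = closest_unitary(rng.standard_normal((n, n)))   # random starting orthogonal matrix

# A rank-k "gradient" G = A B^T (random factors stand in for a real backprop gradient).
a_fac = rng.standard_normal((n, k))
b_fac = rng.standard_normal((n, k))
grad = a_fac @ b_fac.T

# Take a gradient step, then snap back onto the unitary manifold. The paper's point is that
# this projection can exploit the rank-k structure; the full SVD used here costs O(N^3)
# and is only meant to show what the update computes.
u_new = closest_unitary(u - lr * grad)

print("deviation from orthogonality:", np.linalg.norm(u_new.T @ u_new - np.eye(n)))
```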
nips_2022_GIZlheqznkT | SUNMASK: Mask Enhanced Control in Step Unrolled Denoising Autoencoders | This paper introduces SUNMASK, an approach for generative sequence modeling based on masked unrolled denoising autoencoders. By explicitly incorporating a conditional masking variable, as well as using this mask information to modulate losses during training based on expected exemplar difficulty, SUNMASK models discrete sequences without direct ordering assumptions. The addition of masking terms allows for fine-grained control during generation, starting from random tokens and a mask over subset variables, then predicting tokens which are again combined with a subset mask for subsequent repetitions. This iterative process gradually improves token sequences toward a structured output, while guided by proposal masks. The broad framework for unrolled denoising autoencoders is largely independent of model type, and we utilize both transformer and convolution based architectures in this work. We demonstrate the efficacy of this approach both qualitatively and quantitatively, applying SUNMASK to generative modeling of symbolic polyphonic music, and language modeling for English text. | Reject | This paper introduces SUNMASK for modeling discrete sequences. It builds upon previous works such as SUNDAE, Coconet and order-agnostic NADE, but uses a masking scheme that enables fine-grained or human-in-the-loop control during the generation. The qualitative experiments about musical inpainting and masking terms in language modeling do support this motivation to some extent. However, the reviewers are mainly concerned with both the algorithmic novelty and experiments of the paper.
Regarding the algorithmic novelty, some reviewers are concerned that the method is a straightforward combination of SUNDAE and Coconet. I tend to agree with this. The reviewers are also concerned that the paper could have done a better job in the introduction and background section by putting the method in a better context and better describing related methods such as SUNDAE. This could help highlight the novelty of the paper.
Regarding the experiments, some reviewers are concerned about language modeling (Fig. 2) experiments, and that they do not show much improvement over SUNDAE. I tend to agree, and this is also shown in the results of Table 3 where it is clear that the quality of the language model is not on a par with the recent developments.
Also, some of the motivations of the paper, especially the arguments about the "high trust / low trust" interpretation of the mask, were unclear to me.
In short, I believe the paper should be clarified and improved by addressing the above concerns. | train | [
"fRoqesZwU76",
"k-Gh-WPtVx",
"o0EH6UsShK_",
"XygLgdDlNDI",
"E-4PUGcg-_a",
"VDAW6PpqRdK",
"x8Ro-NDm18TD",
"m6NntbRInUh",
"avqO09Ign6e",
"UY1carN03Y0",
"ID1qM0TGIjC",
"207BpZrj73a",
"M2QDXQ3F5Qp"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the author’s response, the updated paper and other reviewers’ responses and have decided to keep my score. In hindsight, after reading other reviewers’ reviews, I think I should’ve given a five, it seems most reviewers agreed that the writing is not good enough for publication and the significance is ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
3,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
1,
4,
5,
2
] | [
"VDAW6PpqRdK",
"E-4PUGcg-_a",
"M2QDXQ3F5Qp",
"207BpZrj73a",
"ID1qM0TGIjC",
"UY1carN03Y0",
"avqO09Ign6e",
"nips_2022_GIZlheqznkT",
"nips_2022_GIZlheqznkT",
"nips_2022_GIZlheqznkT",
"nips_2022_GIZlheqznkT",
"nips_2022_GIZlheqznkT",
"nips_2022_GIZlheqznkT"
] |
nips_2022_wVc4Qg5Bhah | Extra-Newton: A First Approach to Noise-Adaptive Accelerated Second-Order Methods | In this work, we propose a universal and adaptive second-order method for minimization of second-order smooth, convex functions. Precisely, our algorithm achieves $O(\sigma / \sqrt{T})$ when the oracle feedback is stochastic with variance $\sigma$, and obtains the improved $O( 1 / T^3)$ convergence with deterministic oracles. Our method achieves this rate interpolation without knowing the nature of the oracle a priori, which was enabled by a parameter-free step-size that is oblivious to the knowledge of smoothness modulus, variance bounds and the diameter of the constrained set. To our knowledge, this is the first universal algorithm that achieves the aforementioned global guarantees within second-order convex optimization literature. | Accept | A solid theoretical paper that proposes a second-order method of iteration complexity O(\sigma T^{-1/2} + T^{-3}) for minimizing Hessian-smooth convex functions. A notable feature of the proposed method is its ability to adapt to problem parameters.
The paper has generated considerable discussion between the reviewers and the authors, which helped clarify several major concerns. Please make sure to take into account the reviewers' important feedback and criticisms in the revised version. | train | [
"fBi9nw8ej-F",
"UpCbeVFjqa",
"lVKFO5ClJG_",
"-vSsI-azIrl",
"kkHeYyMGtv5",
"9HoyvHp6iF2",
"biHiiudPQBn",
"qsWzzmF-Qyc",
"KgVjcUaU5h-",
"Igx8SFp3lT",
"zcxyskU1jy",
"JB4a4sf1rK9",
"h2BAVrUHu3c",
"p8tyy7GRljF",
"liysABFw85Y",
"4lSE_eaMgO",
"STCfj9vHn7J",
"HTXcWcdNi0H",
"swfTbUy3hC",
... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official... | [
" Thanks. Yes, I do find your responses convincing. I will maintain my original score.",
" Thank you once more for the quick response.\n\n- We will organize the proofs for implicit and explicit algorithm to avoid repetitions pertaining to the iterate updates with the same optimality condition (this mainly concern... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"UpCbeVFjqa",
"lVKFO5ClJG_",
"-vSsI-azIrl",
"kkHeYyMGtv5",
"biHiiudPQBn",
"sN7PW8lb3Vs",
"DG1z9NT8SAW",
"nips_2022_wVc4Qg5Bhah",
"nips_2022_wVc4Qg5Bhah",
"nips_2022_wVc4Qg5Bhah",
"JB4a4sf1rK9",
"HTXcWcdNi0H",
"nips_2022_wVc4Qg5Bhah",
"YvCI0jQ-GPA",
"DG1z9NT8SAW",
"DG1z9NT8SAW",
"sN7P... |
nips_2022_VVcSpAbR4zX | Learning to Discover and Detect Objects | We tackle the problem of novel class discovery, detection, and localization (NCDL). In this setting, we assume a source dataset with labels for objects of commonly observed classes. Instances of other classes need to be discovered, classified, and localized automatically based on visual similarity, without human supervision. To this end, we propose a two-stage object detection network Region-based NCDL (RNCDL), that uses a region proposal network to localize object candidates and is trained to classify each candidate, either as one of the known classes, seen in the source dataset, or one of the extended set of novel classes, with a long-tail distribution constraint on the class assignments, reflecting the natural frequency of classes in the real world. By training our detection network with this objective in an end-to-end manner, it learns to classify all region proposals for a large variety of classes, including those that are not part of the labeled object class vocabulary. Our experiments conducted using COCO and LVIS datasets reveal that our method is significantly more effective compared to multi-stage pipelines that rely on traditional clustering algorithms or use pre-extracted crops. Furthermore, we demonstrate the generality of our approach by applying our method to a large-scale Visual Genome dataset, where our network successfully learns to detect various semantic classes without explicit supervision. Code is available at: https://github.com/vlfom/RNCDL. | Accept | The meta reviewer has carefully read the paper, reviews, rebuttals, and discussions. The meta reviewer appreciates the authors' efforts to respond and revise the paper. The authors did a good job of convincing the reviewers. The meta reviewer believes that open-world object detection using self-supervised learning is of interest to the community. And the authors clearly explain the motivation that object discovery is modeled as a clustering problem and solved by Sinkhorn clustering. The authors are suggested to polish the paper considering the reviewers' comments. | train | [
"yTLQOQ0ShNT",
"IMV-IjCjaYr",
"GKjAupmCBAf",
"kliSC6t9R1W",
"Gz9jI2jPNju",
"kRR3oXzn45c",
"TMIn2rV3Dpc",
"0OPkJdN6QHg",
"BG2FcxVOOvxm",
"Edw7sksYCXY",
"Mm3CWP-YXxyR",
"vvINvu1X_7x",
"zgTM9ExFJu-1",
"_-3o-MKJrqk",
"wvqIY3jp-qL",
"G_GHjfbD_TA",
"6K6hsGYWLxf",
"SpotXBzdUPm"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for increasing their rating, and we are glad to have solved most of the reviewer's concerns. ",
" The rebuttal solves most of my concerns. I will raise my rating to 5.",
" We thank the reviewer for increasing their initial score, and we are happy that our rebuttal answered the reviewer's... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"IMV-IjCjaYr",
"vvINvu1X_7x",
"kRR3oXzn45c",
"vvINvu1X_7x",
"TMIn2rV3Dpc",
"_-3o-MKJrqk",
"Mm3CWP-YXxyR",
"BG2FcxVOOvxm",
"zgTM9ExFJu-1",
"nips_2022_VVcSpAbR4zX",
"SpotXBzdUPm",
"G_GHjfbD_TA",
"6K6hsGYWLxf",
"wvqIY3jp-qL",
"nips_2022_VVcSpAbR4zX",
"nips_2022_VVcSpAbR4zX",
"nips_2022_... |
nips_2022_H4DqfPSibmx | FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness | Transformers are slow and memory-hungry on long sequences, since the time and memory complexity of self-attention are quadratic in sequence length. Approximate attention methods have attempted to address this problem by trading off model quality to reduce the compute complexity, but often do not achieve wall-clock speedup. We argue that a missing principle is making attention algorithms IO-aware---accounting for reads and writes between levels of GPU memory. We propose FlashAttention, an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads/writes between GPU high bandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity of FlashAttention, showing that it requires fewer HBM accesses than standard attention, and is optimal for a range of SRAM sizes. We also extend FlashAttention to block-sparse attention, yielding an approximate attention algorithm that is faster than any existing approximate attention method. FlashAttention yields a 3x speedup on GPT-2 (seq. length 1K) and a 2.4x speedup on long-range arena (seq. length 1K-4K). FlashAttention also enables longer context in Transformers, yielding higher quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on long-document classification) and entirely new capabilities: the first Transformers to achieve better-than-chance performance on the Path-X challenge (seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1% accuracy). | Accept | The authors study the problem of improving the wall-clock time of performing exact and approximate attention in Transformer networks. Towards this, they identified the number of HBM accesses as a limiting factor. The authors propose FlashAttention, which employs tiling and fusion operations to efficiently implement the attention operation on (GPU) SRAM while minimizing the number of HBM accesses. Even though the proposed approach has a higher FLOP count than the standard attention implementation, it leads to reduced wall-clock time due to the smaller number of HBM accesses. The proposed FlashAttention can be easily modified to obtain a sparse attention method, namely block-sparse FlashAttention. The paper is very well written and the authors demonstrate the utility of the proposed FlashAttention/block-sparse FlashAttention on a wide range of settings. On many benchmarks, the proposed method allows for a longer context length, which leads to higher performance as compared to the existing approaches in the literature.
Overall, the findings of the paper are very interesting and impactful. All the reviewers are quite positive about the paper. During the rebuttal phase, the authors included additional experiments that further strengthen the contributions and value of the paper. Some of the reviewers pointed out that the paper builds on well-known techniques like tiling and fusion. However, given that the paper makes significant improvements on the timely and actively studied problem of making the attention operation efficient, and that its results have the potential to inspire many follow-up explorations, I recommend that the paper be accepted at NeurIPS 2022. | train | [
"lvqpfYcdHIt",
"J6tUmq7NU3J",
"A1ea5v7tAiS",
"S3rTSzZ5U-5",
"GXZ5zyuikDz",
"zRDcMITjd6",
"95mbT1tIc4",
"p-bFgQc7MY",
"lA1gAlhBPYR",
"BV773qo-ehP",
"kDFFlAU6eA1",
"xAhYrPDgE-",
"FDU7pQcNH4w",
"nLQRKmUZfxV",
"yBQDZgiqlbE",
"gzO66mRp8hl",
"YZxfMlFlZwK",
"KRQfEiBB7mu",
"NMXby1GMsJn"
... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"... | [
" Satisfied with the response of the authors and increasing the score to clear accept. Nice work.",
" Thank you for the productive discussion.\n\n**Q1: Comparison with fusion techniques and is FlashAttention sensitive to batch size?**\nFlashAttention is not particularly sensitive to batch size. We updated the ben... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4,
4
] | [
"p-bFgQc7MY",
"S3rTSzZ5U-5",
"xAhYrPDgE-",
"kDFFlAU6eA1",
"lA1gAlhBPYR",
"95mbT1tIc4",
"NMXby1GMsJn",
"KRQfEiBB7mu",
"YZxfMlFlZwK",
"kDFFlAU6eA1",
"gzO66mRp8hl",
"yBQDZgiqlbE",
"nLQRKmUZfxV",
"nips_2022_H4DqfPSibmx",
"nips_2022_H4DqfPSibmx",
"nips_2022_H4DqfPSibmx",
"nips_2022_H4DqfP... |
nips_2022_Zh21fp1B0vv | Rate-Optimal Online Convex Optimization in Adaptive Linear Control | We consider the problem of controlling an unknown linear dynamical system under adversarially-changing convex costs and full feedback of both the state and cost function. We present the first computationally-efficient algorithm that attains an optimal $\sqrt{T}$-regret rate compared to the best stabilizing linear controller in hindsight, while avoiding stringent assumptions on the costs such as strong convexity. Our approach is based on a careful design of non-convex lower confidence bounds for the online costs, and uses a novel technique for computationally-efficient regret minimization of these bounds that leverages their particular non-convex structure. | Accept | I recommend acceptance due to the unambiguously and uniformly positive opinions of the reviewers. The reviews identify the authors' clear contribution (improving the rate) for a problem of interest in the community (online linear control with relaxed assumptions). Concerns about clarity or alternative approaches were resolved during the discussion period. | train | [
"K4ZHoA9oele",
"7UemvrMIDa",
"B8pDBjrMgTd",
"D1iFjp_8c3mk",
"E13xJHw7z2VZ",
"4t19REvSEYa",
"22YnaQP8lpK",
"Gn1opXtLtgu",
"HzjmYtAY4GH",
"iANWJ8_6AE",
"R1phW3dyZ0y",
"vcxT36D_vVN",
"IK7Mgu8y2W"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hi, thanks for the response.\n\nI'm not fully convinced that using convexity + Jensen on the true cost does not work; I understand that the optimistic cost is a non-starter. However, this is something I need to spend some time on to convince myself or disabuse myself of this idea. My score (or possible increments... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"HzjmYtAY4GH",
"22YnaQP8lpK",
"4t19REvSEYa",
"E13xJHw7z2VZ",
"Gn1opXtLtgu",
"IK7Mgu8y2W",
"vcxT36D_vVN",
"R1phW3dyZ0y",
"iANWJ8_6AE",
"nips_2022_Zh21fp1B0vv",
"nips_2022_Zh21fp1B0vv",
"nips_2022_Zh21fp1B0vv",
"nips_2022_Zh21fp1B0vv"
] |
nips_2022_6I3zJn9Slsb | Model-based Lifelong Reinforcement Learning with Bayesian Exploration | We propose a model-based lifelong reinforcement-learning approach that estimates a hierarchical Bayesian posterior distilling the common structure shared across different tasks. The learned posterior combined with a sample-based Bayesian exploration procedure increases the sample efficiency of learning across a family of related tasks. We first derive an analysis of the relationship between the sample complexity and the initialization quality of the posterior in the finite MDP setting. We next scale the approach to continuous-state domains by introducing a Variational Bayesian Lifelong Reinforcement Learning algorithm that can be combined with recent model-based deep RL methods, and that exhibits backward transfer. Experimental results on several challenging domains show that our algorithms achieve both better forward and backward transfer performance than state-of-the-art lifelong RL methods. | Accept | Unanimous accept from 3 experienced reviewers with good confidences
Important topic (lifelong RL), clearly explained, well-formulated with a variational and hierarchical Bayesian model, and evaluated on a range of relevant experiments in discrete and continuous settings in MuJoCo, and also on the MetaWorld MT10 & MT50 domains in response to questions from reviewers osbM and thFd, adapted to the lifelong setting of rolling out tasks sequentially, to include tasks like opening boxes/closing drawers/opening windows. | train | [
"EMaHHqc1ey0",
"SIXBHrFf0W",
"41BlqZJ6jl4",
"PIF2-PqXls",
"uqqUopkdMrx",
"xOtwPgQi0e",
"MjLIBVftEbu",
"YNiVMvT7Ut0F",
"lxAEt-SsinC",
"IViSTXtSriX",
"sT9pl32FyVCy",
"5vFVtu6FsfiN",
"erzqn4NDXEQ",
"sQ-h087_iFf",
"1ASkAl7_Cp",
"yc2WBM8uVHA",
"M12SSivMFNh",
"SUPDPsX1U8G",
"-XYirE-RlS... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response, I have updated my review. ",
" We want to thank the reviewer once again for helping us improve our paper. We have run experiments on MetaWorld MT10 & MT50 domains as the reviewer suggested. We also made the role of the baseline \"Single-task MBRL\" clearer in the paper, w... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"M12SSivMFNh",
"M12SSivMFNh",
"PIF2-PqXls",
"uqqUopkdMrx",
"MjLIBVftEbu",
"lxAEt-SsinC",
"5vFVtu6FsfiN",
"erzqn4NDXEQ",
"sQ-h087_iFf",
"-XYirE-RlS5",
"-XYirE-RlS5",
"SUPDPsX1U8G",
"SUPDPsX1U8G",
"SUPDPsX1U8G",
"M12SSivMFNh",
"M12SSivMFNh",
"nips_2022_6I3zJn9Slsb",
"nips_2022_6I3zJn... |
nips_2022_0ISChqjlrq | Knowledge Distillation: Bad Models Can Be Good Role Models | Large neural networks trained in the overparameterized regime are able to fit noise to zero train error. Recent work of Nakkiran and Bansal has empirically observed that such networks behave as “conditional samplers” from the noisy distribution. That is, they replicate the noise in the train data to unseen examples. We give a theoretical framework for studying this conditional sampling behavior in the context of learning theory. We relate the notion of such samplers to knowledge distillation, where a student network imitates the outputs of a teacher on unlabeled data. We show that samplers, while being bad classifiers, can be good teachers. Concretely, we prove that distillation from samplers is guaranteed to produce a student which approximates the Bayes optimal classifier. Finally, we show that some common learning algorithms (e.g., Nearest-Neighbours and Kernel Machines) can often generate samplers when applied in the overparameterized regime. | Accept | The paper develops a theoretical framework (in the context of learning theory) for training overparameterized networks with label noise, and shows that, in the context of ensemble distillation, teacher networks need not be good classifiers as long as they are good conditional samplers (in the sense defined in the paper).
This is a primarily theoretical paper, with limited empirical results. All reviewers are positive about the paper: one reviewer recommends acceptance (7), the other three recommend weak acceptance (6). The reviewers are positive about the theoretical aspects of the paper, describing them as interesting and technically sound, but are less convinced about the practicality of the theory or the significance of the empirical results.
Given that the theoretical contribution of the paper seems significant and no concerns have been raised about the soundness of the theoretical arguments, I'm happy to recommend acceptance. | train | [
"77O3MAoA7_Z",
"oQZABBI-3iN",
"uueY8B-kexm",
"BMRmjV-vFm",
"XqmhksFa9kn",
"ZkPyVjHxaRh",
"xGBF2sOZgT",
"Cq1h8BbIMcW",
"udGDtPp90ZQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for the rebuttal addressing my concerns. After reading I would like to keep my initial rating. ",
" We would like to thank the reviewer for championing our paper.\nWe will amend figure 1 according to your suggestions. Regarding the labeling of an external set, while it is not p... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"BMRmjV-vFm",
"udGDtPp90ZQ",
"Cq1h8BbIMcW",
"xGBF2sOZgT",
"ZkPyVjHxaRh",
"nips_2022_0ISChqjlrq",
"nips_2022_0ISChqjlrq",
"nips_2022_0ISChqjlrq",
"nips_2022_0ISChqjlrq"
] |
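The abstract above claims that a "sampler" teacher, one that reproduces label noise on unseen points, can still be distilled into a student that approaches the Bayes optimal classifier. The toy one-dimensional simulation below is only an illustration of that claim; the choice of $\eta(x)$, the pool size, and the plain logistic-regression student are assumptions made for this sketch, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def eta(x):                       # true conditional probability P(y = 1 | x)
    return 1.0 / (1.0 + np.exp(-3.0 * x))

# A "sampler" teacher: for any new x it outputs a label drawn from eta(x),
# i.e. it reproduces the label noise instead of the Bayes-optimal decision.
def sampler_teacher(x):
    return (rng.random(x.shape) < eta(x)).astype(float)

# Distillation: label a large unlabeled pool with the (noisy) teacher and fit a simple
# student by logistic regression trained with plain gradient descent.
x_pool = rng.normal(size=20000)
y_teacher = sampler_teacher(x_pool)

w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x_pool + b)))
    gw, gb = np.mean((p - y_teacher) * x_pool), np.mean(p - y_teacher)
    w, b = w - 1.0 * gw, b - 1.0 * gb

# The student's thresholded rule should approach the Bayes classifier 1{eta(x) > 1/2},
# even though each individual teacher label is noisy and the teacher itself is a weak classifier.
x_test = rng.normal(size=20000)
y_test = (rng.random(x_test.shape) < eta(x_test)).astype(float)
teacher_acc = np.mean(sampler_teacher(x_test) == y_test)
student_acc = np.mean(((w * x_test + b) > 0) == y_test)
bayes_acc = np.mean((eta(x_test) > 0.5) == y_test)
print(f"teacher {teacher_acc:.3f}  student {student_acc:.3f}  Bayes {bayes_acc:.3f}")
```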
nips_2022_1Re5RKwpieG | AVLEN: Audio-Visual-Language Embodied Navigation in 3D Environments | Recent years have seen embodied visual navigation advance in two distinct directions: (i) in equipping the AI agent to follow natural language instructions, and (ii) in making the navigable world multimodal, e.g., audio-visual navigation. However, the real world is not only multimodal, but also often complex, and thus in spite of these advances, agents still need to understand the uncertainty in their actions and seek instructions to navigate. To this end, we present AVLEN -- an interactive agent for Audio-Visual-Language Embodied Navigation. Similar to audio-visual navigation tasks, the goal of our embodied agent is to localize an audio event via navigating the 3D visual world; however, the agent may also seek help from a human (oracle), where the assistance is provided in free-form natural language. To realize these abilities, AVLEN uses a multimodal hierarchical reinforcement learning backbone that learns: (a) high-level policies to choose either audio-cues for navigation or to query the oracle, and (b) lower-level policies to select navigation actions based on its audio-visual and language inputs. The policies are trained via rewarding for the success on the navigation task while minimizing the number of queries to the oracle. To empirically evaluate AVLEN, we present experiments on the SoundSpaces framework for semantic audio-visual navigation tasks. Our results show that equipping the agent to ask for help leads to a clear improvement in performances, especially in challenging cases, e.g., when the sound is unheard during training or in the presence of distractor sounds. | Accept | The authors present a model for soundscapes that is able to request guidance from an oracle. This requires that the agent know when to query the user (oracle) for help. This sits somewhere between the work on oracle guidance and language instruction following. Ablations and comparisons are provided but do not fully explore the natural set of questions about why/where the model performs best. Note that this oracle deviates from that in the most relevant prior work by Nguyen and these instructions differ from those generated by a model in papers by Thomason.
Overall, the work makes a nice contribution to bridging existing literatures within EAI. | train | [
"nHAmiQ-Gbdg",
"OlnokoEwrCs",
"hsUQP9H1V2",
"toRvHxNcbG-",
"TqPOn4fvvHaL",
"CLaEB-Al3dE",
"10YnPRXuld9",
"HdMuxi0mnp",
"wAorAgVrNCw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. My questions have been answered and I'm keeping the initial score.",
" We thank the reviewer for going through our paper and providing thoughtful comments.\n- **Prior works:** We agree with the reviewer that there have been independent prior works for Audio-visual navigation, as wel... | [
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
5
] | [
"toRvHxNcbG-",
"10YnPRXuld9",
"wAorAgVrNCw",
"HdMuxi0mnp",
"CLaEB-Al3dE",
"nips_2022_1Re5RKwpieG",
"nips_2022_1Re5RKwpieG",
"nips_2022_1Re5RKwpieG",
"nips_2022_1Re5RKwpieG"
] |
nips_2022_zp_Cp38qJE0 | Transferring Fairness under Distribution Shifts via Fair Consistency Regularization | The increasing reliance on ML models in high-stakes tasks has raised a major concern on fairness violations. Although there has been a surge of work that improves algorithmic fairness, most of them are under the assumption of an identical training and test distribution. In many real-world applications, however, such an assumption is often violated as previously trained fair models are often deployed in a different environment, and the fairness of such models has been observed to collapse. In this paper, we study how to transfer model fairness under distribution shifts, a widespread issue in practice. We conduct a fine-grained analysis of how the fair model is affected under different types of distribution shifts, and find that domain shifts are more challenging than subpopulation shifts. Inspired by the success of self-training in transferring accuracy under domain shifts, we derive a sufficient condition for transferring group fairness. Guided by it, we propose a practical algorithm with a fair consistency regularization as the key component. A synthetic dataset benchmark, which covers all types of distribution shifts, is deployed for experimental verification of the theoretical findings. Experiments on synthetic and real datasets including image and tabular data demonstrate that our approach effectively transfers fairness and accuracy under various types of distribution shifts. | Accept | The reviews are a bit divergent. While all the reviewers appreciate the clarity of the paper and the theory-inspired proposed algorithm, they raised some concerns e.g., on the assumption employed for obtaining a theoretical result, as well as on marginal (or worse) EO fairness performance in some cases. Although concerns on the fairness performance improvement in light of the employed metrics are still unresolved, many of the concerns are properly addressed, and with regard to the writing quality and insights, I believe that the paper is worth being published. Hence, I recommend the acceptance of this paper. | train | [
"aol1d3MKJWN",
"d3pcm4FbGWa",
"jyXd1-oW3wp",
"MKBS3ouFcdY",
"bDlcwxFOm1Y",
"jwijPsGZ3Jj",
"JGrWs2m6-pa",
"kVCtvu3PUENk",
"NVjc4fM89Kp",
"X8JPFRyGbe0",
"WpflgIZP6ngR",
"XP-ECiA315Vq",
"v8NblNcHYhh",
"RU--JPZVaol",
"E3hxaePkZWC",
"HYGqRu4pAs0",
"wDUkqhkflV",
"CVb5pe2KJFW"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification.\n\nI do not agree that the variance of group accuracy should be always prioritized before equalized odds. Also the usage of \"trivial fairness\" is rather ad-hoc and it only means the model outputs a constant resulting $\\Delta_{odds}=0$, but clearly currently all the baselines have ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
4
] | [
"d3pcm4FbGWa",
"jyXd1-oW3wp",
"kVCtvu3PUENk",
"CVb5pe2KJFW",
"wDUkqhkflV",
"JGrWs2m6-pa",
"E3hxaePkZWC",
"WpflgIZP6ngR",
"v8NblNcHYhh",
"nips_2022_zp_Cp38qJE0",
"XP-ECiA315Vq",
"CVb5pe2KJFW",
"RU--JPZVaol",
"wDUkqhkflV",
"HYGqRu4pAs0",
"nips_2022_zp_Cp38qJE0",
"nips_2022_zp_Cp38qJE0"... |
nips_2022_HMs5pxZq1If | A Consistent and Differentiable Lp Canonical Calibration Error Estimator | Calibrated probabilistic classifiers are models whose predicted probabilities can directly be interpreted as uncertainty estimates. It has been shown recently that deep neural networks are poorly calibrated and tend to output overconfident predictions. As a remedy, we propose a low-bias, trainable calibration error estimator based on Dirichlet kernel density estimates, which asymptotically converges to the true $L_p$ calibration error. This novel estimator enables us to tackle the strongest notion of multiclass calibration, called canonical (or distribution) calibration, while other common calibration methods are tractable only for top-label and marginal calibration. The computational complexity of our estimator is $\mathcal{O}(n^2)$, the convergence rate is $\mathcal{O}(n^{-1/2})$, and it is unbiased up to $\mathcal{O}(n^{-2})$, achieved by a geometric series debiasing scheme. In practice, this means that the estimator can be applied to small subsets of data, enabling efficient estimation and mini-batch updates. The proposed method has a natural choice of kernel, and can be used to generate consistent estimates of other quantities based on conditional expectation, such as the sharpness of a probabilistic classifier. Empirical results validate the correctness of our estimator, and demonstrate its utility in canonical calibration error estimation and calibration error regularized risk minimization. | Accept | This meta review is based on the reviews, the authors' rebuttal and the discussion with the reviewers, and ultimately my own judgement on the paper. There was a consensus that the paper contributes an interesting and novel strategy for model calibration based on Dirichlet kernel density estimates, and the reviewers praised several of its aspects. I feel this work deserves to be featured at NeurIPS and will attract interest from the community. I would like to personally invite the authors to carefully revise their manuscript to take into account the remarks and suggestions made by reviewers. Congratulations! | train | [
"1jUmse_UCqL",
"I68g0ljp879",
"-z0UsNFReoX",
"INxiFw0o5bg",
"fN2Dt5zxzP",
"E0jYpnf2dbq",
"wnVN40iO5v",
"w9HLGwXDGBv",
"deKK3azGyWY",
"jL4HcTBT-gD",
"rVtDxr-0Lf8"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response. I stand by my initial review and rating, I think this is a strong paper, and I thank them for the references they pointed me to. ",
" Thank you for the great suggestions and the opportunity to make our propositions more precise. We updated the propositions and the proofs ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"fN2Dt5zxzP",
"-z0UsNFReoX",
"E0jYpnf2dbq",
"nips_2022_HMs5pxZq1If",
"deKK3azGyWY",
"wnVN40iO5v",
"rVtDxr-0Lf8",
"jL4HcTBT-gD",
"nips_2022_HMs5pxZq1If",
"nips_2022_HMs5pxZq1If",
"nips_2022_HMs5pxZq1If"
] |
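The abstract above describes a plug-in estimator of the $L_p$ canonical calibration error built from kernel estimates of a conditional expectation. The sketch below shows only that generic plug-in structure with a Gaussian stand-in kernel; the paper's Dirichlet kernels, debiasing scheme, and bandwidth selection are not reproduced, and the miscalibrated toy classifier is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, h = 1000, 3, 0.1

# Toy setup: a deliberately overconfident classifier (sharpened softmax) on 3 classes.
logits = rng.normal(size=(n, c))
true_p = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)          # data-generating P(y|x)
probs = np.exp(2 * logits) / np.exp(2 * logits).sum(1, keepdims=True)   # model's predictions
labels = np.array([rng.choice(c, p=tp / tp.sum()) for tp in true_p])
y_onehot = np.eye(c)[labels]

# Plug-in estimate of E[Y | f(X) = p] via kernel regression over the predicted probability
# vectors (Gaussian stand-in kernel; the paper instead uses Dirichlet kernels plus debiasing).
d2 = ((probs[:, None, :] - probs[None, :, :]) ** 2).sum(-1)
k = np.exp(-d2 / (2 * h ** 2))
np.fill_diagonal(k, 0.0)                       # leave-one-out style weighting
cond_mean = k @ y_onehot / k.sum(1, keepdims=True)

# Squared L2 canonical calibration error: average gap between prediction and conditional mean.
cal_err = np.mean(((cond_mean - probs) ** 2).sum(1))
print(f"estimated squared L2 canonical calibration error: {cal_err:.4f}")
```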
nips_2022_wKf5dRSartn | On Batch Teaching with Sample Complexity Bounded by VCD | In machine teaching, a concept is represented by (and inferred from) a small number of labeled examples. Various teaching models in the literature cast the interaction between teacher and learner in a way to obtain a small complexity (in terms of the number of examples required for teaching a concept) while obeying certain constraints that are meant to prevent unfair collusion between teacher and learner. In recent years, one major research goal has been to show interesting relationships between teaching complexity and the VC-dimension (VCD). So far, the only interesting relationship known from batch teaching settings is an upper bound quadratic in the VCD, on a parameter called recursive teaching dimension. The only known upper bound on teaching complexity that is linear in VCD was obtained in a model of teaching with sequences rather than batches.
This paper is the first to provide an upper bound of VCD on a batch teaching complexity parameter. This parameter, called STDmin, is introduced here as a model of teaching that intuitively incorporates a notion of ``importance'' of an example for a concept. In designing the STDmin teaching model, we argue that the standard notion of collusion-freeness from the literature may be inadequate for certain applications; we hence propose three desirable properties of teaching complexity and demonstrate that they are satisfied by STDmin. | Accept | The reviewers agree that this is a solid contribution. Please do revise the paper according to the reviewers' comments and the discussion. | train | [
"yuYkWlOZ_2y",
"Uc34suvKV7L",
"md88N63l9A0",
"aj1_w_ORKAD",
"pcc2Lt5vs7U",
"WGi9CWhfZEk",
"qJqrJgzTdLK",
"q6lqurZJPuJ",
"SKCFJrL5Oou",
"VEvB_dp5Dwh",
"fcfHhTO26n_",
"DZ84dxOzPtJ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response to our rebuttal. We are glad to read that you are not opposed to acceptance of the paper, and hope that our responses are helpful in improving your rating. If you have any other comments or feedback, please let us know. Thank you again for your careful review.",
" \nI have read all t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
1,
3,
3
] | [
"Uc34suvKV7L",
"qJqrJgzTdLK",
"q6lqurZJPuJ",
"nips_2022_wKf5dRSartn",
"DZ84dxOzPtJ",
"fcfHhTO26n_",
"VEvB_dp5Dwh",
"SKCFJrL5Oou",
"nips_2022_wKf5dRSartn",
"nips_2022_wKf5dRSartn",
"nips_2022_wKf5dRSartn",
"nips_2022_wKf5dRSartn"
] |
nips_2022_U6vBmFL9SxP | Nonlinear Sufficient Dimension Reduction with a Stochastic Neural Network | Sufficient dimension reduction is a powerful tool to extract core information hidden in the high-dimensional data and has potentially many important applications in machine learning tasks. However, the existing nonlinear sufficient dimension reduction methods often lack the scalability necessary for dealing with large-scale data. We propose a new type of stochastic neural network under a rigorous probabilistic framework and show that it can be used for sufficient dimension reduction for large-scale data. The proposed stochastic neural network is trained using an adaptive stochastic gradient Markov chain Monte Carlo algorithm, whose convergence is rigorously studied in the paper as well. Through extensive experiments on real-world classification and regression problems, we show that the proposed method compares favorably with the existing state-of-the-art sufficient dimension reduction methods and is computationally more efficient for large-scale data. | Accept | This paper gives a new method to perform nonlinear sufficient dimension reduction by utilizing a stochastic neural network. The derivation of the proposed method is justified by some theoretical background, and a convergence rate analysis is given for the derived algorithm. The practical performance is evaluated by some numerical experiments on real datasets.
Although there is some related work, the proposed model is new. The theoretical justification of the proposed method is solid. The paper is overall clearly written.
The numerical experiments properly show the effectiveness of the method (although they are rather small).
In summary, this paper presents a novel method with nice theoretical and numerical justifications. I recommend acceptance. | val | [
"h2mT4oEYnnX",
"m4jiTgkfr6X",
"IPtvOs6iehS",
"Yt5sA55s8ka",
"3FjSrLMnQF",
"kFZjQfsIl9Z",
"3oLpsum7su",
"pxuqW08A_sl",
"Ws-uGbtLXZm"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Do you have any follow-up questions on our response? We are looking forward to your further comments. ",
" Do you have any follow-up questions on our response? We are looking forward to your further comments. ",
" Do you have any follow-up questions on our response? We are looking forward to your further c... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"Yt5sA55s8ka",
"3FjSrLMnQF",
"kFZjQfsIl9Z",
"Ws-uGbtLXZm",
"pxuqW08A_sl",
"3oLpsum7su",
"nips_2022_U6vBmFL9SxP",
"nips_2022_U6vBmFL9SxP",
"nips_2022_U6vBmFL9SxP"
] |
nips_2022_-r6-WNKfyhW | Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively | Large-scale pre-trained language models have achieved impressive results on a wide range of downstream tasks recently. However, fine-tuning an extremely large-scale pre-trained language model on limited target datasets is often plagued by overfitting and representation degradation. In this paper, we propose a Dynamic Parameter Selection (DPS) algorithm for the large-scale pre-trained models during fine-tuning, which adaptively selects a more promising subnetwork to perform staging updates based on gradients of back-propagation.
Experiments on the GLUE benchmark show that DPS outperforms previous fine-tuning methods in terms of overall performance and stability, and consistently achieves better results with variable pre-trained language models. In addition, DPS brings a large magnitude of improvement in out-of-domain transferring experiments and low-resource scenarios, which shows that it can maintain stable general contextual features and reduce the representation collapse. We release our code at \url{https://github.com/ZhangHaojie077/DPS}. | Accept | The reviewers were very positive about this paper, which proposes a method for selective fine-tuning of language models on downstream tasks. The method works well in various settings including realistic out-of-distribution and low-resource scenarios.
The main strengths are:
* useful idea
* conceptually simple method
* the choice of datasets
* the use of competitive models
* convincing experiments in multiple settings, including OOD generalization
The reviewers also noted some weaknesses, especially:
* the need for a first pass (a computational bottleneck) --> author response seems satisfactory
* the choice of baseline to compare against
* need for theoretical guarantees --> to me, this is not strictly necessary for an empirical paper
* experiments only with classification tasks --> to me, the choice of tasks (GLUE, NLI) is reasonable, as the authors argue
* uninformative discussion of limitations
The authors did a fine job addressing most of these issues in their response and they should update their paper accordingly. Concerning the last point: the authors did not address this point, and are strongly encouraged to do so in their revision. | train | [
"S-w-3nfSB7L",
"2xEA64734aj",
"U5DsdvtNmLN",
"vPRFccuUYbM",
"OWWQm2rbVLT",
"q2S06-V0F5",
"OS_I_UlP90j",
"6Y5Edwvn7xM",
"DK_5UdKnTVG"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your careful and valuable comments. We will answer your questions point by point.\n\nQ1: Some minor spelling and grammatical mistakes\n\nA1: Thanks for pointing out spelling and grammatical mistakes. We will correct them in the new version.\n\nQ2: The treatment of related work (Sec 5) is quite superfic... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"DK_5UdKnTVG",
"6Y5Edwvn7xM",
"OS_I_UlP90j",
"q2S06-V0F5",
"q2S06-V0F5",
"nips_2022_-r6-WNKfyhW",
"nips_2022_-r6-WNKfyhW",
"nips_2022_-r6-WNKfyhW",
"nips_2022_-r6-WNKfyhW"
] |
nips_2022_WHqVVk3UHr | Exploring the Whole Rashomon Set of Sparse Decision Trees | In any given machine learning problem, there may be many models that could explain the data almost equally well. However, most learning algorithms return only one of these models, leaving practitioners with no practical way to explore alternative models that might have desirable properties beyond what could be expressed within a loss function. The Rashomon set is the set of these all almost-optimal models. Rashomon sets can be extremely complicated, particularly for highly nonlinear function classes that allow complex interaction terms, such as decision trees. We provide the first technique for completely enumerating the Rashomon set for sparse decision trees; in fact, our work provides the first complete enumeration of any Rashomon set for a non-trivial problem with a highly nonlinear discrete function class. This allows the user an unprecedented level of control over model choice among all models that are approximately equally good. We represent the Rashomon set in a specialized data structure that supports efficient querying and sampling. We show three applications of the Rashomon set: 1) it can be used to study variable importance for the set of almost-optimal trees (as opposed to a single tree), 2) the Rashomon set for accuracy enables enumeration of the Rashomon sets for balanced accuracy and F1-score, and 3) the Rashomon set for a full dataset can be used to produce Rashomon sets constructed with only subsets of the data set. Thus, we are able to examine Rashomon sets across problems with a new lens, enabling users to choose models rather than be at the mercy of an algorithm that produces only a single model. | Accept | This paper presents an algorithm to completely enumerate the Rashomon set of sparse decision trees using a branch-and-bound search over a set of hierarchical maps. The reviewers were in agreement that this paper is an important contribution to the field. The method is particularly useful for identifying near-optimal decision trees and variable importance scoring.
The problem of the Rashomon effect, where many equally good models could explain the data well, is a longstanding concern across nearly all machine learning applications. The Rashomon set is the set of such nearly-optimal models. While researchers would have liked to explore the Rashomon set, computational feasibility has limited such endeavors. This paper enables a shift in focus from a single optimal model to a set of near-optimal models for a large and widely used class of models. Therefore, it can be expected that this work will have a groundbreaking impact on multiple areas of machine learning. | train | [
"DFRAYD2zOO5",
"ILUwzUm4Dj2",
"x6i-JQeYvhW",
"_oT_o4ChLHx",
"YhOKnVLoqC",
"5HDSfrmGnAz",
"imw51RHUOv5",
"UQ5gKpTxx3-",
"NF5_Fy1euYl"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification. This is just an acknowledgment that I've read the comment. The author's responses answered all unclear points I'd like to know, in particular, the per-leaf penalty, lambda term, corresponds to the sparsity-inducing factor and 'sparse' means less leaves for ensuring interpretability. ... | [
-1,
-1,
-1,
-1,
-1,
9,
7,
9,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
3
] | [
"x6i-JQeYvhW",
"NF5_Fy1euYl",
"UQ5gKpTxx3-",
"imw51RHUOv5",
"5HDSfrmGnAz",
"nips_2022_WHqVVk3UHr",
"nips_2022_WHqVVk3UHr",
"nips_2022_WHqVVk3UHr",
"nips_2022_WHqVVk3UHr"
] |
nips_2022_NtwEUZE6VcL | Global Convergence of Direct Policy Search for State-Feedback $\mathcal{H}_\infty$ Robust Control: A Revisit of Nonsmooth Synthesis with Goldstein Subdifferential | Direct policy search has been widely applied in modern reinforcement learning and continuous control. However, the theoretical properties of direct policy search on nonsmooth robust control synthesis have not been fully understood. The optimal $\mathcal{H}_\infty$ control framework aims at designing a policy to minimize the closed-loop $\mathcal{H}_\infty$ norm, and is arguably the most fundamental robust control paradigm. In this work, we show that direct policy search is guaranteed to find the global solution of the robust $\mathcal{H}_\infty$ state-feedback control design problem. Notice that policy search for optimal $\mathcal{H}_\infty$ control leads to a constrained nonconvex nonsmooth optimization problem, where the nonconvex feasible set consists of all the policies stabilizing the closed-loop dynamics. We show that for this nonsmooth optimization problem, all Clarke stationary points are global minimum. Next, we identify the coerciveness of the closed-loop $\mathcal{H}_\infty$ objective function, and prove that all the sublevel sets of the resultant policy search problem are compact. Based on these properties, we show that Goldstein's subgradient method and its implementable variants can be guaranteed to stay in the nonconvex feasible set and eventually find the global optimal solution of the $\mathcal{H}_\infty$ state-feedback synthesis problem. Our work builds a new connection between nonconvex nonsmooth optimization theory and robust control, leading to an interesting global convergence result for direct policy search on optimal $\mathcal{H}_\infty$ synthesis. | Accept | The paper analyses direct policy search as an algorithm for H_inf control synthesis, specifically for the problem of minimizing the H_inf closed-loop norm, and shows that it can achieve global optima. The authors observe that this is a constrained nonconvex nonsmooth optimization problem and, by studying the landscape, establish the existence of a unique global minimizing set and show that all Clarke stationary points are global minima. They further propose using the Goldstein subdifferential method and show it can stay in the feasible set and find a globally optimal state-feedback controller.
Overall the reviewers strongly appreciated the contribution and clarity of the paper. Some reviewers raised the concern of asymptotic vs. non-asymptotic results but were eventually satisfied by the author response, which highlighted bounds for (delta, epsilon)-stationary points and the subtlety of (delta, epsilon)-stationary points not being close to the global minima. There was unanimous agreement among the reviewers that the paper would be a strong contribution to the conference and hence I recommend acceptance. | train | [
"xTNPYJ7psg1",
"qhePS0cP9tm",
"Qv9ja-O1EFJ",
"t1PCo8HWpu4v",
"JI1dpT4vLn2",
"j5lBpHFqaH",
"Bon4L81-E0",
"AU1dRZ24lhf",
"_NK13v25bVx",
"7-VS9gOtRuR",
"2kxODQo1tMB",
"CLnAUlQttao",
"nj5nH57OG6"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my concerns. I appreciated the changes to the manuscript and have raised my score to a 7. I still would prefer some more insight on the magnitude of Delta_0 in typical situations, but I think the changes make the paper more than appropriate for the NeurIPS community.\n\nI did notice that ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"Qv9ja-O1EFJ",
"JI1dpT4vLn2",
"t1PCo8HWpu4v",
"CLnAUlQttao",
"j5lBpHFqaH",
"nj5nH57OG6",
"2kxODQo1tMB",
"2kxODQo1tMB",
"7-VS9gOtRuR",
"nips_2022_NtwEUZE6VcL",
"nips_2022_NtwEUZE6VcL",
"nips_2022_NtwEUZE6VcL",
"nips_2022_NtwEUZE6VcL"
] |
nips_2022_riIaC2ivcYA | Improved Algorithms for Neural Active Learning | We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting. In particular, we introduce two regret metrics by minimizing the population loss that are more suitable in active learning than the one used in state-of-the-art (SOTA) related work. Then, the proposed algorithm leverages the powerful representation of NNs for both exploitation and exploration, has the query decision-maker tailored for $k$-class classification problems with the performance guarantee, utilizes the full feedback, and updates parameters in a more practical and efficient manner. These careful designs lead to a better regret upper bound, improving by a multiplicative factor $O(\log T)$ and removing the curse of input dimensionality. Furthermore, we show that the algorithm can achieve the same performance as the Bayes-optimal classifier in the long run under the hard-margin setting in classification problems. In the end, we use extensive experiments to evaluate the proposed algorithm and SOTA baselines, to show the improved empirical performance. | Accept | The paper has two main contributions:
1. It builds upon a recent theoretical work on providing regret bounds for active learning using neural networks. The current paper presents a better regret analysis, thereby removing the explicit dependence on the effective dimensionality term, and also does not have hard-to-estimate terms (such as the 'S' term), as was the case in prior work.
2. The paper presents a practical implementation via mini-batch SGD and presents a strong set of empirical results.
All the reviewers agree on the technical novelty and the quality of the experimental work. During the discussion phase the authors also clarified questions regarding the comparison of their results with other prior works. Overall the reviewers agree that this is a submission worthy of acceptance. | train | [
"LHQ43Q_c_Zr",
"nQbo8xFKdY",
"B5WAxAYeUVv-",
"yIWawEydcLS",
"657Qb87J2PS",
"crAJC3aMwCY",
"sqhE3wvg-0M",
"TNKjXFcgiV_",
"ZeW2VM-Quw6",
"ROY7hsAw77-",
"U_2NXJTJgGA",
"jiBL26LIAeO",
"iKTHkWOcrKP",
"l9UmBD0eghd",
"XzDOQq_B-Ts",
"USlJMqY1ZS0",
"2zDURxpA0U"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much again for your time and efforts in reviewing our paper!",
" Thank you for your response. It addresses my concerns and clarifies my questions.\nI would like to keep my positive score.",
" Thanks very much for Reviewer's efforts on this paper and suggestions. We will include the new result... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2
] | [
"nQbo8xFKdY",
"sqhE3wvg-0M",
"657Qb87J2PS",
"crAJC3aMwCY",
"ROY7hsAw77-",
"TNKjXFcgiV_",
"2zDURxpA0U",
"ZeW2VM-Quw6",
"USlJMqY1ZS0",
"XzDOQq_B-Ts",
"l9UmBD0eghd",
"iKTHkWOcrKP",
"nips_2022_riIaC2ivcYA",
"nips_2022_riIaC2ivcYA",
"nips_2022_riIaC2ivcYA",
"nips_2022_riIaC2ivcYA",
"nips_... |
nips_2022_kxXvopt9pWK | Denoising Diffusion Restoration Models | Many interesting tasks in image restoration can be cast as linear inverse problems. A recent family of approaches for solving these problems uses stochastic algorithms that sample from the posterior distribution of natural images given the measurements. However, efficient solutions often require problem-specific supervised training to model the posterior, whereas unsupervised methods that are not problem-specific typically rely on inefficient iterative methods. This work addresses these issues by introducing Denoising Diffusion Restoration Models (DDRM), an efficient, unsupervised posterior sampling method. Motivated by variational inference, DDRM takes advantage of a pre-trained denoising diffusion generative model for solving any linear inverse problem. We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization under various amounts of measurement noise. DDRM outperforms the current leading unsupervised methods on the diverse ImageNet dataset in reconstruction quality, perceptual quality, and runtime, being $5\times$ faster than the nearest competitor. DDRM also generalizes well for natural images out of the distribution of the observed ImageNet training set. | Accept | In this paper, the authors present a variational inference approach based on a diffusion model to learning the posterior distribution of an unsupervised linear inverse problem. The presented method is evaluated on multiple linear inverse problems and beats state of the art unsupervised inversion methods while running significantly faster than them. The algorithm has strong theoretical support, in the form of a rigorously proven equivalence with unconditional denoising with diffusion models. The paper could be improved by a more extensive set of experiments, including investigation of stability against incorrect degradation models. Overall, the algorithmic advance with a strong theoretical foundation outweighs these weaknesses, and the paper is recommended for acceptance. | train | [
"5aeLGeAe4yB",
"y0LP_0x8Eer",
"SVjMVKwNjQK",
"dIlch7t0wua",
"M3JLik0uP1U",
"MIeJDUwoCDC",
"dchqrEOD70",
"EOAQCzWRFUI",
"q8p-Tp5JDQ9",
"0JsjGpz_Upo",
"endhsKtULsD",
"1YwnoSb4jf5"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. The rebuttal is satisfactory to me, thus I keep my original rating.",
" The rebuttal is almost satisfactory to me except that I haven't seen the SSIM for the competing methods. It is hard to conclude about the performance ranking with this metric for now. ",
" 1. It would be better... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"dchqrEOD70",
"EOAQCzWRFUI",
"MIeJDUwoCDC",
"nips_2022_kxXvopt9pWK",
"1YwnoSb4jf5",
"endhsKtULsD",
"0JsjGpz_Upo",
"q8p-Tp5JDQ9",
"nips_2022_kxXvopt9pWK",
"nips_2022_kxXvopt9pWK",
"nips_2022_kxXvopt9pWK",
"nips_2022_kxXvopt9pWK"
] |
nips_2022_ZLsZmNe1RDb | How to talk so AI will learn: Instructions, descriptions, and autonomy | From the earliest years of our lives, humans use language to express our beliefs and desires. Being able to talk to artificial agents about our preferences would thus fulfill a central goal of value alignment. Yet today, we lack computational models explaining such language use. To address this challenge, we formalize learning from language in a contextual bandit setting and ask how a human might communicate preferences over behaviors. We study two distinct types of language: instructions, which provide information about the desired policy, and descriptions, which provide information about the reward function. We show that the agent's degree of autonomy determines which form of language is optimal: instructions are better in low-autonomy settings, but descriptions are better when the agent will need to act independently. We then define a pragmatic listener agent that robustly infers the speaker's reward function by reasoning about how the speaker expresses themselves. We validate our models with a behavioral experiment, demonstrating that (1) our speaker model predicts human behavior, and (2) our pragmatic listener successfully recovers humans' reward functions. Finally, we show that this form of social learning can integrate with and reduce regret in traditional reinforcement learning. We hope these insights facilitate a shift from developing agents that obey language to agents that learn from it. | Accept | I thank the authors for their submission and active participation in the discussions. The paper investigates language instructions and descriptions as a way to teach a student RL agent. All reviewers unanimously agree that this is a solid paper worthy of acceptance. In particular, reviewers found the paper to be well motivated [vzCm] and well written [mi6j], tackling an important problem [feLE]. The experiments are convincing [vzCm] and insightful [mi6j], and the method interesting [aJ8p] and novel [8vXv]. Thus, I am recommending acceptance of the paper and encourage the authors to further improve their paper based on the reviewer feedback. | test | [
"mWidi4jQUo",
"hK88Q4b1PdVq",
"gmdhS7x_qr4",
"nqwgd_Yc-r1",
"ocQKJIskHbK",
"VjU-AkCY0_6",
"WkzYiRtwaNh",
"Up2v4cQD05p",
"DwbuwSH85eV",
"o7cpUYIf93Z",
"RodHR1NPH7",
"Gw3_zW-o8Vu",
"sEHpbrl8irF",
"avg4jtWZwoW",
"gK4QsvK8pO8",
"9yMeIeSTq9O",
"34yoLdlUnkj",
"JwcxH4inF98"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nI have read all the reviews and author responses, and I am happy with the author's suggested \nchanges. \n\nThank you.\n",
" Thanks for responding to my questions, this clarifies a few open points. After reading this as well as the other reviews and responses, I will not change my evaluation of the paper, an... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
3,
2
] | [
"34yoLdlUnkj",
"o7cpUYIf93Z",
"nqwgd_Yc-r1",
"ocQKJIskHbK",
"VjU-AkCY0_6",
"JwcxH4inF98",
"Up2v4cQD05p",
"DwbuwSH85eV",
"34yoLdlUnkj",
"9yMeIeSTq9O",
"gK4QsvK8pO8",
"sEHpbrl8irF",
"avg4jtWZwoW",
"nips_2022_ZLsZmNe1RDb",
"nips_2022_ZLsZmNe1RDb",
"nips_2022_ZLsZmNe1RDb",
"nips_2022_ZLs... |
nips_2022_zSeoDvsDCe | Sign and Basis Invariant Networks for Spectral Graph Representation Learning | We introduce SignNet and BasisNet---new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if v is an eigenvector then so is -v; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors. We prove that our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the desired invariances. Moreover, when used with Laplacian eigenvectors, our architectures are provably expressive for graph representation learning: they can approximate any spectral graph convolution, can compute spectral invariants that go beyond message passing neural networks, and can provably simulate previously proposed graph positional encodings. Experiments show the strength of our networks for molecular graph regression, learning expressive graph representations, and learning neural fields on triangle meshes. | Reject | This paper proposes new neural architectures that are invariant to sign and basis, named SignNet and BasisNet. Some reviewers raised the question of why such invariances are needed, and they noticed that only permutation and rotation symmetry arise in all the data mentioned in the paper. One of the reviewers strongly believes that the proposed approach is a simple tweak from the prior work "Invariant and Equivariant Graph Networks" (IGN) and the complexity of the proposed method is much higher than the spectral model using the eigenvectors. In their response, the authors provided some explanation and theoretical analysis on why SignNet and BasisNet are different from spectral graph convolutions. They argue that they can simulate spectral graph convolutions, and approximate other functions that spectral convolutions cannot express. | train | [
"HU6iIkdvyX",
"m4MhDFBMk6T",
"z_-E_uKrdGk",
"dYRqbtgEHuL",
"DmehGkIdW3d",
"2R9waqAvAF",
"izlbg_C9jE0",
"6Jtdu41q3QF",
"KGAWKsNTm5_",
"cKHOvqPquhG",
"jBdwFSYGjs0",
"2yjBZuJBr2X",
"7FSgm0J0Ns",
"znzlKia8B96",
"damCCOJr2eg",
"H3TvFRXcSeU"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their detailed and patient response to all my concerns and questions. The explanation helped in clarifying my concerns. I have therefore decided to change my score to 7. ",
" Thank you for the reply. We are glad that you find the revised version to be “clearer and easier to follow”, we... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"m4MhDFBMk6T",
"dYRqbtgEHuL",
"DmehGkIdW3d",
"6Jtdu41q3QF",
"izlbg_C9jE0",
"H3TvFRXcSeU",
"damCCOJr2eg",
"KGAWKsNTm5_",
"znzlKia8B96",
"jBdwFSYGjs0",
"7FSgm0J0Ns",
"nips_2022_zSeoDvsDCe",
"nips_2022_zSeoDvsDCe",
"nips_2022_zSeoDvsDCe",
"nips_2022_zSeoDvsDCe",
"nips_2022_zSeoDvsDCe"
] |
nips_2022_a01PL2gb7W5 | Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime | We provide quantitative bounds measuring the $L^2$ difference in function space between the trajectory of a finite-width network trained on finitely many samples from the idealized kernel dynamics of infinite width and infinite data. An implication of the bounds is that the network is biased to learn the top eigenfunctions of the Neural Tangent Kernel not just on the training set but over the entire input space. This bias depends on the model architecture and input distribution alone and thus does not depend on the target function which does not need to be in the RKHS of the kernel. The result is valid for deep architectures with fully connected, convolutional, and residual layers. Furthermore the width does not need to grow polynomially with the number of samples in order to obtain high probability bounds up to a stopping time. The proof exploits the low-effective-rank property of the Fisher Information Matrix at initialization, which implies a low effective dimension of the model (far smaller than the number of parameters). We conclude that local capacity control from the low effective rank of the Fisher Information Matrix is still underexplored theoretically. | Accept | This paper focuses on theoretically bounding the difference between a finite-width network in a finite sample size regime and the corresponding kernel dynamics in finite width and finite data regime for various neural network architectures. Using the spectrum of the NTK they provide some insights into the spectral bias of neural nets. All reviewers were positive and recommend acceptance. I concur with this decision.
| train | [
"PRiKMyZ9CG5",
"STQjvcRxi8",
"4irPd_AJoYa",
"hEqebd1_SKe",
"kS4bT4Cj_Ul",
"a6U9SaXhsJ8",
"jP6IDJQT538",
"ZT2RinLdST",
"f6MaBMCvsdi",
"G1xckh-VaPR",
"bKg-FsE7Bc9",
"k2Ge5weQOYL",
"qv0_KBrukOr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for the detailed response, which is quite helpful. My concerns are addressed, and I have updated my review to reflect the support.",
" Thanks for the response and clarifications. Good work!",
" As for your comment about the reference \"On the exact computation of linear frequ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
4
] | [
"jP6IDJQT538",
"ZT2RinLdST",
"qv0_KBrukOr",
"qv0_KBrukOr",
"k2Ge5weQOYL",
"bKg-FsE7Bc9",
"bKg-FsE7Bc9",
"G1xckh-VaPR",
"G1xckh-VaPR",
"nips_2022_a01PL2gb7W5",
"nips_2022_a01PL2gb7W5",
"nips_2022_a01PL2gb7W5",
"nips_2022_a01PL2gb7W5"
] |
nips_2022_WDS1M0gsfXk | On Image Segmentation With Noisy Labels: Characterization and Volume Properties of the Optimal Solutions to Accuracy and Dice | We study two of the most popular performance metrics in medical image segmentation, Accuracy and Dice, when the target labels are noisy. For both metrics, several statements related to characterization and volume properties of the set of optimal segmentations are proved, and associated experiments are provided. Our main insights are: (i) the volume of the solutions to both metrics may deviate significantly from the expected volume of the target, (ii) the volume of a solution to Accuracy is always less than or equal to the volume of a solution to Dice and (iii) the optimal solutions to both of these metrics coincide when the set of feasible segmentations is constrained to the set of segmentations with the volume equal to the expected volume of the target. | Accept | Three knowledgeable referees support acceptance and a fourth one does not oppose it, so I recommend Accept. | train | [
"MfUZuSkx2JS",
"Qub_FyO3KLv",
"JMJWiVnnZGi",
"JXLBkINwGgF",
"jmArWcaQfdv",
"UeImhBkYHw",
"5CcRUwf3OfP",
"fGd44MkCTBE",
"vTpEDTDlR9M",
"tpJeBJN-QF5",
"vJc2Fz4iiB1",
"_t6TFJEamNY",
"ZnEf_OYfNEI",
"tOni8NH9RQz",
"rW5zPABkoUb"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank your for the update. We are very happy you are positive to the response and to the updates.\n\nWe agree that papers with the style of deriving and evaluating new methods can be important and interesting, but due to the wide use of methods targeting Accuracy and Dice under noisy label conditions, we think it... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
3,
1
] | [
"Qub_FyO3KLv",
"fGd44MkCTBE",
"tpJeBJN-QF5",
"jmArWcaQfdv",
"UeImhBkYHw",
"5CcRUwf3OfP",
"vTpEDTDlR9M",
"ZnEf_OYfNEI",
"tOni8NH9RQz",
"rW5zPABkoUb",
"_t6TFJEamNY",
"nips_2022_WDS1M0gsfXk",
"nips_2022_WDS1M0gsfXk",
"nips_2022_WDS1M0gsfXk",
"nips_2022_WDS1M0gsfXk"
] |
nips_2022_Inj9ed0mzQb | Weisfeiler and Leman Go Walking: Random Walk Kernels Revisited | Random walk kernels have been introduced in seminal work on graph learning and were later largely superseded by kernels based on the Weisfeiler-Leman test for graph isomorphism. We give a unified view on both classes of graph kernels. We study walk-based node refinement methods and formally relate them to several widely-used techniques, including Morgan's algorithm for molecule canonization and the Weisfeiler-Leman test. We define corresponding walk-based kernels on nodes that allow fine-grained parameterized neighborhood comparison, reach Weisfeiler-Leman expressiveness, and are computed using the kernel trick. From this we show that classical random walk kernels with only minor modifications regarding definition and computation are as expressive as the widely-used Weisfeiler-Leman subtree kernel but support non-strict neighborhood comparison. We verify experimentally that walk-based kernels reach or even surpass the accuracy of Weisfeiler-Leman kernels in real-world classification tasks. | Accept | The paper yields new understanding of random walk kernels and kernels based on the W-L graph isomorphism test. This is a very interesting contribution towards understanding kernels between graphs. This should be of interest to the NeurIPS community. | train | [
"9MIcBdxhqoq",
"amuiSovuS-s",
"RdqmVUUEAuO",
"Dlw2HNwDLuj",
"7XADe9WBLQq",
"7UAC7pWil7B",
"PS-Su4xWnW",
"7p26j--UAuc"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank all reviewers for their comments and helpful feedback on our manuscript. Individual points are addressed below in response to the reviews.",
" > The paper proposes a walk-based kernels for graphs with nodes being labeled with WL labeling scheme.\n> The paper's originality is in its combin... | [
-1,
-1,
-1,
-1,
5,
5,
9,
7
] | [
-1,
-1,
-1,
-1,
3,
4,
5,
3
] | [
"nips_2022_Inj9ed0mzQb",
"7XADe9WBLQq",
"7UAC7pWil7B",
"PS-Su4xWnW",
"nips_2022_Inj9ed0mzQb",
"nips_2022_Inj9ed0mzQb",
"nips_2022_Inj9ed0mzQb",
"nips_2022_Inj9ed0mzQb"
] |
nips_2022_PrJSZxup-U | Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems | We study Reinforcement Learning for partially observable systems using function approximation. We propose a new PO-bilinear framework, that is general enough to include models such as undercomplete tabular Partially Observable Markov Decision Processes (POMDPs), Linear Quadratic Gaussian (LQG), Predictive State Representations (PSRs), as well as a newly introduced model Hilbert Space Embeddings of POMDPs. Under this framework, we propose an actor-critic style algorithm that is capable of performing agnostic policy learning. Given a policy class that consists of memory-based policies (i.e., policies that look at a fixed-length window of recent observations), and a value function class that consists of functions taking both memory and future observations as inputs, our algorithm learns to compete against the best memory-based policy among the policy class. For certain examples such as undercomplete POMDPs and LQGs, by leveraging their special properties, our algorithm is even capable of competing against the globally optimal policy without paying an exponential dependence on the horizon. | Accept | This paper contributes to advancing our understanding of when and how sample-efficient learning is possible in partially observable dynamical systems. The authors introduce a framework that encompasses a wide range of relevant settings (e.g., LQG, POMDPs, PSR) and propose a general algorithm to solve this general setting and derive theoretical guarantees on its sample complexity. Overall the contribution is novel, non-trivial, and interesting. I encourage the authors to include part of the rebuttal, which effectively clarifies some aspects of the paper and makes it accessible to a broader audience. | train | [
"XxtHawPVyKc",
"HJuny-UoX4x",
"d2AlzrZcALW",
"d89yakigujq",
"o0Zv-99BaJ",
"R8z6v9ZXGp",
"ZbazTRI_-Js",
"lPgY2bGpKyi",
"AAcMDmi1yMs",
"KdK8uKTlzub"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nThank you very much for your quick response. We will incorporate your comments and try to improve the draft/result! ",
" Thanks for the response.\n\nIt would be really interesting to see an approximate version that avoids the worst-case dependence on $|\\Pi|$ and $|\\mathcal{G}|$. It would also be helpful if ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
2,
1
] | [
"HJuny-UoX4x",
"o0Zv-99BaJ",
"KdK8uKTlzub",
"AAcMDmi1yMs",
"lPgY2bGpKyi",
"ZbazTRI_-Js",
"nips_2022_PrJSZxup-U",
"nips_2022_PrJSZxup-U",
"nips_2022_PrJSZxup-U",
"nips_2022_PrJSZxup-U"
] |
nips_2022_PRsjhKIrVg | WeightedSHAP: analyzing and improving Shapley based feature attributions | Shapley value is a popular approach for measuring the influence of individual features. While Shapley feature attribution is built upon desiderata from game theory, some of its constraints may be less natural in certain machine learning settings, leading to unintuitive model interpretation. In particular, the Shapley value uses the same weight for all marginal contributions---i.e. it gives the same importance when a large number of other features are given versus when a small number of other features are given. This property can be problematic if larger feature sets are more or less informative than smaller feature sets. Our work performs a rigorous analysis of the potential limitations of Shapley feature attribution. We identify simple settings where the Shapley value is mathematically suboptimal by assigning larger attributions for less influential features. Motivated by this observation, we propose WeightedSHAP, which generalizes the Shapley value and learns which marginal contributions to focus directly from data. On several real-world datasets, we demonstrate that the influential features identified by WeightedSHAP are better able to recapitulate the model's predictions compared to the features identified by the Shapley value. | Accept | Strengths:
* paper points out and addresses significant issue in conventional Shapley-based approaches
* theoretical analysis having game-theoretic interpretation
* convincing empirical results (after revision)
* clear writing, good survey of related work
Weaknesses:
* limited empirical comparisons (addressed in revision)
* lacks qualitative analysis (addressed in revision)
* raises general questions regarding suitability of Shapley approaches, and the role of assumptions within
Summary:
This paper presents a nice generalization of Shapley-based feature attribution that, by introducing weights, mitigates a certain drawback of the standard approach, and provides more flexibility. Most reviewers view the paper’s contributions favorably; theoretical results and interpretations appear to be sound, and the empirical results complement them well (after additions made by the authors in the revised version and in response to reviewer feedback). One reviewer was worried about the soundness of a certain aspect of the proposed approach, but the authors’ response was helpful in clarifying this issue. Another reviewer pointed out that, more generally, arguing for relaxing assumptions (e.g., via weighting) may actually suggest that the overall approach is limited; the authors are encouraged to add a discussion in the paper that addresses this concern.
| train | [
"zi1mEUm3vBuv",
"_-njQKgWVjG",
"XFhJsXEtr_S",
"M9HYmfutl3v",
"Yf8_9KDOrKd",
"36PiiBQ6hDM",
"1NpRUamjZTS",
"KxkxmBDs8r-",
"q2bbPesdulU",
"eDg1MaW17Al",
"2_-hRaLSRY6",
"yhVJX4k3QGo",
"Gvrvzr7sqlU",
"G34PxQ8bZpJ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad that we addressed your concerns. We also thank you for re-evaluating our work!\n",
" Thank you for your responses. I appreciate the changes you've made to the paper which have addressed my main concerns, particularly those regarding additional comparisons and metrics. The illustrative examples are a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"_-njQKgWVjG",
"36PiiBQ6hDM",
"Gvrvzr7sqlU",
"2_-hRaLSRY6",
"G34PxQ8bZpJ",
"Gvrvzr7sqlU",
"yhVJX4k3QGo",
"2_-hRaLSRY6",
"2_-hRaLSRY6",
"nips_2022_PRsjhKIrVg",
"nips_2022_PRsjhKIrVg",
"nips_2022_PRsjhKIrVg",
"nips_2022_PRsjhKIrVg",
"nips_2022_PRsjhKIrVg"
] |
nips_2022_C2Mikd2WpOc | The Burer-Monteiro SDP method can fail even above the Barvinok-Pataki bound | The most widely used technique for solving large-scale semidefinite programs (SDPs) in practice is the non-convex Burer-Monteiro method, which explicitly maintains a low-rank SDP solution for memory efficiency. There has been much recent interest in obtaining a better theoretical understanding of the Burer-Monteiro method. When the maximum allowed rank $p$ of the SDP solution is above the Barvinok-Pataki bound (where a globally optimal solution of rank at most \(p\) is guaranteed to exist), a recent line of work established convergence to a global optimum for generic or smoothed instances of the problem. However, it was open whether there even exists an instance in this regime where the Burer-Monteiro method fails. We prove that the Burer-Monteiro method can fail for the Max-Cut SDP on $n$ vertices when the rank is above the Barvinok-Pataki bound ($p \ge \sqrt{2n}$). We provide a family of instances that have spurious local minima even when the rank $p = n/2$. Combined with existing guarantees, this settles the question of the existence of spurious local minima for the Max-Cut formulation in all ranges of the rank and justifies the use of beyond worst-case paradigms like smoothed analysis to obtain guarantees for the Burer-Monteiro method. | Accept | The Burer-Monteiro method is widely used for solving large scale semidefinite programs. It works based on replacing an $n \times n$ positive semidefinite matrix $X$ with $Y Y^T$ where $Y$ is $n \times p$. This has the benefit that it is more space efficient to store $Y$ than $X$, but it transforms a convex optimization problem into a nonconvex one. Above the Barvinok-Pataki bound (an analogue of the notion of a basic feasible solution for semidefinite rather than linear programming) we are at least guaranteed that there is a low-rank optimal solution. But does the nonconvex problem have spurious critical points? Recent works have studied the critical points under a smoothed analysis model, and shown that the Burer-Monteiro method works almost down to the Barvinok-Pataki bound. The main contribution of this paper is to complete the analysis of the landscape, by showing that without smoothing, even for the MAX-CUT SDP, there are spurious critical points even for $p = n/2$. One reviewer had doubts about the relationship to the work of Bhojanapalli-Boumal-Jain-Netrapalli, but I found the author reply to be convincing that the setting and techniques are fundamentally different. The other reviewers were uniformly positive. This is a nice contribution to the literature on the Burer-Monteiro method.
As a comment to the authors, I would suggest elaborating on the connection to the work of Mei et al. I agree that showing the global and local optima are close in objective value can be somewhat orthogonal to showing that the SDP recovers e.g. some underlying clustering in community detection. This provides a further justification why it is important to understand the loss landscape, and not just bound the suboptimality of any locally optimal solution. Indeed, from what I remember of the Mei et al. paper, the locally optimal solutions do not get non-trivial performance for the associated community detection problem, so I think investigating this further and explaining it would be helpful, since these are subtle distinctions. | test | [
"epzEkDPUBw2",
"yT0Ma4o5G7n",
"WlTcn7hp0Xg",
"iIymUaoOALQ",
"cPXD4Uy6qiW",
"0cThO5MtYhI",
"vwv1ROXM8Y1",
"AucVLJQtY2w",
"bvyOC1MLVYTU",
"7o5zFUqxNNW",
"TZL1l9UDLyN",
"Toj_CvDgAw",
"oQ9oTNB_pn4",
"R9v27EMha-a"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for getting back to us – we appreciate it very much.\n\nPerhaps one point that may help clarify the challenge with lower bounds for standard Burer-Monteiro, as opposed to the penalized version in BBJN: the BBJN bad instance sets $C=0$, and *only* considers the non-convex objective that arises from penal... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
4
] | [
"yT0Ma4o5G7n",
"iIymUaoOALQ",
"bvyOC1MLVYTU",
"7o5zFUqxNNW",
"vwv1ROXM8Y1",
"AucVLJQtY2w",
"R9v27EMha-a",
"oQ9oTNB_pn4",
"Toj_CvDgAw",
"TZL1l9UDLyN",
"nips_2022_C2Mikd2WpOc",
"nips_2022_C2Mikd2WpOc",
"nips_2022_C2Mikd2WpOc",
"nips_2022_C2Mikd2WpOc"
] |
nips_2022_U3gobB4oKv | Fairness Transferability Subject to Bounded Distribution Shift | Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound? In this paper, we study the transferability of statistical group fairness for machine learning predictors (i.e., classifiers or regressors) subject to bounded distribution shift. Such shifts may be introduced by initial training data uncertainties, user adaptation to a deployed predictor, dynamic environments, or the use of pre-trained models in new settings. Herein, we develop a bound that characterizes such transferability, flagging potentially inappropriate deployments of machine learning for socially consequential tasks. We first develop a framework for bounding violations of statistical fairness subject to distribution shift, formulating a generic upper bound for transferred fairness violations as our primary result. We then develop bounds for specific worked examples, focusing on two commonly used fairness definitions (i.e., demographic parity and equalized odds) and two classes of distribution shift (i.e., covariate shift and label shift). Finally, we compare our theoretical bounds to deterministic models of distribution shift and against real-world data, finding that we are able to estimate fairness violation bounds in practice, even when simplifying assumptions are only approximately satisfied. | Accept | Reviewers agreed that this work, looking at the degradation of group fairness metrics under a few types of distribution shift, adds to the literature in this space. The author's/authors' rebuttal was especially effective in allaying concerns from two reviewers, in addition to many of my own. I agree with Reviewer 84wj's skepticism of the practical implementability of some theoretical results (e.g. verification of the Lipschitz condition for Thm 3.2), but these are not showstopper concerns; still, please do address them in a final/next version. Overall, though, the paper addresses a timely topic in a strong and reasonably complete way. | train | [
"NQ0b5YTGvW",
"Wem5ZaBVNgV",
"Hydxca_qao",
"0Y6BNXEY8J_",
"osGvkle_mc",
"7A2BXjzVC0b9",
"MUf-63rFZ0z",
"yWOcKEgG298",
"KAMFcRncK2",
"R2Jlqs-_Sl3",
"mFHE9u-uLQe",
"QNnr7m2Yb6P",
"5W79VgQjnZv4",
"BuDESYVtgq9",
"IrnP5mNo4KD",
"i3fMr1L6jfK",
"MbvLgIfJCx2",
"ZpW9zIYx3Uw",
"KXrRzqIVUrL... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thank you again for your review and your kind reevaluation following our rebuttal. We greatly appreciate your time and assistance!",
" Thank you again for your review and your kind reevaluation following our rebuttal. The comment on the practicality of Lipschitz condition is well-received. We agree this merits ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"Hydxca_qao",
"0Y6BNXEY8J_",
"yWOcKEgG298",
"ufIhQ1lzvod",
"7A2BXjzVC0b9",
"KXrRzqIVUrL",
"Sh7OFi8Jfrt",
"-gkhZsVRfW",
"ufIhQ1lzvod",
"KXrRzqIVUrL",
"IrnP5mNo4KD",
"ufIhQ1lzvod",
"-gkhZsVRfW",
"Sh7OFi8Jfrt",
"KXrRzqIVUrL",
"MbvLgIfJCx2",
"ZpW9zIYx3Uw",
"nips_2022_U3gobB4oKv",
"ni... |
nips_2022_ZJ7Lrtd12x_ | Near-Optimal Sample Complexity Bounds for Constrained MDPs | In contrast to the advances in characterizing the sample complexity for solving Markov decision processes (MDPs), the optimal statistical complexity for solving constrained MDPs (CMDPs) remains unknown. We resolve this question by providing minimax upper and lower bounds on the sample complexity for learning near-optimal policies in a discounted CMDP with access to a generative model (simulator). In particular, we design a model-based algorithm that addresses two settings: (i) relaxed feasibility, where small constraint violations are allowed, and (ii) strict feasibility, where the output policy is required to satisfy the constraint. For (i), we prove that our algorithm returns an $\epsilon$-optimal policy with probability $1 - \delta$, by making $\tilde{O}\left(\frac{S A \log(1/\delta)}{(1 - \gamma)^3 \epsilon^2}\right)$ queries to the generative model, thus matching the sample-complexity for unconstrained MDPs. For (ii), we show that the algorithm's sample complexity is upper-bounded by $\tilde{O} \left(\frac{S A \, \log(1/\delta)}{(1 - \gamma)^5 \, \epsilon^2 \zeta^2} \right)$ where $\zeta$ is the problem-dependent Slater constant that characterizes the size of the feasible region. Finally, we prove a matching lower-bound for the strict feasibility setting, thus obtaining the first near minimax optimal bounds for discounted CMDPs. Our results show that learning CMDPs is as easy as MDPs when small constraint violations are allowed, but inherently more difficult when we demand zero constraint violation. | Accept | The paper studies the sample complexity of constrained MDPs and provides (nearly) matching lower and upper bounds when the constraint needs to be satisfied exactly and when slackness is allowed. Reviewer KfEz initially had concerns about comparison with prior works, which were resolved during discussion period. Other than that, multiple reviewers point out that the main text writing is not particularly informative; much space in the main text was devoted to inconsequential details while key intuitions are missing. In addition, as Reviewer 3jqt points out and the AC agrees, the generative model + tabular setting is quite restrictive and limits the significance of the work. All that said, all reviewers also agree that this is a technically solid paper with multiple interesting insights (e.g., the separation between the strict and relaxed feasibility settings), and the final reviewer scores are unanimously on the positive side. | train | [
"Y1Ubrhje6I8",
"mLBBlTSKPqx",
"pH5KzQ8xaB",
"Knl6afoYib",
"9nwUPhwShEq",
"JgieQw5SlV1",
"nFBia15rRnK",
"K-ML-2o4RqL",
"0zC8_Eixa7N",
"h234dMqe11Y",
"ADIpn4RYLs",
"9EK5dAdHZIB",
"-X4Ho5uxKn",
"-H-ibfRTzTw",
"I-j3XjC-5_9",
"itnYIrFL3Hw",
"oCcyJ8-rU59",
"fn6ndUbiyz"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for a positive assessment of our work. We notice that you have decreased your overall score from 8 to 7. Could you please let us know the reason for this decrease - in particular, what additional contributions or modifications are required for a higher rating? This will help us further improve the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"I-j3XjC-5_9",
"oCcyJ8-rU59",
"-X4Ho5uxKn",
"0zC8_Eixa7N",
"JgieQw5SlV1",
"nFBia15rRnK",
"K-ML-2o4RqL",
"ADIpn4RYLs",
"fn6ndUbiyz",
"oCcyJ8-rU59",
"oCcyJ8-rU59",
"itnYIrFL3Hw",
"itnYIrFL3Hw",
"I-j3XjC-5_9",
"nips_2022_ZJ7Lrtd12x_",
"nips_2022_ZJ7Lrtd12x_",
"nips_2022_ZJ7Lrtd12x_",
... |
nips_2022_OtjQ7NTu3j | Multi-Fidelity Best-Arm Identification | In several real-world applications, a learner has access to multiple environment simulators, each with a different precision (e.g., simulation accuracy) and cost (e.g., computational time). In such a scenario, the learner faces the trade-off between selecting expensive accurate simulators or preferring cheap imprecise ones. We formalize this setting as a multi-fidelity variant of the stochastic best-arm identification problem, where querying the original arm is expensive, but multiple and biased approximations (i.e., fidelities) are available at lower costs. The learner's goal, in this setting, is to sequentially choose which simulator to query in order to minimize the total cost, while guaranteeing to identify the optimal arm with high probability. We first derive a lower bound on the identification cost, assuming that the maximum bias of each fidelity is known to the learner. Then, we propose a novel algorithm, Iterative Imprecise Successive Elimination (IISE), which provably reduces the total cost w.r.t. algorithms that ignore the multi-fidelity structure and whose cost complexity upper bound mimics the structure of the lower bound. Furthermore, we show that the cost complexity of IISE can be further reduced when the agent has access to a more fine-grained knowledge of the error introduced by the approximators.
Finally, we numerically validate IISE, showing the benefits of our method in simulated domains.
 | Accept | This paper considers the multi-fidelity variant of the best-arm identification problem. I recommend its acceptance and I strongly encourage the authors to take into account the several fantastic points raised by the reviewers while crafting their next draft. For instance, please include a discussion about (and clarification of) Assumption 1 that reflects the discussion with the reviewers. | train | [
"YHuGjkx3hp",
"NUNcB7eWaOF",
"OTeWppDWDSa",
"i2EBZTzpe8a",
"u2q6rb6TYJY",
"4zg_sulKvjv",
"YhSI9oXBmh",
"dT-ylMdPpAQ",
"3unZM-VwEZG",
"hvuaVLSxaZ",
"sdsEj4p-Hoj",
"Ix7P1dsy0m",
"NjKUZgErLj6",
"_MMI3TB5WC",
"3dBmKID8jX",
"GNi5MeMYaVf",
"4VocqR7loM4",
"YKQ7-4U0BcR"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for having appreciated our effort in making the paper more accessible to the reader. Additionally, we highlight that an algorithm outline (as previously requested) was added to simplify the exposition in Appendix F. Since the reviewer did not mention it in the response, we remark it in case ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"4zg_sulKvjv",
"4zg_sulKvjv",
"4zg_sulKvjv",
"u2q6rb6TYJY",
"NjKUZgErLj6",
"YKQ7-4U0BcR",
"YKQ7-4U0BcR",
"_MMI3TB5WC",
"YKQ7-4U0BcR",
"YKQ7-4U0BcR",
"YKQ7-4U0BcR",
"4VocqR7loM4",
"GNi5MeMYaVf",
"3dBmKID8jX",
"nips_2022_OtjQ7NTu3j",
"nips_2022_OtjQ7NTu3j",
"nips_2022_OtjQ7NTu3j",
"n... |
nips_2022_p6hArCtwLAU | TreeMoCo: Contrastive Neuron Morphology Representation Learning | Morphology of neuron trees is a key indicator to delineate neuronal cell-types, analyze brain development process, and evaluate pathological changes in neurological diseases. Traditional analysis mostly relies on heuristic features and visual inspections. A quantitative, informative, and comprehensive representation of neuron morphology is largely absent but desired. To fill this gap, in this work, we adopt a Tree-LSTM network to encode neuron morphology and introduce a self-supervised learning framework named TreeMoCo to learn features without the need for labels. We test TreeMoCo on 2403 high-quality 3D neuron reconstructions of mouse brains from three different public resources. Our results show that TreeMoCo is effective in both classifying major brain cell-types and identifying sub-types. To our best knowledge, TreeMoCo is the very first to explore learning the representation of neuron tree morphology with contrastive learning. It has a great potential to shed new light on quantitative neuron morphology analysis. Code is available at https://github.com/TencentAILabHealthcare/NeuronRepresentation. | Accept | The paper proposes a self-supervised learning method targeting representation learning for large-scale neuronal morphology. This is an important problem in connectomics, and systems neuroscience at large, and very little progress has been made to date in applying modern machine learning methods to this problem. The authors combine a TreeLSTM with self-supervised learning.
The paper received borderline reviews, but the reviewers were unanimous in recommending acceptance. Two main concerns were raised. First, the reviewers questioned the novelty and applicability of the method for the machine learning field at large. Ultimately, there was some acknowledgement that some of the techniques proposed could have applicability beyond the field of ML for neuronal morphology. But even if this were not the case, neuroscience is an important focus area for the NeurIPS conference. Second, there were issues raised with the evaluation methodology. These were clarified by the authors, and additional material was added to the Appendix to enable a better understanding of the method.
I recommend acceptance. | train | [
"Y9KY3JekzBH",
"uENGKmnFXxW",
"rCAMYAmiip",
"R4tFx7_i-l_",
"Jc02aQfCuTE",
"4WZDkKJ_k1",
"igA_hgq-7H3",
"FhHvXv2bKvJ",
"SYpWqserQem",
"HrHHbzWlh4A",
"akant4MmLAQ",
"IC6g7Tm4xs",
"sZkObMJKfJ8",
"7BZtckVoCq",
"T9vOnngK6yr",
"ZKOpqN1wIq",
"EkrfZpZOEUD",
"Xkq8xf2P7ZM",
"aoe3fSTXbl",
... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" I thank reviewer ZR1s for pointing out that the test accuracy graphs over epochs has been added to Appendix D. I missed it before and now agree that it's quite informative, and I do appreciate the transparency it adds. I decided to increase my score to 6: Weak accept. ",
" I want to thank the authors for the re... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"rCAMYAmiip",
"aoe3fSTXbl",
"R4tFx7_i-l_",
"Jc02aQfCuTE",
"sZkObMJKfJ8",
"igA_hgq-7H3",
"FhHvXv2bKvJ",
"T9vOnngK6yr",
"MHprhaPylUt",
"YDb7ielVFzC",
"NVTACWpWb3",
"MHprhaPylUt",
"X30MVniOuAp",
"nips_2022_p6hArCtwLAU",
"ZKOpqN1wIq",
"EkrfZpZOEUD",
"Xkq8xf2P7ZM",
"YDb7ielVFzC",
"E4C... |
nips_2022_p3w4l4nf_Rr | Learning in Congestion Games with Bandit Feedback | In this paper, we investigate Nash-regret minimization in congestion games, a class of games with benign theoretical structure and broad real-world applications. We first propose a centralized algorithm based on the optimism in the face of uncertainty principle for congestion games with (semi-)bandit feedback, and obtain finite-sample guarantees. Then we propose a decentralized algorithm via a novel combination of the Frank-Wolfe method and G-optimal design. By exploiting the structure of the congestion game, we show the sample complexity of both algorithms depends only polynomially on the number of players and the number of facilities, but not the size of the action set, which can be exponentially large in terms of the number of facilities. We further define a new problem class, Markov congestion games, which allows us to model the non-stationarity in congestion games. We propose a centralized algorithm for Markov congestion games, whose sample complexity again has only polynomial dependence on all relevant problem parameters, but not the size of the action set. | Accept | This paper examines the behavior of certain payoff-based learning algorithms in repeated congestion games.
The initial reviews were positive, but a more detailed reading of the paper and the discussion phase revealed the following issues:
- In terms of technical results, the authors are not proving convergence to Nash equilibrium, but the minimization of "Nash regret" (a regret-like variant of the Nikaido-Isoda function) which, at best, means that among $K$ episodes, there exists one episode for which $\pi^k$ is close to Nash. This issue was discussed during both the author-reviewer and reviewer-AC phases, and there was consensus that the authors' results cannot be labeled as "convergence to a Nash equilibrium". The authors' revision was not satisfactory in this regard: the statement that the paper is using a "non-asymptotic notion of convergence" misses the crux of the issue and creates more confusion so it was not seen as a step in the right direction.
- In terms of writing, the paper begins by treating repeated games in normal form, with policies playing the role of mixed strategies; here, the episodes are single-shot instances of play. In the decentralized case however, the episodes are no longer single-shot instances of play, and the players are assumed to be using the same policy throughout this sampling period - in essence, playing the game in a stationary way. Finally, in Section 6, the authors treat stochastic / Markov games, where the notion of an episode is something still different. The inclusion of the algorithms in the revised version makes things easier to follow, but it does not address why the UCB-based approach of the centralized case cannot be expected to work here - the authors are justifying the use of the FW update as a means to combat the curse of dimensionality but this does not explain why FW updates are not used in the relevant centralized updates as well. All this makes the paper quite difficult to follow at a technical level.
- Finally, in terms of positioning, the authors seem to be motivated by the recent regret-based works of Ding et al. (2022) and Liu et al. (2021) but, at the same time, they seem to ignore the much wider literature on game-theoretic learning. This was also flagged as a cause for concern during the discussion phase.
Despite the above shortcomings, the reviewers appreciated the paper's technical contributions and felt that they warrant acceptance. As a result, a decision was reached to make a "conditional accept" recommendation subject to the authors' revising their paper to account for the above issues. Specifically:
- The final version of the paper must make clear that the type of results obtained does not concern convergence to (or "learning of") Nash equilibrium, but the minimization of a regret-like measure based on the Nikaido-Isoda function. [In particular, the motivating question in L56 should be removed, and the abstract and introduction must be likewise amended]
- The passage from single-stage to multiple-stage episodes must be justified, as well as the passage from repeated to stochastic congestion games.
- The authors must improve the positioning of their paper in relation to existing works on multi-agent learning with bandit feedback; some relevant references are provided below.
These changes are quite extensive, but the quality of the paper did improve during the revision phase, so the committee felt confident that the authors can undertake the above required changes for the camera-ready submission.
**Relevant references:**
1. Sebastian Bervoets, Mario Bravo, and Mathieu Faure, Learning with minimal information in continuous games, Theoretical Economics 15 (2020), 1471–1508.
1. Mario Bravo, David S. Leslie, and Panayotis Mertikopoulos, Bandit learning in concave N-person games, NeurIPS ’18: Proceedings of the 32nd International Conference of Neural Information Processing Systems, 2018.
1. Roberto Cominetti, Emerson Melo, and Sylvain Sorin, A payoff-based learning procedure and its application to traffic games, Games and Economic Behavior 70 (2010), no. 1, 71–83.
1. Pierre Coucheney, Bruno Gaujal, and Panayotis Mertikopoulos, Penalty-regulated dynamics and robust learning procedures in games, Mathematics of Operations Research 40 (2015), no. 3, 611–633.
1. Angeliki Giannou, Emmanouil Vasileios Vlatakis-Gkaragkounis, and Panayotis Mertikopoulos, The convergence rate of regularized learning in games: From bandits and uncertainty to optimism and beyond, NeurIPS ’21: Proceedings of the 35th International Conference on Neural Information Processing Systems, 2021.
1. David S. Leslie, Reinforcement learning in games, Ph.D. thesis, University of Bristol, 2004.
1. David S. Leslie and E. J. Collins, Individual Q-learning in normal form games, SIAM Journal on Control and Optimization 44 (2005), no. 2, 495–514.
1. David S. Leslie and E. J. Collins, Generalised weakened fictitious play, Games and Economic Behavior 56 (2006), no. 2, 285–298. | train | [
"zygKZZQLIH",
"ab2unazBX2g",
"Gt1ktPc8P6i",
"Tb49AgGiGL",
"Xnsocvjgm4",
"VKVkO9Xt6Y0",
"xDiGOW256EE",
"hzxQsxYugvv",
"TaToKXYgT3T",
"VfaulB5imFa",
"1FFu5ZeBdjD",
"lumbTe0i0Kw"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your detailed response! You answered my curiosities. I will leave my score unchanged. If the paper is accepted, I also encourage the authors to make the modifications that they are mentioning in the response in the final version of the paper.",
" ```\n[Krichene et al. (2015)] Walid Krich... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"hzxQsxYugvv",
"Gt1ktPc8P6i",
"Tb49AgGiGL",
"nips_2022_p3w4l4nf_Rr",
"VKVkO9Xt6Y0",
"xDiGOW256EE",
"lumbTe0i0Kw",
"1FFu5ZeBdjD",
"VfaulB5imFa",
"nips_2022_p3w4l4nf_Rr",
"nips_2022_p3w4l4nf_Rr",
"nips_2022_p3w4l4nf_Rr"
] |
nips_2022_401LFvBGIb | Deep feedforward functionality by equilibrium-point control in a shallow recurrent network. | Recurrent neural network based machine learning systems are typically employed for their sequential functionality in handling time-varying signals, such as for speech processing. However, neurobiologists find recurrent connections in the vision system and debate about equilibrium-point control in the motor system. Thus, we need a deeper understanding of how recurrent dynamics can be exploited to attain combinational stable-input stable-output functionality. Here, we study how a simplified Cohen-Grossberg neural network model can realize combinational multi-input Boolean functionality. We place our problem within the discipline of algebraic geometry, and solve a special case of it using piecewise-linear algebra. We demonstrate a connectance-efficient realization of the parity function as a proof-of-concept. Small-scale systems of this kind can be easily built, say for hobby robotics, as a network of two-terminal devices of resistors and tunnel diodes. Large-scale systems may be energy-efficiently built as an interconnected network of multi-electrode nanoclusters with non-monotonic transport mechanisms. | Reject | We had quite a bit of discussion on this paper. I read the paper and agree with some of the discussions that the paper in its current form might not attract interest in the community due to the following reasons:
- While it is highly interdisciplinary and potentially of high impact, the authors did not manage to connect all topics to tell a concise story on the use of equilibrium-point control theory in making a new recurrent neural network model. Here are some final comments from our discussions with the reviewers:
- While the content looks technically sound, we have not seen the expected revision from the authors to improve the paper. We were expecting a better introduction to the problem and further discussion to convey the contribution.
- We think that it would be in the interest of the authors to extend the paper with clarifications and more background information such that a larger audience will be able to learn something from their work.
- We believe that this issue is fixable as there is ample space to add additional background/introduction to explain the problem/relevance. However, the authors did not manage to convince the reviewers how they are going to address this during the discussion period.
- The authors could elaborate much more in detail on all topics involved to build a better understanding first and convey their contribution and impact better.
- Also, there are recent advances in state-space models and their use as expressive representation learning algorithms within the community, which are disregarded in the paper. In particular, the stability condition, memorization, the efficiency of computing transition matrices (which are very relevant to this paper), and other properties have been discussed in the last few years. Here is an example:
[1] Gu et al. Efficiently modeling long sequences with structured state spaces, ICLR 2022 https://arxiv.org/abs/2111.00396
Based on these points, I vote for the rejection of this paper. | val | [
"DJdg4E7TlHh",
"2tQRWcKhBxs",
"HCCxzU3-eal",
"QGJsdK_mO9M",
"-DJ86b8lMsM",
"_jAUacdof3",
"a65WYRbclqf",
"eN3Rwl2Z820"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. Yes, a broad class of functions can be approximated as piecewise-linear. The focus here is on the parity function because it is highly-nonlinear (of maximal threshold order [41]). Thank you for acknowledging that this is already a nice proof-of-concept. \n\n2. Empirical results based on varying N can be provi... | [
-1,
-1,
-1,
-1,
4,
6,
7,
2
] | [
-1,
-1,
-1,
-1,
2,
1,
2,
1
] | [
"_jAUacdof3",
"-DJ86b8lMsM",
"a65WYRbclqf",
"eN3Rwl2Z820",
"nips_2022_401LFvBGIb",
"nips_2022_401LFvBGIb",
"nips_2022_401LFvBGIb",
"nips_2022_401LFvBGIb"
] |
nips_2022_ACThGJBOctg | Kernel Interpolation with Sparse Grids | Structured kernel interpolation (SKI) accelerates Gaussian processes (GP) inference by interpolating the kernel covariance function using a dense grid of inducing points, whose corresponding kernel matrix is highly structured and thus amenable to fast linear algebra. Unfortunately, SKI scales poorly in the dimension of the input points, since the dense grid size grows exponentially with the dimension. To mitigate this issue, we propose the use of sparse grids within the SKI framework. These grids enable accurate interpolation, but with a number of points growing more slowly with dimension. We contribute a novel nearly linear time matrix-vector multiplication algorithm for the sparse grid kernel matrix. We also describe how sparse grids can be combined with an efficient interpolation scheme based on simplicial complexes. With these modifications, we demonstrate that SKI can be scaled to higher dimensions while maintaining accuracy, for both synthetic and real datasets. | Accept | All reviewers found this paper relevant for the conference, original, and well written. All four reviewers recommended accepting the paper. For the camera-ready, please go through the reviewer comments in detail and check that you have addressed the remaining concerns regarding clarity and improving presentation.
For the camera-ready version, you also need to fix the font issue in your paper. In the current version, the font/text size is not what it should be (most likely due to some package clash). | train | [
"MyHdRjPVIJD",
"KAmHV3ED_W",
"rRKSzmFFtdr",
"zzamdC5FvcY",
"iKjysDhAudHh",
"DoXK6UqVZDjR",
"icP71mA1lR",
"SvAjOcJxGlD",
"mYz-R3bSGSv",
"M8YpNpjhhH-",
"JPUgdcpuvm"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for responding to the rebuttal and reminding us that the submission can be updated. \n\nWe have included both variance in Table 1 and more synthetic experiments in Appendix C.4. In summary, synthetic data experiments reinforce the original findings in the main section, i.e., sparse grid interpolation b... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"KAmHV3ED_W",
"zzamdC5FvcY",
"JPUgdcpuvm",
"M8YpNpjhhH-",
"mYz-R3bSGSv",
"SvAjOcJxGlD",
"nips_2022_ACThGJBOctg",
"nips_2022_ACThGJBOctg",
"nips_2022_ACThGJBOctg",
"nips_2022_ACThGJBOctg",
"nips_2022_ACThGJBOctg"
] |
nips_2022_FFPcFtWJwsB | Large-scale Optimization of Partial AUC in a Range of False Positive Rates | The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning. However, it summarizes the true positive rates (TPRs) over all false positive rates (FPRs) in the ROC space, which may include the FPRs with no practical relevance in some applications. The partial AUC, as a generalization of the AUC, summarizes only the TPRs over a specific range of the FPRs and is thus a more suitable performance measure in many real-world situations. Although partial AUC optimization in a range of FPRs had been studied, existing algorithms are not scalable to big data and not applicable to deep learning. To address this challenge, we cast the problem into a non-smooth difference-of-convex (DC) program for any smooth predictive functions (e.g., deep neural networks), which allowed us to develop an efficient approximated gradient descent method based on the Moreau envelope smoothing technique, inspired by recent advances in non-smooth DC optimization. To increase the efficiency of large data processing, we used an efficient stochastic block coordinate update in our algorithm. Our proposed algorithm can also be used to minimize the sum of ranked range loss, which also lacks efficient solvers. We established a complexity of $\tilde O(1/\epsilon^6)$ for finding a nearly $\epsilon$-critical solution. Finally, we numerically demonstrated the effectiveness of our proposed algorithms in training both linear models and deep neural networks for partial AUC maximization and sum of ranked range loss minimization. | Accept | The paper proposes a difference-of-convex (DC) based algorithm for optimizing the partial AUC performance measure. The authors show convergence guarantees and superior empirical performance compared to baselines.
The reviewers had some concerns about the novelty of the use of DC optimization; in response, the authors point out that the presence of non-smooth terms in both components of the DC objective makes their formulation non-standard and interesting.
Overall, the reviewers seem to agree that the paper should be accepted.
To address some pending concerns, the authors are **strongly encouraged** to include the following:
- A detailed discussion on hyper-parameter tuning and the values they prescribe in practice, as well as an analysis of how sensitive their algorithm is to the hyper-parameters
- The additional experiments that the authors promised to Reviewer AF28
We trust that the authors will include the above in the camera-ready version of the paper. | train | [
"P4bQ9_EifGM",
"02GdErY6t4",
"PoHzBVfHR5",
"AQOljZ7jOjW",
"QMDthT7LW5",
"_xC6auapQm",
"uwVXcGyZrJj",
"fNlzecYbKv",
"LkKKYjQlgT7",
"_vqlHfj14FA",
"djURL3kYicY",
"ArP9KB5Tnvq",
"rlf-WlzuXVp",
"uYlgcTLfRk"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 412K,\n\nThank you for reading our response and raising your score.\n\nThe GPU time represents the time used if the algorithm is run on GPU. In Figure 7, the dashed line of $SVM\\_{pAUC}^{tight}$ does not reflect its convergence with GPU time. It is reported for reference only since we use the autho... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"02GdErY6t4",
"fNlzecYbKv",
"djURL3kYicY",
"uYlgcTLfRk",
"_xC6auapQm",
"uwVXcGyZrJj",
"ArP9KB5Tnvq",
"uYlgcTLfRk",
"rlf-WlzuXVp",
"djURL3kYicY",
"nips_2022_FFPcFtWJwsB",
"nips_2022_FFPcFtWJwsB",
"nips_2022_FFPcFtWJwsB",
"nips_2022_FFPcFtWJwsB"
] |
nips_2022_WXdSp8k0TMn | Revisiting Non-Parametric Matching Cost Volumes for Robust and Generalizable Stereo Matching | Stereo matching is a classic challenging problem in computer vision, which has recently witnessed remarkable progress by Deep Neural Networks (DNNs). This paradigm shift leads to two interesting and entangled questions that have not been addressed well. First, it is unclear whether stereo matching DNNs that are trained from scratch really learn to perform matching well. This paper studies this problem from the lens of white-box adversarial attacks. It presents a method of learning stereo-constrained photometrically-consistent attacks, which by design are weaker adversarial attacks, and yet can cause catastrophic performance drop for those DNNs. This observation suggests that they may not actually learn to perform matching well in the sense that they should otherwise achieve potentially even better after stereo-constrained perturbations are introduced. Second, stereo matching DNNs are typically trained under the simulation-to-real (Sim2Real) pipeline due to the data hungriness of DNNs. Thus, alleviating the impacts of the Sim2Real photometric gap in stereo matching DNNs becomes a pressing need. Towards joint adversarially robust and domain generalizable stereo matching, this paper proposes to learn DNN-contextualized binary-pattern-driven non-parametric cost-volumes. It leverages the perspective of learning the cost aggregation via DNNs, and presents a simple yet expressive design that is fully end-to-end trainable, without resorting to specific aggregation inductive biases. In experiments, the proposed method is tested in the SceneFlow dataset, the KITTI2015 dataset, and the Middlebury dataset. It significantly improves the adversarial robustness, while retaining accuracy performance comparable to state-of-the-art methods. It also shows a better Sim2Real generalizability. Our code and pretrained models are released at \href{https://github.com/kelkelcheng/AdversariallyRobustStereo}{this Github Repo}. | Accept | This paper addresses the problem of robustness in stereo-matching. It has been reviewed by several knowledgeable reviewers with extensive experience in stereo-matching and learning for stereo. The majority consensus from the reviews was that the paper will be of interest to the community and should be accepted. This meta-review agrees, and recommends acceptance.
However, as noted by the reviewers, there are some issues with the text that need to be fixed e.g.:
- Lack of focus in aspects of the presentation (3tmL)
- Lacking descriptions of the method and evaluation (3tmL)
- Lack of discussion of the multi-view (not time synchronized) setting (WPxi)
- Difficult to compare the results across Tables (WPxi)
- Missing discussion of Cai et al. 3DV 2020 (RUGj)
Finally, while HFdJ was less supportive of the paper in their original review, they did not provide references to support their claim that "The cost aggregation problem perspective has been widely exploited." Furthermore, they did not engage in the discussion to further expand on their concerns. Given this, less weight was placed on their comments.
| train | [
"neEQO4M1oy",
"5lcARMfnAXL",
"RtNwffGIxDB",
"Csw7tpMPifv",
"hC912nz_CCI",
"rXTf-k83aFD",
"bm6E6D0we7g",
"9Df7IlUmtYz",
"N8984bd2jG9",
"k6HoJrvbakQ",
"3kQVF02Dv_6",
"WERnOcDj8Og",
"-NtUC5p554",
"lylcQm-JXUK",
"kBp4eHXpngC",
"jvYioFrTAuc"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. Based on your comments and the other reviews I am raising my score to weak accept.",
" Thank you very much for your detailed answer. This resolves my concerns.",
" **Comments**: Would it be possible to get a summary / preview of this discussion of the applicability and limitation ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
5
] | [
"N8984bd2jG9",
"Csw7tpMPifv",
"Csw7tpMPifv",
"hC912nz_CCI",
"k6HoJrvbakQ",
"bm6E6D0we7g",
"3kQVF02Dv_6",
"jvYioFrTAuc",
"-NtUC5p554",
"lylcQm-JXUK",
"WERnOcDj8Og",
"kBp4eHXpngC",
"nips_2022_WXdSp8k0TMn",
"nips_2022_WXdSp8k0TMn",
"nips_2022_WXdSp8k0TMn",
"nips_2022_WXdSp8k0TMn"
] |
nips_2022_2Bus7sfjZh8 | Learning and Covering Sums of Independent Random Variables with Unbounded Support | We study the problem of covering and learning sums $X = X_1 + \cdots + X_n$ of independent integer-valued random variables $X_i$ (SIIRVs) with infinite support. De et al. at FOCS 2018, showed that even when the collective support of $X_i$'s is of size $4$, the maximum value of the support necessarily appears in the sample complexity of learning $X$. In this work, we address two questions: (i) Are there general families of SIIRVs with infinite support that can be learned with sample complexity independent of both $n$ and the maximal element of the support? (ii) Are there general families of SIIRVs with infinite support that admit proper sparse covers in total variation distance? As for question (i), we provide a set of simple conditions that allow the infinitely supported SIIRV to be learned with complexity $ \text{poly}(1/\epsilon)$ bypassing the aforementioned lower bound. We further address question (ii) in the general setting where each variable $X_i$ has unimodal probability mass function and is a different member of some, possibly multi-parameter, exponential family $\mathcal{E}$ that satisfies some structural properties. These properties allow $\mathcal{E}$ to contain heavy tailed and non log-concave distributions. Moreover, we show that for every $\epsilon > 0$, and every $k$-parameter family $\mathcal{E}$ that satisfies some structural assumptions, there exists an algorithm with $\widetilde{O}(k) \cdot \text{poly}(1/\epsilon)$ samples that learns a sum of $n$ arbitrary members of $\mathcal{E}$ within $\epsilon$ in TV distance. The output of the learning algorithm is also a sum of random variables within the family $\mathcal{E}$. En route, we prove that any discrete unimodal exponential family with bounded constant-degree central moments can be approximated by the family corresponding to a bounded subset of the initial (unbounded) parameter space. | Accept | The authors address the fundamental problem of learning the sum of n independent random variables (not necessarily identically distributed). They concentrate on the setting where the variables have infinite support. In general, it's known that the sample complexity for this problem is not bounded, but they get around this barrier by making parametric assumptions on the distributions of the variables.
The reviewers appreciated the strength of the results, and the authors engaged with them to dispel remaining questions. It is not exactly clear where an application of the results here would make a concrete impact; the authors hint in the rebuttal that understanding the "delicate structure" of sums of independent random variables is important in game theory and stochastic combinatorial optimization, but this is not made explicit. In any case, the theoretical contributions are solid enough to merit acceptance. | train | [
"5U5Fy78pzbs",
"GaB4W1gI5f3",
"3dXZqyFxC47",
"COthJZANPh0",
"s6Gb1bclKoS",
"J9VT__CtH7H",
"S8ZNHHshWJP",
"CK-v74CuKsL",
"lWvN8oDJ1om"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the very detailed response. I am fairly convinced about the minimality of the assumptions and have increased my score to reflect that. ",
" Thank you very much for your detailed response! I also appreciated the discussion of the minimality of the assumptions in your other response to the other review... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
4
] | [
"COthJZANPh0",
"s6Gb1bclKoS",
"J9VT__CtH7H",
"lWvN8oDJ1om",
"CK-v74CuKsL",
"S8ZNHHshWJP",
"nips_2022_2Bus7sfjZh8",
"nips_2022_2Bus7sfjZh8",
"nips_2022_2Bus7sfjZh8"
] |
nips_2022_vDeh2yxTvuh | When Do Flat Minima Optimizers Work? | Recently, flat-minima optimizers, which seek to find parameters in low-loss neighborhoods, have been shown to improve a neural network's generalization performance over stochastic and adaptive gradient-based optimizers. Two methods have received significant attention due to their scalability: 1. Stochastic Weight Averaging (SWA), and 2. Sharpness-Aware Minimization (SAM). However, there has been limited investigation into their properties and no systematic benchmarking of them across different domains. We fill this gap here by comparing the loss surfaces of the models trained with each method and through broad benchmarking across computer vision, natural language processing, and graph representation learning tasks. We discover several surprising findings from these results, which we hope will help researchers further improve deep learning optimizers, and practitioners identify the right optimizer for their problem. | Accept | This paper is an empirical investigation comparing two popular optimization techniques, namely SAM and SWA. The authors compare the performance of these two methods over a wide range of tasks and architectures. They also inspect and compare the properties of minima found by these methods. Given the existing interest in the ML community in understanding these methods, reviewers are in agreement that insights from this paper are valuable and impactful enough to accept this paper.
"ZOt8KgO4M1U",
"bbVq3paUoyY",
"3odCbhBnBqW",
"ItSqeELTr4l",
"-Hkgy16MA7X",
"EFkfkTduPXV",
"PzXCUWkCNv",
"y3fhwBIg6P4",
"cImEDJroi1t",
"2d914D6d1Y",
"66_0z-nzaV",
"lypouLlvNK",
"_QGOpFjUrt8",
"SiMKL5SRnE0",
"CsH3bgf8tnc",
"0vWN1ZB7Ip_",
"E4AxRG_9mOc",
"v_U-ONp-PI4",
"XxEezBwjHX",
... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thank you so much for engaging with us in this discussion. We appreciate it very much.\n\nWe just updated the Appendix (see supplementary material) and tried to address all your remaining questions.\n\nAddressing your above points:\n\n2. In A.2, we added the Hessian eigenvalue density, see Figure 7. Significant p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"PzXCUWkCNv",
"ItSqeELTr4l",
"ItSqeELTr4l",
"EFkfkTduPXV",
"66_0z-nzaV",
"lypouLlvNK",
"cImEDJroi1t",
"SiMKL5SRnE0",
"2d914D6d1Y",
"kte0H1UOkKY",
"rZ96QJi7Ww",
"_QGOpFjUrt8",
"XxEezBwjHX",
"CsH3bgf8tnc",
"0vWN1ZB7Ip_",
"v_U-ONp-PI4",
"nips_2022_vDeh2yxTvuh",
"nips_2022_vDeh2yxTvuh"... |
nips_2022_IIDC-pVqkrf | Using Partial Monotonicity in Submodular Maximization | Over the last two decades, submodular function maximization has been the workhorse of many discrete optimization problems in machine learning applications. Traditionally, the study of submodular functions was based on binary function properties, but recent works began to consider continuous function properties such as the submodularity ratio and the curvature. The monotonicity property of set functions plays a central role in submodular maximization. Nevertheless, no continuous version of this property has been suggested to date (as far as we know), which is unfortunate since submodular functions that are almost monotone often arise in machine learning applications. In this work we fill this gap by defining the monotonicity ratio, which is a continuous version of the monotonicity property. We then show that for many standard submodular maximization algorithms one can prove new approximation guarantees that depend on the monotonicity ratio; leading to improved approximation ratios for the common machine learning applications of movie recommendation, quadratic programming, image summarization and ride-share optimization. | Accept | This paper introduces a new notion of partial monotonicity in submodular maximization that is interesting and relevant. The authors derive approximation ratios for existing algorithms that are a function of the monotonicity ratio. This new notion is likely to be studied and to remain relevant in future work.
One concern raised by a reviewer regards the fact that the monotonicity ratio of a function is hard to compute. However, this is also the case for other properties of submodular functions, and this concern is adequately discussed in the paper.
"MY7Xa3Sdask",
"NTYjpOZan1w",
"iBFKKol7fG",
"bgORJqkR5bO",
"-jCD96nGucd",
"yxQmZeadqZs",
"euleJ1UuiwV",
"gCuOCOq7Pm4",
"Ygyc18zEi46",
"75WkP0RUpa",
"cO_F7hx_J5R",
"tr9IU0oOpZV",
"1pd4-CaUz1",
"1Nh3cuIE_IC",
"qwNC1YzLs9x",
"AVdxgfyKbdT",
"udw8ujtLsHj",
"pWCKiA5EHO",
"HYnL7SJ6KrG",... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official... | [
" Thanks for your detailed answers to my questions. I agree that the inapproximability results in the paper are pretty significant and challenging. However, I'm still not sure about the current definition of monotonicity ratio (which is the reason for the \"difficulty in stating the monotonicity ratio in terms of p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"bgORJqkR5bO",
"iBFKKol7fG",
"HYnL7SJ6KrG",
"mzb_emgM_6V",
"YihL78fc0Hv",
"YihL78fc0Hv",
"YihL78fc0Hv",
"YihL78fc0Hv",
"YihL78fc0Hv",
"mjrILor_chd",
"mzb_emgM_6V",
"mzb_emgM_6V",
"mzb_emgM_6V",
"mzb_emgM_6V",
"mzb_emgM_6V",
"mzb_emgM_6V",
"HYnL7SJ6KrG",
"HYnL7SJ6KrG",
"nips_2022_... |
nips_2022_GL-3WEdNRM | Phase diagram of Stochastic Gradient Descent in high-dimensional two-layer neural networks | Despite the non-convex optimization landscape, over-parametrized shallow networks are able to achieve global convergence under gradient descent. The picture can be radically different for narrow networks, which tend to get stuck in badly-generalizing local minima. Here we investigate the cross-over between these two regimes in the high-dimensional setting, and in particular investigate the connection between the so-called mean-field/hydrodynamic regime and the seminal approach of Saad \& Solla. Focusing on the case of Gaussian data, we study the interplay between the learning rate, the time scale, and the number of hidden units in the high-dimensional dynamics of stochastic gradient descent (SGD). Our work builds on a deterministic description of SGD in high-dimensions from statistical physics, which we extend and for which we provide rigorous convergence rates. | Accept | The reviewers and I agree that the contributions of the paper are of interest and a useful addition to the literature. Therefore, I recommend accepting the paper.
Please consider the reviewers' comments when preparing the camera-ready version.
| train | [
"vbMMVxjTwY",
"vTKOKryLnA9",
"XF1qgJXcBtm",
"bPgOyCSBEqv",
"MXch7Z5tZ5U",
"rPUi5uH-gfw",
"uksEYfd4yLB",
"wXgu2hdziLG",
"Gn_atdNakKl",
"20WAR2LcDWI",
"Z4BHkpbd3_",
"5cedx_73jec",
"zlMc7qtfk0n",
"ttMLyB315hp",
"3D9PAn4eM3R",
"HDLivAp2AGf",
"D8IWFnwekc",
"p4h6mIrHdtI",
"7EgpxQWmn9",... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" We thank the reviewer for the comments and for supporting the acceptance of our work.",
" We thank again the reviewer for the constructive comments. Below we address the remarks.\n\n>*The propagation of error analysis was done in \"Mean-field theory of two-layers neural networks: dimension-free bounds and kerne... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4,
2,
4
] | [
"wXgu2hdziLG",
"rPUi5uH-gfw",
"MXch7Z5tZ5U",
"5cedx_73jec",
"Gn_atdNakKl",
"Z4BHkpbd3_",
"ttMLyB315hp",
"zlMc7qtfk0n",
"20WAR2LcDWI",
"3D9PAn4eM3R",
"p4h6mIrHdtI",
"D8IWFnwekc",
"7EgpxQWmn9",
"b73C_qkJa3S",
"do50bkUWZym",
"PNHc5fiz3w",
"nips_2022_GL-3WEdNRM",
"nips_2022_GL-3WEdNRM"... |
nips_2022_WrIrYMCZgbb | Exploiting Semantic Relations for Glass Surface Detection | Glass surfaces are omnipresent in our daily lives and often go unnoticed by the majority of us. While humans are generally able to infer their locations and thus avoid collisions, it can be difficult for current object detection systems to handle them due to the transparent nature of glass surfaces. Previous methods approached the problem by extracting global context information to obtain priors such as object boundaries and reflections. However, their performances cannot be guaranteed when these deterministic features are not available. We observe that humans often reason through the semantic context of the environment, which offers insights into the categories of and proximity between entities that are expected to appear in the surrounding. For example, the odds of co-occurrence of glass windows with walls and curtains are generally higher than that with other objects such as cars and trees, which have relatively less semantic relevance. Based on this observation, we propose a model ('GlassSemNet') that integrates the contextual relationship of the scenes for glass surface detection with two novel modules: (1) Scene Aware Activation (SAA) Module to adaptively filter critical channels with respect to spatial and semantic features, and (2) Context Correlation Attention (CCA) Module to progressively learn the contextual correlations among objects both spatially and semantically. In addition, we propose a large-scale glass surface detection dataset named {\it Glass Surface Detection - Semantics} ('GSD-S'), which contains 4,519 real-world RGB glass surface images from diverse real-world scenes with detailed annotations for both glass surface detection and semantic segmentation. Experimental results show that our model outperforms contemporary works, especially with 42.6\% MAE improvement on our proposed GSD-S dataset. Code, dataset, and models are available at https://jiaying.link/neurips2022-gsds/ | Accept | Post rebuttal, three out of four reviewers were in favor of acceptance. The hold-out reviewer pjyT was primarily concerned with the fact that the method isn't specifically tailored to the task (in the sense that other domains have similar features) and therefore the method ought to be tested in more settings. The AC understands pjyT's perspective in the sense that deep problem-specific understanding of a domain often drives methodological contributions. However, the AC does not find this to be a convincing argument for rejecting the paper and thinks the work provides enough of a strong contribution. Accordingly, the AC recommends accepting the paper. | test | [
"z3sRBIyL7KE",
"dZu2miE42Re",
"Q3eMiBHeFyB",
"NjHxq-uF_P9",
"B5OTO84fOfm",
"fNNPQJpiHPD",
"tRh2yO3h2DZ",
"LHF9tdqwzbf",
"CihC4aFTY-L",
"fOVH4A1a1eU"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks so much for your response.\nAfter reading the response, my concerns are not solved.",
" We would like to thank the reviewers for the evaluation of our manuscript for the positive feedback: interesting idea (Reviewer KsvV, Reviewer XdVk), proposed useful dataset (Reviewer yyRs, Reviewer XdVk, Reviewer pjy... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"Q3eMiBHeFyB",
"nips_2022_WrIrYMCZgbb",
"fOVH4A1a1eU",
"CihC4aFTY-L",
"LHF9tdqwzbf",
"tRh2yO3h2DZ",
"nips_2022_WrIrYMCZgbb",
"nips_2022_WrIrYMCZgbb",
"nips_2022_WrIrYMCZgbb",
"nips_2022_WrIrYMCZgbb"
] |
nips_2022_ccYOWWNa5v2 | Lifelong Neural Predictive Coding: Learning Cumulatively Online without Forgetting | In lifelong learning systems based on artificial neural networks, one of the biggest obstacles is the inability to retain old knowledge as new information is encountered. This phenomenon is known as catastrophic forgetting. In this paper, we propose a new kind of connectionist architecture, the Sequential Neural Coding Network, that is robust to forgetting when learning from streams of data points and, unlike networks of today, does not learn via the popular back-propagation of errors. Grounded in the neurocognitive theory of predictive coding, our model adapts its synapses in a biologically-plausible fashion while another neural system learns to direct and control this cortex-like structure, mimicking some of the task-executive control functionality of the basal ganglia. In our experiments, we demonstrate that our self-organizing system experiences significantly less forgetting compared to standard neural models, outperforming a swath of previously proposed methods, including rehearsal/data buffer-based methods, on both standard (SplitMNIST, Split Fashion MNIST, etc.) and custom benchmarks even though it is trained in a stream-like fashion. Our work offers evidence that emulating mechanisms in real neuronal systems, e.g., local learning, lateral competition, can yield new directions and possibilities for tackling the grand challenge of lifelong machine learning. | Accept | This paper provides a biologically-inspired method based on predictive coding to address the dangers of catastrophic forgetting in continual learning, while encouraging models to leverage similar data from their past when learning from present information. The reviewers agreed the paper was interesting and worth publishing, although there was a spread in their enthusiasm. That said, the consensus is clear enough that I am happy to recommend acceptance. | train | [
"zBueG0yuBHG",
"oWiKyldRoU",
"Qs4CIh8aOdi",
"gTcklZY4CA_",
"adI9HgjjYhh",
"r35AngnyIEw",
"XsnXLzJWehC",
"ZuzL-PLCVX9",
"_saMVAtvdS",
"HvZuteOPhK",
"F0CrdvAQJBW",
"oP35wkrHYo",
"CtWVMs1cfZc",
"JNx9ClBAQRE",
"lhj6RTho65N",
"JJ2wqtTUl6j"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Note that we will also incorporate the above additional clarifications/question response (to Reviewer 36Ec) into the main paper/appendix.",
" ***The task-IL is the multi-head scenario...It is mentioned in the response that experiments are Class-IL but I notice both multi-head and single head runs which is confu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"oWiKyldRoU",
"Qs4CIh8aOdi",
"CtWVMs1cfZc",
"r35AngnyIEw",
"XsnXLzJWehC",
"HvZuteOPhK",
"_saMVAtvdS",
"nips_2022_ccYOWWNa5v2",
"JJ2wqtTUl6j",
"F0CrdvAQJBW",
"lhj6RTho65N",
"CtWVMs1cfZc",
"JNx9ClBAQRE",
"nips_2022_ccYOWWNa5v2",
"nips_2022_ccYOWWNa5v2",
"nips_2022_ccYOWWNa5v2"
] |
nips_2022_VgX6ceDerh2 | Stochastic Online Learning with Feedback Graphs: Finite-Time and Asymptotic Optimality | We revisit the problem of stochastic online learning with feedback graphs, with the goal of devising algorithms that are optimal, up to constants, both asymptotically and in finite time. We show that, surprisingly, the notion of optimal finite-time regret is not a uniquely defined property in this context and that, in general, it is decoupled from the asymptotic rate. We discuss alternative choices and propose a notion of finite-time optimality that we argue is \emph{meaningful}. For that notion, we give an algorithm that admits quasi-optimal regret both in finite-time and asymptotically. | Accept | The reviewers largely agreed in the opinion that the paper has enough results on online learning with feedback graphs. On the other hand, concerns on the presentation and the overall picture of the contribution are raised, which I agree with. Though some of them come from the inherent difficulty of the problem, I strongly encourage the authors to carefully address these points in the final version.
"oVCX14nk3rq",
"LaKkr9qO8Xo",
"ZG0it95QDcS",
"0HjkL9irm4_",
"jPFFRMCysQ",
"Mmzv2IV5VX5",
"-ePGS33sz4X",
"0M3qGUpWG1P",
"nuZMGCRbrqv",
"wgs1oQ9GyFI"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for addressing my concerns. With the typo correction, Theorem 4.1 is now sensible. Given the additional clarifications that the authors promise in the final version, I currently plan on keeping my score.",
" > The assumption introduced in this paper might be a bit strong, while a sufficient ... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
1
] | [
"ZG0it95QDcS",
"wgs1oQ9GyFI",
"nuZMGCRbrqv",
"0M3qGUpWG1P",
"-ePGS33sz4X",
"nips_2022_VgX6ceDerh2",
"nips_2022_VgX6ceDerh2",
"nips_2022_VgX6ceDerh2",
"nips_2022_VgX6ceDerh2",
"nips_2022_VgX6ceDerh2"
] |
nips_2022_mrt90D00aQX | FedSR: A Simple and Effective Domain Generalization Method for Federated Learning | Federated Learning (FL) refers to the decentralized and privacy-preserving machine learning framework in which multiple clients collaborate (with the help of a central server) to train a global model without sharing their data. However, most existing FL methods only focus on maximizing the model's performance on the source clients' data (e.g., mobile users) without considering its generalization ability to unknown target data (e.g., a new user). In this paper, we incorporate the problem of Domain Generalization (DG) into Federated Learning to tackle the aforementioned issue. However, virtually all existing DG methods require a centralized setting where data is shared across the domains, which violates the principles of decentralized FL and hence not applicable. To this end, we propose a simple yet novel representation learning framework, namely FedSR, which enables domain generalization while still respecting the decentralized and privacy-preserving natures of this FL setting. Motivated by classical machine learning algorithms, we aim to learn a simple representation of the data for better generalization. In particular, we enforce an L2-norm regularizer on the representation and a conditional mutual information (between the representation and the data given the label) regularizer to encourage the model to only learn essential information (while ignoring spurious correlations such as the background). Furthermore, we provide theoretical connections between the above two objectives and representation alignment in domain generalization. Extensive experimental results suggest that our method significantly outperforms relevant baselines in this particular problem. | Accept | The paper studies the problem of domain generalization in federated learning and proposes a new regularizer, which is a combination of L2-norm regularizer on the representation and a conditional mutual information (between the representation and the data given the label) regularizer. The paper is well written and authors provide experimental results in domain generalization datasets. Reviewers also raise several concerns about the paper. They remark that the experiments are not in the cross-device federated learning setup with many clients / domains, the algorithm is not novel in the domain generalization community, lack of comparison with prior works. I strongly encourage authors to add more baselines from the domain generalization literature in the final version. | train | [
"XXGqQlb4OwW",
"KYoIC_tvP1h",
"xMfNRKcnq44",
"eBqQh_2Pz5L",
"MXQ2V8PUU32",
"SixS1Zhnog",
"q6kvfr70O6h",
"1zvYG1BM3E2",
"txt2daoIkj6",
"Fs1vSKr3KDYr",
"omwZEvdqJH7",
"pwG9uwGf3GT",
"l9Qs2NyOd0-",
"wDWJToUdw5f",
"Qqv-YJRqpN9",
"OLGeckh6i1",
"5jUF2cz1OyD",
"UHrtC8nXgUA"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for more clarification. I do acknowledge the soundness of the proposed method. However, I am still not convinced how the proposed method benefits FL. There are plenty of methods focusing single domain generalization, which does not require domain interaction and can also be applied in FL setting, based on ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"xMfNRKcnq44",
"eBqQh_2Pz5L",
"MXQ2V8PUU32",
"wDWJToUdw5f",
"SixS1Zhnog",
"q6kvfr70O6h",
"UHrtC8nXgUA",
"nips_2022_mrt90D00aQX",
"Fs1vSKr3KDYr",
"omwZEvdqJH7",
"OLGeckh6i1",
"UHrtC8nXgUA",
"5jUF2cz1OyD",
"Qqv-YJRqpN9",
"nips_2022_mrt90D00aQX",
"nips_2022_mrt90D00aQX",
"nips_2022_mrt9... |
nips_2022_8hoDLRLtl9h | Distribution-Informed Neural Networks for Domain Adaptation Regression | In this paper, we study the problem of domain adaptation regression, which learns a regressor for a target domain by leveraging the knowledge from a relevant source domain. We start by proposing a distribution-informed neural network, which aims to build distribution-aware relationship of inputs and outputs from different domains. This allows us to develop a simple domain adaptation regression framework, which subsumes popular domain adaptation approaches based on domain invariant representation learning, reweighting, and adaptive Gaussian process. The resulting findings not only explain the connections of existing domain adaptation approaches, but also motivate the efficient training of domain adaptation approaches with overparameterized neural networks. We also analyze the convergence and generalization error bound of our framework based on the distribution-informed neural network. Specifically, our generalization bound focuses explicitly on the maximum mean discrepancy in the RKHS induced by the neural tangent kernel of distribution-informed neural network. This is in sharp contrast to the existing work which relies on domain discrepancy in the latent feature space heuristically formed by one or several hidden neural layers. The efficacy of our framework is also empirically verified on a variety of domain adaptation regression benchmarks. | Accept | This paper works towards domain adaptation in the regression setting through the distribution-informed neural networks. It provides a nice theoretical elaboration to the three mainstream methods in this field: domain invariant learning, source data re-weighting, and adaptive Gaussian processes. However, reviewers were on the borderline even after the author rebuttal and after the rating increase. AC carefully justified the merits and flaws of this paper and felt that the former slightly outweighs the latter, and this is in agreement with the SAC. The major concern is the use of fully-connected network for theoretical analysis and empirical evaluation --- the scalability of the theory to modern neural networks seems not easy while the paper does not provide sufficient evidence in this regard. However, considering this paper is more theory-oriented, such a concern is down-weighted a little bit. Thus the paper is recommended for acceptance. Authors are suggested to improve their paper by incorporating the rebuttal and further addressing the reviews, in particular improving the clarity and smoothness of the presentation to be more readable. | train | [
"NmxE7QNlaSu",
"mANYuUxBA3A",
"kLDiAhkD0ON",
"ktaupcKabJi",
"JhJS6T3Y84K",
"CBf9XruGHqO",
"A0IlKgsU4N_",
"GFeaXUb-Rn8",
"UWgARrvE_4l",
"q6dTxT1aqZ7",
"niEiX_1s9KM",
"QEjMSgPQLQS",
"hkWzQP9g9yY",
"V-mP79vWMCu",
"-3dGOIC2DUb",
"dL37CsTTGVX",
"HjY2zvtbpL2"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your suggestions. Due to the limited time, we might not have the experimental results for validating the proposed algorithms based on CNNs. But we found that previous work [41] has a similar empirical evaluation of the neural networks based on Fully-Connected (FCN) and Convolutional (CNN) ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
2,
4,
4,
2
] | [
"kLDiAhkD0ON",
"ktaupcKabJi",
"QEjMSgPQLQS",
"q6dTxT1aqZ7",
"CBf9XruGHqO",
"HjY2zvtbpL2",
"GFeaXUb-Rn8",
"dL37CsTTGVX",
"-3dGOIC2DUb",
"niEiX_1s9KM",
"V-mP79vWMCu",
"hkWzQP9g9yY",
"nips_2022_8hoDLRLtl9h",
"nips_2022_8hoDLRLtl9h",
"nips_2022_8hoDLRLtl9h",
"nips_2022_8hoDLRLtl9h",
"nip... |
nips_2022_XcDVT8HarS | Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive? | We study the theory of neural network (NN) from the lens of classical nonparametric regression problems with a focus on NN's ability to \emph{adaptively} estimate functions with \emph{heterogeneous smoothness} --- a property of functions in Besov or Bounded Variation (BV) classes.
Existing work on this problem requires tuning the NN architecture based on the function spaces and sample sizes.
We consider a ``Parallel NN'' variant of deep ReLU networks and show that the standard weight decay is equivalent to promoting the $\ell_p$-sparsity ($0<p<1$) of the coefficient vector of an end-to-end learned function bases, i.e., a dictionary.
Using this equivalence, we further establish that by tuning only the weight decay, such Parallel NN achieves an estimation error arbitrarily close to the minimax rates for both the Besov and BV classes.
Notably, it gets exponentially closer to minimax optimal as the NN gets deeper. Our research sheds new light on why depth matters and how NNs are more powerful than kernel methods. | Reject | This paper considers the generalization error of neural networks when approximating a specific set of functions. The main contribution of this paper is showing a rate for the neural network generalization error that is better than what linear methods achieve.
The major problems of this paper are
(1). Missing an important line of work on separating the power of neural networks from kernel methods, for example, "What Can ResNet Learn Efficiently, Going Beyond Kernels?". The authors do not seem to be aware of this line of work, which already separates the learning power of neural networks from that of any linear learner.
(2). Assuming that training can minimize the objective function to the global optimum. While it is true that neural networks can fit all the training data in practice, it is very unclear whether they can actually find the global optimum of the regularized objective. This is quite an unrealistic assumption. The authors should at least extend their results to the case where the objective is <= some value, instead of only treating the global optimum.
(3). Parallel architecture. The parallel network considered in this paper has all the intermediate layers completely disjoint. This version of the parallel network is not what is used in practice. For example, in ResNeXt, the parallelism is layer-wise. The authors should at least clarify this difference instead of trying to present the result as "we consider the parallel networks as used in practice".
| train | [
"NV0ALDRJzwj",
"7Px0J5SVYcV",
"pkeevZ5_fl",
"B7uOyPq-XKr",
"nf4hAm6yBaI5",
"mD_wXopfE2U",
"Gd3QbF4cf42",
"rMoFTMem-Tn",
"ropxxKZ2OKg",
"jB8h1PvbdSO",
"040UFXhfPw",
"3uiBVuo4flG",
"oFZrtoldBq-",
"ZKMLpE5bSe",
"jGqRQn1zWv",
"0aQNb2h4HFC",
"j2wc3NWL1ZK"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We realized that the reviewer might be referring to the ~30% smaller number of parameters in NN comparing to PNN. That is not intentional and we were only trying to roughly match them in a ballpark. We will re-do the NN experiments by increasing the width to 240 and 470 respectively, such that the difference is ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
4,
4
] | [
"B7uOyPq-XKr",
"pkeevZ5_fl",
"rMoFTMem-Tn",
"nf4hAm6yBaI5",
"Gd3QbF4cf42",
"ropxxKZ2OKg",
"jB8h1PvbdSO",
"3uiBVuo4flG",
"j2wc3NWL1ZK",
"0aQNb2h4HFC",
"jGqRQn1zWv",
"ZKMLpE5bSe",
"nips_2022_XcDVT8HarS",
"nips_2022_XcDVT8HarS",
"nips_2022_XcDVT8HarS",
"nips_2022_XcDVT8HarS",
"nips_2022... |