paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_sFQJ0IOkHF | DivBO: Diversity-aware CASH for Ensemble Learning | The Combined Algorithm Selection and Hyperparameters optimization (CASH) problem is one of the fundamental problems in Automated Machine Learning (AutoML). Motivated by the success of ensemble learning, recent AutoML systems build post-hoc ensembles to output the final predictions instead of using the best single learner. However, while most CASH methods focus on searching for a single learner with the best performance, they neglect the diversity among base learners (i.e., they may suggest similar configurations to previously evaluated ones), which is also a crucial consideration when building an ensemble. To tackle this issue and further enhance the ensemble performance, we propose DivBO, a diversity-aware framework to inject explicit search of diversity into the CASH problems. In the framework, we propose to use a diversity surrogate to predict the pair-wise diversity of two unseen configurations. Furthermore, we introduce a temporary pool and a weighted acquisition function to guide the search of both performance and diversity based on Bayesian optimization. Empirical results on 15 public datasets show that DivBO achieves the best average ranks (1.82 and 1.73) on both validation and test errors among 10 compared methods, including post-hoc designs in recent AutoML systems and state-of-the-art baselines for ensemble learning on CASH problems. | Accept | After a thorough discussion with the authors, all reviewers agree that the paper should be accepted at NeurIPS. The reviewers appreciated the idea of incorporating diversity in the combined algorithm selection and hyper-parameter optimization (CASH) framework and the subsequent use of the diverse models in an ensemble to improve performance. The paper is very clearly written, and the experimental evaluation shows that the proposed technique provides small but consistent improvements in performance. 
The authors provided a comprehensive response where they addressed most of the reviewers' concerns. I expect the authors will incorporate all the new results in the camera ready version of the paper. | train | [
"4fYX79ntpM",
"l04QgmfZ4X3",
"3b50w2ZTmPQ",
"M5NO6EV9AsM",
"gKKJGLUaEbF",
"E_e0H-wh8lu",
"SvCAlHrvPik",
"vCL0p_5NGJw",
"s1A-mARzfO8",
"cK5D5rKp9ZH",
"N_d3oCuQXkC",
"re3QDKRZGU",
"An7NtAp-KSi",
"KRMaMlyr-m0",
"3FXOy6Al1b",
"Dg_A3p0_NDx",
"xGsbS6XKhBl",
"RcP-nahbnPj",
"NdCLAPKMxW5"... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thank you for the explanations in AQ2 and AQ4 and the new results in AQ3 and AQ5. AQ5 definitely makes sense. \n\nFollow up on AQ3. It is somewhat surprising to see that we need $\\beta$ to be so small otherwise diversity hurts more than it helps. It is somewhat disappointing that the crux of this paper is that d... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"l04QgmfZ4X3",
"3b50w2ZTmPQ",
"M5NO6EV9AsM",
"re3QDKRZGU",
"3FXOy6Al1b",
"vCL0p_5NGJw",
"cK5D5rKp9ZH",
"s1A-mARzfO8",
"NdCLAPKMxW5",
"N_d3oCuQXkC",
"RcP-nahbnPj",
"An7NtAp-KSi",
"KRMaMlyr-m0",
"xGsbS6XKhBl",
"Dg_A3p0_NDx",
"nips_2022_sFQJ0IOkHF",
"nips_2022_sFQJ0IOkHF",
"nips_2022_... |
nips_2022_FNzLe2-ppRO | TREC: Transient Redundancy Elimination-based Convolution | The intensive computations in convolutional neural networks (CNNs) pose challenges for resource-constrained devices; eliminating redundant computations from convolution is essential. This paper gives a principled method to detect and avoid transient redundancy, a type of redundancy existing in input data or activation maps and hence changing across inferences. By introducing a new form of convolution (TREC), this new method makes transient redundancy detection and avoidance an inherent part of the CNN architecture, and the determination of the best configurations for redundancy elimination part of CNN backward propagation. We provide a rigorous proof of the robustness and convergence of TREC-equipped CNNs. TREC removes over 96% computations and achieves 3.51x average speedups on microcontrollers with minimal (about 0.7%) accuracy loss. | Accept | This work proposes a redundancy pruning mechanism for convolutional neural networks to optimize their performance for extremely resource-constrained edge devices, such as microcontrollers.
The work is based on a novel gradient-optimized locality sensitive hashing approach that removes a lot of the indeterminacy of previous approaches. This method can achieve reliably high performance while improving the network inference latency significantly (e.g., 4x in some circumstances).
I think that the idea of tuning LSH via SGD directly for clustering purposes is, on its own, a good one and is of interest to the wider community. | train | [
"jFr1JvQh4K",
"K7QpD9LNERo",
"Zmn-3rlzKo",
"Tvco_5o5H2X",
"SkPpnLxxwox",
"oah_EQztILA",
"Zfu5kgabGW",
"F2oQgjf50EW"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" # NeurIPS 2022 author response\n\nWe thank the reviewers for insightful comments. We focus on answering the following concerns from reviewers:\n## Motivation for TREC design\nAs mentioned in the Introduction Section, in addition to designing a differentiable implementation of LSH clustering, TREC's contribution l... | [
-1,
-1,
-1,
-1,
5,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"F2oQgjf50EW",
"Zfu5kgabGW",
"oah_EQztILA",
"SkPpnLxxwox",
"nips_2022_FNzLe2-ppRO",
"nips_2022_FNzLe2-ppRO",
"nips_2022_FNzLe2-ppRO",
"nips_2022_FNzLe2-ppRO"
] |
nips_2022__j8yVIyp27Q | Bidirectional Learning for Offline Infinite-width Model-based Optimization | In offline model-based optimization, we strive to maximize a black-box objective function by only leveraging a static dataset of designs and their scores. This problem setting arises in numerous fields including the design of materials, robots, DNAs, proteins, etc. Recent approaches train a deep neural network (DNN) model on the static dataset to act as a proxy function, and then perform gradient ascent on the existing designs to obtain potentially high-scoring designs. This methodology frequently suffers from the out-of-distribution problem where the proxy function often returns adversarial designs. To mitigate this problem, we propose $\textit{\textbf{B}i\textbf{D}irectional learning for offline \textbf{I}nfinite-width model-based optimization}~(\textbf{BDI})$. BDI consists of two mappings: the forward mapping leverages the static dataset to predict the scores of the high-scoring designs, and the backward mapping leverages the high-scoring designs to predict the scores of the static dataset. The backward mapping, neglected in previous work, can distill more information of the static dataset into the high-scoring designs, which effectively mitigates the out-of-distribution problem. Yet, for a finite-width DNN model, the loss function of the backward mapping is intractable and only has an approximate form, which leads to a significant deterioration of the design quality. We thus adopt an infinite-width DNN model and propose to employ the corresponding neural tangent kernel to yield a closed-form loss for more accurate design updates. Experiments on various tasks verify the effectiveness of BDI. The code is available [here](https://github.com/GGchen1997/BDI). | Accept | This paper studies Offline Model-Based Optimization. This paper proposes a gradient-based method for solving Offline MBO problems using infinite-width Deep learning models.
The key novelty of the paper is in the proposed use of a distillation objective to constrain the optimized design-score pairs.
All three reviewers identify the novelty of the problem and the approach.
The paper also presents strong empirical evaluation on standard benchmarks.
The rebuttal discussion yielded constructive changes in the paper, and the authors are expected to account for the discussion and suggestions in the next iteration of the manuscript.
The AC concurs with the reviews and the discussion thereafter. | train | [
"4vaLL23PTEa",
"3eoHuS5QRM-",
"WIOQV0gZiUn",
"j8dIBmdFIiO",
"JO4prL8inPn",
"6eG9P-Z-oH",
"C5UbIvoNAdF",
"muJrF6zHqN",
"Q-O_ye6F23X",
"r4WTCDYQRif",
"0PilWkDcAX",
"ogzn2WT31Ml",
"ZebZ8nM1VI",
"xaGD_SS9OmF",
"qqfMFrJZBkp",
"bFd5Cx2gSS4",
"2KuycUBCl-m",
"_M9owUpMXf",
"bWVQzEOkk_e"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your great questions, which make our paper stronger.\n\nWe will certainly include the main points of our discussion in an improved version.",
" Thanks for responding to my questions at length, each of my major concerns has been resolved. In my current evaluation of the paper, I intend to increase ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"3eoHuS5QRM-",
"6eG9P-Z-oH",
"j8dIBmdFIiO",
"JO4prL8inPn",
"Q-O_ye6F23X",
"ogzn2WT31Ml",
"muJrF6zHqN",
"0PilWkDcAX",
"r4WTCDYQRif",
"bWVQzEOkk_e",
"_M9owUpMXf",
"ZebZ8nM1VI",
"xaGD_SS9OmF",
"qqfMFrJZBkp",
"bFd5Cx2gSS4",
"2KuycUBCl-m",
"nips_2022__j8yVIyp27Q",
"nips_2022__j8yVIyp27Q... |
nips_2022_iuW96ssPQX | A Transformer-Based Object Detector with Coarse-Fine Crossing Representations | Transformer-based object detectors have shown competitive performance recently. Compared with convolutional neural networks limited by the relatively small receptive fields, the advantage of transformer for visual tasks is the capacity to perceive long-range dependencies among all image patches, while the deficiency is that the local fine-grained information is not fully excavated. In this paper, we introduce the Coarse-grained and Fine-grained crossing representations to build an efficient Detection Transformer (CFDT). Specifically, we propose a local-global cross fusion module to establish the connection between local fine-grained features and global coarse-grained features. Besides, we propose a coarse-fine aware neck which enables detection tokens to interact with both coarse-grained and fine-grained features. Furthermore, an efficient feature integration module is presented for fusing multi-scale representations from different stages. Experimental results on the COCO dataset demonstrate the effectiveness of the proposed method. For instance, our CFDT achieves 48.1 AP with 173G FLOPs, which possesses higher accuracy and less computation compared with the state-of-the-art transformer-based detector ViDT. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/CFDT. | Accept | This work proposes a new object detector architecture that is based on a CNN stem, combined with a mostly transformer-based architecture, with the addition of a cross-fusion module that allows for reconciling coarse- and fine-grained features for more precise object detection.
The paper is well-written and novel, and presents a significant gain over the state of the art using a reasonable amount of compute.
Object detection is a central area of interest and this method shows how to leverage the power of transformers to push the envelope in this domain. Therefore, I propose this paper be accepted at NeurIPS 2022. | val | [
"ww3o8sL5X02",
"6XktyXmW3hV",
"He2xMGP_qP",
"PvC4WhKLAs_",
"XkyZuSLzHwS",
"mOmpCDCIphm",
"JKWuGzF8bO",
"YWu3XhsudX",
"dBvZnX3qZTJ",
"-myShTq_alh",
"fkE1G6HwBb",
"f0bEHpM1fB",
"8WHAD28ddE2",
"Nroi5oia6bB",
"4UeUdflDamq",
"hLxWk4NBmnQ",
"8h4wlFcerMG"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your recognition of our manuscript.",
" Thanks for your reply, which solves my question. I keep my original score.\n\n",
" Thank you for the support. We will add the new results and analysis in the paper.",
" I appreciate the authors' response. Now, it's clear why the proposed archit... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"6XktyXmW3hV",
"-myShTq_alh",
"PvC4WhKLAs_",
"JKWuGzF8bO",
"8h4wlFcerMG",
"8h4wlFcerMG",
"8h4wlFcerMG",
"hLxWk4NBmnQ",
"hLxWk4NBmnQ",
"hLxWk4NBmnQ",
"4UeUdflDamq",
"4UeUdflDamq",
"4UeUdflDamq",
"4UeUdflDamq",
"nips_2022_iuW96ssPQX",
"nips_2022_iuW96ssPQX",
"nips_2022_iuW96ssPQX"
] |
nips_2022_0zlLhfG6rxI | Bessel Equivariant Networks for Inversion of Transmission Effects in Multi-Mode Optical Fibres | We develop a new type of model for solving the task of inverting the transmission effects of multi-mode optical fibres through the construction of an $\mathrm{SO}^{+}(2,1)$-equivariant neural network. This model takes advantage of the azimuthal correlations known to exist in fibre speckle patterns and naturally accounts for the difference in spatial arrangement between input and speckle patterns. In addition, we use a second post-processing network to remove circular artifacts, fill gaps, and sharpen the images, which is required due to the nature of optical fibre transmission. This two stage approach allows for the inspection of the predicted images produced by the more robust physically motivated equivariant model, which could be useful in a safety-critical application, or by the output of both models, which produces high quality images. Further, this model can scale to previously unachievable resolutions of imaging with multi-mode optical fibres and is demonstrated on $256 \times 256$ pixel images. This is a result of improving the trainable parameter requirement from $\mathcal{O}(N^4)$ to $\mathcal{O}(m)$, where $N$ is pixel size and $m$ is number of fibre modes. Finally, this model generalises to new images, outside of the set of training data classes, better than previous models. | Accept | This paper proposes a new learning-based technique for imaging through multimodal fibers (MMF). A key idea of this work is to exploit the property that the transmission matrices associated with MMFs are approximately diagonalizable by a Bessel basis. This idea then allows one to significantly reduce the number of parameters, which in turn reduces the memory usage and computational burden of the inverse problem of recovering a natural image from speckle patterns. 
The authors demonstrated the effectiveness of their proposed algorithm on low-resolution experimentally captured data as well as on higher-resolution simulated dataset.
| train | [
"DYPHYwRMLX0",
"0-QgR8yA1Wt",
"qTxaO861ByR",
"euccr3gIuJ8",
"msIn9z1u8BK",
"OC0aM2YVtXe",
"Ki9s_sWyLW",
"ADTEPW7XCam",
"4Jem9hOzTJu",
"P1KOGy_Iy8J",
"HzDeNvC4t_h"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response! We are happy those sections addressed your most serious concerns.\n\nAlso, thank you for raising the point on cells. We were hoping to give an application where requiring a smaller training dataset would be a benefit, if a future system were to be re-calibrated in situ (with a specifi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"qTxaO861ByR",
"Ki9s_sWyLW",
"OC0aM2YVtXe",
"msIn9z1u8BK",
"OC0aM2YVtXe",
"P1KOGy_Iy8J",
"HzDeNvC4t_h",
"4Jem9hOzTJu",
"nips_2022_0zlLhfG6rxI",
"nips_2022_0zlLhfG6rxI",
"nips_2022_0zlLhfG6rxI"
] |
nips_2022_WE92fqi-N_g | VICE: Variational Interpretable Concept Embeddings | A central goal in the cognitive sciences is the development of numerical models for mental representations of object concepts. This paper introduces Variational Interpretable Concept Embeddings (VICE), an approximate Bayesian method for embedding object concepts in a vector space using data collected from humans in a triplet odd-one-out task. VICE uses variational inference to obtain sparse, non-negative representations of object concepts with uncertainty estimates for the embedding values. These estimates are used to automatically select the dimensions that best explain the data. We derive a PAC learning bound for VICE that can be used to estimate generalization performance or determine a sufficient sample size for experimental design. VICE rivals or outperforms its predecessor, SPoSE, at predicting human behavior in the triplet odd-one-out task. Furthermore, VICE's object representations are more reproducible and consistent across random initializations, highlighting the unique advantage of using VICE for deriving interpretable embeddings from human behavior. | Accept | This paper proposes a method to learn meaningful representations of data by incorporating a pick-odd-one-out task on triplets of images to learn embeddings through variational inference using a spike-and-slab Gaussian prior.
The reviewers agreed that the paper was well written, had a clear narrative, that the results appeared convincing and that the use of the spike-and-slab prior to determine appropriate dimensionality was novel and interesting.
Where pertinent questions were raised by reviewers on the exposition of the estimation of the upper bound, on the validity of the triplet task and its phrasing, and on the utility of employing VI over a prior model (SPoSE), the authors provided responses that appear to address these issues reasonably.
The primary issue with the manuscript appears to be mainly with framing. A graphical model, with annotations for the triplet observations, or something similar---a figure to explain the model basically---would have helped make things a bit easier to situate for the reader.
On balance, though, it appears the paper has more merits than issues. I would strongly urge the authors to actually make the edits discussing other baselines (Reviewer VDhE), and potentially to have the discussion on determining the number of latent dimensions in the main paper rather than the supplement, as it appears to be an important distinguishing feature over prior work.
| train | [
"06FlhTKaVaU",
"9F0umk-7Pom",
"pHLspvFF4Gu",
"Aa5hWEgH5N",
"9M_TEVtg21",
"6-TR1nReXrM",
"G_kN0Pqkzeb",
"3XHnpwMkIgv",
"VjRUZq88Vtp",
"AcrSijBMF5E"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **It's unclear for the neural network settings and how to get the embedding vectors.**\n\nVICE has access to the human responses rather than to the image representations of the objects. From these responses, VICE learns an embedding representation for each object. Although there is a softmax function involved to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
3,
3
] | [
"AcrSijBMF5E",
"VjRUZq88Vtp",
"Aa5hWEgH5N",
"3XHnpwMkIgv",
"G_kN0Pqkzeb",
"nips_2022_WE92fqi-N_g",
"nips_2022_WE92fqi-N_g",
"nips_2022_WE92fqi-N_g",
"nips_2022_WE92fqi-N_g",
"nips_2022_WE92fqi-N_g"
] |
nips_2022_vRwCvlvd8eA | Chefs' Random Tables: Non-Trigonometric Random Features | We introduce chefs' random tables (CRTs), a new class of non-trigonometric random features (RFs) to approximate Gaussian and softmax kernels. CRTs are an alternative to standard random kitchen sink (RKS) methods, which inherently rely on the trigonometric maps. We present variants of CRTs where RFs are positive, a key requirement for applications in recent low-rank Transformers. Further variance reduction is possible by leveraging statistics which are simple to compute. One instantiation of CRTs, the optimal positive random features (OPRFs), is to our knowledge the first RF method for unbiased softmax kernel estimation with positive and bounded RFs, resulting in exponentially small tails and much lower variance than its counterparts. As we show, orthogonal random features applied in OPRFs provide additional variance reduction for any dimensionality $d$ (not only asymptotically for sufficiently large $d$, as for RKS). We test CRTs on many tasks ranging from non-parametric classification to training Transformers for text, speech and image data, obtaining new state-of-the-art results for low-rank text Transformers, while providing linear space and time complexity. | Accept | The primary motivation of the paper is the scalable training of transformers, particularly their efficient softmax-attention approximation (3). As a classical approach relying on trigonometric random Fourier features (RFF) does not guarantee positivity (which makes the training of transformers unstable), the authors consider non-trigonometric RFFs. Particularly, they propose a GERF (generalized exponential random features, specified in (4)) for the approximation of the Gaussian kernel (and the softmax kernel) which beyond the previously designed positivity of RFFs [15], can also ensure the boundedness of the random features. 
They analyze when these RFs give rise to unbiased kernel approximation (Theorem 3.1), establish their variance (Theorem 3.2 for fixed (x,y) inputs), and restricting its free parameters (A to be real, s = 1) they specialize the design to the minimum variance estimator referred to as OPRF (optimal positive random feature). They show tail bounds (Theorem 4.2) and attention approximation guarantees for OPRFs. They also present a DIRF (discretely-induced random features) construction with focus on the Poisson and Geometric designs, which with a shift (Section 3.2.3) can be turned to be positive. The resulting CRT (chefs' random tables) kernel approximation family is illustrated in classification (on UCI benchmarks) and in training transformers (in NLP, audio and image context).
Kernel methods are without doubt at the forefront of machine learning; developing new kernel approximation schemes is of significant interest to the NeurIPS community. As it was assessed by the reviewers, the authors present novel and valuable theoretical insights in this respect, with convincing numerical illustrations. As the reviewers pointed out the manuscript could be improved somewhat by (i) more detailed references to results from prior work, and (ii) making it more accessible to wider audience. Please incorporate these comments in the final version of the manuscript. | train | [
"PjmyUMJb8b9",
"Gn9xKjDqCnt",
"Xhx06X2jKkc",
"i6yoGhWMi7t",
"QiYI1jcrRJN",
"mFXVGs7Dkh",
"4YXrXARiVX0",
"8Vx3n8YDEeD",
"tyBA_IGN1hv",
"QHPpS1KMXo",
"FxNEp3oiRdN",
"zMG-0xCex9S"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you again for your time and helpful comments. We hope we have addressed your concerns and you might consider raising your score. If you have any further concerns, we'd be glad to address them.",
" We would like to sincerely thank the Reviewer for all the comments.\n\n*It seems that the discretely-induc... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
3
] | [
"nips_2022_vRwCvlvd8eA",
"zMG-0xCex9S",
"i6yoGhWMi7t",
"FxNEp3oiRdN",
"QHPpS1KMXo",
"4YXrXARiVX0",
"tyBA_IGN1hv",
"nips_2022_vRwCvlvd8eA",
"nips_2022_vRwCvlvd8eA",
"nips_2022_vRwCvlvd8eA",
"nips_2022_vRwCvlvd8eA",
"nips_2022_vRwCvlvd8eA"
] |
nips_2022_WSxarC8t-T | SketchBoost: Fast Gradient Boosted Decision Tree for Multioutput Problems | Gradient Boosted Decision Tree (GBDT) is a widely-used machine learning algorithm that has been shown to achieve state-of-the-art results on many standard data science problems. We are interested in its application to multioutput problems when the output is highly multidimensional. Although there are highly effective GBDT implementations, their scalability to such problems is still unsatisfactory. In this paper, we propose novel methods aiming to accelerate the training process of GBDT in the multioutput scenario. The idea behind these methods lies in the approximate computation of a scoring function used to find the best split of decision trees. These methods are implemented in SketchBoost, which itself is integrated into our easily customizable Python-based GPU implementation of GBDT called Py-Boost. Our numerical study demonstrates that SketchBoost speeds up the training process of GBDT by up to over 40 times while achieving comparable or even better performance.
| Accept | After rebuttal, the reviewers unanimously agree that the submission should be accepted for publication at NeurIPS. Reviewers were excited about the achieved speed-up. | train | [
"n7ur2OZWO0",
"GeH7OAmVbvd",
"-9kidQ81EDY",
"EvjXWu_19-y",
"wR1OAzYnTua",
"WLlA9x4-XIs",
"tDuwNcvcELP",
"nC2IQ6xMspU",
"Vs0WlmZybuj0",
"bQbR5vN_HLK",
"gCLree-cVM",
"6CphUQHtrmQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your feedback and suggestions for improving our work, especially the experiment section! \n",
" Thank you for your feedback and for the idea to compare SketchBoost with TabNet!",
" Dear Reviewer psGW,\n\nThe reviewer-author discussion period will end on August 9. Have you had a chance to look at... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"EvjXWu_19-y",
"wR1OAzYnTua",
"tDuwNcvcELP",
"Vs0WlmZybuj0",
"nC2IQ6xMspU",
"nips_2022_WSxarC8t-T",
"bQbR5vN_HLK",
"gCLree-cVM",
"6CphUQHtrmQ",
"nips_2022_WSxarC8t-T",
"nips_2022_WSxarC8t-T",
"nips_2022_WSxarC8t-T"
] |
nips_2022_wOI0AUAq9BR | SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks | In the past few years, graph neural networks (GNNs) have become the de facto model of choice for graph classification. While, from the theoretical viewpoint, most GNNs can operate on graphs of any size, it is empirically observed that their classification performance degrades when they are applied on graphs with sizes that differ from those in the training data. Previous works have tried to tackle this issue in graph classification by providing the model with inductive biases derived from assumptions on the generative process of the graphs, or by requiring access to graphs from the test domain. The first strategy is tied to the quality of the assumptions made for the generative process, and requires the use of specific models designed after the explicit definition of the generative process of the data, leaving open the question of how to improve the performance of generic GNN models in general settings. On the other hand, the second strategy can be applied to any GNN, but requires access to information that is not always easy to obtain. In this work we consider the scenario in which we only have access to the training data, and we propose a regularization strategy that can be applied to any GNN to improve its generalization capabilities from smaller to larger graphs without requiring access to the test data. Our regularization is based on the idea of simulating a shift in the size of the training graphs using coarsening techniques, and enforcing the model to be robust to such a shift. Experimental results on standard datasets show that popular GNN models, trained on the 50% smallest graphs in the dataset and tested on the 10% largest graphs, obtain performance improvements of up to 30% when trained with our regularization strategy. 
| Accept | This work proposes a regularization approach (based on graph coarsening and alignment) to allowing graph neural networks to generalize across different graph sizes. The approach proposed here is simple, yet shown to be effective. While the reviewers had some concerns regarding this paper and the results in it, these were alleviated to a sufficient extent in rebuttal so that currently one reviewer outright supports acceptance, and the others lean towards acceptance. I agree with the opinion supporting acceptance of the paper, especially given the remark about the value (and rarity) of simple, clear, and effective solutions to important problems, which I agree is the case here. Therefore, I recommend accepting the paper, and I would like to encourage the authors to take into account the reviewers' comments (and the additional points included in the responses to them) when preparing the camera ready version. | train | [
"z5W_RxyKJfa",
"RM8LhCQTT1u",
"DmhjjI9YsOK",
"DiKcuaqQYA4",
"386X_MbhS9d",
"BwRiFZfXoTN",
"AGnyxYgVy6c",
"LlWK3k-Yon5",
"uGeSC5yxh3W",
"9Bxcs1YmGip",
"T6LZ8EJaoxP",
"bHTn3dS_qV-"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank the reviewer for taking the time to answer our rebuttal. We will try to address the standing concerns below.\n\n- Q1. The unattributed datasets in [6] were designed to test the theoretical assumptions behind the model proposed in [6], and in fact there is a big discrepancy (in the ranking of th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"RM8LhCQTT1u",
"LlWK3k-Yon5",
"BwRiFZfXoTN",
"LlWK3k-Yon5",
"nips_2022_wOI0AUAq9BR",
"AGnyxYgVy6c",
"bHTn3dS_qV-",
"T6LZ8EJaoxP",
"9Bxcs1YmGip",
"nips_2022_wOI0AUAq9BR",
"nips_2022_wOI0AUAq9BR",
"nips_2022_wOI0AUAq9BR"
] |
nips_2022_GkDbQb6qu_r | CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers | Development of transformer-based text-to-image models is impeded by their slow generation and complexity for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation.
We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, a cross-modal general language model (CogLM), and fine-tune it for fast super-resolution.
The new text-to-image system, CogView2, shows very competitive generation compared to concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing on images. | Accept | This paper describes a less auto-regressive approach for text-to-image generation. Reviewers were somewhat split, with one reject and one strong accept. Overall, the results the authors get are perhaps less compelling given the progress the field has made over the past few months since submission time, but I think the focus on less autoregression for this problem is important and potentially useful for other approaches to this problem, and also the authors had far fewer computational resources and smaller datasets than some of the other recent work, which makes me feel there is a chance the less-auto-regressive generation they employ will be useful for other text-to-image generation projects. I'm slightly less certain about this one because the results don't seem as compelling given works published after the submission deadline, and also the lack of experiments to investigate that this approach can be used with other text-2-image generation works. Still, it seems like an important direction that could use more papers, so I think this should be accepted. The bilingual generation is a bonus. | train | [
"ZbNFfm1nN5",
"V4Scasj2jIj",
"Q0zUciAAAh",
"Iv_AWJPjeo8",
"au7ym-X3icx",
"A5uUHVKPAcJ",
"O6pz7RUfDag",
"IS-KPcyimMM",
"SR9o2rOH40",
"ag5GOZXc_5P",
"A_tdGDBj_0u"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification and additional information!",
" I appreciate authors for providing the extra visualizations. Most of my questions are resolved, but still I am not very convinced but the masking strategy. I raise the score to weak accept based on overall contributions. ",
" \nThank you very much f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
4
] | [
"Iv_AWJPjeo8",
"A_tdGDBj_0u",
"A_tdGDBj_0u",
"ag5GOZXc_5P",
"SR9o2rOH40",
"IS-KPcyimMM",
"IS-KPcyimMM",
"nips_2022_GkDbQb6qu_r",
"nips_2022_GkDbQb6qu_r",
"nips_2022_GkDbQb6qu_r",
"nips_2022_GkDbQb6qu_r"
] |
nips_2022_nxw9_ny7_H | Deep invariant networks with differentiable augmentation layers | Designing learning systems which are invariant to certain data transformations is critical in machine learning. Practitioners can typically enforce a desired invariance on the trained model through the choice of a network architecture, e.g. using convolutions for translations, or using data augmentation. Yet, enforcing true invariance in the network can be difficult, and data invariances are not always known a piori. State-of-the-art methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems, which are complex to solve and often computationally demanding. In this work we investigate new ways of learning invariances only from the training data. Using learnable augmentation layers built directly in the network, we demonstrate that our method is very versatile. It can incorporate any type of differentiable augmentation and be applied to a broad class of learning problems beyond computer vision. We provide empirical evidence showing that our approach is easier and faster to train than modern automatic data augmentation techniques based on bilevel optimization, while achieving comparable results. Experiments show that while the invariances transferred to a model through automatic data augmentation are limited by the model expressivity, the invariance yielded by our approach is insensitive to it by design. | Accept | The decision for this paper was a hard one. I pondered the scores with respect to the engagement of the different reviewers. I believe the initial scores were due to a misunderstanding of the limitations of the baseline model Augerino, and how the proposed method solves some of the failures and limitations of Augerino (e.g. being able to model only affine transformation). I also find the authors expanded their experiments in a convincing manner during the rebuttal period. 
We encourage the authors to *improve the clarity of their contributions* in their final version, and to include all additional experiments that were run during the rebuttal period.
| val | [
"f7ThyXEWYm",
"6Ptp6AnIuM1",
"1zXiE_CvF9Y",
"7-o57NZ5Qo_",
"MLqDP2DFj4",
"YR_41m6nH-z",
"7dFMEnqkUQF",
"9ntBV2MYmi",
"zMDfPT6MCG",
"ed7pyM0zZ1i",
"NqFRBjvhwTM",
"madgNlDgfv",
"fUCM7hc1zRAy",
"h_iOuUbGCaW",
"TVL0YmPskKA",
"plJubC1tAQ5",
"FTL99AJUi5v",
"B85nwj8yqQW"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their new comments and are glad that our new experiments and revised manuscript helped to convince of the relevance of our contribution. Please note however that your **rating has not been improved yet**.\n\nConcerning the caption of Figure 3, we now better understand what was the proble... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"6Ptp6AnIuM1",
"YR_41m6nH-z",
"7-o57NZ5Qo_",
"9ntBV2MYmi",
"nips_2022_nxw9_ny7_H",
"7dFMEnqkUQF",
"B85nwj8yqQW",
"zMDfPT6MCG",
"FTL99AJUi5v",
"NqFRBjvhwTM",
"plJubC1tAQ5",
"fUCM7hc1zRAy",
"h_iOuUbGCaW",
"TVL0YmPskKA",
"nips_2022_nxw9_ny7_H",
"nips_2022_nxw9_ny7_H",
"nips_2022_nxw9_ny... |
nips_2022_7HTEHRMlxYH | FNeVR: Neural Volume Rendering for Face Animation | Face animation, one of the hottest topics in computer vision, has achieved a promising performance with the help of generative models. However, it remains a critical challenge to generate identity preserving and photo-realistic images due to the sophisticated motion deformation and complex facial detail modeling. To address these problems, we propose a Face Neural Volume Rendering (FNeVR) network to fully explore the potential of 2D motion warping and 3D volume rendering in a unified framework. In FNeVR, we design a 3D Face Volume Rendering (FVR) module to enhance the facial details for image rendering. Specifically, we first extract 3D information with a well designed architecture, and then introduce an orthogonal adaptive ray-sampling module for efficient rendering. We also design a lightweight pose editor, enabling FNeVR to edit the facial pose in a simple yet effective way. Extensive experiments show that our FNeVR obtains the best overall quality and performance on widely used talking-head benchmarks. | Accept | After rebuttal all reviewers recommend acceptance. The authors are encouraged to follow the reviewer suggestions on improving the final paper. | train | [
"_Axbg_KoyY",
"A7eID1fsZrm",
"Ww2bkBrP-2G",
"WnBGLJ01vED",
"dYgWHvNcc2f",
"A8TCWiFLueP",
"U8DpX55lAJt",
"VoqUk3p7olg",
"JxgB5ywhRGi",
"7YWNkfM5yx",
"1LVLhWb5sy2"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We will definitely do it. Thank you again for your supportive comment.",
" Thank the authors for providing such a comprehensive reponse to all my questions. The provided video is much better than the one before. Please try involving the provided details and changes to the revision and supplementary accordingly.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"A7eID1fsZrm",
"dYgWHvNcc2f",
"7YWNkfM5yx",
"7YWNkfM5yx",
"7YWNkfM5yx",
"nips_2022_7HTEHRMlxYH",
"JxgB5ywhRGi",
"1LVLhWb5sy2",
"nips_2022_7HTEHRMlxYH",
"nips_2022_7HTEHRMlxYH",
"nips_2022_7HTEHRMlxYH"
] |
nips_2022_UPnJuDKqOfX | HF-NeuS: Improved Surface Reconstruction Using High-Frequency Details | Neural rendering can be used to reconstruct implicit representations of shapes without 3D supervision. However, current neural surface reconstruction methods have difficulty learning high-frequency geometry details, so the reconstructed shapes are often over-smoothed. We develop HF-NeuS, a novel method to improve the quality of surface reconstruction in neural rendering. We follow recent work to model surfaces as signed distance functions (SDFs). First, we offer a derivation to analyze the relationship between the SDF, the volume density, the transparency function, and the weighting function used in the volume rendering equation and propose to model transparency as a transformed SDF. Second, we observe that attempting to jointly encode high-frequency and low-frequency components in a single SDF leads to unstable optimization. We propose to decompose the SDF into base and displacement functions with a coarse-to-fine strategy to increase the high-frequency details gradually. Finally, we design an adaptive optimization strategy that makes the training process focus on improving those regions near the surface where the SDFs have artifacts. Our qualitative and quantitative results show that our method can reconstruct fine-grained surface details and obtain better surface reconstruction quality than the current state of the art. Code available at https://github.com/yiqun-wang/HFS. | Accept | This paper presents a new way to build centered weights for volume rendering, utilize displacement maps, adaptive scale, as well as other techniques to provide better high frequency details in neural SDF representations.
The reviewers also acknowledged the rebuttal and the revision, and noted that the authors addressed their main concerns.
"7WY8w2NiW9H",
"Lk3rOcxdX3",
"3IAFHvvsZ8u",
"4G1_xUDrGYR",
"3bkQ2VsVdw",
"HX-AZ2xaYZ2",
"DvcW6UsNYp4",
"GiMjjGfQ6BD",
"ZqNqB67GLa",
"ZHbzhrIb6EgE",
"fGlxP_9PUj1Q",
"KTpqZ1wPyPc",
"63nZlEv0Lou",
"E_g0DJY0ys",
"KWpgoXFWOFn",
"T38rDcJnxG9",
"RyCn8cKqI6c",
"EyKHQu24ZXs",
"m1R7Bb9W26c... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thank you for the response. My questions have been sufficiently addressed and some of my feedback has been integrated into the revised paper.\n\nI have also read the other reviews and the authors' response to those papers, and have no further questions.\n\nI stand with my original rating and recommend accepting t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4,
5
] | [
"63nZlEv0Lou",
"EyKHQu24ZXs",
"HX-AZ2xaYZ2",
"3bkQ2VsVdw",
"RyCn8cKqI6c",
"KTpqZ1wPyPc",
"m1R7Bb9W26c",
"m1R7Bb9W26c",
"fGlxP_9PUj1Q",
"RyCn8cKqI6c",
"EyKHQu24ZXs",
"T38rDcJnxG9",
"KWpgoXFWOFn",
"nips_2022_UPnJuDKqOfX",
"nips_2022_UPnJuDKqOfX",
"nips_2022_UPnJuDKqOfX",
"nips_2022_UPn... |
nips_2022_jzd2bE5MxW | TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels | State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions. For neural networks, even when centralized SGD easily finds a solution that is simultaneously performant for all clients, current federated optimization methods fail to converge to a comparable solution. We show that this performance disparity can largely be attributed to optimization challenges presented by nonconvexity. Specifically, we find that the early layers of the network do learn useful features, but the final layers fail to make use of them. That is, federated optimization applied to this non-convex problem distorts the learning of the final layers. Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep this issue: first, learn features using off-the-shelf methods (e.g., FedAvg); then, optimize a convexified problem obtained from the network's empirical neural tangent kernel approximation. Our technique yields accuracy improvements of up to $+36\%$ on FMNIST and $+37\%$ on CIFAR10 when clients have dissimilar data. | Accept | This paper introduces a novel two-stage scheme that combines the feature learning capacity of neural networks with the efficient optimization of linear models. It makes interesting empirical observations that are relevant to NeurIPS and may inspire future work. In particular, the observation that FedAvg learns useful features even in data heterogeneous settings is novel and helps to explain the empirical success of FedAvg plus fine-tuning. The experimental evaluation is mostly thorough, with multiple datasets tested and helpful ablations.
| train | [
"a3eXfmUSBa",
"FuTOBxt25KI",
"P7wEXchyNyl",
"krFURGWt-7T",
"_aSB9T3vNIN",
"ec-BBoYgU6o",
"W4jZcYLgJ91",
"w4DMfezoljM",
"xl_VJUaO0HF",
"_DgChOC4msz",
"pP1YH-nxZ-",
"pIJkqnVIevA",
"13fPyaoYajH",
"_vC1nFDhB8d",
"CoYcAC3Ml3y",
"YRDX79Rnge"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Authors response and new experiments have resolved my concerns. Thus I will raise my score. ",
" I would like to thank the authors for their extensive response that cleared a lot of my concerns. I would also encourage the authors to provide the discussion around the sources of non-iidness in the main text as we... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"P7wEXchyNyl",
"pP1YH-nxZ-",
"krFURGWt-7T",
"pIJkqnVIevA",
"nips_2022_jzd2bE5MxW",
"xl_VJUaO0HF",
"nips_2022_jzd2bE5MxW",
"YRDX79Rnge",
"YRDX79Rnge",
"CoYcAC3Ml3y",
"CoYcAC3Ml3y",
"_vC1nFDhB8d",
"_vC1nFDhB8d",
"nips_2022_jzd2bE5MxW",
"nips_2022_jzd2bE5MxW",
"nips_2022_jzd2bE5MxW"
] |
nips_2022_OxfI-3i5M8g | Scalable Neural Video Representations with Learnable Positional Features | Succinct representation of complex signals using coordinate-based neural representations (CNRs) has seen great progress, and several recent efforts focus on extending them for handling videos. Here, the main challenge is how to (a) alleviate a compute-inefficiency in training CNRs to (b) achieve high-quality video encoding while (c) maintaining the parameter-efficiency. To meet all requirements (a), (b), and (c) simultaneously, we propose neural video representations with learnable positional features (NVP), a novel CNR by introducing "learnable positional features" that effectively amortize a video as latent codes. Specifically, we first present a CNR architecture based on designing 2D latent keyframes to learn the common video contents across each spatio-temporal axis, which dramatically improves all of those three requirements. Then, we propose to utilize existing powerful image and video codecs as a compute-/memory-efficient compression procedure of latent codes. We demonstrate the superiority of NVP on the popular UVG benchmark; compared with prior arts, NVP not only trains 2 times faster (less than 5 minutes) but also exceeds their encoding quality as 34.07$\rightarrow$34.57 (measured with the PSNR metric), even using $>$8 times fewer parameters. We also show intriguing properties of NVP, e.g., video inpainting, video frame interpolation, etc.
 | Accept | The paper proposes a coordinate-based architecture for representing a video in a parameter-efficient and computationally efficient manner. Such an architecture can be used for video compression, inpainting and frame interpolation.
The initial concerns were addressed during the rebuttal period and all the reviewers had a positive opinion of the paper. I therefore recommend acceptance. | train | [
"di0GkRzvpw5",
"WFcfavFTmUH",
"prMEV5l3wXY",
"yi9yye93t_4",
"L4RGdu7PK1v",
"5zUxMvUcxwX",
"q6cLtZVtENG",
"eMBadi3LXgA",
"jWgxQGbbMK",
"AssIQQ89b_G",
"iVWeYWNYNwr",
"xRB1tU4RzXI",
"WKcX34hI4SB",
"Xc81zRU3pKT",
"GIHYK7Vhstz",
"8jae65LSbxF",
"jAhoqJ3fvR3",
"DAFO2_y9QS",
"sNf6xpmXSPy... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response! We are happy to find that our rebuttal successfully addressed all of your concerns. Although our primary focus is not video compression, we also think that providing the comparison results with deep video codecs under the fair evaluation condition would further strengthen our paper an... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"WFcfavFTmUH",
"jAhoqJ3fvR3",
"AssIQQ89b_G",
"L4RGdu7PK1v",
"jWgxQGbbMK",
"xRB1tU4RzXI",
"Xc81zRU3pKT",
"jAhoqJ3fvR3",
"nips_2022_OxfI-3i5M8g",
"nips_2022_OxfI-3i5M8g",
"DAFO2_y9QS",
"DAFO2_y9QS",
"sNf6xpmXSPy",
"sNf6xpmXSPy",
"dqgAN-6_b_",
"dqgAN-6_b_",
"dqgAN-6_b_",
"nips_2022_Ox... |
nips_2022_UPZCt9perOn | Metric-Projected Accelerated Riemannian Optimization: Handling Constraints to Bound Geometric Penalties | We propose an accelerated first-order method for the optimization of smooth and (strongly or not) geodesically-convex functions over a compact and geodesically-convex set in Hadamard manifolds, that we access to via a metric-projection oracle. It enjoys the same rates of convergence as Nesterov's accelerated gradient descent, up to a multiplicative geometric penalty and log factors. Even without in-manifold constraints, all prior fully accelerated works require their iterates to remain in some specified compact set (which is needed in worse-case analyses due to a lower bound), while only two previous methods are able to enforce this condition and these, in contrast, have limited applicability like to local optimization or to spaces of constant curvature. Our results solve an open question in (Kim and Yang, 2022) and an another question related to one posed in (Zhang and Sra, 2016). In our solution, we show we can use projected Riemannian gradient descent to implement an inexact proximal point operator that we use as a subroutine, which is of independent interest.
| Reject | The paper deals with accelerated methods on Riemannian manifolds. A particular challenge that the paper tries to address, which the AC believes is important, is related to the bounding of the iterates. The paper starts with an explicit bounding constraint on the manifold (and relaxes to the ball constraint for certain manifolds) and shows that the proposed algorithm can respect that while achieving acceleration. The reviewers including this AC see the merits of the paper. However, the paper in its current form is far from complete. A particular concern is on the empirical performance of the algorithm, resolving which should strengthen the paper. I would encourage the authors to build on the discussions and polish the paper accordingly.
Even though the paper has positive scores, in its current form it is a borderline paper with significant scope for improvement. For these reasons, the AC cannot accept the paper.
"OpWEagbPtn1",
"U8CCrJb8pH",
"DAc2IU8pUJ6",
"6lmR8RIwmq",
"6fEkijqVXLz",
"SLmydIBk_wa",
"WfW0aTSeJQM",
"LsL9guTP5E2",
"H4NNasFDUmk",
"Ohfu_ZO6OAE",
"PWJn03a-2nS",
"hphTALmqLSk",
"VXtpvx1vKG9U",
"wqlFQYdwCmV",
"APk1U6e1Nwb",
"DNcVRKHtra7"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" To all reviewers, please note we updated the supplementary material to include a new section in the Appendix (section C) including the new subroutines and their proofs of linear convergence",
" Could you clarify why after the response your impression of the paper is the same? The main concerns, namely the imple... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"LsL9guTP5E2",
"WfW0aTSeJQM",
"6lmR8RIwmq",
"SLmydIBk_wa",
"DNcVRKHtra7",
"wqlFQYdwCmV",
"PWJn03a-2nS",
"H4NNasFDUmk",
"nips_2022_UPZCt9perOn",
"DNcVRKHtra7",
"APk1U6e1Nwb",
"wqlFQYdwCmV",
"wqlFQYdwCmV",
"nips_2022_UPZCt9perOn",
"nips_2022_UPZCt9perOn",
"nips_2022_UPZCt9perOn"
] |
nips_2022_bi1BTcXa8Q | Cross-modal Learning for Image-Guided Point Cloud Shape Completion | In this paper we explore the recent topic of point cloud completion, guided by an auxiliary image. We show how it is possible to effectively combine the information from the two modalities in a localized latent space, thus avoiding the need for complex point cloud reconstruction methods from single views used by the state-of-the-art. We also investigate a novel self-supervised setting where the auxiliary image provides a supervisory signal to the training process by using a differentiable renderer on the completed point cloud to measure fidelity in the image space. Experiments show significant improvements over state-of-the-art supervised methods for both unimodal and multimodal completion. We also show the effectiveness of the self-supervised approach which outperforms a number of supervised methods and is competitive with the latest supervised models only exploiting point cloud information. | Accept | The paper proposes a point cloud completion method that can take an auxiliary image as guidance. All the reviewers rate the paper slightly above the bar. They like the reported strong performance over the prior baseline and also the capability of using the auxiliary input. Although several reviewers raise concerns about missing experiments on real datasets such as ScanNet or KITTI, they still think the paper has sufficient merit. The AC finds no strong reason to disagree with the reviewers. | test | [
"tGYNy0IK5Pe",
"zJuaIU7F0CQ",
"cC-oO9sLMu",
"-vkIb8PYi7m",
"Z4ZDeBaRRN",
"ATc87KYJQz4",
"NkP8w3Jt_Ij",
"InZkK0-wInM",
"YyfHHX8C2sy",
"rIZHcz7ztY3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors addressed most of my concerns. I will change the final score to borderline accept.",
" The rebuttal addressed my concerns. Therefore, I change my ratings to borderline accept.",
" We thank the reviewer for the constructive feedback and for all the suggestions to improve the paper. \n\n> *The main... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
5,
4
] | [
"ATc87KYJQz4",
"-vkIb8PYi7m",
"rIZHcz7ztY3",
"YyfHHX8C2sy",
"InZkK0-wInM",
"NkP8w3Jt_Ij",
"nips_2022_bi1BTcXa8Q",
"nips_2022_bi1BTcXa8Q",
"nips_2022_bi1BTcXa8Q",
"nips_2022_bi1BTcXa8Q"
] |
nips_2022_dp0zWsdOV1h | Retrieve, Reason, and Refine: Generating Accurate and Faithful Patient Instructions | The "Patient Instruction" (PI), which contains critical instructional information provided both to carers and to the patient at the time of discharge, is essential for the patient to manage their condition outside hospital. An accurate and easy-to-follow PI can improve the self-management of patients which can in turn reduce hospital readmission rates. However, writing an appropriate PI can be extremely time consuming for physicians, and is subject to being incomplete or error-prone for (potentially overworked) physicians. Therefore, we propose a new task that can provide an objective means of avoiding incompleteness, while reducing clinical workload: the automatic generation of the PI, which is imagined as being a document that the clinician can review, modify, and approve as necessary (rather than taking the human "out of the loop"). We build a benchmark clinical dataset and propose the Re$^3$Writer, which imitates the working patterns of physicians to first retrieve related working experience from historical PIs written by physicians, then reason related medical knowledge. Finally, it refines the retrieved working experience and reasoned medical knowledge to extract useful information, which is used to generate the PI for previously-unseen patient according to their health records during hospitalization. Our experiments show that, using our method, the performance of 6 different models can be substantially boosted across all metrics, with up to 20%, 11%, and 19% relative improvements in BLEU-4, ROUGE-L, and METEOR, respectively. Meanwhile, we show results from human evaluations to measure the effectiveness in terms of its usefulness for clinical practice. | Accept | The authors propose and evaluate a method to automatically generate "patient instruction" drafts. There was a consensus amongst reviewers that this is an interesting application.
While the technical innovation here may be modest, the empirical results firmly establish the benefits of the proposed "Re3Writer" approach. The ablations provided (both in the original submission and during author response) further strengthen the contribution.
Furthermore, the task definition and accompanying dataset (derived from MIMIC) constitute contributions which may lead to follow-up work. That said, in revised versions of the paper the authors should include additional details about the data and be explicit that they will release this (as mentioned by reviewer oy13). | train | [
"A_w7N1M0F4j",
"XHt8lGjFzBh",
"YhwUOr-oXT",
"OAnf_CHn_A",
"wQJFyglIT9m",
"ieD96g-LOlF",
"k40K1LyJS8",
"so62kMf_v2f",
"0DTHNxlQzlr",
"VEChKQ5masm",
"h9brTxqaQDY",
"cMxZ9vWjEqo"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for acknowledging the response. We are genuinely happy that our response properly addresses fellow reviewers' concerns. We thank the reviewer again for the constructive feedback which have helped us improve our paper!\n",
" We thank the reviewer for acknowledging the response. We are genui... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"OAnf_CHn_A",
"wQJFyglIT9m",
"VEChKQ5masm",
"ieD96g-LOlF",
"k40K1LyJS8",
"cMxZ9vWjEqo",
"h9brTxqaQDY",
"0DTHNxlQzlr",
"VEChKQ5masm",
"nips_2022_dp0zWsdOV1h",
"nips_2022_dp0zWsdOV1h",
"nips_2022_dp0zWsdOV1h"
] |
nips_2022_Z9ldMhplBrT | Rethinking the compositionality of point clouds through regularization in the hyperbolic space | Point clouds of 3D objects exhibit an inherent compositional nature where simple parts can be assembled into progressively more complex shapes to form whole objects. Explicitly capturing such part-whole hierarchy is a long-sought objective in order to build effective models, but its tree-like nature has made the task elusive. In this paper, we propose to embed the features of a point cloud classifier into the hyperbolic space and explicitly regularize the space to account for the part-whole hierarchy. The hyperbolic space is the only space that can successfully embed the tree-like nature of the hierarchy. This leads to substantial improvements in the performance of state-of-art supervised models for point cloud classification. | Accept | The paper presents a regularization for point cloud representation learning aiming to promote a part-whole hierarchy through a hyperbolic space. Most of the reviewers agree the idea of using the hyperbolic space is new and interesting. The experiment results seem to be sufficient. There was some confusion about how parts are defined and about compositionality in the paper. But the AC feels the paper has sufficient merit to be published. It is required that the authors incorporate the reviewer feedback in the revised manuscript. | train | [
"nC_4hYkyS0o",
"UezFn4AyY5f",
"6OFaqrpmbPK",
"Ku5kQ9N2dUt",
"EeMGid93iX",
"x69Vha6lIGO",
"0vULY7Etcx7",
"k6-EMUCcz0n",
"DLoXOa_WA1M",
"yxzgwQTuUeA",
"wQUhnW2inRY",
"7mG9TLBJjW",
"3GbteWpoZr73",
"fRm6ELeIzMj",
"fY9DiJPFCI",
"B_58HVlxahL",
"NcdjnjUsW68",
"LTPK8g1zaWK",
"PP4emTAJXDC... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" We report the results of using partial shape augmentation to reinforce the DGCNN backbone. The training generated partial shapes with a random number of points to supplement the full shapes. As mentioned above, the improvement over the non-augmented baseline is not significant. However, we do observe improvements... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4,
4
] | [
"UezFn4AyY5f",
"6OFaqrpmbPK",
"Ku5kQ9N2dUt",
"k6-EMUCcz0n",
"wQUhnW2inRY",
"0vULY7Etcx7",
"7mG9TLBJjW",
"DLoXOa_WA1M",
"PP4emTAJXDC",
"LTPK8g1zaWK",
"NcdjnjUsW68",
"3GbteWpoZr73",
"B_58HVlxahL",
"fY9DiJPFCI",
"nips_2022_Z9ldMhplBrT",
"nips_2022_Z9ldMhplBrT",
"nips_2022_Z9ldMhplBrT",
... |
nips_2022_ZuSiW0EixjX | Redistribution of Weights and Activations for AdderNet Quantization | Adder Neural Network (AdderNet) provides a new way for developing energy-efficient neural networks by replacing the expensive multiplications in convolution with cheaper additions (i.e., L1-norm). To achieve higher hardware efficiency, it is necessary to further study the low-bit quantization of AdderNet. Due to the limitation that the commutative law in multiplication does not hold in L1-norm, the well-established quantization methods on convolutional networks cannot be applied on AdderNets. Thus, the existing AdderNet quantization techniques propose to use only one shared scale to quantize both the weights and activations simultaneously. Admittedly, such an approach can keep the commutative law in the L1-norm quantization process, while the accuracy drop after low-bit quantization cannot be ignored. To this end, we first thoroughly analyze the difference on distributions of weights and activations in AdderNet and then propose a new quantization algorithm by redistributing the weights and the activations. Specifically, the pre-trained full-precision weights in different kernels are clustered into different groups, then the intra-group sharing and inter-group independent scales can be adopted. To further compensate the accuracy drop caused by the distribution difference, we then develop a lossless range clamp scheme for weights and a simple yet effective outliers clamp strategy for activations. Thus, the functionality of full-precision weights and the representation ability of full-precision activations can be fully preserved. The effectiveness of the proposed quantization method for AdderNet is well verified on several benchmarks, e.g., our 4-bit post-training quantized adder ResNet-18 achieves an 66.5% top-1 accuracy on the ImageNet with comparable energy efficiency, which is about 8.5% higher than that of the previous AdderNet quantization methods. 
Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/AdderQuant. | Accept | The reviewers were mostly positive about this paper [8,6,6,4], while the negative reviewer did not update the review or respond after the author's response. I do not see any major issues remaining. The suggested method seems interesting, novel, and achieves good empirical results. | train | [
"EHI2TjL368",
"dDYeqb5esCg",
"gJJ35xuiukx",
"aI3sMQJgxrY",
"xn2v6JCgIqi",
"4blwLIxn_gO",
"uCm49e8MC6O",
"mhvJdh4d-_e",
"16elRJ5NSKq",
"56aLZhvEFfB",
"Fw55hZ1lwor"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors' response has solved all of my concern, I will keep my rating about this work.",
" We would like to sincerely thank the reviewer for providing a constructive review and detailed comments.\n\n**Q1:** Better to compare the accuracy drops in quantized CNNs as well as the currently presented accuracy dr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"uCm49e8MC6O",
"mhvJdh4d-_e",
"mhvJdh4d-_e",
"16elRJ5NSKq",
"16elRJ5NSKq",
"56aLZhvEFfB",
"Fw55hZ1lwor",
"nips_2022_ZuSiW0EixjX",
"nips_2022_ZuSiW0EixjX",
"nips_2022_ZuSiW0EixjX",
"nips_2022_ZuSiW0EixjX"
] |
nips_2022_wjClgX-muzB | Rethinking Variational Inference for Probabilistic Programs with Stochastic Support | We introduce Support Decomposition Variational Inference (SDVI), a new variational inference (VI) approach for probabilistic programs with stochastic support. Existing approaches to this problem rely on designing a single global variational guide on a variable-by-variable basis, while maintaining the stochastic control flow of the original program. SDVI instead breaks the program down into sub-programs with static support, before automatically building separate sub-guides for each. This decomposition significantly aids in the construction of suitable variational families, enabling, in turn, substantial improvements in inference performance. | Accept | The reviewers have reached consensus after processing the authors' feedback. They all agree that this manuscript presents an interesting approach to applying variational inference in a setting of probabilistic programming that is of interest to the community. The reviewers raise tangible points that the authors have incorporated into their revision. I recommend that the authors continue to polish their manuscript to clearly address these points in the final version of their manuscript. | train | [
"8MezfFjm8bt",
"nQ0By83IMUD",
"86sQmhMJLzm",
"H53vUFd2Swq",
"FXeiCrS63gp",
"OM17mcSahrK",
"0FhZ0LvnW_Z",
"LGkISGLHysv",
"4F8cfP7WDTr",
"PgtWuAF-YT",
"JU9ggTsMaFt",
"Ed6I8MqQ5h",
"8QgoRWkQNaF",
"eS9LdC6xNCW",
"oGETq0ZD_P0",
"gaJxoZPu_0P",
"ypDiqxlDQWX",
"m3SttBGt3EB",
"odOLPGMUpK2... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" We agree that there are already more complex, custom variational families that were proposed for specific models with stochastic support and that the section you highlight could be more clear about this. We will emphasize more clearly that we are focusing on automated guide construction methods and we are going t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
"86sQmhMJLzm",
"H53vUFd2Swq",
"Ed6I8MqQ5h",
"Ed6I8MqQ5h",
"0FhZ0LvnW_Z",
"LGkISGLHysv",
"oGETq0ZD_P0",
"m3SttBGt3EB",
"PgtWuAF-YT",
"ypDiqxlDQWX",
"nips_2022_wjClgX-muzB",
"8QgoRWkQNaF",
"eS9LdC6xNCW",
"ZgkiugRDayY",
"Ip8iKEAWb66",
"ypDiqxlDQWX",
"AlhZVqR5mIv",
"YJok-jP1WDH",
"ni... |
nips_2022_rwdpFgfVpvN | Online Convex Optimization with Hard Constraints: Towards the Best of Two Worlds and Beyond | This paper considers online convex optimization with hard constraints and analyzes achievable regret and cumulative hard constraint violation (violation for short). The problem distinguishes itself from online convex optimization with soft constraints, where a violation at one round can be compensated/cancelled by a conservative decision at a different round. We propose a RECtified Online Optimization algorithm (RECOO) and consider two settings: fixed constraints and adversarial constraints. Both settings have been considered in the literature. Compared with existing results, {\em RECOO achieves the best of two worlds and beyond.} For the fixed-constraints setting, RECOO achieves $O\left(\sqrt{T}\right)$ regret and $O(1)$ violation, where $T$ is the learning horizon. The best known results in this case are $O(\sqrt{T})$ regret and $O\left(T^{1/4}\right)$ violation. For the adversarial-constraints setting, it guarantees $O(\sqrt{T})$ regret and $O(T^{3/4})$ violation, which match the best existing results. When the loss functions are strongly convex, RECOO can guarantee $O(\log T)$ regret and $O(1)$ violation for fixed constraints, and $O(\log T)$ regret and $O(\sqrt{T\log T})$ violation for adversarial constraints. Both these results are order-wise better than the existing bounds. The regret and violation bounds mentioned above use the best fixed decision in hindsight as the baseline. This paper further considers a dynamic baseline where the comparator sequence is time-varying. This paper shows that RECOO not only improves the existing results in the fixed-constraints setting but also {\em for the first time,} guarantees dynamic regret and violation bounds in the adversarial-constraints setting. Our experiment results confirm that RECOO outperforms several existing algorithms for both fixed and adversarial constraints. 
| Accept | This paper provides an algorithm for online convex optimization with varying unknown constraints. Reviewers agree that the methods involved appear novel and interesting. However, the authors are strongly encouraged to add a discussion of the computational complexity of the method, which may provide the missing tradeoff for the currently free $\epsilon$ parameter. | train | [
"bB7c_FifyoH",
"2iOlBxmu4-e",
"DIWbuvLbCTc",
"RTstfRaiRIf",
"EPb8jV-nCVe",
"PITGJQ8hvsm",
"X_KdTnquHbS",
"ahj-9Xtgd29",
"rYTFUuGaJ-",
"K8lM4H8dAu",
"7aSayy5W-I6",
"0xu4iq-fTeY",
"Iah9IOLI-9"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nWe appreciate your detailed comments again. If you have any further questions, please let us know so we can address them before the rebuttal phase ends. Thank you very much for your time!",
" Dear Reviewer DX8X: \n\nSince it has been a few days that the author-reviewer discussion period start... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
5
] | [
"nips_2022_rwdpFgfVpvN",
"K8lM4H8dAu",
"RTstfRaiRIf",
"Iah9IOLI-9",
"PITGJQ8hvsm",
"0xu4iq-fTeY",
"ahj-9Xtgd29",
"K8lM4H8dAu",
"7aSayy5W-I6",
"nips_2022_rwdpFgfVpvN",
"nips_2022_rwdpFgfVpvN",
"nips_2022_rwdpFgfVpvN",
"nips_2022_rwdpFgfVpvN"
] |
nips_2022_sRKNkpUMQNr | Information-Theoretic GAN Compression with Variational Energy-based Model | We propose an information-theoretic knowledge distillation approach for the compression of generative adversarial networks, which aims to maximize the mutual information between teacher and student networks via a variational optimization based on an energy-based model. Because the direct computation of the mutual information in continuous domains is intractable, our approach alternatively optimizes the student network by maximizing the variational lower bound of the mutual information. To achieve a tight lower bound, we introduce an energy-based model relying on a deep neural network to represent a flexible variational distribution that deals with high-dimensional images and consider spatial dependencies between pixels, effectively. Since the proposed method is a generic optimization algorithm, it can be conveniently incorporated into arbitrary generative adversarial networks and even dense prediction networks, e.g., image enhancement models. We demonstrate that the proposed algorithm achieves outstanding performance in model compression of generative adversarial networks consistently when combined with several existing models. | Accept | This work concerns the compression of generative adversarial networks and other image generation networks, such as dense prediction/image to image networks. Where existing approaches to compress these models rely on matching pairs of outputs, this work optimizes the Barber-Agakov lower bound on the differential mutual information between teacher and student, parameterizing the bound using an energy-based model. Offline distillation alternately fixes the student network parameters and the EBM parameters while optimizing the other, while online distillation can be performed by also including the teacher parameters, holding two of the three fixed while optimizing the third in a "round robin" fashion. 
The gradient of the EBM partition function is estimated using Langevin dynamics for a fixed number of steps, with chains initialized at student outputs. Qualitative and quantitative results support the assertion that this represents an improvement over existing approaches.
Reviewers found the paper overall quite clear, the method moderately original, the technical details sound, and the work well-situated in the context of other compression methods. Reviewer cQSG in particular praised the "commendable job in ablations against other relevant methods". 6hcq had several concerns around the clarity of the manuscript, which were addressed in rebuttal and hopefully can be incorporated into future versions. Several reviewers had concerns about the scale of experiments; the authors responded in rebuttal with megapixel experiments using StyleGAN2. The AC concurs with cQSG who remarked that these results significantly strengthen the paper.
Reviewer tn1T remained skeptical of the presented qualitative results, even after rebuttal; however, the authors were belatedly able to provide qualitative results on 1024x1024 images in the supplementary material. tn1T did not comment on these results, so I cannot infer how they feel about them, but they appear somewhat convincing to the AC: the samples presented from the VEM-trained model do not exhibit artifacts to the same degree. There is, of course, the issue of cherry-picking, and I would encourage the authors to provide as many of these comparisons as possible at reduced size, highlighting artifacts where they appear but not dropping baseline samples which do not exhibit them (if such samples arise), and highlighting any noticeable artifacts in the images produced by their method.
The AC feels that this work is technically solid, and noting the endorsement of two reviewers and that the qualitative comparison demanded by tn1T was carried out but not acknowledged by tn1T, the case in favour of acceptance outweighs that in favour of rejection. | train | [
"kKYJqg529L",
"vmATO-0ThZi",
"WrWPbIRiw_",
"-7e6HyzqrqB",
"lQ2MRDmUMj",
"OA1l1sSSxAc",
"ao_nSHKTOOp",
"rARv2VY3v4",
"jYbekigQdQ_",
"beQPeAoXx4V",
"GtrgH7agn_7",
"tZUEx2i0H3B",
"J4a6pietHs",
"7Sb95teitJ0"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank all reviewers for their constructive and positive comments and we present the summary of our responses to each reviewer as below. \n\nQ1. Visual quality compared with the previous methods \n\nA. Upon request by Reviewer tn1T, we present more qualitative results of VEM, OMGD, and CAGC in Figure ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5
] | [
"nips_2022_sRKNkpUMQNr",
"-7e6HyzqrqB",
"beQPeAoXx4V",
"lQ2MRDmUMj",
"jYbekigQdQ_",
"ao_nSHKTOOp",
"rARv2VY3v4",
"GtrgH7agn_7",
"7Sb95teitJ0",
"J4a6pietHs",
"tZUEx2i0H3B",
"nips_2022_sRKNkpUMQNr",
"nips_2022_sRKNkpUMQNr",
"nips_2022_sRKNkpUMQNr"
] |
nips_2022_7vmyjUHgm9_ | Less-forgetting Multi-lingual Fine-tuning | Multi-lingual fine-tuning (MLF), which fine-tunes a multi-lingual language model (MLLM) with multiple source languages, aims to gain good zero-shot performance on target languages. In MLF, the fine-tuned model tends to fit the source languages while forgetting its cross-lingual knowledge obtained from the pre-training stage. This forgetting phenomenon degenerates the zero-shot performance of MLF, which remains under-explored. To fill this gap, this paper proposes a multi-lingual fine-tuning method, dubbed Less-forgetting Multi-lingual Fine-tuning (LF-MLF). In LF-MLF, we cast multi-lingual fine-tuning as a constrained optimization problem, where the optimization objective is to minimize forgetting, and constraints are reducing the fine-tuning loss. The proposed method has superior zero-shot performance; furthermore, it can achieve the Pareto stationarity. Extensive experiments on Named Entity Recognition, Question Answering and Natural Language Inference back up our theoretical analysis and validate the superiority of our proposals. | Accept | The paper proposes a method for fine-tuning multi-lingual pre-trained language models in multiple languages simultaneously. The task is formalized as a constrained optimization problem and the upper bound of the forgetting is given in theory. A method is developed for multi-lingual fine-tuning to minimize the upper bound. Experiments are conducted in multiple downstream tasks; the model is fine-tuned on a few high-resource languages and the performance is improved on low-resource languages in zero-shot settings. The authors responded to the reviewers' concerns and the reviewers agree the responses addressed their concerns. The paper is recommended to be accepted, and I ask the authors to carefully prepare the final camera-ready version based on the reviewers' feedback. | val | [
"vP8693mzth",
"tdmq57icrm",
"Kf6cnxj3XIO",
"5ozI0HlUMq",
"MpLL2QKOiCZ",
"41F7TfCRLiE",
"3p2FBlfOyc-q",
"OVkTHmUrfTl",
"2P4UaMjJ7n5",
"vaJc0v6Nga_"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We highly appreciate your time for reading our revised paper. Your constructive and professional comments help a lot for improving this work. As to the limitations, we have discussed from two perspectives: (1) the assumption of being close to the original pretraining benefits the cross lingual generalization; (2)... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"5ozI0HlUMq",
"Kf6cnxj3XIO",
"3p2FBlfOyc-q",
"41F7TfCRLiE",
"vaJc0v6Nga_",
"2P4UaMjJ7n5",
"OVkTHmUrfTl",
"nips_2022_7vmyjUHgm9_",
"nips_2022_7vmyjUHgm9_",
"nips_2022_7vmyjUHgm9_"
] |
nips_2022_c3HrNgQE7d | Exploring Figure-Ground Assignment Mechanism in Perceptual Organization | Perceptual organization is a challenging visual task that aims to perceive and group the individual visual elements so that it is easy to understand the meaning of the scene as a whole. Most recent methods building upon advanced Convolutional Neural Network (CNN) come from learning discriminative representation and modeling context hierarchically. However, when the visual appearance difference between foreground and background is obscure, the performance of existing methods degrades significantly due to the visual ambiguity in the discrimination process. In this paper, we argue that the figure-ground assignment mechanism, which conforms to human vision cognitive theory, can be explored to empower CNN to achieve a robust perceptual organization despite visual ambiguity. Specifically, we present a novel Figure-Ground-Aided (FGA) module to learn the configural statistics of the visual scene and leverage it for the reduction of visual ambiguity. Particularly, we demonstrate the benefit of using stronger supervisory signals by teaching the FGA module to perceive configural cues, i.e., convexity and lower region, that humans deem important for the perceptual organization. Furthermore, an Interactive Enhancement Module (IEM) is devised to leverage such configural priors to assist representation learning, thereby achieving robust perceptual organization with complex visual ambiguities. In addition, a well-founded visual segregation test is designed to validate the capability of the proposed FGA mechanism explicitly. Comprehensive evaluation results demonstrate our proposed FGA mechanism can effectively enhance the capability of perceptual organization on various baseline models. Moreover, the model augmented via our proposed FGA mechanism also outperforms state-of-the-art approaches on four challenging real-world applications. 
| Accept | Overall, the reviewers commend the motivation of the approach, the core ideas presented in the paper, and the extensive experiments conducted for four different applications including camouflaged and salient object detection, infection, and polyp segmentation.
In response to Reviewer fHq8, the authors have mentioned updated results with hyper-parameter tuning, however, they don’t mention which set is used for this purpose. Details on whether the validation set is used or not and how it is chosen are important for the final version.
In response to Reviewer ghhU, authors have reported new experiments and comparisons, alongside clarifications on motivation and justification for the choices made as part of the approach.
It appears that the major concerns from reviewers have been addressed in the response and the paper can be accepted after the rebuttal. Authors are suggested to include all the suggested changes in the final version. | train | [
"5Tpm7HjYrF",
"BhFkAzAi2lh",
"lTvRmJFVJlQ",
"2x2dLIW3aFH",
"LcqzRA4sm6g",
"HnduKggnUOL",
"w3BkHgY400s",
"MC39_CEdxtl",
"V995MCUhRRB",
"kEsKXMXJbS4",
"eH2nPk8WntV",
"shp0GNjPmD",
"hyNGpKKinTr",
"IgdyH_89gVq",
"7hfkIet67f"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your new comments, we clarify your concern below.\n\n\n**Q#:** Regarding the selection of two cues, I understand that the selected cues are of importance. However, how about other cues mentioned in the related work section since they are also \"factors that affect Figure-Ground assignment\".\n\n**A#:**... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"BhFkAzAi2lh",
"2x2dLIW3aFH",
"IgdyH_89gVq",
"LcqzRA4sm6g",
"HnduKggnUOL",
"IgdyH_89gVq",
"MC39_CEdxtl",
"7hfkIet67f",
"hyNGpKKinTr",
"shp0GNjPmD",
"nips_2022_c3HrNgQE7d",
"nips_2022_c3HrNgQE7d",
"nips_2022_c3HrNgQE7d",
"nips_2022_c3HrNgQE7d",
"nips_2022_c3HrNgQE7d"
] |
nips_2022_8li9SYYY3eQ | Language Conditioned Spatial Relation Reasoning for 3D Object Grounding | Localizing objects in 3D scenes based on natural language requires understanding and reasoning about spatial relations. In particular, it is often crucial to distinguish similar objects referred by the text, such as "the left most chair" and "a chair next to the window". In this work we propose a language-conditioned transformer model for grounding 3D objects and their spatial relations. To this end, we design a spatial self-attention layer that accounts for relative distances and orientations between objects in input 3D point clouds. Training such a layer with visual and language inputs enables to disambiguate spatial relations and to localize objects referred by the text. To facilitate the cross-modal learning of relations, we further propose a teacher-student approach where the teacher model is first trained using ground-truth object labels, and then helps to train a student model using point cloud inputs. We perform ablation studies showing advantages of our approach. We also demonstrate our model to significantly outperform the state of the art on the challenging Nr3D, Sr3D and ScanRefer 3D object grounding datasets. | Accept | Reviewers were in agreement that the method and manuscript are strong and provide a valuable connection between 3D perception and language.
The evaluation suffers somewhat from the fact that there are no good datasets targeted at this problem. Authors mitigate this by performing a thorough evaluation with numerous ablations/alternatives that demonstrate their method works as intended.
Beyond just this application, the general approach taken of incorporating priors about a problem domain into a self-attention layer for a Transformer to combine multimodal input is timely and will be of wide interest.
I encourage the authors to update the manuscript and include the additional experiments and results they produced for reviewers.
| train | [
"5eKBg8K3OPL",
"FXQPRhynjO",
"V_bDbGLeap",
"Lg_JbI12kiv",
"ipNSHgKXbHd",
"7J_h_iFInSw",
"__TrIvMY7XR",
"pSOYWwp_N4",
"02Vdl4VPUS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed response. The discussions about baselines and the language encoders are helpful. Since there is a huge performance gap between different language encoders, I would suggest the authors add this discussion to the main paper. Overall, I am satisfied with the author's responses and would ke... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2
] | [
"Lg_JbI12kiv",
"02Vdl4VPUS",
"pSOYWwp_N4",
"__TrIvMY7XR",
"7J_h_iFInSw",
"nips_2022_8li9SYYY3eQ",
"nips_2022_8li9SYYY3eQ",
"nips_2022_8li9SYYY3eQ",
"nips_2022_8li9SYYY3eQ"
] |
nips_2022_dz79MhQXWvg | Weakly supervised causal representation learning | Learning high-level causal representations together with a causal model from unstructured low-level data such as pixels is impossible from observational data alone. We prove under mild assumptions that this representation is however identifiable in a weakly supervised setting. This involves a dataset with paired samples before and after random, unknown interventions, but no further labels. We then introduce implicit latent causal models, variational autoencoders that represent causal variables and causal structure without having to optimize an explicit discrete graph structure. On simple image data, including a novel dataset of simulated robotic manipulation, we demonstrate that such models can reliably identify the causal structure and disentangle causal variables. | Accept | The reviewers were split about this paper: on one hand they would have liked to see better experimental results, particularly for larger graphs, on the other they appreciated the identifiability results and the ILCM algorithm. After going through the paper and discussion I have voted to accept for the following reason: even though the experimental results could be strengthened, papers with novel approaches to long-standing problems are the kind that make NeurIPS a uniquely interesting conference, particularly if those papers have strong theoretical guarantees. I urge the authors to take all of the reviewers' changes into account (if not already done so). Once done, this paper will be a nice addition to the conference! | train | [
"8xU24BWibI",
"pUgHe97mVR",
"IOQTTar-go",
"JqsQLMPimE3",
"Pg84bQjpf81",
"6jTtkWZLmmC",
"zJ1R7agw4Pc",
"dAypEhMKfu1",
"yZh-u7swWBE",
"B3ZyVz90pgl",
"cIDYGYq4aMw",
"T_W7mNLNzG",
"8iSQB5Cpo9",
"xMhNmUv4C4v",
"Sk_Erlibqp",
"G3svjvZ3WVj",
"wevYiyJVQ0L"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the reply, we are glad to hear that we were able to clarify our approach.\n\nYour questions and the toy example were very helpful. In addition to improving our description of ILCMs along these lines, in the final version of our paper we will also discuss which of our assumptions are required for red... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"pUgHe97mVR",
"JqsQLMPimE3",
"JqsQLMPimE3",
"Pg84bQjpf81",
"T_W7mNLNzG",
"dAypEhMKfu1",
"nips_2022_dz79MhQXWvg",
"nips_2022_dz79MhQXWvg",
"wevYiyJVQ0L",
"G3svjvZ3WVj",
"Sk_Erlibqp",
"8iSQB5Cpo9",
"xMhNmUv4C4v",
"nips_2022_dz79MhQXWvg",
"nips_2022_dz79MhQXWvg",
"nips_2022_dz79MhQXWvg",
... |
nips_2022_tro0_OqIVde | HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions | Recent progress in vision Transformers exhibits great success in various tasks driven by the new spatial modeling mechanism based on dot-product self-attention. In this paper, we show that the key ingredients behind the vision Transformers, namely input-adaptive, long-range and high-order spatial interactions, can also be efficiently implemented with a convolution-based framework. We present the Recursive Gated Convolution ($\textit{g}^\textit{n}$Conv) that performs high-order spatial interactions with gated convolutions and recursive designs. The new operation is highly flexible and customizable, which is compatible with various variants of convolution and extends the two-order interactions in self-attention to arbitrary orders without introducing significant extra computation. $\textit{g}^\textit{n}$Conv can serve as a plug-and-play module to improve various vision Transformers and convolution-based models. Based on the operation, we construct a new family of generic vision backbones named HorNet. Extensive experiments on ImageNet classification, COCO object detection and ADE20K semantic segmentation show HorNet outperform Swin Transformers and ConvNeXt by a significant margin with similar overall architecture and training configurations. HorNet also shows favorable scalability to more training data and larger model sizes. Apart from the effectiveness in visual encoders, we also show $\textit{g}^\textit{n}$Conv can be applied to task-specific decoders and consistently improve dense prediction performance with less computation. Our results demonstrate that $\textit{g}^\textit{n}$Conv can be a new basic module for visual modeling that effectively combines the merits of both vision Transformers and CNNs. Code is available at https://github.com/raoyongming/HorNet. | Accept |
This paper introduces a new operation **gnConv** and a computer vision network architecture **HorNet**. Motivated by the philosophy behind the success of vision Transformers, the key idea of gnConv is to build a recursive form of gated convolution. It makes the module input-adaptive, with long-range and high-order spatial interactions. Consistent improvements are shown over Swin and ConvNeXt on well-established CV benchmarks such as image classification on ImageNet, semantic segmentation on ADE20K and object detection on COCO.
The paper receives unanimous accept from all reviewers (Reviewer 3Snz champions the paper with a rating score of 8), leading to an ``Accept'' decision.
"7xxFoo3ZAV1",
"3anNWkWmFeDh",
"j9YaO4udujL",
"9nq_zcPB3U",
"amll9CVcER-",
"X_wvB6sOvGa",
"MMb6cU_t0Xu",
"MzZTpdYb24",
"wpEacKUrgqH",
"V_EoWH4iJXu",
"IfJzWYf_Uuw",
"-IuYWTa3tPR",
"K0XufsrXDBo",
"IX5190b0vH_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing my concerns. I have increased the rating to 5.",
" Thanks a lot for checking our response and providing valuable feedback.\n\nHigh-order spatial interaction is the key concept introduced in our paper. Previous work on Transformer-like architectures usually explores the long-term and input-... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"3anNWkWmFeDh",
"j9YaO4udujL",
"X_wvB6sOvGa",
"MzZTpdYb24",
"wpEacKUrgqH",
"IfJzWYf_Uuw",
"nips_2022_tro0_OqIVde",
"IX5190b0vH_",
"V_EoWH4iJXu",
"K0XufsrXDBo",
"-IuYWTa3tPR",
"nips_2022_tro0_OqIVde",
"nips_2022_tro0_OqIVde",
"nips_2022_tro0_OqIVde"
] |
nips_2022_I-ggHgon-Az | What You See is What You Classify: Black Box Attributions | An important step towards explaining deep image classifiers lies in the identification of image regions that contribute to individual class scores in the model's output. However, doing this accurately is a difficult task due to the black-box nature of such networks. Most existing approaches find such attributions either using activations and gradients or by repeatedly perturbing the input. We instead address this challenge by training a second deep network, the Explainer, to predict attributions for a pre-trained black-box classifier, the Explanandum. These attributions are provided in the form of masks that only show the classifier-relevant parts of an image, masking out the rest. Our approach produces sharper and more boundary-precise masks when compared to the saliency maps generated by other methods. Moreover, unlike most existing approaches, ours is capable of directly generating very distinct class-specific masks in a single forward pass. This makes the proposed method very efficient during inference. We show that our attributions are superior to established methods both visually and quantitatively with respect to the PASCAL VOC-2007 and Microsoft COCO-2014 datasets. | Accept | The paper proposes an attribution prediction approach to enhance the interpretability of DNN models. For this purpose, a second “explainer” model is used which can generate class-specific masks for the classification-relevant regions.
The reviewers have overall commended the novelty of the approach, clear writing, and detailed experiments. However, there were concerns about the training required for explainability, which makes the approach relatively more computationally demanding. Given that the performance is better than GradCAM, the approach still offers an advantage and a suitable alternative. The rebuttal provided further clarity and corrections in light of the initial reviews; these changes must be incorporated in the final version. The AC will also support incorporating a user study since the final goal of these attributions is to provide visual explanations for human users.
Based on the reviews and rebuttal, AC recommends accepting the paper and would like to congratulate the authors!
| train | [
"CqHSdWhm_I6",
"zqAoNCB8_D",
"paT6J9_42Fc",
"UOP3tdQTuGX",
"dwezajD8q7c",
"8F_O1R4Iot5",
"wW3xd2xZG_Q",
"3IMhr9PBwa7",
"lC5gFzWPAJy",
"h7XO2jPIp0X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the detailed response. \n\nThe authors have clarified the problem setting context. I would argue that this is not a post-hoc explainability model in the classical sense. Using ground truth class labels for training, the masking network adds elements that are not faithful to the base classi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"dwezajD8q7c",
"h7XO2jPIp0X",
"nips_2022_I-ggHgon-Az",
"h7XO2jPIp0X",
"lC5gFzWPAJy",
"lC5gFzWPAJy",
"3IMhr9PBwa7",
"nips_2022_I-ggHgon-Az",
"nips_2022_I-ggHgon-Az",
"nips_2022_I-ggHgon-Az"
] |
nips_2022_p9_Z4m2Vyvr | Amortized Mixing Coupling Processes for Clustering | Considering the ever-increasing scale of data, which may contain tens of thousands of data points or complicated latent structures, the issue of scalability and algorithmic efficiency becomes of vital importance for clustering. In this paper, we propose cluster-wise amortized mixing coupling processes (AMCP), which is able to achieve efficient amortized clustering in a well-defined non-parametric Bayesian posterior. Specifically, AMCP learns clusters sequentially with the aid of the proposed intra-cluster mixing (IntraCM) and inter-cluster coupling (InterCC) strategies, which investigate the relationship between data points and reference distribution in a linear optimal transport mixing view, and coupling the unassigned set and assigned set to generate new cluster. IntraCM and InterCC avoid pairwise calculation of distances between clusters and reduce the computational complexity from quadratic to linear in the current number of clusters. Furthermore, cluster-wise sequential process is able to improve the quick adaptation ability for the next cluster generation. In this case, AMCP simultaneously learns what makes a cluster, how to group data points into clusters, and how to adaptively control the number of clusters. To illustrate the superiority of the proposed method, we perform experiments on both synthetic data and real-world data in terms of clustering performance and computational efficiency. The source code is available at https://github.com/HuafengHK/AMCP. | Accept | In this paper, the authors propose a novel amortized clustering method in which intra-cluster mixing and inter-cluster coupling are introduced. The optimal transport is used to learn the relationship between samples and reference distribution with intra-cluster mixing. The inter-cluster coupling assigns samples to clusters and generates new clusters. 
The proposed method is novel in the sense that optimal transport is first introduced to amortized clustering, and its effectiveness is shown through synthetic and real datasets.
Many readers would be interested in this novel approach. | train | [
"HRDKNcHVjfB",
"K2l6rKehry",
"i1P1BD9Rfkf",
"D8HLg-Z8fX",
"nSC6r_5ZmqJ",
"rfBJT-GtWE-z",
"4A8a34DaAy",
"_nLxQ5rwhB0",
"h7VoeCOZSh7",
"_zZnrz7wEas",
"YXw6FaDc6EJ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your positive feedback. Our paper won't be better without your nice suggestions. Thanks again.",
 " Thank you very much for increasing the score! Our paper would not be as good without your valuable suggestions. Thanks again.",
" I would like to appreciate the prompt and rewarding responses give... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"i1P1BD9Rfkf",
"D8HLg-Z8fX",
"nSC6r_5ZmqJ",
"rfBJT-GtWE-z",
"YXw6FaDc6EJ",
"_zZnrz7wEas",
"h7VoeCOZSh7",
"nips_2022_p9_Z4m2Vyvr",
"nips_2022_p9_Z4m2Vyvr",
"nips_2022_p9_Z4m2Vyvr",
"nips_2022_p9_Z4m2Vyvr"
] |
nips_2022_INzRLBAA4JX | Revisiting Sparse Convolutional Model for Visual Recognition | Despite strong empirical performance for image classification, deep neural networks are often regarded as ``black boxes'' and they are difficult to interpret. On the other hand, sparse convolutional models, which assume that a signal can be expressed by a linear combination of a few elements from a convolutional dictionary, are powerful tools for analyzing natural images with good theoretical interpretability and biological plausibility. However, such principled models have not demonstrated competitive performance when compared with empirically designed deep networks. This paper revisits sparse convolutional modeling for image classification and bridges the gap between good empirical performance (of deep learning) and good interpretability (of sparse convolutional models). Our method uses differentiable optimization layers that are defined from convolutional sparse coding as drop-in replacements of standard convolutional layers in conventional deep neural networks. We show that such models have equally strong empirical performance on CIFAR-10, CIFAR-100 and ImageNet datasets when compared to conventional neural networks. By leveraging the stable recovery property of sparse modeling, we further show that such models can be much more robust to input corruptions as well as adversarial perturbations in testing through a simple, proper trade-off between sparse regularization and data reconstruction terms. | Accept | In this paper, the authors introduce a convolutional sparse coding layer, which is intended as a replacement for a convolutional layer that has greater interpretability and stability. Experiments show that a ResNet modified with this CSC-layer can achieve performance on standard datasets comparable to convolutional networks and is more robust to noise.
The strength of this paper is that the novel layer it proposes is faster than previous sparse coding networks and that it has comparable accuracy and speed to ResNets while being more robust. A weakness of the paper is that the claims of improved interpretability do not transfer to the network as a whole. The strengths of the paper outweigh the weaknesses, and the authors should clarify in the camera-ready that interpretability is only intended layer-wise. | train | [
"90W88iuDif",
"ZJqNiWIp0q0",
"vapRlkFEwmp",
"I1RilkEObZ",
"XxDQii6l0Ju",
"mY3EYbkobI0",
"WeMOMstw1TUf",
"fy5BdC95bwe",
"96rTFKvFlxz",
"7_7MvUX6HN",
"e_vbTgs8Jp1",
"7CxjdvC29S",
"JlPDGbyfO0j"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
 " Dear reviewer EVjq,\n\nFor Q1, it might be caused by the number of FISTA iterations. We will conduct more ablation studies and visualizations in a future version. \nAlso, thanks for the suggestion about the visualization; we will add a border between different filters to make the visualization clearer.",
" Thanks ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"WeMOMstw1TUf",
"vapRlkFEwmp",
"I1RilkEObZ",
"7_7MvUX6HN",
"e_vbTgs8Jp1",
"JlPDGbyfO0j",
"96rTFKvFlxz",
"JlPDGbyfO0j",
"7CxjdvC29S",
"e_vbTgs8Jp1",
"nips_2022_INzRLBAA4JX",
"nips_2022_INzRLBAA4JX",
"nips_2022_INzRLBAA4JX"
] |
nips_2022_vt516zga8m | Cache-Augmented Inbatch Importance Resampling for Training Recommender Retriever | Recommender retrievers aim to rapidly retrieve a fraction of items from the entire item corpus when a user query arrives, with the representative two-tower model trained with the log softmax loss. For efficiently training recommender retrievers on modern hardware, inbatch sampling, where the items in the mini-batch are shared as negatives to estimate the softmax function, has attracted growing interest. However, existing inbatch sampling based strategies just correct the sampling bias of inbatch items with item frequency, being unable to distinguish the user queries within the mini-batch and still incurring significant bias from the softmax. In this paper, we propose Cache-Augmented Inbatch Importance Resampling (XIR) for training recommender retrievers, which not only offers different negatives to user queries with inbatch items, but also adaptively achieves a more accurate estimation of the softmax distribution. Specifically, XIR resamples items from the given mini-batch training pairs based on certain probabilities, where a cache with more frequently sampled items is adopted to augment the candidate item set, with the purpose of reusing historically informative samples. XIR enables sampling query-dependent negatives based on inbatch items and capturing dynamic changes of model training, which leads to a better approximation of the softmax and further contributes to better convergence. Finally, we conduct experiments to validate the superior performance of the proposed XIR compared with competitive approaches. | Accept | The idea of more representative mini-batches sounds like a natural extension of the work done on stratified sampling. The reviewers were convinced the idea is both new and effective on real data.
In particular, the discussion with mWFy clarified that this work is an alternative way to explore the capacity of ranking optimization by leveraging the samples shared in the batch, in comparison to other relevant work. | train | [
"L-7S3CTvD-i",
"AOZguDUkzaH",
"BN9OYbkKnTd",
"-JAcLZIoz1q",
"dciMvLyYiu",
"k4u3tCZNLCU",
"Hot3AqG2LFDu",
"qL6ggx0CkVr",
"Lzg_294QaTy",
"2trKDGSHCXa",
"D8Op6AL27UU",
"rExhbl85t1j",
"ndWegZgBPA",
"Ji-39VpOiau"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
 " Thanks for the rebuttal and appreciate the efforts. I think most of my comments/questions are addressed. Other reviews and related discussions also resolve my concern about the level of contribution. Changing the rating to 6.",
" Dear Authors:\n\nThank you so much for further feedback to clarify the concerns about ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
5
] | [
"ndWegZgBPA",
"-JAcLZIoz1q",
"dciMvLyYiu",
"dciMvLyYiu",
"Hot3AqG2LFDu",
"rExhbl85t1j",
"rExhbl85t1j",
"ndWegZgBPA",
"Ji-39VpOiau",
"D8Op6AL27UU",
"nips_2022_vt516zga8m",
"nips_2022_vt516zga8m",
"nips_2022_vt516zga8m",
"nips_2022_vt516zga8m"
] |
nips_2022_SZDqCOv6vTB | Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems | Belief Propagation (BP) is an important message-passing algorithm for various reasoning tasks over graphical models, including solving Constraint Optimization Problems (COPs). It has been shown that BP can achieve state-of-the-art performance on various benchmarks by mixing old and new messages before sending the new one, i.e., damping. However, tuning a static damping factor for BP, as existing methods do, is not only laborious but also harms performance. Moreover, existing BP algorithms treat each variable node's neighbors equally when composing a new message, which also limits their exploration ability. To address these issues, we seamlessly integrate BP, Gated Recurrent Units (GRUs), and Graph Attention Networks (GATs) within the message-passing framework to reason about dynamic weights and damping factors for composing new BP messages. Our model, Deep Attentive Belief Propagation (DABP), takes the factor graph and the BP messages in each iteration as the input and infers the optimal weights and damping factors through GRUs and GATs, followed by a multi-head attention layer. Furthermore, unlike existing neural-based BP variants, we propose a novel self-supervised learning algorithm for DABP with a smoothed solution cost, which does not require expensive training labels and also avoids the common out-of-distribution issue through efficient online learning. Extensive experiments show that our model significantly outperforms state-of-the-art baselines. | Accept | 4 knowledgeable reviewers reviewed the paper, 3 of them recommending weak acceptance, 1 borderline rejection. The reviewers engaged with the authors and a discussion among the reviewers took place. The reviewers appreciate the considered problem, the novelty of the proposed approach and the reported performance improvements.
At the same time, there are concerns regarding the theoretical justification of the method, the relation to existing work, and the comparison with other existing methods (lacking baselines and pushing the baselines to the limit). There was a discussion regarding the need for a theoretical justification, and I side more with the reviewers who argue that such a justification is not absolutely necessary -- nevertheless, more motivation and intuition about the proposed approach should still be provided. In summary, the paper is viewed as borderline, which I agree with, but I think there are some relevant contributions which could be interesting to the community. Hence I am recommending acceptance of the paper but strongly encourage the authors to carefully consider all comments and suggestions which came up in the reviews and discussions with the reviewers when preparing the final version of their paper. | test | [
"iQNd38p_4CK",
"krlgNvO2WR",
"kwb75orZXpg",
"EXWHe1EzI-",
"iZeJF64KA9C",
"h8hq0MRGFwV",
"051ThR0SfJU",
"Ttm6q2-1-ZM",
"SlA8ZsfbHl",
"nJJ4X0k-_Zi",
"hEtc543WXkg",
"bKU3YrpIsCM",
"2I5yS8kyQUN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the very detailed and helpful response. I don't have any more questions now.",
 " Thank you to the authors for answering my comments and for making the changes to the paper clear.\nI've modified my original rating accordingly.\n",
" Thanks for the further clarifications. I appreciate the efforts in prov... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"Ttm6q2-1-ZM",
"h8hq0MRGFwV",
"EXWHe1EzI-",
"iZeJF64KA9C",
"051ThR0SfJU",
"2I5yS8kyQUN",
"bKU3YrpIsCM",
"hEtc543WXkg",
"nJJ4X0k-_Zi",
"nips_2022_SZDqCOv6vTB",
"nips_2022_SZDqCOv6vTB",
"nips_2022_SZDqCOv6vTB",
"nips_2022_SZDqCOv6vTB"
] |
nips_2022_FgDzS8_Fz7c | Category-Level 6D Object Pose Estimation in the Wild: A Semi-Supervised Learning Approach and A New Dataset | 6D object pose estimation is one of the fundamental problems in computer vision and robotics research. While a lot of recent efforts have been made on generalizing pose estimation to novel object instances within the same category, namely category-level 6D pose estimation, it is still restricted to constrained environments given the limited amount of annotated data. In this paper, we collect Wild6D, a new unlabeled RGBD object video dataset with diverse instances and backgrounds. We utilize this data to generalize category-level 6D object pose estimation in the wild with semi-supervised learning. We propose a new model, called Rendering for Pose estimation network (RePoNet), that is jointly trained using the free ground-truths of the synthetic data, and a silhouette matching objective function on the real-world data. Without using any 3D annotations on real data, our method outperforms state-of-the-art methods on the previous dataset and our Wild6D test set (with manual annotations for evaluation) by a large margin. Project page with Wild6D data: \url{https://oasisyang.github.io/semi-pose/}. | Accept | This paper received 4 reviews with the following scores: SR - BR - WA - A. The reviewers acknowledged the importance of the addressed problem, the dataset contribution, the clear presentation, and a meaningful approach with solid empirical performance. Main disagreements were around comparisons with existing methods (some published at CVPR'22), and the fairness of training setups (supervised-only vs. augmentation with real data).
The AC confirms that per NeurIPS policies the lack of comparisons with CVPR'22 publications indeed cannot be a basis for rejection.
However, looking at the additional results in Table 5 (including the CVPR 2022 paper) it looks like methods have actually been compared on both datasets using the various training regimes. Given that and that the remaining concerns of the reviewers were largely addressed in the rebuttal, both the AC and the SAC recommend acceptance. | train | [
"bj01odG7H6l",
"W_xxLCPORkZ",
"ITprRje415Y",
"K1fsGZZzOee",
"AzESPHo3zM",
"BsEGND-BSwK",
"eTcf7uY8MV7",
"fap1Js30L8l",
"w3aDelysmNm",
"Z4YMP-01Rvj",
"P-ACXnBn5HT"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for responding to and addressing my questions and comments.",
 " Dear ACs and Reviewers, \n\nThank you so much again for the detailed feedback. We have reached the halfway point of the author-reviewer discussion period. However, there are no responses yet to our replies.\n\nPlease do not hesitate to let us know... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
5
] | [
"ITprRje415Y",
"nips_2022_FgDzS8_Fz7c",
"P-ACXnBn5HT",
"Z4YMP-01Rvj",
"Z4YMP-01Rvj",
"w3aDelysmNm",
"fap1Js30L8l",
"nips_2022_FgDzS8_Fz7c",
"nips_2022_FgDzS8_Fz7c",
"nips_2022_FgDzS8_Fz7c",
"nips_2022_FgDzS8_Fz7c"
] |
nips_2022_IfFZr1gl0b | Uni-Mol: A Universal 3D Molecular Representation Learning Framework | Molecular representation learning (MRL) has gained tremendous attention due to its critical role in learning from limited supervised data for applications like drug design. In most MRL methods, molecules are treated as 1D sequential tokens or 2D topology graphs, limiting their ability to incorporate 3D information for downstream tasks and, in particular, making it almost impossible for 3D geometry prediction or generation. Herein, we propose Uni-Mol, a universal MRL framework that significantly enlarges the representation ability and application scope of MRL schemes. Uni-Mol is composed of two models with the same SE(3)-equivariant transformer architecture: a molecular pretraining model trained on 209M molecular conformations, and a pocket pretraining model trained on 3M candidate protein pocket data. The two models are used independently for separate tasks, and are combined when used in protein-ligand binding tasks. By properly incorporating 3D information, Uni-Mol outperforms SOTA in 14/15 molecular property prediction tasks. Moreover, Uni-Mol achieves superior performance in 3D spatial tasks, including protein-ligand binding pose prediction, molecular conformation generation, etc. Finally, we show that Uni-Mol can be successfully applied to tasks with few-shot data, such as pocket druggability prediction. | Reject | This paper proposes a new framework for molecular representation learning (MRL) using both 2D and 3D molecular data. This framework is general and applied to various problems (e.g., protein-ligand binding pose prediction and molecular conformation prediction). I believe this paper is potentially quite impactful and able to reshape how MRL research is conducted.
However, the contribution of this paper is unclear in its current form.
- The proposed methodology is not very novel and uses a combination of existing methods.
- While the main contribution of this paper is to propose a new framework for MRL, the experiments focus on evaluating a single algorithm (i.e., SE(3)-equivariant model + 3 self-supervised tasks) compared to existing algorithms under different frameworks. In other words, the experiments do not deliver new information since (1) existing works demonstrated how combining 2D & 3D data improves downstream task performance and (2) pretraining is useful for the considered downstream tasks.
Overall, I recommend rejection for this paper. However, I believe this paper can be a very strong submission for the next conference if the authors clearly demonstrate their contribution. For example, I think the proposed idea would be pleasantly presented as an important "benchmark" paper, rather than a framework with superior performance. | train | [
"s4ZAaJZY5gy",
"6PV7yHcXFPp",
"pGMAjn_byTL",
"vr-lLKZiOhH",
"-jeGmGfwQQM",
"Zhd-TZoWHQ0",
"ev4A1EXxID",
"s9TqG3LkgIwy",
"wZ0UXiJP5P",
"J4YP9qvto9p",
"VlIgJJsDQp6",
"7wQjSwTgTau",
"PnpE-7HLJh_",
"Uv-HpgJzDqK",
"Y2Ghep82GlE",
"jZWLM_S3Kq1",
"3djj7SBxF8H",
"BJHsVZ0g2ot",
"pD2kvQRCZ8... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
 " Dear reviewers, \n\nThank you so much for the comprehensive and insightful reviews. Based on the review comments, we made a significant effort to finish 5 additional ablation studies from scratch within the tight discussion period. We have updated the Appendix with these new results, and more discussions accordin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"nips_2022_IfFZr1gl0b",
"nips_2022_IfFZr1gl0b",
"J4YP9qvto9p",
"-jeGmGfwQQM",
"Zhd-TZoWHQ0",
"PnpE-7HLJh_",
"s9TqG3LkgIwy",
"7wQjSwTgTau",
"VlIgJJsDQp6",
"nips_2022_IfFZr1gl0b",
"pD2kvQRCZ8O",
"BJHsVZ0g2ot",
"Uv-HpgJzDqK",
"3djj7SBxF8H",
"jZWLM_S3Kq1",
"nips_2022_IfFZr1gl0b",
"nips_2... |
nips_2022_k6WzeLZjxuP | Factored DRO: Factored Distributionally Robust Policies for Contextual Bandits | While there has been extensive work on learning from offline data for contextual multi-armed bandit settings, existing methods typically assume there is no environment shift: that the learned policy will operate in the same environmental process as that of data collection. However, this assumption may limit the use of these methods for many practical situations where there may be distribution shifts. In this work we propose Factored Distributionally Robust Optimization (Factored-DRO), which is able to separately handle distribution shifts in the context distribution and shifts in the reward generating process. Prior work that either ignores potential shifts in the context, or considers them jointly, can lead to performance that is too conservative, especially under certain forms of reward feedback. Our Factored-DRO objective mitigates this by considering the shifts separately, and our proposed estimators are consistent and converge asymptotically. We also introduce a practical algorithm and demonstrate promising empirical results in environments based on real-world datasets, such as voting outcomes and scene classification. | Accept | This paper studies off-policy learning with an environment shift, where the distributions of both contexts and rewards can change. The authors address both challenges in a factored form, and derive error bounds for both off-policy evaluation and optimization. The proposed approach is evaluated on real-world datasets. The original ratings of the paper were 6, 6, 6, 6, and 5; and they did not change after the rebuttal. The reviewers generally praised the paper and their main concerns were addressed in the rebuttal:
* Asymptotic errors bounds were replaced with finite-sample guarantees.
* Synthetic shifts in the distributions in experiments were complemented with actual shifts in data.
This is a good paper to accept and I believe that the authors will improve the paper further based on the feedback of the reviewers. | train | [
"NnMnTwiIesQ",
"ufWkYWJ3G_M",
"HRRwU43kR17",
"7c_XPY4MN5f",
"b5BQ9Uycu4X",
"xE2uyRVReonm",
"mZMd7DHewZC",
"F4FPeMp4a5Y",
"igXFAJr2UK",
"pi1taCsTLy",
"UujdRhgOhyO",
"qcjuBHyOtyX",
"ZzDEQ-KIaNH",
"QQ2tH3Wa6Uo",
"eUnDPEPDJpq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification. I do not have further questions. ",
" The response addressed my primary concerns, and filled all the technical \"holes\" I could point out in the review. I do think much of the technical content overlaps quite a bit with the work by Si et al. (2020), but the authors have demonstrat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3,
3
] | [
"xE2uyRVReonm",
"igXFAJr2UK",
"F4FPeMp4a5Y",
"nips_2022_k6WzeLZjxuP",
"QQ2tH3Wa6Uo",
"ZzDEQ-KIaNH",
"qcjuBHyOtyX",
"eUnDPEPDJpq",
"UujdRhgOhyO",
"eUnDPEPDJpq",
"nips_2022_k6WzeLZjxuP",
"nips_2022_k6WzeLZjxuP",
"nips_2022_k6WzeLZjxuP",
"nips_2022_k6WzeLZjxuP",
"nips_2022_k6WzeLZjxuP"
] |
nips_2022_25XIE30VHZE | SecureFedYJ: a safe feature Gaussianization protocol for Federated Learning | The Yeo-Johnson (YJ) transformation is a standard parametrized per-feature unidimensional transformation often used to Gaussianize features in machine learning. In this paper, we investigate the problem of applying the YJ transformation in a cross-silo Federated Learning setting under privacy constraints. For the first time, we prove that the YJ negative log-likelihood is in fact convex, which allows us to optimize it with exponential search. We numerically show that the resulting algorithm is more stable than the state-of-the-art approach based on the Brent minimization method. Building on this simple algorithm and Secure Multiparty Computation routines, we propose SECUREFEDYJ, a federated algorithm that performs a pooled-equivalent YJ transformation without leaking more information than the final fitted parameters do. Quantitative experiments on real data demonstrate that, in addition to being secure, our approach reliably normalizes features across silos as well as if data were pooled, making it a viable approach for safe federated feature Gaussianization. | Accept | This paper studies data preprocessing in the federated learning setting.
It proposes a simple and elegant algorithm for performing a Yeo-Johnson (YJ) power transform on univariate numerical data. This is a nonlinear transform intended to make the data more like a Gaussian.
The paper shows that the likelihood objective is convex and that we can sign its derivative using linear aggregates, which can be performed using secure multiparty computation. This permits us to optimize the transformation using binary search / exponential search. In the honest-but-curious setting, this exponential search does not reveal any superfluous information.
Overall, the reviewers thought this was a valuable contribution and that the paper is well-written and technically sound and that the experimental evaluation is adequate.
However, the reviewers did identify some limitations and directions for further work:
- *Trust Model.* The paper only considers the honest-but-curious setting. What goes wrong if we consider a more powerful adversary? Can we provide further guarantees?
- *Leakage via final output.* There is still the possibility of leaking sensitive information via the final parameters $\lambda$, $\mu$, & $\sigma$. This is beyond the scope of MPC guarantees, but it raises the question of whether these techniques can be combined with tools like differential privacy to address this concern.
- *Efficiency.* The overhead of the system is still quite high in terms of rounds of interaction and total data transfer. The paper appropriately discusses this limitation.
- *Heterogeneous data.* The paper discusses data heterogeneity, but it is unclear how relevant this is. The algorithm only uses linear aggregates over the coalition, so it shouldn't matter if the data is homogeneous or heterogeneous. Figure 3 compares these two settings, but the results appear to be exactly the same. As such, this figure could be removed from the camera ready version.
On balance, this paper seems appropriate for acceptance at NeurIPS. Data preprocessing is an important task (that is often underappreciated) and the paper proposes and interesting method for doing this in the federated learning setting. | train | [
"Z9NZorGKg15",
"AEERKlWjuC",
"ERUN2zvYqI",
"SU7O2joWkku",
"KNjX92_lCKk",
"ms8zqPx-92h",
"_7YB2GYO2np",
"RJAgRPulBsX",
"aWJtiUnYaYT",
"yu9wtI_fnGd",
"7lSIWWj2H-M",
"Ex7JhV9lkqX",
"MuZjO20dQ9C",
"NTZ0iVfGk4",
"e4ZsMUrZsF",
"XO1-_jvcNhA",
"wvfxxiPZL57",
"GWEx3nQzeWj",
"Gh6_OpSmxnR",... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
 " Dear reviewer W31E,\n\nThe discussion deadline is approaching, and we would like to know whether our detailed answer successfully addresses your remarks and questions. If so, we would appreciate it if you could reconsider your evaluation. Otherwise, we would be happy to discuss further with you any remain... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
4
] | [
"Gh6_OpSmxnR",
"KNjX92_lCKk",
"ms8zqPx-92h",
"wvfxxiPZL57",
"Ex7JhV9lkqX",
"_7YB2GYO2np",
"RJAgRPulBsX",
"nips_2022_25XIE30VHZE",
"NTZ0iVfGk4",
"7lSIWWj2H-M",
"nips_2022_25XIE30VHZE",
"MuZjO20dQ9C",
"jN0HJno7gCx",
"MdU9DtLjATH",
"XO1-_jvcNhA",
"Gh6_OpSmxnR",
"GWEx3nQzeWj",
"nips_20... |
nips_2022_Fytzfxj3Bq7 | Fixed-Distance Hamiltonian Monte Carlo | We propose a variation of the Hamiltonian Monte Carlo sampling (HMC) where the equations of motion are simulated for a fixed traversed distance rather than the conventional fixed simulation time. This new mechanism tends to generate proposals that have higher target probability values. The momentum distribution that is naturally joint with our Fixed-Distance HMC (FDHMC), and keeps the proposal acceptance probability close to 1, is not Gaussian and generates momentums that have a higher expected magnitude. This translates into a reduced correlation between the successive MCMC states and according to our experimental results, leads to an improvement in terms of the effective sample size per gradient when compared to the baseline HMC and No-U-Turn (NUTS) samplers.
 | Accept | This paper proposes a new variant of Hamiltonian Monte Carlo. Rather than using a fixed number of iterations (as in the original HMC) or choosing the trajectory length adaptively (as in NUTS), the paper simulates the dynamics until a fixed *distance* has been traversed. The paper gives some arguments for why this might be a good idea and a careful proof of detailed balance for the newly proposed algorithm. Reviewers agreed the algorithm seemed correct and the numerical results were compelling, but there were some remaining concerns about the implementation of the baseline algorithms and some questions about the technical details of the algorithm. Given the importance of HMC and the novelty and agreed correctness of this work, I recommend acceptance and urge the authors to consider the clarifying questions asked by the reviewers as opportunities for improving the paper. Additional evidence for the experiments (e.g. comparing to another implementation of NUTS/HMC) would also be helpful. | train | [
"H_84q325puW",
"O0Wzu8N-I8u",
"wBjJi9rPVKj",
"YCmkDBA-Slf",
"oJZnkobyfuk",
"YSvVnPfeSTE",
"5C995JeVCh",
"vvdxHanOFhO",
"GJw-o8ItF6",
"agMZgsxcW86"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
 " Once again, thanks for your suggestions and the time spent reviewing our paper.\nWhile we can still communicate, if you have a particular numerical experiment in mind or could refer us to a related theoretical study that would potentially add to the value of the paper, we would appreciate it if you could share i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"O0Wzu8N-I8u",
"vvdxHanOFhO",
"YCmkDBA-Slf",
"oJZnkobyfuk",
"agMZgsxcW86",
"GJw-o8ItF6",
"vvdxHanOFhO",
"nips_2022_Fytzfxj3Bq7",
"nips_2022_Fytzfxj3Bq7",
"nips_2022_Fytzfxj3Bq7"
] |
nips_2022_BbUxkmrstyk | An Investigation into Whitening Loss for Self-supervised Learning | A desirable objective in self-supervised learning (SSL) is to avoid feature collapse. Whitening loss guarantees collapse avoidance by minimizing the distance between embeddings of positive pairs under the conditioning that the embeddings from different views are whitened. In this paper, we propose a framework with an informative indicator to analyze whitening loss, which provides a clue to demystify several interesting phenomena as well as a pivoting point connecting to other SSL methods. We reveal that batch whitening (BW) based methods do not impose whitening constraints on the embedding, but they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse. Based on our analysis, we propose channel whitening with random group partition (CW-RGP), which exploits the advantages of BW-based methods in preventing collapse and avoids their disadvantage of requiring large batch sizes. Experimental results on ImageNet classification and COCO object detection reveal that the proposed CW-RGP possesses a promising potential for learning good representations. The code is available at https://github.com/winci-ai/CW-RGP. | Accept | This paper studies the impact of whitening losses used in recent self-supervised learning (SSL) methods. It shows that the symmetric whitening loss can be decomposed into two asymmetric losses, explaining important behaviour observed experimentally (*e.g.* why some whitening transformations, such as PCA, are not always effective, and why the whitened output is not always a good representation). The authors propose channel whitening with random group partition (CW-RGP) with good experimental performance.
The paper initially received mixed reviews: 2 borderline reject and 2 accept recommendations. The reviewers' main concerns related to the soundness of the proposed analysis of the whitening loss, the fairness of the experiments, and the positioning and comparison to other recent baselines (*e.g.* [47], DINO or SwAV). The rebuttal did a good job of answering the reviewers' comments, and there was a consensus that the paper should be accepted after the rebuttal.
The AC's own reading of the submission confirmed that the paper is a solid contribution for NeurIPS. The AC considers that the proposed analysis of whitening loss is valuable for the community, and that the proposed CW-RGP is meaningful and experimentally validated. Thus, the AC recommends acceptance.
The reviewer's consensus for acceptance has been reached given the authors' feedback including promises to improve the clarity and insights of the analysis, and regarding the positioning with recent baselines and new experimental results. Therefore, the authors are highly encouraged to carefully include these improvements into the final paper.
| train | [
"ZOd4wpYBh_0",
"Zk7WxqeJ4c",
"wgP6FqqPi6H",
"F_1GL0xSTNT",
"hQRmBL1L1Q_",
"lu3oGRgA9Fl",
"x_GHY4_Czu-",
"TxJEg1BTej",
"VVaP_YTVg4j",
"9qN8891QCin",
"rSugjUExorR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the rebuttal, actually I don't have a lot of hesitation to accept this paper with well writing and sufficient experiments. If only 4 GPUs is the truth, I hope in the future with released code, there can be future job to make it finished. I still hold my rating as 7. Accept.",
" Thank you for the rebu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"TxJEg1BTej",
"hQRmBL1L1Q_",
"x_GHY4_Czu-",
"nips_2022_BbUxkmrstyk",
"lu3oGRgA9Fl",
"VVaP_YTVg4j",
"rSugjUExorR",
"9qN8891QCin",
"nips_2022_BbUxkmrstyk",
"nips_2022_BbUxkmrstyk",
"nips_2022_BbUxkmrstyk"
] |
nips_2022_q16HXpXtjJn | Beyond the Best: Distribution Functional Estimation in Infinite-Armed Bandits | In the infinite-armed bandit problem, each arm's average reward is sampled from an unknown distribution, and each arm can be sampled further to obtain noisy estimates of the average reward of that arm. Prior work focuses on the best arm, i.e. estimating the maximum of the average reward distribution. We consider a general class of distribution functionals beyond the maximum and obtain optimal sample complexities in both offline and online settings. We show that online estimation, where the learner can sequentially choose whether to sample a new or existing arm, offers no advantage over the offline setting for estimating the mean functional, but significantly reduces the sample complexity for other functionals such as the median, maximum, and trimmed mean. We propose unified meta algorithms for the online and offline settings and derive matching lower bounds using different Wasserstein distances. For the special case of median estimation, we identify a curious thresholding phenomenon on the indistinguishability between Gaussian convolutions with respect to the noise level, which may be of independent interest. | Accept | This paper studies offline and online statistical estimation in the infinite-armed bandit setting and gives a set of almost tight upper and lower bounds on the sample complexity. Initially, some reviewers raised concerns about the motivation of general functional estimation and the comparison with existing statistical estimation literature. The authors have made efforts to address these issues; their arguments seem reasonable.
"Ogzh9cfnOHs",
"86XqYg89zpF",
"lLb9Q-m1GlN",
"GTYfpTwAMH8-",
"KKaP9Sl7USj",
"DV6Km_RYIWH",
"byEiXHS38DR",
"4IxRoxDD8KdM",
"mG0cG704W3",
"F3TAi0UoVC",
"3nH9STRkPrS",
"DWH1ydaTvTn",
"uZ-fNyiPED"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for your detailed comments and modification of your draft.\nI understand your use of the Doubling trick and its relevance to the related research. Although I still believe the work shouldn't be reviewed solely from the bandit's perspective, I acknowledge the contribution. I raised the score to 5.\n\nB... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"lLb9Q-m1GlN",
"byEiXHS38DR",
"DV6Km_RYIWH",
"uZ-fNyiPED",
"DWH1ydaTvTn",
"3nH9STRkPrS",
"F3TAi0UoVC",
"mG0cG704W3",
"nips_2022_q16HXpXtjJn",
"nips_2022_q16HXpXtjJn",
"nips_2022_q16HXpXtjJn",
"nips_2022_q16HXpXtjJn",
"nips_2022_q16HXpXtjJn"
] |
nips_2022_4-bV1bi74M | 🏘️ ProcTHOR: Large-Scale Embodied AI Using Procedural Generation | Massive datasets and high-capacity models have driven many recent advancements in computer vision and natural language understanding. This work presents a platform to enable similar success stories in Embodied AI. We propose ProcTHOR, a framework for procedural generation of Embodied AI environments. ProcTHOR enables us to sample arbitrarily large datasets of diverse, interactive, customizable, and performant virtual environments to train and evaluate embodied agents across navigation, interaction, and manipulation tasks. We demonstrate the power and potential of ProcTHOR via a sample of 10,000 generated houses and a simple neural model. Models trained using only RGB images on ProcTHOR, with no explicit mapping and no human task supervision produce state-of-the-art results across 6 embodied AI benchmarks for navigation, rearrangement, and arm manipulation, including the presently running Habitat 2022, AI2-THOR Rearrangement 2022, and RoboTHOR challenges. We also demonstrate strong 0-shot results on these benchmarks, via pre-training on ProcTHOR with no fine-tuning on the downstream benchmark, often beating previous state-of-the-art systems that access the downstream training data. | Accept | *Summary*
The paper presents ProcThor, a framework to generate interactive 3D environments from an underlying distribution of room and object layouts. In the current work, 10000 3D environments of varying sizes, # rooms, and object distributions are sampled and enable simulation for object search and manipulation tasks. The experiments demonstrate that training policies on the synthetically generated environments and then finetuning them on other datasets like RoboThor, AI2Thor, and Habitat-Matterport3D leads to state-of-the-art performance. Furthermore, the ablation experiments reveal that the transfer task performance continues to improve as more and more virtual 3D environments are sampled for training. Finally, 'zero-shot' experiments (without finetuning) are included with surprisingly good results.
*Reviews*
The reviewers' ratings are 4 (borderline reject), 8 (strong accept) and 9 (very strong accept).
Reviewer L9s9 (borderline reject) did not respond to the rebuttal, but their main concerns were:
- (W1) the paper should be submitted to the datasets track
- (W2) the paper should include experiments with RGB-D sensors, and
- (W3) the paper should include experiments with other procedural generation schemes.
*Decision*
I have already discussed and dismissed concern W1 in my comments below. Regarding W2 and W3, these are asks for more experiments. The authors provided reasonable justifications and responses to these concerns. Further, I'm persuaded by the other reviewers that the paper already represents a 'massive feat of engineering' and note that the experiments already cover both zero-shot and finetuning settings on multiple leaderboards. Therefore I am putting less weight on L9s9's low rating.
AqAB and QxMz are united that this paper and the associated code release will have a significant impact on the Embodied AI community and I agree. The paper is also acknowledged as being extremely well-written, with very thorough experiments, and I feel it could be considered for an outstanding paper award.
| train | [
"C5tBcHGGlq4",
"oU8JAC-4E8w",
"MmIkCK1y5Ib",
"0lqBe2AFFTZ",
"VmKms6fvNIwO",
"Uq2dKEhlCyi",
"FVA96BnGFoj",
"HYArW8bFU1WE",
"O_lQ55QpsQk",
"Vsi1qwnDNKb",
"iRB6vYvIGNGt",
"lYPczfQCw1D",
"iG-XnnsV3kB",
"quTNP5JCuUh",
"XgNJZM4m4y",
"hPkFpUvkv-M",
"arB0LJ1_MrB",
"JXpjVKrsxCj",
"g8VaCGU... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" We thank the reviewer for their thoughtful feedback and support of the work.\n\n> I'd recommend adding the response to W.5 into the paper to make it slightly stronger.\n\nThank you for the suggestion. We agree and have added the results to Section H of the Appendix.",
" We thank the reviewer for their detailed ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"HYArW8bFU1WE",
"Vsi1qwnDNKb",
"VmKms6fvNIwO",
"XgNJZM4m4y",
"iRB6vYvIGNGt",
"Vsi1qwnDNKb",
"quTNP5JCuUh",
"iG-XnnsV3kB",
"lYPczfQCw1D",
"JXpjVKrsxCj",
"nips_2022_4-bV1bi74M",
"XgNJZM4m4y",
"XgNJZM4m4y",
"XgNJZM4m4y",
"NihVzPgZSSY",
"g8VaCGU4CH8",
"g8VaCGU4CH8",
"g8VaCGU4CH8",
"0... |
nips_2022_2GsQ8dyfe45 | M$^4$I: Multi-modal Models Membership Inference | With the development of machine learning techniques, the attention of research has been moved from single-modal learning to multi-modal learning, as real-world data exist in the form of different modalities. However, multi-modal models often carry more information than single-modal models and they are usually applied in sensitive scenarios, such as medical report generation or disease identification. Compared with the existing membership inference against machine learning classifiers, we focus on the problem that the input and output of the multi-modal models are in different modalities, such as image captioning. This work studies the privacy leakage of multi-modal models through the lens of membership inference attack, a process of determining whether a data record involves in the model training process or not. To achieve this, we propose Multi-modal Models Membership Inference (M$^4$I) with two attack methods to infer the membership status, named metric-based (MB) M$^4$I and feature-based (FB) M$^4$I, respectively. More specifically, MB M$^4$I adopts similarity metrics while attacking to infer target data membership. FB M$^4$I uses a pre-trained shadow multi-modal feature extractor to achieve the purpose of data inference attack by comparing the similarities from extracted input and output features. Extensive experimental results show that both attack methods can achieve strong performances. Respectively, 72.5% and 94.83% of attack success rates on average can be obtained under unrestricted scenarios. Moreover, we evaluate multiple defense mechanisms against our attacks. The source code of M$^4$I attacks is publicly available at https://github.com/MultimodalMI/Multimodal-membership-inference.git. | Accept | This work studies membership inference attacks for multimodal models. It proposes a few different attacks under different assumptions on the attack model, and evaluates them empirically.
The reviewers found the problem interesting and the paper well-written. The paper is a welcome addition to the literature on membership inference attacks and should be of interest to this conference. I would encourage the authors to address the reviewers' feedback in the final version. I recommend acceptance.
"g7vgPMYGERb",
"6dTs5a-7czX",
"sak57AaABQ0",
"_d9nRmTYGIb",
"XQ-VTs3YMMu",
"MSIN-Yr3rF3",
"oRG6jjzrVC",
"MZs-PsYHmMS"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your comment. As we responded previously, we considered METEOR [r2], a metric based on precision and recall scores, as an additional metric, but there is not much difference in terms of performance in comparison to what the paper used. Due to little difference in terms of performance and space limitati... | [
-1,
-1,
-1,
-1,
-1,
6,
4,
9
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"6dTs5a-7czX",
"XQ-VTs3YMMu",
"MZs-PsYHmMS",
"oRG6jjzrVC",
"MSIN-Yr3rF3",
"nips_2022_2GsQ8dyfe45",
"nips_2022_2GsQ8dyfe45",
"nips_2022_2GsQ8dyfe45"
] |
nips_2022_qtQ9thon9fV | FOF: Learning Fourier Occupancy Field for Monocular Real-time Human Reconstruction | The advent of deep learning has led to significant progress in monocular human reconstruction. However, existing representations, such as parametric models, voxel grids, meshes and implicit neural representations, have difficulties achieving high-quality results and real-time speed at the same time. In this paper, we propose Fourier Occupancy Field (FOF), a novel, powerful, efficient and flexible 3D geometry representation, for monocular real-time and accurate human reconstruction. A FOF represents a 3D object with a 2D field orthogonal to the view direction where at each 2D position the occupancy field of the object along the view direction is compactly represented with the first few terms of Fourier series, which retains the topology and neighborhood relation in the 2D domain. A FOF can be stored as a multi-channel image, which is compatible with 2D convolutional neural networks and can bridge the gap between 3D geometries and 2D images. A FOF is very flexible and extensible, \eg, parametric models can be easily integrated into a FOF as a prior to generate more robust results. Meshes and our FOF can be easily inter-converted. Based on FOF, we design the first 30+FPS high-fidelity real-time monocular human reconstruction framework. We demonstrate the potential of FOF on both public datasets and real captured data. The code is available for research purposes at http://cic.tju.edu.cn/faculty/likun/projects/FOF. | Accept | This paper received 3 positive reviews: 2xBA + A. All reviewers acknowledged that this work introduces meaningful and non-trivial contributions, it is well presented, and the claims are supported by strong empirical performance. The remaining questions and concerns were addressed in the authors' responses, which seemed convincing to the reviewers.
The final recommendation is therefore to accept. | val | [
"M6rqQb7-hlx",
"9eN2CqumnDV",
"W9AraIXswtX",
"UMGP2SxeYW",
"bLeFpXIu81w",
"5PlE-6bsopa",
"gvHTgxYFTWh",
"dHBS8kLaRHP",
"b76tx2saz3"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank authors' time and effort to answer my questions. I think the newly-added results are informative and demonstrate FOF's robustness. I maintain my score for this work.",
" Dear Reviewers:\n\nThank you very much for your time and effort in reviewing our paper. It is less than **15 hours** before the end of... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"W9AraIXswtX",
"nips_2022_qtQ9thon9fV",
"b76tx2saz3",
"dHBS8kLaRHP",
"dHBS8kLaRHP",
"gvHTgxYFTWh",
"nips_2022_qtQ9thon9fV",
"nips_2022_qtQ9thon9fV",
"nips_2022_qtQ9thon9fV"
] |
nips_2022_pfEIGgDstz0 | Non-rigid Point Cloud Registration with Neural Deformation Pyramid | Non-rigid point cloud registration is a key component in many computer vision and computer graphics applications. The high complexity of the unknown non-rigid motion makes this task a challenging problem. In this paper, we break down this problem via hierarchical motion decomposition. Our method called Neural Deformation Pyramid (NDP) represents non-rigid motion using a pyramid architecture. Each pyramid level, denoted by a Multi-Layer Perceptron (MLP), takes as input a sinusoidally encoded 3D point and outputs its motion increments from the previous level. The sinusoidal function starts with a low input frequency and gradually increases when the pyramid level goes down. This allows a multi-level rigid to nonrigid motion decomposition and also speeds up the solving by ×50 times compared to the existing MLP-based approach. Our method achieves advanced partial-to-partial non-rigid point cloud registration results on the 4DMatch/4DLoMatch benchmark under both no-learned and supervised settings. | Accept | All reviewers agree this work is a creative approach to nonrigid registration, which is particularly hard in the mesh-free point cloud setting. The discussion between the authors and reviewers was extremely productive and addressed most of the major concerns about this work.
In preparing the camera ready, the authors are encouraged to incorporate the new tables of results that appear in the discussion below into the paper and/or supplemental materials, to make sure it is archived. | train | [
"MQybYHIBaoK",
"FM4SicqE5j0",
"ZpWBxyYEaEm",
"4A-yYQcXMzar",
"LJ5AqphgpjB",
"HUvm3_PYhH8",
"oG-dtzjSYOp",
"gsKgJsUNiO8Z",
"Zyhj4IcUCl_",
"vXqyWKZso6A",
"Vuutvcw8Xmp",
"tw3X98-zvZ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The author feedback addresses most of my concerns. Taking into account the comments from other reviewers and the author feedback, I am inclined to accept this paper, but rejecting would it not be that bad.\n",
" I thank the authors for their effort to address my concerns.\n\nI have carefully read the rebuttal a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"HUvm3_PYhH8",
"gsKgJsUNiO8Z",
"nips_2022_pfEIGgDstz0",
"vXqyWKZso6A",
"Zyhj4IcUCl_",
"Vuutvcw8Xmp",
"Vuutvcw8Xmp",
"tw3X98-zvZ",
"nips_2022_pfEIGgDstz0",
"nips_2022_pfEIGgDstz0",
"nips_2022_pfEIGgDstz0",
"nips_2022_pfEIGgDstz0"
] |
nips_2022_YgK1wNnoCWy | Green Hierarchical Vision Transformer for Masked Image Modeling | We present an efficient approach for Masked Image Modeling (MIM) with hierarchical Vision Transformers (ViTs), allowing the hierarchical ViTs to discard masked patches and operate only on the visible ones. Our approach consists of three key designs. First, for window attention, we propose a Group Window Attention scheme following the Divide-and-Conquer strategy. To mitigate the quadratic complexity of the self-attention w.r.t. the number of patches, group attention encourages a uniform partition that visible patches within each local window of arbitrary size can be grouped with equal size, where masked self-attention is then performed within each group. Second, we further improve the grouping strategy via the Dynamic Programming algorithm to minimize the overall computation cost of the attention on the grouped patches. Third, as for the convolution layers, we convert them to the Sparse Convolution that works seamlessly with the sparse data, i.e., the visible patches in MIM. As a result, MIM can now work on most, if not all, hierarchical ViTs in a green and efficient way. For example, we can train the hierarchical ViTs, e.g., Swin Transformer and Twins Transformer, about 2.7$\times$ faster and reduce the GPU memory usage by 70%, while still enjoying competitive performance on ImageNet classification and the superiority on downstream COCO object detection benchmarks. | Accept | After rebuttal and discussion all reviewers recommend acceptance. The AC sees no reason to overturn this recommendation. | test | [
"9knBlOMv5vo",
"qUIjtjDPra",
"fepmyKQ5536",
"gpzBZbAFkZjT",
"FYcMqSoLXOP",
"SXh5eukv0f9",
"XghQFxBTqo",
"ykX2LO-Ow2P",
"4_wUAaZAYds",
"U_R7fPslxvb",
"wqbox9KFCB",
"Nw8YQrjs_Y",
"xgcy7J31I-"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I do not have further questions. ",
" Dear reviewer, thanks again for the careful reviews and constructive suggestions. We have provided additional explanations and experiments to address the concerns accordingly. Since the deadline of the author-reviewer discussion period is approaching soon, we would like to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"fepmyKQ5536",
"xgcy7J31I-",
"wqbox9KFCB",
"FYcMqSoLXOP",
"SXh5eukv0f9",
"4_wUAaZAYds",
"xgcy7J31I-",
"xgcy7J31I-",
"Nw8YQrjs_Y",
"wqbox9KFCB",
"nips_2022_YgK1wNnoCWy",
"nips_2022_YgK1wNnoCWy",
"nips_2022_YgK1wNnoCWy"
] |
nips_2022_L8ESR8IQ7Gb | Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models | Transfer learning aims to leverage knowledge from pre-trained models to benefit the target task. Prior transfer learning work mainly transfers from a single model. However, with the emergence of deep models pre-trained from different resources, model hubs consisting of diverse models with various architectures, pre-trained datasets and learning paradigms are available. Directly applying single-model transfer learning methods to each model wastes the abundant knowledge of the model hub and suffers from high computational cost. In this paper, we propose a Hub-Pathway framework to enable knowledge transfer from a model hub. The framework generates data-dependent pathway weights, based on which we assign the pathway routes at the input level to decide which pre-trained models are activated and passed through, and then set the pathway aggregation at the output level to aggregate the knowledge from different models to make predictions. The proposed framework can be trained end-to-end with the target task-specific loss, where it learns to explore better pathway configurations and exploit the knowledge in pre-trained models for each target datum. We utilize a noisy pathway generator and design an exploration loss to further explore different pathways throughout the model hub. To fully exploit the knowledge in pre-trained models, each model is further trained by specific data that activate it, which ensures its performance and enhances knowledge transfer. Experiment results on computer vision and reinforcement learning tasks demonstrate that the proposed Hub-Pathway framework achieves the state-of-the-art performance for model hub transfer learning. | Accept | The submission introduces an approach called Hub-Pathway to leverage a diverse collection of pre-trained models for transfer learning. 
Hub-Pathway trains a pathway generator network to route examples to various models in a data-dependent manner and aggregates the outputs to produce task-specific predictions. Noise is added to the pathway generator and its output is entropy-regularized to encourage exploration, and the activated models are also individually trained on the target loss to encourage exploitation. The approach is evaluated on several image classification, facial landmark detection, and reinforcement learning tasks.
Reviewers noted the paper's clarity and writing quality and found the empirical evaluation extensive and convincing. On the other hand, they expressed doubts regarding Hub-Pathway's computational and memory complexity at training and inference time, in particular in comparison to the alternative of using an ensemble of models.
The authors responded by citing existing results in the submission and providing new results in the revised appendix showing that Hub-Pathway does in fact have lower computational complexity than an ensemble. They also argued through additional results that the forward propagation through multiple models (as opposed to holding the parameters of multiple models in memory) is the main memory bottleneck (which model ensembles also face), and that Hub-Pathway does better in that regard than model ensembles due to the pathway activation mechanism.
Overall the authors' response was satisfying to the reviewers, and their consensus is that the submission should be accepted. I therefore recommend acceptance. | train | [
"CxH3ftHiiby",
"CrCDkj8cjGL",
"L7GpOjjRwrj",
"IfLd_E8SpNo",
"LLYwpOcUXL9",
"B7P8nHXMP2G",
"hBeNdgGU0Jj",
"zfPPXbcvjjC",
"c-d617Lvv9N",
"WU568XNnBgc",
"QX_gju7Utx8",
"g3h89MCecwz",
"szBTdYCgwSq",
"k00HPMBPmOL",
"ha85E6HloL",
"UZhDxIUX-sZ",
"3Ww_gLjpWmt",
"AlBVBHHhzld",
"NOCCB3QKli... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Thank you for providing inspiring comments and timely responses. ",
" Dear Authors,\n\nThank you for responding to the questions. This work is really interesting, and I, therefore, stand by my (accepting) recommendation.",
" Many thanks for your efforts in reviewing our paper and responses, and your comments ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
5
] | [
"CrCDkj8cjGL",
"gwr53nFE0St",
"IfLd_E8SpNo",
"sXXkI7jCY-o",
"B7P8nHXMP2G",
"hBeNdgGU0Jj",
"zfPPXbcvjjC",
"WU568XNnBgc",
"QX_gju7Utx8",
"QX_gju7Utx8",
"AlBVBHHhzld",
"szBTdYCgwSq",
"k00HPMBPmOL",
"ha85E6HloL",
"UZhDxIUX-sZ",
"X0GuEKBzoOC",
"fbXyWBj6rt9",
"fbXyWBj6rt9",
"sXXkI7jCY-... |
nips_2022_rcrY85WLAKU | Cost-efficient Gaussian tensor network embeddings for tensor-structured inputs | This work discusses tensor network embeddings, which are random matrices ($S$) with tensor network structure. These embeddings have been used to perform dimensionality reduction of tensor network structured inputs $x$ and accelerate applications such as tensor decomposition and kernel regression. Existing works have designed embeddings for inputs $x$ with specific structures, such as the Kronecker product or Khatri-Rao product, such that the computational cost for calculating $Sx$ is efficient. We provide a systematic way to design tensor network embeddings consisting of Gaussian random tensors, such that for inputs with more general tensor network structures, both the sketch size (row size of $S$) and the sketching computational cost are low.
We analyze general tensor network embeddings that can be reduced to a sequence of sketching matrices. We provide a sufficient condition to quantify the accuracy of such embeddings and derive sketching asymptotic cost lower bounds using embeddings that satisfy this condition and have a sketch size lower than any input dimension. We then provide an algorithm to efficiently sketch input data using such embeddings. The sketch size of the embedding used in the algorithm has a linear dependence on the number of sketching dimensions of the input. Assuming tensor contractions are performed with classical dense matrix multiplication algorithms, this algorithm achieves asymptotic cost within a factor of $O(\sqrt{m})$ of our cost lower bound, where $m$ is the sketch size. Further, when each tensor in the input has a dimension that needs to be sketched, this algorithm yields the optimal sketching asymptotic cost. We apply our sketching analysis to inexact tensor decomposition optimization algorithms. We provide a sketching algorithm for CP decomposition that is asymptotically faster than existing work in multiple regimes, and show the optimality of an existing algorithm for tensor train rounding.
| Accept | Summary:
The major strength is that the sketch size is polynomial in the number of modes for tensor train. This was not known in previous work, for example in the paper by Rakhshan and Rabusseau https://arxiv.org/abs/2003.05101 which is reference 38 and gets a sketch size which is exponential in the number of modes. There is concurrent work that also gets this here: https://arxiv.org/abs/2207.07417
Theorem 4.3 seems like it could be of independent interest. It shows that under reasonable assumptions, assuming the contraction order of the data tensor is fixed, if the embedding is a tree, then this gets the optimal running time - more complex embeddings are not needed (even if the data tensor has cycles). Section 4 is focused on giving an algorithm such that, given a data tensor and an embedding, and a fixed contraction order for the data tensor, applies the embedding to the data tensor with the smallest possible asymptotic running time. Not sure if previous work has studied this question, however.
One weakness is that the work doesn't seem to compare to previous work related to Section 4, or to running times for subspace embeddings for tensor networks. Their CP decomposition algorithm is also incomparable to the other CP decomposition algorithms they mention, since their per-iteration running time for ALS has a 1/eps^5 term. Perhaps if s (the size of each dimension of the tensor) is much bigger than N (the number of modes) or R (the rank), then the second term in the running time should dominate, meaning that the algorithm in this paper would still be significantly better than Recursive LSS (by a factor of NR, as they mention on page 8), so this might not be a significant weakness.
Evaluation:
Based on the reviews and my understanding, I think this meets the bar for NeurIPS. The sketch size for tensor train and the application to CP seem useful, and tensor train rounding seems to be a strong motivation. The significance of Section 4 is not completely clear, but the other results seem to be enough by themselves.
| train | [
"WBcvgLfrJgY",
"r9pR8EC_jno",
"yBqrqWHqkBK",
"0_1e2f6DIEO",
"-sqntrDjLOo",
"F1h__Iu71M5",
"kl_qUJOM6Kd"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for the constructive feedback and great questions! Our comments to your suggestions are as follows:\n\nQ: The intro of the alg details (sec. 4) is hard to follow, and the meaning of U in (4.2) is not clear:\n\nA: Thanks! $(U_i,V_i)$ in (4.2) represents the contraction of two in... | [
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
2,
2,
4
] | [
"kl_qUJOM6Kd",
"F1h__Iu71M5",
"-sqntrDjLOo",
"nips_2022_rcrY85WLAKU",
"nips_2022_rcrY85WLAKU",
"nips_2022_rcrY85WLAKU",
"nips_2022_rcrY85WLAKU"
] |
nips_2022_jPx7vYUNUCt | Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators | The spectacular successes of recurrent neural network models where key parameters are adjusted via backpropagation-based gradient descent have inspired much thought as to how biological neuronal networks might solve the corresponding synaptic credit assignment problem [1, 2, 3]. There is so far little agreement, however, as to how biological networks could implement the necessary backpropagation through time, given widely recognized constraints of biological synaptic network signaling architectures. Here, we propose that extra-synaptic diffusion of local neuromodulators such as neuropeptides may afford an effective mode of backpropagation lying within the bounds of biological plausibility. Going beyond existing temporal truncation-based gradient approximations [4, 5, 6], our approximate gradient-based update rule, ModProp, propagates credit information through arbitrary time steps. ModProp suggests that modulatory signals can act on receiving cells by convolving their eligibility traces via causal, time-invariant and synapse-type-specific filter taps. Our mathematical analysis of ModProp learning, together with simulation results on benchmark temporal tasks, demonstrate the advantage of ModProp over existing biologically-plausible temporal credit assignment rules. These results suggest a potential neuronal mechanism for signaling credit information related to recurrent interactions over a longer time horizon. Finally, we derive an in-silico implementation of ModProp that could serve as a low-complexity and causal alternative to backpropagation through time. | Accept | Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators
The authors propose a biologically plausible method for temporal credit assignment called ModProp. They apply their framework to rate-based recurrent neural networks (RNNs), and show that it outperforms previous approaches.
All reviewers acknowledge that this work studies an interesting topic in computational neuroscience. The authors present compelling experimental results on synthetic data sets.
Weaknesses:
- Long-term dependencies cannot be tackled by this approach, a limitation shared by related approaches.
- Somewhat weak experimental evaluation. Evaluation on more complex standard data sets would be beneficial.
- The arguments for the approximations used are left in the appendix.
In general, an interesting study with good experimental results. I propose acceptance. | train | [
"U4CTdYXjnb8",
"2Cc9uGi9FV",
"jvJ31NsbWS",
"zF3RSqy9Iqu",
"S8walkrW75l",
"EUugZmzV58s",
"6i9x-8ohuvl",
"xWC-8XpDtKt",
"xY7zSa_zSSC",
"EdvAz5nCGsV",
"uoQLvw_FJtK",
"Ohv9vEkCu7Z",
"GIUXNPORPEm",
"TriNcsolIl1L",
"nHsR12K3SJU",
"EzgIWeV72Rp",
"avMofTyYkyN",
"elzW55GFmgJ",
"0-56s89er3... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
" We have updated the submission to remove all blue coloring of texts on August 9th. ",
" We are very grateful to this reviewer for reading our response and taking that into consideration to revise their score. We thank the reviewer again for all their constructive feedback that led to the improvement of this pap... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
4
] | [
"EzgIWeV72Rp",
"jvJ31NsbWS",
"wiUydbJt9xE",
"EUugZmzV58s",
"6i9x-8ohuvl",
"EdvAz5nCGsV",
"seZcY92TfoF",
"xY7zSa_zSSC",
"Ohv9vEkCu7Z",
"uoQLvw_FJtK",
"elzW55GFmgJ",
"nHsR12K3SJU",
"nHsR12K3SJU",
"4MszX1b-X6L",
"4MszX1b-X6L",
"nips_2022_jPx7vYUNUCt",
"Z8EbupXOFME",
"Z8EbupXOFME",
"... |
nips_2022_OlGu-BXgJ- | Wasserstein $K$-means for clustering probability distributions | Clustering is an important exploratory data analysis technique to group objects based on their similarity. The widely used $K$-means clustering method relies on some notion of distance to partition data into a smaller number of groups. In the Euclidean space, centroid-based and distance-based formulations of the $K$-means are equivalent. In modern machine learning applications, data often arise as probability distributions, and a natural generalization to handle measure-valued data is to use the optimal transport metric. Due to the non-negative Alexandrov curvature of the Wasserstein space, barycenters suffer from regularity and non-robustness issues. The peculiar behaviors of Wasserstein barycenters may make the centroid-based formulation fail to represent the within-cluster data points, while the more direct distance-based $K$-means approach and its semidefinite program (SDP) relaxation are capable of recovering the true cluster labels. In the special case of clustering Gaussian distributions, we show that the SDP relaxed Wasserstein $K$-means can achieve exact recovery given the clusters are well-separated under the $2$-Wasserstein metric. Our simulation and real data examples also demonstrate that distance-based $K$-means can achieve better classification performance over the standard centroid-based $K$-means for clustering probability distributions and images. | Accept | This paper provides a Wasserstein-based k-means formulation for clustering probability distributions. Though the overall reception was mildly positive, two reviewers raised their scores following the author feedback. There remain some doubts that there is enough evidence to support all claims in the paper--most relevantly, referees have noted to us that, fundamentally, a strong empirical validation is lacking. 
We recommend revising the paper and focusing on a strong empirical narrative (without relegating new details to the appendix), together with improving the exposition as outlined by the reviewers.
"ASvdpUWTyVy",
"rys1kUGLNs_",
"xXHb2SSYNKQ",
"R4oABfb2SDD1",
"xqqr05hDDuh",
"2O-AUBm9g8q",
"g6I-7EGHfg9",
"vWKvv5lf1hy"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" (__On scalability of our approaches__) The scalability issue is one of our concerns. As we pointed out in Appendix B, we may consider several methods to bring down the time cost e.g. subsampling-based method for K-means and SDP. In the revision, we can observe the time complexity issue for Wasserstein $K$-means m... | [
-1,
-1,
-1,
-1,
4,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"vWKvv5lf1hy",
"g6I-7EGHfg9",
"2O-AUBm9g8q",
"xqqr05hDDuh",
"nips_2022_OlGu-BXgJ-",
"nips_2022_OlGu-BXgJ-",
"nips_2022_OlGu-BXgJ-",
"nips_2022_OlGu-BXgJ-"
] |
nips_2022__4xg5moXVg | Beyond accuracy: generalization properties of bio-plausible temporal credit assignment rules | To unveil how the brain learns, ongoing work seeks biologically-plausible approximations of gradient descent algorithms for training recurrent neural networks (RNNs). Yet, beyond task accuracy, it is unclear if such learning rules converge to solutions that exhibit different levels of generalization than their non-biologically-plausible counterparts. Leveraging results from deep learning theory based on loss landscape curvature, we ask: how do biologically-plausible gradient approximations affect generalization? We first demonstrate that state-of-the-art biologically-plausible learning rules for training RNNs exhibit worse and more variable generalization performance compared to their machine learning counterparts that follow the true gradient more closely. Next, we verify that such generalization performance is correlated significantly with loss landscape curvature, and we show that biologically-plausible learning rules tend to approach high-curvature regions in synaptic weight space. Using tools from dynamical systems, we derive theoretical arguments and present a theorem explaining this phenomenon. This predicts our numerical results, and explains why biologically-plausible rules lead to worse and more variable generalization properties. Finally, we suggest potential remedies that could be used by the brain to mitigate this effect. To our knowledge, our analysis is the first to identify the reason for this generalization gap between artificial and biologically-plausible learning rules, which can help guide future investigations into how the brain learns solutions that generalize. | Accept | This paper applied ideas about generalization in the ML literature to biologically plausible architectures and learning rules. Especially, it explored links between curvature and generalization in biologically plausible learning.
There was active discussion about this paper, and three reviewers raised their scores during the rebuttal period. All reviewers felt this was a high quality paper, and that the results would be useful for later research. The closest thing to a criticism that came up during discussion was one reviewer describing the paper as a "high quality but incremental addition to the scientific literature."
Based upon the reviews, rebuttal, and reviewer discussion, I recommend paper acceptance. The authors should be sure to update their paper as discussed during the rebuttal period, and based upon the reviewer feedback.
PS -- Links between TBTT and loss surface curvature would also be of interest in learned optimization and meta-learning more broadly, where meta-training is often performed via truncated unrolls of the inner problem. | train | [
"G51mBY7UFxo",
"qF9Ik0Ie5KE",
"Ll-g0lkzSBq",
"oTFL7nwrVaQ",
"ZjrJcehtem",
"5BGtUiXWCj7",
"5Y4DoFNE5uv",
"yzVlqhvVZBr",
"fwcneIw6H0m",
"KC3llYU3e9j",
"3_MC2D5coJM",
"9dlmhrRwzoD",
"ZdVQz3Wo-71",
"w7sSzyXANf1",
"trE5VAqyXlv",
"eYCLY_vttPW",
"K0MgBpEuYJt",
"f4bafnugqmH",
"bRwfxsjhdJ... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",... | [
" We would like to take this final moment to thank the reviewer once more for a stimulating exchange and encouraging feedback. We hope the reviewer will be satisfied with the changes made to the paper and appendix, as they suggested. We would be grateful if this would enable a score increase.\n\nbest regards,\nthe ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"5BGtUiXWCj7",
"5BGtUiXWCj7",
"5BGtUiXWCj7",
"ZjrJcehtem",
"5Y4DoFNE5uv",
"KC3llYU3e9j",
"nRmWsix2wj",
"2yeyDZ2bdw3",
"0Qn0mtD-iXX",
"3_MC2D5coJM",
"eYCLY_vttPW",
"nips_2022__4xg5moXVg",
"nRmWsix2wj",
"nRmWsix2wj",
"1OK9HSJhi9",
"1OK9HSJhi9",
"2yeyDZ2bdw3",
"2yeyDZ2bdw3",
"0Qn0mt... |
nips_2022_oDWyVsHBzNT | Model-Based Offline Reinforcement Learning with Pessimism-Modulated Dynamics Belief | Model-based offline reinforcement learning (RL) aims to find a highly rewarding policy by leveraging a previously collected static dataset and a dynamics model. While the dynamics model is learned through reuse of the static dataset, its generalization ability will hopefully promote policy learning if properly utilized. To that end, several works propose to quantify the uncertainty of predicted dynamics, and explicitly apply it to penalize reward. However, as the dynamics and the reward are intrinsically different factors in the context of an MDP, characterizing the impact of dynamics uncertainty through a reward penalty may incur an unexpected tradeoff between model utilization and risk avoidance. In this work, we instead maintain a belief distribution over dynamics, and evaluate/optimize the policy through biased sampling from the belief. The sampling procedure, biased towards pessimism, is derived based on an alternating Markov game formulation of offline RL. We formally show that the biased sampling naturally induces an updated dynamics belief with a policy-dependent reweighting factor, termed Pessimism-Modulated Dynamics Belief. To improve the policy, we devise an iterative regularized policy optimization algorithm for the game, with a guarantee of monotonic improvement under certain conditions. To make it practical, we further devise an offline RL algorithm to approximately find the solution. Empirical results show that the proposed approach achieves state-of-the-art performance on a wide range of benchmark tasks. | Accept | I went through the manuscript, reviews, and authors' responses. I think this paper is qualified for NeurIPS publication.
"ER0gTgELF_F",
"wpIRTIHinzK",
"EFQivrnq3I3",
"-RtVCxVl0r",
"69jVhiNORQI",
"Rt01aVd8VxZ",
"vTW7hmCTPCv",
"-al2f-beuz-",
"AW6qrRJqf7Z",
"vNS1lRbBcAc",
"xKMYXqoWRKh",
"Sd63hkSIfz",
"a8qT8X8NAjw",
"Uuwsmb6R8fj",
"bzEaOVOtqHL",
"HMvhWqJy8rL",
"Vid6s74DSYt"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks very much for your time to further evaluate the value of this work!",
" I have carefully read your new comments. Considering the potential impacts to RL community, I believe this work deserves a higher score. I've raised my score. Congrats to the authors for the remarkable work!",
" We thank all review... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
4
] | [
"wpIRTIHinzK",
"EFQivrnq3I3",
"nips_2022_oDWyVsHBzNT",
"AW6qrRJqf7Z",
"Rt01aVd8VxZ",
"xKMYXqoWRKh",
"-al2f-beuz-",
"Sd63hkSIfz",
"Vid6s74DSYt",
"HMvhWqJy8rL",
"HMvhWqJy8rL",
"bzEaOVOtqHL",
"Uuwsmb6R8fj",
"nips_2022_oDWyVsHBzNT",
"nips_2022_oDWyVsHBzNT",
"nips_2022_oDWyVsHBzNT",
"nips... |
nips_2022_mNtFhoNRr4i | Hierarchical classification at multiple operating points | Many classification problems consider classes that form a hierarchy. Classifiers that are aware of this hierarchy may be able to make confident predictions at a coarse level despite being uncertain at the fine-grained level. While it is generally possible to vary the granularity of predictions using a threshold at inference time, most contemporary work considers only leaf-node prediction, and almost no prior work has compared methods at multiple operating points. We present an efficient algorithm to produce operating characteristic curves for any method that assigns a score to every class in the hierarchy. Applying this technique to evaluate existing methods reveals that top-down classifiers are dominated by a naive flat softmax classifier across the entire operating range. We further propose two novel loss functions and show that a soft variant of the structured hinge loss is able to significantly outperform the flat baseline. Finally, we investigate the poor accuracy of top-down classifiers and demonstrate that they perform relatively well on unseen classes. | Accept | The submission benchmarks several hierarchical classification techniques showing that flat softmax on the leaf nodes dominates most methods. This is quite a negative result for previous works, indicating that efforts on hierarchical classification losses are often not leading to better results. The authors introduce a loss function that does appear to result in performance beating the flat classification baseline. The writing style flows, there is a reasonably good review of the literature, and results appear to give some insights into methods for performing and evaluating hierarchical classification methods, which are complementary to the existing literature on a problem that has been studied in various forms for decades. 
The reviewers were unanimous in their opinion that the submission is just over the threshold for acceptance at NeurIPS after the rebuttal process. | train | [
"An_1g8accfa",
"SmVPvBM0v81",
"WCO_3hvuALs",
"Wn3jgW7C9Wx",
"xSWlXLgU0D8",
"DxAvIICF8kG",
"P3H-oJDlYsr",
"DhV-tDMmUT6",
"__g2hw7B1YG",
"-jBTV0Z59ma",
"lUOunTrhFiE"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This helps with the understanding",
" We have uploaded a revised version of the paper to address the main requests. Here is a summary of the changes.\n\n* Change paragraph 2 of Introduction to make motivation of curves more explicit\n* Add bullet point contributions at end of introduction\n* Related work: Remov... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"DhV-tDMmUT6",
"nips_2022_mNtFhoNRr4i",
"DxAvIICF8kG",
"xSWlXLgU0D8",
"P3H-oJDlYsr",
"-jBTV0Z59ma",
"__g2hw7B1YG",
"lUOunTrhFiE",
"nips_2022_mNtFhoNRr4i",
"nips_2022_mNtFhoNRr4i",
"nips_2022_mNtFhoNRr4i"
] |
nips_2022_YR-s5leIvh | CLEAR: Generative Counterfactual Explanations on Graphs | Counterfactual explanations promote explainability in machine learning models by answering the question “how should the input instance be altered to obtain a desired predicted label?”. The comparison of this instance before and after perturbation can enhance human interpretation. Most existing studies on counterfactual explanations are limited to tabular data or image data. In this paper, we study the problem of counterfactual explanation generation on graphs. A few studies have explored generating counterfactual explanations on graphs, but many challenges of this problem are still not well addressed: 1) optimizing in the discrete and disorganized space of graphs; 2) generalizing to unseen graphs; 3) maintaining the causality in the generated counterfactuals without prior knowledge of the causal model. To tackle these challenges, we propose a novel framework, CLEAR, which aims to generate counterfactual explanations on graphs for graph-level prediction models. Specifically, CLEAR leverages a graph variational autoencoder based mechanism to facilitate its optimization and generalization, and promotes causality by leveraging an auxiliary variable to better identify the causal model. Extensive experiments on both synthetic and real-world graphs validate the superiority of CLEAR over state-of-the-art counterfactual explanation methods on graphs in different aspects.
| Accept | This paper proposes a new method for producing counterfactuals on graphs. This is performed using a VAE on graphs with auxiliary variables to identify independent components and promote causality. While this work is mainly a combination of existing ideas, the resulting method is not trivial.
The engaged discussion clarified most of the concerns, except for a remaining concern around the diversity of the explanations. The reviewer encouraged measuring (or optimizing for) the diversity of explanations, that is, explanations that are significantly different (e.g., orthogonal to each other) in latent space. This is not grounds for rejection, but it could improve this work, and we encourage the authors to add this feature.
I recommend acceptance of this paper.
| train | [
"lS9S5Z5czXZ",
"pi9Eqq0mu_P",
"yYgzkNeMVI",
"yYxCvdKA818",
"7FaANxkZaE",
"sgE_KaI38py",
"X5hmKSqXymj",
"bgIHgLnEop",
"IxYeXt1WUS",
"iu-mTMB-3le",
"7O4K674rpVp",
"m5TJkBfDPu",
"3KmyohAtGW",
"arsTcYUymKh",
"ez_cHc3Cu_K",
"qBdQWczCBfl",
"qe2t1c6rW1",
"p3baE5u-zGw",
"LTOqVZE7qJD"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Yes. The graph structure of the counterfactuals is different. Thank you for this suggestion, we will add this clarification in the paper.",
" When you say \"not the same\" you mean that the discrete graph structure is different? If so, that is encouraging, you might want to clarify this in the paper.",
" Than... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
"pi9Eqq0mu_P",
"yYgzkNeMVI",
"yYxCvdKA818",
"IxYeXt1WUS",
"bgIHgLnEop",
"X5hmKSqXymj",
"7O4K674rpVp",
"iu-mTMB-3le",
"qBdQWczCBfl",
"m5TJkBfDPu",
"arsTcYUymKh",
"3KmyohAtGW",
"p3baE5u-zGw",
"ez_cHc3Cu_K",
"qe2t1c6rW1",
"LTOqVZE7qJD",
"nips_2022_YR-s5leIvh",
"nips_2022_YR-s5leIvh",
... |
nips_2022_MHjxpvMzf2x | Symmetry Teleportation for Accelerated Optimization | Existing gradient-based optimization methods update parameters locally, in a direction that minimizes the loss function. We study a different approach, symmetry teleportation, that allows parameters to travel a large distance on the loss level set, in order to improve the convergence speed in subsequent steps. Teleportation exploits symmetries in the loss landscape of optimization problems. We derive loss-invariant group actions for test functions in optimization and multi-layer neural networks, and prove a necessary condition for teleportation to improve convergence rate. We also show that our algorithm is closely related to second order methods. Experimentally, we show that teleportation improves the convergence speed of gradient descent and AdaGrad for several optimization problems including test functions, multi-layer regressions, and MNIST classification. | Accept | This paper proposes a novel, symmetry teleportation approach to optimize the parameters of ML models.
The proposed approach allows iterates to move along the loss level set and improves the convergence speed. The teleportations also exploit the symmetries that are present in the optimization problem.
The paper also includes very encouraging numerical experiments.
I believe that the paper brings more insights and techniques that have been mostly overlooked in the community when training ML models.
"NOxhvLXzKiz",
"qM8-KTuGbNq",
"-sA4iLaouQ",
"bYGhlDXfMGa",
"0Y2KKVt4TI",
"yWmXYE3PrE2",
"XohTGspg1LGp",
"1jYqH_cuwiF",
"oHu4Y0LHYxW",
"Iwvam7EWo8X",
"vP0Rbsb_NrV",
"Im0m0q_n0UP",
"E2xLfNU9zx",
"GoHhYeMyvHL",
"uDdwTYuLwwq",
"QrmutSIE2EB",
"YQQkHBaa9Bz",
"tRMSpyjeJzS",
"RbJ68Cxazp-... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for the clarification. Now, it makes sense to me and I have raised the score. ",
" Thanks for reading our responses and proofs! We really appreciate it.\n\nWe have updated the paper, where we added a new section Appendix C.5. We have formalized the improvement brought by telepo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3,
2
] | [
"qM8-KTuGbNq",
"-sA4iLaouQ",
"oHu4Y0LHYxW",
"nips_2022_MHjxpvMzf2x",
"XohTGspg1LGp",
"nips_2022_MHjxpvMzf2x",
"RbJ68Cxazp-",
"tRMSpyjeJzS",
"tRMSpyjeJzS",
"YQQkHBaa9Bz",
"YQQkHBaa9Bz",
"QrmutSIE2EB",
"uDdwTYuLwwq",
"uDdwTYuLwwq",
"nips_2022_MHjxpvMzf2x",
"nips_2022_MHjxpvMzf2x",
"nip... |
nips_2022_y8FN4dHdxOE | Fused Orthogonal Alternating Least Squares for Tensor Clustering | We introduce a multi-modes tensor clustering method that implements a fused version of the alternating least squares algorithm (Fused-Orth-ALS) for simultaneous tensor factorization and clustering. The statistical convergence rates of recovery and clustering are established when the data are a noise contaminated tensor with a latent low rank CP decomposition structure. Furthermore, we show that a modified alternating least squares algorithm can provably recover the true latent low rank factorization structure when the data form an asymmetric tensor with perturbation. Clustering consistency is also established. Finally, we illustrate the accuracy and computational efficient implementation of the Fused-Orth-ALS algorithm by using both simulations and real datasets. | Accept | The paper proposes a multi-mode tensor clustering method using an alternating least square algorithm. The reviewers unanimously like the paper and the content. Some reviewers have sought a few clarifications including on the proofs. Hope the authors would address them in the revised version. | train | [
"vjcLA6_r0xv",
"lVs6Tp9I0K",
"7-xe_M4_DkV",
"bI7XrHAU9Iz",
"rPDxU5i67DES",
"lYaxkDJD_Rxp",
"1qtseNrM0Yx",
"PyIh0UWp0Xd",
"W2xFDUDWsok",
"e_sOn_wSflx",
"fI4lyJFdDQq",
"I2VW2HAMJ3K",
"JaR9eozPdWx",
"OnYlF6nvpIt"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Authors, thank you for the detailed response.\nI am satisfied with the explanation provided for my questions. I stick to my original rating. ",
" **On the sixth issue** For the derivation of Line 100, the whole error bound can be split into 2 parts, the first part is in the order our final result of Theorem 1, ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"e_sOn_wSflx",
"7-xe_M4_DkV",
"bI7XrHAU9Iz",
"rPDxU5i67DES",
"lYaxkDJD_Rxp",
"W2xFDUDWsok",
"nips_2022_y8FN4dHdxOE",
"JaR9eozPdWx",
"JaR9eozPdWx",
"I2VW2HAMJ3K",
"OnYlF6nvpIt",
"nips_2022_y8FN4dHdxOE",
"nips_2022_y8FN4dHdxOE",
"nips_2022_y8FN4dHdxOE"
] |
nips_2022_GjWDguPZRmr | Improving Variational Autoencoders with Density Gap-based Regularization | Variational autoencoders (VAEs) are one of the most powerful unsupervised learning frameworks in NLP for latent representation learning and latent-directed generation. The classic optimization goal of VAEs is to maximize the Evidence Lower Bound (ELBo), which consists of a conditional likelihood for generation and a negative Kullback-Leibler (KL) divergence for regularization. In practice, optimizing ELBo often leads the posterior distribution of all samples converging to the same degenerated local optimum, namely posterior collapse or KL vanishing. There are effective ways proposed to prevent posterior collapse in VAEs, but we observe that they in essence make trade-offs between posterior collapse and the hole problem, i.e., the mismatch between the aggregated posterior distribution and the prior distribution. To this end, we introduce new training objectives to tackle both problems through a novel regularization based on the probabilistic density gap between the aggregated posterior distribution and the prior distribution. Through experiments on language modeling, latent space visualization, and interpolation, we show that our proposed method can solve both problems effectively and thus outperforms the existing methods in latent-directed generation. To the best of our knowledge, we are the first to jointly solve the hole problem and posterior collapse. | Accept | The paper addresses the KL collapse of VAE models by proposing a new regularization. Reviewers generally acknowledge the novelty of the work and have the tendency of recommending acceptance. | train | [
"jJ_2Sx3V7uj",
"szl37BZVkRl",
"44mMCswbSm4",
"nOYQlXDKHP",
"jbsntyREtGe",
"GVwwHw6OLs",
"WFhw_INFmyx",
"3So4PCaGdfU",
"Tx1jdhD5yUH",
"Pe6Ro9En2Co",
"FBUX9rf-Ta",
"e6gW1OGY6-"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the encouraging comment! We are glad that you found our reply helpful.",
" Thanks to the authors for their response to my review (and apologies for my late response to the authors). I found the response helpful -- specifically, the authors' comments on runtime have given me a better idea... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"szl37BZVkRl",
"3So4PCaGdfU",
"nOYQlXDKHP",
"Tx1jdhD5yUH",
"GVwwHw6OLs",
"WFhw_INFmyx",
"e6gW1OGY6-",
"Pe6Ro9En2Co",
"FBUX9rf-Ta",
"nips_2022_GjWDguPZRmr",
"nips_2022_GjWDguPZRmr",
"nips_2022_GjWDguPZRmr"
] |
nips_2022_DSEP9rCvZln | Inherently Explainable Reinforcement Learning in Natural Language | We focus on the task of creating a reinforcement learning agent that is inherently explainable---with the ability to produce immediate local explanations by thinking out loud while performing a task and analyzing entire trajectories post-hoc to produce temporally extended explanations. This Hierarchically Explainable Reinforcement Learning agent (HEX-RL), operates in Interactive Fictions, text-based game environments in which an agent perceives and acts upon the world using textual natural language. These games are usually structured as puzzles or quests with long-term dependencies in which an agent must complete a sequence of actions to succeed---providing ideal environments in which to test an agent's ability to explain its actions. Our agent is designed to treat explainability as a first-class citizen, using an extracted symbolic knowledge graph-based state representation coupled with a Hierarchical Graph Attention mechanism that points to the facts in the internal graph representation that most influenced the choice of actions. Experiments show that this agent provides significantly improved explanations over strong baselines, as rated by human participants generally unfamiliar with the environment, while also matching state-of-the-art task performance. | Accept | The paper proposes a hierarchical approach to explainable RL which combines different modules, including a knowledge graph, to generate natural language explanations.
There has been a debate between the reviewers about this approach being novel or not which was the main concern left after the rebuttal phase. Other concerns were indeed fairly well addressed by the authors.
Although each module in the proposed approach is not novel, it seems that the way they are used to address the specific problem of explainability and especially in text games is novel and sound. The results are convincing and the evaluation against a large number of baselines is the result of a large amount of work and a solid scientific method.
For these reasons, acceptance is recommended. | train | [
"92v42qZWl20",
"H1NX7NE-tMl",
"3fHKHZnMJ4I",
"gF3Q_JaiKPD",
"PUc7WCN7hZD",
"HH1PeBEQGZJ",
"Fgm2Z4-0I6",
"yC4IL5u9urM",
"PhV7sC0lFB",
"dIqUIDj5-x",
"6K5l-Pi7ejF",
"xnyJuKV5UI",
"Zcw2eb0S_Cn"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for responding to our changes and for the time you have spend in writing a thorough review with points for improvement. We are encouraged to hear that you think the paper is ready for acceptance and hope that a fruitful discussion continues into the post-rebuttal period with the other reviewers.",
" M... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"H1NX7NE-tMl",
"yC4IL5u9urM",
"Zcw2eb0S_Cn",
"xnyJuKV5UI",
"6K5l-Pi7ejF",
"nips_2022_DSEP9rCvZln",
"Zcw2eb0S_Cn",
"xnyJuKV5UI",
"6K5l-Pi7ejF",
"nips_2022_DSEP9rCvZln",
"nips_2022_DSEP9rCvZln",
"nips_2022_DSEP9rCvZln",
"nips_2022_DSEP9rCvZln"
] |
nips_2022_hciwLGxCt6S | It's DONE: Direct ONE-shot learning with Hebbian weight imprinting | Learning a new concept from one example is a superior function of the human brain and it is drawing attention in the field of machine learning as a one-shot learning task. In this paper, we propose one of the simplest methods for this task with a nonparametric weight imprinting, named Direct ONE-shot learning (DONE). DONE adds new classes to a pretrained deep neural network (DNN) classifier with neither training optimization nor pretrained-DNN modification. DONE is inspired by Hebbian theory and directly uses the neural activity input of the final dense layer obtained from data that belongs to the new additional class as the synaptic weight with a newly-provided-output neuron for the new class, by transforming all statistical properties of the neural activity into those of synaptic weight. DONE requires just one inference for learning a new concept and its procedure is simple, deterministic, not requiring parameter tuning and hyperparameters. DONE overcomes a problem of existing weight imprinting methods that interfere with the classification of original-class images. The performance of DONE depends entirely on the pretrained DNN model used as a backbone model, and we confirmed that DONE with current well-trained backbone models perform at a decent accuracy. | Reject | ## Summary
Humans can learn a new task from just a couple of examples, whereas supervised deep learning models often require lots of labeled samples to learn a task. This paper proposes a one-shot learning method inspired by Hebbian learning, which adds a new class to the output layer of the network with quantile normalization of the new inputs based on the weights of the last layer that correspond to the other classes. The proposed approach uses features of the penultimate layer of an “encoder” network (for example, EfficientNet). The paper presents results in the k-shot classification setting.
## Decision
This paper studies an important problem and introduces interesting ideas such as quantile normalisation into deep learning models. Nevertheless the paper requires a revision as discussed with other reviewers and as a result it would benefit from another round of reviews. However, during the discussion period none of the reviewers were willing to nominate this paper for acceptance with confidence.
* *Reviewer j4P6* claimed that the authors' claims regarding connections to Hebbian learning are flawed, and during the discussion period the authors agreed their algorithm is not Hebbian. This seems to be the biggest problem reviewer j4P6 has with this paper. The reviewer also complained that the paper only has a handwavy explanation of the algorithm and some limited evaluations. It’s a decent contribution overall, but in its current form it does not meet the bar of NeurIPS.
* *Reviewer 8B2d* thinks that this paper needs another revision. The method presented in the article is interesting, but the current manuscript evaluates it with only two models, and the results are better in just one of them. It is therefore not clear whether the improvement is anecdotal or general. Furthermore, the manuscript relates the work to Hebbian learning in the brain, which is irrelevant to the presentation of the work; the authors conceded that point in the discussion with reviewer j4P6. The reviewer thinks that the article is interesting, but the authors should revise the manuscript, remove the Hebbian analogy, and focus on better evaluations, making their statements more sound.
* *Reviewer RJBn*’s score is a “6 - Weak Accept”, but they are fine with this paper being rejected. After reading the other reviews, RJBn agrees with the other reviewers that the connections drawn to Hebbian learning in the brain are weak, and that the evaluation could be better.
| train | [
"7pRRxMxTsRC",
"7zL5SCJpkCS",
"6Si4SPtKvJ",
"bBJgBScgIKR",
"KjmVUTTngY6",
"SG-1sdvVdq_",
"QZ4SI9K9Nyk",
"HHflK7UMjYZ",
"4Q4tPc_1yiy0",
"WMPDJrLh4Ns",
"VZ0ny8D9quU",
"ENWmtUtFNKg",
"G46I_DmE8NYP",
"8F-C0hW1ue",
"vQdbXCjuDLd",
"VVq0XJG-PtS"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for not only your precious comments and discussion to improve our paper, but also for raising the score.\n\nWe have learned a lot from the discussion with you. Thanks to you, as with our papers, our understanding also progresses, which will be encouraging for future research as well.\n\nWe aga... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"7zL5SCJpkCS",
"vQdbXCjuDLd",
"bBJgBScgIKR",
"KjmVUTTngY6",
"SG-1sdvVdq_",
"QZ4SI9K9Nyk",
"HHflK7UMjYZ",
"ENWmtUtFNKg",
"WMPDJrLh4Ns",
"VZ0ny8D9quU",
"VVq0XJG-PtS",
"vQdbXCjuDLd",
"8F-C0hW1ue",
"nips_2022_hciwLGxCt6S",
"nips_2022_hciwLGxCt6S",
"nips_2022_hciwLGxCt6S"
] |
nips_2022_ZidkM5b92G | BagFlip: A Certified Defense Against Data Poisoning | Machine learning models are vulnerable to data-poisoning attacks, in which an attacker maliciously modifies the training set to change the prediction of a learned model. In a trigger-less attack, the attacker can modify the training set but not the test inputs, while in a backdoor attack the attacker can also modify test inputs. Existing model-agnostic defense approaches either cannot handle backdoor attacks or do not provide effective certificates (i.e., a proof of a defense). We present BagFlip, a model-agnostic certified approach that can effectively defend against both trigger-less and backdoor attacks. We evaluate BagFlip on image classification and malware detection datasets. BagFlip is equal to or more effective than the state-of-the-art approaches for trigger-less attacks and more effective than the state-of-the-art approaches for backdoor attacks. | Accept | The paper proposes a new method for certified defense against data poisoning, in both trigger-less and backdoor scenarios. The method augments previous work (Bagging) with random flipping of labels. The latter enables computation of probabilistic certificates, although this results in a huge computational overhead. Various relaxation techniques are proposed to improve the computational burden, bringing the cost to just one order of magnitude "above" the baseline. Experiments show reasonable improvement of defense strength compared to the baselines, although the computational cost remains an issue. Apart from its incremental character and the computational complexity, the method is well executed and theoretically sound. | train | [
"8rHyuP7pKgg",
"nu7HcRR38x",
"3iKpOmww-vY",
"1LbGgagkRbT",
"g_lsCN6Gvxy",
"EfKNTQ_86q0",
"RHNPmV5Ydx1",
"_MO-8-hcQGO",
"-NpHci-nT5",
"Otam2dF5op3"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. The authors clarify that the novelty of the method is a novel smoothing distribution combining the distributions of Bagging and FeatFlip. I realized the proposed method's difference through the authors' rebuttals, and I would like to reconsider the contribution (2 fair) of the original review.\n\n2. Thanks to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"nu7HcRR38x",
"g_lsCN6Gvxy",
"RHNPmV5Ydx1",
"EfKNTQ_86q0",
"Otam2dF5op3",
"-NpHci-nT5",
"_MO-8-hcQGO",
"nips_2022_ZidkM5b92G",
"nips_2022_ZidkM5b92G",
"nips_2022_ZidkM5b92G"
] |
nips_2022_JvIFpZOjLF4 | Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction | Recently, neural implicit surfaces learning by volume rendering has become popular for multi-view reconstruction. However, one key challenge remains: existing approaches lack explicit multi-view geometry constraints, and hence usually fail to generate geometry-consistent surface reconstruction. To address this challenge, we propose geometry-consistent neural implicit surfaces learning for multi-view reconstruction. We theoretically analyze that there exists a gap between the volume rendering integral and point-based signed distance function (SDF) modeling. To bridge this gap, we directly locate the zero-level set of SDF networks and explicitly perform multi-view geometry optimization by leveraging the sparse geometry from structure from motion (SFM) and photometric consistency in multi-view stereo. This makes our SDF optimization unbiased and allows the multi-view geometry constraints to focus on the true surface optimization. Extensive experiments show that our proposed method achieves high-quality surface reconstruction in both complex thin structures and large smooth regions, thus outperforming the state of the art by a large margin. | Accept | This paper introduces new and useful losses, presents a good experimental setup, supplies analysis of the bias, and is clearly written. I encourage the authors to discuss similarities and differences to NeuraWarp and pointcloud->SDF methods in their revision.
"alZYd-1Ivn",
"LUXzpNG4YEB",
"HOny6jzdia",
"ifv8WuRcqie",
"r_OIQsqXBOM",
"DNTWxA8ionRM",
"C36-1vsy-tD",
"yJ5RMieDgxJ",
"TxBxAKaOubw",
"i0pMhJoZHkZ",
"Uak6pSh69P",
"GidTo6MYlec"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the careful review and constructive discussions. We will revise our paper following the comments made by all the reviewers.",
" I would like to thank the authors for carefully addressing my concerns, especially w.r.t point2surf and feature generation from pointclouds. I have updated my r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"LUXzpNG4YEB",
"r_OIQsqXBOM",
"ifv8WuRcqie",
"yJ5RMieDgxJ",
"TxBxAKaOubw",
"i0pMhJoZHkZ",
"Uak6pSh69P",
"GidTo6MYlec",
"nips_2022_JvIFpZOjLF4",
"nips_2022_JvIFpZOjLF4",
"nips_2022_JvIFpZOjLF4",
"nips_2022_JvIFpZOjLF4"
] |
nips_2022_JSBgIaxAXk9 | Differentially Private Linear Regression via Medians | Linear regression is one of the simplest machine learning tasks. Despite much work, differentially private linear regression still lacks effective algorithms.
We propose a new approach based on a multivariate extension of the Theil-Sen estimator.
The theoretical advantage of our approach is that we do not directly rely on noise addition, which requires bounding the sensitivity. Instead, we compute differentially private medians as a subroutine, which are more robust.
We also show experimentally that our approach compares favourably to prior work. | Reject | Though the reviewers appreciate the contribution overall, and the application of median methods to regression for the purpose of avoiding/circumventing clipping is novel, the strength of the contributions remains limited in light of other existing work that achieves more favorable bounds and/or uses fewer assumptions. This is certainly the case for (unpublished) [VJT22], but it is also the case for [MKFI22], which also assumes Gaussian inputs but, crucially, does not rely on prior knowledge of the distribution's covariance. Moreover, in light of the effort required to perform DP covariance estimation via [KLSU19], it is not clear that, as stated in the rebuttal, an error proportional to the condition number is non-trivial. For example, it is not clear that, allowing for such error, processes like [KLSU19] do not become trivial.
A reorganization of the paper that brings related work earlier, explains what is known about regression in the unbounded regime (especially in light of [VJT22], [MKFI22]) as well as about DP for other settings in the unbounded regime, and clarifies where the current work is placed and what its contributions beyond these works are, would strengthen the paper.
| train | [
"LUJTNEPnKh",
"9w8SiwzWUkK",
"7gI41sbdrDa",
"dAHa_8uzUgs",
"RgrAqzP2A2l",
"vvGKmnCnA5i",
"Vp_toF_XdMS",
"l3Qw_2MxkKx",
"yA_d2z92QIm",
"khIXIC4UfA3",
"vw8fvwkWYW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > 1. If the covariance is unknown, does the proposed algorithm have any error bounds guarantees?\n\nThanks for the great question! \n\nWe can modify Lemma A.6 (in a black-box manner) to work for a non-spherical design matrix. If the data comes from $N(0,\\Sigma)$ instead of $N(0,I)$, this is equivalent to replaci... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"9w8SiwzWUkK",
"dAHa_8uzUgs",
"Vp_toF_XdMS",
"RgrAqzP2A2l",
"vvGKmnCnA5i",
"vw8fvwkWYW",
"khIXIC4UfA3",
"yA_d2z92QIm",
"nips_2022_JSBgIaxAXk9",
"nips_2022_JSBgIaxAXk9",
"nips_2022_JSBgIaxAXk9"
] |
nips_2022_P6uZ7agiyCT | Sparse2Dense: Learning to Densify 3D Features to Boost 3D Object Detection | LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors. Yet, small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense point 3D detector (DDet) with a dense point cloud as input and design a sparse point 3D detector (SDet) with a regular point cloud as input. Importantly, we formulate the lightweight plug-in S2D module and the point cloud reconstruction module in SDet to densify 3D features and train SDet to produce 3D features, following the dense 3D features in DDet. So, in inference, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its high performance and efficiency over the state of the art. | Accept | This paper proposes to utilize point cloud completion tools to densify sparse point clouds, which could subsequently improve the performance of point cloud detection methods. After rebuttal, reviewers agree on the novelty of the method and its effectiveness on the Waymo Open Dataset. The AC recommends this paper for acceptance following the unanimous opinion.
"Sac6WpJQB_m",
"gPzDqnpQamA",
"hvybdtcz-aH",
"ngp7Dsidgn",
"aTQySFUbBZt",
"nuito-YBZ4H",
"lYsG8ZFgAwy",
"58w803SWyX",
"k2nVnSah5Rs",
"cEebHkVF-n1"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the detailed reply. I think the paper is now above the acceptance bar and I increase my rating to weak accept.",
" Dear Reviewers: Thank you for your effort and time in reviewing our paper. We are encouraged to see positive ratings from all reviewers and your recognition of the good & no... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
5
] | [
"aTQySFUbBZt",
"nips_2022_P6uZ7agiyCT",
"cEebHkVF-n1",
"k2nVnSah5Rs",
"58w803SWyX",
"lYsG8ZFgAwy",
"nips_2022_P6uZ7agiyCT",
"nips_2022_P6uZ7agiyCT",
"nips_2022_P6uZ7agiyCT",
"nips_2022_P6uZ7agiyCT"
] |
nips_2022_ByYFpTwgLGO | Deep Multi-Modal Structural Equations For Causal Effect Estimation With Unstructured Proxies | Estimating the effect of an intervention from observational data while accounting for confounding variables is a key task in causal inference. Oftentimes, the confounders are unobserved, but we have access to large amounts of additional unstructured data (images, text) that contain valuable proxy signal about the missing confounders. This paper argues that leveraging this unstructured data can greatly improve the accuracy of causal effect estimation. Specifically, we introduce deep multi-modal structural equations, a generative model for causal effect estimation in which confounders are latent variables and unstructured data are proxy variables. This model supports multiple multimodal proxies (images, text) as well as missing data. We empirically demonstrate that our approach outperforms existing methods based on propensity scores and corrects for confounding using unstructured inputs on tasks in genomics and healthcare. Our methods can potentially support the use of large amounts of data that were previously not used in causal inference. | Accept | Reviewers agreed the paper presents a novel method addressing an important problem, building on and expanding prior work in the field. Specifically, it strongly relates to the CEVAE model, adding to it the ability to deal with multiple proxies, each with a different structure, as well as introducing a new inference approach. Extensive experimental evaluation (some of it added following the reviews) shows overall strong results. There were concerns about the somewhat limited level of novelty and the lack of theoretical foundations.
NB: The definition of ITE in the paper should in fact be CATE, the Conditional Average Treatment Effect, as it is conditioned on a variable X=x, and not a single unit's effect. | train | [
"pUrtoRbeNJE",
"BH_U5BCxGV",
"6f5QEsBv5LU",
"OLCHt440rpm",
"MuWTRETeX9s",
"ooU9jAc0iOp",
"iHEIrlMOhOx",
"ooFjL4M_tki",
"Teu1hbtQgLi",
"pNdRmuEXwh0",
"_BvvogV-EC-",
"JNHchPwGhH",
"M2obpj9-UwF",
"SLcdPwRjJaW"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Experiments with Modified Causal Graph Structures**\n\nIn our previous response, we argued that our core method handles modified causal graph structures with possibly small modifications. Here, we perform a simulation study to confirm these results empirically. In our experiments, we find that DMSE __recovers t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"6f5QEsBv5LU",
"pNdRmuEXwh0",
"OLCHt440rpm",
"MuWTRETeX9s",
"iHEIrlMOhOx",
"Teu1hbtQgLi",
"ooFjL4M_tki",
"SLcdPwRjJaW",
"M2obpj9-UwF",
"_BvvogV-EC-",
"JNHchPwGhH",
"nips_2022_ByYFpTwgLGO",
"nips_2022_ByYFpTwgLGO",
"nips_2022_ByYFpTwgLGO"
] |
nips_2022_E9HNxrCFZPV | NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation | Test-time adaptation (TTA) is an emerging paradigm that addresses distributional shifts between training and testing phases without additional data acquisition or labeling cost; only unlabeled test data streams are used for continual model adaptation. Previous TTA schemes assume that the test samples are independent and identically distributed (i.i.d.), even though they are often temporally correlated (non-i.i.d.) in application scenarios, e.g., autonomous driving. We discover that most existing TTA methods fail dramatically under such scenarios. Motivated by this, we present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams. Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN) that corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS) that simulates an i.i.d. data stream from a non-i.i.d. stream in a class-balanced manner. Our evaluation with various datasets, including real-world non-i.i.d. streams, demonstrates that the proposed robust TTA not only outperforms state-of-the-art TTA algorithms in the non-i.i.d. setting, but also achieves comparable performance to those algorithms under the i.i.d. assumption. Code is available at https://github.com/TaesikGong/NOTE. | Accept | The paper proposed two test-time adaptation methods, (a) instance-aware batch normalization and (b) prediction-balanced reservoir sampling, and used these to show that the proposed method is better in the non-i.i.d. setting.
The reviewers found this to be an important problem and the experiments generally convincing. Reviewers objected to the choice of datasets (not commonly used to evaluate adaptation), the baseline models (not state-of-the-art models), and the effect size. In the end, all reviewers found the results strong enough and voted to accept. | val | [
"BnZuReLSjyx",
"i-JEjbkjjgR",
"v4PIY92_5eA",
"1SGzegtgiOu",
"OHT5H4Zab62",
"eD_gavTXdQ4",
"izOEmksbpDL",
"0fIbHJ_26lf1",
"OygYPlYaWop",
"ji80Gi3T9A1",
"8j06jEc9wB",
"sz1muyMVHE-",
"Zj6EtZy_A3qa",
"NWqRST6C9oT",
"OuyoyEm3Ecz",
"qWPYZ3e6xxy",
"MVcWsiacSev",
"pLhU1eCRXO",
"-dw4WhZ8F... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_... | [
" Thank you for your response to our rebuttal! We agree that the concerns you pointed out are the current limitations of our study and could be further improved in the future. Still, we believe our contributions make a meaningful step towards the practical applications of the test-time adaptation paradigm, as you a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"v4PIY92_5eA",
"1SGzegtgiOu",
"sz1muyMVHE-",
"qWPYZ3e6xxy",
"eD_gavTXdQ4",
"OygYPlYaWop",
"nips_2022_E9HNxrCFZPV",
"nips_2022_E9HNxrCFZPV",
"cvGIFr7ETV-",
"cvGIFr7ETV-",
"xl8by_jGVvt",
"xl8by_jGVvt",
"-dw4WhZ8FFi",
"-dw4WhZ8FFi",
"-dw4WhZ8FFi",
"-dw4WhZ8FFi",
"pLhU1eCRXO",
"nips_20... |
nips_2022_tZUOiVGO6jN | A Deep Learning Dataloader with Shared Data Preparation | Executing a family of Deep Neural Network (DNN) training jobs on the same or similar datasets in parallel is typical in current deep learning scenarios. It is time-consuming and resource-intensive because each job repetitively prepares (i.e., loads and preprocesses) the data independently, causing redundant consumption of I/O and computation. Although the page cache or a centralized cache component can alleviate the redundancies by reusing the data prep work, each job's data, sampled uniformly at random, presents a low sampling locality in the shared dataset, which causes heavy cache thrashing. Prior work tries to solve the problem by enforcing that all training jobs iterate over the dataset in the same order and request each data item in lockstep, leading to strong constraints: all jobs must have the same dataset and run simultaneously. In this paper, we propose a dependent sampling algorithm (DSA) and a domain-specific cache policy to relax the constraints. In addition, a novel tree data structure is designed to efficiently implement DSA. Based on the proposed technologies, we implemented a prototype system, named Joader, which can share data prep work as long as the datasets partially overlap. We evaluate the proposed Joader in practical scenarios, showing greater versatility and superior training speed (an improvement of up to 500% on ResNet18). | Accept | The paper proposes a new data loader called Joader for parallel DNN training on overlapped datasets that allows tasks to share memory and computational resources for data preprocessing. Joader implements a new sampling mechanism and cache policy to reduce cache misses due to data access from multiple tasks and a new data structure to facilitate the implementation. Joader has been integrated with PyTorch and shown to be very effective in practice.
All reviewers agree that the paper makes a valuable contribution to the NeurIPS community, and I agree with Reviewer 6QhD that the contribution stands even if the system does not cover distributed workloads. I thus recommend acceptance.
However, for the camera-ready version I think it is important that the authors incorporate additional discussion of potential limitations of their approach so that the tool can be used most effectively by researchers. For example, potential overheads in the implementation of the new data structure should be discussed, even if they are small in the provided experiments. Also, the fact that sampling across jobs is now correlated and no longer independent is important to emphasize, together with potential implications for the resulting workloads (e.g., can the results of the parallel runs still be used to achieve variance reduction via ensembles?). Beyond that, I think the additional experiments to break down the performance gains into individual components are valuable, and clarifications made in the conversation with the reviewers should be incorporated into the paper.
"IOE6P2K4cuC",
"OGJagwA6x23",
"i7p79qnkypz",
"YYcP_DZSHy",
"A718UuM2IBy",
"ZWz6paCV0M",
"wRxUIjqlKPI",
"UYpm88oAhc"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I totally understand it is hard to prove the optimality in the N-job case in such a practical scenario. The evaluation in the paper and your comments convinced me that the contributions of the DSA's current version are enough for the community. Measuring the degree of breaking the strict correlation among jobs ma... | [
-1,
-1,
-1,
-1,
-1,
5,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
"YYcP_DZSHy",
"nips_2022_tZUOiVGO6jN",
"UYpm88oAhc",
"wRxUIjqlKPI",
"ZWz6paCV0M",
"nips_2022_tZUOiVGO6jN",
"nips_2022_tZUOiVGO6jN",
"nips_2022_tZUOiVGO6jN"
] |
nips_2022_gE_vt-w4LhL | Squeezeformer: An Efficient Transformer for Automatic Speech Recognition | The recently proposed Conformer model has become the de facto backbone model for various downstream speech tasks based on its hybrid attention-convolution architecture that captures both local and global features. However, through a series of systematic studies, we find that the Conformer architecture’s design choices are not optimal. After re-examining the design choices for both the macro and micro-architecture of Conformer, we propose Squeezeformer, which consistently outperforms the state-of-the-art ASR models under the same training schemes. In particular, for the macro-architecture, Squeezeformer incorporates (i) the Temporal U-Net structure, which reduces the cost of the multi-head attention modules on long sequences, and (ii) a simpler block structure of multi-head attention or convolution modules followed by a feed-forward module, instead of the Macaron structure proposed in Conformer. Furthermore, for the micro-architecture, Squeezeformer (i) simplifies the activations in the convolutional block, (ii) removes redundant Layer Normalization operations, and (iii) incorporates an efficient depthwise down-sampling layer to efficiently sub-sample the input signal. Squeezeformer achieves state-of-the-art results of 7.5%, 6.5%, and 6.0% word-error-rate (WER) on LibriSpeech test-other without external language models, which are 3.1%, 1.4%, and 0.6% better than Conformer-CTC with the same number of FLOPs. Our code is open-sourced and available online. | Accept | The paper conducts a thorough analysis of the Conformer architecture and brings insights and techniques from other fields to simplify and improve the model structure, which is demonstrated to yield nice gains. Though, as pointed out by reviewers, the novelty is limited, the study is very useful to the field.
"MU30AZIGMOX",
"4YHdvtFvBTG",
"UQZEbgD8lQ_",
"JZYRysqiHr6",
"pRWdbijega",
"8iQjAsRLOOV",
"NErDY0uGrY",
"FmnBs-qWysX",
"SnVFeNMHsIv",
"cbOnvcF0N9G",
"FOkHfvj5juH",
"FpJmwrxLAdb",
"Af5A0CL5wPo",
"nxY2UXJXGnC"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the final round of edits; I believe the presentation (once Figure 1 is also iterated on) is now fair and very clear. I also appreciate the authors' extensive insights and experiments: here, showing the improved blank/non-blank token repetitions of Squeezeformer-CTC over Conformer-CTC, and to the other ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
"UQZEbgD8lQ_",
"pRWdbijega",
"pRWdbijega",
"NErDY0uGrY",
"FmnBs-qWysX",
"nips_2022_gE_vt-w4LhL",
"nxY2UXJXGnC",
"nxY2UXJXGnC",
"Af5A0CL5wPo",
"Af5A0CL5wPo",
"FpJmwrxLAdb",
"nips_2022_gE_vt-w4LhL",
"nips_2022_gE_vt-w4LhL",
"nips_2022_gE_vt-w4LhL"
] |
nips_2022_fpfDusqKZF | Neural Basis Models for Interpretability | Due to the widespread use of complex machine learning models in real-world applications, it is becoming critical to explain model predictions. However, these models are typically black-box deep neural networks, explained post-hoc via methods with known faithfulness limitations. Generalized Additive Models (GAMs) are an inherently interpretable class of models that address this limitation by learning a non-linear shape function for each feature separately, followed by a linear model on top. However, these models are typically difficult to train, require numerous parameters, and are difficult to scale.
We propose an entirely new subfamily of GAMs that utilizes basis decomposition of shape functions. A small number of basis functions are shared among all features, and are learned jointly for a given task, thus making our model scale much better to large-scale data with high-dimensional features, especially when features are sparse. We propose an architecture denoted as the Neural Basis Model (NBM), which uses a single neural network to learn these bases. On a variety of tabular and image datasets, we demonstrate that for interpretable machine learning, NBMs are the state-of-the-art in accuracy, model size, and throughput, and can easily model all higher-order feature interactions.
Source code is available at \href{https://github.com/facebookresearch/nbm-spam}{\ttfamily github.com/facebookresearch/nbm-spam}. | Accept | The paper proposes an approach, the Neural Basis Model (NBM), that can be seen as a new subfamily of Generalized Additive Models for interpretability. The proposed model is compared to alternatives, showing competitive performance while being computationally more efficient. The authors successfully addressed questions raised by reviewers. As also noted by the authors, a major limitation of the paper remains the requirement that the input features be interpretable. This limits the applicability and utility of the proposed model, and hence the significance of the contribution.
"sglLX1DpYjn",
"jLYYgW6MeD",
"oWNyu5RrGqy",
"hz2Lxvg6wIH",
"4VvDiLXCCXR",
"eKn-SCJVzT",
"zjS5knxuU8l",
"v6NniSz4o8K",
"EpfbwNuwRIU",
"YDkxWSBGXFa",
"nPpQWqST3eP",
"RNTBuHyYZO",
"CBh_j0TVmlE"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time and input. \n\nWe will make sure to include following in the camera ready: results on all 16 datasets (9 new from rebuttal) with appropriate baselines and updated SOTA, additional visualizations (some are already added in the updated supplementary, see Appendix Section A.4 and Figure A.2),... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
3
] | [
"jLYYgW6MeD",
"eKn-SCJVzT",
"hz2Lxvg6wIH",
"zjS5knxuU8l",
"CBh_j0TVmlE",
"RNTBuHyYZO",
"v6NniSz4o8K",
"nPpQWqST3eP",
"YDkxWSBGXFa",
"nips_2022_fpfDusqKZF",
"nips_2022_fpfDusqKZF",
"nips_2022_fpfDusqKZF",
"nips_2022_fpfDusqKZF"
] |
nips_2022_TwuColwZAVj | Scalable Interpretability via Polynomials | Generalized Additive Models (GAMs) have quickly become the leading choice for interpretable machine learning. However, unlike uninterpretable methods such as DNNs, they lack expressive power and easy scalability, and are hence not a feasible alternative for real-world tasks. We present a new class of GAMs that use tensor rank decompositions of polynomials to learn powerful, {\em inherently-interpretable} models. Our approach, titled Scalable Polynomial Additive Models (SPAM), is effortlessly scalable and models {\em all} higher-order feature interactions without a combinatorial parameter explosion. SPAM outperforms all current interpretable approaches, and matches DNN/XGBoost performance on a series of real-world benchmarks with up to hundreds of thousands of features. We demonstrate by human subject evaluations that SPAMs are demonstrably more interpretable in practice, and are hence an effortless replacement for DNNs for creating interpretable and high-performance systems suitable for large-scale machine learning.
Source code is available at \href{https://github.com/facebookresearch/nbm-spam}{\ttfamily github.com/facebookresearch/nbm-spam}. | Accept | The paper notes that polynomial functions are inherently interpretable models, and takes algorithmic advantage of the connection between polynomials and tensors by learning the coefficients of the polynomials using a low-rank tensor factorization. The resulting algorithm is shown to outperform prior SOTA interpretable models and to match blackbox model performance on several data sets. | train | [
"_qnXKCvLof9",
"it65fSGlEyc",
"k-XMKRS9aBB",
"XSG3tN0OuLl",
"19uV1HlJoVd",
"2mFqRLodPhT",
"ta-_KtdABil",
"9sFHjci-PcJ",
"67_Nt8JxPUs",
"E7d2EfVz4gx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The author-response phase closes today. Please acknowledge the author rebuttal and state if your position has changed. Thanks!",
" The author-rebuttal phase closes today. Please acknowledge the author rebuttal and state if your position has changed. Thanks!",
" I do not have futher concerns if the above will ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"E7d2EfVz4gx",
"67_Nt8JxPUs",
"2mFqRLodPhT",
"E7d2EfVz4gx",
"67_Nt8JxPUs",
"9sFHjci-PcJ",
"nips_2022_TwuColwZAVj",
"nips_2022_TwuColwZAVj",
"nips_2022_TwuColwZAVj",
"nips_2022_TwuColwZAVj"
] |
nips_2022_kHeotl7q9dU | NS3: Neuro-symbolic Semantic Code Search | Semantic code search is the task of retrieving a code snippet given a textual description of its functionality. Recent work has been focused on using similarity metrics between neural embeddings of text and code. However, current language models are known to struggle with longer, compositional sentences, and multi-step reasoning. To overcome this limitation, we propose supplementing the query sentence with a layout of its semantic structure. The semantic layout is used to break down the final reasoning decision into a series of lower-level decisions. We use a Neural Module Network architecture to implement this idea. We compare our model - $NS^3$ (Neuro-Symbolic Semantic Search) - to a number of baselines, including state-of-the-art semantic code retrieval methods, such as CodeBERT, CuBERT and GraphCodeBERT, and evaluate on two datasets - Code Search Net (CSN) and Code Search and Question Answering (CoSQA). On these datasets, we demonstrate that our approach results in higher performance. We also perform additional studies to show the effectiveness of our modular design when handling compositional queries. | Accept | The paper studies the problem of retrieving code snippets given textual queries (NS3, Neuro-Symbolic Semantic Code Search). The work is motivated by language models’ limitations in encoding longer, compositional queries and in providing a faithful explanation of their reasoning. NS3 supplements the query sentence with a layout of its semantic structure, which is then used to break down the final reasoning decision into a series of lower-level decisions. NS3 outperforms baselines on CodeSearchNet and CoSQA.
Overall, the reviewers liked the motivation of the work. A lot of concerns have been raised with extensive responses from the authors. Some of these concerns remain and have to be acknowledged and clarified in the modified document. **Despite the limitations of the work, I think "accepting" this work outweighs "rejecting" it, assuming that the authors will put due diligence into improving their drafts based on their latest experiments (outlined in the author's response), along with a clear discussion of their limitations.**
Let me start with the strengths:
- All reviewers have found the work interesting.
- Strong empirical results: NS3 is evaluated on two well-established benchmarks, CodeSearchNet and CoSQA. The empirical result is very strong. The proposed method, NS3, outperforms state-of-the-art by a large margin.
- The paper includes detailed experiments on reduced dataset settings and ablation studies.
Here are several points of concern that came up in the reviews:
- NS3 requires rule-based parsing of natural languages (unlike other LM-based baselines such as CodeBERT). The difficulty of this construction ("roughly two weeks") brings up several concerns:
- These rules might not generalize to different programming languages (e.g. Python to C++): on this point, the authors have reported some evidence of generalization, though the reviewer "kn2F" has viewed it as "low (<=41%)" and remained unconvinced. I suggest the authors report these analyses in their main draft. I suspect these numbers need to be compared with the corresponding numbers of their end-to-end baselines.
- These rules might not be transferable to different **natural** languages. The authors have acknowledged this limitation. I suggest the authors be explicit about such limitations in their work.
- These rules might not generalize to longer queries; this is acknowledged in the author's response. I suggest the authors be upfront about these issues in their draft.
| train | [
"XFTppm2CTqW",
"ZhdkB4D_90",
"ZsM7v9gRmsq",
"L8fcn8paQNp",
"HQ9QwGGzPJz",
"cosyre0Pld5",
"VWfeOZlB_O",
"6LNiC6sG3jV",
"BNLkhGqoghr",
"RELTsPKGqXP",
"a9LqXGV2RdC",
"OGmzPbl2eWl",
"65_eFSsW54o",
"haZvCLBEQ-uH",
"cBleEJpIKT",
"HR3y86wZ_Pa",
"6z8_cqWb714",
"3jYg0KpyF6",
"ynjqfwA6Cl",... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"officia... | [
" Thank you for addressing my concerns regarding parsing rules and providing additional experiments about the parser success rate on other datasets. However, given that the parser success rate on other CodeSearchNet datasets is still low (<=41%), a user still needs to add NL parsing rules when they use the proposed... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"ynjqfwA6Cl",
"HR3y86wZ_Pa",
"ndfG3I7-W_l",
"ZNF0I8gzSWU",
"VWfeOZlB_O",
"ZNF0I8gzSWU",
"REjXBoVvCM",
"ndfG3I7-W_l",
"haZvCLBEQ-uH",
"ndfG3I7-W_l",
"REjXBoVvCM",
"ZNF0I8gzSWU",
"bzEuFDADzfa",
"bzEuFDADzfa",
"ZNF0I8gzSWU",
"ZNF0I8gzSWU",
"REjXBoVvCM",
"REjXBoVvCM",
"ndfG3I7-W_l",
... |
nips_2022_Ijq1_a6DESm | On the consistent estimation of optimal Receiver Operating Characteristic (ROC) curve | Under a standard binary classification setting with possible model misspecification, we study the problem of estimating general Receiver Operating Characteristic (ROC) curve, which is an arbitrary set of false positive rate (FPR) and true positive rate (TPR) pairs. We formally introduce the notion of \textit{optimal ROC curve} over a general model space. It is argued that any ROC curve estimation methods implemented over the given model space should target the optimal ROC curve over that space. Three popular ROC curve estimation methods are then analyzed at the population level (i.e., when there are infinite number of samples) under both correct and incorrect model specification. Based on our analysis, they are all consistent when the surrogate loss function satisfies certain conditions and the given model space includes all measurable classifiers. Interestingly, some of these conditions are similar to those that are required to ensure classification consistency. When the model space is incorrectly specified, however, we show that only one method leads to consistent estimation of the ROC curve over the chosen model space. We present some numerical results to demonstrate the effects of model misspecification on the performance of various methods in terms of their ROC curve estimates. | Accept | I have read all the reviews and discussions carefully.
The reviewers all praised the novelty and the significance of the work. The major complaint is that the proofs are too long to be carefully checked. The authors have appropriately addressed some comments from the reviewers. Given the unanimous support, I have decided to recommend the acceptance of the paper. | train | [
"monGsw5zwH1",
"lQPEcSgSlEG",
"jOdcwCNZhsr",
"59-wYhhAtLw",
"p2dmuSFXNB5",
"vWtpCRRHaFZ",
"BfE14_ijNA1",
"R6M8LBB-N8h",
"lYKkzR2BdIY",
"qMhKp1UYNXy",
"OluPV9r9SBF",
"eAT3qJXwUG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your further suggestions. \nWe have made some additional edits to the article\nbased on your suggestions. In particular, \n- We have rewritten the statement on line 86. \n- We will expand the real data examples in the main file if our paper is accepted. ",
" I thank the authors for their response.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"jOdcwCNZhsr",
"vWtpCRRHaFZ",
"R6M8LBB-N8h",
"nips_2022_Ijq1_a6DESm",
"eAT3qJXwUG",
"OluPV9r9SBF",
"qMhKp1UYNXy",
"lYKkzR2BdIY",
"nips_2022_Ijq1_a6DESm",
"nips_2022_Ijq1_a6DESm",
"nips_2022_Ijq1_a6DESm",
"nips_2022_Ijq1_a6DESm"
] |
nips_2022_-o0kPsyzErW | Parallel Tempering With a Variational Reference | Sampling from complex target distributions is a challenging task fundamental to Bayesian inference. Parallel tempering (PT) addresses this problem by constructing a Markov chain on the expanded state space of a sequence of distributions interpolating between the posterior distribution and a fixed reference distribution, which is typically chosen to be the prior. However, in the typical case where the prior and posterior are nearly mutually singular, PT methods are computationally prohibitive. In this work we address this challenge by constructing a generalized annealing path connecting the posterior to an adaptively tuned variational reference. The reference distribution is tuned to minimize the forward (inclusive) KL divergence to the posterior distribution using a simple, gradient-free moment-matching procedure. We show that our adaptive procedure converges to the forward KL minimizer, and that the forward KL divergence serves as a good proxy to a previously developed measure of PT performance. We also show that in the large-data limit in typical Bayesian models, the proposed method improves in performance, while traditional PT deteriorates arbitrarily. Finally, we introduce PT with two references---one fixed, one variational---with a novel split annealing path that ensures stable variational reference adaptation. The paper concludes with experiments that demonstrate the large empirical gains achieved by our method in a wide range of realistic Bayesian inference scenarios. | Accept | The idea of this paper is to tune the reference distribution for parallel tempering to improve efficiency. The key idea is simple: Assume the reference distribution is in the exponential family and use sufficient statistics. Experimental results show that this typically helps in terms of metrics like effective sample size per iteration, though not necessarily in terms of effective samples per second. 
There are theoretical guarantees which each rely on a long list of assumptions which are deferred to the appendix. While I realize the limitations of space, I echo the reviewers that more discussion of the assumptions should be included in the paper, including which should be considered minor and which major. Still this paper proposes a novel approach that is plausibly useful in at least some settings so I recommend acceptance.
A minor point: The font sizes are poorly chosen, to the point of being unreadable if the paper is printed. I had to resort to zooming into individual figures on the computer to reference them, which was quite tedious. | train | [
"ALRz7TDoAHJ",
"ky8jfzWR5Sa",
"ay2Oaq11ud7",
"GqH9CIBdaBDq",
"8dPtgTLkhKaR",
"_G3SsmlMJj5",
"eZ8VjT890fK",
"w-jp_h5SVD",
"KVVZ4s_ZoOD",
"4hFBLAot6t",
"_bWKWY_SuI7",
"3XQ7q2490rd",
"cLUju83TiI8",
"xl8c3IDRA2l"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the feedback! \n\n> Suppose a particle is stuck in a very deep local mode before it explores the whole domain, the reference distribution is optimized mainly based on samples from this local mode, and hence the reference distribution (obtained from moment matching or similar) at this moment actually... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
2
] | [
"ky8jfzWR5Sa",
"ay2Oaq11ud7",
"GqH9CIBdaBDq",
"8dPtgTLkhKaR",
"_G3SsmlMJj5",
"KVVZ4s_ZoOD",
"w-jp_h5SVD",
"xl8c3IDRA2l",
"cLUju83TiI8",
"_bWKWY_SuI7",
"3XQ7q2490rd",
"nips_2022_-o0kPsyzErW",
"nips_2022_-o0kPsyzErW",
"nips_2022_-o0kPsyzErW"
] |
nips_2022_pqCT3L-BU9T | Periodic Graph Transformers for Crystal Material Property Prediction | We consider representation learning on periodic graphs encoding crystal materials. Different from regular graphs, periodic graphs consist of a minimum unit cell repeating itself on a regular lattice in 3D space. How to effectively encode these periodic structures poses unique challenges not present in regular graph representation learning. In addition to being E(3) invariant, periodic graph representations need to be periodic invariant. That is, the learned representations should be invariant to shifts of cell boundaries as they are artificially imposed. Furthermore, the periodic repeating patterns need to be captured explicitly as lattices of different sizes and orientations may correspond to different materials. In this work, we propose a transformer architecture, known as Matformer, for periodic graph representation learning. Our Matformer is designed to be invariant to periodicity and can capture repeating patterns explicitly. In particular, Matformer encodes periodic patterns by efficient use of geometric distances between the same atoms in neighboring cells. Experimental results on multiple common benchmark datasets show that our Matformer outperforms baseline methods consistently. In addition, our results demonstrate the importance of periodic invariance and explicit repeating pattern encoding for crystal representation learning. Our code is publicly available at https://github.com/YKQ98/Matformer. | Accept | This paper had borderline reviews. While the reviewers felt that the method was novel and the presentation very good, they also cited weaknesses such as limited performance gains over baselines and limited novelty. The authors responded in detail to many of the concerns, including with additional experiments, causing several reviewers to increase their scores. We encourage the authors to incorporate aspects of these responses in the final version.
Note: As the AC I will note that I did not find the authors’ very long summary of the discussion helpful in making my decision. I briefly skimmed but did not fully read it. Such a summary/discussion from the authors will obviously be presented through a very biased lens, and so it is much more helpful for me as an AC to directly look at the discussion myself in making a decision, and to ask follow-up questions directly to the reviewers if any clarification is needed. | train | [
"hUJOBrIqvU",
"U57m-Ghf6S",
"X5M2CLzT7wf",
"B0_NN-7E4E",
"4LOZ-a2H6Y",
"K_-LLgDqFz",
"BMzWmTE4LXH",
"DtJPJjPy8GZ",
"k46hHj8Buj",
"3tKGGq9-tj",
"3qO8_-p7BPk4",
"iVrQ65jjN2k",
"B0LwuUcUIFfa",
"q00toC9DCzB",
"Xx6Xa4b4gOcR",
"dWFyVh4j7sP",
"2JA-ADcvjSC0",
"SJXzvF1v-kv",
"ZjL5_XIKE06"... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"... | [
" Dear Reviewer ifKM,\n\nThanks again for your valuable comments and suggestions in your initial review, which helps improve our work a lot. Regarding your main concerns on **comparison with Graphormer**, **clarifications in writing**, and **performances when no periodicity or repetitive patterns in the graph**, we... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"tJ4luCl2umg",
"K_-LLgDqFz",
"B0LwuUcUIFfa",
"DtJPJjPy8GZ",
"DtJPJjPy8GZ",
"covwb7ZpzUSH",
"tJ4luCl2umg",
"3qO8_-p7BPk4",
"iVrQ65jjN2k",
"iVrQ65jjN2k",
"iVrQ65jjN2k",
"g9BjlI_U2wz",
"qWTUs78AOO0",
"qWTUs78AOO0",
"qWTUs78AOO0",
"qWTUs78AOO0",
"4_dMLdjd7SO",
"4_dMLdjd7SO",
"4_dMLdj... |
nips_2022_2EBn01PJh17 | Adaptive Cholesky Gaussian Processes | We present a method to fit exact Gaussian process models to large datasets by considering only a subset of the data. Our approach is novel in that the size of the subset is selected on the fly during exact inference with little computational overhead. From an empirical observation that the log-marginal likelihood often exhibits a linear trend once a sufficient subset of a dataset has been observed, we conclude that many large datasets contain redundant information that only slightly affects the posterior. Based on this, we provide probabilistic bounds on the full model evidence that can identify such subsets. Remarkably, these bounds are largely composed of terms that appear in intermediate steps of the standard Cholesky decomposition, allowing us to modify the algorithm to adaptively stop the decomposition once enough data have been observed. Empirically, we show that our method can be directly plugged into well-known inference schemes to fit exact Gaussian process models to large datasets. | Reject | This paper proposes some nice ideas on speeding up Gaussian process inference based on approximating the marginal using subsamples. However, several reviewers noted gaps and potentially flaws in the technical details. The reviews as well as detailed replies during the rebuttal period will help the authors prepare a stronger revision, but the work is not airtight and is not ready for publication in its current form | val | [
"y_SE3AFLsJx",
"EcccFK9X9Ua",
"i6KDnDEaiw",
"ersnQf7N9Vo",
"XxkndtIIUzD",
"zi3vHFaops",
"FF5IFMI6KwG",
"nyVK0s2jFOg",
"l186Q1BbZ7m",
"-xrhce6ozAp",
"L11pV5T9H1v"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers,\n\nThank you for your feedback on our paper. Having rerun our experiments, we have now uploaded a revised manuscript. In particular, we would like to highlight the changes to Figure 2, where we have corrected a mistake in the timing of the exact GP. The figure now clearly shows that the overhead o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2022_2EBn01PJh17",
"i6KDnDEaiw",
"ersnQf7N9Vo",
"XxkndtIIUzD",
"nyVK0s2jFOg",
"L11pV5T9H1v",
"-xrhce6ozAp",
"l186Q1BbZ7m",
"nips_2022_2EBn01PJh17",
"nips_2022_2EBn01PJh17",
"nips_2022_2EBn01PJh17"
] |
nips_2022_xijYyYFlRIf | GAUDI: A Neural Architect for Immersive 3D Scene Generation | We introduce GAUDI, a generative model capable of capturing the distribution of complex and realistic 3D scenes that can be rendered immersively from a moving camera. We tackle this challenging problem with a scalable yet powerful approach, where we first optimize a latent representation that disentangles radiance fields and camera poses. This latent representation is then used to learn a generative model that enables both unconditional and conditional generation of 3D scenes. Our model generalizes previous works that focus on single objects by removing the assumption that the camera pose distribution can be shared across samples. We show that GAUDI obtains state-of-the-art performance in the unconditional generative setting across multiple datasets and allows for conditional generation of 3D scenes given conditioning variables like sparse image observations or text that describes the scene. | Accept | This paper proposes a framework to learn disentangled latent representation of radiance field and camera pose from trajectories of 3D scenes. The denoising diffusion probabilistic model can be further trained on the extracted latent representation for a conditioned or unconditioned generation. Experiments are conducted to validate the performance of the proposed method. The paper receives a total of 4 reviews. All reviewers lean to (borderline/weakly) accept the paper because of the novelty of the tasks, even though most of them raised concern about the view-consistency problem. AC recommends accepting the paper because the task of generating egocentric video, with an option of text-prompt input, is a novel and interesting direction, and this paper's merit outweighs its flaws. AC urges the authors to improve their paper by taking into account all the feedback from the reviewers. | train | [
"manQuoaDDLq",
"H03vtaX5VMx",
"OOQXCuWAhjO",
"zILC_x1xKkd",
"WclHdFc2RN",
"aaKIyoe62G",
"B2sqmXoQy2Z",
"WFK36BoFvAJ",
"EOYsFy9IDb",
"xQ3yNd-IEg",
"S4ZmkyGPpVC",
"hoYDVrFy0Vm",
"K3qL-Gj05QT",
"jSHGkHSBoxs",
"Hq8i3-o_gtw",
"wS5C2Ee7Q13"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for engaging in discussions with us and for their comments. We address them in the following:\n\n* “I am a little confused by the response the training set of trajectories are scene agnostic. If my understanding is correct, during training, each scene has a different set of cam... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"WclHdFc2RN",
"zILC_x1xKkd",
"aaKIyoe62G",
"xQ3yNd-IEg",
"EOYsFy9IDb",
"hoYDVrFy0Vm",
"EOYsFy9IDb",
"nips_2022_xijYyYFlRIf",
"K3qL-Gj05QT",
"jSHGkHSBoxs",
"Hq8i3-o_gtw",
"wS5C2Ee7Q13",
"nips_2022_xijYyYFlRIf",
"nips_2022_xijYyYFlRIf",
"nips_2022_xijYyYFlRIf",
"nips_2022_xijYyYFlRIf"
] |
nips_2022_dRgHxaOJsiV | 3DB: A Framework for Debugging Computer Vision Models | We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation. We demonstrate, through a wide range of use cases, that 3DB allows users to discover vulnerabilities in computer vision systems and gain insights into how models make decisions. 3DB captures and generalizes many robustness analyses from prior work, and enables one to study their interplay. Finally, we find that the insights generated by the system transfer to the physical world. 3DB will be released as a library alongside a set of examples and documentation. We attach 3DB to the submission. | Accept | Reviewers found the presented work to be a useful framework, the paper to contain adequate experiments and interesting demonstrations of the framework's capabilities, and to be overall well written. They appreciated that significant prior results can be replicated easily with the proposed framework. On the flip side, the paper presents no fundamentally new tools, no new results, and no methodological contributions.
One reviewer pointed out a missing connection to probabilistic programming frameworks and referred to several related works. Checking the referenced papers and tools:
- Pyro is an established library, but is barely connected to any rendering at all and has a different use case than the presented paper.
- Picture seems to never have made it beyond a Julia-based alpha stage with "under heavy initial development and has a lot of known bugs" (author's github) and is thus of no practical importance.
- Other referenced papers have no published source code.
Probabilistic programming is indeed a related direction worth discussing, but the existing tools seem to be far away from the proposed work.
Overall, the presented framework could be an interesting debugging tool for researchers and practitioners. Possibly the biggest concern targets maintenance: the value of an open source framework like the one presented here stems a lot from dedicated ongoing maintenance efforts.
I thus strongly recommend following the suggestions by reviewers and adding tutorials (ideally reproducing all the experiments in the paper), as well as publishing the collection of shapes promised in the rebuttal. | train | [
"80p9YWRfVO8",
"6AqWEq6Qbgm",
"KlvyrU8VDf1c",
"PcFKW_Bg3m",
"qPgGU6p-k4",
"tANkNpVulu-",
"t78ms-7dvyB",
"r1wNvgJoKEU",
"_aOJp1HsmVO",
"mXmaupg0Z0Y",
"_jL100uWT0w",
"hn1-akFIWzG",
"j23-92toEsN"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nThank you for the detailed rebuttal. A key reference for probabilistic programming include - Kulkarni et al, 'Picture: A probabilistic programming language for scene perception', CVPR 2015. There are several frameworks that incorporate 3D graphics and rendering (e.g. Pyro: Deep probabilistic programming fram... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"_aOJp1HsmVO",
"PcFKW_Bg3m",
"t78ms-7dvyB",
"tANkNpVulu-",
"tANkNpVulu-",
"j23-92toEsN",
"hn1-akFIWzG",
"_jL100uWT0w",
"mXmaupg0Z0Y",
"nips_2022_dRgHxaOJsiV",
"nips_2022_dRgHxaOJsiV",
"nips_2022_dRgHxaOJsiV",
"nips_2022_dRgHxaOJsiV"
] |
nips_2022_Xg-yZos9qJQ | Exploration via Elliptical Episodic Bonuses | In recent years, a number of reinforcement learning (RL) methods have been pro- posed to explore complex environments which differ across episodes. In this work, we show that the effectiveness of these methods critically relies on a count-based episodic term in their exploration bonus. As a result, despite their success in relatively simple, noise-free settings, these methods fall short in more realistic scenarios where the state space is vast and prone to noise. To address this limitation, we introduce Exploration via Elliptical Episodic Bonuses (E3B), a new method which extends count-based episodic bonuses to continuous state spaces and encourages an agent to explore states that are diverse under a learned embed- ding within each episode. The embedding is learned using an inverse dynamics model in order to capture controllable aspects of the environment. Our method sets a new state-of-the-art across 16 challenging tasks from the MiniHack suite, without requiring task-specific inductive biases. E3B also outperforms existing methods in reward-free exploration on Habitat, demonstrating that it can scale to high-dimensional pixel-based observations and realistic environments. | Accept | The reviewers appreciated the fundamental questions the paper was asking, clear writing and argumentation of the paper and convincing empirical experiments. While there were concerns about the theoretical rationale of resetting covariance matrix, the empirical results show it is indeed important. For these reasons, I recommend acceptance. | train | [
"wDJMlk2pbdz",
"SZV6S-r5yvL",
"BktZXS1jEGv",
"KLVreeyFmPx",
"1M1wkPib_EH",
"7zQHwAhkdAL",
"xH0yBCZsiSH",
"Q-Yrd7MO4F-",
"rA3uKN4IFn8",
"bgqs7tohK5",
"V-juRSWtWA",
"zzBncHmSULf",
"tvMCHyAsTr",
"iy3rlnYOSV",
"9Hzh34aJrNh",
"Cmjqk5R2P6U"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for reading our response. Although we still respectfully disagree with the current rating, we do appreciate you raising your score. We address the concerns you bring up below:\n\n$~$\n\n1. Although you say “naively using count-based bonus obviously does not make sense”, we would like to point out that t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"SZV6S-r5yvL",
"KLVreeyFmPx",
"1M1wkPib_EH",
"1M1wkPib_EH",
"tvMCHyAsTr",
"V-juRSWtWA",
"rA3uKN4IFn8",
"nips_2022_Xg-yZos9qJQ",
"Cmjqk5R2P6U",
"9Hzh34aJrNh",
"9Hzh34aJrNh",
"iy3rlnYOSV",
"iy3rlnYOSV",
"nips_2022_Xg-yZos9qJQ",
"nips_2022_Xg-yZos9qJQ",
"nips_2022_Xg-yZos9qJQ"
] |
nips_2022_h4kN_apci_R | Probabilistic Missing Value Imputation for Mixed Categorical and Ordered Data | Many real-world datasets contain missing entries and mixed data types including categorical and ordered (e.g. continuous and ordinal) variables. Imputing the missing entries is necessary, since many data analysis pipelines require complete data, but challenging especially for mixed data. This paper proposes a probabilistic imputation method using an extended Gaussian copula model that supports both single and multiple imputation. The method models mixed categorical and ordered data using a latent Gaussian distribution. The unordered characteristics of categorical variables is explicitly modeled using the argmax operator. The method makes no assumptions on the data marginals nor does it require tuning any hyperparameters. Experimental results on synthetic and real datasets show that imputation with the extended Gaussian copula outperforms the current state-of-the-art for both categorical and ordered variables in mixed data. | Accept | The authors propose a single and multiple missing value imputation method for mixed data under the MCAR (missing completely at random) assumption. The method is based on using a latent Gaussian distribution in the form of an ordinary Gaussian copula model for ordered data (ordinal and continuous) and an extended Gaussian copula model proposed by the authors for categorical variables. The authors showcase impressive results for their approach, which outperforms standard imputation methods in both synthetic and real-world experiments.
The reviewers agree that this is overall solid work that makes important technical advances and illustrates the usefulness of the approach. The authors were able to address the main concerns by the reviewers in the discussion. There are some remaining questions, but they appear to be not of a nature that would generally question the results or the overall contribution. Moreover, reviewer JaBb quite strongly supports the acceptance of the manuscript.
Taken together, I think this manuscript is a very good submission and I support its acceptance.
| train | [
"ub25V7QjwgK",
"QtBcIZC-bu",
"NEii1FtBrQc",
"SufR8LO7F8",
"ndecbGw-Fk-",
"y-sryl9C3HH",
"jpEiFlIZJtw",
"Gm_Cubo-GLu",
"AhGUlieRiir",
"QEz_M8UcDAx"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for raising your score! We actually selected the value of M and beta on randomly generated category probabilities before we conducted any experiment reported in this paper. Thus the marginal estimation performance on our selected M and beta are already test data performance! \n\nYou could also refer to ou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"QtBcIZC-bu",
"ndecbGw-Fk-",
"SufR8LO7F8",
"jpEiFlIZJtw",
"QEz_M8UcDAx",
"AhGUlieRiir",
"Gm_Cubo-GLu",
"nips_2022_h4kN_apci_R",
"nips_2022_h4kN_apci_R",
"nips_2022_h4kN_apci_R"
] |
nips_2022__L7f0ySKMWY | Near-Optimal Multi-Agent Learning for Safe Coverage Control | In multi-agent coverage control problems, agents navigate their environment to reach locations that maximize the coverage of some density. In practice, the density is rarely known $\textit{a priori}$, further complicating the original NP-hard problem. Moreover, in many applications, agents cannot visit arbitrary locations due to $\textit{a priori}$ unknown safety constraints. In this paper, we aim to efficiently learn the density to approximately solve the coverage problem while preserving the agents' safety. We first propose a conditionally linear submodular coverage function that facilitates theoretical analysis. Utilizing this structure, we develop MacOpt, a novel algorithm that efficiently trades off the exploration-exploitation dilemma due to partial observability, and show that it achieves sublinear regret. Next, we extend results on single-agent safe exploration to our multi-agent setting and propose SafeMac for safe coverage and exploration. We analyze SafeMac and give first of its kind results: near optimal coverage in finite time while provably guaranteeing safety. We extensively evaluate our algorithms on synthetic and real problems, including a bio-diversity monitoring task under safety constraints, where SafeMac outperforms competing methods. | Accept | This paper presents a novel method for multi-agent coverage control over an unknown density and safety constraints. There is some concern about the level of significance of the approach but it is interesting and sound. There were also concerns about scalability and the use of GPs for density modeling but the authors have sufficiently addressed these in the response and updated paper. The paper would be strengthened by highlighting the contributions and more extensive experiments to show the benefits of the approach in different settings. | train | [
"LEef8Omnr4f",
"D-uIlgUL9jh",
"fHWmzhqwVZ",
"-dxL5SZNZu-",
"SK2RmD-d_l",
"GK_WK4g_bI",
"AETSVcVbsNK",
"-S01qHdgp_t",
"Elm2AwFgU_U",
"ppd4VfPRoPI",
"N8mHxBnyqgz",
"Gtj1-QZAA5",
"bWFSOUxjnLe",
"ifyqmMr8mFh",
"pzo92J0BzW"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > As in the original submission there appears to be a region where the proposed approach is most useful (for low to medium number of samples, PassiveMac is far better), for high number of samples, Two-Stage is about equal or not much worse. \n\nFor the reviewer comment above, we would like to add that,\n\nPassive... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
4,
5
] | [
"D-uIlgUL9jh",
"-dxL5SZNZu-",
"SK2RmD-d_l",
"AETSVcVbsNK",
"Elm2AwFgU_U",
"nips_2022__L7f0ySKMWY",
"nips_2022__L7f0ySKMWY",
"Gtj1-QZAA5",
"bWFSOUxjnLe",
"ifyqmMr8mFh",
"pzo92J0BzW",
"nips_2022__L7f0ySKMWY",
"nips_2022__L7f0ySKMWY",
"nips_2022__L7f0ySKMWY",
"nips_2022__L7f0ySKMWY"
] |
nips_2022_AhbTKBlM7X | Learning State-Aware Visual Representations from Audible Interactions | We propose a self-supervised algorithm to learn representations from egocentric video data. Recently, significant efforts have been made to capture humans interacting with their own environments as they go about their daily activities. In result, several large egocentric datasets of interaction-rich multi-modal data have emerged. However, learning representations from videos can be challenging. First, given the uncurated nature of long-form continuous videos, learning effective representations require focusing on moments in time when interactions take place. Second, visual representations of daily activities should be sensitive to changes in the state of the environment. However, current successful multi-modal learning frameworks encourage representation invariance over time. To address these challenges, we leverage audio signals to identify moments of likely interactions which are conducive to better learning. We also propose a novel self-supervised objective that learns from audible state changes caused by interactions. We validate these contributions extensively on two large-scale egocentric datasets, EPIC-Kitchens-100 and the recently released Ego4D, and show improvements on several downstream tasks, including action recognition, long-term action anticipation, and object state change classification. | Accept | The paper presents self-supervised representation learning from egocentric video data. The reviewers unanimously support the paper. Although WY3G has not updated the rating, the reviewer commented that s/he is upgrading the rating to Weak Accept, making the paper get three unanimous Weak Accept ratings. All three reviewers find the idea of using audio to identify the state changes for learning audio-visual correlation interesting. 
The authors are encouraged to include the additional experimental results they provided during the discussion phase in the final version of the paper.
| train | [
"exX0UhsTzwB",
"ywrlXmcFs9h",
"QJ7JhZAfTRP",
"pw36liRg6Kz",
"1gpPLgnfR-E",
"5hqdfYMZOVe",
"p3UGij_9XJQ",
"WS5AoYHOJWx",
"IF0sazx3LWg",
"YIp7RRNzNuTW",
"fPRbj-_oF2r",
"CVgnH9x-MSv",
"yt1PbmmLAWx",
"hn18lMTENm",
"_cGP3J-mMxi",
"2povrMJwVGS",
"znwbZoJKoWr",
"faC_Uiks94G",
"O6KIDaZsu... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. \n\nI agree with the other two reviewers that the comparison with other state-of-the-art methods is not sufficient in the original submission and the additional comparisons provided should be added into the paper. I agree with Reviewer coBj that some examples should also be provided in... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"IF0sazx3LWg",
"QJ7JhZAfTRP",
"hn18lMTENm",
"O6KIDaZsuRb",
"faC_Uiks94G",
"znwbZoJKoWr",
"nips_2022_AhbTKBlM7X",
"YIp7RRNzNuTW",
"O6KIDaZsuRb",
"fPRbj-_oF2r",
"CVgnH9x-MSv",
"yt1PbmmLAWx",
"faC_Uiks94G",
"_cGP3J-mMxi",
"znwbZoJKoWr",
"nips_2022_AhbTKBlM7X",
"nips_2022_AhbTKBlM7X",
... |
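The audible-interactions record above learns by aligning video and audio around moments of interaction. As a rough, generic illustration of such multi-modal alignment — a standard InfoNCE contrastive loss on toy embeddings, not the paper's exact state-change objective — matched video/audio pairs act as positives and all other pairings as negatives:

```python
import numpy as np

def info_nce(video_emb, audio_emb, tau=0.1):
    """Generic InfoNCE contrastive loss between paired video/audio embeddings:
    pair (i, i) is the positive, every (i, j != i) pair is a negative."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    logits = v @ a.T / tau
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_probs)))            # -log softmax of positives

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
aligned = info_nce(z, z + 0.01 * rng.standard_normal((8, 16)))  # matched pairs
mismatched = info_nce(z, rng.standard_normal((8, 16)))          # random pairs
```

As expected for a contrastive objective, the loss is much lower when the two modalities' embeddings actually correspond.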
nips_2022_agNTJU1QNw | Geometric Order Learning for Rank Estimation | A novel approach to rank estimation, called geometric order learning (GOL), is proposed in this paper. First, we construct an embedding space, in which the direction and distance between objects represent order and metric relations between their ranks, by enforcing two geometric constraints: the order constraint compels objects to be sorted according to their ranks, while the metric constraint makes the distance between objects reflect their rank difference. Then, we perform the simple $k$ nearest neighbor ($k$-NN) search in the embedding space to estimate the rank of a test object. Moreover, to assess the quality of embedding spaces for rank estimation, we propose a metric called discriminative ratio for ranking (DRR). Extensive experiments on facial age estimation, historical color image (HCI) classification, and aesthetic score regression demonstrate that GOL constructs effective embedding spaces and thus yields excellent rank estimation performances. | Accept | This paper proposes a new approach named geometric order learning (GOL) for rank estimation. Reviewers found that the idea is novel and the paper is well written. The authors have also clearly addressed most questions from reviewers in their responses. Thus, I recommend the acceptance of this paper. | train | [
"HRU5DR_BdcZ",
"JSUasE3CrRM",
"c9QKOVAl_zD",
"tqRlxOeeu-x",
"EDFS14vald7",
"s4cLQWDL0eN",
"ks-RMcveOBe",
"qko_CUZV8Z2",
"1OYc7F3esg2",
"Db3X9rhM7_a",
"n769zbUCOJ_",
"pCWKa3P8qh9",
"99DWMlgx03q",
"3fcvCxF6E9_"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for your constructive and insightful review on our paper. We do appreciate it. ",
" Thanks for the clarifications. I don't have any other questions. ",
" Thank you for your feedback. We appreciate it greatly. \n***\n**Asymmetric $v_\\mathrm{f}$ and $v_\\mathrm{b}$:** \n\n> What we meant by 'st... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"JSUasE3CrRM",
"c9QKOVAl_zD",
"tqRlxOeeu-x",
"s4cLQWDL0eN",
"nips_2022_agNTJU1QNw",
"3fcvCxF6E9_",
"3fcvCxF6E9_",
"99DWMlgx03q",
"pCWKa3P8qh9",
"nips_2022_agNTJU1QNw",
"nips_2022_agNTJU1QNw",
"nips_2022_agNTJU1QNw",
"nips_2022_agNTJU1QNw",
"nips_2022_agNTJU1QNw"
] |
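The GOL record above estimates the rank of a test object by a plain k-NN search in the learned embedding space. A minimal sketch of that inference step, using made-up 2-D embeddings in place of a trained embedding network:

```python
import numpy as np

def knn_rank_estimate(query, embeddings, ranks, k=3):
    """Estimate the rank of `query` as the mean rank of its k nearest
    training embeddings under Euclidean distance (the k-NN search step)."""
    dists = np.linalg.norm(embeddings - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(ranks[nearest]))

# Toy embedding space: objects laid out along a line, ranks increasing with x,
# mimicking the order/metric constraints described in the abstract.
embeddings = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
ranks = np.array([1, 2, 3, 4])
est = knn_rank_estimate(np.array([1.1, 0.0]), embeddings, ranks, k=3)
```

Here the three nearest neighbors have ranks 2, 3, and 1, so the estimate is their mean, 2.0.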
nips_2022_pBpwRkEIjR3 | Enhanced Bilevel Optimization via Bregman Distance | Bilevel optimization has been recently used in many machine learning problems such as hyperparameter optimization, policy optimization, and meta learning. Although many bilevel optimization methods have been proposed, they still suffer from the high computational complexities and do not consider the more general bilevel problems with nonsmooth regularization. In the paper, thus, we propose a class of enhanced bilevel optimization methods with using Bregman distance to solve bilevel optimization problems, where the outer subproblem is nonconvex and possibly nonsmooth, and the inner subproblem is strongly convex. Specifically, we propose a bilevel optimization method based on Bregman distance (BiO-BreD) to solve deterministic bilevel problems, which achieves a lower computational complexity than the best known results. Meanwhile, we also propose a stochastic bilevel optimization method (SBiO-BreD) to solve stochastic bilevel problems based on stochastic approximated gradients and Bregman distance. Moreover, we further propose an accelerated version of SBiO-BreD method (ASBiO-BreD) using the variance-reduced technique, which can achieve a lower computational complexity than the best known computational complexities with respect to condition number $\kappa$ and target accuracy $\epsilon$ for finding an $\epsilon$-stationary point. We conduct data hyper-cleaning task and hyper-representation learning task to demonstrate that our new algorithms outperform related bilevel optimization approaches. | Accept | The paper studies bilevel optimization problems, provides three algorithms for different settings, and improves the convergence analysis in terms of the condition number. In addition, numerical experiments are conducted that provide illustration of the effectiveness of the algorithms. 
Three reviewers all agree that the paper should be published as it contributes to the literature and will be of interest to the NeurIPS audience.
When preparing the final version of the manuscript, please incorporate the discussion that addressed the reviewers' comments either in the main text or the appendix. | train | [
"iIODO0Y2XsU",
"p8Rphr5lIxH",
"M3gwXQ53dc",
"6fSohfQfZRG",
"9p0W9d6mWbI",
"jhLtNORQNhsG",
"eOgVesah4x3",
"NfOFi13e2Ws",
"smplBqKfVVX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. You resolve my questions, but I want to remain my score.",
" Thanks for your response. Questions have been solved.",
" I really appreciate the author's response. All my questions are answered.",
" Thanks so much for your comments.\n\n**Q1**: Weaknesses: 1) This paper only mention... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"jhLtNORQNhsG",
"9p0W9d6mWbI",
"6fSohfQfZRG",
"smplBqKfVVX",
"NfOFi13e2Ws",
"eOgVesah4x3",
"nips_2022_pBpwRkEIjR3",
"nips_2022_pBpwRkEIjR3",
"nips_2022_pBpwRkEIjR3"
] |
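The bilevel record above replaces Euclidean proximal steps with Bregman-distance steps. A generic illustration of such a step — standard mirror descent on the probability simplex with the negative-entropy Bregman distance, not the authors' BiO-BreD update — which reduces to a multiplicative-weights rule:

```python
import numpy as np

def mirror_descent_step(x, grad, eta):
    """One mirror-descent step on the probability simplex using the
    negative-entropy Bregman distance: x_i <- x_i * exp(-eta * g_i) / Z."""
    y = x * np.exp(-eta * grad)
    return y / y.sum()

x = np.array([0.25, 0.25, 0.5])
x_new = mirror_descent_step(x, grad=np.array([1.0, 0.0, 0.0]), eta=0.5)
```

The step stays on the simplex by construction and shrinks the coordinate with the largest gradient, which is the practical appeal of matching the Bregman geometry to the constraint set.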
nips_2022_Vg_02McCRnY | Optimal Comparator Adaptive Online Learning with Switching Cost | Practical online learning tasks are often naturally defined on unconstrained domains, where optimal algorithms for general convex losses are characterized by the notion of comparator adaptivity. In this paper, we design such algorithms in the presence of switching cost - the latter penalizes the typical optimism in adaptive algorithms, leading to a delicate design trade-off. Based on a novel dual space scaling strategy discovered by a continuous-time analysis, we propose a simple algorithm that improves the existing comparator adaptive regret bound [ZCP22a] to the optimal rate. The obtained benefits are further extended to the expert setting, and the practicality of the proposed algorithm is demonstrated through a sequential investment task. | Accept | This is a technical, but interesting paper on online linear optimization. The nice contribution is a control of the switching cost (moving from one action to another) which makes the problem highly non-trivial.
The contribution is to consider a "smaller" set of assumptions (hence a weaker asymptotic result) than in the existing literature, but this allows for better parametric rates.
This might not be the most breathtaking paper, but the reviewers and myself find it sufficiently interesting to be accepted at NeurIPS.
Congratulations! | train | [
"lvu6W2CKFPE",
"h2091y7giwk",
"12W-a6qSun",
"MsrhMKvdx7j",
"SLFrIITZ3yY",
"qZDhzxZGR0Z",
"6UAlOV0asS",
"6JbO2KjduAq",
"SAI4Xa3y7L",
"3algeuBdqST",
"kXxikA6Sogq",
"VuQRq1NJWBd"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I think the authors have properly addressed my questions. It would be very nice if the authors could incorporate those discussions in the revised version. So I will increase my score to acceptance.",
" Thank you for the detailed response. \n\nAlthough I still worry that the techniques used in this paper is some... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
4,
3,
4,
1
] | [
"qZDhzxZGR0Z",
"MsrhMKvdx7j",
"VuQRq1NJWBd",
"kXxikA6Sogq",
"3algeuBdqST",
"SAI4Xa3y7L",
"6JbO2KjduAq",
"nips_2022_Vg_02McCRnY",
"nips_2022_Vg_02McCRnY",
"nips_2022_Vg_02McCRnY",
"nips_2022_Vg_02McCRnY",
"nips_2022_Vg_02McCRnY"
] |
nips_2022_3uj_8G7fxgs | Multi-objective Deep Data Generation with Correlated Property Control | Developing deep generative models has been an emerging field due to the ability to model and generate complex data for various purposes, such as image synthesis and molecular design. However, the advance of deep generative models is limited by the challenges to generate objects that possess multiple desired properties because: 1) the existence of complex correlation among real-world properties is common but hard to identify; 2) controlling an individual property enforces an implicit partial control of its correlated properties, which is difficult to model; 3) controlling multiple properties under various manners simultaneously is hard and underexplored. We address these challenges by proposing a novel deep generative framework that recovers semantics and correlation of properties through disentangled latent vectors. The correlation is handled via an explainable mask pooling layer, and properties are precisely retained by the generated objects via the mutual dependence between latent vectors and properties. Our generative model preserves properties of interest while handling correlation and conflicts of properties under a multi-objective optimization framework. The experiments demonstrate our model's superior performance in generating objects with desired properties. | Accept | All three reviewers argue to accept (albeit one borderline).
Extremely substantial response from the authors addressing individual reviewer comments, which led to reviewers raising their scores, and to a much revised paper with new experiments. This effectively led to a second round of review, with engaged reviewers who confirmed their concerns have largely been met. | train | [
"v-1OuWhUlq",
"gmI6fhZ_9Au",
"9l60dqpdrI-",
"vUIthqPDPTj",
"JfblpkrDxk",
"i4BDPd-kdgP",
"MV4Wb8WcfezO",
"jmu7w7SxDQc",
"sTiNmqqj8UP",
"4crt81Gj3iO",
"Ii9mQgzUXkyT",
"2TzjVAsWa-w",
"E64hFLLC5eP9",
"ZS6cvsv2QMi",
"veiF-WnHrWP",
"3gnEy8lYbDS",
"m3Mgv9w3Xtt",
"smBe3JwDx2b",
"GU0VImi3... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" My comments have been properly addressed.",
" We sincerely thank the reviewer for approving our clarification and we are glad to further discuss the evaluation of generated data.\n\nEvaluation metrics include novelty, uniqueness, diversity, validity and similarity to original distribution such as Fréchet Incept... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
2
] | [
"MV4Wb8WcfezO",
"vUIthqPDPTj",
"ZS6cvsv2QMi",
"JfblpkrDxk",
"i4BDPd-kdgP",
"E64hFLLC5eP9",
"m3Mgv9w3Xtt",
"GU0VImi3IEs",
"smBe3JwDx2b",
"smBe3JwDx2b",
"smBe3JwDx2b",
"smBe3JwDx2b",
"smBe3JwDx2b",
"veiF-WnHrWP",
"GU0VImi3IEs",
"nips_2022_3uj_8G7fxgs",
"nips_2022_3uj_8G7fxgs",
"nips_... |
nips_2022_lgNGDjWRTo- | Deep Generative Model for Periodic Graphs | Periodic graphs are graphs consisting of repetitive local structures, such as crystal nets and polygon mesh. Their generative modeling has great potential in real-world applications such as material design and graphics synthesis. Classical models either rely on domain-specific predefined generation principles (e.g., in crystal net design), or follow geometry-based prescribed rules. Recently, deep generative models have shown great promise in automatically generating general graphs. However, their advancement into periodic graphs has not been well explored due to several key challenges in 1) maintaining graph periodicity; 2) disentangling local and global patterns; and 3) efficiency in learning repetitive patterns. To address them, this paper proposes Periodical-Graph Disentangled Variational Auto-encoder (PGD-VAE), a new deep generative model for periodic graphs that can automatically learn, disentangle, and generate local and global graph patterns. Specifically, we develop a new periodic graph encoder consisting of global-pattern encoder and local-pattern encoder that ensures to disentangle the representation into global and local semantics. We then propose a new periodic graph decoder consisting of local structure decoder, neighborhood decoder, and global structure decoder, as well as the assembler of their outputs that guarantees periodicity. Moreover, we design a new model learning objective that helps ensure the invariance of local-semantic representations for the graphs with the same local structure. Comprehensive experimental evaluations have been conducted to demonstrate the effectiveness of the proposed method. | Accept | This paper proposed an interesting model for generating periodic graphs. 
The hierarchical representation of periodic graphs decomposes into local structure and global structure, and greatly reduces the size of the structure to be modeled, which is an interesting contribution. All reviewers liked the idea of this representation and lean toward acceptance.
I would, however, encourage the authors to include some discussion of the limitations of this approach, in particular the dependence on a stable node ordering, and the fact that the model can only generate perfectly periodic graphs (what if the graph is mostly periodic but with some noise?). | train | [
"jjaMyjWQ6dr",
"WE3ycH9mL5",
"yYujJgMMkbX",
"X-fjVX-ucqA",
"fOzYVbU0WV",
"NUJKYywLdl",
"YsfKj3yNxLw",
"uh7UPxbEDWy",
"n__G6ZvxzQB",
"exo0JpbvG_5",
"F3N1QrX4p9S",
"iFo3zb3y7dR",
"anbOKfRDUbi",
"keqGZfvVTeX",
"EtEJwQ9xCNM",
"M1lt_ADs2UVP",
"tSRw8S7UH8b",
"KRQFV49EHnO",
"SnyJmnu75DC... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" My comments have been properly addressed.",
" We sincerely appreciate for reviewers' comments and feedbacks that made our paper further improved when we were addressing their concerns. We are also glad that reviewers approved our clarifications and are satisfied with how we addressed their comments. The followi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
4
] | [
"exo0JpbvG_5",
"nips_2022_lgNGDjWRTo-",
"keqGZfvVTeX",
"tSRw8S7UH8b",
"NUJKYywLdl",
"YsfKj3yNxLw",
"q5Fs1EMrkF",
"n__G6ZvxzQB",
"q5Fs1EMrkF",
"KRQFV49EHnO",
"q5Fs1EMrkF",
"q5Fs1EMrkF",
"q5Fs1EMrkF",
"EtEJwQ9xCNM",
"SnyJmnu75DC",
"tSRw8S7UH8b",
"nips_2022_lgNGDjWRTo-",
"nips_2022_lg... |
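The PGD-VAE record above decodes a periodic graph by assembling repeated local structures under a global pattern. One simple assembler in that spirit (an illustrative construction, not the paper's exact decoder) tiles the local adjacency once per global node and wires corresponding nodes of adjacent copies according to the global adjacency:

```python
import numpy as np

def assemble_periodic(local_adj, global_adj):
    """Tile `local_adj` once per global node (block diagonal) and connect
    corresponding nodes of adjacent copies according to `global_adj`."""
    k = global_adj.shape[0]      # number of repeated units
    n = local_adj.shape[0]       # nodes per unit
    A = np.kron(np.eye(k, dtype=int), local_adj)    # repeated local blocks
    A += np.kron(global_adj, np.eye(n, dtype=int))  # inter-block links
    return A

local = np.array([[0, 1], [1, 0]])                   # a 2-node local unit
ring = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # 3 units in a triangle
A = assemble_periodic(local, ring)
```

The Kronecker structure guarantees periodicity: every copy of the local block is identical, and only the global pattern decides how copies connect.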
nips_2022_fp33Nsh0O5 | Deep Generalized Schrödinger Bridge | Mean-Field Game (MFG) serves as a crucial mathematical framework in modeling the collective behavior of individual agents interacting stochastically with a large population. In this work, we aim at solving a challenging class of MFGs in which the differentiability of these interacting preferences may not be available to the solver, and the population is urged to converge exactly to some desired distribution. These setups are, despite being well-motivated for practical purposes, complicated enough to paralyze most (deep) numerical solvers. Nevertheless, we show that Schrödinger Bridge — as an entropy-regularized optimal transport model — can be generalized to accepting mean-field structures, hence solving these MFGs. This is achieved via the application of Forward-Backward Stochastic Differential Equations theory, which, intriguingly, leads to a computational framework with a similar structure to Temporal Difference learning. As such, it opens up novel algorithmic connections to Deep Reinforcement Learning that we leverage to facilitate practical training. We show that our proposed objective function provides necessary and sufficient conditions to the mean-field problem. Our method, named Deep Generalized Schrödinger Bridge (DeepGSB), not only outperforms prior methods in solving classical population navigation MFGs, but is also capable of solving 1000-dimensional opinion depolarization, setting a new state-of-the-art numerical solver for high-dimensional MFGs. Our code will be made available at https://github.com/ghliu/DeepGSB. | Accept | This paper proposes a novel, simple but effective algorithm to solve Mean Field Games. Reviewers found the paper well written, presenting an exact and flexible method. Despite its simplicity, the method solves a wide class of MFGs. Authors were also able to demonstrate the computational advantage of their method, providing good experimental data. | train | [
"gAWjrjza-hM",
"gBTXQUzSeS1",
"Bx3B5odoKgC",
"geMijoTY4Rh",
"pq9PwlJYopu",
"uhEmeAozQbH",
"ioWyiu1yePz",
"fDp1CYCWlf",
"iv6LpODV-h",
"236QopZ6zHm",
"QqluvKeGJfDt",
"3yGUYZeDsQ9",
"5IS0Q1Td2qi",
"gIJudO1oveOM",
"3iHrvuSBbV7y",
"7xqRbkq4hE",
"IvGjllPnU4",
"kZzHQhayjgCB",
"xPs93ck2P... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" We thank the reviewer for the reply. We are pleased that the reviewer appreciated our clarifications, and we greatly appreciate the reviewer's willingness to raise the score. To ease the reviewer's burden, we provide the following list of content, linking each of our responses to the [_section, line, page_] in th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"gBTXQUzSeS1",
"gIJudO1oveOM",
"7xqRbkq4hE",
"pq9PwlJYopu",
"uhEmeAozQbH",
"ioWyiu1yePz",
"iv6LpODV-h",
"236QopZ6zHm",
"236QopZ6zHm",
"QqluvKeGJfDt",
"xNPEYkus7s5",
"5IS0Q1Td2qi",
"hWkBAkB7eew",
"3iHrvuSBbV7y",
"fdCEu3U3JMQ",
"IvGjllPnU4",
"xPs93ck2PKY",
"nips_2022_fp33Nsh0O5",
"... |
nips_2022_espX_4CLr46 | Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources | Extraction of latent sources of complex stimuli is critical for making sense of the world. While the brain solves this blind source separation (BSS) problem continuously, its algorithms remain unknown. Previous work on biologically-plausible BSS algorithms assumed that observed signals are linear mixtures of statistically independent or uncorrelated sources, limiting the domain of applicability of these algorithms. To overcome this limitation, we propose novel biologically-plausible neural networks for the blind separation of potentially dependent/correlated sources. Differing from previous work, we assume some general geometric, not statistical, conditions on the source vectors allowing separation of potentially dependent/correlated sources. Concretely, we assume that the source vectors are sufficiently scattered in their domains which can be described by certain polytopes. Then, we consider recovery of these sources by the Det-Max criterion, which maximizes the determinant of the output correlation matrix to enforce a similar spread for the source estimates. Starting from this normative principle, and using a weighted similarity matching approach that enables arbitrary linear transformations adaptable by local learning rules, we derive two-layer biologically-plausible neural network algorithms that can separate mixtures into sources coming from a variety of source domains. We demonstrate that our algorithms outperform other biologically-plausible BSS algorithms on correlated source separation problems. | Accept | This paper presents a method for blind separation of correlated sources, which is a challenging task. Applying the weight similarity matching approach to the Det-Max optimization, the authors develop a biologically-plausible two-layered neural network that can separate correlated sources from their linear mixture. 
All of the reviewers agree that the paper is well written and has a solid contribution in BSS. The approach in this paper is a general framework that applies to various source domains. A downside was the experiments on real-world data, which were improved during the author rebuttal period. Therefore, I am pleased to suggest that the paper be accepted.
| train | [
"wYmH7uFjksk",
"7V8_f7Sq2mQ",
"zFYJ_t85Pl",
"yD_v2X7B8RW",
"Rkt2LG-MDY",
"61N0b2AT2X",
"RCE7SR3dbr8",
"-QyJls9z8bx",
"TNrUMwo_pfZ",
"NdNNupLuiWD",
"x-a9saNKa8",
"NCT8otgP_OZ",
"JU57hLkDIAO",
"yEnR1YJEAYV",
"eIxESI6cv5t",
"rE-G4rASliQ",
"S6kPhx3_UB",
"YDjWuJaG8vz",
"6Nq9meKaBg",
... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" We would like to thank you for your useful feedback and suggestions.",
" We would like to thank you for your useful feedback and suggestions.",
" We would like to thank you for your useful feedback and suggestions.",
" We would like to thank you for the useful feedback and suggestions. In the final version... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
1
] | [
"-QyJls9z8bx",
"Rkt2LG-MDY",
"TNrUMwo_pfZ",
"61N0b2AT2X",
"j-Wrs7f8E6",
"S6kPhx3_UB",
"rE-G4rASliQ",
"YDjWuJaG8vz",
"eIxESI6cv5t",
"j-Wrs7f8E6",
"j-Wrs7f8E6",
"j-Wrs7f8E6",
"8azOK4QpBcA",
"8azOK4QpBcA",
"8azOK4QpBcA",
"sJ6r3hLnie9",
"sJ6r3hLnie9",
"6Nq9meKaBg",
"nips_2022_espX_4C... |
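The BSS record above recovers correlated sources via the Det-Max criterion, which maximizes the determinant of the output correlation matrix to enforce well-spread estimates. A small numeric check of the principle (a sketch of the objective only, not the biologically-plausible network): decorrelated, well-scattered outputs score higher than collapsed, highly correlated ones:

```python
import numpy as np

def det_max_score(Y):
    """Log-determinant of the sample output correlation matrix
    R = Y Y^T / N, with sources in rows and samples in columns."""
    R = Y @ Y.T / Y.shape[1]
    return float(np.linalg.slogdet(R)[1])

rng = np.random.default_rng(0)
spread = rng.standard_normal((2, 1000))                        # well-scattered outputs
collapsed = np.vstack([spread[0], 0.95 * spread[0] + 0.05 * spread[1]])
well = det_max_score(spread)
bad = det_max_score(collapsed)
```

Because the determinant of a correlation matrix shrinks toward zero as outputs become linearly dependent, maximizing it pushes the separator away from degenerate (mixed or duplicated) solutions.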
nips_2022_QNjyrDBx6tz | Practical Adversarial Multivalid Conformal Prediction | We give a simple, generic conformal prediction method for sequential prediction that achieves target empirical coverage guarantees on adversarial data. It is computationally lightweight --- comparable to split conformal prediction --- but does not require having a held-out validation set, and so all data can be used for training models from which to derive a conformal score. Furthermore, it gives stronger than marginal coverage guarantees in two ways. First, it gives threshold-calibrated prediction sets that have correct empirical coverage even conditional on the threshold used to form the prediction set from the conformal score. Second, the user can specify an arbitrary collection of subsets of the feature space --- possibly intersecting --- and the coverage guarantees will also hold conditional on membership in each of these subsets. We call our algorithm MVP, short for MultiValid Prediction. We give both theory and an extensive set of empirical evaluations. | Accept | This paper proposes a conformal prediction based method for sequential prediction, relaxing the exchangeability assumption. It is robust to distribution shift, and achieves group-conditional coverage guarantees. The method is efficient, novel, and outperforms existing methods. All the reviewers, including myself, find the paper a solid contribution to the methodology and analysis, hence a clear accept. | val | [
"JnRYZu6KNfK",
"oyz1Rmm_Qsv",
"_wcu_bMmSkmd",
"y67dsvG4IEy",
"o8J9it7g9Xpv",
"XNedzv0CP3",
"EshtFYE56e",
"0R_FGzdN52p",
"sY2oxEv1w_Y",
"NAmLlkFpdJd",
"Wfr1IhR1vSG",
"ISHdf4AHB_"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you --- and thanks once again for the time you spent reviewing!",
" Thank you for the clarifications! My concerns are resolved. I recommend acceptance.",
" I appreciate the detailed rebuttal and clarifications.",
" Thank you!",
" Thank you for the clarifications. While most of my concerns are resolv... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"oyz1Rmm_Qsv",
"sY2oxEv1w_Y",
"0R_FGzdN52p",
"o8J9it7g9Xpv",
"XNedzv0CP3",
"ISHdf4AHB_",
"Wfr1IhR1vSG",
"Wfr1IhR1vSG",
"NAmLlkFpdJd",
"nips_2022_QNjyrDBx6tz",
"nips_2022_QNjyrDBx6tz",
"nips_2022_QNjyrDBx6tz"
] |
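The MVP record above maintains target empirical coverage on adversarial sequences. The coverage-tracking idea can be illustrated in a heavily simplified, adaptive-conformal-style form (not the MVP algorithm itself, which additionally gives threshold- and group-conditional guarantees): adjust the score threshold online so the running coverage approaches the target.

```python
def calibrate_threshold(scores, target=0.9, eta=0.05, q=0.5):
    """Online threshold update: raise q after a miss (score > q),
    lower it slightly after a cover, so long-run coverage tracks `target`."""
    covered = 0
    for s in scores:
        hit = s <= q
        covered += hit
        q += eta * (target - hit)   # +eta*target on a miss, -eta*(1-target) on a cover
        q = max(q, 0.0)
    return q, covered / len(scores)

scores = [((7 * i) % 100) / 100 for i in range(2000)]  # deterministic "adversarial" stream
q, cov = calibrate_threshold(scores)
```

The update has zero drift exactly when the empirical coverage equals the target, so the threshold self-corrects without any exchangeability assumption on the scores.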
nips_2022_KxVSnZVuZZ | Constrained Langevin Algorithms with L-mixing External Random Variables | Langevin algorithms are gradient descent methods augmented with additive noise, and are widely used in Markov Chain Monte Carlo (MCMC) sampling, optimization, and machine learning. In recent years, the non-asymptotic analysis of Langevin algorithms for non-convex learning has been extensively explored. For constrained problems with non-convex losses over a compact convex domain with IID data variables, the projected Langevin algorithm achieves a deviation of $O(T^{-1/4} (\log T)^{1/2})$ from its target distribution \cite{lamperski2021projected} in $1$-Wasserstein distance. In this paper, we obtain a deviation of $O(T^{-1/2} \log T)$ in $1$-Wasserstein distance for non-convex losses with $L$-mixing data variables and polyhedral constraints (which are not necessarily bounded). This improves on the previous bound for constrained problems and matches the best-known bound for unconstrained problems.
| Accept | After going through all the reviews, rebuttals, and discussions in detail I am recommending a borderline acceptance for the paper. More precisely, the technical contribution of the paper is significant, even though there have been some concerns raised regarding the motivation/applicability of the setup. However, I do believe that the merits of the paper outweigh its limitations. I recommend the authors to implement all the suggested changes. | train | [
"K6LmPBMtohrr",
"fEVm8__a9zq4",
"x_9DpKnkbGx",
"QcpBf0Zb559",
"VSgLg21mLfd",
"y-PbbUf7NVCh",
"sFp5nKQbQeR",
"3fVWpGPMLPY",
"LWRuIv1_Zaa",
"RV0n2SvFb4n",
"z_B--AlSmpx"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I stand by my original evaluation.",
" (5) Response to Q2:\n\n- Sampling from Gibbs distribution, i.e. Langevin algorithms is used for high-dimensional and large-scale sampling applications. Gibbs distribution is an invariant distribution of the Langevin dynamics. And Langevin dynami... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"sFp5nKQbQeR",
"LWRuIv1_Zaa",
"LWRuIv1_Zaa",
"3fVWpGPMLPY",
"RV0n2SvFb4n",
"RV0n2SvFb4n",
"z_B--AlSmpx",
"nips_2022_KxVSnZVuZZ",
"nips_2022_KxVSnZVuZZ",
"nips_2022_KxVSnZVuZZ",
"nips_2022_KxVSnZVuZZ"
] |
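The Langevin record above analyzes the projected Langevin algorithm: a gradient step plus scaled Gaussian noise, followed by Euclidean projection onto the constraint set. A sketch for a box constraint (the paper treats general polyhedral constraints; a box is the simplest instance, where projection is coordinate-wise clipping), run here on the quadratic loss f(x) = ||x||^2:

```python
import numpy as np

def projected_langevin_step(x, grad, eta, beta, lo, hi, rng):
    """x <- Proj_[lo, hi]( x - eta*grad + sqrt(2*eta/beta) * N(0, I) )."""
    noise = rng.standard_normal(x.shape)
    y = x - eta * grad + np.sqrt(2.0 * eta / beta) * noise
    return np.clip(y, lo, hi)           # Euclidean projection onto the box

rng = np.random.default_rng(1)
x = np.array([2.0, -2.0])               # start outside the feasible box
for _ in range(500):
    x = projected_langevin_step(x, grad=2.0 * x, eta=0.01, beta=10.0,
                                lo=-1.0, hi=1.0, rng=rng)
```

Every iterate after the first projection is feasible by construction, and the chain samples (approximately) from the Gibbs distribution proportional to exp(-beta * f) restricted to the box.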
nips_2022_Jupoos_K4xt | Equivariant Networks for Zero-Shot Coordination | Successful coordination in Dec-POMDPs requires agents to adopt robust strategies and interpretable styles of play for their partner. A common failure mode is symmetry breaking, when agents arbitrarily converge on one out of many equivalent but mutually incompatible policies. Commonly these examples include partial observability, e.g. waving your right hand vs. left hand to convey a covert message. In this paper, we present a novel equivariant network architecture for use in Dec-POMDPs that prevents the agent from learning policies which break symmetries, doing so more effectively than prior methods. Our method also acts as a "coordination-improvement operator" for generic, pre-trained policies, and thus may be applied at test-time in conjunction with any self-play algorithm. We provide theoretical guarantees of our work and test on the AI benchmark task of Hanabi, where we demonstrate our methods outperforming other symmetry-aware baselines in zero-shot coordination, as well as able to improve the coordination ability of a variety of pre-trained policies. In particular, we show our method can be used to improve on the state of the art for zero-shot coordination on the Hanabi benchmark. | Accept | This paper proposes a test-time algorithmic modification to address a multi-agent coordination problem where agents choose incompatible strategies due to symmetries in the environment, showing that it is applicable to ZSC tasks like Hanabi. The proposed symmetrizer is applied to LSTM recurrent policies.
Reviews were mixed for this paper, and I really valued the in-depth discussion amongst reviewers and between reviewers and authors. I am recommending acceptance primarily based on the interesting discussion and new experimental results surfaced between reviewer Y7M6 and the authors:
1. The feedback from reviewer Y7M6 and authors increased my confidence in the statistical correctness of scores wrt baseline, and it is clear that the authors ensured that the baselines were computed fairly.
2. The rebuttal-revised version of the paper shows that the symmetrizer improves upon any self-play algorithm, such as OBL level 4 and 5. Reviewer Y7M6 points out "I think it would be better to claim that this work can improve state-of-the-art competitive models to become better instead of claiming this work achieves the new state-of-the-art ZSC performance in Hanabi. ". I believe the authors have shown that with their experiments on OBL L5.
The primary critique is that many of the scores were changed late in the review process, which does call into question whether the empirical rigor should be re-examined via another round of review. The authors have gone to great lengths to improve the rigour in the rebuttal. Beyond the specifics of empirical evaluation, I think the idea presented in the paper is interesting and worth sharing with the community: this paper presents an "explicit symmetrizer", whereas to my knowledge, most of MARL focuses on "implicit symmetrizers" like OBL, well-tuned MAPPO, data augmentation, neural fictitious play-like schemes. Furthermore, the technique appears to be quite complementary and can be combined with existing MARL algorithms.
Some asks:
- this paper uses a combination of training and inference time symmetrizers, so please make it clear in the final version of the paper how each of these contribute to the stated performance gains.
- it would be helpful to mention some of the statistical comments around "In our experience, a pool of 12 policies achieving 2.52 is particularly unlucky and is the result of several seeds that happen to completely misalign with all other policies." in an appendix, even if it's sort of anecdotal.
| train | [
"edgRxKdnWiQ",
"dfzwnXwC_eY",
"8H1dFmMA4dl",
"7tiQtRAYOCq",
"ClXUejf9Z3t",
"1A_HzTdnQryg",
"i9D_rdgrazr",
"etU0VRGpuQz",
"6QUHGwtjUt",
"Ui0iFgAOFWC",
"bH_s3PLaQ7r",
"Ohn3gFpFFLA"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comment and consideration.\n\nThe paragraph in Section 3 promising an empirical test was regarding testing whether the averaging scheme helped in improving ZSC (which we did find as EQC improves ZSC), not to comparing it to using $e \\in G$. We agree that a formal ablation comparing the two app... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"dfzwnXwC_eY",
"i9D_rdgrazr",
"7tiQtRAYOCq",
"ClXUejf9Z3t",
"1A_HzTdnQryg",
"etU0VRGpuQz",
"Ohn3gFpFFLA",
"bH_s3PLaQ7r",
"Ui0iFgAOFWC",
"nips_2022_Jupoos_K4xt",
"nips_2022_Jupoos_K4xt",
"nips_2022_Jupoos_K4xt"
] |
nips_2022_ylila4AYSpV | Simple Mechanisms for Welfare Maximization in Rich Advertising Auctions | Internet ad auctions have evolved from a few lines of text to richer informational layouts that include images, sitelinks, videos, etc. Ads in these new formats occupy varying amounts of space, and an advertiser can provide multiple formats, only one of which can be shown.
The seller is now faced with a multi-parameter mechanism design problem.
Computing an efficient allocation is computationally intractable, and therefore the standard Vickrey-Clarke-Groves (VCG) auction, while truthful and welfare-optimal, is impractical.
In this paper, we tackle a fundamental problem in the design of modern ad auctions. We adopt a ``Myersonian'' approach and study allocation rules that are monotone both in the bid and set of rich ads. We show that such rules can be paired with a payment function to give a truthful auction. Our main technical challenge is designing a monotone rule that yields a good approximation to the optimal welfare. Monotonicity doesn't hold for standard algorithms, e.g. the incremental bang-per-buck order, that give good approximations to ``knapsack-like'' problems such as ours. In fact, we show that no deterministic monotone rule can approximate the optimal welfare within a factor better than $2$ (while there is a non-monotone FPTAS). Our main result is a new, simple, greedy and monotone allocation rule that guarantees a $3$ approximation. In ad auctions in practice, monotone allocation rules are often paired with the so-called \emph{Generalized Second Price (GSP)} payment rule, which charges the minimum threshold price below which the allocation changes. We prove that, even though our monotone allocation rule paired with GSP is not truthful, its Price of Anarchy (PoA) is bounded. Under standard no-overbidding assumptions, we prove bounds on the pure and Bayes-Nash PoA. Finally, we experimentally test our algorithms on real-world data. | Accept | Reviewers agreed that the rich ad auction problem is significant and are excited about theoretical bounds on the positive result (achieved by a simple mechanism) and the negative result. Overall, this is a solid theoretical paper on an important and classical problem in industry. | train | [
"R9BnwKNK-TR",
"t889ymFq-Q0",
"70rhPMl1A4ZO",
"-jeO7TpSOzM",
"hRg6V2Kpde6N",
"l3hPfNeQb8",
"fsWYuTDoeFD",
"hdpGfIO31zz",
"HxvBaw6MWf5",
"8qQcmtpSwt",
"xSptrXWA4Ld"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The reviewer rnSK is indeed correct. The time reported in Table 1 is the mean time in seconds. We will correct it to be milli-seconds in the final version.",
" I think Figure 4 in the appendix is more consistent with what I would have expected. It has the median time for VCG between 10 and 20 msec. But Table ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
4,
1
] | [
"t889ymFq-Q0",
"hRg6V2Kpde6N",
"xSptrXWA4Ld",
"8qQcmtpSwt",
"HxvBaw6MWf5",
"hdpGfIO31zz",
"nips_2022_ylila4AYSpV",
"nips_2022_ylila4AYSpV",
"nips_2022_ylila4AYSpV",
"nips_2022_ylila4AYSpV",
"nips_2022_ylila4AYSpV"
] |
nips_2022_Yb3dRKY170h | One-shot Neural Backdoor Erasing via Adversarial Weight Masking | Recent studies show that despite achieving high accuracy on a number of real-world applications, deep neural networks (DNNs) can be backdoored: by injecting triggered data samples into the training dataset, the adversary can mislead the trained model into classifying any test data to the target class as long as the trigger pattern is presented. To nullify such backdoor threats, various methods have been proposed. Particularly, a line of research aims to purify the potentially compromised model. However, one major limitation of this line of work is the requirement to access sufficient original training data: the purifying performance is a lot worse when the available training data is limited. In this work, we propose Adversarial Weight Masking (AWM), a novel method capable of erasing the neural backdoors even in the one-shot setting. The key idea behind our method is to formulate this into a min-max optimization problem: first, adversarially recover the non-robust perturbation patterns and then (soft) mask the network weights that are sensitive to the recovered patterns. Comprehensive evaluations of several benchmark datasets suggest that AWM can largely improve the purifying effects over other state-of-the-art methods on various available training dataset sizes. | Accept | The paper presents a method for defending deep neural networks against backdoor attacks, i.e., attacks that inject “triggered” samples into the training set. The method can be seen as an improvement on Adversarial Neuron Pruning (ANP) that uses (i) soft weight masking (SWM), (ii) adversarial trigger recovery (ATR) and (iii) sparsity regularization (SR). The main focus of the paper is in the low-data regime, especially in the one-shot setting and when the network size is small.
The authors have clarified the novelty of the approach with respect to ANP and have provided additional experiments addressing some of the reviewers' concerns. In view of this, some of the reviewers raised their scores. However, there are still concerns regarding the novelty of the method and the difficulty of setting hyperparameters. The empirical results nonetheless seem solid.
| val | [
"DEUtoVWNJ-W",
"IWtMfuQD1H",
"wVU3zTsZSwy",
"1YTTLZLjr8",
"cz9mF2Kk2nB",
"Yx-2LrJE01",
"HnHkHlzSSpk",
"7f93mML3WAp",
"9eYgULnge3A",
"RUGmjYqQcLA",
"fR5gcQLmpIX3",
"XxYEKfzDGim",
"HKDd_k3hzl",
"oSRjdWiZdcp",
"akJw83dFs2h"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply and raising your score. Yet we would like to further clarify some points in your last comment and hope to address your concerns on the contribution and hyperparameter search.\n\nIn terms of our contributions, we respectively disagree with you. Our method is *NOT* “a heuristic variant of ANP”... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"IWtMfuQD1H",
"1YTTLZLjr8",
"oSRjdWiZdcp",
"cz9mF2Kk2nB",
"Yx-2LrJE01",
"HnHkHlzSSpk",
"RUGmjYqQcLA",
"9eYgULnge3A",
"XxYEKfzDGim",
"akJw83dFs2h",
"oSRjdWiZdcp",
"HKDd_k3hzl",
"nips_2022_Yb3dRKY170h",
"nips_2022_Yb3dRKY170h",
"nips_2022_Yb3dRKY170h"
] |
nips_2022_sr0289wAUa | ASPiRe: Adaptive Skill Priors for Reinforcement Learning | We introduce ASPiRe (Adaptive Skill Prior for RL), a new approach that leverages prior experience to accelerate reinforcement learning. Unlike existing methods that learn a single skill prior from a large and diverse dataset, our framework learns a library of distinct skill priors (i.e., behavior priors) from a collection of specialized datasets, and learns how to combine them to solve a new task. This formulation allows the algorithm to acquire a set of specialized skill priors that are more reusable for downstream tasks; however, it also brings up additional challenges of how to effectively combine these unstructured sets of skill priors to form a new prior for new tasks. Specifically, it requires the agent not only to identify which skill prior(s) to use but also how to combine them (either sequentially or concurrently) to form a new prior. To achieve this goal, ASPiRe includes an Adaptive Weight Module (AWM) that learns to infer an adaptive weight assignment between different skill priors and uses them to guide policy learning for downstream tasks via weighted Kullback-Leibler divergences. Our experiments demonstrate that ASPiRe can significantly accelerate the learning of new downstream tasks in the presence of multiple priors and shows improvements over competitive baselines. | Accept | This paper proposes a method to adaptively combine and use skill priors for reinforcement learning. The setting is very practical and the method proposed is novel and effective empirically. The main concerns from the reviewers were around (1) the experimental setup being too narrow and simple, and (2) the clarity of the paper. The authors added an additional robotics experiment which partly addressed (1) and provided clarifications in their response to address (2). Overall, this seems like a solid contribution idea-wise that will inspire more work in this direction. 
I highly encourage the authors to revise the paper in terms of clarity (based on their rebuttal responses) as well as to consider adding a comparison to PARROT (as suggested by reviewer iPXJ). | train | [
"esV2X2yd-e",
"86eJhUNUK7S",
"2hTUm6goM6x",
"mUW_V_4cW28",
"R_73ECsZbqq",
"MOm8x22BtbC",
"Y65-82lR-BZ",
"YXfzg__qMT",
"1wqqX33Cgxi",
"qQz12S29VB",
"2hnY116bYeo",
"N1VDdVwgmbv",
"bdp2GEZgkXZ",
"idQrqcfDRSI"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. Result from robotic manipulation task: Thanks for adding this experiment! It definitely adds to the overall soundness of the paper. I strongly encourage the authors to finish some baseline evaluations and add its corresponding plot to Figure 2. \n\n2. Thanks for the clarification! This makes sense.\n\n3. I app... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"1wqqX33Cgxi",
"MOm8x22BtbC",
"1wqqX33Cgxi",
"R_73ECsZbqq",
"YXfzg__qMT",
"idQrqcfDRSI",
"bdp2GEZgkXZ",
"N1VDdVwgmbv",
"2hnY116bYeo",
"nips_2022_sr0289wAUa",
"nips_2022_sr0289wAUa",
"nips_2022_sr0289wAUa",
"nips_2022_sr0289wAUa",
"nips_2022_sr0289wAUa"
] |
nips_2022_o4neHaKMlse | Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone | Vision-language (VL) pre-training has recently received considerable attention. However, most existing end-to-end pre-training approaches either only aim to tackle VL tasks such as image-text retrieval, visual question answering (VQA) and image captioning that test high-level understanding of images, or only target region-level understanding for tasks such as phrase grounding and object detection. We present FIBER (Fusion-In-the-Backbone-based transformER), a new VL model architecture that can seamlessly handle both these types of tasks. Instead of having dedicated transformer layers for fusion after the uni-modal backbones, FIBER pushes multimodal fusion deep into the model by inserting cross-attention into the image and text backbones to better capture multimodal interactions. In addition, unlike previous work that is either only pre-trained on image-text data or on fine-grained data with box-level annotations, we present a two-stage pre-training strategy that uses both these kinds of data efficiently: (i) coarse-grained pre-training based on image-text data; followed by (ii) fine-grained pre-training based on image-text-box data. We conduct comprehensive experiments on a wide range of VL tasks, ranging from VQA, image captioning, and retrieval, to phrase grounding, referring expression comprehension, and object detection. Using deep multimodal fusion coupled with the two-stage pre-training, FIBER provides consistent performance improvements over strong baselines across all tasks, often outperforming methods using magnitudes more data. Code is released at https://github.com/microsoft/FIBER. | Accept | This paper proposes a two-stage pre-trained vision-language model that can handle both high-level and region-level downstream tasks. Experiments show significant improvements over SotA models. 
The main concerns from the reviews are some missing references, while the authors gave detailed comparisons in their responses. Although reviewer Gvbu's opinion is still somewhat conservative, I think the novelty of the paper is clear and the comparison to the SotA is sufficient. | val | [
"DSHKpcdZzU0",
"svixthLYKba",
"OYFDJ0wzhv0",
"6g7jSSHM2W",
"LJIghAo46je",
"qXPCZaZQNhR",
"_AwM-Kupuzou",
"E_5UhJ4xgxE",
"vgV7oaO9c_Q",
"hHDPO1mHSF4",
"R_b_5hiKp_T",
"E0VDYjRQvhn",
"RBYK-8SFR-7R",
"Dn1zKeOJn7h",
"pg8IJqh7y_2",
"B3VJUXG_iic",
"f5KkA3n0a1",
"e0lQlR-XgN0",
"NiFg9_DGD... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the update!\n\nWe would like to kindly point out that we have evaluation results on ~30 datasets in the paper. Many of these tasks (e.g. VQAv2, COCO object detection, RefCOCO+) are highly competitive, yet we can obtain consistent performance improvements over strong baselines, often outperforming me... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"svixthLYKba",
"OYFDJ0wzhv0",
"XRtCzJDthe5",
"LJIghAo46je",
"Dn1zKeOJn7h",
"e0lQlR-XgN0",
"NiFg9_DGDzO",
"XRtCzJDthe5",
"hHDPO1mHSF4",
"R_b_5hiKp_T",
"E0VDYjRQvhn",
"RBYK-8SFR-7R",
"XRtCzJDthe5",
"NiFg9_DGDzO",
"B3VJUXG_iic",
"e0lQlR-XgN0",
"nips_2022_o4neHaKMlse",
"nips_2022_o4neH... |
nips_2022_AcHUIG2wA8- | Non-Gaussian Tensor Programs | The Tensor Programs framework has produced a series of powerful results by 1) expressing any deep learning computation of concern as a principled composition of element-wise nonlinearities and matrix multiplication, and 2) inductively reasoning about the program behavior as the sizes of the matrices in the program tend to infinity. For example, this framework helped to show that infinitely wide neural networks exhibit Gaussian process behavior at initialization and evolve like a kernel model during training in the so-called NTK parameterization (Yang, 2019b, 2020a; Yang and Littwin, 2021). Moreover, this framework yielded a novel parameterization, coined μP (Yang and Hu, 2021), that for the first time enabled hyperparameter tuning for enormous networks too expensive to train more than once (Yang et al., 2022). However, this framework has so far been limited to Gaussian initialized weights, while uniform or truncated Gaussian distributions are more prevalent in practice. This work extends Tensor Programs to general non-Gaussian weights, thus recovering all of the above results in all practical settings. | Accept | This submission is borderline. Reviewers were generally in consensus --- all felt that the theoretical contribution is sound and non-trivial, but delivers a relatively minor addition to the Tensor Programs framework. I fully agree with this perspective, and similarly to all reviewers, recommend that the paper be accepted and the NeurIPS community will be the judge of how significant its contribution is. | train | [
"EzvqbrvFvwr",
"YEkW6alfT1n",
"5L1pMv6N3J6",
"f9XWxu3-1h7",
"m2JrcpRD8CB6",
"H2qPuFyzBvs",
"3dhIdK91_tR",
"cTRSILK1in",
"5f4YWKe24tt",
"5Aes-X7Rm5N"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I want to thank the authors for their response and clearing up some of my misunderstandings about their work. The theoretical contribution is much stronger than I initially understood and I'll update my scores/review appropriately.",
" I appreciate the authors cleaning up the notation and thank you for addressi... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
1,
3
] | [
"H2qPuFyzBvs",
"5L1pMv6N3J6",
"cTRSILK1in",
"3dhIdK91_tR",
"5f4YWKe24tt",
"5Aes-X7Rm5N",
"nips_2022_AcHUIG2wA8-",
"nips_2022_AcHUIG2wA8-",
"nips_2022_AcHUIG2wA8-",
"nips_2022_AcHUIG2wA8-"
] |
nips_2022__bqtjfpj8h | Ask4Help: Learning to Leverage an Expert for Embodied Tasks | Embodied AI agents continue to become more capable every year with the advent of new models, environments, and benchmarks, but are still far away from being performant and reliable enough to be deployed in real, user-facing, applications. In this paper, we ask: can we bridge this gap by enabling agents to ask for assistance from an expert such as a human being? To this end, we propose the Ask4Help policy that augments agents with the ability to request, and then use expert assistance. Ask4Help policies can be efficiently trained without modifying the original agent's parameters and learn a desirable trade-off between task performance and the amount of requested help, thereby reducing the cost of querying the expert. We evaluate Ask4Help on two different tasks -- object goal navigation and room rearrangement -- and see substantial improvements in performance using minimal help. On object navigation, an agent that achieves a $52\%$ success rate is raised to $86\%$ with $13\%$ help, and for rearrangement, the state-of-the-art model with a $7\%$ success rate is dramatically improved to $90.4\%$ using $39\%$ help. Human trials with Ask4Help demonstrate the efficacy of our approach in practical scenarios. | Accept | I thank the authors for their submission and active participation in the discussions. This paper introduces a method for learning a policy that can ask an expert for help, i.e., to obtain the expert action. On the positive side, reviewers found the method to be general [uya8], original and significant [gw2r], intriguing in terms of being able to reuse an existing policy [HGan], and tackling an important problem [rsmr,bRWC], and the paper to be clear [uya8,gw2r,rsmr,bRWC]. 
In terms of negative points, reviewers were concerned about the novelty [bRWC], unimpressive qualitative results despite strong quantitative results [bRWC], and issues with the range of baselines [bRWC,rsmr] and ablations considered [uya8]. Overall, the paper is borderline. However, bRWC indicated they would raise their score but I don't see this being reflected. Furthermore, in my view reviewer rsmr's concerns regarding baselines and ablations have been addressed by the author rebuttal. Thus, I am siding with reviewers gw2r and HGan, and recommend acceptance. However, I very strongly encourage the authors to further improve their paper based on the reviewer feedback, in particular the points raised by reviewer bRWC regarding the importance of the Success Prediction component of the method. | train | [
"HmoDcek1dX",
"tMcswrMk0Gb",
"lsQwtRvLnSh",
"XwTs8qhnP1",
"fbrUKeh9aNI",
"Xe7McnXJP5m",
"pkHnXdaX6_C",
"CmTCm7EeCk",
"mjdB5LyCNaE",
"8aVwUwqqNxL",
"sL-0zxzb5tT",
"wdfdDPaECxzq",
"s_nn2Lz6219",
"FbsxKgWxL8",
"ojIdQu2Vz-1",
"tIaCYCU5AtD",
"_Ro-f-VRKCe",
"xcBWyJZdKcT",
"IuolXUqfaKp"... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" We are happy to hear that our rebuttal has addressed some of your concerns and thank you for increasing your rating of our work.\n\n**Therefore, I'm increasing my score to a borderline, hoping that the authors would clarify more on the potential of the current method as a building block for future studies.**\n\nW... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
8,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4,
4
] | [
"tMcswrMk0Gb",
"sL-0zxzb5tT",
"8aVwUwqqNxL",
"CmTCm7EeCk",
"mjdB5LyCNaE",
"pkHnXdaX6_C",
"ojIdQu2Vz-1",
"wdfdDPaECxzq",
"FbsxKgWxL8",
"tIaCYCU5AtD",
"ShwB63hMO8L",
"kUQGOPIQkV9",
"kUQGOPIQkV9",
"tzew69pMfGP",
"IuolXUqfaKp",
"xcBWyJZdKcT",
"nips_2022__bqtjfpj8h",
"nips_2022__bqtjfpj... |
nips_2022_WV1ZXTH0OIn | Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization | Optimizing expensive-to-evaluate black-box functions of discrete (and potentially continuous) design parameters is a ubiquitous problem in scientific and engineering applications. Bayesian optimization (BO) is a popular, sample-efficient method that leverages a probabilistic surrogate model and an acquisition function (AF) to select promising designs to evaluate. However, maximizing the AF over mixed or high-cardinality discrete search spaces is challenging: standard gradient-based methods cannot be used directly, and evaluating the AF at every point in the search space would be computationally prohibitive. To address this issue, we propose using probabilistic reparameterization (PR). Instead of directly optimizing the AF over the search space containing discrete parameters, we instead maximize the expectation of the AF over a probability distribution defined by continuous parameters. We prove that under suitable reparameterizations, the BO policy that maximizes the probabilistic objective is the same as that which maximizes the AF, and therefore, PR enjoys the same regret bounds as the original BO policy using the underlying AF. Moreover, our approach provably converges to a stationary point of the probabilistic objective under gradient ascent using scalable, unbiased estimators of both the probabilistic objective and its gradient. Therefore, as the number of starting points and gradient steps increase, our approach will recover a maximizer of the AF (an often-neglected requisite for commonly used BO regret bounds). We validate our approach empirically and demonstrate state-of-the-art optimization performance on a wide range of real-world applications. PR is complementary to (and benefits) recent work and naturally generalizes to settings with multiple objectives and black-box constraints. 
| Accept | This paper studies a Bayesian optimization method where some of the variables are discrete and some are continuous, and proposes using probabilistic reparameterization. Reviewers unanimously agreed that the paper is well-written and solves an important real-world problem, and most reviewers found the experimental results to be convincing. In addition, the reviewers found the rebuttal and revision convincing, as they clarified many of the initial questions.
Several reviewers pointed out that some of the results referenced in the paper could be stated more explicitly and/or explained in the appendix. For the final version, please make an effort to address the reviewers' feedback to make the paper more self-contained. | train | [
"qzP9cvrE37S",
"NktDD8BTpS",
"cCsmcnMgdNK",
"3uQvEtkpX7_",
"nOyZcnxTT-",
"VtSr60zwPoo",
"2prvGHR6JxM",
"kUAYcaC_th",
"7rz0tyJR2mZ",
"zQJOB5qENOr",
"WY4U_RqEApY",
"ipIQNMYZPE",
"-0qma4BT1q3",
"RPsAHsgpRLz",
"uHmDFpjEzFD",
"KUS7mc8l4cr",
"vyEpCV7amUF",
"eWy3IUAUs9U",
"zr_wTevD75",
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_r... | [
" - Thank you. I understand better the flaw I thought is not present. I will raise my score.\n\n- I also checked your experiments in F.1 and there is some evaluation to the number of MC samples. So indeed you performed this check, which is nice. This makes it well-rounded practical evaluation. \n\n- I have to say I... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"3uQvEtkpX7_",
"vyEpCV7amUF",
"aed2z6ofCU",
"nOyZcnxTT-",
"uHmDFpjEzFD",
"kUAYcaC_th",
"ipIQNMYZPE",
"7rz0tyJR2mZ",
"zQJOB5qENOr",
"uHmDFpjEzFD",
"eWy3IUAUs9U",
"PyZBBTkyhDF",
"RPsAHsgpRLz",
"uHmDFpjEzFD",
"KeM7YvVl1gs",
"vyEpCV7amUF",
"aed2z6ofCU",
"s8C0wPVZzb8",
"nips_2022_WV1Z... |
nips_2022_9a1oV7UunyP | When to Update Your Model: Constrained Model-based Reinforcement Learning | Designing and analyzing model-based RL (MBRL) algorithms with guaranteed monotonic improvement has been challenging, mainly due to the interdependence between policy optimization and model learning. Existing discrepancy bounds generally ignore the impacts of model shifts, and their corresponding algorithms are prone to degrade performance by drastic model updating. In this work, we first propose a novel and general theoretical scheme for a non-decreasing performance guarantee of MBRL. Our follow-up derived bounds reveal the relationship between model shifts and performance improvement. These discoveries encourage us to formulate a constrained lower-bound optimization problem to permit the monotonicity of MBRL. A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns. Motivated by these analyses, we design a simple but effective algorithm CMLO (Constrained Model-shift Lower-bound Optimization), by introducing an event-triggered mechanism that flexibly determines when to update the model. Experiments show that CMLO surpasses other state-of-the-art methods and produces a boost when various policy optimization methods are employed. | Accept | This paper studies the relation between model shift and the performance of model-based reinforcement learning. The paper proposes a new algorithm that leads to empirical improvement over certain data sets. All the reviewers agree that the paper provides useful theoretical insights into model-based reinforcement learning, and the experiments are also consistent with the theory. | test | [
"d9iFUQp_SGd",
"9rhqQ9HN2CB",
"WjKE4fAi48U",
"9p2kgfusP0R",
"VZVWseLjx5",
"LX4D7us5oNf",
"vUxm3lzapf",
"-kX-Fc2HiaQ",
"_tRLzDnDxZ8",
"ytLLQBNsrPW",
"ByB5gJlmnZ",
"EpGEfLOCk6r",
"v22RSWkON3p",
"ARIV52KLKC6",
"qrfp7qoLFZK",
"XLRFOT2Rk_U",
"5bxM80s2Ip5",
"4Tzy8Q7gZxCs",
"CBCavhzBf0"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Dear reviewer, \n\nWe thank the reviewer for your insightful and constructive comments and suggestions, which provide much helpful guidance to improve the quality of our paper! We really enjoy communicating with you and appreciate your efforts! \n\nBest wishes!\n\nThe authors.",
" Thanks for the comments. Yes,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
5
] | [
"yJSAXbpbIFZ",
"v22RSWkON3p",
"9p2kgfusP0R",
"0JpPFh_kT6U",
"-kX-Fc2HiaQ",
"vUxm3lzapf",
"ByB5gJlmnZ",
"XLRFOT2Rk_U",
"2oNSoPueuVvE",
"2oNSoPueuVvE",
"2oNSoPueuVvE",
"5bxM80s2Ip5",
"CBCavhzBf0",
"qrfp7qoLFZK",
"k54_3pS64ui",
"4Tzy8Q7gZxCs",
"cUL37YAAdzw",
"0JpPFh_kT6U",
"yJSAXbpb... |
nips_2022_J0nhRuMkdGf | Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees | Variational inequalities in general and saddle point problems in particular are increasingly relevant in machine learning applications, including adversarial learning, GANs, transport and robust optimization. With increasing data and problem sizes necessary to train high performing models across various applications, we need to rely on parallel and distributed computing. However, in distributed training, communication among the compute nodes is a key bottleneck during training, and this problem is exacerbated for high dimensional and over-parameterized models. Due to these considerations, it is important to equip existing methods with strategies that would allow to reduce the volume of transmitted information during training while obtaining a model of comparable quality. In this paper, we present the first theoretically grounded distributed methods for solving variational inequalities and saddle point problems using compressed communication: MASHA1 and MASHA2. Our theory and methods allow for the use of both unbiased (such as Rand$k$; MASHA1) and contractive (such as Top$k$; MASHA2) compressors. New algorithms support bidirectional compressions, and also can be modified for stochastic setting with batches and for federated learning with partial participation of clients. We empirically validated our conclusions using two experimental setups: a standard bilinear min-max problem, and large-scale distributed adversarial training of transformers. | Accept | Dear Authors,
We had a long discussion about this paper. Overall, the reviews are positive. Several reviewers raised their scores after the rebuttal phase, and they found the response by the authors satisfactory.
However, there were some concerns about the novelty of the paper that I summarize here:
This paper combines some standard techniques and ideas from decentralized optimization and minimax optimization to obtain the presented results. Hence, the algorithmic novelty of the paper is limited. Perhaps the major contribution of the paper is in the vector that they decide to quantize, but still, the main idea of the paper is very similar to the single-loop variance reduction techniques that were first proposed in stochastic optimization and later used for distributed optimization. The main theoretical challenge that the authors had to face was combining quantization with the Extra Gradient method as highlighted in the first paragraphs of section 4.1. Indeed, similar quantization ideas have been extensively studied in the distributed optimization literature and thus the algorithmic novelty seems to be very limited. Similarly, excluding the aforementioned challenge (Extra Gradient + compression) the derivation of the theoretical results appears to be tedious but based on standard techniques.
Considering the above points, the AC and one of the reviewers found the paper below the bar as its novelty is limited. However, four reviewers voted in favor of accepting this paper, as they believe the technical novelty of the paper and its proof techniques are significant enough.
I respect the majority vote and hence recommend this paper to be accepted. | val | [
"z067atOXxop",
"R8M4qH8bfk0",
"1d-4xGTMqBU",
"sDw1E5xEF02",
"EThTETUDzds",
"tOrktJC477n",
"9uKJabbbqdQI",
"CtFXk5J9F8",
"we7XXlFhwA",
"RG0VVnlZwI6",
"1CKiisvjocm",
"lpm6CTA9bXs",
"feurdYz8Ri8",
"LcajO32Xy0H",
"CiovS66XRP4",
"o6KR5q9_xfz",
"Zq3zWYlcn5_",
"6CfERbe3clX",
"QSGYF4U2pp... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" We are grateful for raising the score! Thanks again for the review, response, important comments, and positive final feedback!",
" Thanks again for your thorough response. I think all my questions are resolved, and I decide to raise my score to 7.",
" We thank Review **1QcY** for the response. See answers be... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
3,
3,
3
] | [
"R8M4qH8bfk0",
"1d-4xGTMqBU",
"sDw1E5xEF02",
"1CKiisvjocm",
"tOrktJC477n",
"lpm6CTA9bXs",
"nips_2022_J0nhRuMkdGf",
"we7XXlFhwA",
"Zq3zWYlcn5_",
"IIQ5LeAi990",
"QSGYF4U2ppB",
"6CfERbe3clX",
"Zq3zWYlcn5_",
"o6KR5q9_xfz",
"nips_2022_J0nhRuMkdGf",
"nips_2022_J0nhRuMkdGf",
"nips_2022_J0nh... |
nips_2022_yfNSUQ3yRo | Noise Attention Learning: Enhancing Noise Robustness by Gradient Scaling | Machine learning has been highly successful in data-driven applications but is often hampered when the data contains noise, especially label noise. When trained on noisy labels, deep neural networks tend to fit all noisy labels, resulting in poor generalization. To handle this problem, a common idea is to force the model to fit only clean samples rather than mislabeled ones. In this paper, we propose a simple yet effective method that automatically distinguishes the mislabeled samples and prevents the model from memorizing them, named Noise Attention Learning. In our method, we introduce an attention branch to produce attention weights based on representations of samples. This attention branch is learned to divide the samples according to the predictive power in their representations. We design the corresponding loss function that incorporates the attention weights for training the model without affecting the original learning direction. Empirical results show that most of the mislabeled samples yield significantly lower weights than the clean ones. Furthermore, our theoretical analysis shows that the gradients of training samples are dynamically scaled by the attention weights, implicitly preventing memorization of the mislabeled samples. Experimental results on two benchmarks (CIFAR-10 and CIFAR-100) with simulated label noise and three real-world noisy datasets (ANIMAL-10N, Clothing1M and Webvision) demonstrate that our approach outperforms state-of-the-art methods.
| Accept | The work proposes a simple method for training an 'attention layer' that can give weights for different input samples. These weights are learned during the training process. This method appears to be theoretically justified, the method is relatively simple, and the empirical results seem reasonable. One concern that I share with the reviewers is about the hard (but correctly labeled) samples. The authors provide some explanation and results in the appendix, which mostly allay the concerns. I have to agree that introducing a somewhat sensitive parameter $\lambda$ is not ideal, but overall the empirical and theoretical justifications tip the balance towards acceptance in this case. | train | [
"KVZ5sb035HX",
"eg5pomG1Bci",
"8kNH9WD9C3t",
"hz0fw4dUzM",
"2HafN27-d5z",
"V4oDBR6Eg8S",
"0ssD9X9bx-",
"SXUFbYjdGAU",
"cJM2yeFYHFB"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. We have updated the Appendix for more experiments on hard samples. In Appendix F, we provide the movement of $\\tau$ distribution in the early learning stage to empirically explain how hard samples can be learned by using the proposed method.",
" I appreciate the authors' response t... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"eg5pomG1Bci",
"V4oDBR6Eg8S",
"nips_2022_yfNSUQ3yRo",
"SXUFbYjdGAU",
"cJM2yeFYHFB",
"0ssD9X9bx-",
"nips_2022_yfNSUQ3yRo",
"nips_2022_yfNSUQ3yRo",
"nips_2022_yfNSUQ3yRo"
] |
nips_2022_G1uywu6vNZe | Exponential Family Model-Based Reinforcement Learning via Score Matching | We propose an optimistic model-based algorithm, dubbed SMRL, for finite-horizon episodic reinforcement learning (RL) when the transition model is specified by exponential family distributions with $d$ parameters and the reward is bounded and known. SMRL uses score matching, an unnormalized density estimation technique that enables efficient estimation of the model parameter by ridge regression. Under standard regularity assumptions, SMRL achieves $\tilde O(d\sqrt{H^3T})$ online regret, where $H$ is the length of each episode and $T$ is the total number of interactions (ignoring polynomial dependence on structural scale parameters). | Accept | This is a clear and carefully written paper with a solid mathematical contribution. The reviewers are unanimous in supporting acceptance. | train | [
"YSxYwhAQAY",
"eEtwWFFlon",
"60cy9tXD_tI",
"LlnZuqanrDg",
"_Jdkh9zxYaz",
"qfXhBg79DY",
"bH5hau_iiPs",
"HI7NAiRyDLT",
"AX_qmGtfhhl"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed response. It'll be interesting to see how this idea works out in practice. I'm satisfied with the response and have updated my score.",
" After reading the other reviewers' comments, I will keep my score unchanged.",
" Thanks for your detailed response.\nGiven that my concerns have ... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"qfXhBg79DY",
"_Jdkh9zxYaz",
"LlnZuqanrDg",
"HI7NAiRyDLT",
"AX_qmGtfhhl",
"bH5hau_iiPs",
"nips_2022_G1uywu6vNZe",
"nips_2022_G1uywu6vNZe",
"nips_2022_G1uywu6vNZe"
] |
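In the rating and confidence columns above, `-1` is a placeholder for comments that carry no score (author responses and other unscored replies), so aggregating a row's reviewer scores requires filtering those sentinels first. A minimal sketch, assuming the column has already been parsed into a Python list of ints; `summarize_ratings` is a hypothetical helper for illustration, not part of the dataset:

```python
def summarize_ratings(ratings):
    """Mean of the real review scores in a ratings column,
    ignoring the -1 placeholders used for unscored comments."""
    scores = [r for r in ratings if r != -1]
    return sum(scores) / len(scores) if scores else None

# Ratings column of the Noise Attention Learning row above.
ratings = [-1, -1, -1, -1, -1, -1, 6, 4, 6]
print(summarize_ratings(ratings))  # mean of the three real scores, 16/3
```

The same filtering applies to the confidence column, which uses the identical `-1` convention.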