paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_Il0ymeSnKyL | NeurOLight: A Physics-Agnostic Neural Operator Enabling Parametric Photonic Device Simulation | Optical computing has become an emerging technology in next-generation efficient artificial intelligence (AI) due to its ultra-high speed and efficiency. Electromagnetic field simulation is critical to the design, optimization, and validation of photonic devices and circuits.
However, costly numerical simulation significantly hinders the scalability and turn-around time in the photonic circuit design loop. Recently, physics-informed neural networks were proposed to predict the optical field solution of a single instance of a partial differential equation (PDE) with predefined parameters. Their complicated PDE formulation and lack of an efficient parametrization mechanism limit their flexibility and generalization in practical simulation scenarios. In this work, for the first time, a physics-agnostic neural operator-based framework, dubbed NeurOLight, is proposed to learn a family of frequency-domain Maxwell PDEs for ultra-fast parametric photonic device simulation. Specifically, we discretize different devices into a unified domain, represent parametric PDEs with a compact wave prior, and encode the incident light via masked source modeling. We design our model to have parameter-efficient cross-shaped NeurOLight blocks and adopt superposition-based augmentation for data-efficient learning. With those synergistic approaches, NeurOLight demonstrates 2-orders-of-magnitude faster simulation speed than numerical solvers and outperforms prior NN-based models by ~54% lower prediction error using ~44% fewer parameters. | Accept | The authors propose a domain-specific extension of neural operators that is appropriate for photonics applications. This is an interesting application of neural operators which demonstrates the usefulness of building in physical priors. Some reviewers expressed concern about the topic being too far outside the usual focus of NeurIPS, but there is also an upside to introducing novel application areas to the NeurIPS community. All reviewers agreed the work was of high quality and worth accepting, so I recommend acceptance. | train | [
"nenWtQNbv4Z",
"YCHzvtaQ1DL",
"oo3YO3aaeVl",
"2zYgcHn05M3",
"9GstR2SWgcs",
"v9qGHIwA0yp",
"IJBSgzN6KzLH",
"jZB4N35VlOM",
"sL9Nu7Y5ZkR",
"1tQIrAzVcbs",
"jFfY0LcUH8l",
"9FNazN9PkZZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors’ thorough response to my questions. After reading the authors' response, now I think the paper’s contribution outweighs my initial concerns (especially regarding the significance). I still think the tackled problem of this paper is not very relevant to the general ML community, but seems ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"sL9Nu7Y5ZkR",
"jZB4N35VlOM",
"IJBSgzN6KzLH",
"9FNazN9PkZZ",
"jFfY0LcUH8l",
"1tQIrAzVcbs",
"9FNazN9PkZZ",
"jFfY0LcUH8l",
"1tQIrAzVcbs",
"nips_2022_Il0ymeSnKyL",
"nips_2022_Il0ymeSnKyL",
"nips_2022_Il0ymeSnKyL"
] |
nips_2022_Qq-ge2k8uml | Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields | Capitalizing on the recent advances in image generation models, existing controllable face image synthesis methods are able to generate high-fidelity images with some levels of controllability, e.g., controlling the shapes, expressions, textures, and poses of the generated face images. However, these methods focus on 2D image generative models, which are prone to producing inconsistent face images under large expression and pose changes. In this paper, we propose a new NeRF-based conditional 3D face synthesis framework, which enables 3D controllability over the generated face images by imposing explicit 3D conditions from 3D face priors. At its core is a conditional Generative Occupancy Field (cGOF) that effectively enforces the shape of the generated face to commit to a given 3D Morphable Model (3DMM) mesh. To achieve accurate control over fine-grained 3D face shapes of the synthesized image, we additionally incorporate a 3D landmark loss as well as a volume warping loss into our synthesis algorithm. Experiments validate the effectiveness of the proposed method, which is able to generate high-fidelity face images and shows more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods. | Accept | Paper attacks a hard problem and brings together state-of-the-art ideas to demonstrate substantial wins. Many good points were raised by the reviewers, and we ask the authors to carefully read through the feedback and address what they can for the final version. | train | [
"-tcekW8s7u0",
"ERT3_NZctL6",
"C6Ung6NWwPK",
"56RWXgiNFLz",
"GjO24kuKjoL",
"2ZgkkaaVYrU",
"L4et1Oq3FNj",
"-fYcHoW1R4h",
"2aPx5VYnvHo"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad that your concerns have been addressed. Thank you for your valuable comments and for taking the time to respond to the rebuttal.",
" All my concerns have been feedback by the authors. According to the authors' response and other reviewers' comments, I change my Rating of this manuscript as Accept. "... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"ERT3_NZctL6",
"-fYcHoW1R4h",
"L4et1Oq3FNj",
"nips_2022_Qq-ge2k8uml",
"2aPx5VYnvHo",
"-fYcHoW1R4h",
"nips_2022_Qq-ge2k8uml",
"nips_2022_Qq-ge2k8uml",
"nips_2022_Qq-ge2k8uml"
] |
nips_2022_p62j5eqi_g2 | On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses | Clustering models constitute a class of unsupervised machine learning methods which are used in a number of application pipelines, and play a vital role in modern data science. With recent advancements in deep learning-- deep clustering models have emerged as the current state-of-the-art over traditional clustering approaches, especially for high-dimensional image datasets. While traditional clustering approaches have been analyzed from a robustness perspective, no prior work has investigated adversarial attacks and robustness for deep clustering models in a principled manner. To bridge this gap, we propose a blackbox attack using Generative Adversarial Networks (GANs) where the adversary does not know which deep clustering model is being used, but can query it for outputs. We analyze our attack against multiple state-of-the-art deep clustering models and real-world datasets, and find that it is highly successful. We then employ some natural unsupervised defense approaches, but find that these are unable to mitigate our attack. Finally, we attack Face++, a production-level face clustering API service, and find that we can significantly reduce its performance as well. Through this work, we thus aim to motivate the need for truly robust deep clustering models. | Accept | To investigate the adversarial attacks and robustness for deep clustering models, the authors propose a blackbox attack using Generative Adversarial Networks (GANs) where the adversary does not know which deep clustering model is being used, but can query it for outputs.
Based on several rounds of discussions between the authors and reviewers, the reviewers' concerns have been properly addressed.
Since all reviewers consistently gave positive comments, the AC made a final decision of acceptance.
| train | [
"vFQkCjw9ZQd",
"0Gy-0VHV4wE",
"YxwfuhaUHhXB",
"JlLzt33nj2G",
"ttGPfy49lqE",
"T8y-N-laYCH",
"jruMAMhHrXG",
"fdPZ8pPIiJ",
"5dbaZDXjpA",
"phcIQtW1zlH",
"Jjtegx--3-O",
"kw2kmnSp4AA",
"DQselQsxbjK",
"i4p8Bk5oUr",
"DFTH9hF98FB",
"g-B3jkpPsCW",
"aCOPNMCshEq"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your feedback and for taking the time to go through the revision, we appreciate it.",
" Thanks for the additional experiments and further improvements.\n\nYour clarifications in Items 2, 3, 4 has well-resolved our concerns. Regarding Items 5 and 6, we have got your explanations. Thanks.\n\nIn our ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"0Gy-0VHV4wE",
"DQselQsxbjK",
"JlLzt33nj2G",
"Jjtegx--3-O",
"jruMAMhHrXG",
"fdPZ8pPIiJ",
"kw2kmnSp4AA",
"phcIQtW1zlH",
"nips_2022_p62j5eqi_g2",
"aCOPNMCshEq",
"g-B3jkpPsCW",
"DFTH9hF98FB",
"i4p8Bk5oUr",
"nips_2022_p62j5eqi_g2",
"nips_2022_p62j5eqi_g2",
"nips_2022_p62j5eqi_g2",
"nips_... |
nips_2022_lSfrwyww-FR | Blackbox Attacks via Surrogate Ensemble Search | Blackbox adversarial attacks can be categorized into transfer- and query-based attacks. Transfer methods do not require any feedback from the victim model, but provide lower success rates compared to query-based methods. Query attacks often require a large number of queries for success. To achieve the best of both approaches, recent efforts have tried to combine them, but still require hundreds of queries to achieve high success rates (especially for targeted attacks). In this paper, we propose a novel method for Blackbox Attacks via Surrogate Ensemble Search (BASES) that can generate highly successful blackbox attacks using an extremely small number of queries. We first define a perturbation machine that generates a perturbed image by minimizing a weighted loss function over a fixed set of surrogate models. To generate an attack for a given victim model, we search over the weights in the loss function using queries generated by the perturbation machine. Since the dimension of the search space is small (same as the number of surrogate models), the search requires a small number of queries. We demonstrate that our proposed method achieves better success rate with at least $30\times$ fewer queries compared to state-of-the-art methods on different image classifiers trained with ImageNet (including VGG-19, DenseNet-121, and ResNext-50). In particular, our method requires as few as 3 queries per image (on average) to achieve more than a $90\%$ success rate for targeted attacks and 1--2 queries per image for over a $99\%$ success rate for untargeted attacks. Our method is also effective on Google Cloud Vision API and achieved a $91\%$ untargeted attack success rate with 2.9 queries per image. We also show that the perturbations generated by our proposed method are highly transferable and can be adopted for hard-label blackbox attacks. 
Furthermore, we argue that BASES can be used to create attacks for a variety of tasks and show its effectiveness for attacks on object detection models. Our code is available at https://github.com/CSIPlab/BASES. | Accept | This paper proposes BASES, a query-efficient black-box adversarial attack by first generating adversarial perturbation with gradient-based attack using a weighted ensemble of surrogate models. The perturbed image is used to query the target model and its feedback is used to update the weights via zeroth-order optimization. The method is simple, intuitive and well-presented, and the authors show that it achieves a high attack success rate using a very limited query budget. The method can be used for both score-based and hard-label attacks, and experiment on Google Cloud Vision demonstrates real-world applicability.
The most common concern among reviewers is the method’s dependence on ensemble diversity. AC agrees that this is an inherent limitation as Figure 3 in the paper shows drastically reduced attack success rate when using a smaller set of surrogate models. However, reviewers wjHs and vphn argued that the paper’s contributions outweigh this limitation. AC agrees with this characterization and recommends acceptance, and encourages the authors to incorporate additional results from the rebuttal in the camera ready version.
| train | [
"lKdKGQxdbvq",
"MkLuvgojwZl",
"kToB4upS0cW",
"0M4MESYbMdc",
"pD2w9nlMh6T",
"ZFi-KXJnrUH",
"0WvRrBWe2uG",
"MmxXJxtEq5P",
"jaIxDCSdBA",
"0KOx-SuGs5b",
"6Bi-wMKptjV",
"ewZ_imnEeUG",
"pYSto1HDqHJ",
"cW7mOZNm1K4",
"xqwYSw_22Yh",
"1gLerozHBti",
"SbqTvje_PVx",
"TYSMcKMS0GD",
"r-SXRnmA5U... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
... | [
" I thank the authors for the extra details provided in the rebuttal. I appreciate the paper's contribution but believe it needs further work to be ready for publication.\n\n**Ensemble diversity**: The paper still needs a far more detailed discussion and results on the impact of ensemble diversity on attack success... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"0M4MESYbMdc",
"kToB4upS0cW",
"ZFi-KXJnrUH",
"NnRZIov0Z3Q",
"1gLerozHBti",
"0WvRrBWe2uG",
"r-SXRnmA5Uv",
"0KOx-SuGs5b",
"6Bi-wMKptjV",
"ewZ_imnEeUG",
"bXKDDJv5nwV",
"r_Q27chkcwr",
"xqwYSw_22Yh",
"xqwYSw_22Yh",
"1gLerozHBti",
"SbqTvje_PVx",
"IIsUkLuVdB5",
"nips_2022_lSfrwyww-FR",
... |
nips_2022_Qoow6uXwjnA | Quadproj: a Python package for projecting onto quadratic hypersurfaces | Quadratic hypersurfaces are a natural generalization of affine subspaces, and projections are elementary blocks of algorithms in optimization and machine learning. It is therefore intriguing that no proper studies and tools have been developed to tackle this nonconvex optimization problem. The quadproj package is user-friendly, documented software dedicated to projecting a point onto a non-cylindrical central quadratic hypersurface.
| Reject | The paper presents a software package to do projections on the non-cylindrical central quadratic hypersurfaces. While the problem is certainly interesting (all the reviewers agree), its motivation in the context of machine learning seems to be lacking in the paper. This is missing in the paper currently and is the main source of confusion in the reviewers' and the AC's minds. After discussions among the reviewers, I believe, the paper has much scope for improvements notwithstanding the merits. Please look at the suggestions carefully. Also, the paper, as it is, seems to better fit the scope of the MLOSS journal rather than the NeurIPS conference, just a thought from the AC. Having said that, I would encourage the authors to continue the development of this package. | train | [
"S71T3SdfM1Y",
"_B1xJ5TFkA",
"ICRn4sRPk3WJ",
"fOqIqSYg-Jq",
"k0t_nHxwFVv",
"yLbctIH-uWg",
"tqsWm659Uer",
"1L3tk2cdOHo"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I do not think NeurIPS is markedly different from the SISC in expectations. The premise of the cited Call For Papers is \"We invite submissions presenting new and original research\". I already mentioned a concern on novelty.\n\nIf the paper would have been presented as suggested by Reviewer o4hv, or the library ... | [
-1,
-1,
-1,
-1,
7,
3,
3,
8
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"ICRn4sRPk3WJ",
"tqsWm659Uer",
"yLbctIH-uWg",
"k0t_nHxwFVv",
"nips_2022_Qoow6uXwjnA",
"nips_2022_Qoow6uXwjnA",
"nips_2022_Qoow6uXwjnA",
"nips_2022_Qoow6uXwjnA"
] |
nips_2022_ymAsTHhrnGm | Inverse Game Theory for Stackelberg Games: the Blessing of Bounded Rationality | Optimizing strategic decisions (a.k.a. computing equilibrium) is key to the success of many non-cooperative multi-agent applications. However, in many real-world situations, we may face the exact opposite of this game-theoretic problem --- instead of prescribing equilibrium of a given game, we may directly observe the agents' equilibrium behaviors but want to infer the underlying parameters of an unknown game. This research question, also known as inverse game theory, has been studied in multiple recent works in the context of Stackelberg games. Unfortunately, existing works exhibit quite negative results, showing statistical hardness and computational hardness, assuming follower's perfectly rational behaviors. Our work relaxes the perfect rationality agent assumption to the classic quantal response model, a more realistic behavior model of bounded rationality. Interestingly, we show that the smooth property brought by such bounded rationality model actually leads to provably more efficient learning of the follower utility parameters in general Stackelberg games. Systematic empirical experiments on synthesized games confirm our theoretical results and further suggest its robustness beyond the strict quantal response model. | Accept | High-level view: this paper presents some interesting observations around
learning against a Stackelberg follower that corresponds to a quantal response model.
The learning seemingly relies strongly on the follower being a quantal responder with a logit regularizer, but this is an interesting setting to study, and one that seems to have been overlooked in the literature.
Thus I am broadly in favor of the paper.
Now, on a more detailed level, I do disagree with some claims made in the paper regarding bounded rationality. I would expect the authors to handle bounded rationality versus logit QRE more carefully in the camera ready. Personally, I would say that even the title of the paper ought to be changed. The short of it is that the authors conflate bounded rationality and quantal response behavior, but these are not the same thing. In fact, both the AC and other reviewers feel that the results are *not* likely to extend to most other models of bounded rationality. They may extend to other models of quantal response behavior, though.
Now, in more detail:
The authors state
> We remark that in general, even without such extreme case of dominated actions, the extra payoff information are now available on how much worse (or better) using the empirical frequency of actions from the boundedly rational responses, which has been overlooked due to the assumption of perfectly rational follower
But this can't be true: a special case of bounded rationality is that the
boundedly-rational player is rational enough to rule out the dominant action in
the example; in that case bounded rationality does not let us learn anything
from that action! The reason the authors can learn here is because they study a
very specific type of bounded rationality: quantal follower response.
Learnability then becomes doable specifically because the follower plays all
actions with non-zero probability.
A second thing is that the authors call it "perhaps surprising" etc that bounded
rationality helps. However, again, the authors are looking specifically at
quantal response behavior under the logit response model, which is well-known to be easier to handle. In
particular, adding the logit quantal response assumption corresponds to adding a
strongly-convex regularizer to the follower's best response problem. Regularizing
a nonsmooth function is known to lead to much nicer behavior in many settings,
e.g. in optimization, learning, etc. Moreover, the logit model specifically leads to a form of regularization that plays every action with non-zero probability, again a very useful fact for learning the follower model.
Finally, the authors say
> While our quantitative results do rely on the form of this model, we believe the insight revealed from our model generalizes to most problems of learning from boundedly rational agent behaviors.
This seems very unlikely, given what I stated above. Quantal response is a very
special type of bounded rationality. The results should probably only be
expected to extend to other problems that share some of these characteristics,
e.g. playing every action with non-zero probability, and possibly also corresponding to strongly
convex regularization.
Let me finish by reiterating that this is not a huge issue for the paper: the logit QRE setting is definitely important and of interest in its own right. But it's important to better delineate where we may hope for these results to generalize or not generalize. | train | [
"pS7HwPjx11Bt",
"rjRJR9_qp2v",
"vJufpV1hb6J",
"PmlUDntRTK9",
"CuJi35zOOyW",
"7MDD-D6ADOV",
"lgZIwkuGZrK",
"OqgxvONleaY",
"Hs-fDAiHLi0"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Sorry, I am little late in the discussion. First, I want to point out that NeurIPS allows updating the paper as a rebuttal revision to include new/modified things. I see that some other comments also asked for some discussion, etc. I do expect the authors to update the paper instead of saying that will add the di... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"7MDD-D6ADOV",
"vJufpV1hb6J",
"CuJi35zOOyW",
"lgZIwkuGZrK",
"OqgxvONleaY",
"Hs-fDAiHLi0",
"nips_2022_ymAsTHhrnGm",
"nips_2022_ymAsTHhrnGm",
"nips_2022_ymAsTHhrnGm"
] |
nips_2022_NMTSIY6ykw7 | Semi-Discrete Normalizing Flows through Differentiable Tessellation | Mapping between discrete and continuous distributions is a difficult task and many have had to resort to heuristic approaches. We propose a tessellation-based approach that directly learns quantization boundaries in a continuous space, complete with exact likelihood evaluations. This is done through constructing normalizing flows on convex polytopes parameterized using a simple homeomorphism with an efficient log determinant Jacobian. We explore this approach in two application settings, mapping from discrete to continuous and vice versa. Firstly, a Voronoi dequantization allows automatically learning quantization boundaries in a multidimensional space. The location of boundaries and distances between regions can encode useful structural relations between the quantized discrete values. Secondly, a Voronoi mixture model has near-constant computation cost for likelihood evaluation regardless of the number of mixture components. Empirically, we show improvements over existing methods across a range of structured data modalities. | Accept | The authors develop a tessellation-based approach to map between discrete and continuous spaces. They use this approach to dequantize data to port likelihood-based models on continuous spaces to discrete spaces and to scale mixture models where each mixture component has disjoint support. From the view of normalizing flows, the approach is neat, and the results are supportive. The two outstanding things I'd encourage the authors to work on are 1) the accessibility of the writing and 2) placing the work in the context of generative models outside of normalizing flows | train | [
"1sCOOudUFdf",
"6ezUyvW-51a",
"0BpMt3MuY_i",
"kdzXsvQaeut",
"v0j2XMKTm-r",
"X4bxTntZ3U0",
"P45bYGgRViH",
"GMO4KBkmh58",
"BZAr-5CbXVh",
"VL4P_W_FLdw",
"fQ0VPbqPdoB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the revision! The newly added figure 2 does explain the concept in much simpler terms. My apologies for not having found the supplementary material before, thank you for pointing it out. Including all these details in the appendix is indeed much appreciated!\n\n",
" Thank you to the authors for th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"v0j2XMKTm-r",
"P45bYGgRViH",
"X4bxTntZ3U0",
"fQ0VPbqPdoB",
"BZAr-5CbXVh",
"GMO4KBkmh58",
"VL4P_W_FLdw",
"nips_2022_NMTSIY6ykw7",
"nips_2022_NMTSIY6ykw7",
"nips_2022_NMTSIY6ykw7",
"nips_2022_NMTSIY6ykw7"
] |
nips_2022_z9poo2GhOh6 | Trajectory of Mini-Batch Momentum: Batch Size Saturation and Convergence in High Dimensions | We analyze the dynamics of large batch stochastic gradient descent with momentum (SGD+M) on the least squares problem when both the number of samples and dimensions are large. In this setting, we show that the dynamics of SGD+M converge to a deterministic discrete Volterra equation as dimension increases, which we analyze. We identify a stability measurement, the implicit conditioning ratio (ICR), which regulates the ability of SGD+M to accelerate the algorithm. When the batch size exceeds this ICR, SGD+M converges linearly at a rate of $\mathcal{O}(1/\sqrt{\kappa})$, matching optimal full-batch momentum (in particular performing as well as a full-batch but with a fraction of the size). For batch sizes smaller than the ICR, in contrast, SGD+M has rates that scale like a multiple of the single batch SGD rate. We give explicit choices for the learning rate and momentum parameter in terms of the Hessian spectra that achieve this performance. | Accept | The paper analyses an SGD with Momentum (SGD+M) in a setting where the dimension and number of samples are large. The authors provide a theoretical justification for a least square problem.
They identify two settings based on the implicit conditioning ratio (ICR). In one setting, the SGD+M achieves linear convergence, whereas, for smaller batch sizes, the convergence speed gets worse.
In general, the paper presents novel ideas that provide new insights into a well-known and heavily used algorithm such as SGD+M. For a camera-ready version, please try to incorporate the valuable suggestions from reviewers. Thanks
| train | [
"0ubEhA6PoRW",
"4vb23U_8cQD",
"hF8AukSVVhC",
"yJnihRN_f1",
"wXlaLnYHYuD",
"RpjDCvxR0TZ3",
"-yA5D3XP2cp",
"MqAzhRGTk_",
"wZnxLIBig9j",
"GC8VijXwuTW",
"tlkfOggzNWF",
"yFiPDClknly",
"7n9NJZMUYog",
"2M-nOu6hYDc",
"FoEeqR18HW",
"P9plx6Juyd",
"I7AxjeXhHjz",
"CTYXlMwAKtx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_rev... | [
" Thanks for addressing the questions and weaknesses in my comment. I have raised the score according to the responses.",
" Thanks for addressing questions in my original review and thanks for the clarification.",
" Thanks for the explanation! It makes perfect sense to me, and I can imagine that this enjoys cer... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"tlkfOggzNWF",
"2M-nOu6hYDc",
"yJnihRN_f1",
"wXlaLnYHYuD",
"RpjDCvxR0TZ3",
"-yA5D3XP2cp",
"MqAzhRGTk_",
"wZnxLIBig9j",
"7n9NJZMUYog",
"nips_2022_z9poo2GhOh6",
"I7AxjeXhHjz",
"CTYXlMwAKtx",
"P9plx6Juyd",
"FoEeqR18HW",
"nips_2022_z9poo2GhOh6",
"nips_2022_z9poo2GhOh6",
"nips_2022_z9poo2... |
nips_2022_pn5trhFskOt | A Closer Look at Weakly-Supervised Audio-Visual Source Localization | Audio-visual source localization is a challenging task that aims to predict the location of visual sound sources in a video. Since collecting ground-truth annotations of sounding objects can be costly, a plethora of weakly-supervised localization methods that can learn from datasets with no bounding-box annotations have been proposed in recent years, by leveraging the natural co-occurrence of audio and visual signals. Despite significant interest, popular evaluation protocols have two major flaws. First, they allow for the use of a fully annotated dataset to perform early stopping, thus significantly increasing the annotation effort required for training. Second, current evaluation metrics assume the presence of sound sources at all times. This is of course an unrealistic assumption, and thus better metrics are necessary to capture the model's performance on (negative) samples with no visible sound sources. To accomplish this, we extend the test set of popular benchmarks, Flickr SoundNet and VGG-Sound Sources, in order to include negative samples, and measure performance using metrics that balance localization accuracy and recall. Using the new protocol, we conducted an extensive evaluation of prior methods, and found that most prior works are not capable of identifying negatives and suffer from significant overfitting problems (rely heavily on early stopping for best results). We also propose a new approach for visual sound source localization that addresses both these problems. In particular, we found that, through extreme visual dropout and the use of momentum encoders, the proposed approach combats overfitting effectively, and establishes a new state-of-the-art performance on both Flickr SoundNet and VGG-Sound Source. Code and pre-trained models are available at https://github.com/stoneMo/SLAVC. 
| Accept | The authors seem to have addressed most if not all of the reviewers' recommendations, leading to a much improved paper compared to the initial manuscript. The updated scores from the reviewers reflect the major improvements and therefore I recommend this paper be accepted in its updated form. | train | [
"aZJcKrJEhp_",
"jUDFjrfXOU",
"LQpkXag3gs-s",
"YtA-_fcS2_W",
"xe3yTODkPa-",
"miLSpwuulc-V",
"bkCD7i-IK-2",
"c12kofOJOq1",
"TF1fgmh4Q0m",
"RGDVWZvN2sD",
"pc6ctyDbGqI",
"QApnguVYFQ_",
"hr_MSjlnYA",
"ALpOS9tyQAh",
"tiLq31jU05t",
"WccGVGVEqyR",
"UbQLuPm983y",
"U6BsKTxwCx",
"WlNgoi5cjN... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" We sincerely thank all reviewers for the thoughtful responses and constructive feedback. We truly believe they improved the quality of the paper overall.\n\nWeakly-supervised audio-visual source localization is a challenging task that aims to predict the location of visual sound sources for enhanced audio-visual ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"nips_2022_pn5trhFskOt",
"LQpkXag3gs-s",
"miLSpwuulc-V",
"bkCD7i-IK-2",
"miLSpwuulc-V",
"QApnguVYFQ_",
"c12kofOJOq1",
"TF1fgmh4Q0m",
"RGDVWZvN2sD",
"ALpOS9tyQAh",
"tiLq31jU05t",
"mFWqHg5VHYm",
"WlNgoi5cjNx",
"U6BsKTxwCx",
"UbQLuPm983y",
"nips_2022_pn5trhFskOt",
"nips_2022_pn5trhFskOt... |
nips_2022_CKbqDtZnSc | A Policy-Guided Imitation Approach for Offline Reinforcement Learning | Offline reinforcement learning (RL) methods can generally be categorized into two types: RL-based and Imitation-based. RL-based methods could in principle enjoy out-of-distribution generalization but suffer from erroneous off-policy evaluation. Imitation-based methods avoid off-policy evaluation but are too conservative to surpass the dataset. In this study, we propose an alternative approach, inheriting the training stability of imitation-style methods while still allowing logical out-of-distribution generalization. We decompose the conventional reward-maximizing policy in offline RL into a guide-policy and an execute-policy. During training, the guide-policy and execute-policy are learned using only data from the dataset, in a supervised and decoupled manner. During evaluation, the guide-policy guides the execute-policy by telling where it should go so that the reward can be maximized, serving as the \textit{Prophet}. By doing so, our algorithm allows \textit{state-compositionality} from the dataset, rather than \textit{action-compositionality} conducted in prior imitation-style methods. We dub this new approach Policy-guided Offline RL (\texttt{POR}). \texttt{POR} demonstrates the state-of-the-art performance on D4RL, a standard benchmark for offline RL. We also highlight the benefits of \texttt{POR} in terms of improving with supplementary suboptimal data and easily adapting to new tasks by only changing the guide-policy. | Accept | This paper proposes an interesting new idea that is well-motivated through illustrative examples and is thoroughly evaluated. There are some ways in which the paper could be improved, e.g. by including additional experiments (e.g. with high-dim observation spaces, transfer across action spaces, and discrete action spaces), but there don't appear to be any major weaknesses. The paper is clearly above the bar for acceptance at NeurIPS.
| train | [
"Bd3ybizhtIV",
"g48dVxlFvW_",
"F6T51b7yF-dq",
"8Jfmw8RsJW_",
"Ll7aJdQz7oe",
"nFZ3685pCVu",
"9Wo4XGd4LA2",
"8_cG-74A4Om",
"-hNZGQ4AAQh",
"6h8nVNvyfnG",
"j_HThPtGF1h",
"ELLD1V0c9mL"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I will keep my score unchanged",
" Thank you for answering my questions. I decide to increase my score to a 6.",
" Dear reviewer, \n\nPlease let us know if our response has addressed the issues raised in your review. We hope that our corrections, clarifications, and additional results... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"Ll7aJdQz7oe",
"9Wo4XGd4LA2",
"6h8nVNvyfnG",
"nFZ3685pCVu",
"ELLD1V0c9mL",
"j_HThPtGF1h",
"6h8nVNvyfnG",
"-hNZGQ4AAQh",
"nips_2022_CKbqDtZnSc",
"nips_2022_CKbqDtZnSc",
"nips_2022_CKbqDtZnSc",
"nips_2022_CKbqDtZnSc"
] |
nips_2022_zAc2a6_0aHb | Posterior Collapse of a Linear Latent Variable Model | This work identifies the existence and cause of a type of posterior collapse that frequently occurs in the Bayesian deep learning practice. For a general linear latent variable model that includes linear variational autoencoders as a special case, we precisely identify the nature of posterior collapse to be the competition between the likelihood and the regularization of the mean due to the prior. Our result also suggests that posterior collapse may be a general problem of learning for deeper architectures and deepens our understanding of Bayesian deep learning. | Accept | This paper analyzes the phenomenon of posterior collapse in linear variational autoencoders. While only the linear case is addressed, all reviewers found the work worthy of acceptance, citing its clear contributions to this line of literature that seeks to understand how deep architectures interact with the evidence lower bound. In particular, this paper (for the linear model class) is able to pinpoint collapse to regularization of the mean of the latent variables. | train | [
"pHmZc94-nr",
"dV5VuRHmrv",
"_pzH64ZNIkZ",
"UG-vK2d9xZ2",
"1yVJxhbtR1L",
"hxT02hTE_u",
"YdmltLDNir5F",
"y2Q3ddAQ6H",
"PKUhjol0X03",
"iQ4mTUewKVI",
"gPMHtWAwjWB"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed feedback. We will improve our manuscript further in the final version.",
" Thank you for your detailed response. I also apologize for the late reply --- there were significant changes to the paper and a lot of details to carefully review. I'd also like to apologize that some of my critic... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"dV5VuRHmrv",
"_pzH64ZNIkZ",
"iQ4mTUewKVI",
"iQ4mTUewKVI",
"iQ4mTUewKVI",
"gPMHtWAwjWB",
"PKUhjol0X03",
"nips_2022_zAc2a6_0aHb",
"nips_2022_zAc2a6_0aHb",
"nips_2022_zAc2a6_0aHb",
"nips_2022_zAc2a6_0aHb"
] |
nips_2022_pk1C2qQ3nEQ | Active Learning in Bayesian Neural Networks: Balanced Entropy Learning Principle | Acquiring labeled data is challenging in many machine learning applications with limited budgets. Active learning gives a procedure to select the most informative data points and improve data efficiency by reducing the cost of labeling. The info-max learning principle maximizing mutual information such as BALD has been successful and widely adapted in various active learning applications. However, this pool-based specific objective inherently introduces a redundant selection. In this paper, we design and propose a new uncertainty measure, Balanced Entropy Acquisition (BalEntAcq), which captures the information balance between the uncertainty of underlying softmax probability and the label variable. To do this, we approximate each marginal distribution by Beta distribution. Beta approximation enables us to formulate BalEntAcq as a ratio between a shifted entropy and the marginalized joint entropy. The closed-form expression of BalEntAcq facilitates parallelization by estimating two parameters in each marginal Beta distribution. BalEntAcq is a purely standalone measure without requiring any relational computations with other data points. Nevertheless, BalEntAcq captures a well-diversified selection near the decision boundary with a margin, unlike other existing uncertainty measures such as BALD, Entropy, or Mean Standard Deviation (MeanSD). Finally, we demonstrate that our balanced entropy learning principle with BalEntAcq consistently outperforms well-known linearly scalable active learning methods, including a recently proposed PowerBALD, a simple but diversified version of BALD, by showing experimental results obtained from MNIST, CIFAR-100, SVHN, and TinyImageNet datasets. | Reject | The majority of reviewers found this paper to be confusing in its presentation, lacking novelty (e.g. Section 3), and not well motivated (e.g. 
BalEntAcq), with 3 out of 4 recommending rejection. I find that the paper particularly falters in its explanation of the point process entropy and derivation of the ultimate acquisition function. As the author-review discussion makes clear, the deviation from Shannon / differential entropy to point process entropy is at the core of the paper, but this is lost in the current draft. Due to this and moderate-to-minor issues such as the validity of the Beta approximation, the use of only MC dropout, and relationship to ELR schemes, I recommend rejection at this time. | test | [
"lPkjXkeA_f",
"0HbMW58Nip4",
"No5WLH60hoG",
"J0Xz25xwSZN",
"qG_os2P3fv8",
"cSbSqvBRFKW",
"bipFrzVe_MC",
"UTvh-8B78ju",
"ICg9kou8vnd",
"14MMqUee_n",
"iR3f1_e5LHu",
"ngmn3CYxeoH",
"WvWCwrCG9J2",
"k50T6lBAj_X",
"Xbh7KWQrIov"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **We are afraid to say that both convergence proofs have severe logical flaws, which are unacceptable for us in any case.** Again, the convergence claim *on the null set* is vacuously true. *The reviewer *fhr7* must show a counter-example that does NOT converge to the true model in a finite-data regime.* Otherwis... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
3,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"0HbMW58Nip4",
"iR3f1_e5LHu",
"cSbSqvBRFKW",
"qG_os2P3fv8",
"14MMqUee_n",
"nips_2022_pk1C2qQ3nEQ",
"ngmn3CYxeoH",
"WvWCwrCG9J2",
"WvWCwrCG9J2",
"Xbh7KWQrIov",
"k50T6lBAj_X",
"nips_2022_pk1C2qQ3nEQ",
"nips_2022_pk1C2qQ3nEQ",
"nips_2022_pk1C2qQ3nEQ",
"nips_2022_pk1C2qQ3nEQ"
] |
nips_2022_gyZMZBiI9Cw | Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation | We address a practical yet challenging problem of training robot agents to navigate in an environment following a path described by some language instructions. The instructions often contain descriptions of objects in the environment. To achieve accurate and efficient navigation, it is critical to build a map that accurately represents both spatial location and the semantic information of the environment objects. However, enabling a robot to build a map that well represents the environment is extremely challenging as the environment often involves diverse objects with various attributes. In this paper, we propose a multi-granularity map, which contains both object fine-grained details (\eg, color, texture) and semantic classes, to represent objects more comprehensively. Moreover, we propose a weakly-supervised auxiliary task, which requires the agent to localize instruction-relevant objects on the map. Through this task, the agent not only learns to localize the instruction-relevant objects for navigation but also is encouraged to learn a better map representation that reveals object information. We then feed the learned map and instruction to a waypoint predictor to determine the next navigation goal. Experimental results show our method outperforms the state-of-the-art by 4.0% and 4.6% w.r.t. success rate both in seen and unseen environments, respectively on VLN-CE dataset. The code is available at https://github.com/PeihaoChen/WS-MGMap. | Accept | The paper received all positive reviews (3x accept ratings, 1x strong accept rating). The meta-reviewer agrees with the reviewers' assessment of the paper. | train | [
"AsoTAOvzHTe",
"QuTH3-YBRLI",
"B0GelQznc6O",
"oNhISEgcBEq",
"R9isFYPWowS",
"5so-KWzecbO",
"C7RXbXrsx0y",
"kB0MuSd-I_2",
"K4jHTdXyBu-",
"YUkZK8OjEs1",
"-9SVisgBpl7",
"uYFVvAgRPle",
"r1T_4wE1wwF",
"RcOOWiEW-ZH",
"9ScPfZgeMM-",
"FmDeuaql3h0",
"-xXAtkiw_Id",
"06N1mDn3HYJ",
"k9MT2sk0m... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_... | [
" Thanks for your valuable comments!",
" Thanks for answering the questions and the additional analysis. I am increasing my rating to 8.",
" Thanks for your valuable reviews. We’re pleased that our response addresses your concerns and the Reviewer is happy to increase the rating to accept. \n\nAs the rebuttal p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"QuTH3-YBRLI",
"-xXAtkiw_Id",
"5so-KWzecbO",
"5so-KWzecbO",
"uYFVvAgRPle",
"C7RXbXrsx0y",
"kB0MuSd-I_2",
"K4jHTdXyBu-",
"r1T_4wE1wwF",
"-9SVisgBpl7",
"O93fOlHIqhc",
"mOZQ5CJSazh",
"RcOOWiEW-ZH",
"9ScPfZgeMM-",
"k9MT2sk0mSE",
"nips_2022_gyZMZBiI9Cw",
"06N1mDn3HYJ",
"nips_2022_gyZMZB... |
nips_2022_iH4eyI5A7o | Learning Active Camera for Multi-Object Navigation | Getting robots to navigate to multiple objects autonomously is essential yet difficult in robot applications. One of the key challenges is how to explore environments efficiently with camera sensors only. Existing navigation methods mainly focus on fixed cameras and few attempts have been made to navigate with active cameras. As a result, the agent may take a very long time to perceive the environment due to limited camera scope. In contrast, humans typically gain a larger field of view by looking around for a better perception of the environment. How to make robots perceive the environment as efficiently as humans is a fundamental problem in robotics. In this paper, we consider navigating to multiple objects more efficiently with active cameras. Specifically, we cast camera movement as a Markov Decision Process and reformulate the active camera problem as a reinforcement learning problem. However, we have to address two new challenges: 1) how to learn a good camera policy in complex environments and 2) how to coordinate it with the navigation policy. To address these, we carefully design a reward function to encourage the agent to explore more areas by moving the camera actively. Moreover, we exploit human experience to infer a rule-based camera action to guide the learning process. Lastly, to better coordinate the two kinds of policies, the camera policy takes navigation actions into account when making camera moving decisions. Experimental results show our camera policy consistently improves the performance of multi-object navigation over four baselines on two datasets. 
| Accept | This paper proposes to decouple the camera policy from the navigation policy in goal-driven navigation agents trained using RL, and builds upon the local and global mapping and planning approach by adding an additional recurrent network that takes as inputs global reconstructed maps, heuristic directions, and navigation actions, to predict camera left/none/right turn actions. Rewards for the camera policy come from a camera turning heuristic and from map exploration heuristic. The agent is evaluated on multi-goal navigation tasks and tested and transferred to Matterport 3D and Gibson. The authors conduct a large set of ablations and comparison b/w different mapping and planning, SLAM-based or deep RL methods.
Reviewers praised the clarity and motivation of the method (iN1f, pNpw, uqJh), the ablations (iN1f, uqJh), evaluation and improvement on a SLAM baseline (pNpw, uqJh) and the generalisation performance (iN1f, pNpw, uqJh).
Reviewers noted that camera policy was optimized irrespective of future navigation strategies, which could be an issue (iN1f), and a lack of discussion about POMDP formulations of the navigation policy and assumptions about perfect state estimation (iN1f), limitations to 2D motion (uqJh), and recommended that the authors emphasize that the method can work with pure SLAM mapping (pNpw). The authors added experiments demonstrating that a single active camera outperformed non-active multi-camera systems (pNpw, uqJh), experiments on different sizes of the egocentric map in the planner (uqJh) and experiments with actuator noise (pNpw).
Reviewers agree on high scores (6, 6, 7) and therefore I would recommend this paper for acceptance.
Thank you,
Sincerely,
Area Chair
| train | [
"eA56QAsGp3A",
"M6dzKx5KdqS",
"gzqcQ5y9JNv",
"tumZj8JG5m5",
"5BniBY6R2cy",
"0NfpjVWoO2l",
"kEv5Kx33_8i",
"uVgfCR1IBJdD",
"eYUlHPwePCG",
"b_9P96pwI7E",
"51rhZtcqOP_",
"WjuQq9gfoHM",
"RQba-I2AUB4",
"BIqCjZUEBoq",
"bgkTbZHXX8m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As suggested by the Reviewer, applying noise at train time has the potential to improve robustness in a noisy evaluation environment. We totally agree with this idea. To evaluate its effectiveness, we conduct an experiment where we train and evaluate the agents in an environment with actuation noise and pose sens... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"0NfpjVWoO2l",
"tumZj8JG5m5",
"5BniBY6R2cy",
"51rhZtcqOP_",
"eYUlHPwePCG",
"kEv5Kx33_8i",
"uVgfCR1IBJdD",
"BIqCjZUEBoq",
"b_9P96pwI7E",
"RQba-I2AUB4",
"bgkTbZHXX8m",
"nips_2022_iH4eyI5A7o",
"nips_2022_iH4eyI5A7o",
"nips_2022_iH4eyI5A7o",
"nips_2022_iH4eyI5A7o"
] |
nips_2022_qqIrESv4f_L | Signal Processing for Implicit Neural Representations | Implicit Neural Representations (INRs) encoding continuous multi-media data via multi-layer perceptrons have shown undebatable promise in various computer vision tasks. Despite many successful applications, editing and processing an INR remains intractable as signals are represented by latent parameters of a neural network. Existing works manipulate such continuous representations via processing on their discretized instance, which breaks down the compactness and continuous nature of INR. In this work, we present a pilot study on the question: how to directly modify an INR without explicit decoding? We answer this question by proposing an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR. Our key insight is that spatial gradients of neural networks can be computed analytically and are invariant to translation, while mathematically we show that any continuous convolution filter can be uniformly approximated by a linear combination of high-order differential operators. With these two knobs, INSP-Net instantiates the signal processing operator as a weighted composition of computational graphs corresponding to the high-order derivatives of INRs, where the weighting parameters can be learned in a data-driven manner. Based on our proposed INSP-Net, we further build the first Convolutional Neural Network (CNN) that implicitly runs on INRs, named INSP-ConvNet. Our experiments validate the expressiveness of INSP-Net and INSP-ConvNet in fitting low-level image and geometry processing kernels (e.g. blurring, deblurring, denoising, inpainting, and smoothening) as well as for high-level tasks on implicit fields such as image classification. | Accept | The paper proposes a framework to perform signal processing tasks on a signal represented with an implicit neural representation directly in the representation space, without the need to instantiate the signal.
After the rebuttal period, all reviewers recommend acceptance.
In particular reviewer 1Yx6, an expert on the topic, finds the idea original, and the quality and clarity of the paper to be high. The reviewer finds that while for now the significance is limited (since working as proposed in the representation space is computationally prohibitively expensive), this is not a major issue, since the paper is likely to inspire work that will push the idea further.
Reviewers edqT and qGk1 also liked the general idea of the paper and found the proposed method to directly perform operations on the representation space of implicit neural representations to be novel and interesting.
Reviewer Uh51 initially identified a few issues regarding experimental design, the validation, and on the included literature. The main concern regarding the experimental design of the reviewer was that the paper focuses on images (and not signal processing tasks more broadly), which I don't consider a shortcoming, due to the importance of image processing tasks. The concerns on the validation issues have been addressed as well, and the reviewer raised their score.
I recommend acceptance of the paper.
| train | [
"2JTMG58rKME",
"dXVK3fe_f7",
"5Y3WB-jp_KM",
"i0WJNiz_YkH",
"9rJtPH00wzh",
"ygOOybgIAI",
"EiDChq7MSTZ",
"dftRrH0uYG",
"k4DW-Z1-_g9",
"xq2V7ox9q0A",
"DSRtoXRN8c7",
"l2d_WfsRZXS",
"XEA1Ai6fY4",
"vrtUbXesFLr",
"CdXcHBzQ9ji",
"SueoUOqUiX",
"U8JK7cxakOSS",
"D1cBAthyIX",
"u78amX2Rj2M",
... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" I appreciate the responses from the authors. I'm increasing my score to 5 at this point, while I look forward to discussing more with the other reviewers in the next phase.",
" We appreciate the reviewer's positive comments. We have updated Fig. 4 which visualizes comparisons against other image denoising metho... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"9rJtPH00wzh",
"ygOOybgIAI",
"k4DW-Z1-_g9",
"dftRrH0uYG",
"xq2V7ox9q0A",
"DSRtoXRN8c7",
"nips_2022_qqIrESv4f_L",
"vrtUbXesFLr",
"l2d_WfsRZXS",
"U8JK7cxakOSS",
"B3bzwgmnmI",
"_z7F07wK6zw",
"M6iD6n_FP-M",
"A3BPjTcVr7J",
"nips_2022_qqIrESv4f_L",
"B3bzwgmnmI",
"D1cBAthyIX",
"M6iD6n_FP-... |
nips_2022_0ucMtEKCihU | Stochastic Window Transformer for Image Restoration | Thanks to the powerful representation capabilities, transformers have made impressive progress in image restoration. However, existing transformer-based methods do not carefully consider the particularities of image restoration. In general, image restoration requires that an ideal approach should be translation-invariant to the degradation, i.e., the undesirable degradation should be removed irrespective of its position within the image. Furthermore, the local relationships also play a vital role, which should be faithfully exploited for recovering clean images. Nevertheless, most transformers either adopt local attention with the fixed local window strategy or global attention, which unfortunately breaks the translation invariance and causes huge loss of local relationships. To address these issues, we propose an elegant stochastic window strategy for transformers. Specifically, we first introduce the window partition with stochastic shift to replace the original fixed window partition for training. Then, we design a new layer expectation propagation algorithm to efficiently approximate the expectation of the induced stochastic transformer for testing. Our stochastic window transformer not only enjoys powerful representation but also maintains the desired property of translation invariance and locality. Experiments validate the stochastic window strategy consistently improves performance on various image restoration tasks (deraining, denoising and deblurring) by significant margins. The code is available at https://github.com/jiexiaou/Stoformer. | Accept | The paper proposes a new stochastic window strategy for image restoration. The stochastic window transformer layer is invariant to translations and is applied to the image degradation and mitigates loss of locality, hence making the approach potentially more robust.
The reviews of the paper were mixed, with strong arguments both for and against. One point of contention was whether the layer made sense at all, that is whether the invariance assumption is correct for image restoration. Another point of criticism was the additional computational burden imposed by the stochastic window layer. On the other hand, reviewers liked the SOTA performance, the novelty of the presented ideas and the paper writing. The rebuttal phase addressed many issues raised by reviewers, including the equivariance vs. invariance issue.
In my opinion the paper is an interesting and elegant theoretical contribution. I agree that *invariance w.r.t. image degradation* is indeed the right concept. While the additional runtime overhead may prevent the paper's method from being applied in time-intensive scenarios, I still think that the approach has enough practical and theoretical strong points to merit publication. | train | [
"INjPip1OWD",
"5PZt6MjdA12",
"DvJ901IXT65",
"_PgC_gtF85W",
"0XjrPMg0oVP",
"qEmuWStn581",
"_Z-RvLfCSbc",
"AH2tsiIdC-a",
"hEUk35pS2SH",
"-irANMddrbJ",
"1UBikxWW91a",
"Jp_0ZPwekbH",
"zGZt3VlWZ3_",
"US2ILHA-SIU",
"_R9GtChfZpl",
"lie0Cgor6n6",
"HuClDt4VCTO"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your patient reply. After reading the feedback from the authors, I understand how the method work.\n\n1. The problem is interesting. But, as shown in Table. 2, the computational cost is extremely heavy (more than x10). While the proposed method costs less memory compared with the sliding window methods... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"_Z-RvLfCSbc",
"DvJ901IXT65",
"_PgC_gtF85W",
"0XjrPMg0oVP",
"Jp_0ZPwekbH",
"1UBikxWW91a",
"zGZt3VlWZ3_",
"hEUk35pS2SH",
"-irANMddrbJ",
"HuClDt4VCTO",
"lie0Cgor6n6",
"_R9GtChfZpl",
"US2ILHA-SIU",
"nips_2022_0ucMtEKCihU",
"nips_2022_0ucMtEKCihU",
"nips_2022_0ucMtEKCihU",
"nips_2022_0uc... |
nips_2022_fJguu0okUY1 | An Empirical Study on Disentanglement of Negative-free Contrastive Learning | Negative-free contrastive learning methods have attracted a lot of attention with simplicity and impressive performances for large-scale pretraining. However, its disentanglement property remains unexplored. In this paper, we examine negative-free contrastive learning methods to study the disentanglement property empirically. We find that existing disentanglement metrics fail to make meaningful measurements for high-dimensional representation models, so we propose a new disentanglement metric based on Mutual Information between latent representations and data factors. With this proposed metric, we benchmark the disentanglement property of negative-free contrastive learning on both popular synthetic datasets and a real-world dataset CelebA. Our study shows that the investigated methods can learn a well-disentangled subset of representation. As far as we know, we are the first to extend the study of disentangled representation learning to high-dimensional representation space and introduce negative-free contrastive learning methods into this area. The source code of this paper is available at https://github.com/noahcao/disentanglement_lib_med. | Accept | There was a consensus among reviewers that this paper should be accepted. The key convincing arguments that this paper studies a novel setting: how to measure the disentanglement in high-dimensional spaces. For this, the authors perform extensive experiments and come up with a novel metric. The reviewers further felt that concerns raised in the initial reviews were subsequently addressed in the author rebuttal. | train | [
"5ve0CHEmbL5",
"GB9uktcFyOk",
"k3cmX6TRy6L",
"e4B_8WmgnNY",
"_wJz9mH2wYB",
"RrqTESF49n",
"F4bZORMR-Ik",
"c5Q-Vh6OWIj",
"TbfsqMY0U5wO",
"aj-rfx2XwWYH",
"aWaRz0U8fq",
"_P1b8gj3N19",
"apAuRsuANRa",
"dMLNMf7I2_F",
"solBD3SzCwQ",
"iSsosWT0upi6",
"V3tY56-7OOj",
"eaqMF7TtCXW",
"-r-qr-Zd... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" Dear Reviewer s3YA,\n\nAs stated in the discussion with Reviewer beE6, regarding your concern that we did not show the superiority of MED, we have added more evidence to demonstrate its superiority. In the revised draft, we added **Appendix I**, which contains both experimental and theoretical analysis to show t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"solBD3SzCwQ",
"T5q3SoJ-Nr",
"hIDsJpsHTqL",
"U_6Vl5DNUxp",
"F4bZORMR-Ik",
"c5Q-Vh6OWIj",
"ij4wkUqhGXB",
"solBD3SzCwQ",
"apAuRsuANRa",
"apAuRsuANRa",
"apAuRsuANRa",
"dMLNMf7I2_F",
"5NwnmMi0xI6D",
"solBD3SzCwQ",
"hIDsJpsHTqL",
"hIDsJpsHTqL",
"hIDsJpsHTqL",
"ij4wkUqhGXB",
"ij4wkUqhG... |
nips_2022_59pMU2xFxG | What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods | A multitude of explainability methods has been described to try to help users better understand how modern AI systems make decisions. However, most performance metrics developed to evaluate these methods have remained largely theoretical -- without much consideration for the human end-user. In particular, it is not yet clear (1) how useful current explainability methods are in real-world scenarios; and (2) whether current performance metrics accurately reflect the usefulness of explanation methods for the end user. To fill this gap, we conducted psychophysics experiments at scale ($n=1,150$) to evaluate the usefulness of representative attribution methods in three real-world scenarios. Our results demonstrate that the degree to which individual attribution methods help human participants better understand an AI system varies widely across these scenarios. This suggests the need to move beyond quantitative improvements of current attribution methods, towards the development of complementary approaches that provide qualitatively different sources of information to human end-users. | Accept | This paper introduces a human evaluation framework for benchmarking current explainers.
There was an engaged discussion between authors and reviewers. Many concerns were clarified and the average score was raised from 4.75 to 5.5. Some concerns remain regarding the intrinsic limit of human evaluation, but overall, there are no major flaws with the current study and the framework is likely to be valuable since human evaluation of explainers is central to the progress of XAI.
Hence, I recommend acceptance of the paper.
| test | [
"Ogq71NPljpu",
"pwtQO-OYonk",
"t0Q0gO_DqvF",
"9uCidmised6",
"FLIezUj0wYf",
"dkyahZIHfAO",
"1VgxA8lzrnB",
"_SgSaqbnCIX",
"rtbVifDy4naX",
"QD_N6rj0xyf",
"TVcnXf27GiE",
"s0e6VhbLAbx",
"fQbFQraV-R3",
"OzP2iE7rn_",
"Dovw34AVwKV",
"TXDes3J-aih",
"e5Bc5UmMfg2",
"Uc3VcYyfQv",
"6KTFD95uEN... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" The major concerns I've raised have been successfully addressed by the authors, thus I am updating my score accordingly.",
" We are pleased to have successfully addressed most of the reviewer's concerns. Please kindly find additional clarifications below.\n\n**Yes, my initial comment was misguided, I did not phra... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"pwtQO-OYonk",
"rtbVifDy4naX",
"_SgSaqbnCIX",
"dkyahZIHfAO",
"1VgxA8lzrnB",
"Dovw34AVwKV",
"fQbFQraV-R3",
"TVcnXf27GiE",
"s0e6VhbLAbx",
"TVcnXf27GiE",
"s0e6VhbLAbx",
"6KTFD95uEN",
"Uc3VcYyfQv",
"e5Bc5UmMfg2",
"TXDes3J-aih",
"nips_2022_59pMU2xFxG",
"nips_2022_59pMU2xFxG",
"nips_2022... |
nips_2022_K48UYo0glaJ | Theseus: A Library for Differentiable Nonlinear Optimization | We present Theseus, an efficient application-agnostic open source library for differentiable nonlinear least squares (DNLS) optimization built on PyTorch, providing a common framework for end-to-end structured learning in robotics and vision. Existing DNLS implementations are application specific and do not always incorporate many ingredients important for efficiency. Theseus is application-agnostic, as we illustrate with several example applications that are built using the same underlying differentiable components, such as second-order optimizers, standard cost functions, and Lie groups. For efficiency, Theseus incorporates support for sparse solvers, automatic vectorization, batching, GPU acceleration, and gradient computation with implicit differentiation and direct loss minimization. We do extensive performance evaluation in a set of applications, demonstrating significant efficiency gains and better scalability when these features are incorporated. Project page: https://sites.google.com/view/theseus-ai/ | Accept | This paper presents Theseus, a software library which provides a new layer in the form of a differentiable nonlinear least squares (DNLS) solver. The forward pass solves the problem and the backward pass provides derivatives for the optimum with respect to parameters. The reviewers uniformly appreciated the presentation of the paper and the usefulness and packaging of the proposed library. The library's features such as sparse solvers and Lie group algebra were also appreciated by the reviewers. Overall the paper and the library are a strong contribution to the NeurIPS community and hence I am happy to recommend acceptance. I would urge the authors to make sure they follow through on feature requests and other suggestions made by the reviewers.
"SSMrTU744Oi",
"noBoymSGu6-",
"yp9p2Cih7xo",
"pFJIRP4d7fM",
"QHCL1WOEYtn",
"hf5NeA1B9_l",
"piQcOLT4Vkv",
"3JQPwPGiLCn"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their feedback. We are pleased to see that the reviewer thinks that with this library researchers “can more easily design differentiable programming systems while spending less time...”; this is one of our main goals! We are also glad the reviewer recognized some of the engineering chall... | [
-1,
-1,
-1,
-1,
7,
8,
8,
6
] | [
-1,
-1,
-1,
-1,
3,
2,
3,
3
] | [
"3JQPwPGiLCn",
"piQcOLT4Vkv",
"hf5NeA1B9_l",
"QHCL1WOEYtn",
"nips_2022_K48UYo0glaJ",
"nips_2022_K48UYo0glaJ",
"nips_2022_K48UYo0glaJ",
"nips_2022_K48UYo0glaJ"
] |
nips_2022_IiCsx9KNVa0 | Unsupervised Representation Learning from Pre-trained Diffusion Probabilistic Models | Diffusion Probabilistic Models (DPMs) have shown a powerful capacity of generating high-quality image samples. Recently, diffusion autoencoders (Diff-AE) have been proposed to explore DPMs for representation learning via autoencoding and succeed in various downstream tasks. Their key idea is to jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for reconstructing images. Considering that training DPMs from scratch will take a long time and there have existed numerous pre-trained DPMs, we propose \textbf{P}re-trained \textbf{D}PM \textbf{A}uto\textbf{E}ncoding (\textbf{PDAE}), a general method to adapt existing pre-trained DPMs to the decoders for image reconstruction, with better training efficiency and performance than Diff-AE. Specifically, we find that the reason pre-trained DPMs fail to reconstruct an image from its latent variables is the information loss of the forward process, which causes a gap between their predicted posterior mean and the true one. From this perspective, the classifier-guided sampling method can be explained as computing an extra mean shift to fill the gap, reconstructing the lost class information in samples. These imply that the gap corresponds to the lost information of the image, and we can reconstruct the image by filling the gap. Drawing inspiration from this, we employ a trainable model to predict a mean shift according to the encoded representation and train it to fill as much of the gap as possible; in this way, the encoder is forced to learn as much information as possible from images to help the filling. By reusing part of the network of pre-trained DPMs and redesigning the weighting scheme of the diffusion loss, PDAE can learn meaningful representations from images efficiently. Extensive experiments demonstrate the effectiveness, efficiency and flexibility of PDAE.
| Accept | This paper presents a new unsupervised learning method by making full use of pre-trained diffusion probabilistic models. Extensive experiments show that the proposed method can obtain an improvement in performance and learning time. Four reviewers voted for accepting the paper after the rebuttal and the discussion. All concerns raised by the reviewers have been well addressed by the authors. The AC agrees with the reviewers and recommends accepting the paper. Also, AC urges the authors to improve their paper by taking into account all the suggestions from reviewers. | train | [
"KsDw-wpGpZ",
"_VGELRxW2zA",
"7MUtGSMTyre",
"pihWp54Dqgz",
"2tP79gRUzAR",
"jm669frnFzs",
"6EmKglXS1z",
"GD9yWj9EvuX",
"m68yJcLOE1d",
"d8JMWd3UVaQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I vote for accepting this paper as I think that it proposes a great approach and presents an improvement in the performance and learning time.",
" Dear reviewers,\n\nwe first thank you again for your valuable comments and suggestions. In the previous replies, we have tried our best to address your questions poi... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
3
] | [
"pihWp54Dqgz",
"nips_2022_IiCsx9KNVa0",
"m68yJcLOE1d",
"d8JMWd3UVaQ",
"GD9yWj9EvuX",
"6EmKglXS1z",
"nips_2022_IiCsx9KNVa0",
"nips_2022_IiCsx9KNVa0",
"nips_2022_IiCsx9KNVa0",
"nips_2022_IiCsx9KNVa0"
] |
nips_2022_JVoKzM_-lhz | SPoVT: Semantic-Prototype Variational Transformer for Dense Point Cloud Semantic Completion | Point cloud completion is an active research topic for 3D vision and has been widely
studied in recent years. Instead of directly predicting the missing point cloud from
the partial input, we introduce a Semantic-Prototype Variational Transformer
(SPoVT) in this work, which takes both the partial point cloud and its semantic
labels as the inputs for semantic point cloud object completion. By observing
and attending to geometry and semantic information as input features, our SPoVT
derives point cloud features and their semantic prototypes for completion
purposes. As a result, our SPoVT not only performs point cloud completion with
varying resolution, but also allows manipulation of different semantic parts of an
object. Experiments on benchmark datasets quantitatively and qualitatively
verify the effectiveness and practicality of our proposed model.
| Accept | The paper received mixed reviews. After rebuttal, reviewers Bq58 and QsJ3 decided to raise the rating to weak accept. So all the reviewers give positive ratings and think the authors have addressed their concerns well. Taking the comments of the reviewers into account, the AC decided to accept this paper at NeurIPS. | val | [
"OveTOV07DNq",
"pnahU-9ewT6",
"_YDZpSg3qov",
"I4ww292c6ai",
"0Ze815vMyjm",
"aAXgXXanf6",
"bblpeRhwe-E",
"2aBjkzcQuBz",
"E2t6mp6qHH",
"XgLwZoTfZv",
"Gahens_EyLU",
"Pfoz2XPoKJS",
"YXkTr588ypS",
"1vf2858IZIr",
"6FX9lkwtct5"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank the reviewer for willing to update the rating to weak accept (6) after the discussion period. We appreciate the reviewer for clarifying the above particular issue. We understand that the use of pre-trained DGCNN for evaluation and comparison may still be a concern. In recent works on semantic i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
5
] | [
"pnahU-9ewT6",
"XgLwZoTfZv",
"2aBjkzcQuBz",
"0Ze815vMyjm",
"aAXgXXanf6",
"bblpeRhwe-E",
"6FX9lkwtct5",
"1vf2858IZIr",
"YXkTr588ypS",
"Gahens_EyLU",
"Pfoz2XPoKJS",
"nips_2022_JVoKzM_-lhz",
"nips_2022_JVoKzM_-lhz",
"nips_2022_JVoKzM_-lhz",
"nips_2022_JVoKzM_-lhz"
] |
nips_2022_GoOuIrDHG_Y | End-to-end Symbolic Regression with Transformers | Symbolic regression, the task of predicting the mathematical expression of a function from the observation of its values, is a difficult task which usually involves a two-step procedure: predicting the "skeleton" of the expression up to the choice of numerical constants, then fitting the constants by optimizing a non-convex loss function. The dominant approach is genetic programming, which evolves candidates by iterating this subroutine a large number of times. Neural networks have recently been tasked to predict the correct skeleton in a single try, but remain much less powerful.
In this paper, we challenge this two-step procedure, and task a Transformer to directly predict the full mathematical expression, constants included. One can subsequently refine the predicted constants by feeding them to the non-convex optimizer as an informed initialization. We present ablations to show that this end-to-end approach yields better results, sometimes even without the refinement step. We evaluate our model on problems from the SRBench benchmark and show that our model approaches the performance of state-of-the-art genetic programming with several orders of magnitude faster inference. | Accept | The paper proposes a transformer-based approach to perform end-to-end symbolic regression. All three reviewers seem to agree on the usefulness of the proposed approach to reduce inference time. As pointed out by Reviewer Hxn5, although the performance is not superior, the advantage of using pre-training over a GP-based approach is promising. The discussion phase has allowed covering important criticisms, and the authors have included in the appendix a discussion on inference time (App. G), an extended comparison with other DL skeleton approaches (App. H), as well as ablation studies (App. E), committing in turn to integrate the latter as well as possible to the main paper in the camera ready version. Furthermore, it was stressed in the discussion by one of the referees that the implementation released in the supplementals is of high quality and bug-free, which should be of help to the future research community once open-sourced. For all these reasons, I am recommending the paper to be accepted. | train | [
"dwc8TcUG5pi",
"PK7b310vOC6",
"-jM4PNItdVq",
"Th5iiD2PgIQ",
"CwkZtvpTZ8I",
"9C6cG7sPmQx",
"CMissa5zgmZ",
"sWQ_Pe4tjFN",
"1Bw8upTML-Z"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Due to time constraints with respect to today’s deadline, we included in the appendix the discussion on inference time (App. G), an extended comparison with other DL skeleton approaches (App. H), as well as the ablation studies (App. E). We commit to integrate them as well as possible to the main paper in the cam... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
5
] | [
"PK7b310vOC6",
"Th5iiD2PgIQ",
"1Bw8upTML-Z",
"CwkZtvpTZ8I",
"sWQ_Pe4tjFN",
"CMissa5zgmZ",
"nips_2022_GoOuIrDHG_Y",
"nips_2022_GoOuIrDHG_Y",
"nips_2022_GoOuIrDHG_Y"
] |
nips_2022_BRIL0EFvTgc | Pay attention to your loss: understanding misconceptions about Lipschitz neural networks | Lipschitz constrained networks have gathered considerable attention in the deep learning community, with usages ranging from Wasserstein distance estimation to the training of certifiably robust classifiers. However, they remain commonly considered as less accurate, and their properties in learning are still not fully understood. In this paper we clarify the matter: when it comes to classification, 1-Lipschitz neural networks enjoy several advantages over their unconstrained counterpart. First, we show that these networks are as accurate as classical ones, and can fit arbitrarily difficult boundaries. Then, relying on a robustness metric that reflects operational needs, we characterize the most robust classifier: the WGAN discriminator. Next, we show that 1-Lipschitz neural networks generalize well under milder assumptions. Finally, we show that hyper-parameters of the loss are crucial for controlling the accuracy-robustness trade-off. We conclude that they exhibit appealing properties to pave the way toward provably accurate, and provably robust neural networks. | Accept | The submission proposes a series of novel results for Lipschitz models on robustness, generalization, and empirical performance, opening a new avenue for work on Lipschitz neural networks, for example. While these results are important and interesting, the authors have struggled to provide a clear takeaway from this submission, but discussions with reviewers have provided improvements to this paper. Despite its clarity issues and after reading the paper, I still recommend this paper for acceptance. | train | [
"bN4-nwtggcX",
"R9kn6E8lrJH",
"5jRE-IPDjv9",
"ZQ3sC4ZbkCX",
"gxRNgwxI958",
"u0grOrh2eq",
"jvfabJMTqqN",
"3aJ0kWAeCbw",
"5rdN57ilJuk",
"qPjoYjNMK_2",
"rQH8Czcpsu",
"ccQQl2RPvFS",
"m6FFNP7VFp",
"WKEs49JhlEF",
"62wT0gu6Gbt",
"OiW-mZgUHHJ",
"tJ3Se8vODd",
"Mdx2L7jO5-R"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your answer and the proposed developements. I will need some time to fully process this discussion, thank you very much for the interaction!",
" Thank you for your kind words.",
" Thank you for your thoughtful answer and your willingness to improve your rating. We have worked toward the changes ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"5jRE-IPDjv9",
"ZQ3sC4ZbkCX",
"gxRNgwxI958",
"5rdN57ilJuk",
"qPjoYjNMK_2",
"nips_2022_BRIL0EFvTgc",
"3aJ0kWAeCbw",
"ccQQl2RPvFS",
"Mdx2L7jO5-R",
"tJ3Se8vODd",
"OiW-mZgUHHJ",
"62wT0gu6Gbt",
"WKEs49JhlEF",
"nips_2022_BRIL0EFvTgc",
"nips_2022_BRIL0EFvTgc",
"nips_2022_BRIL0EFvTgc",
"nips... |
nips_2022_hH9ohGbhyv | Panchromatic and Multispectral Image Fusion via Alternating Reverse Filtering Network | Panchromatic (PAN) and multi-spectral (MS) image fusion, named Pan-sharpening, refers to super-resolving the low-resolution (LR) multi-spectral (MS) images in the spatial domain to generate the expected high-resolution (HR) MS images, conditioning on the corresponding high-resolution PAN images. In this paper, we present a simple yet effective alternating reverse filtering network for pan-sharpening. Inspired by the classical reverse filtering that reverses images to the status before filtering, we formulate pan-sharpening as an alternately iterative reverse filtering process, which fuses LR MS and HR MS in an interpretable manner. Different from existing model-driven methods that require well-designed priors and degradation assumptions, the reverse filtering process avoids the dependency on pre-defined exact priors. To guarantee the stability and convergence of the iterative process via contraction mapping on a metric space, we develop the learnable multi-scale Gaussian kernel module, instead of using specific filters. We demonstrate the theoretical feasibility of such formulations. Extensive experiments on diverse scenes thoroughly verify the performance of our method, which significantly outperforms the state of the art. | Accept | The paper presents a pan sharpening image fusion approach using deep learning. The overall review sentiment leaned towards accepting the paper. The reviewers appreciated the reformulation of the problem as an iterative reverse filtering process and thought the technique was generalizable, broadening its potential impact. Some concern was mentioned that the paper directly adapts existing techniques, and the theory is directly pulled from those papers. In that sense the paper is applied. The experimental results were convincing to the reviewers in general, which helps justify the mostly applied nature of the paper. | train | [
"kfAKFgwsxp7",
"VipxkskhMt7",
"XDdH1dKm8GX",
"RqA1yQ1duaH",
"092RU8AosP",
"cbl6gaPu18",
"yNtiSijn2Fr",
"9icFk68Lfdh"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their response. Some questions are addressed properly, including the efficiency evaluation and the comparison between initialized kernels and trained kernels, which improve the completeness. \n\nHowever, the claimed main contribution that solving the multiple image fusion pro... | [
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"VipxkskhMt7",
"9icFk68Lfdh",
"yNtiSijn2Fr",
"yNtiSijn2Fr",
"cbl6gaPu18",
"nips_2022_hH9ohGbhyv",
"nips_2022_hH9ohGbhyv",
"nips_2022_hH9ohGbhyv"
] |
nips_2022_ZChgD8OoGds | Joint Entropy Search for Multi-Objective Bayesian Optimization | Many real-world problems can be phrased as a multi-objective optimization problem, where the goal is to identify the best set of compromises between the competing objectives. Multi-objective Bayesian optimization (BO) is a sample efficient strategy that can be deployed to solve these vector-valued optimization problems where access is limited to a number of noisy objective function evaluations. In this paper, we propose a novel information-theoretic acquisition function for BO called Joint Entropy Search (JES), which considers the joint information gain for the optimal set of inputs and outputs. We present several analytical approximations to the JES acquisition function and also introduce an extension to the batch setting. We showcase the effectiveness of this new approach on a range of synthetic and real-world problems in terms of the hypervolume and its weighted variants. | Accept | The authors propose an entropy search method for multi-objective Bayesian optimization that considers the mutual information gain of the location and value of the optimizer simultaneously while selecting query points. Most reviewers found the approach to be interesting. The work is commendable in its attempt to rigorously compare different information-based acquisition functions via high-quality re-implementations of algorithms (e.g., PESMO). Many reviewers left detailed comments that the authors addressed in part, or agreed to examine in the camera ready. Examples include more rigorous examination of the performance with respect to noise and batch size, inferential statistics for the experiments, and transparent reporting on the strengths and weaknesses relative to the existing literature. | val | [
"b9XM4mNTjP1",
"i2wcOzaE5IL",
"Zrx-TsL8PL1",
"O1dYLZVUEOC",
"gtP4QqVQZuh",
"6cmx0ukY_rM",
"5jE4Hgt8dZT",
"Br-T4-9FgGU",
"i1LujlxnxhI",
"7B_mEA__h7L",
"mHIQkinxLeT",
"9K24YaJZe0u",
"gqZk_jLYlzh",
"6IXpA48XML5",
"UvCMz6rsSMg",
"IvlOaJMq0WH",
"5tTteeERxQN",
"ytB6xUGp7f6O",
"nlwUP-QF... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" My recommendations to the authors are:\n1) Make clear up front about the sources of error in the batch acquisition function: the potential of error from the lower bound (this error relative to true batch acquisition function seems possibly quite large), MC error and submodularity error (the latter two really only... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
3,
4,
5
] | [
"009daQaPBk",
"7B_mEA__h7L",
"5jE4Hgt8dZT",
"gtP4QqVQZuh",
"Br-T4-9FgGU",
"UvCMz6rsSMg",
"mHIQkinxLeT",
"gqZk_jLYlzh",
"IvlOaJMq0WH",
"nlwUP-QFiSv",
"9K24YaJZe0u",
"4x35Ky8_7eI",
"6IXpA48XML5",
"009daQaPBk",
"2ggTWRWEulj",
"5jCPk6pD-3n",
"QzV_YTyqiHV",
"nips_2022_ZChgD8OoGds",
"n... |
nips_2022_b9APFSTylGT | Prompt Learning with Optimal Transport for Vision-Language Models | With the increasing attention to large vision-language models such as CLIP, there has been a significant amount of effort dedicated to building efficient prompts. Unlike conventional methods of only learning one single prompt, we propose to learn multiple comprehensive prompts to describe diverse characteristics of categories such as intrinsic attributes or extrinsic contexts. However, directly matching each prompt to the same visual feature is problematic, as it pushes the prompts to converge to one point. To solve this problem, we propose to apply optimal transport to match the vision and text modalities. Specifically, we first model images and the categories with visual and textual feature sets. Then, we apply a two-stage optimization strategy to learn the prompts. In the inner loop, we optimize the optimal transport distance to align visual features and prompts by the Sinkhorn algorithm, while in the outer loop, we learn the prompts by this distance from the supervised data. Extensive experiments are conducted on the few-shot recognition task and the significant improvement demonstrates the superiority of our method. | Reject | This paper presents a novel perspective of prompt tuning for few-shot visual recognition: a dynamic matching algorithm between the prompt candidate and the visual features. Compared to the existing CoOp and CoCoOp algorithm, the proposed "Optimal Transportation" idea definitely sounds better and indeed achieves better performance. All the reviewers acknowledge the merits of the paper.
Though AC also acknowledges the merits of the paper, there are two unaddressed demerits:
1) As the proposed method is essentially an ensemble method, comparisons with prompt ensemble should be conducted. Unfortunately, the reported "G" in Table 2 and Line 254-265 is not a proper ensemble, because the "G" baseline may degenerate into many duplicate single CoOp models with different random seeds. AC conjectures that this is the reason why G's performance is only slightly different from CoOp. By "proper ensemble", the authors may want to try initializations such as "this is a photo" + "a picture of" + "there is an xx of", etc., or augmenting each image with random crops, each of which corresponds to a "G", and then ensembling.
2) This paper lacks an important "Base to New" setting as proposed by CoCoOp. As the proposed PLOT in this paper has significantly more tunable parameters, AC suspects that it may overfit to the training classes while ruining (or forgetting) other classes, which used to have good zero-shot performance without training.
Unfortunately, AC regrets to recommend reject and wishes the best of luck in re-submitting the paper to other venues. | train | [
"fdRTzTh5IYfn",
"QmtUsr1cx2ZK",
"o9KlftyyJUp",
"9G4heUKrxa9",
"2qDQI0589s7",
"FhIl8macL16",
"P4aQgeusq3V",
"pk-p9nwMETe",
"GbKOG8TMbYp",
"-LeYm67kwv"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot for checking our response and the updated paper. We are glad that our response and updated presentation have resolved your concerns regarding the motivation, contribution, and clarity of the paper. Your valuable comments have improved our presentation a lot and made the paper more readable. Thank you... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5
] | [
"QmtUsr1cx2ZK",
"P4aQgeusq3V",
"-LeYm67kwv",
"nips_2022_b9APFSTylGT",
"GbKOG8TMbYp",
"pk-p9nwMETe",
"pk-p9nwMETe",
"nips_2022_b9APFSTylGT",
"nips_2022_b9APFSTylGT",
"nips_2022_b9APFSTylGT"
] |
nips_2022_G1vrYk9uX-_ | Mining Unseen Classes via Regional Objectness: A Simple Baseline for Incremental Segmentation | Incremental or continual learning has been extensively studied for image classification tasks to alleviate catastrophic forgetting, a phenomenon in which earlier learned knowledge is forgotten when learning new concepts. For class incremental semantic segmentation, such a phenomenon often becomes much worse due to the semantic shift of the background class, \ie, some concepts learned at previous stages are assigned to the background class at the current training stage, therefore, significantly reducing the performance of these old concepts. To address this issue, we propose a simple yet effective method in this paper, named Mining unseen Classes via Regional Objectness (MicroSeg). Our MicroSeg is based on the assumption that \emph{background regions with strong objectness possibly belong to those concepts in the historical or future stages}. Therefore, to avoid forgetting old knowledge at the current training stage, our MicroSeg first splits the given image into hundreds of segment proposals with a proposal generator. Those segment proposals with strong objectness from the background are then clustered and assigned newly defined labels during the optimization. In this way, the distribution characteristics of old concepts in the feature space could be better perceived, relieving the catastrophic forgetting caused by the semantic shift of the background class accordingly. We conduct extensive experiments on Pascal VOC and ADE20K, and competitive results well demonstrate the effectiveness of our MicroSeg. Code is available at \href{https://github.com/zkzhang98/MicroSeg}{\textcolor{orange}{\texttt{https://github.com/zkzhang98/MicroSeg}}}. | Accept | Most of the reviewers pointed out that the motivation of the method is clear, and the method is novel and interesting. The proposed method is also effective on multiple benchmarks.
One of the reviewers has concerns about the choice of a parameter (K), and another reviewer has concerns about details of the method. AC admits that these points need to be further improved, but thinks these points can be clarified in the camera ready version. Thus, considering the novelty and efficacy of the method, AC recommends acceptance of this paper, but suggests that the authors carefully take the reviewers' comments into account when preparing the final version. | train | [
"rQgzh4B9ui",
"9pkawbgfbCV",
"iyab_oHosrW",
"0AJNwpRfZbo",
"HgbtnaAIJFy",
"Ob6gQFg2MK",
"AmxPEWv_t80u",
"pJT1XxZ220m",
"o8XW_AWYMIy",
"KUfDHslwyRP",
"vuZtZZx9iuw",
"0CAc7e-R8Su",
"Cf3zn1xk6br",
"5S1y4w8nq8D",
"6eYjbLNtUW",
"21KaTFRC6KD",
"9fajU1YZK6A",
"IC0WdkM92H_",
"BpKJv8eLxs-... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thanks for your further response and acknowledging our efforts. Intuitively, $K$ for ADE20K should be larger than that of VOC, since one image is often with more classes in ADE20K. However, it is worth noting that the number of future classes in ADE20K is of the same order of magnitude as in VOC in a single image... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4,
4
] | [
"9pkawbgfbCV",
"iyab_oHosrW",
"HgbtnaAIJFy",
"0CAc7e-R8Su",
"vuZtZZx9iuw",
"nips_2022_G1vrYk9uX-_",
"nips_2022_G1vrYk9uX-_",
"o8XW_AWYMIy",
"KUfDHslwyRP",
"5S1y4w8nq8D",
"bloyOSjfIy",
"BpKJv8eLxs-",
"IC0WdkM92H_",
"9fajU1YZK6A",
"21KaTFRC6KD",
"nips_2022_G1vrYk9uX-_",
"nips_2022_G1vr... |
nips_2022_zfo2LqFEVY | Multi-modal Grouping Network for Weakly-Supervised Audio-Visual Video Parsing | The audio-visual video parsing task aims to parse a video into modality- and category-aware temporal segments. Previous work mainly focuses on weakly-supervised approaches, which learn from video-level event labels. During training, they do not know which modality perceives and meanwhile which temporal segment contains the video event. Since there is no explicit grouping in the existing frameworks, the modality and temporal uncertainties make these methods suffer from false predictions. For instance, segments in the same category could be predicted in different event classes. Learning compact and discriminative multi-modal subspaces is essential for mitigating the issue. To this end, in this paper, we propose a novel Multi-modal Grouping Network, namely MGN, for explicitly semantic-aware grouping. Specifically, MGN aggregates event-aware unimodal features through unimodal grouping in terms of learnable categorical embedding tokens. Furthermore, it leverages the cross-modal grouping for modality-aware prediction to match the video-level target. Our simple framework achieves improving results against previous baselines on weakly-supervised audio-visual video parsing. In addition, our MGN is much more lightweight, using only 47.2% of the parameters of baselines (17 MB vs. 36 MB). Code is available at https://github.com/stoneMo/MGN. | Accept | The authors propose an approach for weakly supervised audio-visual parsing of videos. They propose using learnable categorical embedding to do class-aware unimodal grouping, combined with cross-modal grouping to time-stamp audio, visual and audio-visual events using only video level labels.
Based on the feedback provided by the reviewers, especially since Reviewer KNZ9 increased their score to Borderline Accept after the rebuttal period, we recommend this paper for publication at NeurIPS 2022.
The reviewers had some concerns about the paper. Reviewer KNZ9 mentioned that the relations of the listed papers to the proposed model were not well explained -- they also had some concerns about the experimental results, and the fact that only one dataset was used in the evaluation. Reviewer kAVj had questions about model performance with event scaling, and time resolution lower bounds. Reviewer f8HF commented on the difficulty in following the notation in the paper, and pointed out the results on audio events is not improved.
We thank the authors for addressing the comments of the reviewers in their review during the author feedback period. The authors seem to have addressed some of the concerns/feedback from the reviewers with detailed discussions -- it would be good to include these discussions, as much as possible, in the updated paper or supplemental materials. | val | [
"9jLNNFAoyOi",
"6S6TyTKN74r",
"mw1pMJDfTbG",
"xBNx6nOPkuY",
"ct6tRWNXfj1",
"LNfHalPt0y9",
"IhT0cGJ_okD",
"lZByZKH5iqP"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Q6**\n*In Equation (10), it is not clear why learned weights are needed to transform both the class tokens and the modality specific features. Is it not equivalent to just transform the features? That is $Ax\\cdot By = x^TA^TBy = B^TAx\\cdot y = Wx\\cdot y$, where $W=B^TA$.*\n\nYes, they are theoretically equiv... | [
-1,
-1,
-1,
-1,
-1,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"6S6TyTKN74r",
"lZByZKH5iqP",
"IhT0cGJ_okD",
"LNfHalPt0y9",
"nips_2022_zfo2LqFEVY",
"nips_2022_zfo2LqFEVY",
"nips_2022_zfo2LqFEVY",
"nips_2022_zfo2LqFEVY"
] |
nips_2022_MG3YN3z1J4M | Unveiling The Mask of Position-Information Pattern Through the Mist of Image Features | Recent studies show that paddings in convolutional neural networks encode absolute position information which can negatively affect the model performance for certain tasks. However, existing metrics for quantifying the strength of positional information remain unreliable and frequently lead to erroneous results. To address this issue, we propose novel metrics for measuring (and visualizing) the encoded positional information. We formally define the encoded information as PPP (Position-information Pattern from Padding) and conduct a series of experiments to study its properties as well as its formation. The proposed metrics measure the presence of positional information more reliably than the existing metrics based on PosENet and a test in F-Conv. We also demonstrate that for any extant (and proposed) padding schemes, PPP is primarily a learning artifact and is less dependent on the characteristics of the underlying padding schemes. | Reject | The three reviewers all leaned towards rejection for this paper. One reviewer was concerned with the relatively small number of images used in the experiment and how valid the conclusions can be from that for PPP as a better metric. Another confusion was over how optimality in padding can be defined. This was important because the MAE and SNR measures were based off of this. | val | [
"tDD6FC8249",
"3-K8RnSHNX",
"PnCPScbqvZ",
"a73dv8IIqFm",
"ZGdTXTs7Yxl",
"REekGxlmnge",
"GIj57bviwux",
"3YHsS2z31C8T",
"V-W9UZRuazp",
"vTN8dcV8Nm",
"DJ7YBXUfiqR"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for responding to our rebuttal, we would like to further discuss some of the details as follows.\n\n1. **`The definition of optimal padding`**\n - To clarify, Eq (1) and Eq (2) are the definitions of paddings, following how images are captured from the physical world. It is not clear to us why the defin... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
3,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
3,
4
] | [
"3-K8RnSHNX",
"3YHsS2z31C8T",
"V-W9UZRuazp",
"3YHsS2z31C8T",
"GIj57bviwux",
"GIj57bviwux",
"nips_2022_MG3YN3z1J4M",
"DJ7YBXUfiqR",
"vTN8dcV8Nm",
"nips_2022_MG3YN3z1J4M",
"nips_2022_MG3YN3z1J4M"
] |
nips_2022_oW4Zz0zlbFF | Understanding Benign Overfitting in Gradient-Based Meta Learning | Meta learning has demonstrated tremendous success in few-shot learning with limited supervised data. In those settings, the meta model is usually overparameterized. While the conventional statistical learning theory suggests that overparameterized models tend to overfit, empirical evidence reveals that overparameterized meta learning methods still work well -- a phenomenon often called ``benign overfitting.'' To understand this phenomenon, we focus on the meta learning settings with a challenging bilevel structure that we term the gradient-based meta learning, and analyze its generalization performance under an overparameterized meta linear regression model. While our analysis uses the relatively tractable linear models, our theory contributes to understanding the delicate interplay among data heterogeneity, model adaptation and benign overfitting in gradient-based meta learning tasks. We corroborate our theoretical claims through numerical simulations. | Accept | This paper explores the generalization of minimum norm optima for various meta-learning objectives, including basic ERM, model-agnostic meta-learning (MAML) and implicit MAML (iMAML). The generative model considered is "mixed linear regression", in which each of tasks follows a linear + Gaussian noise data model (a different direction per task). The main conceptual takeaway is to tease out how the task heterogeneity affects the generalization bounds (through a "cross variance" quantity), and to compare how much "overfitting" happens for the different objectives. The proof techniques largely follow [Bartlett et al. PNAS 2020] --- due to the linearity/Gaussianity, it boils down to a series of concentration bounds. Since data coming from different tasks has a different SVD, this makes the proofs non-trivial to extend. | val | [
"CZ2C-yBniTI",
"V9Gb5hi6Jt",
"EQmP0aJoFzA",
"KkVjMl-CXmT",
"yZpFeAgvjo",
"e13RZp4XWBs",
"decxuukYx16",
"FQWnocYFKU",
"vNAv6jjZVN7",
"GColPEnaGjq",
"IbhBUT8m8F1",
"QDx8QlCs77A"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I acknowledge that I have read the authors’ responses and thank them for positively addressing my comments.",
" I have read the response, and the authors have addressed my concerns.",
" **Q1. Add empirical results of various meta learning methods at first to show whether benign overfitting phenomenon will occ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
4,
3,
4
] | [
"KkVjMl-CXmT",
"decxuukYx16",
"GColPEnaGjq",
"vNAv6jjZVN7",
"QDx8QlCs77A",
"IbhBUT8m8F1",
"GColPEnaGjq",
"nips_2022_oW4Zz0zlbFF",
"nips_2022_oW4Zz0zlbFF",
"nips_2022_oW4Zz0zlbFF",
"nips_2022_oW4Zz0zlbFF",
"nips_2022_oW4Zz0zlbFF"
] |
nips_2022_nJWcpq2fco3 | Representing Spatial Trajectories as Distributions | We introduce a representation learning framework for spatial trajectories. We represent partial observations of trajectories as probability distributions in a learned latent space, which characterize the uncertainty about unobserved parts of the trajectory. Our framework allows us to obtain samples from a trajectory for any continuous point in time—both interpolating and extrapolating. Our flexible approach supports directly modifying specific attributes of a trajectory, such as its pace, as well as combining different partial observations into single representations. Experiments show our method's superiority over baselines in prediction tasks. | Accept | This paper presents a new method for learning spatial partial trajectories. The trajectories are embedded as probability distributions in a learned latent space. The proposed framework is shown to interpolate and extrapolate partially observed trajectories. Experiments on three real datasets show that the proposed method outperforms existing state of the art methods.
The reviewers find the paper well-written and the proposed method novel, technically strong, and interesting. The reviewers raised a few issues regarding the lack of additional ablation studies and the fact that the method is evaluated only on the single task of the joint-space trajectory domain. These issues were not considered crucial by the reviewers. | train | [
"wBp6y53AknGa",
"AqtPMtNRHEr",
"tojIyrIBClP",
"d0bFDXJgfNg",
"v9J6Sjd-2S",
"Hi6KerbCJKf",
"lst4fjrKHW",
"fJY4oGgJItC",
"E3yH2coeRxX",
"2HmoHzSiUpn"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors' replies resolve my concerns about the method presentations and ablation studies. I would like to raise my ranking from 3 to 6.",
" We appreciate the reviewer’s interest in the paper, as well as their raised suggestions and comments. Next we answer the questions they raised.\n\n**Exploring the multi... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
3
] | [
"E3yH2coeRxX",
"2HmoHzSiUpn",
"E3yH2coeRxX",
"E3yH2coeRxX",
"fJY4oGgJItC",
"lst4fjrKHW",
"nips_2022_nJWcpq2fco3",
"nips_2022_nJWcpq2fco3",
"nips_2022_nJWcpq2fco3",
"nips_2022_nJWcpq2fco3"
] |
nips_2022_Sj2z__i1wX- | Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization | The stochastic gradient Langevin Dynamics is one of the most fundamental algorithms to solve sampling problems and non-convex optimization appearing in several machine learning applications. Especially, its variance reduced versions have nowadays gained particular attention. In this paper, we study two variants of this kind, namely, the Stochastic Variance Reduced Gradient Langevin Dynamics and the Stochastic Recursive Gradient Langevin Dynamics. We prove their convergence to the objective distribution in terms of KL-divergence under the sole assumptions of smoothness and Log-Sobolev inequality which are weaker conditions than those used in prior works for these algorithms. With the batch size and the inner loop length set to $\sqrt{n}$, the gradient complexity to achieve an $\epsilon$-precision is $\tilde{O}((n+dn^{1/2}\epsilon^{-1})\gamma^2 L^2\alpha^{-2})$, which is an improvement from any previous analyses. We also show some essential applications of our result to non-convex optimization. | Accept | This paper proposes an improved convergence rate for stochastic gradient Langevin dynamics with variance reduction under smoothness and Log-Sobolev inequality assumptions, which improves a long line of prior works. After author response and reviewer discussion, the paper receives unanimous support from the reviewers. Thus, I recommend acceptance. | train | [
"KzLUosRug_x",
"1HvdJ9S92q",
"qXZ_pNXFvd_",
"dvhTHbXSky",
"hz1TTFxIJkj",
"DMGan2Bvr0N",
"yTJ0gpD46Hd",
"HG90utBhjnt",
"y1-Ykus1v2s"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your clarifications. I think they have addressed most of my concerns. I will rise my score to 6.",
" The paper deserves to be published. I raise my score to 7.\n\n\nminor remark: The hyperlink (Section 4) in the discussion section does not work.\n",
" Thank you for these clarifications... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"hz1TTFxIJkj",
"DMGan2Bvr0N",
"dvhTHbXSky",
"y1-Ykus1v2s",
"HG90utBhjnt",
"yTJ0gpD46Hd",
"nips_2022_Sj2z__i1wX-",
"nips_2022_Sj2z__i1wX-",
"nips_2022_Sj2z__i1wX-"
] |
nips_2022_DgM7-7eMkq0 | Decoupling Features in Hierarchical Propagation for Video Object Segmentation | This paper focuses on developing a more effective method of hierarchical propagation for semi-supervised Video Object Segmentation (VOS). Based on vision transformers, the recently-developed Associating Objects with Transformers (AOT) approach introduces hierarchical propagation into VOS and has shown promising results. The hierarchical propagation can gradually propagate information from past frames to the current frame and transfer the current frame feature from object-agnostic to object-specific. However, the increase of object-specific information will inevitably lead to the loss of object-agnostic visual information in deep propagation layers. To solve such a problem and further facilitate the learning of visual embeddings, this paper proposes a Decoupling Features in Hierarchical Propagation (DeAOT) approach. Firstly, DeAOT decouples the hierarchical propagation of object-agnostic and object-specific embeddings by handling them in two independent branches. Secondly, to compensate for the additional computation from dual-branch propagation, we propose an efficient module for constructing hierarchical propagation, i.e., Gated Propagation Module, which is carefully designed with single-head attention. Extensive experiments show that DeAOT significantly outperforms AOT in both accuracy and efficiency. On YouTube-VOS, DeAOT can achieve 86.0% at 22.4fps and 82.0% at 53.4fps. Without test-time augmentations, we achieve new state-of-the-art performance on four benchmarks, i.e., YouTube-VOS (86.2%), DAVIS 2017 (86.2%), DAVIS 2016 (92.9%), and VOT 2020 (0.622 EAO). Project page: https://github.com/z-x-yang/AOT. | Accept | The paper obtains three accept and one borderline reject recommendations. 
Yet all reviewers pointed out that the paper has novelty and originality in the domain of video object segmentation, and that the method works quite well on the tested datasets. The reviewer recommending rejection did not comment in the post-rebuttal phase; the AC has checked the authors' response, and most of that reviewer's concerns have been addressed - though the theoretical analysis can be future work, the feature visualization can still be done in the camera-ready version. The authors also need to prepare the camera-ready version carefully, since all the reviewers gave valuable comments, which are helpful for the paper. | train | [
"0nr2UZoFb6E",
"8uvWd-0AkYJ",
"uiLK9IKLnHW",
"iXnM3Pd1qwG",
"3jHyJRau8Bz",
"FU0wsiIjUw",
"Q5Tlme3AH8JR",
"mugDV_wWdTB",
"tMXlL28L07",
"QWJbnFtClD",
"5YdHLQCVp-n",
"4oeoAga8gpM",
"4l-Jn0ip_Si",
"tJs-Su1f-8X",
"c7yV904pnEC",
"Q3WfJKjMuHw"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 43Nu,\n\nThanks for your valuable response again. Please forgive us if you felt offended. We didn't mean to offend you or anyone. \n\nWe were trying to clear up your misunderstanding, and we bolded that sentence so that you could more clearly see our motivation for designing GPM. \n\nWe are glad we ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"8uvWd-0AkYJ",
"3jHyJRau8Bz",
"4oeoAga8gpM",
"Q5Tlme3AH8JR",
"FU0wsiIjUw",
"QWJbnFtClD",
"5YdHLQCVp-n",
"nips_2022_DgM7-7eMkq0",
"c7yV904pnEC",
"Q3WfJKjMuHw",
"tJs-Su1f-8X",
"4l-Jn0ip_Si",
"nips_2022_DgM7-7eMkq0",
"nips_2022_DgM7-7eMkq0",
"nips_2022_DgM7-7eMkq0",
"nips_2022_DgM7-7eMkq0... |
nips_2022_ZYKWi6Ylfg | Harmonizing the object recognition strategies of deep neural networks with humans | The many successes of deep neural networks (DNNs) over the past decade have largely been driven by computational scale rather than insights from biological intelligence. Here, we explore if these trends have also carried concomitant improvements in explaining the visual strategies humans rely on for object recognition. We do this by comparing two related but distinct properties of visual strategies in humans and DNNs: where they believe important visual features are in images and how they use those features to categorize objects. Across 84 different DNNs trained on ImageNet and three independent datasets measuring the where and the how of human visual strategies for object recognition on those images, we find a systematic trade-off between DNN categorization accuracy and alignment with human visual strategies for object recognition. State-of-the-art DNNs are progressively becoming less aligned with humans as their accuracy improves. We rectify this growing issue with our neural harmonizer: a general-purpose training routine that both aligns DNN and human visual strategies and improves categorization accuracy. Our work represents the first demonstration that the scaling laws that are guiding the design of DNNs today have also produced worse models of human vision. We release our code and data at https://serre-lab.github.io/Harmonization to help the field build more human-like DNNs.
| Accept | The reviewers have brought up important concerns around the framing of the paper contributions, the presentation and application of the neural harmonizer method, and the use of saliency maps. The authors have addressed some of these concerns in their rebuttal.
However, I think the main contribution of this paper is the evaluation study itself, which examines 85 different neural network architectures and compares the important features for those architectures (via saliency maps) to the features humans consider important, for varying levels of performance.
This topic of human vision and computer vision inductive biases has been studied through e.g. comparisons of shape vs texture bias, but I think the study presented in this paper (Spearman correlation between feature importance maps) is an interesting addition to work in this area and has scope to be built on by the community, helped by the code release.
I do think the authors could have explored additional empirical studies around this main point, e.g. comparing saliency approaches to self-attention for vision transformers, or looking at the effect of architecture scale on salient features and comparison to human salient features. This to me is the main gap of the paper.
The paper is thus borderline, with an interesting main study contribution, with scope to be built on and a code release, but lacking additional related empirical experiments. | train | [
"7EbWx26R1dp",
"Bv97-AJwM_w",
"IHy6kYVR72v",
"fYvSShGl5mW",
"zp7XLw964V-",
"xUXtHOzTjV",
"cx6ZxTW7CNd",
"eq40OcpSCKw",
"82VS2tjNuPh",
"3vgCF708j7",
"R1973UQFvgc",
"06-NbhnCi5z",
"uRQ19eHwa7-",
"jSIEIZkAEOq",
"7lM5hTjm2d0",
"Ks1jBZ2ONQ4",
"sj10B557NrL"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their response and second look at the paper.\n\nWe would like to respond to your comment on the quoted claim.\n\nBrain Score describes the performance of models trained to *predict neural activity recorded in primates, not humans*. In our work (and the quote) we are solely comparing DNNs... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"Bv97-AJwM_w",
"06-NbhnCi5z",
"uRQ19eHwa7-",
"zp7XLw964V-",
"cx6ZxTW7CNd",
"sj10B557NrL",
"Ks1jBZ2ONQ4",
"Ks1jBZ2ONQ4",
"jSIEIZkAEOq",
"7lM5hTjm2d0",
"7lM5hTjm2d0",
"jSIEIZkAEOq",
"nips_2022_ZYKWi6Ylfg",
"nips_2022_ZYKWi6Ylfg",
"nips_2022_ZYKWi6Ylfg",
"nips_2022_ZYKWi6Ylfg",
"nips_20... |
nips_2022_JoZyVgp1hm | Bi-directional Weakly Supervised Knowledge Distillation for Whole Slide Image Classification | Computer-aided pathology diagnosis based on the classification of Whole Slide Image (WSI) plays an important role in clinical practice, and it is often formulated as a weakly-supervised Multiple Instance Learning (MIL) problem. Existing methods solve this problem from either a bag classification or an instance classification perspective. In this paper, we propose an end-to-end weakly supervised knowledge distillation framework (WENO) for WSI classification, which integrates a bag classifier and an instance classifier in a knowledge distillation framework to mutually improve the performance of both classifiers. Specifically, an attention-based bag classifier is used as the teacher network, which is trained with weak bag labels, and an instance classifier is used as the student network, which is trained using the normalized attention scores obtained from the teacher network as soft pseudo labels for the instances in positive bags. An instance feature extractor is shared between the teacher and the student to further enhance the knowledge exchange between them. In addition, we propose a hard positive instance mining strategy based on the output of the student network to force the teacher network to keep mining hard positive instances. WENO is a plug-and-play framework that can be easily applied to any existing attention-based bag classification methods. Extensive experiments on five datasets demonstrate the efficiency of WENO. Code is available at https://github.com/miccaiif/WENO. | Accept | This submission was reviewed by three reviewers. All three reviewers provided detailed comments during the review period. The authors provided detailed responses to the initial set of reviews. The rebuttals lead to improved scores of some reviewers while other reviewers confirmed that their concerns have been addressed. 
Given the above evaluations and interactions, an accept is recommended. | test | [
"TklI7RZvFh1",
"DlZM2gIRrF5",
"KxI25pvLPsZ",
"p52BoPpNkr",
"3YX9MZWuva3",
"WWsekjFUV9x",
"-kZbJy-ocq",
"hD_IeBN5PYA",
"-HfQZj4sDad",
"8yB6yA38mhA",
"xJDznhIyL_",
"Q0f9WoCyQcR",
"rtSO4vbbyxc",
"DcxZ3g3FBiW",
"ZC6iayMFdq5"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the author's rebuttal and response to the questions. This helps confirm my good score.",
" I thank the authors for their detailed response. My rating for that paper has been increased to a score of 6 (weak accept). ",
" Thank you very much for your valuable comments, which are very helpful to f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"ZC6iayMFdq5",
"DcxZ3g3FBiW",
"ZC6iayMFdq5",
"DcxZ3g3FBiW",
"DcxZ3g3FBiW",
"DcxZ3g3FBiW",
"DcxZ3g3FBiW",
"DcxZ3g3FBiW",
"DcxZ3g3FBiW",
"rtSO4vbbyxc",
"rtSO4vbbyxc",
"rtSO4vbbyxc",
"nips_2022_JoZyVgp1hm",
"nips_2022_JoZyVgp1hm",
"nips_2022_JoZyVgp1hm"
] |
nips_2022_-deKNiSOXLG | RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection | The task of out-of-distribution (OOD) detection is crucial for deploying machine learning models in real-world settings. In this paper, we observe that the singular value distributions of the in-distribution (ID) and OOD features are quite different: the OOD feature matrix tends to have a larger dominant singular value than the ID feature, and the class predictions of OOD samples are largely determined by it. This observation motivates us to propose RankFeat, a simple yet effective post hoc approach for OOD detection by removing the rank-1 matrix composed of the largest singular value and the associated singular vectors from the high-level feature. RankFeat achieves state-of-the-art performance and reduces the average false positive rate (FPR95) by 17.90% compared with the previous best method. Extensive ablation studies and comprehensive theoretical analyses are presented to support the empirical results. | Accept | Thanks for your submission to NeurIPS.
This paper generated quite a bit of discussion, with several reviewers having lengthy discussions with the authors on various points in the paper. At the end of the day, it seems that three of the four reviewers are mostly happy with the paper (with scores of 6, 6, 5, though the 5 reviewer indicated that they would raise their score to 6 but never did, so I'm assuming this is 6, 6, 6). One of the reviewers was more negative, giving a 3.
The biggest issue of the negative reviewer revolves around the experimental setup, and in particular the comparison of post-hoc and non-post-hoc methods (or lack thereof) in the experimental results. Though I'm not sure the reviewer was ever fully satisfied with the resulting experiments, I do see the differences between the experimental methodologies and am satisfied that the experiments presented in the paper are sufficient and reasonable. Given that the other reviewers are happy with the paper (and I've also read the paper and am OK with it), I will recommend accepting the paper.
When preparing a final version of the manuscript, please do keep in mind the many comments of the reviewers, and try to address these as much as possible. | train | [
"g9G-IPHsTFF",
"6Pd6oIOxq-",
"x7_2T8TXlok",
"x_vmhjBRqV",
"zuzO1L_m7wUb",
"l2-87TFn_8e",
"YnAItZpbkwo",
"xSGZwbU3-5P",
"iHXjiv6FtAZ",
"GBrDV2W8Z7I",
"0Tqg1P8cRs-",
"ROnG7qN0ULW",
"2Rf56lZiEO0",
"0NHxqESH6V",
"4w2qIgZpLj",
"iuRXfVwv_Zc",
"BU5XvfmQQX2",
"vZrgtsuh1a5",
"_h5Xbwpc4YF"... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_re... | [
" Below we update the result of [1,2] of fine-tuning for $15$ epochs.\n___\n\n|Method|training-free? |\tFPR95 ($\\downarrow$)\t| AUORC ($\\uparrow$)|\n|:-:|:-:|:-:|:-:|\n| [1] |\tx |\t56.36|\t86.91|\n| [2] |\tx\t| 52.78\t| 87.83|\n|RankFeat |✔\t|**36.80** | **92.15** |\n\nGiven the huge time cost of fine-tuning for... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"x7_2T8TXlok",
"x7_2T8TXlok",
"x_vmhjBRqV",
"zuzO1L_m7wUb",
"l2-87TFn_8e",
"xSGZwbU3-5P",
"xSGZwbU3-5P",
"iHXjiv6FtAZ",
"GBrDV2W8Z7I",
"FLOg7pb4sNi",
"ROnG7qN0ULW",
"2Rf56lZiEO0",
"0NHxqESH6V",
"S4oqMeOCT4W",
"iuRXfVwv_Zc",
"BU5XvfmQQX2",
"vZrgtsuh1a5",
"_h5Xbwpc4YF",
"dDL5xaHq2-... |
nips_2022_ikXoMuy_H4 | In the Eye of the Beholder: Robust Prediction with Causal User Modeling | Accurately predicting the relevance of items to users is crucial to the success of many social platforms. Conventional approaches train models on logged historical data; but recommendation systems, media services, and online marketplaces all exhibit a constant influx of new content---making relevancy a moving target, to which standard predictive models are not robust. In this paper, we propose a learning framework for relevance prediction that is robust to changes in the data distribution. Our key observation is that robustness can be obtained by accounting for \emph{how users causally perceive the environment}. We model users as boundedly-rational decision makers whose causal beliefs are encoded by a causal graph, and show how minimal information regarding the graph can be used to contend with distributional changes. Experiments in multiple settings demonstrate the effectiveness of our approach. | Accept | This paper studies user-item relevance prediction and proposes a novel learning framework that is robust to distributional shifts in observed user-item attributes. All the reviewers appreciated the significance of the problem, the novelty of the solution, and the thorough empirical evaluation. The reviewers were confused by the exposition in some places, and the authors comprehensively addressed the questions during the feedback phase. Please include the extensive clarifying discussions with the reviewers in the revised paper, which will likely be of interest to the community. | train | [
"qy01B6EbJ1V",
"tqWVUHJhWM",
"RlVXa6NpH-6X",
"n1zf1Kfr88",
"_uJ7vRdgg4-",
"NAuycHTh4zU",
"pL0iwZ6BM4O",
"srsd11UZNdU",
"VtMtcdQVmi0",
"xpFs33qT7gJ",
"U3H8haJHnHP",
"muwJMxw-2GN"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I really appreciate the time you've taken to engage with all my questions and comments. I found your answers helpful for my understanding. I also reviewed some of the changes you've made to the manuscript and I agree with you that your changes to Figure 3 really help. As such, I've revised my score upwards.",
"... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
4
] | [
"tqWVUHJhWM",
"RlVXa6NpH-6X",
"muwJMxw-2GN",
"muwJMxw-2GN",
"muwJMxw-2GN",
"muwJMxw-2GN",
"U3H8haJHnHP",
"xpFs33qT7gJ",
"nips_2022_ikXoMuy_H4",
"nips_2022_ikXoMuy_H4",
"nips_2022_ikXoMuy_H4",
"nips_2022_ikXoMuy_H4"
] |
nips_2022_2OdAggzzF3z | ResT V2: Simpler, Faster and Stronger | This paper proposes ResTv2, a simpler, faster, and stronger multi-scale vision Transformer for visual recognition. ResTv2 simplifies the EMSA structure in ResTv1 (i.e., eliminating the multi-head interaction part) and employs an upsample operation to reconstruct the lost medium- and high-frequency information caused by the downsampling operation. In addition, we explore different techniques for better applying ResTv2 backbones to downstream tasks. We find that although combining EMSAv2 and window attention can greatly reduce the theoretical matrix multiply FLOPs, it may significantly decrease the computation density, thus causing lower actual speed. We comprehensively validate ResTv2 on ImageNet classification, COCO detection, and ADE20K semantic segmentation. Experimental results show that the proposed ResTv2 can outperform the recently state-of-the-art backbones by a large margin, demonstrating the potential of ResTv2 as solid backbones. The code and models will be made publicly available at \url{https://github.com/wofmanaf/ResT}. | Accept | This paper introduced an improvement over ResT by addressing the issues introduced by downsampling operations in MSA. All reviewers have recognized the contribution of this paper and the impressive performance achieved by the proposed algorithm. In the rebuttal, the authors have well-fixed reviewers' major concerns and new results have been updated. | train | [
"65U49Rg-skb",
"LyJ9tM3v9T",
"H90ET4ijme_",
"8EyMGgOKRi",
"9UbIqPxb4rl",
"NP6bahx8IpS",
"h74mGOJDfIN",
"sNR5TsJ7W2d",
"ke830KfrIZz",
"TtOEfVfcggF",
"gmoLHX33KOy",
"PsMZvgXftyI",
"0X9_fcsILKT",
"9cSoH2JHrGH",
"2RsnTS5eUBr",
"s6dwonP-qQJ",
"x5Ths3AKScc",
"Je7SdHDvcSi"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We greatly appreciate your precious feedback on our research. Experiments of adding back features before downsampling have been included in the revision (Appendix D).",
" Hi authors,\n\nthanks for the quick response regarding the experiments results request. The table resolved my concern and I suggest add this ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
"LyJ9tM3v9T",
"9UbIqPxb4rl",
"8EyMGgOKRi",
"PsMZvgXftyI",
"NP6bahx8IpS",
"9cSoH2JHrGH",
"Je7SdHDvcSi",
"x5Ths3AKScc",
"s6dwonP-qQJ",
"nips_2022_2OdAggzzF3z",
"Je7SdHDvcSi",
"0X9_fcsILKT",
"x5Ths3AKScc",
"2RsnTS5eUBr",
"s6dwonP-qQJ",
"nips_2022_2OdAggzzF3z",
"nips_2022_2OdAggzzF3z",
... |
nips_2022_CCBJf9xJo2X | Dataset Inference for Self-Supervised Models | Self-supervised models are increasingly prevalent in machine learning (ML) since they reduce the need for expensively labeled data. Because of their versatility in downstream applications, they are increasingly used as a service exposed via public APIs. At the same time, these encoder models are particularly vulnerable to model stealing attacks due to the high dimensionality of vector representations they output. Yet, encoders remain undefended: existing mitigation strategies for stealing attacks focus on supervised learning. We introduce a new dataset inference defense, which uses the private training set of the victim encoder model to attribute its ownership in the event of stealing. The intuition is that the log-likelihood of an encoder's output representations is higher on the victim's training data than on test data if it is stolen from the victim, but not if it is independently trained. We compute this log-likelihood using density estimation models. As part of our evaluation, we also propose measuring the fidelity of stolen encoders and quantifying the effectiveness of the theft detection without involving downstream tasks; instead, we leverage mutual information and distance measurements. Our extensive empirical results in the vision domain demonstrate that dataset inference is a promising direction for defending self-supervised models against model stealing. | Accept | This paper joins an interesting area that tackles the use of models in the real world that are accessible publicly via APIs. In these cases, there may be adversaries that attempt to steal the model. This can be done by accessing information about the model from particular queries. One of the approaches used to tackle such scenarios is to develop defenses that can detect when such models are being stolen. Much of this area is focused on looking at supervised models; the authors introduce a similar approach for encoders.
In the encoder case, techniques that involve the decision boundary, suitable for supervised models, no longer apply. The authors provide some technical innovations to get around this issue. Essentially they look for evidence of memorization by using a metamodel.
Overall, the problem is well-motivated and the tools the authors develop are interesting; the results also appear convincing. The reviewers reached a near-consensus that the paper is worth accepting, and I agree. Nothing here is extremely groundbreaking, but it's a well-crafted approach to handle a case of an important problem that hasn't been addressed yet.
The authors generally responded to all of the questions the reviewers had, and I furthermore found the answers convincing. Ultimately I think it's worth accepting. | val | [
"jr60FKOwGVA",
"InyFbVFRmk_",
"NsZgZtoix_X",
"i_fUcMMJ88W",
"mFW8XR5rkwY",
"O6O1VZCOiK0",
"rjfwV1JMxnk",
"Md62xw4p3E",
"zOT-2mWo2BB",
"OSIEbY6x1t8",
"t8qCd-4y-jB",
"cQdxBRoF8Wt",
"x_lFXdfZO8D",
"GGfluge0qr1",
"u--QtGS6fj8",
"K64diYZ_nc",
"fjGiTIVQaUZ",
"9H7u7Ao6YHl",
"RZDpTqILqg2... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Thank you for the response. Unfortunately we did not update Section 3.3 according to the new experiments which is why the description there suggests that the whole private training dataset is used. In the new results above, $D_{P1}$ is \\# GMM, $D_{P2}$ is \\# train so that $D_P \\neq D_{P1} \\cup D_{P2}$. Simila... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"i_fUcMMJ88W",
"mFW8XR5rkwY",
"5CiMN0te7q7",
"HSd2tQ4dOfT",
"V3ybhIP9d-p",
"Md62xw4p3E",
"t8qCd-4y-jB",
"WNlitlmLZn",
"OSIEbY6x1t8",
"u--QtGS6fj8",
"cQdxBRoF8Wt",
"x_lFXdfZO8D",
"fv4Rt-Xf8-I",
"WNlitlmLZn",
"2gxLjDEV13C",
"V3ybhIP9d-p",
"nips_2022_CCBJf9xJo2X",
"t4udpnSokX1",
"za... |
nips_2022_RYZyj_wwgfa | Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks | We propose an algorithm that compresses the critical information of a large dataset into compact addressable memories. These memories can then be recalled to quickly re-train a neural network and recover the performance (instead of storing and re-training on the full original dataset). Building upon the dataset distillation framework, we make a key observation that a shared common representation allows for more efficient and effective distillation. Concretely, we learn a set of bases (aka ``memories'') which are shared between classes and combined through learned flexible addressing functions to generate a diverse set of training examples. This leads to several benefits: 1) the size of compressed data does not necessarily grow linearly with the number of classes; 2) an overall higher compression rate with more effective distillation is achieved; and 3) more generalized queries are allowed beyond recalling the original classes. We demonstrate state-of-the-art results on the dataset distillation task across five benchmarks, including up to 16.5% and 9.7% accuracy improvement when distilling CIFAR10 and CIFAR100 respectively. We then leverage our framework to perform continual learning, achieving state-of-the-art results on four benchmarks, with 23.2% accuracy improvement on MANY. | Accept | This paper proposes a new dataset distillation method that achieves SotA results on several benchmarks. Authors were very responsive to answer reviewers' questions, and made significant improvements to the manuscript, also adding additional results confirming the benefits of their approach. At the end of the discussion period, there is a clear consensus for acceptance, due to the fact that this approach is original, well motivated and achieves strong results.
I thus recommend accepting the paper, even though some concerns remain regarding the scalability of the algorithm (in terms of memory usage and running time). | train | [
"5-KPKKRU10L",
"BDi8itl42Z",
"AqOB92l_oiW",
"hyQkA4s68hQ",
"z4FcS-MgkDL",
"MTVEquc-lFp",
"5pooTuQXfCl",
"f6uTpc-N9z",
"UhI-_IA2rLl4",
"cc-8ItjQbvd",
"RfasM-yXnCF",
"d5RzeBawjY",
"zisilOKt83M"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers,\n\nThank you again for providing feedback and questions on our paper. We will incorporate the discussion so far into our paper, and welcome any additional comments!\n\nIn the meantime, we were actually able to update and verify our findings on the higher-resolution TinyImageNet 64x64, which was on... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"nips_2022_RYZyj_wwgfa",
"AqOB92l_oiW",
"RfasM-yXnCF",
"MTVEquc-lFp",
"MTVEquc-lFp",
"UhI-_IA2rLl4",
"zisilOKt83M",
"d5RzeBawjY",
"cc-8ItjQbvd",
"RfasM-yXnCF",
"nips_2022_RYZyj_wwgfa",
"nips_2022_RYZyj_wwgfa",
"nips_2022_RYZyj_wwgfa"
] |
nips_2022_QK38rpF8RWL | GenSDF: Two-Stage Learning of Generalizable Signed Distance Functions | We investigate the generalization capabilities of neural signed distance functions (SDFs) for learning 3D object representations for unseen and unlabeled point clouds. Existing methods can fit SDFs to a handful of object classes and boast fine detail or fast inference speeds, but do not generalize well to unseen shapes. We introduce a two-stage semi-supervised meta-learning approach that transfers shape priors from labeled to unlabeled data to reconstruct unseen object categories. The first stage uses an episodic training scheme to simulate training on unlabeled data and meta-learns initial shape priors. The second stage then introduces unlabeled data with disjoint classes in a semi-supervised scheme to diversify these priors and achieve generalization. We assess our method on both synthetic data and real collected point clouds. Experimental results and analysis validate that our approach outperforms existing neural SDF methods and is capable of robust zero-shot inference on 100+ unseen classes. Code can be found at https://github.com/princeton-computational-imaging/gensdf | Accept | This paper studies the generalization ability of neural signed distance functions by proposing a two-stage semi-supervised meta-learning framework. The method has been tested on both synthetic data and real point clouds. The paper received a total of 4 reviews. After the rebuttal, Reviewers 84Fy (accept), 8obm (weak accept), GFH2 (weak accept) voted for accepting the paper because they reached an agreement that the paper proposes a novel, simple yet effective method to learn generalizable signed distance functions. 
Reviewer 39TN voted for a “Borderline reject” due to his/her concern about the lack of theoretical understanding of the proposed method, but in the rebuttal, the authors actively replied to Reviewer 39TN’s concern and provided additional intuition and experiments to illustrate the effectiveness of their approach. After an internal discussion, AC recommends accepting the paper because it presents a very useful tool for 3D representation and all the major concerns raised by the reviewers have been addressed during the rebuttal. AC urges the authors to improve their paper by taking into account all the suggestions from reviewers.
| train | [
"L-vR9uzQxmL",
"zW6IE1ik1X-",
"4mw_8Y4b2F8",
"q7aUlZGF5kD",
"-V0XxKzGkJ8",
"yrZE10XmleEt",
"CjJAsWuYtvw",
"fKVq1-YA9Z1",
"5bDANs0tFFZ",
"S1DsTBBFHv6",
"pMw2lbuyEb_",
"RMOPfLSKojI6",
"kbpn40SChz82",
"02KvufBCPM4",
"DzreTeINLPh",
"LBD4iquI7Ir",
"GC8PjZrarz6",
"NEQ8I_5NU-4"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" A few dozen point clouds is insufficient for our model to learn a strong generalized prior. The remaining time in the discussion phase did not allow us to report results on 10 or 20 instances per category. In the final version, we will analyze both training stages for varying instance counts per category. \n\nWe ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"-V0XxKzGkJ8",
"S1DsTBBFHv6",
"CjJAsWuYtvw",
"fKVq1-YA9Z1",
"yrZE10XmleEt",
"pMw2lbuyEb_",
"RMOPfLSKojI6",
"kbpn40SChz82",
"S1DsTBBFHv6",
"NEQ8I_5NU-4",
"GC8PjZrarz6",
"LBD4iquI7Ir",
"DzreTeINLPh",
"nips_2022_QK38rpF8RWL",
"nips_2022_QK38rpF8RWL",
"nips_2022_QK38rpF8RWL",
"nips_2022_... |
nips_2022_wxWTyJtiJZ | Product Ranking for Revenue Maximization with Multiple Purchases | Product ranking is the core problem for revenue-maximizing online retailers. To design proper product ranking algorithms, various consumer choice models are proposed to characterize the consumers' behaviors when they are provided with a list of products. However, existing works assume that each consumer purchases at most one product or will keep viewing the product list after purchasing a product, which does not agree with the common practice in real scenarios. In this paper, we assume that each consumer can purchase multiple products at will. To model consumers' willingness to view and purchase, we set a random attention span and purchase budget, which determines the maximal amount of products that he/she views and purchases, respectively. Under this setting, we first design an optimal ranking policy when the online retailer can precisely model consumers' behaviors. Based on the policy, we further develop the Multiple-Purchase-with-Budget UCB (MPB-UCB) algorithms with $\tilde{O}(\sqrt{T})$ regret that estimate consumers' behaviors and maximize revenue simultaneously in online settings. Experiments on both synthetic and semi-synthetic datasets prove the effectiveness of the proposed algorithms. | Accept | The paper studies the problem of choosing a ranked list of products to show to consumers in a regret minimization model. Consumers are assumed to follow a certain search rule to purchase a subset of presented products, and the goal is to maximize the revenue of the product listing under this search model. The model makes certain assumptions of the previously studied models somewhat more realistic, for example, it allows the consumer to purchase more than one product. The main result is a UCB-like algorithm with the regret of O(sqrt(T)).
On the negative side: Even though the paper claims to make the model more realistic than the other models previously studied in the literature, I still find the model quite stylized, and would not call it a practical model for capturing consumer behavior. For example, in the real world, one would expect the consumers to compare their options and take this comparison into account when selecting which item to purchase, whereas in this paper, the consumer makes a probabilistic decision on each item it sees independent of the other items (only conditioning on not having already purchased enough items).
On the positive side, the paper solves a meaningful and non-trivial, though somewhat stylized problem, and the results are interesting, at least from a theoretical point of view.
For these reasons, I'm leaning to accept this paper, if it fares well in comparison with other papers on the borderline. | train | [
"KMhi_C6AxtA",
"8Ddh0yuNPjS",
"ZNOOVzo99Gc",
"IxKkCLm77rF",
"vwuZdGYVPiz",
"U7PIre2Bww2",
"XNVwKYdAiM",
"Kp0oifL-DIW",
"mLoJGE0-4mY",
"ESSOJNSMvR",
"cg4Lv7tHP1A",
"RIv0Mjxajvf"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely appreciate your response. We will incorporate your feedback and discuss the relationship with Liang et al. in the final version of the paper. Thank you for your questions and suggestions again!",
" Thank you for the explanations.",
" Dear reviewers,\n\nWe appreciate your invaluable feedback and c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"8Ddh0yuNPjS",
"vwuZdGYVPiz",
"nips_2022_wxWTyJtiJZ",
"RIv0Mjxajvf",
"cg4Lv7tHP1A",
"ESSOJNSMvR",
"mLoJGE0-4mY",
"mLoJGE0-4mY",
"nips_2022_wxWTyJtiJZ",
"nips_2022_wxWTyJtiJZ",
"nips_2022_wxWTyJtiJZ",
"nips_2022_wxWTyJtiJZ"
] |
nips_2022_32Ryt4pAHeD | Explainable Reinforcement Learning via Model Transforms | Understanding emerging behaviors of reinforcement learning (RL) agents may be difficult since such agents are often trained in complex environments using highly complex decision making procedures. This has given rise to a variety of approaches to explainability in RL that aim to reconcile discrepancies that may arise between the behavior of an agent and the behavior that is anticipated by an observer. Most recent approaches have relied either on domain knowledge, that may not always be available, on an analysis of the agent’s policy, or on an analysis of specific elements of the underlying environment, typically modeled as a Markov Decision Process (MDP). Our key claim is that even if the underlying model is not fully known (e.g., the transition probabilities have not been accurately learned) or is not maintained by the agent (i.e., when using model-free methods), the model can nevertheless be exploited to automatically generate explanations. For this purpose, we suggest using formal MDP abstractions and transforms, previously used in the literature for expediting the search for optimal policies, to automatically produce explanations. Since such transforms are typically based on a symbolic representation of the environment, they can provide meaningful explanations for gaps between the anticipated and actual agent behavior. We formally define the explainability problem, suggest a class of transforms that can be used for explaining emergent behaviors, and suggest methods that enable efficient search for an explanation. We demonstrate the approach on a set of standard benchmarks. | Accept | This paper is about explainable AI: explaining a black-box agent's learned behavior via how it aligns with an observer's anticipated behaviour
This paper was a bit polarizing with the reviewers. First, let's summarize the agreements between the reviewers. They all agreed:
+ the problem of study is interesting and important
+ the paper was well written and engaging
+ the formalism and overall approach is unique
The disagreements between the reviewers revolved around:
- sharpening the text and claims within (addressed via extensive author engagement)
- the availability of good transforms
- the meaningfulness of the explanations when the agent's behavior is degenerate or there is agreement with the observer
- the completeness of the experiments provided (limited space and focus compared to other parts of the paper)
- computational tractability of the approach (searching transforms and solving for optimal policies)
- applicability in real world settings (focus on discrete symbolic domains, computation again)
As one reviewer put it "would anyone really use this approach?". Three reviewers aligned on clear reject concluding that the paper was intriguing but there were too many loose ends and open questions left to future work. Whereas the 4th reviewer found the work to be more than enough.
The AC found both sides of the argument compelling with a slight concern that more substantive ideas were required to convince the reader that the approach is applicable to high-dimensional, messy domains like pixel-based control and robotics. Indeed, symbolic domains are great for illustration purposes, but those domains are so intuitive that behavior is often interpretable by design---after all, they are toy problems designed to highlight specific agent properties. Whereas messy domains, like autonomous driving (operating on multiple dimensions of sensors, actuators --- all at different timescales) which the paper used as motivation are the end goal, and it remains unclear if this paper can get us there.
In the end, the paper is well executed and correct, so it should be considered for acceptance. | train | [
"G6xJ2DOQ9rC",
"Q2qcbRjYATG",
"8lThaZGio7yq",
"gS6Dbflv5L1",
"zDEuVbqM3ct6",
"_e6ZNtmnw6t",
"vLB3JKPdY9V",
"Pnoq1lHGuRa",
"6PL5Vr0O2jV",
"_AND-rsgIXY",
"dmh00jZ2h44",
"2CMsU7lGiE",
"Tn-xc9G_gf",
"mieK-h3iejF",
"QqDPLcGSztY",
"7I_uvQGeKJu"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers, \nAs the author-reviewer discussion period is about to end, we would like to know if you have any additional concerns or questions in light of our responses? If so, we will be happy to address them.",
" The reviewer made a valid point in this statement \"Conversely, if the actor's policy is alre... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"_e6ZNtmnw6t",
"gS6Dbflv5L1",
"zDEuVbqM3ct6",
"Pnoq1lHGuRa",
"_AND-rsgIXY",
"2CMsU7lGiE",
"6PL5Vr0O2jV",
"7I_uvQGeKJu",
"QqDPLcGSztY",
"mieK-h3iejF",
"Tn-xc9G_gf",
"nips_2022_32Ryt4pAHeD",
"nips_2022_32Ryt4pAHeD",
"nips_2022_32Ryt4pAHeD",
"nips_2022_32Ryt4pAHeD",
"nips_2022_32Ryt4pAHeD... |
nips_2022_kb33f8J83c | One Model to Edit Them All: Free-Form Text-Driven Image Manipulation with Semantic Modulations | Free-form text prompts allow users to describe their intentions during image manipulation conveniently. Based on the visual latent space of StyleGAN[21] and text embedding space of CLIP[34], studies focus on how to map these two latent spaces for text-driven attribute manipulations. Currently, the latent mapping between these two spaces is empirically designed and confines that each manipulation model can only handle one fixed text prompt. In this paper, we propose a method named Free-Form CLIP (FFCLIP), aiming to establish an automatic latent mapping so that one manipulation model handles free-form text prompts. Our FFCLIP has a cross-modality semantic modulation module containing semantic alignment and injection. The semantic alignment performs the automatic latent mapping via linear transformations with a cross attention mechanism. After alignment, we inject semantics from text prompt embeddings to the StyleGAN latent space. For one type of image (e.g., `human portrait'), one FFCLIP model can be learned to handle free-form text prompts. Meanwhile, we observe that although each training text prompt only contains a single semantic meaning, FFCLIP can leverage text prompts with multiple semantic meanings for image manipulation. In the experiments, we evaluate FFCLIP on three types of images (i.e., `human portraits', `cars', and `churches'). Both visual and numerical results show that FFCLIP effectively produces semantically accurate and visually realistic images. Project page: https://github.com/KumapowerLIU/FFCLIP. | Accept | The paper develops an image manipulation method FF-CLIP (Freeform CLIP) to edit image semantics based on the text prompt guidance. A cross-attention module is developed to align the visual representations and text semantic embeddings. The results show the effectiveness of the approach. 
Reviewers had concerns on the novelty and comparison with previous similar image editing methods. Yet the reviewer-author discussion effectively addressed some of the concerns, and two reviewers raised their scores. The ethics reviewers expressed concerns on the potential ethics issues as one of the major applications of the work is human face editing, which is known to amplify biases. Though the revised version has added more discussion on the ethics issue, more in-depth discussion/analysis on potential solutions would be desired. | train | [
"vbUueMh0Ls",
"a5v5NABH_xu",
"-g1SVc-gZ0A",
"sS1f2l02Af2",
"6ELRvhhRSyJ3",
"mvgmYkVP7p5",
"6CQuvsYM5Zu",
"qgFh9_8g7q",
"kCUXVDmIC-M",
"mJXpXEXAqCp",
"SGdwMUrtKu",
"v-iE6rhvS-v",
"_lJ6QFQ-x0J",
"mnfNif6bRBL",
"u8F0fL41qvF"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read over the author rebuttal and the paper changes. These adequately address my main concerns, and so I am happy to raise my score accordingly.\n\nI believe clarity could still be improved, but the issue is not so severe that it should block possible publication. ",
" The authors have addressed many of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"kCUXVDmIC-M",
"SGdwMUrtKu",
"mvgmYkVP7p5",
"_lJ6QFQ-x0J",
"nips_2022_kb33f8J83c",
"6CQuvsYM5Zu",
"mJXpXEXAqCp",
"nips_2022_kb33f8J83c",
"u8F0fL41qvF",
"mnfNif6bRBL",
"_lJ6QFQ-x0J",
"nips_2022_kb33f8J83c",
"nips_2022_kb33f8J83c",
"nips_2022_kb33f8J83c",
"nips_2022_kb33f8J83c"
] |
nips_2022_6tRhLrki6b8 | Privacy-Preserving Logistic Regression Training with A Faster Gradient Variant | Logistic regression training over encrypted data has been an attractive idea for addressing security concerns for years. In this paper, we propose a faster gradient variant called quadratic gradient to implement logistic regression training in a homomorphic encryption domain, the core of which can be seen as an extension of the simplified fixed Hessian. We enhance Nesterov's accelerated gradient (NAG) and Adaptive Gradient Algorithm (Adagrad) respectively with this gradient variant and evaluate the enhanced algorithms on several datasets.
Experimental results show that the enhanced methods have a state-of-the-art performance in convergence speed compared to the naive first-order gradient methods. We then adopt the enhanced NAG method to implement homomorphic logistic regression training and obtain a comparable result by only 3 iterations. | Reject | Reviewers remained concerned about the novelty of the contribution, about the extent/limitations of experiments/comparisons to other methods, as well as about the fact that the method does not seem to outperform competitors in certain cases. | train | [
"8D6r8rE8fds",
"tM3w2MMIA-v",
"ILvAMp-05j",
"ga2Il6963qG",
"VlxYxuTw9e",
"nhmDJPyvy6S",
"RaHZt0fJwxF",
"xYCAg_QDrO"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I am glad to help with the questions in my paper.\nAnd I am very grateful for the time you and other reviewers spent reading my work.",
" Thanks for the meaningful response. And after reading my colleague's comments, I decided to maintain my scores.",
" We would like to thank the reviewers for their input and... | [
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"tM3w2MMIA-v",
"ga2Il6963qG",
"xYCAg_QDrO",
"RaHZt0fJwxF",
"nhmDJPyvy6S",
"nips_2022_6tRhLrki6b8",
"nips_2022_6tRhLrki6b8",
"nips_2022_6tRhLrki6b8"
] |
nips_2022_T5TtjbhlAZH | Towards Practical Control of Singular Values of Convolutional Layers | In general, convolutional neural networks (CNNs) are easy to train, but their essential properties, such as generalization error and adversarial robustness, are hard to control. Recent research demonstrated that singular values of convolutional layers significantly affect such elusive properties and offered several methods for controlling them. Nevertheless, these methods present an intractable computational challenge or resort to coarse approximations. In this paper, we offer a principled approach to alleviating constraints of the prior art at the expense of an insignificant reduction in layer expressivity. Our method is based on the tensor-train decomposition; it retains control over the actual singular values of convolutional mappings while providing structurally sparse and hardware-friendly representation. We demonstrate the improved properties of modern CNNs with our method and analyze its impact on the model performance, calibration, and adversarial robustness. The source code is available at: https://github.com/WhiteTeaDragon/practical_svd_conv | Accept | This paper introduced a tensor decomposition, and associated theory, which allows for the control of singular values in convolutional layers.
Based upon the reviews, rebuttal, and reviewer discussion, I recommend paper acceptance. All reviewers recommend acceptance. The rebuttal was effective, with one reviewer who initially recommended rejection raising their score.
The authors should be sure to follow through, and update the paper to include changes discussed during the review period. Especially, it seems as if the framing of the paper shifted during the review period from centering on practically computing singular values to practically controlling singular values. From my understanding of the work, I agree this second framing makes more sense. | train | [
"WEbZTNlkNkK",
"hVn3xoeeAhV",
"5yp6nkGP0l",
"JWrqHrAPWC9",
"kjfx6mn0QHy",
"FNxnYXkdU6q",
"5cyuhiz-vb0",
"A7tXgkTIXdv",
"NquNO11iH4_",
"FTzsTun_sD",
"QKOdb7Yh-hH",
"RcIV1Gt4jSV"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for addressing my concerns, and I raised my rating based on the responses.",
" Thanks for addressing my concerns!\n\nI want to clarify that for the last point:\n> We appreciate the reviewer's suggestion regarding a more thorough comparison with [Sedghi et al., 2019] in the intr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"5cyuhiz-vb0",
"FNxnYXkdU6q",
"kjfx6mn0QHy",
"RcIV1Gt4jSV",
"QKOdb7Yh-hH",
"FTzsTun_sD",
"NquNO11iH4_",
"nips_2022_T5TtjbhlAZH",
"nips_2022_T5TtjbhlAZH",
"nips_2022_T5TtjbhlAZH",
"nips_2022_T5TtjbhlAZH",
"nips_2022_T5TtjbhlAZH"
] |
nips_2022_QUyasQGv1Nl | Hyperbolic Contrastive Learning for Visual Representations beyond Objects | Despite the rapid progress in visual representation learning driven by self-/un-supervised methods, both objects and scenes have been primarily treated using the same lens. In this paper, we focus on learning representations for objects and scenes explicitly in the same space. Motivated by the observation that visually similar objects are close in the representation space, we argue that the scenes and objects should further follow a hierarchical structure based on their compositionality. To exploit such a structure, we propose a contrastive learning framework where a Euclidean loss is used to learn object representations and a hyperbolic loss is used to regularize scene representations according to the hierarchy. This novel hyperbolic objective encourages the scene-object hypernymy among the representations by optimizing the magnitude of their norms. We show that when pretraining on the COCO and OpenImages datasets, the hyperbolic loss improves downstream performance across multiple datasets and tasks, including image classification, object detection, and semantic segmentation. We also show that the properties of the learned representations allow us to solve various vision tasks that involve the interaction between scenes and objects in a zero-shot way. | Reject | Overall, reviewers found that the method is sound but the results are marginal.
There are numerous frameworks for self-supervised learning today. The one introduced here underperforms compared to others, like ORL and Dense-CL, as pointed out by the reviewers. The authors in their response then combined their method with ORL and Dense-CL. This resulted in 1.1% improvement over ORL. This is marginal. I understand that the authors intended to reposition their work as a general-purpose add-on that increases performance. But far more analysis is required to establish that this 1.1% improvement is real and that it is meaningful. There are many tricks for SSL that improve performance by 1% or so; often these are not used because the slowdown they incur is not worth the effort. And the increase in performance is not noticeable.
At least, if the authors wish to pivot in this way, then the manuscript requires a rewrite to read as an add-on to many methods and to properly evaluate this. As it stands, by not comparing against ORL and other methods in the main manuscript, the submission cannot be accepted as is.
I encourage the authors to submit to a computer vision venue where such results may be appreciated more and where they may be given more room to thrive into something bigger, and to fully incorporate methods like ORL while demonstrating that their method really does produce a meaningful improvement.
| val | [
"wRVABRkCO_s",
"D8R2jAik_Ri",
"wNwd4nTgUJ8",
"xbUU9aItzB",
"Dl-fesxKio_",
"dYh_t4AhUb",
"9mcS3Cajvzd",
"e_lsy98jWb",
"oNEd7qLAp1g",
"2dJRI1nNtSh",
"bPluvFd_feQ",
"Xv5xBVStvD",
"TfOaOCTVgWM",
"SwXPFbHFFd"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The author provides additional experimental results, showing the scalability of HCL on existing object-level learning methods (ORL), which supports their claims and address my main concerns. So, I increase my score.\n\nI would recommend the authors add these results to the main text to support the effectiveness o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"e_lsy98jWb",
"2dJRI1nNtSh",
"TfOaOCTVgWM",
"Xv5xBVStvD",
"dYh_t4AhUb",
"SwXPFbHFFd",
"SwXPFbHFFd",
"TfOaOCTVgWM",
"Xv5xBVStvD",
"Xv5xBVStvD",
"nips_2022_QUyasQGv1Nl",
"nips_2022_QUyasQGv1Nl",
"nips_2022_QUyasQGv1Nl",
"nips_2022_QUyasQGv1Nl"
] |
nips_2022_ZMrZ5SC2G3_ | Towards Versatile Embodied Navigation | With the emergence of varied visual navigation tasks (e.g., image-/object-/audio-goal and vision-language navigation) that specify the target in different ways, the community has made appealing advances in training specialized agents capable of handling individual navigation tasks well. Given plenty of embodied navigation tasks and task-specific solutions, we address a more fundamental question: can we learn a single powerful agent that masters not one but multiple navigation tasks concurrently? First, we propose VXN, a large-scale 3D dataset that instantiates four classic navigation tasks in standardized, continuous, and audiovisual-rich environments. Second, we propose Vienna, a versatile embodied navigation agent that simultaneously learns to perform the four navigation tasks with one model. Building upon a full-attentive architecture, Vienna formulates various navigation tasks as a unified, parse-and-query procedure: the target description, augmented with four task embeddings, is comprehensively interpreted into a set of diversified goal vectors, which are refined as the navigation progresses, and used as queries to retrieve supportive context from episodic history for decision making. This enables the reuse of knowledge across navigation tasks with varying input domains/modalities. We empirically demonstrate that, compared with learning each visual navigation task individually, our multitask agent achieves comparable or even better performance with reduced complexity. | Accept | This paper introduces a novel indoor navigation dataset that is both continuous and audio+visual. Within this setting, they include popular tasks and their audio-generalizations (e.g. image-goal nav --> audio-goal nav). Particularly of note is leveraging the unification of these tasks during training for a better overall agent. This is a necessary and important step for the community.
There are several minor concerns regarding exposition and claims which were addressed in responses/updates which will strengthen the final paper. This includes task/model variances and clarifying why the reported variances are smaller than typically seen in related EAI tasks. | train | [
"Iqv7b_vbV0s",
"17WxtGrX02f",
"lR-dLdnjlQx",
"T9jEMgCUaPj",
"hoYPOajmWoB",
"T_7lFQLVOsn",
"r9jEOhjCEe0",
"-t4_W1NiIl9",
"y__7ENcrDyX",
"OjCy0h5dYLA",
"ccs5Eihfwyj",
"IvAE_8aqd3X"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the revisions. I will increase the score.\n\n",
" Thanks for your feedback. The range of training noise is: *vision-language nav.* ($0.19$ SPL), *image-goal nav.* ($0.50$ SPL), *audio-goal nav.* ($0.22$ SPL), *object-goal nav.* ($0.42$ SPL). Most of the numbers in Table 3 are beyond the range of noi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"17WxtGrX02f",
"lR-dLdnjlQx",
"hoYPOajmWoB",
"y__7ENcrDyX",
"T_7lFQLVOsn",
"r9jEOhjCEe0",
"IvAE_8aqd3X",
"ccs5Eihfwyj",
"OjCy0h5dYLA",
"nips_2022_ZMrZ5SC2G3_",
"nips_2022_ZMrZ5SC2G3_",
"nips_2022_ZMrZ5SC2G3_"
] |
nips_2022_wfel7CjOYk | Resource-Adaptive Federated Learning with All-In-One Neural Composition | Conventional Federated Learning (FL) systems inherently assume a uniform processing capacity among clients for deployed models. However, diverse client hardware often leads to varying computation resources in practice. Such system heterogeneity results in an inevitable trade-off between model complexity and data accessibility as a bottleneck. To avoid such a dilemma and achieve resource-adaptive federated learning, we introduce a simple yet effective mechanism, termed All-In-One Neural Composition, to systematically support training complexity-adjustable models with flexible resource adaption. It is able to efficiently construct models at various complexities using one unified neural basis shared among clients, instead of pruning the global model into local ones. The proposed mechanism endows the system with unhindered access to the full range of knowledge scattered across clients and generalizes existing pruning-based solutions by allowing soft and learnable extraction of low footprint models. Extensive experiment results on popular FL benchmarks demonstrate the effectiveness of our approach. The resulting FL system empowered by our All-In-One Neural Composition, called FLANC, manifests consistent performance gains across diverse system/data heterogeneous setups while keeping high efficiency in computation and communication. | Accept | This paper proposes a method to cope with heterogeneous computation capabilities of clients in federated learning. The initial reviews were positive, but some of the high-score reviewers indicated low confidence. The following concerns were raised.
1. Limitations in the experimental baselines
2. Lack of theoretical justification for the convergence and the communication/computation complexity
3. Somewhat incremental novelty
The authors put in significant effort to address the concerns during the rebuttal, which led to a slight increase in the average score. Therefore, I recommend acceptance of the paper. I strongly encourage the authors to take the reviewers' constructive feedback into account when revising the paper. | train | [
"zL7DDChiGN",
"pFxb30vbaDO",
"8BpEvd9mV3G",
"TM61FMGNgP",
"JGmNNuwTwZ7",
"7n_HnkA1apf",
"h7bzZFIW7V9",
"rCysc4n2BnK",
"XD09fisWSOf",
"9LLTASey98"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for your valuable comments. As the discussion period is closing soon, could you please take a look at our response and reevaluate the submission? Please let us know if there is any further question about the submission. We look forward to hearing from you.",
" Given the upcoming OpenReview dead... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
2
] | [
"rCysc4n2BnK",
"rCysc4n2BnK",
"rCysc4n2BnK",
"rCysc4n2BnK",
"rCysc4n2BnK",
"9LLTASey98",
"XD09fisWSOf",
"nips_2022_wfel7CjOYk",
"nips_2022_wfel7CjOYk",
"nips_2022_wfel7CjOYk"
] |
nips_2022_Lp-QFq2QRXA | Decision Trees with Short Explainable Rules | Decision trees are widely used in many settings where interpretable models are preferred or required. As confirmed by recent empirical studies, the interpretability/explainability of a decision tree critically depends on some of its structural parameters, like size and the average/maximum depth of its leaves. There is indeed a vast literature on the design and analysis of decision tree algorithms that aim at optimizing these parameters.
This paper contributes to this important line of research: we propose as a novel criterion of measuring the interpretability of a decision tree, the sparsity of the set of attributes that are (on average) required to explain the classification of the examples. We give a tight characterization of the best possible guarantees achievable by a decision tree built to optimize both our new
measure (which we call the {\em explanation size}) and the more classical measures of worst-case and average depth. In particular, we give an algorithm that guarantees $O(\ln n )$-approximation (hence optimal if $P \neq NP$) for the minimization of both the average/worst-case explanation size and the average/worst-case depth. In addition to our theoretical contributions, experiments with 20 real datasets show that our algorithm has accuracy competitive with {\tt CART} while producing trees that allow for much simpler explanations. | Accept | The paper presents an interesting approach for using decision trees in order to provide explainable classifiers | train | [
"p6enO84TBFX",
"0lAQd8EJE1T",
"fgznnBgP7cX",
"yaByflM5XZB",
"BwPgyctBuXE",
"MbelBBlkyt4",
"w5bHfYptqB",
"SJjCG9ZvYwq",
"vyQLyeX7ITA"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for your time and your constructive criticism!",
" Thank you for your responses, especially the clarification regarding pruning algorithms. I appreciate the addition of post-pruning results in the final version, as well as the EC2 results added in the supplement. Given that my main concern was thes... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"0lAQd8EJE1T",
"BwPgyctBuXE",
"nips_2022_Lp-QFq2QRXA",
"vyQLyeX7ITA",
"SJjCG9ZvYwq",
"w5bHfYptqB",
"nips_2022_Lp-QFq2QRXA",
"nips_2022_Lp-QFq2QRXA",
"nips_2022_Lp-QFq2QRXA"
] |
nips_2022_hOVEBHpHrMu | MsSVT: Mixed-scale Sparse Voxel Transformer for 3D Object Detection on Point Clouds | 3D object detection from the LiDAR point cloud is fundamental to autonomous driving. Large-scale outdoor scenes usually feature significant variance in instance scales, thus requiring features rich in long-range and fine-grained information to support accurate detection. Recent detectors leverage the power of window-based transformers to model long-range dependencies but tend to blur out fine-grained details. To mitigate this gap, we present a novel Mixed-scale Sparse Voxel Transformer, named MsSVT, which can well capture both types of information simultaneously by the divide-and-conquer philosophy. Specifically, MsSVT explicitly divides attention heads into multiple groups, each in charge of attending to information within a particular range. All groups' output is merged to obtain the final mixed-scale features. Moreover, we provide a novel chessboard sampling strategy to reduce the computational complexity of applying a window-based transformer in 3D voxel space. To improve efficiency, we also implement the voxel sampling and gathering operations sparsely with a hash map. Endowed with the powerful capability and high efficiency of modeling mixed-scale information, our single-stage detector built on top of MsSVT surprisingly outperforms state-of-the-art two-stage detectors on Waymo. Our project page: https://github.com/dscdyc/MsSVT. | Accept | After the rebuttal and discussion, two reviewers are positive and one remains negative. The reviewers liked the overall approach, the writing, and the core experimental results. Some reviewers asked for additional broader experiments and comparisons, which the authors were able to provide. The main concern of reviewer QRYf is the close relation to PointPillars, which the authors are able to clarify sufficiently in their rebuttal.
There is thus sufficient evidence to accept this submission. | train | [
"OASJ0s3lzcm",
"mfHcjVRflKg",
"xAlrZyaT58N",
"AvplZjYa8rs",
"MAgWr0LFMdR",
"NV5bg0taw05",
"tqQFp_MLiU",
"0rvtF-iYsbv",
"Ro09zGEyvHw",
"NzTfeGbhSkd",
"Paa1Sju5CNJ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer, we have answered your questions in the author response and also uploaded a revised manuscript by following your suggestions for paper writing. We hope that we have addressed all your concerns. Do you have any further assessment (or concerns) of our work? Thanks for your kind consideration.",
" I ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"MAgWr0LFMdR",
"0rvtF-iYsbv",
"Ro09zGEyvHw",
"NzTfeGbhSkd",
"Ro09zGEyvHw",
"NzTfeGbhSkd",
"NzTfeGbhSkd",
"Paa1Sju5CNJ",
"nips_2022_hOVEBHpHrMu",
"nips_2022_hOVEBHpHrMu",
"nips_2022_hOVEBHpHrMu"
] |
nips_2022_IvJj3CvjqHC | Generalized Delayed Feedback Model with Post-Click Information in Recommender Systems | Predicting conversion rate (e.g., the probability that a user will purchase an item) is a fundamental problem in machine learning based recommender systems. However, accurate conversion labels are revealed after a long delay, which harms the timeliness of recommender systems. Previous literature concentrates on utilizing early conversions to mitigate such a delayed feedback problem. In this paper, we show that post-click user behaviors are also informative to conversion rate prediction and can be used to improve timeliness. We propose a generalized delayed feedback model (GDFM) that unifies both post-click behaviors and early conversions as stochastic post-click information, which could be utilized to train GDFM in a streaming manner efficiently. Based on GDFM, we further establish a novel perspective that the performance gap introduced by delayed feedback can be attributed to a temporal gap and a sampling gap. Inspired by our analysis, we propose to measure the quality of post-click information with a combination of temporal distance and sample complexity. The training objective is re-weighted accordingly to highlight informative and timely signals. We validate our analysis on public datasets, and experimental performance confirms the effectiveness of our method. | Accept | The paper presents an approach for dealing with delayed feedback in online learning settings such as large scale recommender systems, where the delays may be significant as in the case of predicting conversion rate for online shopping where a user may spend days or weeks deciding to finally click "purchase" after first viewing listings or putting items in a shopping cart.
There was quite a range of viewpoints on this paper, including a clear rejection from RiVF. After reading through the paper myself and the responses from the authors and other reviewers to the points raised by this reviewer, I have determined that the key argument made by this reviewer -- that the paper is not novel, due to prior work on Efficient Heterogeneous Collaborative Filtering -- is not sufficient to warrant rejection of the paper. The GDFM and EHCF settings are fundamentally different, and while one could imagine using methods from EHCF as part of a solution in this problem space, the current paper focuses clearly on the inherent difficulty of delayed feedback. Furthermore, as the other two reviewers note, the problem itself is highly important, difficult to solve, and the current paper puts forward interesting and effective methods that are reasonably evaluated on publicly available static datasets. For these reasons, I am discounting the rejection and recommending acceptance.
I do agree with reviewers who suggest that online ("real world") experiments would strengthen the paper significantly. I understand that the nature of production level / deployed industrial settings can make it difficult to exhaustively report results, but even a paragraph of anecdotal evidence or experience here would be helpful. I believe that the paper is still worthy of acceptance, but am marking "less certain" because of this factor.
| train | [
"B2etkGOKyTa",
"ktP5WeL2qCd",
"kWWMAVNq1Sl",
"fy7PTl-Rdh4",
"CKgqkYSply5",
"-7RD21LPnsC",
"IYM5kYYqLl7",
"5HOS7HEzO3B",
"ylXW3eGgez_",
"I3GqaYfjSr_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification.\n\nI have raised the score to 7.",
" We use $p()$ to denote the ground-truth probability distribution,\nand $q()$ to denote the corresponding estimated probability distribution.\nHere, $q(a|y)$ is an estimation of $p(a|y, x)$.\nSorry for the confusion.\n\nIt's hard to tell whether ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"ktP5WeL2qCd",
"fy7PTl-Rdh4",
"ylXW3eGgez_",
"-7RD21LPnsC",
"5HOS7HEzO3B",
"I3GqaYfjSr_",
"ylXW3eGgez_",
"nips_2022_IvJj3CvjqHC",
"nips_2022_IvJj3CvjqHC",
"nips_2022_IvJj3CvjqHC"
] |
nips_2022_r9b6T088_75 | Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging | In coded aperture snapshot spectral compressive imaging (CASSI) systems, hyperspectral image (HSI) reconstruction methods are employed to recover the spatial-spectral signal from a compressed measurement. Among these algorithms, deep unfolding methods demonstrate promising performance but suffer from two issues. Firstly, they do not estimate the degradation patterns and ill-posedness degree from CASSI to guide the iterative learning. Secondly, they are mainly CNN-based, showing limitations in capturing long-range dependencies. In this paper, we propose a principled Degradation-Aware Unfolding Framework (DAUF) that estimates parameters from the compressed image and physical mask, and then uses these parameters to control each iteration. Moreover, we customize a novel Half-Shuffle Transformer (HST) that simultaneously captures local contents and non-local dependencies. By plugging HST into DAUF, we establish the first Transformer-based deep unfolding method, Degradation-Aware Unfolding Half-Shuffle Transformer (DAUHST), for HSI reconstruction. Experiments show that DAUHST surpasses state-of-the-art methods while requiring lower computational and memory costs. Code and models are publicly available at https://github.com/caiyuanhao1998/MST | Accept | This paper integrates a Half-shuffle Transformer (HST) into the deep unfolding framework, establishing an effective method for hyperspectral image (HSI) reconstruction. The reviewers generally agree that the paper is well-written and technically solid. The majority of the reviews assert that the technical novelty is not so dramatic, and I concur with this from the perspective of learning-based reconstruction. On the other hand, from the viewpoint of spectral compressive imaging, I also agree with the authors that the paper has certain novelty and significance.
Thus, I would recommend accepting the paper if space permits. | train | [
"0VJcsckZsN",
"-Xhq1d_xWzz",
"sAilVEEfL3",
"L0oRfmtBOID",
"dvgo5Kd5Me",
"_5hNC_wA9i",
"jc2SdJPknx",
"Lcc4HmGfKC2",
"iz6ZyEtRnk",
"nHR4QH99Ce",
"aaNE8zrDq2e",
"SmOcQwgUvZ-",
"_liLFMmB_w6",
"yywQQNI6Oy6",
"DcNV_wqAHjN",
"_nkmP0-G_H_",
"KEbbpws-wpg",
"9HJF5hNHbl1G",
"jFzCCEmU5l",
... | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" Dear Reviewer 6Fb8,\n\nThanks for discussing with us and agreeing that our response addresses some of your concerns. Thank you for approving the $\\textbf{contribution}$ of our proposed Transformer and $\\textbf{practical significance}$ of our work. We will kindly cite and introduce U-Transformer in the revision.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
3
] | [
"IerZYHjNIUi",
"Lcc4HmGfKC2",
"SmOcQwgUvZ-",
"nips_2022_r9b6T088_75",
"nHR4QH99Ce",
"jc2SdJPknx",
"iz6ZyEtRnk",
"_nkmP0-G_H_",
"2SyiNX-aV5u",
"IerZYHjNIUi",
"KEbbpws-wpg",
"Qgz9CE4k0QrZ",
"6-F1huulKSq",
"gduagZVCegM",
"2SyiNX-aV5u",
"5sg1IBU24pn",
"5sg1IBU24pn",
"2SyiNX-aV5u",
"2... |
nips_2022_Blbzv2ZjT7 | PerfectDou: Dominating DouDizhu with Perfect Information Distillation | As a challenging multi-player card game, DouDizhu has recently drawn much attention for analyzing competition and collaboration in imperfect-information games. In this paper, we propose PerfectDou, a state-of-the-art DouDizhu AI system that summits the game, in an actor-critic framework with a proposed technique named perfect information distillation.
In detail, we adopt a perfect-training-imperfect-execution framework that allows the agents to utilize the global information to guide the training of the policies as if it were a perfect information game, while the trained policies can be used to play the imperfect information game during the actual gameplay. Correspondingly, we characterize card and game features for DouDizhu to represent the perfect and imperfect information. To train our system, we adopt proximal policy optimization with generalized advantage estimation in a parallel training paradigm. In experiments we show how and why PerfectDou beats all existing programs, and achieves state-of-the-art performance. | Accept | The reviewers appreciate both main contributions, namely the PTIE concept and the feature and reward engineering. While there are concerns that neither may generalize beyond the specific game of DouDizhu, and that PTIE may be somewhat incremental given CTDE (or even not novel at all; several CTDE works use the entire state for the centralized training), successfully demonstrating PTIE on even this single domain and methodically evaluating its effect should be of interest to the community. The paper is mostly well written, and the authors are requested to incorporate the specific reviewer feedback.
"MScvLL49pSd",
"GrpV_qZdL9",
"PykriOpU1So",
"4RjQVaF2Pk",
"HefFB1GX-aen",
"nZp-0IKiIX",
"ivWHAJjwo1g",
"DPt8GD_jrx9",
"eOjYt4iVWKG"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers,\n\nWe first thank you again for your valuable comments and suggestions. In the previous replies, we have tried our best to address your questions point by point and supplemented more experiments.\n\nWe sincerely look forward to your reply to our response. And we are open to any discussion to impro... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2022_Blbzv2ZjT7",
"eOjYt4iVWKG",
"4RjQVaF2Pk",
"eOjYt4iVWKG",
"ivWHAJjwo1g",
"DPt8GD_jrx9",
"nips_2022_Blbzv2ZjT7",
"nips_2022_Blbzv2ZjT7",
"nips_2022_Blbzv2ZjT7"
] |
nips_2022_ouXTjiP0ffV | NCP: Neural Correspondence Prior for Effective Unsupervised Shape Matching | We present Neural Correspondence Prior (NCP), a new paradigm for computing correspondences between 3D shapes. Our approach is fully unsupervised and can lead to high quality correspondences even in challenging cases such as sparse point clouds or non-isometric meshes, where current methods fail. Our first key observation is that, in line with neural priors observed in other domains, recent network architectures on 3D data, even without training, tend to produce pointwise features that induce plausible maps between rigid or non-rigid shapes. Secondly, we show that given a noisy map as input, training a feature extraction network with the input map as supervision, tends to remove artifacts from the input and can act as a powerful correspondence denoising mechanism, both between individual pairs and within a collection. With these observations in hand, we propose a two-stage unsupervised paradigm for shape matching, by (i) performing unsupervised training by adapting an existing approach to obtain an initial set of noisy matches, (ii) using these matches to train a network in a supervised manner. We demonstrate that this approach significantly improves the accuracy of the maps, especially when trained within a collection. We show that NCP is data-efficient, fast, and achieves state-of-the-art results on many tasks. Our code will be released after publication. | Accept | This paper received mixed scores, with three reviewer recommending acceptance and one rejection. The reviewers appreciated the simplicity and effectiveness of the method, but nonetheless raised many questions about the method, requesting the authors to clarify several points. The authors' feedback addressed most of these questions. 
During the discussion, 2gYm, the most negative reviewer, mentioned that they found the contributions interesting for the shape matching community but not significant enough to be published in NeurIPS. Considering that three reviewers found this work sufficiently interesting to recommend acceptance, the AC deems this to be a secondary concern. The AC nonetheless strongly encourages the authors to revise their paper based on their feedback for the camera-ready version. | train | [
"XCsQOTm7EO",
"3y5yTqJvioi",
"XaakeppozZ-",
"bjd-Q8vXJ_h",
"x9coXOnGxD",
"d9AnOMFhyVw",
"kC0IL_SmOGQ",
"-sbat-U9AG",
"LVcqdde0NZj",
"7QmE59LCp7z",
"rYlseSeDM4R",
"HA20apUD-XF",
"Sl_aXdWGuIL",
"roJ5XXilQ3",
"n0L6VJYeTZS"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their response and constructive feedback, as well as for considering our rebuttal and increasing the score for our paper. We will make sure to include the evaluation of smoothness and injectivity, as suggested by the reviewer. We will also make sure to include an analysis of the differen... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"3y5yTqJvioi",
"x9coXOnGxD",
"n0L6VJYeTZS",
"n0L6VJYeTZS",
"roJ5XXilQ3",
"roJ5XXilQ3",
"roJ5XXilQ3",
"Sl_aXdWGuIL",
"Sl_aXdWGuIL",
"HA20apUD-XF",
"nips_2022_ouXTjiP0ffV",
"nips_2022_ouXTjiP0ffV",
"nips_2022_ouXTjiP0ffV",
"nips_2022_ouXTjiP0ffV",
"nips_2022_ouXTjiP0ffV"
] |
nips_2022_SsA-0BZa7B_ | A2: Efficient Automated Attacker for Boosting Adversarial Training | Building on the significant improvement in model robustness achieved by AT (Adversarial Training), various variants have been proposed to further boost the performance. Well-recognized methods have focused on different components of AT (e.g., designing loss functions and leveraging additional unlabeled data). It is generally accepted that stronger perturbations yield more robust models.
However, how to generate stronger perturbations efficiently remains an open problem. In this paper, we propose an efficient automated attacker called A2 to boost AT by generating the optimal perturbations on-the-fly during training. A2 is a parameterized automated attacker that searches the attacker space for the best attacker against the defense model and examples. Extensive experiments across different datasets demonstrate that A2 generates stronger perturbations with low extra cost and reliably improves the robustness of various AT methods against different attacks. | Accept | Based on the idea of AutoML, this paper proposes an attack method that efficiently generates strong adversarial perturbations. The main idea is to use an attention mechanism to score possible attacks in the attacker space, then sample the attack to perform based on the assigned scores. The experimental results show that the proposed method can increase the attack power and improve the adversarial training performance without too much overhead. The reviewers suggest that the authors release code to help others reproduce the results.
"_Ib2iTQC2iPt",
"IVBO5IJSWe25",
"9iXUj9Mm4UEj",
"thiHwpDplIO",
"cHKVbOSUywU",
"Nr7zBhJa9-C",
"3bu35i2dQ5E",
"1ceJ_PWumHz",
"YwzeHiThYN_",
"x966UKTEVRE",
"83pv1_ycmAZ",
"xyZSjb7QBcp"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your insightful suggestions.\nThe distribution of selected attacks and other issues help a lot in further improving our paper.",
" EOM",
" We sincerely appreciate your time and efforts in reviewing our paper.\nWe truly thank you for the useful suggestions and the acknowledgment of $A^2$''s improvem... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"IVBO5IJSWe25",
"3bu35i2dQ5E",
"thiHwpDplIO",
"1ceJ_PWumHz",
"Nr7zBhJa9-C",
"YwzeHiThYN_",
"x966UKTEVRE",
"xyZSjb7QBcp",
"83pv1_ycmAZ",
"nips_2022_SsA-0BZa7B_",
"nips_2022_SsA-0BZa7B_",
"nips_2022_SsA-0BZa7B_"
] |
nips_2022_V9ngeCMsZK3 | Efficient learning of nonlinear prediction models with time-series privileged information | In domains where sample sizes are limited, efficient learning algorithms are critical. Learning using privileged information (LuPI) offers increased sample efficiency by allowing prediction models access to auxiliary information at training time which is unavailable when the models are used. In recent work, it was shown that for prediction in linear-Gaussian dynamical systems, a LuPI learner with access to intermediate time series data is never worse and often better in expectation than any unbiased classical learner. We provide new insights into this analysis and generalize it to nonlinear prediction tasks in latent dynamical systems, extending theoretical guarantees to the case where the map connecting latent variables and observations is known up to a linear transform. In addition, we propose algorithms based on random features and representation learning for the case when this map is unknown. A suite of empirical results confirms theoretical findings and shows the potential of using privileged time-series information in nonlinear prediction. | Accept | This paper considers a particular setting of time series prediction with privileged information. A special case can be described as predicting x(t+k) from x(t). At training time one is also given x(t+1), x(t+2), ..., x(t+k-1) and a latent dynamics is assumed. The paper presents a learning algorithm that leverages privileged info at train time, provides rigorous theoretical analysis of this algorithm and convincing numerical experiments. This paper is definitely of interest to the ML community and would serve as an interesting contribution to the conference.
"R7cCUfAeCPc",
"61vFPoLBYis",
"r3DIuGN3vo-",
"R4Hzibc1Qp",
"aNC-uep6hO-",
"QcKqi8-IvYea",
"pvgjP5ppXRH",
"3orR65qT7DJ",
"zqBrSbkUnWy",
"fXqVGJtGmYM",
"VOuBf7pIWU8"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for answering my questions.\nThough I think there is still some space for making the paper more clear, I raised the score from 4 to 5.",
" Dear reviewers and chairs, \n\nPlease let us know if there are any further clarifications needed concerning our paper after the rebuttal.\nWe would be happy to ans... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"r3DIuGN3vo-",
"nips_2022_V9ngeCMsZK3",
"VOuBf7pIWU8",
"fXqVGJtGmYM",
"zqBrSbkUnWy",
"3orR65qT7DJ",
"nips_2022_V9ngeCMsZK3",
"nips_2022_V9ngeCMsZK3",
"nips_2022_V9ngeCMsZK3",
"nips_2022_V9ngeCMsZK3",
"nips_2022_V9ngeCMsZK3"
] |
nips_2022_lAN7mytwrIy | ElasticMVS: Learning elastic part representation for self-supervised multi-view stereopsis | Self-supervised multi-view stereopsis (MVS) attracts increasing attention for learning dense surface predictions from only a set of images without onerous ground-truth 3D training data for supervision. However, existing methods rely heavily on local photometric consistency, which fails to accurately identify dense correspondences in broad textureless and reflectance areas. In this paper, we show that geometric proximity such as surface connectedness and occlusion boundaries implicitly inferred from images could serve as reliable guidance for pixel-wise multi-view correspondences. With this insight, we present a novel elastic part representation which encodes physically-connected part segmentations with elastically-varying scales, shapes and boundaries. Meanwhile, a self-supervised MVS framework, namely ElasticMVS, is proposed to learn the representation and estimate per-view depth following a part-aware propagation and evaluation scheme. Specifically, the pixel-wise part representation is trained by a contrastive learning-based strategy, which increases the representation compactness in geometrically concentrated areas and contrasts otherwise. ElasticMVS iteratively optimizes a part-level consistency loss and a surface smoothness loss, based on a set of depth hypotheses propagated from the geometrically concentrated parts. Extensive evaluations demonstrate the superiority of ElasticMVS in reconstruction completeness and accuracy, as well as efficiency and scalability. Particularly, for the challenging large-scale reconstruction benchmark, ElasticMVS demonstrates significant performance gain over both the supervised and self-supervised approaches.
 | Accept | All the reviewers acknowledged the strengths of the paper: self-supervised learning for MVS using contrastive learning to help correspondence based on learned features, SOTA results obtained on the DTU and T&T benchmarks, and well-presented evaluations/ablation studies. The reviewers also pointed out shared weaknesses: the elastic part representation may not deal with textureless regions, the core idea of “propagation” in the paper is very close to PatchmatchNet, and there are too many threshold parameters. The reviewers engaged in the rebuttal and discussion phase, and they all decided to keep their ratings. In general, the proposed part representation is interesting and leads to promising results. Please address the issue of how the learned representation can fix the missing depth regions caused by texturelessness, as pointed out by both reviewers pJ8f and HBPd. | train | [
"IYszVERuDaZ",
"qznqleJ0GEr",
"CQNiqcQuhU7",
"cwRdAV6kcF2",
"i5b_NE7xXo5",
"CRH_dtSyXZi",
"OLgg5Frs1wp",
"wby08EFaTx5",
"QXPGDOiFp1K"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all the reviewers for their thorough reviews and for appreciating the novelty of our method. We have highlighted the corresponding changes in the revised manuscript. We look forward to the discussion with all the reviewers. In the following, we will give our responses to the major issues mentioned by the... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
5
] | [
"nips_2022_lAN7mytwrIy",
"QXPGDOiFp1K",
"wby08EFaTx5",
"OLgg5Frs1wp",
"CRH_dtSyXZi",
"nips_2022_lAN7mytwrIy",
"nips_2022_lAN7mytwrIy",
"nips_2022_lAN7mytwrIy",
"nips_2022_lAN7mytwrIy"
] |
nips_2022_SeHslYhFx5- | Interaction Modeling with Multiplex Attention | Modeling multi-agent systems requires understanding how agents interact. Such systems are often difficult to model because they can involve a variety of types of interactions that layer together to drive rich social behavioral dynamics. Here we introduce a method for accurately modeling multi-agent systems. We present Interaction Modeling with Multiplex Attention (IMMA), a forward prediction model that uses a multiplex latent graph to represent multiple independent types of interactions and attention to account for relations of different strengths. We also introduce Progressive Layer Training, a training strategy for this architecture. We show that our approach outperforms state-of-the-art models in trajectory forecasting and relation inference, spanning three multi-agent scenarios: social navigation, cooperative task achievement, and team sports. We further demonstrate that our approach can improve zero-shot generalization and allows us to probe how different interactions impact agent behavior. | Accept | The reviewers agreed this paper was presented well and a valuable contribution. We urge the authors to take the reviewers' comments into account in the final version.
Also, please increase the size of the tables -- the font size is quite small (maybe too small). | train | [
"HvW5zssEWe",
"TNdkWtXZ5-R",
"WpOK0QhfLX4",
"B9JvWvzBw9h",
"nFksoTtOCgE",
"2bjbm1e745b",
"FTlXwwFaq0d",
"f8ANnv2t0qr",
"vVbQ6mNn7RC4",
"LONqgVUAVu",
"pueVCJBYumR",
"CTbU5OATMR_",
"LYdFRQ67ipB",
"wb90dt9ubV3",
"rRBXK81luln",
"1B0rPRpbjHp",
"pURzF73UVDk",
"SxicR6ApZzo"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the rebuttal. \nI understood your responses other than the third one. \nFor visualizing layers of relations, I saw Figures 10 and 11. I understood Figure 10, but I cannot interpret the second layer in Figure 11 (right bottom). \nTotally, my unclear points are clarified, but probably due to the combi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"vVbQ6mNn7RC4",
"B9JvWvzBw9h",
"nFksoTtOCgE",
"pueVCJBYumR",
"f8ANnv2t0qr",
"1B0rPRpbjHp",
"1B0rPRpbjHp",
"1B0rPRpbjHp",
"pURzF73UVDk",
"SxicR6ApZzo",
"SxicR6ApZzo",
"rRBXK81luln",
"rRBXK81luln",
"nips_2022_SeHslYhFx5-",
"nips_2022_SeHslYhFx5-",
"nips_2022_SeHslYhFx5-",
"nips_2022_Se... |
nips_2022_uP9RiC4uVcR | When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment | AI systems are becoming increasingly intertwined with human life. In order to effectively collaborate with humans and ensure safety, AI systems need to be able to understand, interpret and predict human moral judgments and decisions. Human moral judgments are often guided by rules, but not always. A central challenge for AI safety is capturing the flexibility of the human moral mind -- the ability to determine when a rule should be broken, especially in novel or unusual situations. In this paper, we present a novel challenge set consisting of rule-breaking question answering (RBQA) of cases that involve potentially permissible rule-breaking -- inspired by recent moral psychology studies. Using a state-of-the-art large language model (LLM) as a basis, we propose a novel moral chain of thought (MORALCOT) prompting strategy that combines the strengths of LLMs with theories of moral reasoning developed in cognitive science to predict human moral judgments. MORALCOT outperforms seven existing LLMs by 6.2% F1, suggesting that modeling human reasoning might be necessary to capture the flexibility of the human moral mind. We also conduct a detailed error analysis to suggest directions for future work to improve AI safety using RBQA. Our data and code are available at https://github.com/feradauto/MoralCoT | Accept | This paper addresses an important question of whether LLMs understand human flexible moral judgments. This is a crucial task in AI safety, as it deals with the capability of understanding ethics in relation to heterogeneous contexts. The proposed approach consists in a prompting strategy that generates a sequence of questions and answers that can be exploited to predict human moral judgment.
The reviewers agree that the problem is important and timely, the constructed data resource is sound and of great interest to the community, and the evaluation is done thoroughly. The concerns and questions raised by the reviewers are properly addressed by the authors' response. | train | [
"_CRUqiKIsuJ",
"erFfooc0s8",
"Xi48liWF6H",
"LWUqKKrZf5B",
"rHahkxLmZT0",
"Zd1nOh380Ap",
"_AKV1pv3x64",
"QR6RBKAzCb0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response and incorporating this important extra information in the revision. My questions have been adequately addressed",
" We thank the reviewer for the valuable comment, pointing out that \"this paper addresses a very significant subject\", \"dataset construction is sound\" and \"the paper is ... | [
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"erFfooc0s8",
"QR6RBKAzCb0",
"_AKV1pv3x64",
"Zd1nOh380Ap",
"nips_2022_uP9RiC4uVcR",
"nips_2022_uP9RiC4uVcR",
"nips_2022_uP9RiC4uVcR",
"nips_2022_uP9RiC4uVcR"
] |
nips_2022_9njZa1fm35 | Matryoshka Representation Learning | Learned representations are a central component in modern ML systems, serving a multitude of downstream tasks. When training such representations, it is often the case that computational and statistical constraints for each downstream task are unknown. In this context, rigid, fixed-capacity representations can be either over- or under-accommodating to the task at hand. This leads us to ask: can we design a flexible representation that can adapt to multiple downstream tasks with varying computational resources? Our main contribution is Matryoshka Representation Learning (MRL) which encodes information at different granularities and allows a single embedding to adapt to the computational constraints of downstream tasks. MRL minimally modifies existing representation learning pipelines and imposes no additional cost during inference and deployment. MRL learns coarse-to-fine representations that are at least as accurate and rich as independently trained low-dimensional representations. The flexibility within the learned Matryoshka Representations offers: (a) up to $\mathbf{14}\times$ smaller embedding size for ImageNet-1K classification at the same level of accuracy; (b) up to $\mathbf{14}\times$ real-world speed-ups for large-scale retrieval on ImageNet-1K and 4K; and (c) up to $\mathbf{2}\%$ accuracy improvements for long-tail few-shot classification, all while being as robust as the original representations. Finally, we show that MRL extends seamlessly to web-scale datasets (ImageNet, JFT) across various modalities -- vision (ViT, ResNet), vision + language (ALIGN) and language (BERT). MRL code and pretrained models are open-sourced at https://github.com/RAIVNLab/MRL. | Accept | This paper proposes a Matryoshka Representation Learning paradigm to learn representations at multiple granularities, which can adapt to downstream tasks with different computational budgets. 
All the reviewers find the idea simple and interesting, and acknowledge that the experiments are thorough and impressive. The authors were successful at addressing the reviewers' concerns. Overall, the meta-reviewer recommends acceptance of the paper. | train | [
"mF7J2DpyDkh",
"uRExL0Yg_w",
"AC0-z2GG8ga",
"Ud4y0RsSm6",
"Hmbjc_42_vO",
"LXf6a4SZ2k",
"m-nKVapmyWA4",
"hGOLsqpWTg7",
"JFeMFthioB",
"Pm7UdhvIjGv",
"XbaARUESk8u",
"uUmACsunpG7",
"GauFpDsTQiN",
"VSbOGeHZXsI",
"QfDyZT6Va4c",
"2zGRRWU9vn",
"Of6FCavBFPp",
"jyVSDZCGFKj"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for upgrading the score. ",
" Thank you for your further clarification. I will upgrade my score.",
" Dear reviewer,\n\nWe are grateful for your kind words and support. We are glad that the rebuttal and additional experiments adequately addressed your concerns. Finally, thanks for recogni... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
3
] | [
"uRExL0Yg_w",
"GauFpDsTQiN",
"Ud4y0RsSm6",
"Pm7UdhvIjGv",
"LXf6a4SZ2k",
"GauFpDsTQiN",
"hGOLsqpWTg7",
"uUmACsunpG7",
"QfDyZT6Va4c",
"QfDyZT6Va4c",
"2zGRRWU9vn",
"2zGRRWU9vn",
"Of6FCavBFPp",
"jyVSDZCGFKj",
"nips_2022_9njZa1fm35",
"nips_2022_9njZa1fm35",
"nips_2022_9njZa1fm35",
"nips... |
nips_2022_NIrbtCdxfBl | Deep Fourier Up-Sampling | Existing convolutional neural networks widely adopt spatial down-/up-sampling for multi-scale modeling. However, spatial up-sampling operators (e.g., interpolation, transposed convolution, and un-pooling) heavily depend on local pixel attention, incapable of exploring the global dependency. In contrast, the Fourier domain is in accordance with the nature of global modeling according to the spectral convolution theorem. Unlike the spatial domain that easily performs up-sampling with the property of local similarity, up-sampling in the Fourier domain is more challenging as it does not follow such a local property. In this study, we propose a theoretically feasible Deep Fourier Up-Sampling (FourierUp) to solve these issues. We revisit the relationships between spatial and Fourier domains and reveal the transform rules on the features of different resolutions in the Fourier domain, which provide key insights for FourierUp's designs. FourierUp as a generic operator consists of three key components: 2D discrete Fourier transform, Fourier dimension increase rules, and 2D inverse Fourier transform, which can be directly integrated with existing networks. Extensive experiments across multiple computer vision tasks, including object detection, image segmentation, image de-raining, image dehazing, and guided image super-resolution, demonstrate the consistent performance gains obtained by introducing our FourierUp. Code will be publicly available. | Accept | This paper proposes using Fourier up-sampling for multi-scale modeling. The paper received initial scores of 8 8 5 3. After the rebuttal and in-depth discussions, most reviewers are satisfied with the authors' replies. Reviewer tPpX, who gave a negative score, still has concerns about the novelty and presentation of this paper. 
As I discussed with the reviewer, we ultimately did not find similar works that share the same idea as this paper, so I recommend acceptance. However, I agree with Reviewer tPpX that the presentation of this paper should be improved. There are actually too many equations and derivations in the manuscript, and this is not friendly for a NeurIPS paper. Please remember to move them into the appendix as much as possible when preparing the camera-ready version. | train | [
"VVZkAYxy2dX",
"UdPfIAF5f0H",
"o5dHr59CC60",
"bUtQcQnoRBH",
"WgPwAgXplLw",
"yyPH-ZUUYMK",
"fZciHkVkD2m",
"Wr7xuKfcDQk",
"6N7YResymnE",
"kLsGvODxOaA",
"DYlHzuuGNS9",
"WuAC62jn_o7",
"-_Xzwdn1fZ",
"Bz7hsF4igaQ",
"Gl95N6_2LfB",
"DcZS8xHBqd"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your comment!\n\nI totally disagree with you. First, according to the spectral convolution theorem in Fourier theory, updating a single value in the spectral domain globally affects all original spatial data, which sheds light on design efficient neural architectures with non-local receptive field. The... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
8,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
4
] | [
"UdPfIAF5f0H",
"bUtQcQnoRBH",
"DcZS8xHBqd",
"DcZS8xHBqd",
"Gl95N6_2LfB",
"Bz7hsF4igaQ",
"Bz7hsF4igaQ",
"Bz7hsF4igaQ",
"Bz7hsF4igaQ",
"-_Xzwdn1fZ",
"Gl95N6_2LfB",
"-_Xzwdn1fZ",
"nips_2022_NIrbtCdxfBl",
"nips_2022_NIrbtCdxfBl",
"nips_2022_NIrbtCdxfBl",
"nips_2022_NIrbtCdxfBl"
] |
nips_2022_Mftcm8i4sL | Trajectory Inference via Mean-field Langevin in Path Space | Trajectory inference aims at recovering the dynamics of a population from snapshots of its temporal marginals. To solve this task, a min-entropy estimator relative to the Wiener measure in path space was introduced in [Lavenant et al., 2021], and shown to consistently recover the dynamics of a large class of drift-diffusion processes from the solution of an infinite dimensional convex optimization problem. In this paper, we introduce a grid-free algorithm to compute this estimator. Our method consists of a family of point clouds (one per snapshot) coupled via Schrödinger bridges which evolve with noisy gradient descent. We study the mean-field limit of the dynamics and prove its global convergence to the desired estimator. Overall, this leads to an inference method with end-to-end theoretical guarantees that solves an interpretable model for trajectory inference. We also present how to adapt the method to deal with mass variations, a useful extension when dealing with single cell RNA-sequencing data where cells can branch and die. | Accept | This paper studies the challenging problem of inferring the trajectory of a stochastic process from sample observations of its marginals. Earlier work of Lavenant et al. introduced a consistent estimator based on an optimization problem over continuous time. The main contributions of this paper are (1) introducing a discrete-time variant, based on Schrödinger bridges and minimizing an entropy-regularized optimal transport problem over the marginals rather than over path space. In Theorem 3.1 they show an interesting "Representer theorem" which shows that the two formulations are equivalent. And (2) they prove consistency of their estimator. In Theorem 3.3 they show exponential convergence. 
The main weakness is that the bounds are asymptotic in nature, and they do not provide, for example, quantitative bounds on how many particles are needed as the dimension grows. Overall this is still a nice contribution and seems like an accept. | train | [
"DQz-dUkVHn1",
"IYoet_JYFh",
"Gd_li4OW2ax",
"hBUYRpH2t6",
"FoBqzat1scU",
"Zqg4v7reUMx",
"EdiYctKNv-",
"S9lgAihtW_",
"zbqwxAqwhw"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for answering my questions. It would have been interesting to check your hypotheses about Q 4) with extra experiments, but this is probably an unreasonable request given the limited time of the rebuttal period. I still believe this paper provides a solid theoretical contribution to tackle the considered... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"hBUYRpH2t6",
"FoBqzat1scU",
"zbqwxAqwhw",
"S9lgAihtW_",
"EdiYctKNv-",
"nips_2022_Mftcm8i4sL",
"nips_2022_Mftcm8i4sL",
"nips_2022_Mftcm8i4sL",
"nips_2022_Mftcm8i4sL"
] |
nips_2022_NhrbIME2Ljl | Divert More Attention to Vision-Language Tracking | Relying on Transformer for complex visual feature learning, object tracking has witnessed the new standard for state-of-the-arts (SOTAs). However, this advancement is accompanied by larger training data and a longer training period, making tracking increasingly expensive. In this paper, we demonstrate that the Transformer-reliance is not necessary and the pure ConvNets are still competitive and even better yet more economical and friendly in achieving SOTA tracking. Our solution is to unleash the power of multimodal vision-language (VL) tracking, simply using ConvNets. The essence lies in learning novel unified-adaptive VL representations with our modality mixer (ModaMixer) and asymmetrical ConvNet search. We show that our unified-adaptive VL representation, learned purely with the ConvNets, is a simple yet strong alternative to Transformer visual features, by unbelievably improving a CNN-based Siamese tracker by 14.5% in SUC on challenging LaSOT (50.7%$\rightarrow$65.2%), even outperforming several Transformer-based SOTA trackers. Besides empirical results, we theoretically analyze our approach to evidence its effectiveness. By revealing the potential of VL representation, we expect the community to divert more attention to VL tracking and hope to open more possibilities for future tracking beyond Transformer. Code and models are released at https://github.com/JudasDie/SOTS. | Accept | All three reviewers lean towards the acceptance of the paper. Reviewer YvUr was not 100% excited about the paper, pointing out the simplicity of the approach and the lack of ablations. We encourage the authors to include the new materials they prepared for the rebuttal in the final version of the paper. | train | [
"hx0NZ3PyTq2",
"ziv4uBJfu3S",
"VZRJF8UgfOD",
"CiymgoahqJs",
"eDftctIi9Lm",
"Hp9FVPnvnJQ",
"wq1fM92_-j1",
"covoajdZpvR",
"e8pLsN2czLx",
"-vEU8AuO3M_",
"LdZCnPe-EWs"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I highly value this work due to its novelty and promising improvements. I read other reviewers' comments and the rebuttal, and find that the authors have carefully and adequately addressed all my concerns. This work is the best tracking paper among ones that I have reviewed in NeurIPS. Thus, I keep my original ra... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"VZRJF8UgfOD",
"LdZCnPe-EWs",
"-vEU8AuO3M_",
"LdZCnPe-EWs",
"LdZCnPe-EWs",
"LdZCnPe-EWs",
"e8pLsN2czLx",
"e8pLsN2czLx",
"nips_2022_NhrbIME2Ljl",
"nips_2022_NhrbIME2Ljl",
"nips_2022_NhrbIME2Ljl"
] |
nips_2022_znNmsN_O7Sh | Object Scene Representation Transformer | A compositional understanding of the world in terms of objects and their geometry in 3D space is considered a cornerstone of human cognition. Facilitating the learning of such a representation in neural networks holds promise for substantially improving labeled data efficiency. As a key step in this direction, we make progress on the problem of learning 3D-consistent decompositions of complex scenes into individual objects in an unsupervised fashion. We introduce Object Scene Representation Transformer (OSRT), a 3D-centric model in which individual object representations naturally emerge through novel view synthesis. OSRT scales to significantly more complex scenes with larger diversity of objects and backgrounds than existing methods. At the same time, it is multiple orders of magnitude faster at compositional rendering thanks to its light field parametrization and the novel Slot Mixer decoder. We believe this work will not only accelerate future architecture exploration and scaling efforts, but it will also serve as a useful tool for both object-centric as well as neural scene representation learning communities. | Accept | The paper received positive leaning reviews (2x borderline accept, 1x weak accept, 1x accept). The meta-reviewer agrees with the reviewers' assessment of the paper. | train | [
"oY4qxj0EmKv",
"wlCPfY5CuCN",
"clLgt4ykSVf",
"BZLRkgAb-r0",
"c6czX2E9hhp",
"mfg7V2LxBi",
"GbpUiXOW7B",
"AH0VNcK4abc",
"lo6A3sO6LI",
"WiNyOfOwdLN",
"K59pMKllt7S"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I do hope that the final text incorporates the additional explanations/results:\na) Includes the extended version of Tab2 reported above instead of only the MSN-H dataset\nb) has a more qualified statement on the speedup\n\nOverall, I would like to keep my current rating as I believe this... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"c6czX2E9hhp",
"BZLRkgAb-r0",
"K59pMKllt7S",
"WiNyOfOwdLN",
"lo6A3sO6LI",
"AH0VNcK4abc",
"nips_2022_znNmsN_O7Sh",
"nips_2022_znNmsN_O7Sh",
"nips_2022_znNmsN_O7Sh",
"nips_2022_znNmsN_O7Sh",
"nips_2022_znNmsN_O7Sh"
] |
nips_2022_xs9Sia9J_O | Rethinking Individual Global Max in Cooperative Multi-Agent Reinforcement Learning | In cooperative multi-agent reinforcement learning, centralized training and decentralized execution (CTDE) has achieved remarkable success. Individual Global Max (IGM) decomposition, which is an important element of CTDE, measures the consistency between local and joint policies. The majority of IGM-based research focuses on how to establish this consistent relationship, but little attention has been paid to examining IGM's potential flaws. In this work, we reveal that the IGM condition is a lossy decomposition, and the error of lossy decomposition will accumulate in hypernetwork-based methods. To address the above issue, we propose to adopt an imitation learning strategy to separate the lossy decomposition from Bellman iterations, thereby avoiding error accumulation. The proposed strategy is theoretically proved and empirically verified on the StarCraft Multi-Agent Challenge benchmark problem with zero sight view. The results also confirm that the proposed method outperforms state-of-the-art IGM-based approaches. | Accept | This paper revisits the notion of Individual Global Max in multi-agent reinforcement learning, in particular considering how to address the fact that individual greedy actions may not be globally optimal in cooperative settings.
Overall, the general sentiment is that this is interesting work with a useful contribution, but that the paper could be further improved. There were some specific concerns regarding the experimental results, which the authors answered in the rebuttal. The results for sight view 5 are particularly relevant, given that they identify a setting in which the system is not extremely partially observable, but where their algorithm still provides benefits.
I also note that the paper's presentation is less polished than it could be in a number of places (For example, Table captions are sometimes brief / missing punctuation, graphs are somewhat hard to read, the equation in Prop. 5 should be indented). | train | [
"LNsI_b8PR3",
"aNijtD_-nA0",
"fBy0fVMNqp4",
"erC6DtfYxJ2",
"s-Icu9OyGsu",
"_LvlM2tjCzi",
"svBK6KNMD4y",
"4Ql-cw5Kxj_",
"KoRasQfxCk7",
"_nV140U6ksR",
"2Ssjl0ff_zJ",
"73jmrkd9U-N",
"w28xugoktA0",
"HfHYPuuW1r",
"LAyV9YSxvA_",
"ATGI8cVL8Y2",
"oVkjssIRrhg",
"ZIN0S4yVz-a",
"leWzAEVRrty... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Because the $error_{dec}$ caused by partial observation is inevitable, this is the reason why $error_{dec}$ of time step t is left in Eq.(9). However, by comparing equation 7 and equation 9, we can see that error accumulation resulting from $error_{dec}$ can be avoided. ",
" I am always glad to receive your rep... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"fBy0fVMNqp4",
"fBy0fVMNqp4",
"HfHYPuuW1r",
"_nV140U6ksR",
"73jmrkd9U-N",
"HfHYPuuW1r",
"4Ql-cw5Kxj_",
"KoRasQfxCk7",
"LAyV9YSxvA_",
"2Ssjl0ff_zJ",
"leWzAEVRrty",
"w28xugoktA0",
"ZIN0S4yVz-a",
"oVkjssIRrhg",
"ATGI8cVL8Y2",
"nips_2022_xs9Sia9J_O",
"nips_2022_xs9Sia9J_O",
"nips_2022_... |
nips_2022_-GgDBzwZ-e7 | Discrete-Convex-Analysis-Based Framework for Warm-Starting Algorithms with Predictions | Augmenting algorithms with learned predictions is a promising approach for going beyond worst-case bounds. Dinitz, Im, Lavastida, Moseley, and Vassilvitskii~(2021) have demonstrated that warm-starts with learned dual solutions can improve the time complexity of the Hungarian method for weighted perfect bipartite matching. We extend and improve their framework in a principled manner via \textit{discrete convex analysis} (DCA), a discrete analog of convex analysis. We show the usefulness of our DCA-based framework by applying it to weighted perfect bipartite matching, weighted matroid intersection, and discrete energy minimization for computer vision. Our DCA-based framework yields time complexity bounds that depend on the $\ell_\infty$-distance from a predicted solution to an optimal solution, which has two advantages relative to the previous $\ell_1$-distance-dependent bounds: time complexity bounds are smaller, and learning of predictions is more sample efficient. We also discuss whether to learn primal or dual solutions from the DCA perspective. | Accept | In this paper, the authors provide new theoretical guarantees for augmenting algorithms with learned predictions. Based on discrete convex analysis (DCA), they generalize previous results of Dinitz et al., and obtain better time complexity bounds for a number of online problems. The application of DCA to online algorithms with predictions is interesting, and the improvements in the bounds are significant. | test | [
"_irYiAOF6xP",
"pqjsKgz7dF3",
"_aCjNLtXm-",
"gmoXgeiHOM",
"oscVwJw6XQF",
"LcbZc8Qk2L",
"4BjR6vUUuk6"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response and for running this experiment! I think it helps to see the contribution in this work as giving an extended and tighter analysis of this warm-starting approach.",
" We appreciate the reviewer's thoughtful comments and questions on the worst-case bounds.\n\n \n> What is the reason ... | [
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
1
] | [
"gmoXgeiHOM",
"4BjR6vUUuk6",
"oscVwJw6XQF",
"LcbZc8Qk2L",
"nips_2022_-GgDBzwZ-e7",
"nips_2022_-GgDBzwZ-e7",
"nips_2022_-GgDBzwZ-e7"
] |
nips_2022_vExdPu73R2z | R^2-VOS: Robust Referring Video Object Segmentation via Relational Cycle Consistency | Referring video object segmentation (R-VOS) aims to segment the object masks in a video given a referring linguistic expression to the object. It is a recently introduced task attracting growing research attention. However, all existing works make a strong assumption: The object depicted by the expression must exist in the video, namely, the expression and video must have an object-level semantic consensus. This is often violated in real-world applications where an expression can be queried to false videos, and existing methods always fail in such false queries due to abusing the assumption. In this work, we emphasize that studying semantic consensus is necessary to improve the robustness of R-VOS. Accordingly, we pose an extended task from R-VOS without the semantic consensus assumption, named Robust R-VOS ($\mathrm{R}^2$-VOS). The $\mathrm{R}^2$-VOS task is essentially related to the joint modeling of the primary R-VOS task and its dual problem (text reconstruction). We embrace the observation that the embedding spaces have relational consistency through the cycle of text-video-text transformation which connects the primary and dual problems. We leverage the cycle consistency to discriminate and augment the semantic consensus, thus advancing the primary task. Parallel optimization of the primary and dual problems are enabled by introducing an early grounding medium. A new evaluation dataset, $\mathrm{R}^2$-Youtube-VOS, is collected to measure the robustness of R-VOS models against unpaired videos and expressions. Our method not only identifies negative pairs of unrelated expressions and videos, but also improves the segmentation accuracy for positive pairs with a superior disambiguating ability. The proposed model achieves the state-of-the-art performance on Ref-DAVIS17, Ref-Youtube-VOS, and the novel $\mathrm{R}^2$-Youtube-VOS dataset. 
| Reject | This paper presents an approach for video object segmentation. The paper considers the possibility that an (object) expression may not correspond to any object in the given video. The approach is based on relational cycle consistency, which the reviewers find technically sound. The paper also has a dataset contribution.
After the rebuttal and discussions, the reviewers still maintained split ratings. While two reviewers find the paper has its merit, Reviewer Fhve is against the paper, pointing out his/her concern regarding the experiments' fairness. Specifically, the reviewer points out that the comparison against the other works not using negative samples is unfair, as the training of the proposed approach benefits from such negative samples.
Although we agree with the authors' explanation that the ability to explicitly consider negative samples for out-of-distribution discrimination is a strength and an interesting technical aspect of the paper, the paper lacks sufficient experiments to support the argument. It would have been better if the authors provided explicit experiments comparing their approach against baselines like ReferFormer by adding classification heads for the background with negative samples in a meaningful way, as the authors also suggested. Also, there is some concern about novelty, shared by Fhve and Gh1L.
Considering these aspects, the ACs recommend the rejection of the paper. | train | [
"aWr-TEBWcyq",
"sT3wKtNWDxt",
"iXHD21a0pId",
"Plck1DfDKAO",
"jO3fWBcTWgx",
"OfEszt74USL",
"oA4jHF1--5t",
"Ue4Cx1uq0H6",
"i-wWVRkHOKs",
"XRAoXJR3HDH",
"4svVX8ETKTR",
"tP02CGWOE7",
"yjrei3MTm1y",
"22deXEFa6Jv"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your comments.\n\n---\n**1. Handle unmatched expressions by adding a background class to ReferFormer and other methods.**\n\nWe provide a further discussion to address that adding a naïve classification model that treats negative samples as an additional class is limited, while our method exploiting th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iXHD21a0pId",
"Plck1DfDKAO",
"XRAoXJR3HDH",
"4svVX8ETKTR",
"OfEszt74USL",
"Ue4Cx1uq0H6",
"nips_2022_vExdPu73R2z",
"22deXEFa6Jv",
"22deXEFa6Jv",
"yjrei3MTm1y",
"tP02CGWOE7",
"nips_2022_vExdPu73R2z",
"nips_2022_vExdPu73R2z",
"nips_2022_vExdPu73R2z"
] |
nips_2022_u4dXcUEsN7B | Exploring Example Influence in Continual Learning | Continual Learning (CL) sequentially learns new tasks like human beings, with the goal to achieve better Stability (S, remembering past tasks) and Plasticity (P, adapting to new tasks). Due to the fact that past training data is not available, it is valuable to explore the influence difference on S and P among training examples, which may improve the learning pattern towards better SP. Inspired by Influence Function (IF), we first study example influence via adding perturbation to example weight and computing the influence derivation. To avoid the storage and calculation burden of Hessian inverse in neural networks, we propose a simple yet effective MetaSP algorithm to simulate the two key steps in the computation of IF and obtain the S- and P-aware example influence. Moreover, we propose to fuse two kinds of example influence by solving a dual-objective optimization problem, and obtain a fused influence towards SP Pareto optimality. The fused influence can be used to control the update of model and optimize the storage of rehearsal. Empirical results show that our algorithm significantly outperforms state-of-the-art methods on both task- and class-incremental benchmark CL datasets. | Accept | There was a consensus among reviewers that this paper should be accepted. The paper investigates an interesting direction of combining research on Example Influence with Continual Learning. The method they introduce was considered to be novel and well-motivated by the reviewers, and the experiments show good improvement over the many rehearsal-based baselines. | train | [
"jv1ek0VuWDJ",
"Gs0EZrtbyWi",
"OpNRQF0g4dP",
"mG-Wt7fN9Lr",
"7yBACbxzGW3",
"mAHcA0LSyj",
"3a4uNgp51UC",
"EisE-2-remd",
"NPmZQqZfD9L",
"0YeHvlOws3a",
"BrwRVd52tvO",
"ETqf1_tlUgG",
"jGTmZb1lJCM",
"D452RkQ30d3",
"vlt0HF_VcSr",
"s2x1OqNETKI",
"mDYZ2tlZUvI"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank your for your valuable suggestions! \n\n> I would advise to replace \"Finished Accuracy\" by \"Final Accuracy\" since it is supposed to be the same thing and \"Final Accuracy\" is mostly used in the literature.\n\n**Response:** Thank you for your suggestion. We have changed the 'finished acc' to 'final acc'... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"Gs0EZrtbyWi",
"ETqf1_tlUgG",
"mG-Wt7fN9Lr",
"7yBACbxzGW3",
"s2x1OqNETKI",
"s2x1OqNETKI",
"s2x1OqNETKI",
"s2x1OqNETKI",
"mDYZ2tlZUvI",
"mDYZ2tlZUvI",
"mDYZ2tlZUvI",
"mDYZ2tlZUvI",
"vlt0HF_VcSr",
"vlt0HF_VcSr",
"nips_2022_u4dXcUEsN7B",
"nips_2022_u4dXcUEsN7B",
"nips_2022_u4dXcUEsN7B"
... |
nips_2022_9t-j3xDm7_Q | Motion Transformer with Global Intention Localization and Local Movement Refinement | Predicting multimodal future behavior of traffic participants is essential for robotic vehicles to make safe decisions. Existing works explore directly predicting future trajectories based on latent features or utilizing dense goal candidates to identify agents' destinations, where the former strategy converges slowly since all motion modes are derived from the same feature while the latter strategy has an efficiency issue since its performance highly relies on the density of goal candidates. In this paper, we propose the Motion TRansformer (MTR) framework that models motion prediction as the joint optimization of global intention localization and local movement refinement. Instead of using goal candidates, MTR incorporates spatial intention priors by adopting a small set of learnable motion query pairs. Each motion query pair takes charge of trajectory prediction and refinement for a specific motion mode, which stabilizes the training process and facilitates better multimodal predictions. Experiments show that MTR achieves state-of-the-art performance on both the marginal and joint motion prediction challenges, ranking 1st on the leaderboards of the Waymo Open Motion Dataset. Code will be available at https://github.com/sshaoshuai/MTR. | Accept | This paper proposes to model traffic vehicles using a transformer-based architecture for iteratively refining multimodal trajectory predictions. While the method is related to and builds upon several similar works in the area, it does also introduce some interesting new components such as the iterative refinement and the dynamic attention. 
Further, the strength of the experimental results from the combined system alone makes this paper important for researchers working in these areas: the method achieves the state of the art for trajectory prediction on two very widely used datasets (Waymo and Argoverse), compared to published leaderboards. All four reviewers unanimously agree that this paper is above the bar for acceptance, and I concur. | train | [
"wTRg0zr3rQL",
"eiBdVTXcXw",
"JPSSfkJw-ej",
"pHJep_s_0ve",
"bttcmk1hJ3l",
"1IqDjqswtbc",
"dmddg2xzGbB",
"wWgrosHNLjz",
"mKp53-kA82z",
"2sw1vrXSIJd",
"trULltT-lIl6",
"swAsmsa16D3",
"iy6N46x13dd",
"miAW_MP8fEc",
"EsAEXZmp7gn",
"nI_-TLdMHRL",
"MIuRPHkh6rJ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for acknowledging our additional experiments and providing positive feedback! \n\nYour constructive comments and suggestions are very helpful in improving our paper quality. Thanks!\n\n",
" Thank you for uploading the revised paper. It's a great work and I increased my score to 7.",
" Than... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
5
] | [
"JPSSfkJw-ej",
"bttcmk1hJ3l",
"iy6N46x13dd",
"1IqDjqswtbc",
"dmddg2xzGbB",
"mKp53-kA82z",
"wWgrosHNLjz",
"MIuRPHkh6rJ",
"2sw1vrXSIJd",
"nI_-TLdMHRL",
"swAsmsa16D3",
"EsAEXZmp7gn",
"miAW_MP8fEc",
"nips_2022_9t-j3xDm7_Q",
"nips_2022_9t-j3xDm7_Q",
"nips_2022_9t-j3xDm7_Q",
"nips_2022_9t-... |
nips_2022_aKXBrj0DHm | Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection | Existing open-vocabulary object detectors typically enlarge their vocabulary sizes by leveraging different forms of weak supervision. This helps generalize to novel objects at inference. Two popular forms of weak supervision used in open-vocabulary detection (OVD) include a pretrained CLIP model and image-level supervision. We note that both these modes of supervision are not optimally aligned for the detection task: CLIP is trained with image-text pairs and lacks precise localization of objects, while image-level supervision has been used with heuristics that do not accurately specify local object regions. In this work, we propose to address this problem by performing object-centric alignment of the language embeddings from the CLIP model. Furthermore, we visually ground the objects with only image-level supervision using a pseudo-labeling process that provides high-quality object proposals and helps expand the vocabulary during training. We establish a bridge between the above two object-alignment strategies via a novel weight transfer function that aggregates their complementary strengths. In essence, the proposed model seeks to minimize the gap between object and image-centric representations in the OVD setting. On the COCO benchmark, our proposed approach achieves 36.6 AP50 on novel classes, an absolute 8.2 gain over the previous best performance. For LVIS, we surpass the state-of-the-art ViLD model by 5.0 mask AP for rare categories and 3.4 overall. Code: https://github.com/hanoonaR/object-centric-ovd. | Accept | The paper receives overall positive ratings after rebuttal. The major concern before rebuttal was that the benefits and limitations of using MViT were unclear. The rebuttal has addressed most concerns from the reviewers. The AC encourages the authors to incorporate the review comments in the final revision. | train | [
"eA3OVdS5TJk",
"uF_SodZGbO9",
"NmK6yZsCFkJ",
"mn84fgpbbis",
"XyMjKmOmMlv",
"Kbiak1N698i",
"HlF9duuyGfE-",
"XiTkxlFKyKY",
"16Kg5maX2nL",
"sxo0QsVu9l9",
"knlAr3X4Nz3",
"UUS25ObYcl-",
"XjwqiiC4bSd",
"uePzSrnxwir",
"gy9MKU7SI_Z"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all the reviewers for going through our response and providing support. As suggested by the reviewers, we will update the numbers which are obtained after exclusion of all novel/rare classes. Our response shows that the benefit of our approach in comparison to state-of-the-art methods ViLD (ICLR'22) and ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"nips_2022_aKXBrj0DHm",
"sxo0QsVu9l9",
"mn84fgpbbis",
"16Kg5maX2nL",
"Kbiak1N698i",
"HlF9duuyGfE-",
"uePzSrnxwir",
"gy9MKU7SI_Z",
"XjwqiiC4bSd",
"UUS25ObYcl-",
"nips_2022_aKXBrj0DHm",
"nips_2022_aKXBrj0DHm",
"nips_2022_aKXBrj0DHm",
"nips_2022_aKXBrj0DHm",
"nips_2022_aKXBrj0DHm"
] |
nips_2022_6avZnPpk7m9 | What Makes a "Good" Data Augmentation in Knowledge Distillation - A Statistical Perspective | Knowledge distillation (KD) is a general neural network training approach that uses a teacher to guide a student. Existing works mainly study KD from the network output side (e.g., trying to design a better KD loss function), while few have attempted to understand it from the input side. Especially, its interplay with data augmentation (DA) has not been well understood. In this paper, we ask: Why do some DA schemes (e.g., CutMix) inherently perform much better than others in KD? What makes a “good” DA in KD? Our investigation from a statistical perspective suggests that a good DA scheme should reduce the variance of the teacher’s mean probability, which will eventually lead to a lower generalization gap for the student. Besides the theoretical understanding, we also introduce a new entropy-based data-mixing DA scheme to enhance CutMix. Extensive empirical studies support our claims and demonstrate how we can harvest considerable performance gains simply by using a better DA scheme in knowledge distillation. | Accept | After a lively and interactive author discussion period all reviewers ended up recommending to accept this paper.
The work examines the ways in which different data augmentation schemes can increase knowledge distillation performance, providing some theoretical analysis with actionable insights and experiments to back it up. The work focuses on the generalization gap of the student under different sampling schemes, and asserts that their study leads to the conclusion that a good data augmentation scheme should reduce the variance of the empirical distilled risk between the teacher and student. Reviewers were generally positive about the clarity of the manuscript after some changes during the author discussion.
The AC recommends acceptance. | test | [
"zHGgnRqcdF",
"h193_rVJfgV",
"8qF-zRm7kmt",
"HfpXweNZQnM",
"v2s5FD7SaZ-",
"Bih03GkXgJ",
"FcRnURGdvxj",
"Bclvdf7t0in",
"PbGsE8syp3n",
"iHArleH0vaB",
"UvkEOgN_wL_",
"3FirgJ3JgqI",
"Z_8rDY7S41k",
"eUlJSSudJK",
"a-R3Y-7Pp0a",
"YrO73iAT3Gq",
"1ac2y7awZZL",
"Mh6MOKvbiqs",
"VBKKZa2NR23"... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Thank you *so much* for generously raising the score! Your suggestions are well-taken. We *promise* to materialize the 3 conditional changes in our revised version. Thanks again!",
" **Edited review**\n\nIn light of the authors _active discussion and contributions_ during this phase, as well as their addressal ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"h193_rVJfgV",
"8qF-zRm7kmt",
"a-R3Y-7Pp0a",
"Bclvdf7t0in",
"Bih03GkXgJ",
"iHArleH0vaB",
"Bclvdf7t0in",
"PbGsE8syp3n",
"3FirgJ3JgqI",
"UvkEOgN_wL_",
"Mh6MOKvbiqs",
"WECmLYG0VOr",
"nips_2022_6avZnPpk7m9",
"a-R3Y-7Pp0a",
"YrO73iAT3Gq",
"eaVXVhySeUk",
"eaVXVhySeUk",
"jZtCYAS3H9",
"j... |
nips_2022_agTr-vRQsa | Behavior Transformers: Cloning $k$ modes with one stone | While behavior learning has made impressive progress in recent times, it lags behind computer vision and natural language processing due to its inability to leverage large, human-generated datasets. Human behavior has a wide variance, multiple modes, and human demonstrations naturally do not come with reward labels. These properties limit the applicability of current methods in Offline RL and Behavioral Cloning to learn from large, pre-collected datasets. In this work, we present Behavior Transformer (BeT), a new technique to model unlabeled demonstration data with multiple modes. BeT retrofits standard transformer architectures with action discretization coupled with a multi-task action correction inspired by offset prediction in object detection. This allows us to leverage the multi-modal modeling ability of modern transformers to predict multi-modal continuous actions. We experimentally evaluate BeT on a variety of robotic manipulation and self-driving behavior datasets. We show that BeT significantly improves over prior state-of-the-art work on solving demonstrated tasks while capturing the major modes present in the pre-collected datasets. Finally, through an extensive ablation study, we further analyze the importance of every crucial component in BeT. Videos of behavior generated by BeT are available here: https://mahis.life/bet | Accept | *Summary*
The paper addresses the problem of learning from expert demonstrations, focusing on the setting where the demonstrations are pre-collected, rewards are absent, and the distribution of demonstration trajectories contains multiple modes (limiting the performance of behavior cloning). The proposed approach uses k-means to cluster continuous actions into discrete tokens which are modeled using a transformer architecture with an additional continuous offset from the cluster centers.
*Reviews*
The discussion has resolved all reviewer concerns and strengthened the paper with additional baselines, ablations, and positioning relative to related work. All four reviewers are in agreement that this approach is novel, well justified, effective in multiple empirical settings, and that the ablation studies clearly establish why the model performs well. The final reviewer ratings are 6 (WA), 6 (WA), 7 (A) and 8 (SA). At least two of the reviewers see this paper as high impact, and I agree. I therefore recommend this submission can be accepted and considered for an outstanding paper award.
*Potential Impact*
In generative image modeling, the shift to modeling the continuous space of pixels using discrete tokens and transformers helped underpin recent massive improvements in quality (e.g., in the [DALL-E](https://arxiv.org/pdf/2102.12092.pdf) paper and others). In the image domain, the two-stage approach (learning a codebook, then training a transformer) seems to successfully capture both high-frequency details and low-frequency structure. Therefore it's interesting to see this paper apply similar ideas to the modeling of behavior/actions. To my knowledge this is the first paper to open up this direction and it could lead to further advancements with the application of more advanced clustering / codebook learning techniques and so on (as has occurred in the image domain). Therefore I see the potential impact of this paper as high. | train | [
"1VBrr5PyyBV",
"qP0PGiSj7d6",
"dNasDBSCDnx",
"1UiOp6730mA",
"860DlDAf1is",
"IH6wdhqhzu",
"wRUP6i1qfbl",
"OdTr2pXE2DB",
"9__5KmWX8qU",
"5-cN-AUSf9-",
"cVAgZsLmfAv",
"lh5DKv8sMQM",
"YJ2jXwme3C2",
"H2qy1GPqgkN",
"PuCDI2ydm2X",
"-ve5-Yx-37j"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" First of all, we thank you for your follow-up comments – we are happy to provide more context to the best of our abilities. Our responses are as follows:\n1. **IBC + MinGPT:** \na. **Success rate:** The IBC + MinGPT model completes 0 tasks after 72 hours of training on both Kitchen and Block Pushing tasks. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"qP0PGiSj7d6",
"9__5KmWX8qU",
"cVAgZsLmfAv",
"OdTr2pXE2DB",
"nips_2022_agTr-vRQsa",
"wRUP6i1qfbl",
"-ve5-Yx-37j",
"PuCDI2ydm2X",
"H2qy1GPqgkN",
"YJ2jXwme3C2",
"YJ2jXwme3C2",
"nips_2022_agTr-vRQsa",
"nips_2022_agTr-vRQsa",
"nips_2022_agTr-vRQsa",
"nips_2022_agTr-vRQsa",
"nips_2022_agTr-... |
nips_2022_ievxJqXwPCm | Deliberated Domain Bridging for Domain Adaptive Semantic Segmentation | In unsupervised domain adaptation (UDA), directly adapting from the source to the target domain usually suffers significant discrepancies and leads to insufficient alignment. Thus, many UDA works attempt to vanish the domain gap gradually and softly via various intermediate spaces, dubbed domain bridging (DB). However, for dense prediction tasks such as domain adaptive semantic segmentation (DASS), existing solutions have mostly relied on rough style transfer and how to elegantly bridge domains is still under-explored. In this work, we resort to data mixing to establish a deliberated domain bridging (DDB) for DASS, through which the joint distributions of source and target domains are aligned and interacted with each in the intermediate space. At the heart of DDB lies a dual-path domain bridging step for generating two intermediate domains using the coarse-wise and the fine-wise data mixing techniques, alongside a cross-path knowledge distillation step for taking two complementary models trained on generated intermediate samples as ‘teachers’ to develop a superior ‘student’ in a multi-teacher distillation manner. These two optimization steps work in an alternating way and reinforce each other to give rise to DDB with strong adaptation power. Extensive experiments on adaptive segmentation tasks with different settings demonstrate that our DDB significantly outperforms state-of-the-art methods. | Accept | **Summary**: This paper proposes an effective Deliberated Domain Bridging (DDB) approach for domain adaptive semantic segmentation (DASS). It leverages two data mixing techniques: region-level mix and class-level mix, to train two corresponding teacher models, which then guide one student model on the target domain. It is evaluated on multiple benchmarks.
**Strength**: The paper is well written. It is well motivated based on the limitations of previous methods. The proposed approach is novel, interesting, and effective. The experiments (with the toy game) are solid.
**Weakness**: Training efficiency and complexity. Lack of ablation study on some hyperparameters and design choices. Some missing references/comparisons; unclear positioning of the work w.r.t. prior work.
**Recommendation**: The paper receives consistently positive ratings. After rebuttal, most of the reviewers’ concerns are addressed and the paper clearly has strengths. The AC thus suggests acceptance. The AC strongly suggests that the authors incorporate their rebuttal (e.g., additional results) into their camera-ready version.
| train | [
"wMhfYyMsKFm",
"dKUSOnT9H-w",
"dyiKmA2CW1Y",
"k3syKxm9RuX",
"jC8CimEKtpe",
"P8M17nZVp6K",
"4E5vIhlxN9a",
"8klHHtS5tua",
"zj1YQ48d4bC",
"9AmzA4k08eH",
"ihyqYlXfmY",
"09gg1dybTjl",
"QgrZS8-pHRk",
"L5iGHCnkY6N",
"zJHLKFA27ix",
"XGBHrBKHI0W",
"1nA60WQ20v",
"Z4kn0K_8PW0"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you again for your valuable comments and the kind support of this work. ",
" Thanks very much for your appreciation and recognition, but may I respectfully ask you to raise your rating for our paper to ensure an acceptance, thanks.",
" Dear authors, your response has cleared my concerns.\nThanks,\ni4C... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
4
] | [
"k3syKxm9RuX",
"dyiKmA2CW1Y",
"8klHHtS5tua",
"4E5vIhlxN9a",
"P8M17nZVp6K",
"09gg1dybTjl",
"Z4kn0K_8PW0",
"XGBHrBKHI0W",
"nips_2022_ievxJqXwPCm",
"Z4kn0K_8PW0",
"Z4kn0K_8PW0",
"1nA60WQ20v",
"XGBHrBKHI0W",
"zJHLKFA27ix",
"nips_2022_ievxJqXwPCm",
"nips_2022_ievxJqXwPCm",
"nips_2022_ievx... |
nips_2022_b90lKL1IqcF | VoxGRAF: Fast 3D-Aware Image Synthesis with Sparse Voxel Grids | State-of-the-art 3D-aware generative models rely on coordinate-based MLPs to parameterize 3D radiance fields. While demonstrating impressive results, querying an MLP for every sample along each ray leads to slow rendering.
Therefore, existing approaches often render low-resolution feature maps and process them with an upsampling network to obtain the final image.
Albeit efficient, neural rendering often entangles viewpoint and content such that changing the camera pose results in unwanted changes of geometry or appearance.
Motivated by recent results in voxel-based novel view synthesis, we investigate the utility of sparse voxel grid representations for fast and 3D-consistent generative modeling in this paper.
Our results demonstrate that monolithic MLPs can indeed be replaced by 3D convolutions when combining sparse voxel grids with progressive growing, free space pruning and appropriate regularization.
To obtain a compact representation of the scene and allow for scaling to higher voxel resolutions, our model disentangles the foreground object (modeled in 3D) from the background (modeled in 2D).
In contrast to existing approaches, our method requires only a single forward pass to generate a full 3D scene. It hence allows for efficient rendering from arbitrary viewpoints while yielding 3D consistent results with high visual fidelity. Code and models are available at https://github.com/autonomousvision/voxgraf. | Accept | It is valuable now to introduce this technical idea, even if the results do not quite match existing methods. Future work building on this idea may well do so, and it would impede the progress of the subfield to demand both the new idea and SOTA results.
The rebuttal takes on extra work building on the reviewers' suggestions, which gives confidence that the final paper will be of high quality.
At the same time, the primary criterion for oral presentation is importance to the community as a whole, so the relatively narrow scope (3D computer vision) would really require more world-leading results, and possibly demonstrations of a wider set of applications in order to alert practitioners in adjacent subfields.
| val | [
"OB7l08-I05z",
"LNx-kMEu9YC",
"i0kQAfB5v8B",
"_iVhwM0dPrF",
"zm8tQTuah66",
"DAl5FALoshn",
"RdCbUWsRIbB",
"SN1g22Q9_fM",
"m69Da5QZDRK",
"wDgSHSERWD"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors' rebuttal. Since EG3D was indeed a concurrent work, I think it is fair to not consider it in this review. Hence I am leaning towards acceptance.",
" Thank you, that answers my questions.",
" Thank you for the quick reply to our rebuttal.\n\n1. Yes, exactly. We ran all methods using ou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"DAl5FALoshn",
"i0kQAfB5v8B",
"_iVhwM0dPrF",
"zm8tQTuah66",
"wDgSHSERWD",
"m69Da5QZDRK",
"SN1g22Q9_fM",
"nips_2022_b90lKL1IqcF",
"nips_2022_b90lKL1IqcF",
"nips_2022_b90lKL1IqcF"
] |
nips_2022_wQ2QNNP8GtM | Cross Aggregation Transformer for Image Restoration | Recently, the Transformer architecture has been introduced into image restoration to replace the convolutional neural network (CNN), with surprising results. Considering the high computational complexity of Transformers with global attention, some methods use a local square window to limit the scope of self-attention. However, these methods lack direct interaction among different windows, which limits the establishment of long-range dependencies. To address the above issue, we propose a new image restoration model, Cross Aggregation Transformer (CAT). The core of our CAT is the Rectangle-Window Self-Attention (Rwin-SA), which utilizes horizontal and vertical rectangle window attention in different heads in parallel to expand the attention area and aggregate the features across different windows. We also introduce the Axial-Shift operation for different window interactions. Furthermore, we propose the Locality Complementary Module to complement the self-attention mechanism, which incorporates the inductive bias of CNN (e.g., translation invariance and locality) into the Transformer, enabling global-local coupling. Extensive experiments demonstrate that our CAT outperforms recent state-of-the-art methods on several image restoration applications. The code and models are available at https://github.com/zhengchen1999/CAT. | Accept | This paper proposes a cross aggregation transformer for image restoration. The Rwin-SA with axial-shift is introduced to aggregate the features across different windows, and the locality complementary module (LCM) is introduced to capture both local and global information. Extensive experiments on different datasets and tasks demonstrate that the CAT outperforms state-of-the-art methods. Several concerns were raised by the reviewers, including the novelty, discussion, and experiments. 
After an in-depth discussion between the reviewers and authors, three reviewers take a positive stance (strong accept) on this work, and reviewer RMwD increased their rating to borderline reject, acknowledging the new experiments provided in the rebuttal. Considering the average score and the contribution of this work, the AC recommends acceptance. The AC strongly urges the authors to consider all the comments in preparing the final version. | train | [
"gOOsAF-QGqW",
"F4KkKZoj_c8",
"KmjoCIUlrRB",
"YofjfiVCGC",
"UUgQMTrE09",
"uWuHy7Yyt4b",
"xGXPrMNETeV",
"kcIJRR1PDjc",
"6DfniVRuul",
"HepkOcAvaKm",
"TprsIrA-39z",
"_IHQtBWS3ea",
"_641aCNHdXyv",
"r_HRB09L0IO",
"3YHve2Bhba",
"6hPPTS2l8_r",
"ntFxVSpE46",
"CsKmDmOKQB",
"K5SSEQ2npkb",
... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"... | [
" We thank all reviewers and area chairs for their valuable time and comments. After discussing with reviewers and providing more clarifications/results/analyses, we would like to give a brief response.\n\nReviewer mWKh (denoted as R1), Reviewer LKyp (denoted as R3), and Reviewer J9TR (denoted as R4) all hold a **p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
4,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
5
] | [
"nips_2022_wQ2QNNP8GtM",
"KmjoCIUlrRB",
"YofjfiVCGC",
"VzjEgWVanXS",
"VzjEgWVanXS",
"Lg5AqAG0VoI",
"HepkOcAvaKm",
"_IHQtBWS3ea",
"Lg5AqAG0VoI",
"nvOE5jXoIR6",
"VzjEgWVanXS",
"Z27T25YoQdA",
"Z27T25YoQdA",
"nips_2022_wQ2QNNP8GtM",
"VzjEgWVanXS",
"VzjEgWVanXS",
"VzjEgWVanXS",
"VzjEgWV... |
nips_2022_iCxRsZcVVAH | Optimistic Curiosity Exploration and Conservative Exploitation with Linear Reward Shaping | In this work, we study the simple yet universally applicable case of reward shaping in value-based Deep Reinforcement Learning (DRL). We show that reward shifting in the form of a linear transformation is equivalent to changing the initialization of the $Q$-function in function approximation. Based on such an equivalence, we bring the key insight that a positive reward shifting leads to conservative exploitation, while a negative reward shifting leads to curiosity-driven exploration. Accordingly, conservative exploitation improves offline RL value estimation, and optimistic value estimation improves exploration for online RL. We validate our insight on a range of RL tasks and show its improvement over baselines: (1) In offline RL, the conservative exploitation leads to improved performance based on off-the-shelf algorithms; (2) In online continuous control, multiple value functions with different shifting constants can be used to tackle the exploration-exploitation dilemma for better sample efficiency; (3) In discrete control tasks, a negative reward shifting yields an improvement over the curiosity-based exploration method. | Accept | This paper proposes a simple but general way to improve exploration in RL based on the equivalence between reward shifting and the initialization of value function. The paper shows that it is straightforward to implement conservative exploitation and curiosity-driven exploration based on this idea. The results on a variety of offline/online RL settings show that a properly initialized value function (i.e., shifted reward) can achieve a better exploration/exploitation and improve the performance of existing RL algorithms as a result.
In general, most of the reviewers found the proposed method and the results interesting enough to be presented at NeurIPS. While some reviewers had a concern about the lack of challenging tasks, the authors addressed it with updated results in the appendix. The only reviewer with a negative score did not respond during the discussion period. Thus, I recommend that this paper be accepted. In the meantime, there are still remaining (minor) concerns about the presentation of the paper (e.g., grammar, lengthy description of a simple idea, etc.) and the lack of discussion on limitations and related work. I highly recommend that the authors improve these by reorganizing the paper (e.g., omit some details and move some important discussion from the appendix to the main text) for the camera-ready version. | val | [
"nE8tFBz0B4p",
"75wwWSeYZ-v",
"pSsSGvJ8d7E",
"bN1t_pEfNvS",
"h7JD-FfQigz",
"TYUMI7jNuDG",
"LrmL6IalSG",
"Eed9ZTFAgsJ",
"yShCaviCHCi",
"7zifPaybfhd",
"5GDCA1gDD6E",
"4RAp62Ly3Xm",
"Dze7gDSV_tL",
"Ai7bCg-yGJF",
"1vJ1yCvzRQj",
"Zx0dt72fi5L",
"BRv6no2Wnse",
"P8N4RWJ07X",
"XAzD-gDqJ92... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the response, which helps me understand a bit more about the paper. The work is interesting, but it needs to be refined. I would like to maintain a borderline reject.",
" We would like to thank all reviewers for their time, generous comments, and suggestions for improving the paper. \n\n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"pSsSGvJ8d7E",
"nips_2022_iCxRsZcVVAH",
"XAzD-gDqJ92",
"P8N4RWJ07X",
"XAzD-gDqJ92",
"LrmL6IalSG",
"N1wk8skA2CU",
"XAzD-gDqJ92",
"nips_2022_iCxRsZcVVAH",
"nips_2022_iCxRsZcVVAH",
"N1wk8skA2CU",
"N1wk8skA2CU",
"N1wk8skA2CU",
"XAzD-gDqJ92",
"XAzD-gDqJ92",
"P8N4RWJ07X",
"P8N4RWJ07X",
"... |
nips_2022_lgj33-O1Ely | TotalSelfScan: Learning Full-body Avatars from Self-Portrait Videos of Faces, Hands, and Bodies | Recent advances in implicit neural representations make it possible to reconstruct a human-body model from a monocular self-rotation video. While previous works present impressive results of human body reconstruction, the quality of reconstructed face and hands are relatively low. The main reason is that the image region occupied by these parts is very small compared to the body. To solve this problem, we propose a new approach named TotalSelfScan, which reconstructs the full-body model from several monocular self-rotation videos that focus on the face, hands, and body, respectively. Compared to recording a single video, this setting has almost no additional cost but provides more details of essential parts. To learn the full-body model, instead of encoding the whole body in a single network, we propose a multi-part representation to model separate parts and then fuse the part-specific observations into a single unified human model. Once learned, the full-body model enables rendering photorealistic free-viewpoint videos under novel human poses. Experiments show that TotalSelfScan can significantly improve the reconstruction and rendering quality on the face and hands compared to the existing methods. The code is available at \url{https://zju3dv.github.io/TotalSelfScan}. | Accept | This paper was reviewed by three experts in the field. Based on the reviewers' feedback, the decision is to recommend the paper for acceptance to NeurIPS 2022.
The reviewers did raise some valuable concerns that should be addressed in the final camera-ready version of the paper. For example, more discussion can be added on the key limitation of TotalSelfScan when applied to animation. The authors are encouraged to make the necessary changes to the best of their ability. We congratulate the authors on the acceptance of their paper! | test | [
"yWgWd3SR3GC",
"bh2mltBvc6b",
"A0MMqtfaqLN",
"wy3DMy5ENTa",
"0YBgaNynY6C",
"c-BE_bgyAA",
"CgbdeGOX9G"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors’ effort in the feedback. All my concerns have been answered. I would recommend accepting this work.",
" We thank the reviewer for the valuable comments and will add the discussions below to our revised paper.\n\n\n> More qualitative comparisons and analysis on the non-rigid ray transforma... | [
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"bh2mltBvc6b",
"CgbdeGOX9G",
"c-BE_bgyAA",
"0YBgaNynY6C",
"nips_2022_lgj33-O1Ely",
"nips_2022_lgj33-O1Ely",
"nips_2022_lgj33-O1Ely"
] |
nips_2022_EAcWgk7JM58 | PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies | PointNet++ is one of the most influential neural architectures for point cloud understanding. Although the accuracy of PointNet++ has been largely surpassed by recent networks such as PointMLP and Point Transformer, we find that a large portion of the performance gain is due to improved training strategies, i.e. data augmentation and optimization techniques, and increased model sizes rather than architectural innovations. Thus, the full potential of PointNet++ has yet to be explored. In this work, we revisit the classical PointNet++ through a systematic study of model training and scaling strategies, and offer two major contributions. First, we propose a set of improved training strategies that significantly improve PointNet++ performance. For example, we show that, without any change in architecture, the overall accuracy (OA) of PointNet++ on ScanObjectNN object classification can be raised from 77.9% to 86.1%, even outperforming state-of-the-art PointMLP. Second, we introduce an inverted residual bottleneck design and separable MLPs into PointNet++ to enable efficient and effective model scaling and propose PointNeXt, the next version of PointNets. PointNeXt can be flexibly scaled up and outperforms state-of-the-art methods on both 3D classification and segmentation tasks. For classification, PointNeXt reaches an overall accuracy of 87.7 on ScanObjectNN, surpassing PointMLP by 2.3%, while being 10x faster in inference. For semantic segmentation, PointNeXt establishes a new state-of-the-art performance with 74.9% mean IoU on S3DIS (6-fold cross-validation), being superior to the recent Point Transformer. The code and models are available at https://github.com/guochengqian/pointnext. | Accept | This paper presents a series of training strategies and settings that can improve PointNet++ to match the performance of state-of-the-art architectures. 
The AC agrees with reviewer jBW5 that the novelty of the paper is limited and that some of the phenomena have been observed before. However, the detailed training strategies might have the potential to benefit the research community. Open-sourcing the code will therefore be important.
| train | [
"LzT7kbsWNaC",
"xPBzVldABte",
"xEAl7fcvlD",
"dMsOQMxJlce",
"1r8GsFTJZy",
"jc6AXln9a_",
"99-uxjgNCbi",
"qntPRKmWSJl",
"p4wbN2TBjw",
"Wl9neW5kbD5",
"ccOrWYObTM",
"fzwsYq0Eugi",
"ituZsgRBBlIl",
"1f_212SYaDAF",
"1wMiuOqOJh",
"2mFGERe0rPQ",
"0QouSVlRys7",
"pxzfHSfBdh",
"zXjO2yvGOEi",
... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" Dear reviewers and ACs: \n\nWe sincerely thank all reviewers for their insightful feedback and constructive suggestions. \n\n\nWe would like to emphasize that our work not only introduces a simple yet effective module InvResMLP for scaling up PointNet++. Our work also proposes a systematical analysis of the moder... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
5
] | [
"nips_2022_EAcWgk7JM58",
"xEAl7fcvlD",
"1r8GsFTJZy",
"qntPRKmWSJl",
"jc6AXln9a_",
"Wl9neW5kbD5",
"pxzfHSfBdh",
"p4wbN2TBjw",
"fzwsYq0Eugi",
"ccOrWYObTM",
"ituZsgRBBlIl",
"1f_212SYaDAF",
"2mFGERe0rPQ",
"1wMiuOqOJh",
"2GYnhDYK8a",
"jFk3dYVtln-",
"zXjO2yvGOEi",
"n_w2z7l38F",
"1q2oN6... |
nips_2022_wtuYr8_KhyM | Stochastic Adaptive Activation Function | The simulation of human neurons and neurotransmission mechanisms has been realized in deep neural networks based on the theoretical implementations of activation functions. However, recent studies have reported that the threshold potential of neurons exhibits different values according to the locations and types of individual neurons, and that the activation functions have limitations in terms of representing this variability. Therefore, this study proposes a simple yet effective activation function that facilitates different thresholds and adaptive activations according to the positions of units and the contexts of inputs. Furthermore, the proposed activation function mathematically exhibits a more generalized form of Swish activation function, and thus we denoted it as Adaptive SwisH (ASH). ASH highlights informative features that exhibit large values in the top percentiles in an input, whereas it rectifies low values. Most importantly, ASH exhibits trainable, adaptive, and context-aware properties compared to other activation functions. Furthermore, ASH represents general formula of the previously studied activation function and provides a reasonable mathematical background for the superior performance. To validate the effectiveness and robustness of ASH, we implemented ASH into many deep learning models for various tasks, including classification, detection, segmentation, and image generation. Experimental analysis demonstrates that our activation function can provide the benefits of more accurate prediction and earlier convergence in many deep learning applications. | Accept | Reviewers appreciated the novelty of the proposed activation function, the theoretical motivation and its connection to the SwisH activation.
In terms of presentation and soundness of the results, Reviewers pointed out some weaknesses in the initial reviews for this paper. In particular, the reviews voiced some concerns with the clarity and formatting of some figures, the lack of clarity of a mathematical derivation, and most of all, issues in the presentation of the empirical results that didn't report confidence intervals allowing for an assessment of the statistical significance of accuracy differences. These weaknesses were however addressed in ways that satisfied the Reviewers in the rebuttals and subsequent versions of the paper.
Thanks to these welcome changes, the paper has now garnered unanimous consensus among Reviewers that it should be accepted. | train | [
"EeFmr-V0FNj",
"V9wSPPhhp2Q",
"RDlJK5DiF_",
"8hwgk53FqJ8",
"IstzT1eqW2r",
"WZIuZ2Zf9g",
"_Q2bpNZQ6xsS",
"etvQiqBbaNS",
"kwTMLCvo9hC",
"WZsfJ7YeHpI",
"z6JwFlxiy8B",
"LSr0VSxfLff",
"arD_qaGqu5Q",
"WxEnunbhMei"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" - Thank you for adding variances. \n- An honest discussion of limitations is often a good starting point for future work. Thank you for pointing out possible limitations in the rebuttal.\n- I have raised my rating from six to seven.\n\n- My only concern at this point is the source code. I don't think the results ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
9,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"WZsfJ7YeHpI",
"RDlJK5DiF_",
"8hwgk53FqJ8",
"IstzT1eqW2r",
"kwTMLCvo9hC",
"nips_2022_wtuYr8_KhyM",
"arD_qaGqu5Q",
"LSr0VSxfLff",
"WxEnunbhMei",
"z6JwFlxiy8B",
"nips_2022_wtuYr8_KhyM",
"nips_2022_wtuYr8_KhyM",
"nips_2022_wtuYr8_KhyM",
"nips_2022_wtuYr8_KhyM"
] |
nips_2022_YTXIIc7cAQ | Improved Fine-Tuning by Better Leveraging Pre-Training Data | As a dominant paradigm, fine-tuning a pre-trained model on the target data is widely used in many deep learning applications, especially for small data sets. However, recent studies have empirically shown that training from scratch has the final performance that is no worse than this pre-training strategy once the number of training samples is increased in some vision tasks. In this work, we revisit this phenomenon from the perspective of generalization analysis by using excess risk bound which is popular in learning theory. The result reveals that the excess risk bound may have a weak dependency on the pre-trained model. The observation inspires us to leverage pre-training data for fine-tuning, since this data is also available for fine-tuning. The generalization result of using pre-training data shows that the excess risk bound on a target task can be improved when the appropriate pre-training data is included in fine-tuning. With the theoretical motivation, we propose a novel selection strategy to select a subset from pre-training data to help improve the generalization on the target task. Extensive experimental results for image classification tasks on 8 benchmark data sets verify the effectiveness of the proposed data selection based fine-tuning pipeline. | Accept | The paper studies reuse of source data (originally used for pre-training) in the fine-tuning phase. Due to the difference between source and target data, use of the entire source data for fine-tuning can degrade generalization for the target task. However, the paper shows that by carefully choosing a subset of the source data, the generalization performance can exceed what fine-tuning on target data alone can achieve. The scheme used for subset selection is based on unbalanced optimal transport and is theoretically justified via a Theorem in the paper. 
Empirical results on different datasets show that the proposed scheme indeed adds some gain in generalization.
The authors and reviewers were engaged in active discussion. Reviewers raised interesting questions, including when the source data can really benefit learning the target task, choice of neural architectures, relations to catastrophic forgetting, sensitivity to hyperparameters, usefulness of Euclidean distance for clustering in high dimensions, and practicality of the assumption that both the pre-trained model and its data are available at fine-tuning time.
The authors provided a thorough answer to these questions. Reviewer 51Vh, who was the most skeptical, raised their score after the rebuttal. While the paper's final score ends up borderline, all the scores are on the accept side. I think the contributions of the paper are interesting enough to be published, and I recommend acceptance. I encourage the authors to incorporate the feedback they received from the reviewers in the final version of the paper. | train | [
"nLJQPIKBbd",
"KYhMYWv3axQ",
"fG-k2FoVcfU",
"QxhhzQqsh50",
"dbtiwN1AiEx",
"03VdJoPiGWd",
"jZ1WxLuWrEx",
"btzhyOYiV7y",
"jaOKce1QBXd",
"NF2oRxFryGB",
"4gYIeq9tj3KL",
"flvFodYDVAm",
"fyZd4vGBPBp",
"saD2IJIP2W2",
"_8_fQbMBul",
"8krOTLgLNkF",
"WYeSUMbe6n8",
"mfvjiw2Vpn",
"9TcvwldS5io... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" Thanks for your insightful comment. Here is our response.\n\nIn the data reusing framework, the random data selection plays the role of constraining the model to be close to the initialization (pre-trained model), in that it uniformly samples images from the pre-training data and aims to maintain the generic solu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
3
] | [
"KYhMYWv3axQ",
"8krOTLgLNkF",
"dbtiwN1AiEx",
"eG17XHYLBjt",
"03VdJoPiGWd",
"jZ1WxLuWrEx",
"jaOKce1QBXd",
"jaOKce1QBXd",
"4gYIeq9tj3KL",
"flvFodYDVAm",
"flvFodYDVAm",
"fyZd4vGBPBp",
"saD2IJIP2W2",
"qlnIz_SJbGE",
"eG17XHYLBjt",
"hCeMBT1lrN5",
"mfvjiw2Vpn",
"wf7AfZTAaZT",
"nips_2022... |
nips_2022_0cn6LSqwjUv | RainNet: A Large-Scale Imagery Dataset and Benchmark for Spatial Precipitation Downscaling | AI-for-science approaches have been applied to solve scientific problems (e.g., nuclear fusion, ecology, genomics, meteorology) and have achieved highly promising results. Spatial precipitation downscaling is one of the most important meteorological problem and urgently requires the participation of AI. However, the lack of a well-organized and annotated large-scale dataset hinders the training and verification of more effective and advancing deep-learning models for precipitation downscaling. To alleviate these obstacles, we present the first large-scale spatial precipitation downscaling dataset named RainNet, which contains more than 62,400 pairs of high-quality low/high-resolution precipitation maps for over 17 years, ready to help the evolution of deep learning models in precipitation downscaling. Specifically, the precipitation maps carefully collected in RainNet cover various meteorological phenomena (e.g., hurricane, squall), which is of great help to improve the model generalization ability. In addition, the map pairs in RainNet are organized in the form of image sequences (720 maps per month or 1 map/hour), showing complex physical properties, e.g., temporal misalignment, temporal sparse, and fluid properties. Furthermore, two deep-learning-oriented metrics are specifically introduced to evaluate or verify the comprehensive performance of the trained model (e.g., prediction maps reconstruction accuracy). To illustrate the applications of RainNet, 14 state-of-the-art models, including deep models and traditional approaches, are evaluated. To fully explore potential downscaling solutions, we propose an implicit physical estimation benchmark framework to learn the above characteristics. Extensive experiments demonstrate the value of RainNet in training and evaluating downscaling models. 
Our dataset is available at https://neuralchen.github.io/RainNet/. | Accept | This paper describes SPDNet, a dataset for spatial precipitation downscaling.
Experiments are provided using a fairly wide set of alternative methods - 14 models (including Kriging which is a widely used standard method in the meteorological community) - as well as a novel architecture proposed by the authors. The authors also extended SRGAN, EDSR, ESRGAN from Single Image Super Resolution (SISR) methods to Video Super Resolution (VSR) methods. While the level of innovation on the neural architecture side of the work is not extreme, clear value is provided in terms of contributions to neural architecture development. Reviewers felt that the dataset itself, the wide variety of models examined and the large set of evaluation metrics offers value to the community and that this dataset could help bring more interest to the problem domain.
During the discussion period it was made clear that "All relevant codes and datasets are open-source for research purposes" and that
"The dataset and the code are not proprietary. We will build a dedicated github repository and website for users to easily use our datasets and codes." It is important that this is indeed fully executed by the authors.
Three of four reviewers recommended acceptance.
For all these reasons the AC recommends acceptance. | test | [
"7hoTfc2gAyo",
"lnVp28VEsCi",
"Smp8psilatv",
"5wwckMiMQ2N",
"ndu1tUMJVku",
"4N-CL_8pR09",
"9vOAx1JCX0r",
"ZIIO_BtU8bA",
"iyh5uAQH1sE",
"nf3SWnPfY5",
"bc26pvVJeJA",
"8DMtDBXurHN",
"WBmpyaytXL",
"jjMUDMTFaxK",
"yXth6737L-j",
"1mMfRqp4pty"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank you for the review and comments.\n\nAlso thank you for acknowledging the value of our dataset.\n\nAfter discussions in our team, we thought that we should reduce the discussion of metrics and focus on metric that are very familiar to the computer field (such as RMSE).\nWe will add more content ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
4
] | [
"lnVp28VEsCi",
"iyh5uAQH1sE",
"jjMUDMTFaxK",
"1mMfRqp4pty",
"WBmpyaytXL",
"9vOAx1JCX0r",
"nf3SWnPfY5",
"1mMfRqp4pty",
"1mMfRqp4pty",
"yXth6737L-j",
"jjMUDMTFaxK",
"WBmpyaytXL",
"nips_2022_0cn6LSqwjUv",
"nips_2022_0cn6LSqwjUv",
"nips_2022_0cn6LSqwjUv",
"nips_2022_0cn6LSqwjUv"
] |
nips_2022_XdDl3bFUNn5 | Towards Robust Blind Face Restoration with Codebook Lookup Transformer | Blind face restoration is a highly ill-posed problem that often requires auxiliary guidance to 1) improve the mapping from degraded inputs to desired outputs, or 2) complement high-quality details lost in the inputs. In this paper, we demonstrate that a learned discrete codebook prior in a small proxy space largely reduces the uncertainty and ambiguity of restoration mapping by casting \textit{blind face restoration} as a \textit{code prediction} task, while providing rich visual atoms for generating high-quality faces. Under this paradigm, we propose a Transformer-based prediction network, named \textit{CodeFormer}, to model the global composition and context of the low-quality faces for code prediction, enabling the discovery of natural faces that closely approximate the target faces even when the inputs are severely degraded. To enhance the adaptiveness for different degradation, we also propose a controllable feature transformation module that allows a flexible trade-off between fidelity and quality. Thanks to the expressive codebook prior and global modeling, \textit{CodeFormer} outperforms the state of the arts in both quality and fidelity, showing superior robustness to degradation. Extensive experimental results on synthetic and real-world datasets verify the effectiveness of our method. | Accept | This work establishes a face restoration algorithm via integrating and optimizing several existing techniques, including VQ-GAN, Codebook prediction and Transformer. The key innovation comes from a Transformer-based prediction network, named CodeFormer, which may somehow exploit the global contexts helpful for codebook lookup. The experiments are reasonably designed, and the results are convincing. All the reviews agree that the paper is well-written and contains solid contributions, thus I would recommend accepting the paper. | train | [
"PtG45NQpkkJ",
"Ply0VluRFTh",
"tb0L7U4mSC9",
"CQVYb2nxWMh",
"BMQYtEixcZa",
"KM4jVjaztpu",
"8jhUY9VZDme",
"e-acn3TDh47",
"MnrrwlkBJ9J",
"_PGPuyLSGf9n",
"j0mKxBWR2X3",
"xDl_9vXlFNg",
"6ixFh7wv5Z-",
"RIp-yQeSiFA",
"7Gb3Z_F59BM",
"48-nm_Xzd3z",
"WjK18NnVFnW",
"GaqDNYh_rEb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_rev... | [
" No ethical issues No ethical issues No ethical issues",
" The paper is flagged potentially because of bias/discrimination concerns with the output of the algorithm which restores blurred images. There is no discussion in the paper or in the rebuttal on bias properties of the proposed algorithm. The authors did ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
5,
5
] | [
"nips_2022_XdDl3bFUNn5",
"nips_2022_XdDl3bFUNn5",
"xDl_9vXlFNg",
"nips_2022_XdDl3bFUNn5",
"e-acn3TDh47",
"8jhUY9VZDme",
"MnrrwlkBJ9J",
"j0mKxBWR2X3",
"_PGPuyLSGf9n",
"xDl_9vXlFNg",
"GaqDNYh_rEb",
"WjK18NnVFnW",
"48-nm_Xzd3z",
"7Gb3Z_F59BM",
"nips_2022_XdDl3bFUNn5",
"nips_2022_XdDl3bFUN... |
nips_2022_fiBnhdazkyx | A Coupled Design of Exploiting Record Similarity for Practical Vertical Federated Learning | Federated learning is a learning paradigm to enable collaborative learning across different parties without revealing raw data. Notably, vertical federated learning (VFL), where parties share the same set of samples but only hold partial features, has a wide range of real-world applications. However, most existing studies in VFL disregard the "record linkage'' process. They design algorithms either assuming the data from different parties can be exactly linked or simply linking each record with its most similar neighboring record. These approaches may fail to capture the key features from other less similar records. Moreover, such improper linkage cannot be corrected by training since existing approaches provide no feedback on linkage during training. In this paper, we design a novel coupled training paradigm, FedSim, that integrates one-to-many linkage into the training process. Besides enabling VFL in many real-world applications with fuzzy identifiers, FedSim also achieves better performance in traditional VFL tasks. Moreover, we theoretically analyze the additional privacy risk incurred by sharing similarities. Our experiments on eight datasets with various similarity metrics show that FedSim outperforms other state-of-the-art baselines. The codes of FedSim are available at https://github.com/Xtra-Computing/FedSim. | Accept | This paper proposes a VFL technique that is effective in practice (for some datasets) but intuitively may not be general enough for a significant portion of common settings, such as when the identifiers are names. While we recommend to accept this work, we hope the authors can seriously revise this paper in the final version on:
1. Ensuring that overclaiming statements are removed.
2. Adding evidence showing that it can also be effective when the identifiers are names.
3. Adding discussion on how the idea of "similar but not exactly the same samples can be beneficial" used in other topics, e.g., kNN, graph neural networks, semi-supervised learning, personalized federated learning, etc. is related to the one used in VFL. | train | [
"w0PCNKJrV_P",
"HTqs_LbvBQl",
"5EGfdb3OAxw",
"5Vgw8y8hHd5",
"RiKd1VhyYBj",
"TemxPtrwDWP",
"ur9CcokaJ_W",
"nvCNi8OTHKh",
"k7JsXSVSNJy",
"6ZzXvXflBt",
"L9iob09rkhe",
"sWyDy-hr2Vh",
"QktqnbbVZke",
"jteJqdcYkXK",
"yjLmL3o0eF_",
"_NSBEaUxNWI",
"qZkpfsJO1vW",
"ElKklpTWxVs",
"clAaUW6Aup... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" \n\n\nWe thank the reviewer for raising the score. To address the reviewer's last concern regarding the effectiveness of FedSim on identifiers like \"names\", we conduct experiments on an **additional real-world dataset \"_company_\"**, the identifiers in which are \"**company names**\". Our experimental results ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
3,
4
] | [
"HTqs_LbvBQl",
"TemxPtrwDWP",
"nips_2022_fiBnhdazkyx",
"RiKd1VhyYBj",
"QktqnbbVZke",
"ur9CcokaJ_W",
"nvCNi8OTHKh",
"k7JsXSVSNJy",
"6ZzXvXflBt",
"Og0Hc7y55hl",
"Og0Hc7y55hl",
"clAaUW6Aup4",
"ElKklpTWxVs",
"qZkpfsJO1vW",
"_NSBEaUxNWI",
"nips_2022_fiBnhdazkyx",
"nips_2022_fiBnhdazkyx",
... |
nips_2022_x8DNliTBSYY | Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization | The Neural Tangent Kernel (NTK) has emerged as a powerful tool to provide memorization, optimization and generalization guarantees in deep neural networks. A line of work has studied the NTK spectrum for two-layer and deep networks with at least a layer with $\Omega(N)$ neurons, $N$ being the number of training samples. Furthermore, there is increasing evidence suggesting that deep networks with sub-linear layer widths are powerful memorizers and optimizers, as long as the number of parameters exceeds the number of samples. Thus, a natural open question is whether the NTK is well conditioned in such a challenging sub-linear setup. In this paper, we answer this question in the affirmative. Our key technical contribution is a lower bound on the smallest NTK eigenvalue for deep networks with the minimum possible over-parameterization: up to logarithmic factors, the number of parameters is $\Omega(N)$ and, hence, the number of neurons is as little as $\Omega(\sqrt{N})$. To showcase the applicability of our NTK bounds, we provide two results concerning memorization capacity and optimization guarantees for gradient descent training. | Accept | solid contribution to ntk theory | train | [
"klfd30Ta0X1",
"OA0FbNEl-I1",
"nLOPqHm_Gd5",
"hJunvw7KtVN",
"njEsyOZMxCh",
"MhSPWfvg_Dr",
"YUJw4qR7uch",
"WKUsCo819J4",
"rsjnTR1m-px",
"PRS1t0Gr8t",
"nGiVEgz1q_E"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank reviewer *mvcT* for the constructive comments and for raising the score. We have uploaded a slightly edited revision which incorporates the follow-up comment.\n\nThe main change is to replace the requirement (4) in Assumption 2.5 by $N\\log^{8} N=o(n_{L-2}n_{L-1})$. This last expression does not contain ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
1,
2,
4
] | [
"OA0FbNEl-I1",
"hJunvw7KtVN",
"nips_2022_x8DNliTBSYY",
"nGiVEgz1q_E",
"PRS1t0Gr8t",
"rsjnTR1m-px",
"WKUsCo819J4",
"nips_2022_x8DNliTBSYY",
"nips_2022_x8DNliTBSYY",
"nips_2022_x8DNliTBSYY",
"nips_2022_x8DNliTBSYY"
] |
nips_2022_pMumil2EJh | Multivariate Time-Series Forecasting with Temporal Polynomial Graph Neural Networks | Modeling multivariate time series (MTS) is critical in modern intelligent systems. The accurate forecast of MTS data is still challenging due to the complicated latent variable correlation. Recent works apply the Graph Neural Networks (GNNs) to the task, with the basic idea of representing the correlation as a static graph. However, predicting with a static graph causes significant bias because the correlation is time-varying in the real-world MTS data. Besides, there is no gap analysis between the actual correlation and the learned one in their works to validate the effectiveness. This paper proposes a temporal polynomial graph neural network (TPGNN) for accurate MTS forecasting, which represents the dynamic variable correlation as a temporal matrix polynomial in two steps. First, we capture the overall correlation with a static matrix basis. Then, we use a set of time-varying coefficients and the matrix basis to construct a matrix polynomial for each time step. The constructed result empirically captures the precise dynamic correlation of six synthetic MTS datasets generated by a non-repeating random walk model. Moreover, the theoretical analysis shows that TPGNN can achieve perfect approximation under a commutative condition. We conduct extensive experiments on two traffic datasets with prior structure and four benchmark datasets. The results indicate that TPGNN achieves the state-of-the-art on both short-term and long-term MTS forecastings. | Accept | This well-written paper has been carefully evaluated by four competent reviewers. Three of them rated the work as marginally acceptable, one gave it full accept score. 
Despite a few identified deficiencies, including the limited cohort of comparison models, overstated claims about the performance of the proposed model at long-range forecasting, and some minor limitations of the empirical evaluation protocol, the reviewers were confidently positive about the work. I recommend acceptance.
"K8eh7afAiuM",
"I4ezJh56Djb",
"U0viBC3-xxB",
"iuBZcc_z88T",
"VwYPH4wq3N",
"G83cKZmT7yo",
"HhTZvAl58Ue",
"mSAKjTuIyh",
"DQQ3B1w4RWK",
"IcHtlTVZkh",
"geJ2vc6yJya",
"re7MRekgf5O",
"3d3Spev6GfS",
"X4oAlc_pgbp",
"HgnMuTsdWp1"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n Do you have any further concerns or suggestions? We are very delighted to discuss them with you.",
" Dear Reviewer,\n\nSince the rebuttal discussion is about to end soon, we are wondering if our response and revision have cleared your concerns. We would appreciate it if you could kindly let ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
5
] | [
"U0viBC3-xxB",
"re7MRekgf5O",
"HgnMuTsdWp1",
"DQQ3B1w4RWK",
"nips_2022_pMumil2EJh",
"IcHtlTVZkh",
"nips_2022_pMumil2EJh",
"HgnMuTsdWp1",
"X4oAlc_pgbp",
"3d3Spev6GfS",
"re7MRekgf5O",
"nips_2022_pMumil2EJh",
"nips_2022_pMumil2EJh",
"nips_2022_pMumil2EJh",
"nips_2022_pMumil2EJh"
] |
nips_2022_Zk1SbbdZwS | Model-Based Imitation Learning for Urban Driving | An accurate model of the environment and the dynamic agents acting in it offers great potential for improving motion planning. We present MILE: a Model-based Imitation LEarning approach to jointly learn a model of the world and a policy for autonomous driving. Our method leverages 3D geometry as an inductive bias and learns a highly compact latent space directly from high-resolution videos of expert demonstrations. Our model is trained on an offline corpus of urban driving data, without any online interaction with the environment. MILE improves upon prior state-of-the-art by 35% in driving score on the CARLA simulator when deployed in a completely new town and new weather conditions. Our model can predict diverse and plausible states and actions, that can be interpretably decoded to bird's-eye view semantic segmentation. Further, we demonstrate that it can execute complex driving manoeuvres from plans entirely predicted in imagination. Our approach is the first camera-only method that models static scene, dynamic scene, and ego-behaviour in an urban driving environment. The code and model weights are available at https://github.com/wayveai/mile. | Accept | This work introduced a model-based framework for offline imitation learning of autonomous driving policies in simulated urban environments. The proposed model MILE jointly learns a world model and predicts expert actions using a variational generative model. This paper was reviewed by three expert reviewers. At the initial reviews, the reviewers raised several questions about technical details and gave valuable suggestions on the overall presentation of the paper. In particular, Reviewer W3CA pointed out that some of the claims made in the paper were not sufficiently supported by quantitative evidence, and Reviewer s7YQ suggested an additional discussion of trajectory prediction methods. 
The authors did a good job drafting a detailed rebuttal and updating the paper revision, which addressed most of the reviewers' concerns. In the end, all three reviewers leaned towards accepting this paper. The AC read the paper, the reviews, and the discussions in detail and believed that this paper had presented a strong showcase of using model-based approaches for challenging vision-based autonomous driving problems. Taking all these into account, the AC recommends accepting this paper at NeurIPS 2022. | train | [
"03IiqkZjlB9",
"78SDO689yZ-",
"S9-4xxO3nWC",
"Cus5jFCq4g-",
"4gfdso1EgDra",
"LfnRz9JQuJ8",
"0qgbUOiUo9",
"D5IoKk3FvYy",
"yu4PIur3o7C",
"HstKp2gbBx",
"yGSKYtZJF2V",
"yJFla3eoT8D",
"1-cSaYl7JhN",
"4NmaW94cdwi"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Yes the appendix was originally in the supplementary.zip but we have included the updated Appendix directly in the main paper so that it was easier for reviewers to have access to all the modifications in a single document. Sorry for the confusion this has created.\n\n- __\"Prediction of diverse and plausible fut... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"78SDO689yZ-",
"S9-4xxO3nWC",
"Cus5jFCq4g-",
"yu4PIur3o7C",
"LfnRz9JQuJ8",
"yGSKYtZJF2V",
"D5IoKk3FvYy",
"HstKp2gbBx",
"4NmaW94cdwi",
"1-cSaYl7JhN",
"yJFla3eoT8D",
"nips_2022_Zk1SbbdZwS",
"nips_2022_Zk1SbbdZwS",
"nips_2022_Zk1SbbdZwS"
] |
nips_2022_2EwEWrNADpT | Learning Multi-resolution Functional Maps with Spectral Attention for Robust Shape Matching | In this work, we present a novel non-rigid shape matching framework based on multi-resolution functional maps with spectral attention. Existing functional map learning methods all rely on the critical choice of the spectral resolution hyperparameter, which can severely affect the overall accuracy or lead to overfitting, if not chosen carefully. In this paper, we show that spectral resolution tuning can be alleviated by introducing spectral attention. Our framework is applicable in both supervised and unsupervised settings, and we show that it is possible to train the network so that it can adapt the spectral resolution, depending on the given shape input. More specifically, we propose to compute multi-resolution functional maps that characterize correspondence across a range of spectral resolutions, and introduce a spectral attention network that helps to combine this representation into a single coherent final correspondence. Our approach is not only accurate with near-isometric input, for which a high spectral resolution is typically preferred, but also robust and able to produce reasonable matching even in the presence of significant non-isometric distortion, which poses great challenges to existing methods. We demonstrate the superior performance of our approach through experiments on a suite of challenging near-isometric and non-isometric shape matching benchmarks. | Accept | All reviewers voted for acceptance of the paper. Reviewers acknowledge that the paper addresses an important problem: choosing the size of the truncated Eigenbasis for matching using functional maps. Also strong empirical performance on a number of datasets was noted. The rebuttal also addressed many points raised by reviewers and generally improved our impression of the paper. Overall this paper is a nice mix of theoretical contribution and practical performance. 
Therefore the paper is recommended for acceptance.
We ask the authors to incorporate the feedback given in the review and discussion phase. | train | [
"9yr2kl4am0L",
"Jif9NygUN3p",
"YAdHHLw7kG_",
"XSjwVfUckYo",
"J_qVE1RrTq6",
"8UNbYB8J2z",
"Jw6aTZ6qBFDx",
"YiVsKoQK5PO",
"magoZYwGwlP",
"uzOlryqTcR-",
"r0XtbfdqFum",
"PM3oDztoQJ_",
"On4k4-3Y2zb"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the positive feedback. We will, for sure, include all the clarifications and the additional experiments in our paper.",
" I would like thank the authors for their hard work to address my comments (including performing additional experiments). After reading authors' response as well as ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"Jif9NygUN3p",
"Jw6aTZ6qBFDx",
"XSjwVfUckYo",
"uzOlryqTcR-",
"8UNbYB8J2z",
"YiVsKoQK5PO",
"On4k4-3Y2zb",
"magoZYwGwlP",
"PM3oDztoQJ_",
"r0XtbfdqFum",
"nips_2022_2EwEWrNADpT",
"nips_2022_2EwEWrNADpT",
"nips_2022_2EwEWrNADpT"
] |
nips_2022_evWx_rWWJuG | Fully Sparse 3D Object Detection | As the perception range of LiDAR increases, LiDAR-based 3D object detection becomes a dominant task in the long-range perception of autonomous driving. The mainstream 3D object detectors usually build dense feature maps in the network backbone and prediction head. However, the computational and spatial costs on the dense feature map are quadratic to the perception range, which makes it hard for them to scale up to the long-range setting. To enable efficient long-range LiDAR-based object detection, we build a fully sparse 3D object detector (FSD). The computational and spatial cost of FSD is roughly linear to the number of points and independent of the perception range. FSD is built upon the general sparse voxel encoder and a novel sparse instance recognition (SIR) module. SIR first groups the points into instances and then applies instance-wise feature extraction and prediction. In this way, SIR resolves the issue of center feature missing, which hinders the design of the fully sparse architecture for all center-based or anchor-based detectors. Moreover, SIR avoids the time-consuming neighbor queries in previous point-based methods by grouping points into instances. We conduct extensive experiments on the large-scale Waymo Open Dataset to reveal the working mechanism of FSD, and state-of-the-art performance is reported. To demonstrate the superiority of FSD in long-range detection, we also conduct experiments on Argoverse 2 Dataset, which has a much larger perception range ($200m$) than Waymo Open Dataset ($75m$). On such a large perception range, FSD achieves state-of-the-art performance and is 2.4$\times$ faster than the dense counterpart. Codes will be released. | Accept | After the rebuttal and discussion all reviewers are positive, and recommend acceptance. The AC agrees with this recommendation. | test | [
"uiXX1Tysm4",
"lZuEbZxdsdM",
"6Zx9D0g6hxQ",
"c0dzrDsKDabk",
"vpxEq7O9IBH",
"VySn_DM_qj3",
"5xWzcTZVB9q",
"AEX3uX3wU87",
"SMOEh_fQxk",
"JMaXyxyiibv",
"OSLHuaIcuO6",
"DJ_59eLceC9"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your efforts on the additional experiments and detailed response. They have resolved most of my concerns. Therefore, I will increase my rating to 6. Great work :)",
" We really appreciate your positive comments, which means a lot to us!\\\nWe will definitely follow all reviewers' comments to impro... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"VySn_DM_qj3",
"6Zx9D0g6hxQ",
"vpxEq7O9IBH",
"SMOEh_fQxk",
"DJ_59eLceC9",
"5xWzcTZVB9q",
"OSLHuaIcuO6",
"JMaXyxyiibv",
"nips_2022_evWx_rWWJuG",
"nips_2022_evWx_rWWJuG",
"nips_2022_evWx_rWWJuG",
"nips_2022_evWx_rWWJuG"
] |
nips_2022_fU-m9kQe0ke | Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer | The large pre-trained vision transformers (ViTs) have demonstrated remarkable performance on various visual tasks, but suffer from expensive computational and memory cost problems when deployed on resource-constrained devices. Among the powerful compression approaches, quantization drastically reduces the computation and memory consumption through low-bit parameters and bit-wise operations. However, low-bit ViTs remain largely unexplored and usually suffer from a significant performance drop compared with the real-valued counterparts. In this work, through extensive empirical analysis, we first identify that the bottleneck behind the severe performance drop comes from the information distortion of the low-bit quantized self-attention map. We then develop an information rectification module (IRM) and a distribution guided distillation (DGD) scheme for fully quantized vision transformers (Q-ViT) to effectively eliminate such distortion, leading to fully quantized ViTs. We evaluate our methods on popular DeiT and Swin backbones. Extensive experimental results show that our method achieves much better performance than prior art. For example, our Q-ViT can theoretically accelerate ViT-S by 6.14x and achieves about 80.9% Top-1 accuracy, even surpassing the full-precision counterpart by 1.0% on the ImageNet dataset. Our codes and models are available at https://github.com/YanjingLi0202/Q-ViT | Accept | This paper proposes a novel method for Vision Transformer quantization. The IRM and DGD scheme is developed to solve the bottleneck of low-bit quantized Vision Transformers. All the reviewers agree that the proposed method is novel and effective. The concerns and questions are well addressed during the rebuttal period. The overall quality is clearly above the bar, and thus the paper should be accepted for publication. | test | [
"OsAkyTVB652",
"ziXmX5YZ5GA",
"7HTGJj32uVQ",
"UgoGjslx21b",
"9fvDUWTvgV3",
"6sXRlFMDV1t",
"-omq-dVGfjw",
"g7epMkrpr86",
"dsvR1RjL0i0",
"Vnz70NEln7j",
"nXSVVIc-GgW"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for your valuable time and constructive comments in reviewing our paper. We will further revise and polish our final version towards publication.",
" I have read all the reviews and author response, the authors made significant efforts to address all the raised concerns. I would keep my decision a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
5
] | [
"ziXmX5YZ5GA",
"UgoGjslx21b",
"nips_2022_fU-m9kQe0ke",
"nXSVVIc-GgW",
"dsvR1RjL0i0",
"Vnz70NEln7j",
"g7epMkrpr86",
"nips_2022_fU-m9kQe0ke",
"nips_2022_fU-m9kQe0ke",
"nips_2022_fU-m9kQe0ke",
"nips_2022_fU-m9kQe0ke"
] |
nips_2022_QYD9bDWR3R_ | Stability and Generalization of Kernel Clustering: from Single Kernel to Multiple Kernel | Multiple kernel clustering (MKC) is an important research topic that has been widely studied for decades. However, current methods still face two problems: inefficiency when handling out-of-sample data points and a lack of theoretical study of the stability and generalization of clustering. In this paper, we propose a novel method that can efficiently compute the embedding of out-of-sample data with a solid generalization guarantee. Specifically, we approximate the eigenfunctions of the integral operator associated with the linear combination of base kernel functions to construct low-dimensional embeddings of out-of-sample points for efficient multiple kernel clustering. In addition, we, for the first time, theoretically study the stability of clustering algorithms and prove that the single-view version of the proposed method has uniform stability as $\mathcal{O}\left(Kn^{-3/2}\right)$ and establish an upper bound of excess risk as $\widetilde{\mathcal{O}}\left(Kn^{-3/2}+n^{-1/2}\right)$, where $K$ is the cluster number and $n$ is the number of samples. We then extend the theoretical results to multiple kernel scenarios and find that the stability of MKC depends on kernel weights. As an example, we apply our method to a novel MKC algorithm termed SimpleMKKM and derive the upper bound of its excess clustering risk, which is tighter than the current results. Extensive experimental results validate the effectiveness and efficiency of the proposed method. | Accept | The paper introduces a methodology for clustering out-of-sample data in the multiple kernel clustering (MKC) problem by leveraging the relationship between the empirical kernel matrix and the integral operator of the kernel function. 
Clustering risk bounds for the proposed method are provided that compare favorably with the literature, and numerical experimentation shows that the methodology performs well when applied to algorithms developed for both single and multiple kernel clustering. The reviewers concur that the methodology enables efficient large-scale MKC and provides a novel perspective on its generalization analysis. | train | [
"6XwF0V8K9AW",
"XJVYFvgmzsN",
"rukZFYrwHyF",
"zVf86OiBd-J",
"SdzGInVSzN_",
"vtm_sCsFqQAc",
"sePpVVAhPwl",
"qpXTN14FIYe",
"0dlkQasAO3r",
"ViyP9R4pdxl"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the explanations. I have read the author's feedback. I would like to keep my review unchanged, and continue to support acceptance for this paper!",
" The author-rebuttal phase closes today. Please acknowledge the author rebuttal and state if your position has changed. Thanks!",
" The author-rebutta... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"sePpVVAhPwl",
"0dlkQasAO3r",
"qpXTN14FIYe",
"SdzGInVSzN_",
"ViyP9R4pdxl",
"0dlkQasAO3r",
"qpXTN14FIYe",
"nips_2022_QYD9bDWR3R_",
"nips_2022_QYD9bDWR3R_",
"nips_2022_QYD9bDWR3R_"
] |
nips_2022_IzpgGB5pC_s | UMIX: Improving Importance Weighting for Subpopulation Shift via Uncertainty-Aware Mixup | Subpopulation shift widely exists in many real-world machine learning applications, referring to the training and test distributions containing the same subpopulation groups but varying in subpopulation frequencies. Importance reweighting is a common way to handle the subpopulation shift issue by imposing constant or adaptive sampling weights on each sample in the training dataset. However, some recent studies have recognized that most of these approaches fail to improve the performance over empirical risk minimization especially when applied to over-parameterized neural networks. In this work, we propose a simple yet practical framework, called uncertainty-aware mixup (UMIX), to mitigate the overfitting issue in over-parameterized models by reweighting the ''mixed'' samples according to the sample uncertainty. A training-trajectories-based uncertainty estimation is incorporated in the proposed UMIX for each sample to flexibly characterize the subpopulation distribution. We also provide insightful theoretical analysis to verify that UMIX achieves better generalization bounds than prior works. Further, we conduct extensive empirical studies across a wide range of tasks to validate the effectiveness of our method both qualitatively and quantitatively. Code is available at https://github.com/TencentAILabHealthcare/UMIX. | Accept | The reviewers unanimously agreed that incorporating uncertainty scores as importance weights for mixup is a sensible idea, and empirically the authors' method seems to lead to substantial quantitative performance improvements. I think the heuristic use of the model's parameter history to estimate uncertainty is reasonable. However, while I am recommending acceptance, the SAC and I feel that there are a few concerns that arose in discussion that we'd urge the authors to address in the camera ready version. In particular:
1. The authors should clarify whether the approximation in (6) actually converges in any meaningful sense to the posterior expectation in (5). My initial impression is that the answer to this is probably no. While the discussion on lines 195-202 reasonably motivates the use of equation 6, I think motivating this approach through a posterior expectation in equation 5 may slightly oversell the rigor of equation 6, at least as currently described. The authors should consider addressing whether the approximation is good by running an experiment comparing their approximation to equation (5) against a Monte Carlo approximation on a toy-scale model where this is feasible, which would more carefully isolate whether (6) is an approximation to (5) or a heuristic.
2. Some of the advantages of the authors' approach are a bit overstated. For example, using SWAG with optimizers other than SGD is fairly common in practice. While obviously this doesn't diminish the authors' results, I think it's worth fixing to ensure technical correctness here in the camera ready.
| train | [
"hYE8X_oSC4w",
"tTcqaF7RjHl",
"yQ1JpaeSwax",
"qIFHCTq5Hd0",
"m-9mshDqJBI",
"vthQM_yYTv",
"RUnMpC_AQCW",
"blo9WDS4f-n",
"hWuBNAvhq5U",
"7d7DzWlXxt2",
"1mOZnBje3CF",
"PQP9Ewfs2gk",
"sukZ66QsCY",
"2wHsr1CHoxp",
"AyWDbSRAUBv"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing my concerns. After reading the rebuttal, I have adjusted my rating accordingly. ",
" Dear Reviewer,\n\nWe are wondering whether your concerns have been properly addressed.\nIf you have further questions after reading the answers, it would be great to let us know. \n\nBest regards, \nThe au... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"PQP9Ewfs2gk",
"PQP9Ewfs2gk",
"vthQM_yYTv",
"hWuBNAvhq5U",
"AyWDbSRAUBv",
"AyWDbSRAUBv",
"2wHsr1CHoxp",
"2wHsr1CHoxp",
"sukZ66QsCY",
"PQP9Ewfs2gk",
"PQP9Ewfs2gk",
"nips_2022_IzpgGB5pC_s",
"nips_2022_IzpgGB5pC_s",
"nips_2022_IzpgGB5pC_s",
"nips_2022_IzpgGB5pC_s"
] |
nips_2022_QRKmc0dRP75 | On the Strong Correlation Between Model Invariance and Generalization | Generalization and invariance are two essential properties of machine learning models. Generalization captures a model's ability to classify unseen data while invariance measures consistency of model predictions on transformations of the data. Existing research suggests a positive relationship: a model generalizing well should be invariant to certain visual factors. Building on this qualitative implication we make two contributions. First, we introduce effective invariance (EI), a simple and reasonable measure of model invariance which does not rely on image labels. Given predictions on a test image and its transformed version, EI measures how well the predictions agree and with what level of confidence. Second, using invariance scores computed by EI, we perform large-scale quantitative correlation studies between generalization and invariance, focusing on rotation and grayscale transformations. From a model-centric view, we observe generalization and invariance of different models exhibit a strong linear relationship, on both in-distribution and out-of-distribution datasets. From a dataset-centric view, we find a certain model's accuracy and invariance linearly correlated on different test sets. Apart from these major findings, other minor but interesting insights are also discussed. | Accept | This work proposes a very simple to implement, yet effective, metric (effective invariance or EI) to assess the invariance of a model with respect to some input transformation. The main novelty of the proposed method is that it does not rely on the true label, but rather on the agreement between the predictions given an image and its transformed version.
The committee appreciates the comprehensiveness of the empirical evaluation conducted in the paper. Although theoretical analysis of how invariance improves generalization exists, as pointed out by the reviewers (and was missed in the paper's initial version), this paper performs a large-scale quantitative correlation study using various models and different test sets and empirically reports that model invariance and generalization exhibit a strong linear correlation on both in- and out-of-distribution test sets.
There is a heated discussion about the novelty of this work, as one reviewer pointed out that the authors missed a large subfield of existing literature in neuroscience and AI. The rest of the committee, however, does recognize the differences and believes that the value of the large-scale experimental evaluation outweighs this concern. The authors are strongly encouraged to provide a detailed comparison with the mentioned works in the revised paper.
"KNAqEeAI90Z",
"lhctoqmb57V",
"-2Oj-Qfks6",
"39FzGQaoPiR",
"3xHEkG9x1Ff",
"S15M6yBlnpqM",
"Ao9lBGTqIQI",
"USWN_okhorXn",
"gH1VnGI2z60P",
"dZitzt40BMW",
"a1eRTqP6kjn",
"48EMTfdxkFn",
"y2m7LMIYV8",
"Y8H7Sm1dyq9",
"vrUEweqH-qi",
"CVnvpNtK0an",
"X5qdsAAacNE",
"ucMAUxUaK2d",
"GG1KR1m9... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer h3Wy,\n\nThank you for acknowledging that our research problem is *\"extremely important\"* and *\"scale of the experiments conducted is large\"*.\nWe also thank you for pointing out the computational neuroscience papers. After reading them carefully, we find that they *do not* perform large-scale q... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
3
] | [
"CVnvpNtK0an",
"-2Oj-Qfks6",
"vrUEweqH-qi",
"3xHEkG9x1Ff",
"y2m7LMIYV8",
"nips_2022_QRKmc0dRP75",
"USWN_okhorXn",
"48EMTfdxkFn",
"CVnvpNtK0an",
"CVnvpNtK0an",
"X5qdsAAacNE",
"X5qdsAAacNE",
"ucMAUxUaK2d",
"GG1KR1m9ylv",
"GG1KR1m9ylv",
"nips_2022_QRKmc0dRP75",
"nips_2022_QRKmc0dRP75",
... |
nips_2022__w2-1nXNjvv | Unsupervised Multi-Object Segmentation by Predicting Probable Motion Patterns | We propose a new approach to learn to segment multiple image objects without manual supervision. The method can extract objects from still images, but uses videos for supervision. While prior works have considered motion for segmentation, a key insight is that, while motion can be used to identify objects, not all objects are necessarily in motion: the absence of motion does not imply the absence of objects. Hence, our model learns to predict image regions that are likely to contain motion patterns characteristic of objects moving rigidly. It does not predict specific motion, which cannot be done unambiguously from a still image, but a distribution of possible motions, which includes the possibility that an object does not move at all. We demonstrate the advantage of this approach over its deterministic counterpart and show state-of-the-art unsupervised object segmentation performance on simulated and real-world benchmarks, surpassing methods that use motion even at test time. As our approach is applicable to a variety of network architectures that segment the scenes, we also apply it to existing image reconstruction-based models showing drastic improvement. Project page and code: https://www.robots.ox.ac.uk/~vgg/research/ppmp. | Accept | This paper presents an approach for unsupervised multi-object segmentation. The majority of the reviewers believe the paper contains interesting technical material that warrants its acceptance. The (only) remaining concern is from Reviewer bRe4, pointing out that the paper uses a more advanced backbone than the baselines. Although the other reviewers also agree on this point, they believe the ablations in the paper are valid and they justify the benefits of the approach. Overall, the ACs recommend the acceptance of the paper. | train | [
"mKk8J---6Jf",
"c2-dqqIphpy",
"XEyIwtjEig",
"xun9GD2NB0y",
"gX9iLKo-J7Z",
"G6nyYipbZTV",
"l1t-ZRcetn7",
"u82gicOm7QX",
"7bD4LCabMeP",
"r2luKeGmC_I",
"w6AN2XOsET",
"RnQQwiuAMhY",
"mHGXalAwqaA",
"q4kA6Bk_3wH",
"atbG9TkI4MRU",
"MO4lZd71BE",
"BLr5UzOaPZr",
"fwPnWdGy-L",
"qNh5ckeQ8UJ"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"... | [
" Thank you for your detailed response. The additional results on real-world data look indeed very promising. I would like to encourage you to extend the limitations and conclusion section regarding scaling to real-world data with reference to the new results.\n\nOverall my concerns have been well addressed. I am k... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
5
] | [
"w6AN2XOsET",
"gX9iLKo-J7Z",
"q4kA6Bk_3wH",
"l1t-ZRcetn7",
"G6nyYipbZTV",
"atbG9TkI4MRU",
"r2luKeGmC_I",
"mHGXalAwqaA",
"qNh5ckeQ8UJ",
"qNh5ckeQ8UJ",
"fwPnWdGy-L",
"BLr5UzOaPZr",
"BLr5UzOaPZr",
"MO4lZd71BE",
"nips_2022__w2-1nXNjvv",
"nips_2022__w2-1nXNjvv",
"nips_2022__w2-1nXNjvv",
... |
nips_2022_QjurhjyTAb | Roadblocks for Temporarily Disabling Shortcuts and Learning New Knowledge | Deep learning models have been found to have a tendency to rely on shortcuts, i.e., decision rules that perform well on standard benchmarks but fail when transferred to more challenging testing conditions. Such reliance may hinder deep learning models from learning other task-related features and seriously affect their performance and robustness. Although recent studies have shown some characteristics of shortcuts, there are few investigations on how to help the deep learning models to solve shortcut problems. This paper proposes a framework to address this issue by setting up roadblocks on shortcuts. Specifically, roadblocks are placed when the model is urged to learn to complete a gently modified task to ensure that the learned knowledge, including shortcuts, is insufficient to complete the task. Therefore, the model trained on the modified task will no longer over-rely on shortcuts. Extensive experiments demonstrate that the proposed framework significantly improves the training of networks on both synthetic and real-world datasets in terms of both classification accuracy and feature diversity. Moreover, the visualization results show that the mechanism behind our proposed method is consistent with our expectations. In summary, our approach can effectively disable the shortcuts and thus learn more robust features. | Accept | The submission describes a new method to avoid the shortcut learning behaviour in DNNs. After the rebuttal and discussion, most of the reviewers are positive about this submission, since the proposed method does not require prior knowledge about the dataset and shows strong empirical results on the debiasing task. On the negative side, the reviewers argue that the experimental evaluation is not thorough enough. 
Overall, AC recommends acceptance but asks the authors to perform more rigorous evaluation for the camera-ready version including the fair tuning of the hyperparameters. | train | [
"evE5eTX0-u1",
"wQBZQCszvGc",
"nLJ0XztbFkS",
"ey2y9HKZGLt",
"zrIcrnsYFuv",
"uzxPaoo_1W",
"GCbCxqyW-kB",
"MicqF2OinCS",
"jrHfQSnQbxe",
"eUteb_cZV1f",
"jyJ_ES_smDU",
"E4ELTfU6ZYN",
"DLfQ19cnLm0",
"0Sfo9MplANP",
"UC6yCqOq0OU",
"vJHXoD7KF3B",
"G6mZH01bCnE",
"QaKaUVYvaCe"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank the reviewers and AC for their contributions.\n\nThe reviewers asked many thoughtful questions, which inspired us a lot. Their meticulous review also helped us to better present our work.\n\nAfter discussion, most of the reviewer's concerns were resolved (2/4 reviewers raised their rating). We ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"0Sfo9MplANP",
"ey2y9HKZGLt",
"zrIcrnsYFuv",
"GCbCxqyW-kB",
"MicqF2OinCS",
"jyJ_ES_smDU",
"eUteb_cZV1f",
"jrHfQSnQbxe",
"DLfQ19cnLm0",
"QaKaUVYvaCe",
"G6mZH01bCnE",
"vJHXoD7KF3B",
"UC6yCqOq0OU",
"nips_2022_QjurhjyTAb",
"nips_2022_QjurhjyTAb",
"nips_2022_QjurhjyTAb",
"nips_2022_Qjurhj... |
nips_2022_RgWjps_LdkJ | Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning | Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data - instead given access to a set of expert models and their predictions alongside some limited information about the dataset used to train them. In scenarios from finance to the medical sciences, and even consumer practice, stakeholders have developed models on private data they either cannot, or do not want to, share. Given the value and legislation surrounding personal information, it is not surprising that only the models, and not the data, will be released - the pertinent question becoming: how best to use these models? Previous work has focused on global model selection or ensembling, with the result of a single final model across the feature space. Machine learning models perform notoriously poorly on data outside their training domain however, and so we argue that when ensembling models the weightings for individual instances must reflect their respective domains - in other words models that are more likely to have seen information on that instance should have more attention paid to them. We introduce a method for such an instance-wise ensembling of models, including a novel representation learning step for handling sparse high-dimensional domains. Finally, we demonstrate the need and generalisability of our method on classical machine learning tasks as well as highlighting a real world use case in the pharmacological setting of vancomycin precision dosing. | Accept | This work suggests that in cases where data is sensitive it might be easier to gain access to pre-trained models instead of to the data used for training them. However, since these models were trained on different distributions, their predictions may be better/worse depending on whether the point of interest is in the support of the distribution they trained on. 
Hence, the setting is a sort of learning from experts’ advice [1] where the best expert should be selected locally.
In this work it is assumed that each model (expert) is published together with some information about the distribution on which the model was trained. The assumption that such data may be provided is justified by the common practice of providing descriptive statistics of the data used in publications in the medical domain, typically in Table 1 of such papers.
Several related problems have been studied before this work. One of the main criticisms reviewers had about this work was the incomplete positioning of this work in relation to these earlier studies, especially in the first version of this work submitted for review. The clarifications were given by the authors in the rebuttal. Some differences between this work and prior art may stem from the fact that some earlier studies tried to provide theoretical guarantees, which forced the use of stronger (and explicit) assumptions. It may be that the current work did not have to make such assumptions since it does not contain theoretical analysis.
The term “synthetic” here is used as a nod to synthetic control. The idea is that both in synthetic control and here a convex combination of weak models is used. However, this is true for almost any ensemble model (bagging, random forest, adaboost…). Moreover, in synthetic control the weighting is fixed (global) as opposed to the main selling point of this work which is the local weighting.
A key assumption made in this work is that a model will be confident in its predictions on regions that are in the support of the dataset used for training it. This is stated, for example, in lines 184-185. This assumption is not always correct since the decision boundary of a model could lie in a region of high density on which the model is not confident. Another case in which this assumption might fail is when the information about the distribution used for training, I_m, fails to provide a good description. For example, if the data contains two well separated clusters and the information in I_m is the mean and the variance then the samples generated are likely to be from the area of low density between the two clusters on which the model might have low confidence.
The solution provided is based on intuition and is not equipped with theoretical support. The experiments show encouraging results, but they have their own limitations. For example, in the MNIST experiment, what is the information about the underlying distribution provided to the algorithm? The reviewers made several comments about the empirical evaluation and the authors discussed this at length in their response explaining that since this is a new problem domain, there are no standard benchmarks available.
Overall, this work studies an interesting problem and presents novel ideas. However, these ideas are not fully analyzed since there is no theoretical analysis and the empirical evaluation has its own limitations. It does seem that this work may contribute to problems such as medication dosing demonstrated in Section 4.3, but if this is the main contribution then it is not clear that NeurIPS is the right venue for such work.
This puts this work as a borderline case for NeurIPS: it does present new problem and some novel ideas; however, the analysis has many loose ends.
[1] Cesa-Bianchi, Nicolo, et al. "How to use expert advice." Journal of the ACM (JACM) 44.3 (1997): 427-485.
| train | [
"x6DFYSe9pxg",
"yTYLR1x5nIb",
"6LRcylo0FZZ",
"10Ft_siXU8v",
"CVSAo1u_KX1",
"eGc9EWdE7uV",
"SR3SG16CTTx",
"EwrLZnnp6K",
"nMmeMgybSwY",
"2SLN6IS0I9B",
"Qcpd-I-38K3",
"s7BO_Pz5E08",
"G0JoAMPW6Um",
"8q21Epv-p2D",
"B_sjXTHkp6_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks the authors for your detailed response. The authors' explanation t on the \"real case\" makes good sense to me, but from the perspective of evaluating and demonstrating the proposed method, the \"real case\" may not provide strong evidence/confidence. Overall I feel that the method is promising and meanwhi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"2SLN6IS0I9B",
"6LRcylo0FZZ",
"nMmeMgybSwY",
"B_sjXTHkp6_",
"8q21Epv-p2D",
"nips_2022_RgWjps_LdkJ",
"B_sjXTHkp6_",
"nMmeMgybSwY",
"8q21Epv-p2D",
"G0JoAMPW6Um",
"s7BO_Pz5E08",
"nips_2022_RgWjps_LdkJ",
"nips_2022_RgWjps_LdkJ",
"nips_2022_RgWjps_LdkJ",
"nips_2022_RgWjps_LdkJ"
] |
nips_2022_eQfuHqEsUj | 4D Unsupervised Object Discovery | Object discovery is a core task in computer vision. While fast progresses have been made in supervised object detection, its unsupervised counterpart remains largely unexplored. With the growth of data volume, the expensive cost of annotations is the major limitation hindering further study. Therefore, discovering objects without annotations has great significance. However, this task seems impractical on still-image or point cloud alone due to the lack of discriminative information. Previous studies underlook the crucial temporal information and constraints naturally behind multi-modal inputs. In this paper, we propose 4D unsupervised object discovery, jointly discovering objects from 4D data -- 3D point clouds and 2D RGB images with temporal information. We present the first practical approach for this task by proposing a ClusterNet on 3D point clouds, which is jointly iteratively optimized with a 2D localization network. Extensive experiments on the large-scale Waymo Open Dataset suggest that the localization network and ClusterNet achieve competitive performance on both class-agnostic 2D object detection and 3D instance segmentation, bridging the gap between unsupervised methods and full supervised ones. Codes and models will be made available at https://github.com/Robertwyq/LSMOL. | Accept | This paper focuses on expanding the problem of unsupervised object discovery (detection) to a new setup, where a 3D point cloud is available as well as an RGB sequence. The paper received three detailed reviews from expert reviewers, all of which had their major concerns about the paper resolved through the author rebuttal and author-reviewer discussion period. With the extra analyses and experiments presented in the discussion period, the paper has reached the level of impact and contribution expected by NeurIPS papers. The authors are recommended to add these extra items to the final version of the paper. | test | [
"D8pbxXp2bx",
"x4f5UpbwVOP",
"NqewAhhhQCM",
"vmgMJ2Kv4A",
"OriT8efpZTb",
"tU66dVVuOp",
"JWAh5rlcnV-"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hi,\n\nThe provided response answer my questions. Thanks!",
" Thanks for your valuable comments. Appreciation for the approval and constructive suggestions.\n\nQ1: Thanks for this. We agree that it is important for object discovery to reduce human effort by automatically generating object labels. We attempt to ... | [
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"vmgMJ2Kv4A",
"JWAh5rlcnV-",
"tU66dVVuOp",
"OriT8efpZTb",
"nips_2022_eQfuHqEsUj",
"nips_2022_eQfuHqEsUj",
"nips_2022_eQfuHqEsUj"
] |
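Each row of this dataset pairs parallel lists: `review_ratings` and `review_confidences` use `-1` as a sentinel for entries that are author responses or comments rather than official reviews (the `review_writers` column distinguishes them). A small sketch of filtering those sentinels when aggregating scores, using the rating/confidence values from the row above:

```python
# Ratings/confidences for one row of the dataset; -1 marks author
# replies and comments, not official reviews.
ratings = [-1, -1, -1, -1, 5, 6, 6]
confidences = [-1, -1, -1, -1, 3, 4, 4]

# Keep only the entries that correspond to actual reviews.
valid = [(r, c) for r, c in zip(ratings, confidences) if r != -1]
mean_rating = sum(r for r, _ in valid) / len(valid)
mean_conf = sum(c for _, c in valid) / len(valid)
print(round(mean_rating, 2), round(mean_conf, 2))  # 5.67 3.67
```

The same filtering applies to every row, since each `-1` rating lines up with an `author`-written entry in `review_writers`.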
nips_2022_BNqRpzwyOFU | Hierarchical Normalization for Robust Monocular Depth Estimation | In this paper, we address monocular depth estimation with deep neural networks. To enable training of deep monocular estimation models with various sources of datasets, state-of-the-art methods adopt image-level normalization strategies to generate affine-invariant depth representations. However, learning with the image-level normalization mainly emphasizes the relations of pixel representations with the global statistic in the images, such as the structure of the scene, while the fine-grained depth difference may be overlooked. In this paper, we propose a novel multi-scale depth normalization method that hierarchically normalizes the depth representations based on
spatial information and depth distributions. Compared with previous normalization strategies applied only at the holistic image level, the proposed hierarchical normalization can effectively preserve the fine-grained details and improve accuracy. We present two strategies that define the hierarchical normalization contexts in the depth domain and the spatial domain, respectively. Our extensive experiments show that the proposed normalization strategy remarkably outperforms previous normalization methods, and we set a new state-of-the-art on five zero-shot transfer benchmark datasets. | Accept | This paper addresses the problem of training a monocular depth estimation network from variable sources of data. As opposed to only using a single scaling factor as in existing work, the authors propose local schemes for normalising. While the proposed approaches are conceptually simple, they result in a non-trivial boost in performance (both qualitatively and quantitatively) and will likely be of interest in the field of monocular depth estimation.
The reviewers were broadly in support of this paper. This area-chair agrees, and recommends acceptance. However, the authors are strongly encouraged to incorporate the valuable comments and suggestions from the reviewers into the revised text.
Minor comments:
* Fig 2 (a) is not clear and should be revised to make it clearer what it is trying to communicate.
* Re-title Section 5.1 to “Limitations”
* Add the new results for NYUv2
* The two supplementary videos are not very informative. Should consider using different examples
| train | [
"cgv2699_0B",
"XMiM1ap0zxz",
"iVwxX7w-0JY",
"o2dxZsuceGn",
"dRjDHGyEDw_",
"L2krzNkrFR",
"zcdj85SVNql",
"xhjGETmP95y",
"4DOeKoHwQOf",
"3yBC9hcgzJ2",
"s9GblvLaZCj"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your suggestions. We will add this part to our manuscript and elaborate more. Yes, noise is an important reason for using median, and the mean vary per change in the shift. For example, inaccurate predictions in distant areas may constitute noise in the mean representations. When the depth values of al... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"XMiM1ap0zxz",
"o2dxZsuceGn",
"dRjDHGyEDw_",
"3yBC9hcgzJ2",
"4DOeKoHwQOf",
"s9GblvLaZCj",
"xhjGETmP95y",
"nips_2022_BNqRpzwyOFU",
"nips_2022_BNqRpzwyOFU",
"nips_2022_BNqRpzwyOFU",
"nips_2022_BNqRpzwyOFU"
] |
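The hierarchical normalization described in the abstract above can be illustrated with a toy sketch (my own assumption-laden illustration, not the paper's implementation): depth is normalized with a scale-and-shift-invariant transform computed both over the whole image and over local spatial windows, and the two are combined so that fine-grained local depth differences survive the global normalization.

```python
import numpy as np

def ssi_normalize(d):
    """Scale-and-shift-invariant normalization: subtract the median,
    divide by the mean absolute deviation (a common affine-invariant form)."""
    t = np.median(d)
    s = np.mean(np.abs(d - t)) + 1e-8
    return (d - t) / s

def hierarchical_normalize(depth, window=4):
    """Average an image-level and a window-level normalized depth map."""
    h, w = depth.shape
    global_norm = ssi_normalize(depth)
    local_norm = np.zeros_like(depth)
    for i in range(0, h, window):
        for j in range(0, w, window):
            patch = depth[i:i + window, j:j + window]
            local_norm[i:i + window, j:j + window] = ssi_normalize(patch)
    return 0.5 * (global_norm + local_norm)

depth = np.random.rand(8, 8) * 10.0
out = hierarchical_normalize(depth)
```

In training, a loss would compare such normalized predictions against normalized ground truth, so that each local window contributes its own affine-invariant supervision signal.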
nips_2022_TN4UpY_Qzo | Whitening Convergence Rate of Coupling-based Normalizing Flows | Coupling-based normalizing flows (e.g. RealNVP) are a popular family of normalizing flow architectures that work surprisingly well in practice. This calls for theoretical understanding. Existing work shows that such flows weakly converge to arbitrary data distributions. However, they make no statement about the stricter convergence criterion used in practice, the maximum likelihood loss. For the first time, we make a quantitative statement about this kind of convergence: We prove that all coupling-based normalizing flows perform whitening of the data distribution (i.e. diagonalize the covariance matrix) and derive corresponding convergence bounds that show a linear convergence rate in the depth of the flow. Numerical experiments demonstrate the implications of our theory and point at open questions. | Accept | In this work, the authors analyze the convergence of affine coupling flows by providing a theoretical analysis of the whitening convergence rate. While previous analyses were derived for the optimal transport, the reviewers have appreciated the point of view provided by viewing the affine coupling layers as whitening transformations. They have however regretted that the non-Gaussianity term was ignored in the theoretical analysis. All the reviewers found the work relevant, interesting, with meaningful theoretical and empirical results. Therefore I do recommend acceptance of this paper. | train | [
"98umemuh2wM",
"_ANhalNCWc",
"MyDJaCq92f2",
"IQTwNdpcJUE",
"HBJ2TFX5dN-",
"I1IsdUnd4Xg",
"_XuqSAFXfCk",
"YQTbYn6xCO7",
"HxXBOkYRBRs",
"lImxYnZcwM8",
"SkgyUAJdz5L"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your answers and clarifications. I updated my review and raised the score.\n\n",
" I thank the authors for addressing my concerns. I upgraded my score to 7.",
" We cordially thank you for your helpful feedback and hope to address the limitations you mentioned in the following:\n\n> Whe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"IQTwNdpcJUE",
"MyDJaCq92f2",
"SkgyUAJdz5L",
"lImxYnZcwM8",
"HxXBOkYRBRs",
"YQTbYn6xCO7",
"nips_2022_TN4UpY_Qzo",
"nips_2022_TN4UpY_Qzo",
"nips_2022_TN4UpY_Qzo",
"nips_2022_TN4UpY_Qzo",
"nips_2022_TN4UpY_Qzo"
] |
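The whitening claim in the abstract above — that coupling layers diagonalize the data covariance — can be checked on a toy example. The sketch below (an illustration of the criterion, not the paper's proof or a trained RealNVP) applies one hand-chosen affine coupling step, `y1 = x1`, `y2 = x2 + t(x1)` with a linear shift, to correlated 2-D Gaussian data and shows the off-diagonal covariance vanishing:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D Gaussian data: covariance A @ A.T has off-diagonal 0.8.
A = np.array([[1.0, 0.0], [0.8, 0.6]])
x = rng.standard_normal((10000, 2)) @ A.T

cov = np.cov(x, rowvar=False)
# One affine coupling step: pass x1 through unchanged, shift x2 by a
# linear function of x1 (the regression coefficient) chosen to cancel
# the covariance between the two coordinates.
b = cov[0, 1] / cov[0, 0]
y = x.copy()
y[:, 1] = x[:, 1] - b * x[:, 0]

cov_y = np.cov(y, rowvar=False)
print(abs(cov[0, 1]), abs(cov_y[0, 1]))  # off-diagonal shrinks to ~0
```

A deep flow alternates which coordinates pass through, so repeated steps of this kind progressively whiten all pairs; the paper's result bounds how fast that happens in the depth of the flow.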
nips_2022_GwXrGy_vc8m | Estimating Noise Transition Matrix with Label Correlations for Noisy Multi-Label Learning | In label-noise learning, the noise transition matrix, bridging the class posterior for noisy and clean data, has been widely exploited to learn statistically consistent classifiers. The effectiveness of these algorithms relies heavily on estimating the transition matrix. Recently, the problem of label-noise learning in multi-label classification has received increasing attention, and these consistent algorithms can be applied in multi-label cases. However, the estimation of transition matrices in noisy multi-label learning has not been studied and remains challenging, since most of the existing estimators in noisy multi-class learning depend on the existence of anchor points and the accurate fitting of the noisy class posterior. To address this problem, in this paper, we first study the identifiability problem of the class-dependent transition matrix in noisy multi-label learning, and then, inspired by the identifiability results, we propose a new estimator that exploits label correlations and requires neither anchor points nor accurate fitting of the noisy class posterior. Specifically, we estimate the occurrence probability of two noisy labels to get noisy label correlations. Then, we perform sample selection to further extract information that implies clean label correlations, which is used to estimate the occurrence probability of one noisy label when a certain clean label appears. By utilizing the mismatch of label correlations implied in these occurrence probabilities, the transition matrix is identifiable, and can then be acquired by solving a simple bilinear decomposition problem. Empirical results demonstrate the effectiveness of our estimator to estimate the transition matrix with label correlations, leading to better classification performance. Source codes are available at https://github.com/ShikunLi/Estimating_T_For_Noisy_Mutli-Labels.
| Accept | Estimating the noise transition matrix for handling noisy labels in multi-label learning. Good experimental work illustrating the estimation of transition matrices. Reviewers liked the theory and the write-up. The paper has improved its citations and writing.
There was some discussion about the assumptions. Nuances of this should be addressed in the revised paper, for instance your comments about class imbalance.
Regarding reviewer KMFh's Q5: Note retrieval metrics (e.g., R@K) have been widely used in multi-label classification, although versions of F1 are probably more common. They give alternative looks at the errors.
Regarding Reviewer n5cR's weakness 2: would be nice to do summary plots and/or win/loss tables and put some of the big tables in appendices.
| train | [
"kMb14T_2bo3",
"bn2Mqazl2a",
"rEhznd9Q7Kb",
"mi65h3Uu7OL",
"ImrwxFMDsG",
"g_ru2EyLv2",
"I5I8QUQ3UZ",
"eEvU4YFw7k",
"O8ssLygdECe",
"kcJCCPtCW1U",
"sa--Du3gzTB",
"AvsDpF2jR-k",
"02nadlsOd5g",
"JIDiriK_7l7",
"LwXTdj7ToEY",
"8aEznvc_ozd",
"lLwAdAoH1Ph",
"rRQ00r1SpUO",
"fdUA2hJmOHu",
... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" Thanks again for your kind comments. We will carefully consider your suggestions to further revise our paper.",
" Thanks a lot for your kind reminder. We will further carefully consider Eq.(1) and add more explanations in the revised version.",
" Thanks very much for your careful and insightful review! We al... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
1
] | [
"I5I8QUQ3UZ",
"eEvU4YFw7k",
"ImrwxFMDsG",
"AvsDpF2jR-k",
"g_ru2EyLv2",
"AvsDpF2jR-k",
"40mwN1tRK7",
"O8ssLygdECe",
"sa--Du3gzTB",
"pGYGI3H1HhB",
"j6cJY-KvHiH",
"rbtIWurtrn",
"pGYGI3H1HhB",
"EK_fvoWwaiH",
"JKzoCZfWy1Y",
"pGYGI3H1HhB",
"ScNtecbcONF",
"EK_fvoWwaiH",
"F1AsW4wF82",
... |
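The role of the class-dependent transition matrix in the abstract above — bridging clean and noisy class posteriors — can be sketched for a single class of a multi-label problem. The numbers below are assumed for illustration, not taken from the paper: for class j, a 2×2 matrix T with T[c, n] = P(noisy label = n | clean label = c) maps the clean posterior to the noisy one, which is the relation consistent algorithms invert for loss correction.

```python
import numpy as np

# Illustrative per-class transition matrix (assumed values):
# 10% false-positive rate on clean negatives, 20% false-negative
# rate on clean positives.
T = np.array([[0.9, 0.1],   # clean 0 -> noisy 0 / noisy 1
              [0.2, 0.8]])  # clean 1 -> noisy 0 / noisy 1

def noisy_posterior(p_clean_1, T):
    """Map P(clean=1 | x) to P(noisy=1 | x) through the transition matrix."""
    p_clean = np.array([1.0 - p_clean_1, p_clean_1])
    return float(p_clean @ T[:, 1])

print(noisy_posterior(1.0, T))  # 0.8
print(noisy_posterior(0.0, T))  # 0.1
```

Given an accurate T, a classifier trained to fit the noisy posterior can be corrected back to the clean one; the paper's contribution is estimating T itself, via label correlations, without anchor points.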