paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2022_DfMqlB0PXjM | Interpretable Unsupervised Diversity Denoising and Artefact Removal | Image denoising and artefact removal are complex inverse problems admitting multiple valid solutions. Unsupervised diversity restoration, that is, obtaining a diverse set of possible restorations given a corrupted image, is important for ambiguity removal in many applications such as microscopy where paired data for supervised training are often unobtainable. In real-world applications, imaging noise and artefacts are typically hard to model, leading to unsatisfactory performance of existing unsupervised approaches. This work presents an interpretable approach for unsupervised and diverse image restoration. To this end, we introduce a capable architecture called Hierarchical DivNoising (HDN) based on a hierarchical Variational Autoencoder. We show that HDN learns an interpretable multi-scale representation of artefacts and we leverage this interpretability to remove imaging artefacts commonly occurring in microscopy data. Our method achieves state-of-the-art results on twelve benchmark image denoising datasets while providing access to a whole distribution of sensibly restored solutions.
Additionally, we demonstrate on three real microscopy datasets that HDN removes artefacts without supervision, being the first method capable of doing so while generating multiple plausible restorations all consistent with the given corrupted image. | Accept (Spotlight) | A multi-scale hierarchical variational autoencoder-based technique is developed for unsupervised image denoising and artefact removal. The method is shown to achieve state-of-the-art performance on several datasets. Further, the multi-scale latent representation leads to an interpretable visualization of the denoising process.
The reviewers unanimously recommend acceptance. | train | [
"I4UXy_gpWu",
"c4Gq_eyvDw",
"8PK3vovCMj-",
"5yw0sTFgxWV",
"saFriN4jBY",
"NnW6muGghOk",
"s47PkI2Lt78",
"60HQCXrpC3p",
"4SrxxsMAjj-",
"h4IxSBmLfUS",
"p1wFaCi9dhX",
"bu7dBcSmtTT",
"5ST-v2KwGpa"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for incorporating the additional analyses.",
"The paper proposes an image restoration method that can not only reduce pixelwise noises but also remove artifacts in resultant images. Introducing the idea of hierarchical representation of latent variables analogous to VAEs, the proposed method improves ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"60HQCXrpC3p",
"iclr_2022_DfMqlB0PXjM",
"5yw0sTFgxWV",
"c4Gq_eyvDw",
"h4IxSBmLfUS",
"4SrxxsMAjj-",
"5yw0sTFgxWV",
"p1wFaCi9dhX",
"5ST-v2KwGpa",
"bu7dBcSmtTT",
"iclr_2022_DfMqlB0PXjM",
"iclr_2022_DfMqlB0PXjM",
"iclr_2022_DfMqlB0PXjM"
] |
iclr_2022_IK9ap6nxXr2 | Interacting Contour Stochastic Gradient Langevin Dynamics | We propose an interacting contour stochastic gradient Langevin dynamics (ICSGLD) sampler, an embarrassingly parallel multiple-chain contour stochastic gradient Langevin dynamics (CSGLD) sampler with efficient interactions. We show that ICSGLD can be theoretically more efficient than a single-chain CSGLD with an equivalent computational budget. We also present a novel random-field function, which facilitates the estimation of self-adapting parameters in big data and obtains free mode explorations. Empirically, we compare the proposed algorithm with popular benchmark methods for posterior sampling. The numerical results show the great potential of ICSGLD for large-scale uncertainty estimation tasks. | Accept (Poster) | This paper proposes a new variant of a stochastic gradient Langevin dynamics sampler that relies on two key ideas: approximation of the target density with a simpler function (as in [Deng, 2020]) and the parallel simulation of many chains. The authors also prove that their approach can be theoretically more efficient than a single-chain algorithm.
The reviewers see the contribution as significant although they did raise some concerns regarding the clarity of the paper. Since these concerns do not appear to be major, I recommend acceptance but I advise the authors to address the comments of the reviewers to maximize the impact of the paper. | train | [
"a6__SsHyPI9",
"ppk5eN-KALF",
"oXYFOMTyhZw",
"jalOMjJsUKK",
"3nH0FKr3clc",
"duSrMoFZ2g",
"_w_GqrJfYuH",
"E_RapCEtLhF",
"CxR0lva-tdg",
"fZLzd44xMLz",
"Uln5nPnijne",
"QgSMJYUYcM",
"zglGAvRAlL",
"cVHMt3oavwL",
"zuFEojA82rI",
"rnohZ3P9jXO",
"LEfVK3i_vGY",
"11POWKlWMRY",
"RrfG69viQY9"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_rev... | [
" I thank the authors for their response.\n\nAlthough the additional experiment was only in the toy model, it did answer my question.\nAlthough how $\\zeta$ should be tuned is still a mystery, I agree that it is an additional hyperparameter that affects convergence.\nAlthough the property of $\\theta_\\ast$ is not ... | [
-1,
6,
-1,
-1,
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"CxR0lva-tdg",
"iclr_2022_IK9ap6nxXr2",
"zuFEojA82rI",
"duSrMoFZ2g",
"iclr_2022_IK9ap6nxXr2",
"zglGAvRAlL",
"cVHMt3oavwL",
"iclr_2022_IK9ap6nxXr2",
"Uln5nPnijne",
"QgSMJYUYcM",
"LEfVK3i_vGY",
"11POWKlWMRY",
"rnohZ3P9jXO",
"E_RapCEtLhF",
"6aK2w4NZ8y",
"3nH0FKr3clc",
"ppk5eN-KALF",
"... |
iclr_2022_J_F_qqCE3Z5 | DKM: Differentiable k-Means Clustering Layer for Neural Network Compression | Deep neural network (DNN) model compression for efficient on-device inference is becoming increasingly important to reduce memory requirements and keep user data on-device. To this end, we propose a novel differentiable k-means clustering layer (DKM) and its application to train-time weight clustering-based DNN model compression. DKM casts k-means clustering as an attention problem and enables joint optimization of the DNN parameters and clustering centroids. Unlike prior works that rely on additional regularizers and parameters, DKM-based compression keeps the original loss function and model architecture fixed. We evaluated DKM-based compression on various DNN models for computer vision and natural language processing (NLP) tasks. Our results demonstrate that DKM delivers a superior compression-accuracy trade-off on ImageNet1k and GLUE benchmarks. For example, DKM-based compression can offer 74.5% top-1 ImageNet1k accuracy on the ResNet50 DNN model with a 3.3MB model size (29.4x model compression factor). For MobileNet-v1, which is a challenging DNN to compress, DKM delivers 63.9% top-1 ImageNet1k accuracy with a 0.72 MB model size (22.4x model compression factor). This result is 6.8% higher top-1 accuracy and a 33% relatively smaller model size than the current state-of-the-art DNN compression algorithms. Additionally, DKM enables compression of the DistilBERT model by 11.8x with minimal (1.1%) accuracy loss on GLUE NLP benchmarks. | Accept (Poster) | The paper proposes a simple approach to quantizing neural network weights with encouraging empirical results. The authors did work hard to improve the paper and address reviewers' concerns during the discussion period. I believe the presentation of results can improve by adding a discussion of inference time. I am not sure if all of the baselines (e.g., in Figure 4) have the same inference cost.
PS1: The method does seem to unroll the iterative optimization process (i.e., EM) of a Gaussian mixture model (GMM) and differentiates through the unrolled iterations. The paper makes the connection to attention, but does not seem to make a clear connection with GMM and EM. If this connection is correct, adding a discussion would be helpful.
PS2: I am not a big fan of using differentiable k-means as the method name. Differentiable k-means is confusing partly because k-means is differentiable, i.e., one can optimize k-means centers using gradient descent. The proposed approach seems more relevant to meta-learning, where one differentiates through one optimization process to optimize a secondary objective.
"QRzNi1QVr7C",
"LlgNCSRUluN",
"tk0t7YGd_ry",
"BvFrtLtjok_",
"weqNwzRCNXa",
"VXTNT16FtjS",
"f0X8V5NmXSc",
"n7WWYa3f3A",
"H6OcImOkA30",
"7-MFcVXqOVd",
"DHtsAcFfKv",
"kKVyxLb_TV",
"HcvmCMq5hkx",
"7Hwohdq60IM",
"hfEs_-kJ_Uf",
"R2fnFWA4rf9",
"jy5W-y7fBRQ",
"YvR2_KKTQpv",
"LF_sYxu7C3u"... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors again for addressing my concerns and revising the paper. I think the paper has useful results and I keep my original score.",
" We appreciate your valuable reviews. Based on your reviews, we make our theoretical connection to Expectation-Maximization clearer in Section 3 and Appendix G, to p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"n7WWYa3f3A",
"LF_sYxu7C3u",
"BvFrtLtjok_",
"kKVyxLb_TV",
"7-MFcVXqOVd",
"YvR2_KKTQpv",
"n7WWYa3f3A",
"hfEs_-kJ_Uf",
"iclr_2022_J_F_qqCE3Z5",
"7Hwohdq60IM",
"H6OcImOkA30",
"HcvmCMq5hkx",
"VXTNT16FtjS",
"DHtsAcFfKv",
"jy5W-y7fBRQ",
"iclr_2022_J_F_qqCE3Z5",
"iclr_2022_J_F_qqCE3Z5",
"... |
iclr_2022__BNiN4IjC5 | PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior | Denoising diffusion probabilistic models have been recently proposed to generate high-quality samples by estimating the gradient of the data density. The framework assumes the prior noise as a standard Gaussian distribution, whereas the corresponding data distribution may be more complicated than the standard Gaussian distribution, which potentially introduces inefficiency in denoising the prior noise into the data sample because of the discrepancy between the data and the prior. In this paper, we propose PriorGrad to improve the efficiency of the conditional diffusion model (for example, a vocoder using a mel-spectrogram as the condition) by applying an adaptive prior derived from the data statistics based on the conditional information. We formulate the training and sampling procedures of PriorGrad and demonstrate the advantages of an adaptive prior through a theoretical analysis. Focusing on the audio domain, we consider the recently proposed diffusion-based audio generative models based on both the spectral and time domains and show that PriorGrad achieves faster convergence and superior performance, leading to an improved perceptual quality and tolerance to a smaller network capacity, and thereby demonstrating the efficiency of a data-dependent adaptive prior. | Accept (Poster) | This paper suggests using a conditional prior in conditional diffusion-based generative models. Typically, only the score function estimator is provided with the conditioning signal, and the prior is an unconditional standard Gaussian distribution. It is shown that making the prior conditional improves results on speech generation tasks.
Several reviewers initially recommended rejection, but after extensive discussion and interaction with the authors, all reviewers have given this work a "borderline accept" rating.
Criticisms included that the idea is too simple or obvious to warrant an ICLR paper. I am inclined to disagree: simple ideas that work are often the ones that persist and see rapid adoption (dropout regularisation is my favourite example). Like the authors, I believe simplicity is an advantage in this respect, rather than a disadvantage. Of course, simple ideas do require extensive and convincing empirical validation to be worth publishing at ICLR. After the authors' updates, I believe the work meets this bar.
Another issue raised by several reviewers is the limited theoretical justification for the approach. However, combined with the simplicity of the method, I believe the empirical results of the revised version sufficiently justify the approach on their own. Nevertheless, I would recommend that the authors consider further how they could address this issue in the final version of their manuscript, as they have already begun to do during the discussion phase.
Another way to strengthen the paper further would be to demonstrate how the generic approach can be applied in a different domain (e.g. conditional image generation), but I do not consider this addition necessary for the work to warrant publication.
In light of this, I am recommending acceptance. | train | [
"i0Rtry-6j0Y",
"XZNYQygE5Wz",
"cpKYDZcr506",
"8J-advdpWlv",
"vgMiNZCck3M",
"3gu552yj45T",
"E-w4-vYAzN2",
"qUeW-RYfx4I",
"oAOqGpnW81_",
"6leNNNyVUT",
"vSjzl-1aZtO",
"NyYqkL-M52",
"Un6zRy4WvrM",
"oj9cP2XEtcT",
"XJweN-nVSyI",
"diN9N-_X-_x"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This work builds on denoising diffusion probabilistic models (DDPM), and argues for to modify the forward and backward diffusion processes such that instead of using an uninformative prior $p(\\mathbf{x}_T) = \\mathcal{N}(0,\\mathbf{I})$, they use a data-dependent prior $p(\\mathbf{x}_T) = \\mathcal{N}(\\mu, \\Sig... | [
6,
6,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2022__BNiN4IjC5",
"iclr_2022__BNiN4IjC5",
"iclr_2022__BNiN4IjC5",
"vgMiNZCck3M",
"qUeW-RYfx4I",
"iclr_2022__BNiN4IjC5",
"vSjzl-1aZtO",
"XJweN-nVSyI",
"XJweN-nVSyI",
"i0Rtry-6j0Y",
"NyYqkL-M52",
"3gu552yj45T",
"oj9cP2XEtcT",
"XZNYQygE5Wz",
"cpKYDZcr506",
"iclr_2022__BNiN4IjC5"
] |
iclr_2022_PlKWVd2yBkY | Pseudo Numerical Methods for Diffusion Models on Manifolds | Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce a sample. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at high speedup rates, which limits their practicability. To accelerate the inference process while keeping the sample quality, we provide a new perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best method in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher-quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules. | Accept (Poster) | This paper presents a new DDPM model based on solving differential equations on a manifold. The resulting numerics appear to be favorable, with faster performance than past models.
Most of the reviewers thought the main result was of interest and were impressed with the performance. Reviewer c9bY points out some challenging issues and analytical questions that remain unanswered in the text; they also have some simpler textual revisions that seem less important.
In general, this paper has the misfortune of receiving reviews whose confidence appears to be low. While this is partially a byproduct of the noisy machine learning review system, the difficulty of the text itself is substantial and made the paper less than approachable; the authors are encouraged to continue to revise their text based on feedback from as many readers as possible. That said, the authors were quite responsive to reviewer comments during the rebuttal phase, which significantly improved the text.
Overall this is a borderline case, and the AC also had some difficulty following details of this technically dense paper. Given the positive *technical* assessments of the work and at least one reviewer defending the paper's clarity, the AC is willing to give this paper the benefit of the doubt. | train | [
"MGDCJSB5NAc",
"waaiNwfqqPQ",
"yhs3-_Wdcp",
"6NuNll_oBIH",
"pGzHmWvLEsq",
"7kkVy8bv4",
"Z4C92AU04X_",
"3wJWg-p3VZ",
"QV5rZ30BUBa",
"jDHwJB3qaw",
"wbh8Qpg1358",
"R924us_t71b",
"4NDo13d7bhU"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer jhKu,\n\nWe hope you've had a chance to read our response and the revised paper. We would really appreciate a reply before the end of the discussion period about whether we have addressed your concerns or if any additional concerns remain. We are happy to address any remaining concerns and eagerly l... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
3,
3
] | [
"3wJWg-p3VZ",
"yhs3-_Wdcp",
"pGzHmWvLEsq",
"iclr_2022_PlKWVd2yBkY",
"4NDo13d7bhU",
"pGzHmWvLEsq",
"R924us_t71b",
"jDHwJB3qaw",
"wbh8Qpg1358",
"iclr_2022_PlKWVd2yBkY",
"iclr_2022_PlKWVd2yBkY",
"iclr_2022_PlKWVd2yBkY",
"iclr_2022_PlKWVd2yBkY"
] |
iclr_2022_EMigfE6ZeS | Hybrid Random Features | We propose a new class of random feature methods for linearizing softmax and Gaussian kernels called hybrid random features (HRFs) that automatically adapt the quality of kernel estimation to provide the most accurate approximation in the defined regions of interest. Special instantiations of HRFs lead to well-known methods such as trigonometric (Rahimi & Recht, 2007) or (recently introduced in the context of linear-attention Transformers) positive random features (Choromanski et al., 2021). By generalizing Bochner’s Theorem for softmax/Gaussian kernels and leveraging random features for compositional kernels, the HRF-mechanism provides strong theoretical guarantees - unbiased approximation and strictly smaller worst-case relative errors than its counterparts. We conduct an exhaustive empirical evaluation of HRF ranging from pointwise kernel estimation experiments, through tests on data admitting clustering structure, to benchmarking implicit-attention Transformers (also for downstream Robotics applications), demonstrating its quality in a wide spectrum of machine learning problems. | Accept (Poster) | Kernel methods are among the most flexible and powerful approaches of our times. Random features (RF) provide a recent mechanism to also make them scalable due to the associated finite (and often small)-dimensional approximate feature map (in the paper referred to as linearization). The focus of the submission is the linearization of the softmax kernel (defined in (1)) while making sure that the obtained RF approximation is accurate simultaneously for the small and the large kernel values. The authors present a hybrid random feature (HRF, defined in (8)) construction parameterized by base estimators and weights, and show that a specific choice of these parameters is capable of implementing the goal. Some of the HRF estimators are also accompanied by theoretical guarantees (Section 3). Their numerical efficiency is illustrated (Section 4) on synthetic examples and in the context of natural language and speech modelling, and in robotics.
Scaling up kernel methods is a fundamental task of machine learning. The authors present a nice and valuable construction in this direction which can be of both theoretical and practical interest to the community.
The submission would benefit from addressing the reviewers' remarks to improve its clarity. | train | [
"Lhg4dYszOJV",
"Zqxy986QS1m",
"14b1hghv4V",
"lpQQVZTXl2p",
"aLoEsc5FJ8v",
"e0iBxC2lRm9",
"b3DOQoBJZnT",
"dkscsTOP_13",
"GBSmDphT9r",
"fsG29rgZtg5",
"ddtx2u24u_b",
"sXk-e-k_XVK",
"24rsZzzQ0jYG",
"rmYsMaTuXF6",
"0J9a9a65t3ne",
"nFTg1-YeFU",
"VsKjT_TJZahT",
"T9n5LeJ6iG",
"NI_-_DJK-A... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" We once more thank the Reviewer for the very valuable and detailed feedback that helped us to improve the presentation of the paper. ",
" I thank the authors for addressing my remarks and improving the presentation of the paper. Please find my review updated above.",
"The paper proposes a new type of estimato... | [
-1,
-1,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"Zqxy986QS1m",
"b3DOQoBJZnT",
"iclr_2022_EMigfE6ZeS",
"e0iBxC2lRm9",
"iclr_2022_EMigfE6ZeS",
"dkscsTOP_13",
"14b1hghv4V",
"aLoEsc5FJ8v",
"sXk-e-k_XVK",
"ddtx2u24u_b",
"T9n5LeJ6iG",
"iclr_2022_EMigfE6ZeS",
"aLoEsc5FJ8v",
"14b1hghv4V",
"sXk-e-k_XVK",
"sXk-e-k_XVK",
"sXk-e-k_XVK",
"sX... |
iclr_2022_qj1IZ-6TInc | Real-Time Neural Voice Camouflage | Automatic speech recognition systems have created exciting possibilities for applications; however, they also enable opportunities for systematic eavesdropping. We propose a method to camouflage a person's voice from these systems without inconveniencing the conversation between people in the room. Standard adversarial attacks are not effective in real-time streaming situations because the characteristics of the signal will have changed by the time the attack is executed. We introduce predictive adversarial attacks, which achieve real-time performance by forecasting the attack vector that will be the most effective in the future. Under real-time constraints, our method jams the established speech recognition system DeepSpeech 3.9x more than online projected gradient descent as measured through word error rate, and 6.6x more as measured through character error rate. We furthermore demonstrate our approach is practically effective in realistic environments with complex scene geometries. | Accept (Oral) | This paper proposes a novel neural voice camouflage method that learns predictive attacks without any constraints on input and output. It is general, robust, and real-time, and could be used in real-world scenarios. The experiments are solid, and the in-depth analyses are convincing.
"WFjPtorzCIv",
"C8Bth22lVHo",
"mDL72nYr2_0",
"swxzrIXYiH",
"EncMF6s6j97",
"YTae_1Nyv4Q",
"bcdKdmdRpuf",
"G1jJVppXsH1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a Neural Voice Camouflage (NVC) method that has three important characteristics, which are essential for an NVC method to be used in practical scenarios: general, real-time, and robust. Since the proposed method trains a model to learn predictive attacks without any constraints about input and ... | [
8,
-1,
-1,
8,
-1,
-1,
-1,
8
] | [
4,
-1,
-1,
4,
-1,
-1,
-1,
4
] | [
"iclr_2022_qj1IZ-6TInc",
"EncMF6s6j97",
"YTae_1Nyv4Q",
"iclr_2022_qj1IZ-6TInc",
"WFjPtorzCIv",
"swxzrIXYiH",
"G1jJVppXsH1",
"iclr_2022_qj1IZ-6TInc"
] |
iclr_2022_ieNJYujcGDO | Towards Understanding the Data Dependency of Mixup-style Training | In the Mixup training paradigm, a model is trained using convex combinations of data points and their associated labels. Despite seeing very few true data points during training, models trained using Mixup seem to still minimize the original empirical risk and exhibit better generalization and robustness on various tasks when compared to standard training. In this paper, we investigate how these benefits of Mixup training rely on properties of the data in the context of classification. For minimizing the original empirical risk, we compute a closed form for the Mixup-optimal classification, which allows us to construct a simple dataset on which minimizing the Mixup loss leads to learning a classifier that does not minimize the empirical loss on the data. On the other hand, we also give sufficient conditions for Mixup training to also minimize the original empirical risk. For generalization, we characterize the margin of a Mixup classifier, and use this to understand why the decision boundary of a Mixup classifier can adapt better to the full structure of the training data when compared to standard training. In contrast, we also show that, for a large class of linear models and linearly separable datasets, Mixup training leads to learning the same classifier as standard training. | Accept (Spotlight) | This paper presents an interesting analysis of mixup, discussing when it works and when it fails. The theory is further illustrated with small but intuitive examples, which facilitates understanding the underlying phenomena and verifies the correctness of the predictions made by the theory. The submission has received three reviews with high variance ranging from 3 to 8: mn55 favoring rejection while eGEK recommending acceptance. I read all the reviews and the authors' response. Unfortunately, mn55 did not follow up to express how convinced they are with the authors' reply, but I do find the responses to mn55 very solid and convincing. In concordance with eGEK, I do find the provided analysis important and helpful, and the presentation of the theory through concrete examples very compelling. | train | [
"oCVSFg4S93j",
"5djwWk-Jc-N",
"0jQbYrUZjcT",
"xEL0KRhNdCv",
"oAudC9AZgi",
"uhSwB7qSVAG",
"tst7V4nbYlz",
"_Q0amTQuSAM",
"x0BV-f9GIBd"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors for answering my raised questions in detail. \nThe revision at the end of Section 2.4 is now concise and clear to me.\n\nI have no other questions at this point and will discuss the paper with other reviewers and ACs.",
" We would like to thank the reviewers for their many helpful comm... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"xEL0KRhNdCv",
"iclr_2022_ieNJYujcGDO",
"_Q0amTQuSAM",
"x0BV-f9GIBd",
"0jQbYrUZjcT",
"tst7V4nbYlz",
"iclr_2022_ieNJYujcGDO",
"iclr_2022_ieNJYujcGDO",
"iclr_2022_ieNJYujcGDO"
] |
iclr_2022_wENMvIsxNN | D-CODE: Discovering Closed-form ODEs from Observed Trajectories | For centuries, scientists have manually designed closed-form ordinary differential equations (ODEs) to model dynamical systems. An automated tool to distill closed-form ODEs from observed trajectories would accelerate the modeling process. Traditionally, symbolic regression is used to uncover a closed-form prediction function $a=f(b)$ with label-feature pairs $(a_i, b_i)$ as training examples. However, an ODE models the time derivative $\dot{x}(t)$ of a dynamical system, e.g. $\dot{x}(t) = f(x(t),t)$, and the "label" $\dot{x}(t)$ is usually *not* observed. The existing ways to bridge this gap only perform well for a narrow range of settings with low measurement noise, frequent sampling, and non-chaotic dynamics. In this work, we propose the Discovery of Closed-form ODE framework (D-CODE), which advances symbolic regression beyond the paradigm of supervised learning. D-CODE leverages a novel objective function based on the variational formulation of ODEs to bypass the unobserved time derivative. For formal justification, we prove that this objective is a valid proxy for the estimation error of the true (but unknown) ODE. In the experiments, D-CODE successfully discovered the governing equations of a diverse range of dynamical systems under challenging measurement settings with high noise and infrequent sampling. | Accept (Spotlight) | This paper introduces a new technique for discovering closed-form functional forms (ordinary differential equations) that explain noisy observed trajectories x(t) where the "label" x'(t) = f(x(t), t) is not observed, but without trying to approximate it. The method first tries to approximate a smoother trajectory x^hat(t), then relies on a variational formulation using a loss function over functionals {C_j}_j, defined in terms of an orthonormal basis {g_1, …, g_S} of sampling functions such that the sum of squares of all the C_j approximates the theoretical distance between f(x) and the solution f*(x). These sampling functions are typically chosen to be a basis of sine functions. The method is evaluated on several canonical ODEs (growth model, glycolytic oscillator, Lorenz chaotic attractor) and compared to Gaussian process-based differentiation, to spline-based differentiation, regularised differentiation, and applied to model the temporal effect of chemotherapy on tumor volume.
Reviewers found that the paper was well-motivated and easy to follow (EBvJ), well evaluated (EBvJ), and offering new perspectives on symbolic regression (79Ft). Reviewer vaG3 had their concerns addressed. Reviewer ZddY had concerns about the running time (a misunderstanding that was clarified) and the lack of comparison to a simple baseline consisting of double optimisation over f and x^hat(0) using Neural ODEs (the authors have added a Neural ODE baseline but were in disagreement with ZddY and 79Ft about their limitations).
Reviewers engaged in a discussion with the authors, and the scores are 6, 6, 8, 8. I believe that the paper definitely meets the conference acceptance bar and would advocate for its inclusion as a spotlight in the conference. | val | [
"DK4TZf1rje4",
"oE6KjWhbWbz",
"DF2jinkPevJ",
"IAM0iK0rJFW",
"UytEx9uVru8",
"dRMTemIvN7M",
"hOH9yQJtrhO",
"rFVVNuwxWe3",
"9DVrOIObVC1",
"0GXXGlLcVC8",
"037N81BTy55",
"bUzvuHd1rV",
"89kz9b7Pszm",
"plA2ubIFr_n",
"RUFW6diwWQ",
"9yzvjFOj1JJ",
"KBJWD3eWt0z",
"xhw7Qu61zZM",
"Vk8787RNRuh... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_re... | [
"This paper proposes a new methodology to infer symbolic ODE representation from observed time series. In contrast to previous methods, they bypass the inference of the time derivative and rather proceed by first estimating a continuous approximation of the trajectory and then optmizing a novel objective function. ... | [
6,
-1,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_wENMvIsxNN",
"rFVVNuwxWe3",
"iclr_2022_wENMvIsxNN",
"iclr_2022_wENMvIsxNN",
"KBJWD3eWt0z",
"hOH9yQJtrhO",
"9yzvjFOj1JJ",
"9DVrOIObVC1",
"plA2ubIFr_n",
"89kz9b7Pszm",
"DF2jinkPevJ",
"DK4TZf1rje4",
"9yzvjFOj1JJ",
"DK4TZf1rje4",
"IAM0iK0rJFW",
"Vk8787RNRuh",
"DF2jinkPevJ",
... |
iclr_2022_EMxu-dzvJk | GRAND++: Graph Neural Diffusion with A Source Term | We propose GRAph Neural Diffusion with a source term (GRAND++) for graph deep learning with a limited number of labeled nodes, i.e., a low labeling rate. GRAND++ is a class of continuous-depth graph deep learning architectures whose theoretical underpinning is the diffusion process on graphs with a source term. The source term guarantees two interesting theoretical properties of GRAND++: (i) the representation of graph nodes, under the dynamics of GRAND++, will not converge to a constant vector over all nodes even as the time goes to infinity, which mitigates the over-smoothing issue of graph neural networks and enables graph learning in very deep architectures. (ii) GRAND++ can provide accurate classification even when the model is trained with a very limited amount of labeled training data. We experimentally verify the above two advantages on various graph deep learning benchmark tasks, showing a significant improvement over many existing graph neural networks. | Accept (Poster) | The paper presents a continuous framework for GNNs based on a neural diffusion PDE and is an evolution of a previous method (GRAND). The main novelty appears to be the additional source term, which the authors show to be beneficial in reducing the oversmoothing effect typical in deep GNNs. While novelty is somewhat limited, the paper provides a detailed theoretical and experimental assessment of the idea. Overall, the reviewers liked the approach and expressed some questions/concerns that were satisfactorily addressed in the rebuttal. We recommend acceptance.
"S33GuE0yfHZ",
"q0xlJ5HAumT",
"tSqOHdVGbO-",
"NJIa9noy0r6",
"dG0UoTCnqNK",
"b2d0tAd9M_",
"VY64ta2qgT",
"G4JnTP3q20f",
"hO4dkX3Q21R",
"q2jWI-PJQlL",
"u_SQ6_1D7U_",
"mPMOf2tQOUe",
"UMueEMq9Q23",
"yDED0_aK9Kt",
"vYUj1Sttfm",
"33PS_OPr-kh",
"fWvLL4TB9bc",
"XcqweEPxJWC",
"QvzgxDRlqLy"... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_... | [
" Thanks for your responses and we appreciate your endorsement.",
"This paper studies neural-based diffusion models for graph data. It considers deep learning on graphs as a continuous diffusion process and treats GNNs as discretizations of an underlying PDE. The authors build upon an existing work (GRAND) by add... | [
-1,
6,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
3,
-1,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"tSqOHdVGbO-",
"iclr_2022_EMxu-dzvJk",
"33PS_OPr-kh",
"dG0UoTCnqNK",
"cnUpz_t7Qtd",
"iclr_2022_EMxu-dzvJk",
"hO4dkX3Q21R",
"iclr_2022_EMxu-dzvJk",
"XcqweEPxJWC",
"iclr_2022_EMxu-dzvJk",
"iclr_2022_EMxu-dzvJk",
"G4JnTP3q20f",
"cg5IxwOH8WL",
"mPMOf2tQOUe",
"b2d0tAd9M_",
"q0xlJ5HAumT",
... |
iclr_2022_6YVIk0sAkF_ | Multi-Mode Deep Matrix and Tensor Factorization | Recently, deep linear and nonlinear matrix factorizations have gained increasing attention in the area of machine learning. Existing deep nonlinear matrix factorization methods can only exploit partial nonlinearity of the data and are not effective in handling matrices of which the number of rows is comparable to the number of columns. On the other hand, there is still a gap between deep learning and tensor decomposition. This paper presents a framework of multi-mode deep matrix and tensor factorizations to explore and exploit the full nonlinearity of the data in matrices and tensors. We use the factorization methods to solve matrix and tensor completion problems and prove that our methods have tighter generalization error bounds than conventional matrix and tensor factorization methods. The experiments on synthetic data and real datasets showed that the proposed methods have much higher recovery accuracy than many baselines. | Accept (Poster) | The paper considers matrix and tensor factorization, and provides a bound on the excess risk which is an improved bound over the bounds for ordinary matrix factorization. The authors also show how to solve the model with standard gradient-based optimization algorithms, and present results showing good accuracy. The method can be a bit slow, but this depends on the number of iterations, and in general it achieves better accuracy in a similar amount of time to other baseline algorithms.
The reviewers raised a few points, such as jdoi noting the tensor experiments were for small tensors and should also include the method Costco; other reviewers mentioned more methods as well. The authors seemed to address most of these concerns in the rebuttal, adding more experiments and more details on timing. 26KD mentioned the optimization procedure was unclear, but the revision includes pseudocode in the appendix that clarifies it.
Overall, the paper has both a theoretical and algorithmic contribution, and would be of interest to many ICLR readers. | train | [
"G7ajHHKqOaj",
"wZ5BExgI_cc",
"o4IiDhDS4RX",
"XxnbW901n2",
"z-ucZHWX1uY",
"Ly6sLYzD1Xy",
"AmSQXfTummi",
"LWvA7V5Rjew",
"gFL8L1c253",
"RD2Jz-0yNmq"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nThank you very much for your response and positive comment on our work. Indeed in the above table (Table 4 in the paper), the time cost of our method is almost 7 times of KBR-TC. The reason is that for the synthetic data, the maximum iteration of our method is 3000 while the stop criterial (rela... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
2
] | [
"wZ5BExgI_cc",
"XxnbW901n2",
"RD2Jz-0yNmq",
"gFL8L1c253",
"AmSQXfTummi",
"iclr_2022_6YVIk0sAkF_",
"iclr_2022_6YVIk0sAkF_",
"iclr_2022_6YVIk0sAkF_",
"iclr_2022_6YVIk0sAkF_",
"iclr_2022_6YVIk0sAkF_"
] |
iclr_2022_UtGtoS4CYU | Measuring CLEVRness: Black-box Testing of Visual Reasoning Models | How can we measure the reasoning capabilities of intelligent systems? Visual question answering provides a convenient framework for testing the model's abilities by interrogating the model through questions about the scene. However, despite scores of various visual QA datasets and architectures, which sometimes even yield super-human performance, the question of whether those architectures can actually reason remains open to debate.
To answer this, we extend the visual question answering framework and propose the following behavioral test in the form of a two-player game. We consider black-box neural models of CLEVR. These models are trained on a diagnostic dataset benchmarking reasoning. Next, we train an adversarial player that re-configures the scene to fool the CLEVR model. We show that CLEVR models, which otherwise could perform at a ``human-level'', can easily be fooled by our agent. Our results
put in doubt whether data-driven approaches can do reasoning without exploiting the numerous biases that are often present in those datasets. Finally, we also propose a controlled experiment measuring the efficiency with which such models learn and perform reasoning. | Accept (Poster) | Introducing an adversarial agent that re-configures the rendered scenes of CLEVR to demonstrate that models that appear to achieve super-human performance are actually easily fooled due to their lack of ability to reason, provides a nice insight into the limitations of existing approaches and correspondingly how we evaluate on some benchmarks. There is a persistent concern that the results are only on CLEVR, and that the adversarial examples are not really disproving reasoning but rather issues with vision. However, overall, reviewers were generally positive about the aims of the work.
"o6W65GE0ho7",
"SuyTtA5CUY-",
"pHpSGWrZWII",
"P9H3DDN_YB2",
"22PJaTkrIjZ",
"1PaIZCRjpt",
"z91QGXVc-Uz",
"qPBwfG24rmu",
"d6pqmPM_eZV",
"rEjMGDs3Kj2",
"Ngztv4vKaqB",
"kxQ9mx-Axk",
"w6wCXKXO2CU"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper explores measuring VQA models under adversarial settings, by having two competing models, when reasoning over a CLEVR scene as usual, and the other seeks to make the scene more challenging to answer. These settings reveal the weaknesses of visual reasoning models, and demonstrate that their apparent almo... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_UtGtoS4CYU",
"o6W65GE0ho7",
"P9H3DDN_YB2",
"rEjMGDs3Kj2",
"o6W65GE0ho7",
"o6W65GE0ho7",
"o6W65GE0ho7",
"Ngztv4vKaqB",
"kxQ9mx-Axk",
"w6wCXKXO2CU",
"d6pqmPM_eZV",
"iclr_2022_UtGtoS4CYU",
"iclr_2022_UtGtoS4CYU"
] |
iclr_2022_6u6N8WWwYSM | Bootstrapping Semantic Segmentation with Regional Contrast | We present ReCo, a contrastive learning framework designed at a regional level to assist learning in semantic segmentation. ReCo performs pixel-level contrastive learning on a sparse set of hard negative pixels, with minimal additional memory footprint. ReCo is easy to implement, being built on top of off-the-shelf segmentation networks, and consistently improves performance, achieving more accurate segmentation boundaries and faster convergence. The strongest effect is in semi-supervised learning with very few labels. With ReCo, we achieve a high-quality semantic segmentation model, requiring only 5 examples of each semantic class. | Accept (Poster) | An interesting paper, with non-trivial results. The reviewers all agree that the paper is above the bar (with two of them indicating a strong vote for acceptance). The simplicity of the proposed approach (noted by some of the reviewers) is in my view a positive. Overall, a worthy contribution.
"FpD195_x0As",
"_WuTdXDc3AA",
"g0bUtoGzyTy",
"sT3cLt16FlQ",
"pk8qZz4VpA",
"hUbKZDj4Z-",
"fIJx9ajD1JH",
"5HyMa3esAQ4",
"WnX9oGVPOT",
"1Hj0aPlkAWs",
"KPZlZ54407",
"ScLlsBK8Zwm",
"ezorcw-vGSR",
"Ckerx5SupVe"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes ReCo, a regional contrastive learning method for semi-supervised semantic segmentation. The query and key pixel sampling methods are proposed for efficient learning. The proposed method showed state-of-the-art level performances in various settings and various datasets. === Strength ===\n1. Th... | [
6,
-1,
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_6u6N8WWwYSM",
"sT3cLt16FlQ",
"iclr_2022_6u6N8WWwYSM",
"ezorcw-vGSR",
"iclr_2022_6u6N8WWwYSM",
"WnX9oGVPOT",
"KPZlZ54407",
"ScLlsBK8Zwm",
"g0bUtoGzyTy",
"pk8qZz4VpA",
"Ckerx5SupVe",
"iclr_2022_6u6N8WWwYSM",
"FpD195_x0As",
"iclr_2022_6u6N8WWwYSM"
] |
iclr_2022_wkMG8cdvh7- | Understanding and Improving Graph Injection Attack by Promoting Unnoticeability | Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary can merely inject a few malicious nodes instead of modifying existing nodes or edges, i.e., Graph Modification Attack (GMA). Although GIA has achieved promising results, little is known about why it is successful and whether there is any pitfall behind the success. To understand the power of GIA, we compare it with GMA and find that GIA can be provably more harmful than GMA due to its relatively high flexibility. However, the high flexibility will also lead to great damage to the homophily distribution of the original graph, i.e., similarity among neighbors. Consequently, the threats of GIA can be easily alleviated or even prevented by homophily-based defenses designed to recover the original homophily. To mitigate the issue, we introduce a novel constraint – homophily unnoticeability that enforces GIA to preserve the homophily, and propose Harmonious Adversarial Objective (HAO) to instantiate it. Extensive experiments verify that GIA with HAO can break homophily-based defenses and outperform previous GIA attacks by a significant margin. We believe our methods can support a more reliable evaluation of the robustness of GNNs. | Accept (Poster) | The reviewers agree that this paper studies an important problem and provides theoretical analysis to understand graph injection attacks.
The authors propose a new regularizer to improve the attack success. Extensive experimental results also show the effectiveness of the proposed method. | train | [
"cH14aMIrca4",
"_5MHodtE8Fi",
"qR4fxJq2PLg",
"_EUYNlxQ4Q",
"2GlKZVj5hEO",
"bfx6PR6eF35",
"FTQvHVTIxm0",
"TkZgn2rrhk",
"DjGOJlpPSxr",
"kopQzliTJ7p",
"DZcepq_bq64",
"xaRX9TN_wFh",
"uk-NeRaMGUE",
"KEUHT4MkI9E",
"3N2BX0U-Ebb",
"Fye2v_2Z6Fg",
"1EXjeR12D8d",
"n-r3mZnPNyn"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the advantages and drawbacks of node injection attacks to graph neural networks. The authors demonstrate that, in general, node injection attacks are more powerful than the node modification attacks when there is no defense. But when the model trainer adopts some homophily based defense, the nod... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"iclr_2022_wkMG8cdvh7-",
"qR4fxJq2PLg",
"uk-NeRaMGUE",
"iclr_2022_wkMG8cdvh7-",
"n-r3mZnPNyn",
"n-r3mZnPNyn",
"n-r3mZnPNyn",
"cH14aMIrca4",
"cH14aMIrca4",
"cH14aMIrca4",
"cH14aMIrca4",
"1EXjeR12D8d",
"1EXjeR12D8d",
"1EXjeR12D8d",
"Fye2v_2Z6Fg",
"iclr_2022_wkMG8cdvh7-",
"iclr_2022_wkM... |
iclr_2022_MSwEFaztwkE | Learning Weakly-supervised Contrastive Representations | We argue that one form of the valuable information provided by auxiliary information is its implied data clustering information. For instance, considering hashtags as auxiliary information, we can hypothesize that an Instagram image will be semantically more similar to other images with the same hashtags. With this intuition, we present a two-stage weakly-supervised contrastive learning approach. The first stage is to cluster data according to its auxiliary information. The second stage is to learn similar representations within the same cluster and dissimilar representations for data from different clusters. Our empirical experiments suggest the following three contributions. First, compared to conventional self-supervised representations, the auxiliary-information-infused representations bring the performance closer to the supervised representations, which use direct downstream labels as supervision signals. Second, our approach performs best in most cases when compared with other baseline representation learning methods that also leverage auxiliary data information. Third, we show that our approach also works well with clusters constructed without supervision (e.g., no auxiliary information), resulting in a strong unsupervised representation learning approach. | Accept (Poster) | The paper proposes a weakly supervised contrastive learning method that uses auxiliary cluster information for representation learning. Their method generates similar representations for the intra-cluster samples and dissimilar representations for inter-cluster samples via a clustering InfoNCE objective. Their approach is evaluated thoroughly on three image classification tasks.
The reviewers agree that the paper is well written, presenting interesting theoretical analysis (Reviewer h3zd, a8kw) and solid experimental results (Reviewer RhYi, 1ziy). The core idea of the paper is relatively simple and well motivated (Reviewer h3zd). While the focus is on clustering with auxiliary labels, the method can also be applied without auxiliary labels by using K-means.
One concern from the reviewers was the overlap with a concurrent work [1]. The authors have provided detailed discussions on conceptual (the concurrent work focuses on the unsupervised case whereas this work focuses on the weakly-supervised setting) and empirical comparisons. Accordingly, reviewers a8kw and 1ziy had some issues with the novelty of the paper, as it can be interpreted as a slight modification of a previously explored idea (the vanilla InfoNCE loss).
Despite some overlap with existing approaches, the paper presents an interesting and well conducted study of integrating clustering information for learning representation, so I vote for acceptance.
[1] Weakly Supervised Contrastive Learning. ICCV 2021. | test | [
"43fmQCekW2R",
"72zma4qrJSK",
"gHeXbuQSCyF",
"mof9IdboIcu",
"w0_V755kDf",
"Mwv74YtUK-f",
"KeUwn_G5Dxm",
"_CCv28cuClX",
"f9PK_tCB3TS",
"hHXOiMvQYSt",
"D6iZxLm2H6S",
"mHrItMUKT6F",
"-NcYY1-I65a",
"xfnw4Nkq3fS",
"-xwim28ueeb",
"4sb1AwbcHB",
"JG1Aus_U-rV",
"NoZKzslmqx",
"2hy5EghZcWP"... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Dear AC, \n\nWe performed the following experiments. We ran the ImageNet version provided in ICCV work [1] and reported our and theirs results [1]. For our result, we report the number using the batch size $128$. We directly run their code for their [1] result and change the batch size to $128$. The only differen... | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"gHeXbuQSCyF",
"gHeXbuQSCyF",
"D6iZxLm2H6S",
"w0_V755kDf",
"KeUwn_G5Dxm",
"iclr_2022_MSwEFaztwkE",
"hHXOiMvQYSt",
"f9PK_tCB3TS",
"xfnw4Nkq3fS",
"-NcYY1-I65a",
"iclr_2022_MSwEFaztwkE",
"iclr_2022_MSwEFaztwkE",
"Mwv74YtUK-f",
"Q7IbjY_RZx",
"Q7IbjY_RZx",
"2hy5EghZcWP",
"NoZKzslmqx",
"... |
iclr_2022_Oxeka7Z7Hor | Gaussian Mixture Convolution Networks | This paper proposes a novel method for deep learning based on the analytical convolution of multidimensional Gaussian mixtures.
In contrast to tensors, these do not suffer from the curse of dimensionality and allow for a compact representation, as data is only stored where details exist.
Convolution kernels and data are Gaussian mixtures with unconstrained weights, positions, and covariance matrices.
Similar to discrete convolutional networks, each convolution step produces several feature channels, represented by independent Gaussian mixtures.
Since traditional transfer functions like ReLUs do not produce Gaussian mixtures, we propose using a fitted approximation of these functions instead.
This fitting step also acts as a pooling layer if the number of Gaussian components is reduced appropriately.
We demonstrate that networks based on this architecture reach competitive accuracy on Gaussian mixtures fitted to the MNIST and ModelNet data sets. | Accept (Poster) | This paper presents a deep learning method that aims to address the curse-of-dimensionality problem of conventional convolutional neural networks (CNNs) by representing data and kernels with unconstrained ‘mixtures’ of Gaussians and exploiting the analytical form of the convolution of multidimensional Gaussian mixtures. Since the number of mixture components rapidly increases from layer to layer (after convolution) and common activation functions such as ReLU do not preserve the Gaussian Mixtures (GM), the paper proposes a fitting stage that fits a GM to the output of the transfer function and uses a heuristic to reduce the number of mixture components. Experiments are presented on MNIST (2D) and ModelNet10 (3D), which show competitive performance compared to other approaches such as classic CNNs, PointNet, and PointNet++.
There is a general consensus on the novelty of the proposed approach and its potential to pave the way for further research. There were, however, several issues raised by the reviewers in terms of clarity, memory footprint, and computational cost that limit the applicability of the method to more complex datasets. While the authors expanded on the dense fitting in their comments and in the revised version of the paper, the role of the negative weights still remains unclear, as the dense fitting stage seems to constrain all the weights to be positive. In terms of memory footprint, the authors refer to the theoretical footprint, but their implementation does not match this. Finally, it is acknowledged by the authors that the computational cost is a limitation that hinders the method from achieving competitive performance in more complex tasks.
"NVHY95Be2OA",
"zZiLjtmQb8x",
"qBcys8ebwLS",
"13g9ApzfIE",
"Hx4DeLMuDA1",
"-XJOKVz2_Nw",
"1v22qZSCXDy",
"19p3Ue7ZMc6",
"LII34mZ58WV",
"Z3aqPFUDuMD",
"LQQjlbPyZ4X",
"Z1l2JadxByD",
"ZWlMv1Z-WO",
"0KNAz3oDKsn",
"xTUxOSTnwCf",
"q0cL6kC-lDZ",
"8HTUpHlTn6l"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Having read all reviews comments and answers, I still think the paper is worth publication for its novelty of flowing functions (i.e. gmm) through a conv net. The proposed non linear step may not be great (i.e. not mathematically elegant, not computationally efficient, etc.), \nand I think improving on it is an i... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"8HTUpHlTn6l",
"8HTUpHlTn6l",
"iclr_2022_Oxeka7Z7Hor",
"Hx4DeLMuDA1",
"-XJOKVz2_Nw",
"1v22qZSCXDy",
"LII34mZ58WV",
"iclr_2022_Oxeka7Z7Hor",
"Z3aqPFUDuMD",
"ZWlMv1Z-WO",
"8HTUpHlTn6l",
"q0cL6kC-lDZ",
"qBcys8ebwLS",
"xTUxOSTnwCf",
"iclr_2022_Oxeka7Z7Hor",
"iclr_2022_Oxeka7Z7Hor",
"iclr... |
iclr_2022_kQ2SOflIOVC | Towards Better Understanding and Better Generalization of Low-shot Classification in Histology Images with Contrastive Learning | Few-shot learning has been an established topic in natural images for years, but little work has attended to histology images, a setting of high clinical value since well-labeled datasets and rare abnormal samples are expensive to collect. Here, we facilitate the study of few-shot learning in histology images by setting up three cross-domain tasks that simulate real clinical problems. To enable label-efficient learning and better generalizability, we propose to incorporate contrastive learning (CL) with latent augmentation (LA) to build a few-shot system. CL learns useful representations without manual labels, while LA transfers semantic variations of the base dataset in an unsupervised way. These two components fully exploit unlabeled training data and can scale gracefully to other label-hungry problems. In experiments, we find i) models learned by CL generalize better than supervised learning for histology images in unseen classes, and ii) LA brings consistent gains over baselines. Prior studies of self-supervised learning mainly focus on ImageNet-like images, which only present a dominant object in their centers. Recent attention has been paid to images with multiple objects and textures. Histology images are a natural choice for such a study. We show the superiority of CL over supervised learning in terms of generalization for such data and provide our empirical understanding of this observation. The findings in this work could contribute to understanding how the model generalizes in the context of both representation learning and histological image analysis. Code is available. | Accept (Poster) | The paper presents an approach and evaluation setting for few-shot learning in histology images. The approach leverages contrastive learning pretraining, and latent augmentation (LA) for data augmentation. The evaluation examines in-domain few-shot learning, mixed-domain few-shot learning, and out-of-domain few-shot learning.
Latent augmentation is an approach that learns how categories vary between samples within unsupervised clusters in a base dataset, and transfers that variation to the few-shot sampled classes.
Pros:
- A couple of reviewers claimed the novelty of the proposed latent augmentation method as a strength, but as other reviewers point out, there is much prior work in this field, some of which wasn't cited (e.g., Delta-Encoder, NeurIPS 2018).
- The latent augmentation method is simple to implement, and outperforms standard input augmentation approaches.
- The paper is rich in content and details of experiments.
- Examining learning over a variety of domain shift settings is interesting.
- Shows contrastive learning can outperform supervised pretraining for this application domain.
Cons:
- Multiple reviewers raise concerns about technical novelty. This work applies mostly previously proposed methods, or variations thereof, to the domain of medical imaging. May be more suited to a medical imaging venue.
- Some of the results are consistent with prior reports, such as finding that self-supervised learning can outperform supervised pretraining. In that regard the results are not surprising.
- One reviewer raised issues about the lack of comparison to other relevant few-shot works. The authors argue that fine-tuning is a competitive baseline. They did add a comparison to one other variation-augmentation approach, distribution calibration. But as mentioned, Delta-Encoder is closely related work that has been neither cited nor compared against. The biggest difference is that Delta-Encoder uses labels, but the unsupervised clusters could trivially be supplied as labels in this setting. The AC feels the authors should have done a more comprehensive comparison to related learned-augmentation works.
- The authors initially did not address how latent augmentation is affected by random seeds, but they have since replied to reviewers with additional data.
Reviewer consensus, excluding 1 reviewer, favors accept, though significant concerns regarding technical novelty and comparisons to other relevant works persist (especially in regards to works that learn how to augment as the proposed LA method does). | train | [
"08qp3UVWG0d",
"-bkpapcV1X8",
"Y8I51LPtvqm",
"ps-soLhzfo5",
"aFB7crJCB-2",
"XDp6yJfiXjJ",
"ItWRspiKnhx",
"PnwG9KPzYNF",
"7SizdRxc9Zm",
"_ENXyFP6Oe",
"pQyjpRa1tAk",
"cX3pSJWwl1",
"580J_mPlInW",
"OlLKs6nOox",
"ye90iKESHuk",
"qitXo8pNpBV",
"IvYtuMHXPa-",
"bMZsF1qmGhL",
"uEa9DdGWPHT"... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
... | [
" Thanks for the detailed explanation. I have no further questions.",
" \n> Are regularization techniques also not used during contrastive learning pre-training?\n\nNo additional complex regularization techniques (e.g., DropBlock [1], distill regularization [2]) are applied during both contrastive learning pre-tr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"-bkpapcV1X8",
"Y8I51LPtvqm",
"uEa9DdGWPHT",
"aFB7crJCB-2",
"XDp6yJfiXjJ",
"ItWRspiKnhx",
"PnwG9KPzYNF",
"7SizdRxc9Zm",
"_ENXyFP6Oe",
"tVC2i04PTuS",
"cX3pSJWwl1",
"580J_mPlInW",
"OlLKs6nOox",
"ye90iKESHuk",
"2mpV4n87WZT",
"bMZsF1qmGhL",
"iclr_2022_kQ2SOflIOVC",
"djyDye4HYcn",
"R1... |
iclr_2022_R332S76RjxS | A global convergence theory for deep ReLU implicit networks via over-parameterization | Implicit deep learning has received increasing attention recently due to the fact that it generalizes the recursive prediction rule of many commonly used neural network architectures. Its prediction rule is provided implicitly based on the solution of an equilibrium equation. Although a line of recent empirical studies has demonstrated its superior performances, the theoretical understanding of implicit neural networks is limited. In general, the equilibrium equation may not be well-posed during the training. As a result, there is no guarantee that a vanilla (stochastic) gradient descent (SGD) training nonlinear implicit neural networks can converge. This paper fills the gap by analyzing the gradient flow of Rectified Linear Unit (ReLU) activated implicit neural networks. For an $m$ width implicit neural network with ReLU activation and $n$ training samples, we show that a randomly initialized gradient descent converges to a global minimum at a linear rate for the square loss function if the implicit neural network is over-parameterized. It is worth noting that, unlike existing works on the convergence of (S)GD on finite-layer over-parameterized neural networks, our convergence results hold for implicit neural networks, where the number of layers is infinite. | Accept (Poster) | This paper shows gradient flow of ReLU activated implicit networks converges to a global minimum at a linear rate for the square loss when the implicit neural network is over-parameterized. While the analyses follow the existing NTK-type analyses and there are disagreements among reviewers on the novelty of this paper, the meta reviewer values new theoretical results on new, emerging settings (implicit neural networks), and thus decides to recommend acceptance | train | [
"LGBWf5ELFSh",
"LXbMgwfmuPK",
"Zb_h6p6vCoU",
"-Wnza0IBiST",
"UjbzBwRdhvV",
"GQhp9HlCrZy",
"biV6sN_AmC8",
"siQvKuniQrT",
"N2GkSlvrssK",
"mPQwNvzfN-B",
"Zr-fSlWOZY",
"GKyxd63pKGU",
"QnHUFg2_Y0S",
"Qyj_pnfcToa",
"PzoX5AVBQmm"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper theoretically analyzes the optimization of deep ReLU implicit networks. It first shows the well-posedness of the problem, i.e., the existence and uniqueness of the equilibrium point, then proves that under over-parameterization, both continuous and discrete GD have global convergence in a linear rate, a... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
8
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2022_R332S76RjxS",
"GQhp9HlCrZy",
"UjbzBwRdhvV",
"siQvKuniQrT",
"N2GkSlvrssK",
"siQvKuniQrT",
"Qyj_pnfcToa",
"N2GkSlvrssK",
"QnHUFg2_Y0S",
"PzoX5AVBQmm",
"LGBWf5ELFSh",
"biV6sN_AmC8",
"iclr_2022_R332S76RjxS",
"iclr_2022_R332S76RjxS",
"iclr_2022_R332S76RjxS"
] |
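The implicit prediction rule analysed in the record above can be illustrated in a few lines: the hidden state solves an equilibrium equation z = ReLU(Az + Bx), i.e., an "infinite-depth" network. In the sketch below, scaling A to spectral norm below 1 is one simple way to keep the equation well-posed (ReLU is 1-Lipschitz, so the map is a contraction); this scaling is an illustrative assumption, not the paper's exact condition.

```python
# Minimal fixed-point sketch of an implicit ReLU network (assumed scaling).
import numpy as np

rng = np.random.default_rng(0)
m, d = 128, 32                               # width m, input dimension d
A = rng.normal(size=(m, m)) / np.sqrt(m)
A *= 0.9 / np.linalg.norm(A, 2)              # contraction => unique equilibrium
B = rng.normal(size=(m, d)) / np.sqrt(d)
x = rng.normal(size=(d,))

z = np.zeros(m)
for _ in range(200):                         # fixed-point iteration ("infinite" layers)
    z_next = np.maximum(A @ z + B @ x, 0.0)
    if np.linalg.norm(z_next - z) < 1e-10:
        break
    z = z_next
print(np.linalg.norm(np.maximum(A @ z + B @ x, 0.0) - z))  # ~0: equilibrium reached
```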
iclr_2022_ATUh28lnSuW | Graph Auto-Encoder via Neighborhood Wasserstein Reconstruction | Graph neural networks (GNNs) have drawn significant research attention recently, mostly under the setting of semi-supervised learning. When task-agnostic representations are preferred or supervision is simply unavailable, the auto-encoder framework comes in handy with a natural graph reconstruction objective for unsupervised GNN training. However, existing graph auto-encoders are designed to reconstruct the direct links, so GNNs trained in this way are only optimized towards proximity-oriented graph mining tasks, and will fall short when the topological structures matter. In this work, we revisit the graph encoding process of GNNs, which essentially learns to encode the neighborhood information of each node into an embedding vector, and propose a novel graph decoder to reconstruct the entire neighborhood information regarding both proximity and structure via Neighborhood Wasserstein Reconstruction (NWR). Specifically, from the GNN embedding of each node, NWR jointly predicts its node degree and neighbor feature distribution, where the distribution prediction adopts an optimal-transport loss based on the Wasserstein distance. Extensive experiments on both synthetic and real-world network datasets show that the unsupervised node representations learned with NWR are much more advantageous in structure-oriented graph mining tasks, while also achieving competitive performance in proximity-oriented ones. | Accept (Poster) | The paper proposes a novel approach to graph representation learning. In particular, a graph auto-encoder is proposed that aims to better capture the topological structure by utilising a neighbourhood reconstruction and a degree reconstruction objective. An optimal-transport based objective is proposed for the neighbourhood reconstruction that optimises the 2-Wasserstein distance between the decoded distribution and an empirical estimate of the neighbourhood distribution. An extensive experimental analysis is performed, highlighting the benefits of the proposed approach on a range of synthetic datasets to capture structure information. The experimental results also highlight its robustness across 9 different real-world graph datasets (ranging from proximity-oriented to structure-oriented datasets).
Strengths:
- The problem studied is well motivated and the method proposed is well placed in the literature.
- The method is intuitive and the way that the neighbourhood information is reconstructed appears novel.
- The empirical comparisons are extensive.
Weaknesses:
- Some of the choices in matching neighborhoods seem a bit arbitrary and not sufficiently justified.
- The scalability of the proposed method is questionable. The method has a high complexity of O(Nd^3) (where N is the number of nodes and d is the average node degree). The authors address this problem by resorting to the neighborhood sampling method (without citing the prior art), which is only very briefly discussed in the paper.
- The reviewers have also expressed concerns about the fixed sample size q. The question of how the neighbour-sampling is handled when a node has fewer than q neighbours remains unanswered. | train | [
"qBb_3dAXbdL",
"dck1XNfoJ8i",
"pASoaOTmZN5",
"gd_2hNHYMKE",
"pqeBgPR2WBY",
"-LPxhVIXj8b",
"klYQNuNFhp-",
"DZ51VNqglt0",
"1hOR_MWPURA",
"HVS2RuZY6kV",
"HFVlO2HSirA"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the time to check our manuscript and read our response again.\n\nWe are wondering if your concerns have been resolved by our response. We are looking forward to your further comments.",
" We thank the reviewer for the time to check our manuscript and read our response. \n\nWe are wonde... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"DZ51VNqglt0",
"HVS2RuZY6kV",
"1hOR_MWPURA",
"1hOR_MWPURA",
"HFVlO2HSirA",
"HVS2RuZY6kV",
"DZ51VNqglt0",
"iclr_2022_ATUh28lnSuW",
"iclr_2022_ATUh28lnSuW",
"iclr_2022_ATUh28lnSuW",
"iclr_2022_ATUh28lnSuW"
] |
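The neighbourhood-reconstruction objective described in the record above compares a decoded set of neighbour features against the empirical neighbourhood with a 2-Wasserstein loss. For two equal-size empirical distributions this distance reduces to an optimal assignment, so the sketch below uses exact matching for illustration; the paper's training uses a differentiable optimal-transport loss, and all sample sizes here are toy assumptions.

```python
# Minimal 2-Wasserstein loss between two equal-size empirical neighbourhoods.
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_squared(pred, target):
    # pred, target: (q, dim) decoded vs. true neighbour feature samples
    cost = ((pred[:, None, :] - target[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    rows, cols = linear_sum_assignment(cost)                       # optimal matching
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
decoded_nbrs = rng.normal(size=(8, 16))      # q neighbours sampled from the decoder
true_nbrs = rng.normal(size=(8, 16))         # q sampled true neighbour features
print(w2_squared(decoded_nbrs, true_nbrs))
```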
iclr_2022_6q_2b6u0BnJ | TRAIL: Near-Optimal Imitation Learning with Suboptimal Data | In imitation learning, one aims to learn task-solving policies using access to near-optimal expert trajectories collected from the task environment. However, high-quality trajectories -- e.g., from human experts -- can be expensive to obtain in practical settings. On the contrary, it is often much easier to obtain large amounts of suboptimal trajectories, which can nevertheless provide insight into the structure of the environment, showing what \emph{could} be done in the environment even if not what \emph{should} be done. Is it possible to formalize these conceptual benefits and devise algorithms to use offline datasets to yield \emph{provable} improvements to the sample-efficiency of imitation learning? In this work, we answer this question affirmatively and present training objectives which use an offline dataset to learn an approximate \emph{factored} dynamics model whose structure enables the extraction of a \emph{latent action space}. Our theoretical analysis shows that the learned latent action space can boost the sample-efficiency of downstream imitation learning, effectively reducing the need for large near-optimal expert datasets through the use of auxiliary non-expert data. We evaluate the practicality of our objective through experiments on a set of navigation and locomotion tasks. Our results verify the benefits suggested by our theory and show that our algorithm is able to recover near-optimal policies with fewer expert trajectories. | Accept (Poster) | The paper investigates what we can learn from _suboptimal_ demonstrations for imitation learning. It suggests that we can learn about the structure of the environment by finding a factored dynamics model including a latent action space. It demonstrates both theoretically and empirically that this information can reduce sample requirements for downstream IL.
The reviewers praised the simplicity of the method (including its minimal assumptions), the theoretical analysis, and the breadth of the experimental validation. The authors were helpful during the discussion period, and addressed any questions or concerns the reviewers raised.
Overall, this is an interesting idea and a well-executed paper. | val | [
"Vf_rXKuXKG",
"TR4u1Byy8ij",
"Vm2GYLfWClI",
"drzFZaf7pLh",
"d4mqaW_rPNr",
"-_o3_LKGego",
"DJJ5sumbOti",
"wAgzrlXuDBa",
"MEdahHK9PjL",
"nMzbBU0PQRO",
"xxGw-Cj1azS",
"DZqussnaB5N",
"TrRIz1yrfV_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper considers an imitation learning (IL) problem with both expert and suboptimal demonstrations. The paper claims that sub-optimal demonstrations can be used to learn latent action abstractions which can improve the efficiency of down-stream IL. To solve this problem, the paper proposes TRAIL, which pre-trai... | [
8,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2022_6q_2b6u0BnJ",
"iclr_2022_6q_2b6u0BnJ",
"d4mqaW_rPNr",
"-_o3_LKGego",
"DJJ5sumbOti",
"TrRIz1yrfV_",
"nMzbBU0PQRO",
"iclr_2022_6q_2b6u0BnJ",
"xxGw-Cj1azS",
"TR4u1Byy8ij",
"wAgzrlXuDBa",
"xxGw-Cj1azS",
"Vf_rXKuXKG"
] |
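In the spirit of the TRAIL record above, a factored dynamics model can be learned from suboptimal data by scoring transitions with an inner product <phi(s, a), psi(s')> and treating z = phi(s, a) as the latent action for downstream imitation. The network sizes, the toy transition data, and the InfoNCE-style contrastive objective below are illustrative assumptions, not the paper's exact training setup.

```python
# Minimal factored-dynamics sketch with an in-batch contrastive objective.
import torch
import torch.nn as nn

s_dim, a_dim, z_dim = 8, 2, 16
phi = nn.Sequential(nn.Linear(s_dim + a_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
psi = nn.Sequential(nn.Linear(s_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
opt = torch.optim.Adam(list(phi.parameters()) + list(psi.parameters()), lr=1e-3)

for _ in range(200):                              # offline, suboptimal transitions (stand-ins)
    s = torch.randn(128, s_dim)
    a = torch.randn(128, a_dim)
    s_next = s + 0.1 * torch.randn(128, s_dim)    # toy dynamics
    z = phi(torch.cat([s, a], dim=-1))            # latent actions
    logits = z @ psi(s_next).T                    # entry (i, j): score of s'_j given (s_i, a_i)
    # The true next state should outscore in-batch negatives.
    loss = nn.functional.cross_entropy(logits, torch.arange(128))
    opt.zero_grad(); loss.backward(); opt.step()
```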
iclr_2022_5i7lJLuhTm | Learning by Directional Gradient Descent | How should state be constructed from a sequence of observations, so as to best achieve some objective? Most deep learning methods update the parameters of the state representation by gradient descent. However, no prior method for computing the gradient is fully satisfactory, for example consuming too much memory, introducing too much variance, or adding too much bias. In this work, we propose a new learning algorithm that addresses these limitations. The basic idea is to update the parameters of the representation by using the directional derivative along a candidate direction, a quantity that may be computed online with the same computational cost as the representation itself. We consider several different choices of candidate direction, including random selection and approximations to the true gradient, and investigate their performance on several synthetic tasks.
| Accept (Poster) | This paper adapts a method called "real-time recurrent learning" for training recurrent neural networks. The idea is to project the true gradient onto a subspace of desired dimensionality along a candidate direction. There are a variety of possible candidates: random directions, backpropagation through time, meta-learning approaches, etc.
The main strength of the paper is that it is a very simple idea that seems to have practical utility.
While often presented in different contexts, it should be clearly noted by the authors that the general idea of using low-dimensional directional derivatives for computational efficiency is fairly common in optimization. Reviewers mention sketch-and-project methods. This has also been explored, for example, in the context of Bayesian optimization, with [random selection](https://bayesopt.github.io/papers/2016/Ahmed.pdf) and [value-of-information-based](https://proceedings.neurips.cc/paper/2017/file/64a08e5f1e6c39faeb90108c430eb120-Paper.pdf) criteria.
Reviewers appreciated aspects of the paper, though had concerns about relations to sketch and project methods, computational costs, and experimental demonstrations and baselines. Through the rebuttal period, reviewers were mostly satisfied that the concerns about computational costs were well-addressed. A better job could still be done about describing relation to other work. There was also still some desire for more thorough experimental demonstrations and consistent baselines, as described in the reviews. The paper also could use some additional proof-reading as it contains several grammatical errors. On the whole, the paper makes a nice simple practical contribution. Please carefully account for reviewer comments in updated versions. | train | [
"QuFyAx4Aif",
"3yWlBuYrpsr",
"uY8vhL943R",
"hdjwXavToGN",
"CAbMJO59adz",
"fm899v0H-tb",
"MhJQM9xOTnX",
"z-J12s5P6Lo",
"YhK_XEFPZFD",
"76m9AuM3_rY",
"QXKiOcJMqW",
"HTkTgsSuRXy",
"5_dwU2ISSh",
"kjmjE0PE6wV"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for engaging in discussion, and providing useful feedback. We did another experiment to address reviewer's concerns. \n\n**The sketch and project method is a variance reduction technique, the proposed method is a way to reduce the computational cost in a recurrent setting**\n\nWe thank the r... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
1
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"uY8vhL943R",
"iclr_2022_5i7lJLuhTm",
"76m9AuM3_rY",
"CAbMJO59adz",
"fm899v0H-tb",
"MhJQM9xOTnX",
"YhK_XEFPZFD",
"iclr_2022_5i7lJLuhTm",
"3yWlBuYrpsr",
"5_dwU2ISSh",
"HTkTgsSuRXy",
"iclr_2022_5i7lJLuhTm",
"iclr_2022_5i7lJLuhTm",
"iclr_2022_5i7lJLuhTm"
] |
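The core update studied in the record above is easy to demonstrate: compute only the directional derivative g . v along a candidate direction v (cheaply, in forward mode) and move the parameters by (g . v) * v. The sketch below uses a random candidate direction, one of several choices the paper considers; the toy regression task and step size are illustrative, and torch.func.jvp assumes PyTorch >= 2.0.

```python
# Minimal directional-gradient-descent sketch with forward-mode JVPs.
import torch
from torch.func import jvp

torch.manual_seed(0)
x = torch.randn(64, 5)
y = x @ torch.arange(5.0)                    # targets from ground-truth weights 0..4
w = torch.zeros(5)

def loss(w_):
    return ((x @ w_ - y) ** 2).mean()

lr = 0.02
for _ in range(2000):
    v = torch.randn(5)                       # random candidate direction
    _, gv = jvp(loss, (w,), (v,))            # forward-mode directional derivative g . v
    w = w - lr * gv * v                      # E[(g.v) v] = g for standard normal v
print(loss(w).item())                        # should be near 0
```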
iclr_2022_MvO2t0vbs4- | Wisdom of Committees: An Overlooked Approach To Faster and More Accurate Models | Committee-based models (ensembles or cascades) construct models by combining existing pre-trained ones. While ensembles and cascades are well-known techniques that were proposed before deep learning, they are not considered a core building block of deep model architectures and are rarely compared to in recent literature on developing efficient models. In this work, we go back to basics and conduct a comprehensive analysis of the efficiency of committee-based models. We find that even the most simplistic method for building committees from existing, independently pre-trained models can match or exceed the accuracy of state-of-the-art models while being drastically more efficient. These simple committee-based models also outperform sophisticated neural architecture search methods (e.g., BigNAS). These findings hold true for several tasks, including image classification, video classification, and semantic segmentation, and various architecture families, such as ViT, EfficientNet, ResNet, MobileNetV2, and X3D. Our results show that an EfficientNet cascade can achieve a 5.4x speedup over B7 and a ViT cascade can achieve a 2.3x speedup over ViT-L-384 while being equally accurate. | Accept (Poster) | Nice paper, providing a thorough investigation of a simple idea that may be useful to a wide range of practitioners. All reviewers are positive, and the discussion has led to significant improvements in exposition and overall in the quality of the submission. | train | [
"CJvSXF67Eoq",
"xmGVr-jy24c",
"AqiCPnMLhKv",
"DVtFDb3K3n-",
"TE5gfNM4NO8",
"0ECL6Y9CTXd",
"X0WZgO5svc",
"TqwJytaqzmx",
"fVc-i4x-tIT",
"IFKYUOjd_lY",
"onK30OOnOEj",
"ziBP-CbqmM",
"Vh8IagwZ0oZ",
"g78hjatyZzl",
"kLZpAuormWh",
"LqH8iWrfCEt"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
" Thanks for your time to read our rebuttal and provide further feedback! We will continue polishing our paper and incorporate the feedback in the camera-ready version.\n",
"This paper introduces ensembles as an option the reduce the amount of FLOPs while increasing or keeping the accuracy. ## Strong points\nThe ... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"AqiCPnMLhKv",
"iclr_2022_MvO2t0vbs4-",
"iclr_2022_MvO2t0vbs4-",
"AqiCPnMLhKv",
"AqiCPnMLhKv",
"LqH8iWrfCEt",
"xmGVr-jy24c",
"kLZpAuormWh",
"xmGVr-jy24c",
"AqiCPnMLhKv",
"AqiCPnMLhKv",
"LqH8iWrfCEt",
"xmGVr-jy24c",
"xmGVr-jy24c",
"iclr_2022_MvO2t0vbs4-",
"iclr_2022_MvO2t0vbs4-"
] |
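The cascade idea evaluated in the record above is simple enough to show directly: run a cheap pretrained model first and fall through to a more expensive one only when the prediction is not confident enough. The two linear "models" and the 0.9 confidence threshold below are placeholders, not the paper's architectures or tuned thresholds.

```python
# Minimal confidence-thresholded cascade sketch (stand-in models).
import torch
import torch.nn.functional as F

small = torch.nn.Linear(32, 10)   # stand-ins for independently pretrained classifiers
large = torch.nn.Linear(32, 10)

@torch.no_grad()
def cascade_predict(x, threshold=0.9):
    probs = F.softmax(small(x), dim=-1)
    conf, pred = probs.max(dim=-1)
    need_big = conf < threshold                  # only these examples pay for the big model
    if need_big.any():
        pred[need_big] = large(x[need_big]).argmax(dim=-1)
    return pred, need_big.float().mean().item()  # predictions + fraction escalated

x = torch.randn(256, 32)
pred, frac = cascade_predict(x)
print(pred.shape, f"escalated: {frac:.0%}")
```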
iclr_2022_i3RI65sR7N | Hierarchical Variational Memory for Few-shot Learning Across Domains | Neural memory enables fast adaptation to new tasks with just a few training samples. Existing memory models store features only from the single last layer, which does not generalize well in the presence of a domain shift between training and test distributions. Rather than relying on a flat memory, we propose a hierarchical alternative that stores features at different semantic levels. We introduce a hierarchical prototype model, where each level of the prototype fetches corresponding information from the hierarchical memory. The model is endowed with the ability to flexibly rely on features at different semantic levels if the domain shift circumstances so demand. We meta-learn the model by a newly derived hierarchical variational inference framework, where hierarchical memory and prototypes are jointly optimized. To explore and exploit the importance of different semantic levels, we further propose to learn the weights associated with the prototype at each level in a data-driven way, which enables the model to adaptively choose the most generalizable features. We conduct thorough ablation studies to demonstrate the effectiveness of each component in our model. The new state-of-the-art performance on cross-domain and competitive performance on traditional few-shot classification further substantiates the benefit of hierarchical variational memory. | Accept (Poster) | This paper presents a hierarchical memory for cross-domain and few-shot classification problems. The paper is well written, tackles an important topic, and the proposed approach, which is an extension of VSM, is interesting. Reviewer YEXZ has some concerns regarding comparison to a more proper baseline. I believe that the authors have adequately addressed this. Reviewers 2Ajk and g1Bf also have suggestions that the authors have incorporated in the revision. I recommend accepting this paper. | test | [
"HQJZ-rJIg9g",
"ae7Ki0mnUC",
"qZV0HVArkpZ",
"3if5CX27pu",
"LIwPg_BW6SH",
"AbyvsrxCgnI",
"wto-V8W1uVe",
"zmR4llJzu8S",
"FlC_ciAodxJ",
"BSMsyNbk06q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
"This paper proposes a hierarchical memory to store features at different semantic levels for few-shot learning across domains. It introduces a hierarchical prototype model, where each level of the prototypes fetches the corresponding information from the hierarchical memory. The authors follow the hyper network d... | [
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_i3RI65sR7N",
"iclr_2022_i3RI65sR7N",
"AbyvsrxCgnI",
"FlC_ciAodxJ",
"BSMsyNbk06q",
"ae7Ki0mnUC",
"HQJZ-rJIg9g",
"iclr_2022_i3RI65sR7N",
"iclr_2022_i3RI65sR7N",
"iclr_2022_i3RI65sR7N"
] |
iclr_2022_0sgntlpKDOz | Learning Graphon Mean Field Games and Approximate Nash Equilibria | Recent advances at the intersection of dense large graph limits and mean field games have begun to enable the scalable analysis of a broad class of dynamical sequential games with large numbers of agents. So far, results have been largely limited to graphon mean field systems with continuous-time diffusive or jump dynamics, typically without control and with little focus on computational methods. We propose a novel discrete-time formulation for graphon mean field games as the limit of non-linear dense graph Markov games with weak interaction. On the theoretical side, we give extensive and rigorous existence and approximation properties of the graphon mean field solution in sufficiently large systems. On the practical side we provide general learning schemes for graphon mean field equilibria by either introducing agent equivalence classes or reformulating the graphon mean field system as a classical mean field system. By repeatedly finding a regularized optimal control solution and its generated mean field, we successfully obtain plausible approximate Nash equilibria in otherwise infeasible large dense graph games with many agents. Empirically, we are able to demonstrate on a number of examples that the finite-agent behavior comes increasingly close to the mean field behavior for our computed equilibria as the graph or system size grows, verifying our theory. More generally, we successfully apply policy gradient reinforcement learning in conjunction with sequential Monte Carlo methods. | Accept (Poster) | This paper studies graphon mean-field games, whereby a continuum of agents are connected by a graphon. They study a discrete time version and show existence of a Nash equilibrium (under Lipschitz conditions). Moreover they prove that it corresponds to an approximate Nash equilibrium for the game with a finite number of players, thereby validating graphon mean-field games as a natural abstraction when the number of players is sufficiently large. Finally they give algorithms based on fixed point iterations (one based on discretizing the graphon index, the other based on reformulating it as a classical mean-field game) for computing such an equilibrium. They give numerical experiments to validate their approach. The reviewers pointed out various writing issues or other results that would help complete the picture. Many of these were addressed and/or clarified by the authors in their revision. Overall the paper provides an appealing and relatively complete characterization of equilibria in graphon mean-field games. | train | [
"zvmR-vKFsoJ",
"ipaJZDcso20",
"sWNKVLBtD6l",
"zW-ZBaZGo37",
"umSy4YepaZ2",
"vINhJHcXsWP",
"xcv4CzCKjkl",
"QDM5jreYXH",
"96FDs5pAtdx",
"4OHheyK6LXs",
"yPli8KEBBRN",
"vCWrWf6he2a",
"9syURJGwZ7c",
"rEega3wBtdl",
"uzSO2O55Py",
"hLdB05jSjqF"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for his careful considerations and remarks.\n\nRegarding Proposition 2, we will indeed remove the words \"to a GMFE\", which are no longer of importance in view of Theorems 4 and 5, as we have shown there that we obtain the same $(\\epsilon, p)$-Nash property as a GMFE even without convergin... | [
-1,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"sWNKVLBtD6l",
"iclr_2022_0sgntlpKDOz",
"rEega3wBtdl",
"iclr_2022_0sgntlpKDOz",
"uzSO2O55Py",
"zW-ZBaZGo37",
"iclr_2022_0sgntlpKDOz",
"hLdB05jSjqF",
"uzSO2O55Py",
"zW-ZBaZGo37",
"zW-ZBaZGo37",
"ipaJZDcso20",
"ipaJZDcso20",
"ipaJZDcso20",
"iclr_2022_0sgntlpKDOz",
"iclr_2022_0sgntlpKDOz"... |
iclr_2022_2f1z55GVQN | Critical Points in Quantum Generative Models | One of the most important properties of neural networks is the clustering of local minima of the loss function near the global minimum, enabling efficient training. Though generative models implemented on quantum computers are known to be more expressive than their traditional counterparts, it has empirically been observed that these models experience a transition in the quality of their local minima. Namely, below some critical number of parameters, all local minima are far from the global minimum in function value; above this critical parameter count, all local minima are good approximators of the global minimum. Furthermore, for a certain class of quantum generative models, this transition has empirically been observed to occur at parameter counts exponentially large in the problem size, meaning practical training of these models is out of reach. Here, we give the first proof of this transition in trainability, specializing to this latter class of quantum generative model. We use techniques inspired by those used to study the loss landscapes of classical neural networks. We also verify that our analytic results hold experimentally even at modest model sizes. | Accept (Poster) | *Summary:*
Study the location of local minima for quantum generative models.
*Strengths:*
- Rigorous analysis of an important question.
- Clear writing with important conclusions.
*Weaknesses:*
- Technical writing might not be very accessible.
*Discussion:*
Reviewers were mostly favorable about this submission. They found the topic important and the contribution significant. A main concern was that the writing might not be sufficiently self-contained and accessible to a broad audience. The authors worked on the accessibility. In the initial review, zxWF expressed concerns about the concepts, proposed methods, and numerical experiments. zxWF found that the author responses carefully covered most of their comments and raised their score as a consequence. zxWF still finds that some aspects could be improved, particularly in regard to the experiments. F6sD found the question well motivated, the techniques impressive, and the claims important.
*Conclusion:*
Three reviewers are favorable about this work. Two of them find it good and one marginally above the acceptance threshold. I find the topic and the nature of the claims important. Considering the unanimously positive reactions from the reviewers I am recommending this article to be accepted. I ask the authors to take the comments from the reviewers carefully into account when preparing the final version of the paper. | train | [
"bgesA3b3pex",
"DLzssFrdk6X",
"wIW3EEcIsIh",
"JIW4cFOQF8A",
"Ip-JxByfE7M",
"X9k5RyLqUR_",
"o8B__yFcCic",
"zOF-bNNoPI0",
"u8v2GcdRQO"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank the author that carefully addresses most of the suggestions during the feedback session. \n\nI think most of my original concerns are on the clarity, which has been largely improved in the revised and could potentially benefit general audiences in ICLR. \n\nOne of the remaining comments is the numerical exp... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"JIW4cFOQF8A",
"iclr_2022_2f1z55GVQN",
"iclr_2022_2f1z55GVQN",
"DLzssFrdk6X",
"u8v2GcdRQO",
"DLzssFrdk6X",
"zOF-bNNoPI0",
"iclr_2022_2f1z55GVQN",
"iclr_2022_2f1z55GVQN"
] |
iclr_2022_Bl8CQrx2Up4 | cosFormer: Rethinking Softmax In Attention | Transformer has shown great successes in natural language processing, computer vision, and audio processing. As one of its core components, the softmax attention helps to capture long-range dependencies yet prohibits its scale-up due to the quadratic space and time complexity in the sequence length. Kernel methods are often adopted to reduce the complexity by approximating the softmax operator. Nevertheless, due to the approximation errors, their performances vary in different tasks/corpora and suffer crucial performance drops when compared with the vanilla softmax attention. In this paper, we propose a linear transformer called cosFormer that can achieve accuracy comparable to or better than the vanilla transformer in both causal and cross attentions. cosFormer is based on two key properties of softmax attention: i) non-negativeness of the attention matrix; ii) a non-linear re-weighting scheme that can concentrate the distribution of the attention matrix. As its linear substitute, cosFormer fulfills these properties with a linear operator and a cosine-based distance re-weighting mechanism. Extensive experiments on language modeling and text understanding tasks demonstrate the effectiveness of our method. We further examine our method on long sequences and achieve state-of-the-art performance on the Long-Range Arena benchmark. The source code is available at https://github.com/OpenNLPLab/cosFormer. | Accept (Poster) | This paper introduces a new linear attention mechanism for transformer based models. This is accomplished by replacing the softmax in the standard transformer self-attention with a cosine-based re-weighting mechanism. The empirical results are good, and cosFormer generally outperforms existing efficient transformers for autoregressive language modeling, fine-tuning, and on the long range arena.
The reviewers were generally positive regarding the paper, with all reviewers voting to accept. The discussion period focused on particular choices regarding the ReLU activation function vs. other non-negative activation functions, further motivating the cosine operation, and comparing the speed of cosFormer vs. other efficient transformers. The authors responded by providing additional ablations to empirically validate the choice of ReLU, motivated the cosine operation by noting that it introduces a locality bias, and further described the computation requirements of their transformer vs. prior work.
Overall, this is an interesting addition to the linear / efficient transformer literature, with solid empirical results supporting the various design decisions. | train | [
"A_l9lnza9MJ",
"t2QMCxSaAyN",
"yNXhLik1NF0",
"WG4-5Pdk2KF",
"0qTdzvv-7e",
"jZTrIhMy90x",
"ly1_caBbKd",
"y9PLRYe3DJW",
"6ySR2P8lMzn",
"GTvkx-Xca3",
"DZH-J971ICe",
"c7VvhsrAt1L",
"y45Sf8vbAT2",
"XPxrBdZOMi",
"bKXTS2xh-ZI",
"zMuI54qpv5d",
"nW6oZqFxHUS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the new responses from the authors. These explanations are mostly more reasonable than before, and thus I keep my original recommendation.",
"The paper proposes a substitute of the vanilla self-attention module, which claims a linear complexity. Like Performer and many other previous works, it intr... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"WG4-5Pdk2KF",
"iclr_2022_Bl8CQrx2Up4",
"GTvkx-Xca3",
"0qTdzvv-7e",
"jZTrIhMy90x",
"bKXTS2xh-ZI",
"nW6oZqFxHUS",
"DZH-J971ICe",
"zMuI54qpv5d",
"y45Sf8vbAT2",
"GTvkx-Xca3",
"XPxrBdZOMi",
"t2QMCxSaAyN",
"t2QMCxSaAyN",
"iclr_2022_Bl8CQrx2Up4",
"iclr_2022_Bl8CQrx2Up4",
"iclr_2022_Bl8CQrx... |
iclr_2022_dUV91uaXm3 | Revisiting Over-smoothing in BERT from the Perspective of Graph | Recently, the over-smoothing phenomenon of Transformer-based models has been observed in both vision and language fields. However, no existing work has delved deeper to further investigate the main cause of this phenomenon. In this work, we make the attempt to analyze the over-smoothing problem from the perspective of graph, where such a problem was first discovered and explored. Intuitively, the self-attention matrix can be seen as a normalized adjacency matrix of a corresponding graph. Based on the above connection, we provide some theoretical analysis and find that layer normalization plays a key role in the over-smoothing issue of Transformer-based models. Specifically, if the standard deviation of layer normalization is sufficiently large, the output of Transformer stacks will converge to a specific low-rank subspace and result in over-smoothing. To alleviate the over-smoothing problem, we consider hierarchical fusion strategies, which combine the representations from different layers adaptively to make the output more diverse. Extensive experiment results on various data sets illustrate the effect of our fusion method. | Accept (Spotlight) | This paper offers a deep analysis of the over-smoothing phenomenon in BERT from the perspective of graphs. Over-smoothing refers to the token uniformity problem in BERT, different input patches mapping to similar latent representations in ViT, and the phenomenon of shallower representations outperforming deeper ones (overthinking). The authors build a relationship between Transformer blocks and graphs: namely, the self-attention matrix can be regarded as a normalized adjacency matrix of a weighted graph. They prove that if the standard deviation in layer normalization is sufficiently large, the outputs of the transformer stack will converge to a low-rank subspace, resulting in over-smoothing.
They also provide a theoretical proof of why higher layers can lead to over-smoothing. Empirically, they investigate the effects of the magnitude of the two standard deviations between two consecutive layers on possible over-smoothing in diverse tasks.
To overcome over-smoothing, they propose a series of hierarchical fusion strategies that adaptively fuse representations from different layers, including concatenation fusion, max fusion, and self-gate fusion into post-normalization. These strategies reduce similarities between tokens and outperform the BERT baseline on several datasets (GLUE, SWAG and SQuAD).
Overall, I agree with the reviewers that this is a good contribution. | train | [
"a6OO-6z2s2o",
"EcbOsrs_RA",
"Wy6Zsxdmr-D",
"9xIA7r4Zh9q",
"qut2QbQHwM5",
"QgA3uHhlUzA",
"WBDdUgdFO8D",
"lTYH4xp33bg",
"BankHq8_fq",
"3nhxZM17Img",
"4IKse6Kf_NU",
"52WoxSzVUdF",
"q3l-fgWNYHy",
"wMq841Tlxqw"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Agreed, we will add the 12th layer's similarity in Figure 6.\n\nThanks for your response and we appreciate your endorsement.",
" After reading your response to question 3, I recommend the authors add the 12th layer's similarity in Figure 6. From the additional provided results of the similarity of the 12th laye... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"EcbOsrs_RA",
"lTYH4xp33bg",
"9xIA7r4Zh9q",
"3nhxZM17Img",
"iclr_2022_dUV91uaXm3",
"3nhxZM17Img",
"q3l-fgWNYHy",
"52WoxSzVUdF",
"wMq841Tlxqw",
"qut2QbQHwM5",
"iclr_2022_dUV91uaXm3",
"iclr_2022_dUV91uaXm3",
"iclr_2022_dUV91uaXm3",
"iclr_2022_dUV91uaXm3"
] |
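The over-smoothing diagnostic behind the record above is a simple per-layer statistic: the average pairwise cosine similarity of token representations, which rises across depth as outputs collapse toward a low-rank subspace. The sketch below demonstrates the measure on random stand-in hidden states; applying it to a real BERT's layer outputs is the intended use.

```python
# Minimal token-similarity diagnostic for over-smoothing.
import torch
import torch.nn.functional as F

def token_similarity(h):
    # h: (num_tokens, dim) hidden states of one layer
    h = F.normalize(h, dim=-1)
    sim = h @ h.T
    n = h.shape[0]
    off_diag = sim.sum() - sim.diag().sum()    # drop the trivial self-similarities
    return (off_diag / (n * (n - 1))).item()

layer_outputs = [torch.randn(32, 768) for _ in range(12)]  # stand-in per-layer states
for layer, h in enumerate(layer_outputs):
    print(f"layer {layer}: mean token cosine similarity = {token_similarity(h):.3f}")
```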
iclr_2022_HCRVf71PMF | LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5 | Existing approaches to lifelong language learning rely on plenty of labeled data for learning a new task, which is hard to obtain in most real scenarios. Considering that humans can continually learn new tasks from a handful of examples, we expect the models also to be able to generalize well on new few-shot tasks without forgetting the previous ones. In this work, we define this more challenging yet practical problem as Lifelong Few-shot Language Learning (LFLL) and propose a unified framework for it based on prompt tuning of T5. Our framework, called LFPT5, takes full advantage of PT's strong few-shot learning ability, and simultaneously trains the model as a task solver and a data generator. Before learning a new domain of the same task type, LFPT5 generates pseudo (labeled) samples of previously learned domains, and later gets trained on those samples to alleviate forgetting of previous knowledge as it learns the new domain. In addition, a KL divergence loss is minimized to achieve label consistency between the previous and the current model. While adapting to a new task type, LFPT5 includes and tunes additional prompt embeddings for the new task. With extensive experiments, we demonstrate that LFPT5 can be applied to various different types of tasks and significantly outperform previous methods in different LFLL settings. | Accept (Poster) | This work defines the new problem of lifelong few-shot language learning where the goal is to continually learn new few-shot tasks and use those to benefit future tasks while not forgetting previous tasks. With larger models, this is an important goal due to the cost of updating and retraining these models. The work also shows superiority to existing approaches like EWC and MAS. After the authors' rebuttal, the experimental section is also thorough with evaluation on a good range of tasks and approaches such as adapters showing good results. While this setting appears simpler than the full lifelong-learning setting and the approach combines existing ideas, this work's contribution to the definition and thinking about this problem is valuable. However, the authors should more clearly state the advantages of their approach vs standard prompt tuning (with an emphasis on benefiting future tasks) since two reviewers seem caught up on this point. The other two reviewers' comments were addressed by the rebuttal as they stated in their comments. | train | [
"i0jpZ7b3jbR",
"2BWzEjisBP5",
"rD6HzukBltW",
"BwCNxR4dEcz",
"Aw_NEe5OsBs",
"61AizLUpRaF",
"VHferxj2FZE",
"Z0v4IXnSdwR",
"xunbq3ZfC_s",
"DXdZcazOVY",
"Q1vVwSSllYK",
"2VIxSZ4FXGu",
"3ZnPgGnk-dl",
"UdLGRa2PMO",
"-q1pFp_nhki",
"jpOPW35mP1g",
"v1whUMudlY-",
"J_6ZXx1utPf",
"pjZDxZ2sUpN... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
... | [
" Thanks for your feedback and we are happy that you are convinced with our response and the revised version! ",
" [1] Lester, Brian, Rami Al-Rfou, and Noah Constant. \"The power of scale for parameter-efficient prompt tuning.\" EMNLP 2021",
" Thank you for the detailed response. I really appreciate the effort... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"rD6HzukBltW",
"iU22lQiwTC",
"iU22lQiwTC",
"61AizLUpRaF",
"BwCNxR4dEcz",
"KgLblNRI8H",
"iU22lQiwTC",
"xunbq3ZfC_s",
"AaixZof4Lpp",
"iclr_2022_HCRVf71PMF",
"C4vPhhqjpnp",
"iclr_2022_HCRVf71PMF",
"pjZDxZ2sUpN",
"-q1pFp_nhki",
"jpOPW35mP1g",
"v1whUMudlY-",
"FUtKJ5I0AiH",
"pjZDxZ2sUpN"... |
iclr_2022_0xiJLKH-ufZ | Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models | Diffusion probabilistic models (DPMs) represent a class of powerful generative models. Despite their success, the inference of DPMs is expensive since it generally needs to iterate over thousands of timesteps. A key problem in the inference is to estimate the variance in each timestep of the reverse process. In this work, we present a surprising result that both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms w.r.t. its score function. Building upon it, we propose \textit{Analytic-DPM}, a training-free inference framework that estimates the analytic forms of the variance and KL divergence using the Monte Carlo method and a pretrained score-based model. Further, to correct the potential bias caused by the score-based model, we derive both lower and upper bounds of the optimal variance and clip the estimate for a better result. Empirically, our analytic-DPM improves the log-likelihood of various DPMs, produces high-quality samples, and meanwhile enjoys a $20\times$ to $80\times$ speed up. | Accept (Oral) | This paper presents an analytic approach for estimating the optimal reverse variance schedule given a pre-trained score-based model. The experimental results demonstrated the efficacy of the proposed method on several datasets across different sampling budgets. Given the recent interest in score-based generative models, I believe that the paper will find applications in various domains. I am pleased to recommend it for acceptance. | test | [
"1vNHmSJs_85",
"kKwOMqRIsIH",
"BN1_YyCfcR-",
"SZv7JEBy99Z",
"Pm0oyznxmaz",
"6-hBrDvWYIx",
"17k7cmFvw78",
"cvBK6anXEEU",
"alt999_6IL-",
"uy5TVszw4vk",
"PKNBb6YvQ07",
"5-XbDsTTEcy",
"QbB3q0ViKKd",
"cIBwEZZBTHo",
"ccUFQlA7sRy",
"jiVwWLSUOgh",
"o0QJ34P13kk",
"HVoY12Mc5YC",
"4Qh32Izg8... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thank you very much for the valuable suggestions and the update on the score. We highly appreciate it.",
" Thanks for your replies to all reviews and I have read all of them. The Monte Carlo approximation in other reviews is very common in the Bayesian community when estimating a term that has no closed-form ex... | [
-1,
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"kKwOMqRIsIH",
"QbB3q0ViKKd",
"iclr_2022_0xiJLKH-ufZ",
"6-hBrDvWYIx",
"iclr_2022_0xiJLKH-ufZ",
"17k7cmFvw78",
"cvBK6anXEEU",
"5-XbDsTTEcy",
"uy5TVszw4vk",
"HVoY12Mc5YC",
"iclr_2022_0xiJLKH-ufZ",
"Pm0oyznxmaz",
"BN1_YyCfcR-",
"19df0k-PGMz",
"4Qh32Izg8a5",
"iclr_2022_0xiJLKH-ufZ",
"icl... |
iclr_2022_MsHnJPaBUZE | iFlood: A Stable and Effective Regularizer | Various regularization methods have been designed to prevent overfitting of machine learning models. Among them, a surprisingly simple yet effective one, called Flooding, was proposed recently, which directly constrains the training loss on average to stay at a given level. However, our further studies uncover that the design of the loss function of Flooding can lead to a discrepancy between its objective and implementation, and cause an instability issue. To resolve these issues, in this paper, we propose a new regularizer, called individual Flood (denoted as iFlood). With instance-level constraints on training loss, iFlood encourages the trained models to better fit the under-fitted instances while suppressing the confidence on over-fitted ones. We theoretically show that the design of iFlood can be intrinsically connected with removing the noise or bias in training data, which makes it suitable for a variety of applications to improve the generalization performances of learned models. We also theoretically link iFlood to some other regularizers by comparing the inductive biases they introduce. Our experimental results on both image classification and language understanding tasks confirm that models learned with iFlood can stably converge to solutions with better generalization ability, and behave consistently at instance-level. | Accept (Poster) | The paper extends the original work on flooding to the individual instance level to prevent overfitting. Even though the technique is an intuitive extension, the reviewers appreciate its simplicity and effectiveness, and consider the extension necessary. Most reviewers' concerns were addressed through rebuttal. | test | [
"JJ2FF0TOsYM",
"wsupdDXBW7i",
"X6QMw0M_e_",
"eH-A_LKKREu",
"wDheRDfr8Ds",
"qji4KFWOAug",
"dQdDnVK9eFA",
"MNo_ksuJqoy",
"WbJFLhDZaSO",
"1jfGa78zlD4",
"WQ3dfLwOzj5",
"6oq9N0zcTI"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Reviewer WRr1,\n\nThanks again for your thoughtful comments! \nAs the discussion period is close to the end, we would appreciate it if you could kindly let us know the response has addressed your concerns. And we are happy to discuss further if you have any questions.\n\nAuthors",
" The new results of mod... | [
-1,
-1,
6,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4
] | [
"6oq9N0zcTI",
"WbJFLhDZaSO",
"iclr_2022_MsHnJPaBUZE",
"iclr_2022_MsHnJPaBUZE",
"MNo_ksuJqoy",
"WQ3dfLwOzj5",
"iclr_2022_MsHnJPaBUZE",
"eH-A_LKKREu",
"X6QMw0M_e_",
"6oq9N0zcTI",
"dQdDnVK9eFA",
"iclr_2022_MsHnJPaBUZE"
] |
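The contrast drawn in the record above fits in a few lines: Flooding keeps the *average* loss near a flood level b, while iFlood applies |l_i - b| + b to each example individually, pulling under-fitted instances down and pushing over-fitted ones back up. The flood level and the toy per-example losses below are illustrative values.

```python
# Minimal sketch contrasting Flooding with the instance-level iFlood objective.
import torch

def flooding_loss(per_example_loss, b):
    # Original Flooding: constrain the batch-average loss to stay near b.
    return (per_example_loss.mean() - b).abs() + b

def iflood_loss(per_example_loss, b):
    # iFlood: the same constraint, applied per instance before averaging.
    return ((per_example_loss - b).abs() + b).mean()

losses = torch.tensor([0.01, 0.02, 1.5, 0.9])  # mixed over-/under-fitted examples
b = 0.1
print(flooding_loss(losses, b).item(), iflood_loss(losses, b).item())
```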
iclr_2022_vDwBW49HmO | Gradient Matching for Domain Generalization | Machine learning systems typically assume that the distributions of training and test sets match closely. However, a critical requirement of such systems in the real world is their ability to generalize to unseen domains. Here, we propose an _inter-domain gradient matching_ objective that targets domain generalization by maximizing the inner product between gradients from different domains. Since direct optimization of the gradient inner product can be computationally prohibitive --- it requires computation of second-order derivatives –-- we derive a simpler first-order algorithm named Fish that approximates its optimization. We perform experiments on the Wilds benchmark, which captures distribution shift in the real world, as well as the DomainBed benchmark that focuses more on synthetic-to-real transfer. Our method produces competitive results on both benchmarks, demonstrating its effectiveness across a wide range of domain generalization tasks. | Accept (Poster) | This paper proposes a new method for domain generalization. The main idea is to encourage higher inner-product between gradients from different domains. Instead of adding an explicit regularizer to encourage this, authors propose an optimization algorithm called Fish which implicitly encourages higher inner-product between gradients of different domains. Authors further show their proposed method is competitive on challenging benchmarks such as WILDS and DomainBed.
Reviewers all found the proposed algorithm novel and expressed that the contributions of the paper in terms of improving domain generalization are significant. A major issue that came up during the discussion period was that we realized that the presented results on the WILDS benchmark are misleading. In particular, the following statements in the manuscript are false because on "CivilComments" and "Amazon", Fish utilizes a BERT model (Devlin et al., 2018). However, other methods on the WILDS benchmark use DistilBERT (Sanh et al., 2019):
- Section 4.2: "For hyper-parameters including learning rate, batch size, choice of optimizer and model architecture, we follow the exact configuration as reported in the WILDS benchmark. Importantly, we also use the same model selection strategy used in WILDS to ensure a fair comparison."
- Appendix C2: "Results: We compare results to the baselines used in the WILDS benchmark over 3 random seed runs in Table 10. All models are trained using BERT (Devlin et al., 2018)."
Authors explained that the mismatch is because at the time they evaluated their model, an earlier version of WILDS benchmark was available but they later updated other methods' results on a newer version of WILDS benchmark. Of course, I do not think that this explanation makes the misleading statements OK. Authors promised to do the following for the camera-ready version to make sure it is not misleading:
- Using "Worst-U/R Pearson r" as the comparison measure for "PovertyMap"
- Submitting their method to WILDS benchmark making sure everything matches the baselines and then reporting the results on "Amazon" and "CivilComments" datasets.
Therefore, I recommend acceptance, and I hope that the authors will stick to their promise and update the manuscript to include these changes. | train | [
"DOlZ_ySlx8q",
"EHGJQz14DHh",
"H8uN-g6JplQ",
"5txK6BpvKSH",
"oIrgIY_fv1Q",
"fP95-B7MwJH",
"8kZQilOPcr9",
"BzIUZCDNDYt",
"qI9Dqokr0C",
"k9FqNZoDJkI",
"1P_mVclldRi",
"Ilk3Jh75hsB",
"z0CPRXLTtrB",
"hX9T0lFGPAO",
"hVmHwudptbg",
"WdFre7VZAiK",
"4aMEkyzQtiZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to maximize the similarity of the gradients of the classification loss for different domains to learn domain-agnostic features. Pros:\n\n1. This paper proposes to maximize the similarity of gradients of the classifcation loss for different domains to learn invariant features. This is an interes... | [
6,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"iclr_2022_vDwBW49HmO",
"z0CPRXLTtrB",
"1P_mVclldRi",
"Ilk3Jh75hsB",
"hX9T0lFGPAO",
"iclr_2022_vDwBW49HmO",
"iclr_2022_vDwBW49HmO",
"qI9Dqokr0C",
"iclr_2022_vDwBW49HmO",
"DOlZ_ySlx8q",
"hVmHwudptbg",
"4aMEkyzQtiZ",
"WdFre7VZAiK",
"fP95-B7MwJH",
"iclr_2022_vDwBW49HmO",
"iclr_2022_vDwBW4... |
iclr_2022_ZKy2X3dgPA | It Takes Two to Tango: Mixup for Deep Metric Learning | Metric learning involves learning a discriminative representation such that embeddings of similar classes are encouraged to be close, while embeddings of dissimilar classes are pushed far apart. State-of-the-art methods focus mostly on sophisticated loss functions or mining strategies. On the one hand, metric learning losses consider two or more examples at a time. On the other hand, modern data augmentation methods for classification consider two or more examples at a time. The combination of the two ideas is under-studied.
In this work, we aim to bridge this gap and improve representations using mixup, which is a powerful data augmentation approach interpolating two or more examples and corresponding target labels at a time. This task is challenging because, unlike classification, the loss functions used in metric learning are not additive over examples, so the idea of interpolating target labels is not straightforward. To the best of our knowledge, we are the first to investigate mixing both examples and target labels for deep metric learning. We develop a generalized formulation that encompasses existing metric learning loss functions and modify it to accommodate for mixup, introducing Metric Mix, or Metrix. We also introduce a new metric---utilization---to demonstrate that by mixing examples during training, we are exploring areas of the embedding space beyond the training classes, thereby improving representations. To validate the effect of improved representations, we show that mixing inputs, intermediate representations or embeddings along with target labels significantly outperforms state-of-the-art metric learning methods on four benchmark deep metric learning datasets. | Accept (Poster) | This paper adapts the mixup data augmentation strategy to the case of metric learning. The main challenge addressed is the fact that in metric learning, the loss function does not treat each example as an IID sample. The paper takes the view of metric learning as learning over positive and negative pairs (those belonging to the same/different classes) and uses this to develop a fairly general metric-mixup formulation. To measure the effectiveness of the approach for metric learning, the paper introduces a new measure called utilization that looks at the distance of a query point to its nearest training point in embedding space.
The reviewers (5 of them) all favour acceptance on the grounds of novelty, and the performance of the method. During the discussion, some issues were raised around whether utilization is a useful measure, improvements to the paper clarity, whether the clean loss in eq. 10 is necessary, and potential limitations on the generality of the approach. However, additional experiments and clarification during the discussion period has resolved these issues to their satisfaction. | train | [
"3-gfCuBS-Wh",
"MyU9WkO4Bq7",
"zqX1hH635KN",
"T74l8kj1BA8",
"Qg6LOkX4FXk",
"Fu1NmwWDNZX",
"ba259Iu_3n5",
"YujmZrwCpNQ",
"_kO6KIb9AYi",
"muaVfqzo8OP",
"l1JkOce2-6H",
"28ktwuS9Ph",
"g7YsmoAP6u",
"e35alfO8hoL",
"z2jhACE6I1Z",
"LEMgcvlhhQ9"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a technique for using mixup augmentation in deep metric learning training. Specifically, the dml loss function is represented in a general form so that mixup loss can be easily computed for different pairs. This loss is combined with regular dml loss for training the network. The method is evalu... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
3
] | [
"iclr_2022_ZKy2X3dgPA",
"Fu1NmwWDNZX",
"YujmZrwCpNQ",
"iclr_2022_ZKy2X3dgPA",
"YujmZrwCpNQ",
"ba259Iu_3n5",
"_kO6KIb9AYi",
"LEMgcvlhhQ9",
"3-gfCuBS-Wh",
"z2jhACE6I1Z",
"e35alfO8hoL",
"g7YsmoAP6u",
"iclr_2022_ZKy2X3dgPA",
"iclr_2022_ZKy2X3dgPA",
"iclr_2022_ZKy2X3dgPA",
"iclr_2022_ZKy2X3... |
iclr_2022_rhOiUS8KQM9 | Enabling Arbitrary Translation Objectives with Adaptive Tree Search | We introduce an adaptive tree search algorithm, a deterministic variant of Monte Carlo tree search, that can find high-scoring outputs under translation models that make no assumptions about the form or structure of the search objective. This algorithm enables the exploration of new kinds of models that are unencumbered by constraints imposed to make decoding tractable, such as autoregressivity or conditional independence assumptions. When applied to autoregressive models, our algorithm has different biases than beam search has, which enables a new analysis of the role of decoding bias in autoregressive models. Empirically, we show that our adaptive tree search algorithm finds outputs with substantially better model scores compared to beam search in autoregressive models, and compared to reranking techniques in models whose scores do not decompose additively with respect to the words in the output. We also characterise the correlation of several translation model objectives with respect to BLEU. We find that while some standard models are poorly calibrated and benefit from the beam search bias, other often more robust models (autoregressive models tuned to maximize expected automatic metric scores, the noisy channel model and a newly proposed objective) benefit from increasing amounts of search using our proposed decoder, whereas the beam search bias limits the improvements obtained from such objectives. Thus, we argue that as models improve, the improvements may be masked by over-reliance on beam search or reranking based methods. | Accept (Poster) | This paper proposes an adaptive tree search algorithm for NMT models with non-decomposable metrics and shows its efficacy against strong baselines. This is an interesting contribution towards overcoming the performance caps introduced by the uncontrolled-for biases of beam search, and it speaks to a growing community interested in decoding beyond greedy surprisal minimisation.
The initial reviews brought to light a number of concerns that in my view are well addressed in the rebuttal and in the current version of the manuscript. One of the key issues was a confusion caused by the use of the term 'non-autoregressive' to refer to the intractability of the metric / objective function of certain models. This use clashed with the more standard use in MT, which refers to a tractable factorisation of a joint probability by means of strong conditional independence assumptions.
The confusion is easy to address and in no way compromises the thoroughness of the empirical section. The authors are aware of the confusion and how to resolve it, and they have acknowledged the need to pick a less ambiguous term.
I'd like to recommend this for acceptance, but I urge that the authors do not ignore the confusion caused by 'auto/non-auto regressive' and the missing literature that came up in the discussion with reviewer i2pz (I understand the discussion happened too late for the manuscript to be updated, but I trust this can be done for the final version). | val | [
"_eZ7HYEBMhF",
"AJSv4ehUsNB",
"tWdScPt-3ZN",
"IfOzn6nTnjg",
"-ismHS8gvK",
"Yowo_iJqlKJ",
"GcQSxbsNtEq",
"Hf10l-eKwmK",
"Gc4llWINPV",
"Umx0htXtSm",
"Su4z1c56yJ",
"9V47JhYKNTO",
"fQQAXlaaWd3",
"pZQpWxuv2KJ",
"i4tlBgVSsMy",
"K-CHEvukFC6",
"OvAlRPx92-x",
"7rpQ_YOSNXL",
"SGU3M0E0HiX"
... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We deeply thank the reviewer for reading our response and providing additional input, and also improving the initial score. We will update the citations and run the statistical significance tests at least for the main experiments of the paper.\n\nOne last thing we wish to clarify is the fact that we have used the... | [
-1,
-1,
-1,
-1,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
-1,
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"IfOzn6nTnjg",
"tWdScPt-3ZN",
"Gc4llWINPV",
"Hf10l-eKwmK",
"iclr_2022_rhOiUS8KQM9",
"iclr_2022_rhOiUS8KQM9",
"-ismHS8gvK",
"-ismHS8gvK",
"SGU3M0E0HiX",
"Yowo_iJqlKJ",
"Yowo_iJqlKJ",
"Yowo_iJqlKJ",
"Yowo_iJqlKJ",
"7rpQ_YOSNXL",
"7rpQ_YOSNXL",
"7rpQ_YOSNXL",
"7rpQ_YOSNXL",
"iclr_2022... |
iclr_2022_cmt-6KtR4c4 | Leveraging Automated Unit Tests for Unsupervised Code Translation | With little to no parallel data available for programming languages, unsupervised methods are well-suited to source code translation. However, the majority of unsupervised machine translation approaches rely on back-translation, a method developed in the context of natural language translation and one that inherently involves training on noisy inputs. Unfortunately, source code is highly sensitive to small changes; a single token can result in compilation failures or erroneous programs, unlike natural languages where small inaccuracies may not change the meaning of a sentence. To address this issue, we propose to leverage an automated unit-testing system to filter out invalid translations, thereby creating a fully tested parallel corpus. We found that fine-tuning an unsupervised model with this filtered data set significantly reduces the noise in the translations so-generated, comfortably outperforming the state-of-the-art for all language pairs studied. In particular, for Java→Python and Python→C++ we outperform the best previous methods by more than 16% and 24% respectively, reducing the error rate by more than 35%. | Accept (Spotlight) | This paper is about unsupervised translation between programming languages. The main positive is that it introduces the idea of using a form of unit test generation and execution behavior within a programming language back-translation setup, and it puts together a number of pieces in an interesting way: text-to-text transformers, unit test generation, execution and code coverage. Results show a substantial improvement. The main weaknesses are that there are some caveats that need to be made, such as that the (heuristic, not learned) way test cases are translated across languages is not fully general, and that limits the applicability. There are also some cases where I find that the authors are stretching claims a bit beyond what experiments support, e.g., in the response to zd7L about applicability to COBOL.
All-in-all, though, it's a good implementation of an idea that should have a lasting place in this line of work, so it's worth accepting. | train | [
"JAjKJvKNPvw",
"Nssi4iF_RqM",
"hb7dkwYMXs",
"q2VAgl7wdqf",
"ahg3o6x3vG",
"MZy2FIKXeUA",
"rirBcR59K9-",
"P13DQSgs-Es",
"fPwuMozkb01",
"iHbP72io3GB"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > There were attempts to use it for bug patching applications, see for instance, DeepDebug (https://arxiv.org/abs/2105.09352) paper. It is also frequently used during inference, as a postprocessing step, and evaluation.\n\nThank you, we added a reference to this work in the related work section. \n\n> Authors cla... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"iHbP72io3GB",
"fPwuMozkb01",
"q2VAgl7wdqf",
"P13DQSgs-Es",
"iclr_2022_cmt-6KtR4c4",
"rirBcR59K9-",
"iclr_2022_cmt-6KtR4c4",
"iclr_2022_cmt-6KtR4c4",
"iclr_2022_cmt-6KtR4c4",
"iclr_2022_cmt-6KtR4c4"
] |
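The filtering loop at the core of the record above can be sketched schematically. Both `translate_candidates` and `run_unit_tests` below are hypothetical stubs standing in for the unsupervised translation model and the automated test harness; only the keep-if-behaviour-matches control flow reflects the described method.

```python
def translate_candidates(source_fn, n=3):
    """Stub: a real system samples n translations from the unsupervised model.
    The identity 'translation' is included only so this demo finds a match."""
    return [source_fn] + [f"# candidate {i} for: {source_fn}" for i in range(n)]

def run_unit_tests(function_code, test_inputs):
    """Stub test harness: a real one compiles and executes the code on each
    input and records the outputs; here behaviour is faked with a hash."""
    return [hash((function_code, x)) % 97 for x in test_inputs]

def mine_parallel_pair(source_fn, test_inputs):
    reference = run_unit_tests(source_fn, test_inputs)
    for cand in translate_candidates(source_fn):
        if run_unit_tests(cand, test_inputs) == reference:
            return (source_fn, cand)        # behaviourally verified pair
    return None                             # every candidate filtered out

corpus = ["def add(a, b): return a + b"]
pairs = [p for fn in corpus if (p := mine_parallel_pair(fn, [1, 2, 3])) is not None]
print(len(pairs), "verified pseudo-parallel pairs")
```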
iclr_2022_KEQl-MZ5fg7 | Learning Versatile Neural Architectures by Propagating Network Codes | This work explores how to design a single neural network capable of adapting to multiple heterogeneous vision tasks, such as image segmentation, 3D detection, and video recognition. This goal is challenging because both network architecture search (NAS) spaces and methods in different tasks are inconsistent. We solve this challenge from both sides. We first introduce a unified design space for multiple tasks and build a multitask NAS benchmark (NAS-Bench-MR) on many widely used datasets, including ImageNet, Cityscapes, KITTI, and HMDB51. We further propose Network Coding Propagation (NCP), which back-propagates gradients of neural predictors to directly update architecture codes along the desired gradient directions to solve various tasks. In this way, optimal architecture configurations can be found by NCP in our large search space in seconds.
Unlike prior art in NAS that typically focuses on a single task, NCP has several unique benefits. (1) NCP transforms architecture optimization from data-driven to architecture-driven, enabling the joint search of an architecture across multiple tasks with different data distributions. (2) NCP learns from network codes but not original data, enabling it to update the architecture efficiently across datasets. (3) In addition to our NAS-Bench-MR, NCP performs well on other NAS benchmarks, such as NAS-Bench-201. (4) Thorough studies of NCP on inter-, cross-, and intra-tasks highlight the importance of cross-task neural architecture design, i.e., multitask neural architectures and architecture transferring between different tasks. Code is available at https://github.com/dingmyu/NCP. | Accept (Poster) | The paper examines neural architecture search for multi-task networks, by associating model hyperparameters with a coding space and building an MLP predictor for mapping codes to task performance. After the discussion phase, reviewers are marginally in favor of accepting the paper, pointing to the extensive experimental results as a convincing contribution. | train | [
"kbaVqXDKnGH",
"gA_wMnBtH23",
"xYTmOw5DHRm",
"0FOK4ka0Sp",
"24PUvUCobDZ",
"u8moxroV4Tf",
"iloDjgMUg2",
"qGzEZiFAZfK"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper tackles neural architecture search (NAS), addressing the problem of finding architectures suitable for a multitude of vision-related tasks, ranging from object detection to semantic segmentation. To the best of my knowledge, this is the first attempt in NAS for computer vision models that explicitly des... | [
8,
-1,
6,
-1,
-1,
-1,
-1,
6
] | [
2,
-1,
4,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2022_KEQl-MZ5fg7",
"iclr_2022_KEQl-MZ5fg7",
"iclr_2022_KEQl-MZ5fg7",
"iclr_2022_KEQl-MZ5fg7",
"xYTmOw5DHRm",
"qGzEZiFAZfK",
"kbaVqXDKnGH",
"iclr_2022_KEQl-MZ5fg7"
] |
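A toy illustration of the Network Coding Propagation idea from the record above: fit a differentiable predictor from architecture codes to performance, then move the code itself along the predictor's gradient rather than sampling architectures. The quadratic predictor and the synthetic benchmark are demo assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "benchmark": accuracy peaks at a hidden optimal architecture code.
c_star = np.array([0.7, -0.2, 0.4])
def benchmark_accuracy(code):
    return 1.0 - np.sum((code - c_star) ** 2)

# 1) Fit a predictor from codes to accuracy. Linear regression on quadratic
#    features keeps the predictor's gradient available in closed form.
codes = rng.uniform(-1.0, 1.0, size=(200, 3))
feats = np.concatenate([codes, codes ** 2, np.ones((200, 1))], axis=1)
acc = np.array([benchmark_accuracy(c) for c in codes])
w, *_ = np.linalg.lstsq(feats, acc, rcond=None)

def predictor_grad(code):
    # gradient of w . [code, code^2, 1] with respect to the code
    return w[:3] + 2.0 * w[3:6] * code

# 2) "Propagate" the frozen predictor's gradient back to the code itself.
code = rng.uniform(-1.0, 1.0, size=3)
for _ in range(100):
    code += 0.05 * predictor_grad(code)   # ascend the predicted accuracy
print(code.round(2), "vs hidden optimum", c_star)
```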
iclr_2022_9L1BsI4wP1H | Adversarially Robust Conformal Prediction | Conformal prediction is a model-agnostic tool for constructing prediction sets that are valid under the common i.i.d. assumption, which has been applied to quantify the prediction uncertainty of deep net classifiers. In this paper, we generalize this framework to the case where adversaries exist during inference time, under which the i.i.d. assumption is grossly violated. By combining conformal prediction with randomized smoothing, our proposed method forms a prediction set with finite-sample coverage guarantee that holds for any data distribution with $\ell_2$-norm bounded adversarial noise, generated by any adversarial attack algorithm. The core idea is to bound the Lipschitz constant of the non-conformity score by smoothing it with Gaussian noise and leverage this knowledge to account for the effect of the unknown adversarial perturbation. We demonstrate the necessity of our method in the adversarial setting and the validity of our theoretical guarantee on three widely used benchmark data sets: CIFAR10, CIFAR100, and ImageNet. | Accept (Poster) | This paper studies the problem of producing distribution-free prediction sets using conformal prediction that are robust to test-time adversarial perturbations of the input data. The authors point out that these perturbations could be label and covariate dependent, and hence different from covariate-shift handled in Tibshirani et al 19, the label-shift handled in Podkopaev and Ramdas 21, and the f-divergence shifts of Cauchois et al 2021.
The authors propose a relatively simple idea that has appeared in other literatures like optimization but appears to be new to the conformal literature: (i) use a smoothed (using Gaussian noise on X, and inverse Gaussian CDF) nonconformity score function, in order to control its Lipschitz constant, (ii) utilize a larger score cutoff than the standard 1-alpha quantile of calibration scores employed in conformal prediction. The observation that point (i) alone lends some robustness to adversarial perturbations of the data is interesting. As several experiments in the paper and responses to reviewers show, this comes at the (apparently necessary) price of larger prediction sets.
I read through all the comments and also the supplement. The authors have responded very well to all the reviewers questions/concerns, adding significant sections to their supplement as a result. Three reviewers are convinced, but one remaining reviewer requested additional experiments to compare with Cauchois et al (in addition to all the others already produced by the authors originally and in response to reviewers). However, the authors point out that the code in the aforementioned paper was not public, but they were able to privately get the code from the authors during the rebuttal period. At this point, I recommend acceptance of the paper even without those additional experiments, since it is not the authors' fault that the original code was not public. Nevertheless, I suggest to the authors that, if possible, they could add some comparisons to the camera-ready since they now have the code.
I congratulate the authors on a nice work, a very solid rebuttal, and also the astute reviewers on pointing out various aspects that could be improved.
Minor point for the authors (for the camera-ready): I would like to comment on the Rebuttal point 4.4 in the supplement, which then got further discussed in the thread. The reviewer points out four references [R1-R4]. I will add one more to the list [R5] https://arxiv.org/pdf/1905.10634.pdf (Kivaranovic et al, appeared in 2019, published in 2020). I think the literature reviews in this area are starting to be messy, and all authors need to do a better job. Clearly, the original paper of Vovk et al already establishes various types of conditional validity (and calls it PAC-style guarantee), produces guarantees that others in this area produce, and it appears that much recreation of the wheel is occurring. For eg, [R2, R4] do not cite [R5], despite [R5] appearing earlier and being published earlier, and having PAC-style guarantees and experiments with neural nets, etc. However, in turn, [R5] do not cite Vovk [R1], but [R2, R4] do cite [R1]. (And [R3] does not seem to be relevant to this discussion of conditional validity?) In any case, I am not sure any of these papers need citing since the current paper does not deal with conditional validity. If at all, just one sentence like "Conditional validity guarantees, of the styles suggested by Vovk [2012], would be an interesting avenue for future work". | train | [
"9QUUcN98Le6",
"6M0Xqmdevm9",
"x9NVY7NGTkc",
"JqCMjr0jKPs",
"Tecbu1FjzLn",
"GOFpIqjfHyM",
"6pk4HeFq4M",
"Z10jvimxeh7",
"HcdshoVZD3H",
"Yv2aBOYDp19",
"vdxyvyLvyTD",
"VsE_B8IKPg",
"0stWudOQRmc",
"gRvEj3HrEkf"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the follow-up.\n\n\n1. **\" How can the labeled calibration examples be useful after the shift? \"** \\\nThis is the question that we address in our paper for the specific adversarial setting we study. In other words, this is why we believe our contribution is important to the community.... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"6M0Xqmdevm9",
"JqCMjr0jKPs",
"iclr_2022_9L1BsI4wP1H",
"Tecbu1FjzLn",
"Yv2aBOYDp19",
"iclr_2022_9L1BsI4wP1H",
"iclr_2022_9L1BsI4wP1H",
"x9NVY7NGTkc",
"0stWudOQRmc",
"VsE_B8IKPg",
"gRvEj3HrEkf",
"iclr_2022_9L1BsI4wP1H",
"iclr_2022_9L1BsI4wP1H",
"iclr_2022_9L1BsI4wP1H"
] |
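The two-step recipe named in the meta-review above (Gaussian smoothing of the score through the inverse Gaussian CDF, then an inflated calibration cutoff) can be written compactly. The base classifier, noise level sigma, perturbation budget eps, and the plain empirical quantile below are all toy choices; the paper's finite-sample corrections are omitted.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
Phi_inv = NormalDist().inv_cdf

def base_score(x, y):
    """Toy non-conformity score in (0, 1): one minus a softmax class prob."""
    logits = -np.abs(x - np.array([-1.0, 0.0, 1.0]))   # three toy classes
    p = np.exp(logits) / np.exp(logits).sum()
    return 1.0 - p[y]

def smoothed_score(x, y, sigma, n_mc=256):
    """Phi_inv of the Gaussian-smoothed score is (1/sigma)-Lipschitz in x."""
    noisy = x + sigma * rng.normal(size=n_mc)
    mean = np.mean([base_score(xi, y) for xi in noisy])
    return Phi_inv(float(np.clip(mean, 1e-6, 1 - 1e-6)))

sigma, eps, alpha = 0.25, 0.1, 0.1
x_cal = rng.normal(size=50)
y_cal = rng.integers(0, 3, size=50)
cal = [smoothed_score(x, y, sigma) for x, y in zip(x_cal, y_cal)]

# Inflate the usual (1 - alpha) cutoff by eps/sigma so that coverage
# survives any l2-bounded adversarial shift of the test input.
tau = np.quantile(cal, 1 - alpha) + eps / sigma

x_test = 0.3
pred_set = [y for y in range(3) if smoothed_score(x_test, y, sigma) <= tau]
print("prediction set:", pred_set)
```

The price of the inflated cutoff is larger prediction sets, which matches the trade-off the meta-review calls apparently necessary.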
iclr_2022_d5SCUJ5t1k | Objects in Semantic Topology | A more realistic object detection paradigm, Open-World Object Detection, has aroused increasing research interest in the community recently. A qualified open-world object detector can not only identify objects of known categories, but also discover unknown objects, and incrementally learn to categorize them when their annotations progressively arrive. Previous works rely on independent modules to recognize unknown categories and perform incremental learning, respectively. In this paper, we provide a unified perspective: Semantic Topology. During the life-long learning of an open-world object detector, all object instances from the same category are assigned to their corresponding pre-defined node in the semantic topology, including the `unknown' category. This constraint builds up discriminative feature representations and consistent relationships among objects, thus enabling the detector to distinguish unknown objects out of the known categories, as well as making learned features of known objects undistorted when learning new categories incrementally. Extensive experiments demonstrate that semantic topology, either randomly-generated or derived from a well-trained language model, could outperform the current state-of-the-art open-world object detectors by a large margin, e.g., the absolute open-set error (the number of unknown instances that are wrongly labeled as known) is reduced from 7832 to 2546, exhibiting the inherent superiority of semantic topology on open-world object detection. | Accept (Poster) | This paper presents work on open-world object detection. The main idea is to use fixed per-category semantic anchors. These can be incrementally added to when new data appear. The reviewers engaged in significant discussion around the paper with many iterations of improvements to the paper. Initial concerns regarding zero-shot learning were addressed, as were remarks on presentation and claims.
In the end the reviewers were split on this paper. I recommend to accept the paper on the basis of the semantic topology ideas and the thorough experimental results.
The remaining concern centered around the evaluation protocol used in the paper, which follows that in the literature (e.g. Joseph et al. CVPR 21). While this is not a fatal flaw, it is an issue with how this genre of methods is evaluated. It would be good to add discussion to the final paper to highlight this as an opportunity for future work in the field to address. Specifically, as a reviewer noted "after detecting "unknown" objects in T1, the (hypothetical) annotation process provides boxes for ALL objects of some new classes instead of only for those that have been correctly detected (localized and marked "unknown")." | val | [
"ZLfM_fMosp",
"puvvwA8V9yd",
"rmmshiFsqrX",
"8UgaReNEZke",
"dzYgLGrIaGM",
"yKEu1gaDOfA",
"qC2WyqXM89A",
"6Eswi0ROMHS",
"LIHu8FVEqFA",
"xyYwdJnuuEq",
"CyPsAYdR1fI",
"VvF4ShA54Ez",
"py9yWJq5y68",
"PyR5ZHoAefM",
"yRqIfqtEYxG",
"MlaYC04X5Ch"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nThanks for your valuable comments! We will definitely discuss the choice of ‘unknown’ anchor in the revised paper.\n\nBest,\n\nAuthors",
" Dear Reviewer,\n\nThanks! We will fully discuss the problem setting and the evaluation protocol in the revised version.\n\nBest,\nAuthors",
"The paper ad... | [
-1,
-1,
8,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"dzYgLGrIaGM",
"rmmshiFsqrX",
"iclr_2022_d5SCUJ5t1k",
"iclr_2022_d5SCUJ5t1k",
"yKEu1gaDOfA",
"qC2WyqXM89A",
"CyPsAYdR1fI",
"LIHu8FVEqFA",
"PyR5ZHoAefM",
"iclr_2022_d5SCUJ5t1k",
"MlaYC04X5Ch",
"8UgaReNEZke",
"yRqIfqtEYxG",
"rmmshiFsqrX",
"iclr_2022_d5SCUJ5t1k",
"iclr_2022_d5SCUJ5t1k"
] |
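A minimal sketch of the fixed-anchor constraint described in the record above: every category, including 'unknown', owns a pre-defined anchor vector, features are pulled toward their class anchor, and incremental learning only adds anchors without moving old ones. Dimensions, anchors, and the squared-distance pull are illustrative choices, not the paper's exact loss.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Pre-defined, *fixed* semantic anchors: one per known class plus 'unknown'.
classes = ["person", "car", "dog", "unknown"]
anchors = {c: rng.normal(size=dim) for c in classes}   # could come from word vectors

def anchor_loss(features, labels):
    """Pull each instance embedding toward the anchor of its class."""
    target = np.stack([anchors[l] for l in labels])
    return np.mean(np.sum((features - target) ** 2, axis=1))

# When new classes arrive incrementally, only new anchors are *added*;
# old anchors stay put, so previously learned features are not distorted.
anchors["bicycle"] = rng.normal(size=dim)

feats = rng.normal(size=(4, dim))
print(anchor_loss(feats, ["person", "car", "unknown", "bicycle"]))
```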
iclr_2022_45L_dgP48Vd | Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time Series | Anomaly detection is a widely studied task for a broad variety of data types; among them, multiple time series appear frequently in applications, including for example, power grids and traffic networks. Detecting anomalies for multiple time series, however, is a challenging subject, owing to the intricate interdependencies among the constituent series. We hypothesize that anomalies occur in low density regions of a distribution and explore the use of normalizing flows for unsupervised anomaly detection, because of their superior quality in density estimation. Moreover, we propose a novel flow model by imposing a Bayesian network among constituent series. A Bayesian network is a directed acyclic graph (DAG) that models causal relationships; it factorizes the joint probability of the series into the product of easy-to-evaluate conditional probabilities. We call such a graph-augmented normalizing flow approach GANF and propose joint estimation of the DAG with flow parameters. We conduct extensive experiments on real-world datasets and demonstrate the effectiveness of GANF for density estimation, anomaly detection, and identification of time series distribution drift. | Accept (Spotlight) | The paper tackles the problem of detecting anomalies in multiple time-series. All the reviewers agreed that the methodology is novel, sound and very interesting. Initially, there were some concerns regarding the experimental evaluation, however, the rebuttal and subsequent discussion cleared up these concerns to some extent and all reviewers are eventually supporting or strongly supporting acceptance. | val | [
"qB9qJb49_Bu",
"tNUlrNAvW6x",
"rp-MeXf1O3q",
"2jJxLd4rWDm",
"s2wQ_OzJ9AA",
"M9v9SoZa2ty",
"MpI0kCsazZB",
"vJMvFdR-zOe",
"yF6uIHhkDN",
"pIxNB1j2FvE",
"f3RKzMCFjUo",
"8YQzre0v5Es",
"--luABnQpzS",
"Sft3x6-XhaQ",
"7uv9SW7XLnP",
"jyvDkBJNzsf",
"UbLN_Kv0Xs"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I thank the authors for addressing my comments. I went through the revised draft and the responses to other reviewers. I have no further questions. The new results increase my confidence in the proposed method. I have updated my score accordingly.",
"The authors propose a model for anomaly detection on multivar... | [
-1,
8,
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"8YQzre0v5Es",
"iclr_2022_45L_dgP48Vd",
"2jJxLd4rWDm",
"s2wQ_OzJ9AA",
"pIxNB1j2FvE",
"vJMvFdR-zOe",
"iclr_2022_45L_dgP48Vd",
"Sft3x6-XhaQ",
"iclr_2022_45L_dgP48Vd",
"f3RKzMCFjUo",
"yF6uIHhkDN",
"tNUlrNAvW6x",
"tNUlrNAvW6x",
"MpI0kCsazZB",
"UbLN_Kv0Xs",
"iclr_2022_45L_dgP48Vd",
"iclr_... |
iclr_2022_MP904TiHqJ- | Provably convergent quasistatic dynamics for mean-field two-player zero-sum games | In this paper, we study the problem of finding mixed Nash equilibrium for mean-field two-player zero-sum games. Solving this problem requires optimizing over two probability distributions. We consider a quasistatic Wasserstein gradient flow dynamics in which one probability distribution follows the Wasserstein gradient flow, while the other one is always at the equilibrium. Theoretical analysis is conducted on this dynamics, showing its convergence to the mixed Nash equilibrium under mild conditions. Inspired by the continuous dynamics of probability distributions, we derive a quasistatic Langevin gradient descent method with inner-outer iterations, and test the method on different problems, including training a mixture of GANs. | Accept (Poster) | The authors first consider a mean field two player zero sum game and consider quasistatic Wasserstein gradient flow dynamics for solving the problem. The dynamics is proved to be convergent under some assumptions. Finally, the authors provide a discretization of the gradient flow and, using this, propose an algorithm for solving min-max optimization problems. They use this algorithm for GANs as the main example. Experimental results claim that the algorithm outperforms Langevin gradient descent, especially in high dimensions.
This paper sits right at the border. But subsequent to the author response, one of the reviewers has updated the score and seems more positive about the paper. In view of this, I am leaning towards an accept. | val | [
"YC-29CqWFjo",
"xUsV09WOXGg",
"DrCkL5nJxn8",
"iVMDLuSNCoK",
"9Hh1ntbXZKf",
"wLmmilvBw_",
"j-bKAVgHHwU",
"TMFtOVi9ZCM",
"YJWjtT8Qphp",
"WtdQxpgktge",
"fOJxziWQ-Qk",
"inuDzlPVmbz",
"MpBtA_cHOu1"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper focuses on the methodology of finding mixed Nash equilibrium. The authors first introduces the corresponding quasi static Wasserstein gradient flow dynamics for solving the problem, which is a limiting dynamics with $q$ seen as infinitely faster than $p$. Then the continuous dynamics is proved to be con... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"iclr_2022_MP904TiHqJ-",
"DrCkL5nJxn8",
"iVMDLuSNCoK",
"9Hh1ntbXZKf",
"j-bKAVgHHwU",
"MpBtA_cHOu1",
"inuDzlPVmbz",
"fOJxziWQ-Qk",
"iclr_2022_MP904TiHqJ-",
"YC-29CqWFjo",
"iclr_2022_MP904TiHqJ-",
"iclr_2022_MP904TiHqJ-",
"iclr_2022_MP904TiHqJ-"
] |
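A particle-based sketch of the quasistatic inner-outer scheme from the record above, on an assumed toy payoff: many inner Langevin steps keep the q-particles near their equilibrium before each single outer step on the p-particles. Step size, temperature, particle count, and the payoff itself are arbitrary demo choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy payoff f(x, y) = x*y + 0.1*x**2 - 0.1*y**2 (min over x, max over y);
# the equilibrium of these mean-field dynamics sits around (0, 0).
def grad_x(x, y_mean): return y_mean + 0.2 * x
def grad_y(x_mean, y): return x_mean - 0.2 * y

xs = rng.normal(size=64)       # particles representing p (the minimiser)
ys = rng.normal(size=64)       # particles representing q (the maximiser)
eta, beta = 0.05, 10.0

for _ in range(200):
    # Inner iterations: drive q (near) its equilibrium against the current p.
    for _ in range(20):
        ys += eta * grad_y(xs.mean(), ys) \
              + np.sqrt(2 * eta / beta) * rng.normal(size=64)
    # One outer Langevin step on p against the equilibrated q.
    xs += -eta * grad_x(xs, ys.mean()) \
          + np.sqrt(2 * eta / beta) * rng.normal(size=64)

print(f"p mean {xs.mean():+.3f}, q mean {ys.mean():+.3f}  (target ~0, ~0)")
```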
iclr_2022_bfuGjlCwAq | Learning Efficient Online 3D Bin Packing on Packing Configuration Trees | Online 3D Bin Packing Problem (3D-BPP) has widespread applications in industrial automation and has aroused enthusiastic research interest recently. Existing methods usually solve the problem with limited resolution of spatial discretization, and/or cannot deal with complex practical constraints well. We propose to enhance the practical applicability of online 3D-BPP via learning on a novel hierarchical representation – packing configuration tree (PCT). PCT is a full-fledged description of the state and action space of bin packing which can support packing policy learning based on deep reinforcement learning (DRL). The size of the packing action space is proportional to the number of leaf nodes, making the DRL model easy to train and well-performing even with a continuous solution space. During training, PCT expands based on heuristic rules; however, the DRL model learns a much more effective and robust packing policy than heuristic methods. Through extensive evaluation, we demonstrate that our method outperforms all existing online BPP methods and is versatile in terms of incorporating various practical constraints. | Accept (Poster) | This paper proposes a new approach to online 3D bin packing with deep reinforcement learning. It received mixed reviews. AC finds that the responses from authors have addressed the concerns satisfactorily.
"nZr7vwRqMgx",
"H9pZrjsLMPv",
"W-SlyWYobMG",
"ZhgfHeldgN1",
"fjcQwJ1rBn",
"KKNHKxepnG",
"rn-Tz592EXF",
"e_sY19PW_S",
"ykoq7EnHJSW",
"r9F5xvsP4eh",
"svqegdrADkX",
"IljI01T10D",
"zrCbyRb7obD",
"Q7wQqhlK2Ly"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the responses by the authors and their detailed responses to the reviewers comments! Having read the other reviews, I maintain my positive rating.",
" **- Explanations of our generalization evaluation**\n\n* We change the probability of sampling each item from a finite item set. This probability di... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
8,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
4,
4,
3
] | [
"ykoq7EnHJSW",
"Q7wQqhlK2Ly",
"zrCbyRb7obD",
"zrCbyRb7obD",
"zrCbyRb7obD",
"iclr_2022_bfuGjlCwAq",
"IljI01T10D",
"r9F5xvsP4eh",
"Q7wQqhlK2Ly",
"iclr_2022_bfuGjlCwAq",
"IljI01T10D",
"iclr_2022_bfuGjlCwAq",
"iclr_2022_bfuGjlCwAq",
"iclr_2022_bfuGjlCwAq"
] |
iclr_2022_MIX3fJkl_1 | NeuPL: Neural Population Learning | Learning in strategy games (e.g. StarCraft, poker) requires the discovery of diverse policies. This is often achieved by iteratively training new policies against existing ones, growing a policy population that is robust to exploitation. This iterative approach suffers from two issues in real-world games: a) under a finite budget, approximate best-response operators at each iteration need truncating, resulting in under-trained good-responses populating the population; b) repeated learning of basic skills at each iteration is wasteful and becomes intractable in the presence of increasingly strong opponents. In this work, we propose Neural Population Learning (NeuPL) as a solution to both issues. NeuPL offers convergence guarantees to a population of best-responses under mild assumptions. By representing a population of policies within a single conditional model, NeuPL enables transfer learning across policies. Empirically, we show the generality, improved performance and efficiency of NeuPL across several test domains. Most interestingly, we show that novel strategies become more accessible, not less, as the neural population expands. | Accept (Poster) | The authors propose a new framework of population learning that optimizes a single conditional model to learn and represent multiple diverse policies in real-world games. All reviewers agree the ideas are interesting and the empirical results are strong. The meta reviewer agrees and recommends acceptance. | val | [
"4VSrMXPRTkl",
"3r84CLl2yAz",
"vOrTuvPgcC1",
"IayoJBycRvC",
"aoiJHmnNjxV",
"qS8gUWUkdct",
"34nlhThckia",
"U61Dw0lYzZ5",
"ubHSnGZQLkC",
"BmoZHE4myC",
"eVV-Yh-h5k",
"YT5DNYZNTNS",
"AxuOH6DYsU",
"mlVQAW2GFT",
"HQt2dRiQcrt"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I have increased my score and recommend an acceptance.",
"The paper proposes Neural Population Learning which I think extends PSRO in two aspects. First, it avoid the premature *good*-response. Second, it uses a conditional network to represent the population of policies, so as to enable skill transfer. NeuPL ... | [
-1,
8,
8,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
3,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"YT5DNYZNTNS",
"iclr_2022_MIX3fJkl_1",
"iclr_2022_MIX3fJkl_1",
"AxuOH6DYsU",
"34nlhThckia",
"iclr_2022_MIX3fJkl_1",
"U61Dw0lYzZ5",
"ubHSnGZQLkC",
"BmoZHE4myC",
"qS8gUWUkdct",
"HQt2dRiQcrt",
"3r84CLl2yAz",
"vOrTuvPgcC1",
"iclr_2022_MIX3fJkl_1",
"iclr_2022_MIX3fJkl_1"
] |
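The single-conditional-model aspect of NeuPL described in the record above can be sketched in a few lines: one shared set of weights represents the whole population, and an individual policy is selected by conditioning on its identity vector, which is what enables transfer across policies. The one-hot conditioning and softmax head are simplifying assumptions; meta-game solving and best-response training are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_policies, obs_dim, n_actions = 4, 5, 3

# ONE shared parameter set represents the whole population; an individual
# policy is selected by conditioning on its identity vector.
W = 0.1 * rng.normal(size=(obs_dim + n_policies, n_actions))

def policy(obs, idx):
    one_hot = np.eye(n_policies)[idx]
    logits = np.concatenate([obs, one_hot]) @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

obs = rng.normal(size=obs_dim)
for i in range(n_policies):
    print(f"policy {i} action probs:", policy(obs, i).round(2))
```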
iclr_2022_yKIAXjkJc2F | Imbedding Deep Neural Networks | Continuous-depth neural networks, such as Neural ODEs, have refashioned the understanding of residual neural networks in terms of non-linear vector-valued optimal control problems. The common solution is to use the adjoint sensitivity method to replicate a forward-backward pass optimisation problem. We propose a new approach which explicates the network's `depth' as a fundamental variable, thus reducing the problem to a system of forward-facing initial value problems. This new method is based on the principle of `Invariant Imbedding', for which we prove a general solution, applicable to all non-linear, vector-valued optimal control problems with both running and terminal loss.
Our new architectures provide a tangible tool for inspecting the theoretical--and to a great extent unexplained--properties of network depth. They also constitute a resource of discrete implementations of Neural ODEs comparable to classes of imbedded residual neural networks. Through a series of experiments, we show the competitive performance of the proposed architectures for supervised learning and time series prediction. | Accept (Spotlight) | The authors presents an alternative view of Neural ODEs, offering a novel understanding of depth in neural networks. The reviewers were overall impressed by the novelty and potential for insights this work brings. There was some disappointment that the empirical results were not stronger (both in terms of pure performance and computational cost) and that it wasn't clear how the theoretical insights into depth actually translated into a practical insight. Nevertheless, I agree with the reviewers that this is a good submission and would I think make for an interesting addition to the conference programme. | train | [
"Ee-KL20G6Hx",
"ZJy4jIPwEM",
"RqJQfmUfi87",
"q_T-g2Ys2up",
"O2oE1LKOvF8",
"wwK78Q5N3Sc",
"0rMzYCT63fi",
"CxUpnixU1LI",
"mg-bf6PDQq9",
"a5r3TmzphAu",
"ykOACkNwgA",
"NaZHr7pEdkq",
"fS6zO5ccdZx",
"CdiH8v5Rgnb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I have raised your score from 6 to 8.",
"In recent years, continuous depth neural networks, such as Neural ODEs have helped understanding of ResNet in terms of non-linear vector-valued optimal control problems. The authors of the paper show that it is possible to reduce the non-linea... | [
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"CxUpnixU1LI",
"iclr_2022_yKIAXjkJc2F",
"ykOACkNwgA",
"a5r3TmzphAu",
"iclr_2022_yKIAXjkJc2F",
"0rMzYCT63fi",
"CdiH8v5Rgnb",
"ZJy4jIPwEM",
"CdiH8v5Rgnb",
"fS6zO5ccdZx",
"NaZHr7pEdkq",
"iclr_2022_yKIAXjkJc2F",
"iclr_2022_yKIAXjkJc2F",
"iclr_2022_yKIAXjkJc2F"
] |
iclr_2022_kR1hC6j48Tp | GATSBI: Generative Adversarial Training for Simulation-Based Inference | Simulation-based inference (SBI) refers to statistical inference on stochastic models for which we can generate samples, but not compute likelihoods.
Like SBI algorithms, generative adversarial networks (GANs) do not require explicit likelihoods. We study the relationship between SBI and GANs, and introduce GATSBI, an adversarial approach to SBI. GATSBI reformulates the variational objective in an adversarial setting to learn implicit posterior distributions. Inference with GATSBI is amortised across observations, works in high-dimensional posterior spaces and supports implicit priors. We evaluate GATSBI on two common SBI benchmark problems and on two high-dimensional simulators. On a model for wave propagation on the surface of a shallow water body, we show that GATSBI can return well-calibrated posterior estimates even in high dimensions.
On a model of camera optics, it infers a high-dimensional posterior given an implicit prior, and performs better than a state-of-the-art SBI approach. We also show how GATSBI can be extended to perform sequential posterior estimation to focus on individual observations.
Overall, GATSBI opens up opportunities for leveraging advances in GANs to perform Bayesian inference on high-dimensional simulation-based models. | Accept (Poster) | The paper explores the application of generative adversarial networks as posterior models in simulation-based inference. A new method is proposed, and its connections with related work are studied. The proposed method is empirically evaluated on joint inference of up to 784 parameters.
The reviews are borderline, with one weak reject, two weak accepts, and one strong accept. Overall, the paper is well-written and well-executed. Its main strength is the promising performance of the proposed method in high-dimensional parameter spaces, which are out-of-reach for many existing approaches. The main weakness of the paper is its lack of novelty: the proposed method is only marginally different from already existing ones, while the paper could have explored the differences to a greater extent.
On balance, I'm leaning towards recommending the paper for acceptance. Despite the lack of novelty, the paper is well executed with potential impact in high-dimensional simulation-based inference. | train | [
"GDDU_piGht",
"Q5MzNIhfrNR",
"bNGtu9Sl5Q6",
"r0dDNUw16uK",
"WuEaqdiSpb_",
"PAHiPPjm-ku",
"Qf8nq0oICrw",
"rCTJkXU1USm",
"bHa5ghN8XM",
"6tNXSoq5Id2",
"GAATlYpzJTb",
"SoeOUvjlvK",
"Kb1S4R-1cr",
"BPKyo7IuNC3"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their comments and feedback. We maintain that our paper is the first one to rigorously connect literature on SBI with those on GANs and VAEs, and that this is a fundamental contribution of the paper -- while aspects of these relationships have been described in different papers (as we ci... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"Q5MzNIhfrNR",
"bHa5ghN8XM",
"iclr_2022_kR1hC6j48Tp",
"iclr_2022_kR1hC6j48Tp",
"iclr_2022_kR1hC6j48Tp",
"6tNXSoq5Id2",
"bNGtu9Sl5Q6",
"BPKyo7IuNC3",
"Kb1S4R-1cr",
"SoeOUvjlvK",
"iclr_2022_kR1hC6j48Tp",
"iclr_2022_kR1hC6j48Tp",
"iclr_2022_kR1hC6j48Tp",
"iclr_2022_kR1hC6j48Tp"
] |
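A schematic of the adversarial objective summarized in the GATSBI record above: the discriminator separates joint samples (theta, x) from (generator(x, noise), x), so the generator becomes an implicit amortised posterior. The linear generator/discriminator and the one-dimensional simulator are stubs; real training would update W and V by backpropagation through these losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):
    """Implicit-likelihood model: we can sample x | theta but not evaluate it."""
    return theta + 0.1 * rng.normal(size=theta.shape)

def generator(x, noise, W):
    """Implicit posterior q(theta | x): maps observation + noise to a sample."""
    return W[0] * x + W[1] * noise

def discriminator(theta, x, V):
    """Scores how plausible a (theta, x) pair looks as a draw from the joint."""
    return 1.0 / (1.0 + np.exp(-(V[0] * theta + V[1] * x)))

W = np.array([1.0, 0.1])                   # toy generator parameters
V = np.array([1.0, -1.0])                  # toy discriminator parameters

theta = rng.normal(size=256)               # prior draws
x = simulator(theta)                       # joint samples (theta, x)
theta_fake = generator(x, rng.normal(size=256), W)

# Standard GAN losses, but over (parameter, observation) pairs: this is the
# sense in which SBI reduces to adversarial training of an implicit posterior.
d_loss = -np.mean(np.log(discriminator(theta, x, V))
                  + np.log(1.0 - discriminator(theta_fake, x, V)))
g_loss = -np.mean(np.log(discriminator(theta_fake, x, V)))
print(f"d_loss {d_loss:.3f}, g_loss {g_loss:.3f}")
```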
iclr_2022_AvcfxqRy4Y | Understanding the Role of Self Attention for Efficient Speech Recognition | Self-attention (SA) is a critical component of Transformer neural networks that have succeeded in automatic speech recognition (ASR). In this paper, we analyze the role of SA in Transformer-based ASR models for not only understanding the mechanism of improved recognition accuracy but also lowering the computational complexity. We reveal that SA performs two distinct roles: phonetic and linguistic localization. Especially, we show by experiments that phonetic localization in the lower layers extracts phonologically meaningful features from speech and reduces the phonetic variance in the utterance for proper linguistic localization in the upper layers. From this understanding, we discover that attention maps can be reused as long as their localization capability is preserved. To evaluate this idea, we implement the layer-wise attention map reuse on real GPU platforms and achieve up to 1.96 times speedup in inference and 33% savings in training time with noticeably improved ASR performance for the challenging benchmark on LibriSpeech dev/test-other dataset.
| Accept (Spotlight) | The paper conducted a thorough experimental analysis of the attention map in Conformer models for CTC-based speech recognition and connected it with phonetic and linguistic information in the speech. Using these insights, the paper presented some computation improvement and marginal quality gains. The authors actively conducted additional experiments to further justify the claims. The paper is strong in terms of the systematic way of in-depth analysis and further development (i.e. sharing the attention map across layers for speedup). But as pointed out by the reviewers, it lacks some comparisons with other alternatives to justify the importance of sharing attention maps in reducing computations. Also, it would be better if there were justification of how the observations generalize to other types of models (such as LAS, RNN-T).
The decision is mainly because of the thorough analysis conducted in the paper which can be a good contribution to the community. | train | [
"44MV6VUimYz",
"uk2tbcRBiL",
"8sdf2Q07DAa",
"YUDG0RwL5-K",
"t1r8XKJqYas",
"5nciInZlZr",
"v_FE9lmBSFL",
"E0WHj8s8f0",
"1AodWi2h2bUC",
"3b8McTIXhrW",
"1ezkv7iKr8n",
"m2u3G8z_g7i",
"yTwmbxog51",
"XYxf2mtoAYH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper develops a mechanism to analyze self-attention for speech recognition, and further use the resulting insight to facilitate attention reuse for reducing inference cost. The analysis identifies a distinct pattern between lower half and upper half layers on the ASR encoder through cumulative attention diag... | [
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
5,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_AvcfxqRy4Y",
"v_FE9lmBSFL",
"t1r8XKJqYas",
"iclr_2022_AvcfxqRy4Y",
"v_FE9lmBSFL",
"iclr_2022_AvcfxqRy4Y",
"iclr_2022_AvcfxqRy4Y",
"44MV6VUimYz",
"XYxf2mtoAYH",
"YUDG0RwL5-K",
"yTwmbxog51",
"iclr_2022_AvcfxqRy4Y",
"iclr_2022_AvcfxqRy4Y",
"iclr_2022_AvcfxqRy4Y"
] |
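A small numeric sketch of the layer-wise attention-map reuse evaluated in the record above: layer l computes the softmax attention map once, and the next layer skips the query/key projections and reuses the cached map with its own value projection. Random weights, a single head, and the missing residual/feed-forward blocks are simplifications of a real Conformer layer.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                        # sequence length, model width
x = rng.normal(size=(T, d))
Wq, Wk = rng.normal(size=(2, d, d)) / np.sqrt(d)
Wv1, Wv2 = rng.normal(size=(2, d, d)) / np.sqrt(d)

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Layer l: full self-attention; cache the (T, T) attention map.
attn = softmax((x @ Wq) @ (x @ Wk).T / np.sqrt(d))   # computed once
out1 = attn @ (x @ Wv1)

# Layer l+1: REUSE the cached map -- no Q/K projections and no softmax here.
out2 = attn @ (out1 @ Wv2)
print(out1.shape, out2.shape)      # the quadratic-in-T part ran only once
```

Reuse is sensible exactly when the map's localization pattern is stable across adjacent layers, which is the empirical observation the abstract reports.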
iclr_2022_xDIvIqQ3DXD | On the approximation properties of recurrent encoder-decoder architectures | Encoder-decoder architectures have recently gained popularity in sequence to sequence modelling, featuring in state-of-the-art models such as transformers. However, a mathematical understanding of their working principles still remains limited. In this paper, we study the approximation properties of recurrent encoder-decoder architectures. Prior work established theoretical results for RNNs in the linear setting, where approximation capabilities can be related to smoothness and memory of target temporal relationships. Here, we uncover that the encoder and decoder together form a particular “temporal product structure” which determines the approximation efficiency. Moreover, the encoder-decoder architecture generalises RNNs with the capability to learn time-inhomogeneous relationships. Our results provide the theoretical understanding of approximation properties of the recurrent encoder-decoder architecture, which precisely characterises, in the considered setting, the types of temporal relationships that can be efficiently learned. | Accept (Spotlight) | This paper presents a theoretical analysis of the approximation properties of linear recurrent encoder-decoder architectures, obtaining universal approximation results and subsequently showing approximation rates of targets for RNN encoder-decoders. It introduces a notion of "temporal products," which helps to characterize the types of temporal relationships that can be efficiently learned in this setting.
Overall, the reviewers and I all agree that this paper makes important theoretical contributions to the important problem of the approximation capabilities of encoder-decoder architectures. The main weaknesses involve the rather simplified linear problem setup, but this limitation is easily forgiven in this first-of-its-kind rigorous analysis. I recommend acceptance. | test | [
"rcKH_wm2xs8",
"sBP2Mn9WJLH",
"gjOrLn6A5Vw",
"eZVq3DMWK29",
"nR26q8Sjqc",
"Pw8boo5RXxf",
"NI-bSMa3gec",
"iJJN4rszppz",
"d6aG3k-rO1Q",
"qqGsjoLKLg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I read the rebuttal and the updated manuscript, thanks for addressing my comments! And I agree that theoretical support is important as the field grows. The authors kindly added more experiments and clarified the points that I was confused about. I agree that for theoretic community, being mathematically rigorous... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nR26q8Sjqc",
"iclr_2022_xDIvIqQ3DXD",
"d6aG3k-rO1Q",
"d6aG3k-rO1Q",
"sBP2Mn9WJLH",
"qqGsjoLKLg",
"sBP2Mn9WJLH",
"iclr_2022_xDIvIqQ3DXD",
"iclr_2022_xDIvIqQ3DXD",
"iclr_2022_xDIvIqQ3DXD"
] |
iclr_2022_qiMXBIf4NfB | How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis | Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited. Despite empirical successes, its theoretical characterization remains elusive. To the best of our knowledge, this work establishes the first theoretical analysis for the known iterative self-training paradigm and formally proves the benefits of unlabeled data in both training convergence and generalization ability. To make our theoretical analysis feasible, we focus on the case of one-hidden-layer neural networks. However, theoretical understanding of iterative self-training is non-trivial even for a shallow neural network. One of the key challenges is that existing neural network landscape analysis built upon supervised learning no longer holds in the (semi-supervised) self-training paradigm. We address this challenge and prove that iterative self-training converges linearly with both convergence rate and generalization accuracy improved in the order of $1/\sqrt{M}$, where $M$ is the number of unlabeled samples. Extensive experiments from shallow neural networks to deep neural networks are also provided to justify the correctness of our established theoretical insights on self-training. | Accept (Poster) | The paper studies self-training for a one hidden-layer architecture, showing that with proper initialization self-training will improve over standard supervision. The reviewers appreciated the analysis and thought the results make sense. However, they did comment that the paper does not provide sufficient insight about the effectiveness of self-training and this should be discussed in the final version. There were additionally comments about the architecture choice (e.g., fixed output weights), and the authors should also address this. | train | [
"3k2kMoF_v1",
"5LcrvcLIoZ3",
"K8pD102PYc3",
"1CrUVbsmJ1r"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents the first theoretical analysis of self-training on one hidden-layer neural network with gaussian input. The paper shows that under certain conditions (e.g., the initialization is neither too far or too close to the optimal), iterative self-training on the aformentioned setting can provably reco... | [
6,
8,
6,
6
] | [
3,
3,
3,
3
] | [
"iclr_2022_qiMXBIf4NfB",
"iclr_2022_qiMXBIf4NfB",
"iclr_2022_qiMXBIf4NfB",
"iclr_2022_qiMXBIf4NfB"
] |
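The iterative self-training paradigm analysed in the record above, in a toy runnable form: fit on the few labels, pseudo-label the large unlabeled pool, refit on both, repeat. A nearest-centroid classifier stands in for the paper's one-hidden-layer network, and the Gaussian-mixture data are an assumption for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes: few labeled samples, many unlabeled ones (M large).
X_lab = np.r_[rng.normal(-1, 1, (5, 2)), rng.normal(1, 1, (5, 2))]
y_lab = np.r_[np.zeros(5), np.ones(5)].astype(int)
X_unl = np.r_[rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))]

def fit_centroids(X, y):
    return np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def predict(C, X):
    d = ((X[:, None, :] - C[None]) ** 2).sum(-1)
    return d.argmin(axis=1)

C = fit_centroids(X_lab, y_lab)              # teacher fit on labels only
for _ in range(5):                           # iterative self-training
    pseudo = predict(C, X_unl)               # pseudo-label the unlabeled pool
    C = fit_centroids(np.r_[X_lab, X_unl],   # refit on labeled + pseudo-labeled
                      np.r_[y_lab, pseudo])
print("centroids after self-training:\n", C.round(2))
```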
iclr_2022_nzvbBD_3J-g | On Incorporating Inductive Biases into VAEs | We explain why directly changing the prior can be a surprisingly ineffective mechanism for incorporating inductive biases into variational auto-encoders (VAEs), and introduce a simple and effective alternative approach: Intermediary Latent Space VAEs (InteL-VAEs). InteL-VAEs use an intermediary set of latent variables to control the stochasticity of the encoding process, before mapping these in turn to the latent representation using a parametric function that encapsulates our desired inductive bias(es). This allows us to impose properties like sparsity or clustering on learned representations, and incorporate human knowledge into the generative model. Whereas changing the prior only indirectly encourages behavior through regularizing the encoder, InteL-VAEs are able to directly enforce desired characteristics. Moreover, they bypass the computation and encoder design issues caused by non-Gaussian priors, while allowing for additional flexibility through training of the parametric mapping function. We show that these advantages, in turn, lead to both better generative models and better representations being learned. | Accept (Poster) | I am recommending a poster for this paper. There was considerable discussion and much author response. The reviews were good (after taking author response and paper revision into account) with one out of three being enthusiastic. There was a concern that the basic idea was technically mis-represented as the inductive bias is being placed in the decoder rather than prior. But I am convinced that it is a reasonable idea to place bias in the decoder and that idea is worth publication.
Personally I think the paper would be much stronger with better empirical evaluation. I find a focus on MNIST (or fashion MNIST) unconvincing. Results on CelebA should be accompanied by sample image generations. I would rather see downstream task metrics based on learned features. This paper cannot be put in the same class as recent results on unsupervised learning of image features for downstream tasks. It remains an open question as to whether this paper provides any contribution in that arena. | train | [
"uu6GUGUD5Wv",
"SmfyHf0uOQ",
"gzDSCQJjxU",
"OTQTa_coBZ7",
"Uya_7ksbrEL",
"wQMp0ArfxY",
"AMBPjCL7HmF",
"7nC9s0p8x9",
"oHhteSLm7CW",
"58iTIxc3_po",
"pTXTrdlFvG1",
"NGzynQMPAsI",
"XYK0qfa1_F4",
"FQVQaf6V3Q",
"vRCfUYgiVyO",
"0hq5jSh_EZA",
"F9b2bwWWMNu",
"W5VEHWCP58",
"0fR0K8N7KFb"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thank you very much for reconsidering your recommendation and praising the simplicity of our paper! \nWe agree more complex human knowledge usually requires subtler design for the mapping $g_\\psi$. But it would still be much simpler than changing the prior, the encoder(, and perhaps the loss function), which is ... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"Uya_7ksbrEL",
"gzDSCQJjxU",
"7nC9s0p8x9",
"iclr_2022_nzvbBD_3J-g",
"pTXTrdlFvG1",
"oHhteSLm7CW",
"58iTIxc3_po",
"FQVQaf6V3Q",
"vRCfUYgiVyO",
"XYK0qfa1_F4",
"OTQTa_coBZ7",
"OTQTa_coBZ7",
"0fR0K8N7KFb",
"W5VEHWCP58",
"F9b2bwWWMNu",
"iclr_2022_nzvbBD_3J-g",
"iclr_2022_nzvbBD_3J-g",
"... |
iclr_2022_e0jtGTfPihs | Signing the Supermask: Keep, Hide, Invert | The exponential growth in numbers of parameters of neural networks over the past years has been accompanied by an increase in performance across several fields. However, due to their sheer size, the networks became not only difficult to interpret but also problematic to train and use in real-world applications, since hardware requirements increased accordingly. Tackling both issues, we present a novel approach that either drops a neural network's initial weights or inverts their respective sign. Put simply, a network is trained by weight selection and inversion without changing the absolute values of the weights. Our contribution extends previous work on masking by additionally sign-inverting the initial weights and follows the findings of the Lottery Ticket Hypothesis. Through this extension and adaptations of initialization methods, we achieve a pruning rate of up to 99%, while still matching or exceeding the performance of various baseline and previous models. Our approach has two main advantages. First, and most notably, signed Supermask models drastically simplify a model's structure, while still performing well on given tasks. Second, by reducing the neural network to its very foundation, we gain insights into which weights matter for performance. The code is available on GitHub. | Accept (Poster) | This paper builds on previous work on supermasks. It proposes to replace binary masks with a signed supermask, i.e. a trainable, threshold-based mask that can take values from {-1,0,1}. This change (in combination with the use of ELU activation functions and an ELU-specific initialization strategy) leads to a significantly higher pruning rate while keeping competitive performance in comparison to baseline models.
Most reviewers agreed that the paper is well written and that the proposed approach and the experimental findings are interesting. The motivation to improve interpretability was commonly perceived as misleading. Another downside that was mentioned is the training time/efficiency. This, however, should not be taken too much into account since the work focusses on finding the smallest possible subnetwork that still performs well (without changing the weight values) and, in line with work on the lottery hypothesis, aims at understanding more about the structure of the "winning tickets", which is interesting in itself. The paper therefore should be accepted. | train | [
"bHu8oOH05_8",
"eAINb6vcAp4",
"KAMCyDvDWQ",
"bOYOmtl35SH",
"j09s5PpIkoe",
"UbFOeydXwQl",
"dDIZbV3IhGV",
"Alnf4A9IQ2s",
"sJZHycX0mJ",
"qjs8VSws3Jy"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" The authors noted that the reviewer changed the comment about the absence of the paper \"Deconstructing Lottery Tickets\" to the absence of the paper \"What's Hidden in a Randomly Weighted Neural Network?\" in his comment about missing work.\n\nTherefore, we would like to add to our original answer, that the pape... | [
-1,
6,
-1,
8,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
4,
4
] | [
"UbFOeydXwQl",
"iclr_2022_e0jtGTfPihs",
"dDIZbV3IhGV",
"iclr_2022_e0jtGTfPihs",
"qjs8VSws3Jy",
"sJZHycX0mJ",
"bOYOmtl35SH",
"eAINb6vcAp4",
"iclr_2022_e0jtGTfPihs",
"iclr_2022_e0jtGTfPihs"
] |
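A numeric sketch of the signed-supermask forward pass from the record above: frozen initial weights are multiplied elementwise by a mask in {-1, 0, 1} obtained by thresholding trainable scores, so training selects or sign-inverts weights without changing their magnitudes. The threshold and shapes are demo choices, and how the discrete mask is trained (e.g. with a straight-through estimator) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

W_init = rng.normal(size=(4, 3))        # frozen at initialisation, never trained
scores = rng.normal(size=(4, 3))        # the only trainable parameters
tau = 0.8                               # pruning threshold

def signed_mask(s, tau):
    """Map real-valued scores to {-1, 0, +1}: keep, hide, or invert a weight."""
    return np.sign(s) * (np.abs(s) > tau)

m = signed_mask(scores, tau)
W_eff = W_init * m                      # |kept weights| are unchanged
print("mask values:", sorted(set(m.ravel().astype(int))))
print(f"pruning rate: {(m == 0).mean():.0%}")

x = rng.normal(size=3)
print("layer output:", (W_eff @ x).round(2))
```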
iclr_2022_pN1JOdrSY9 | Contrastive Clustering to Mine Pseudo Parallel Data for Unsupervised Translation | Modern unsupervised machine translation systems mostly train their models by generating synthetic parallel training data from large unlabeled monolingual corpora of different languages through various means, such as iterative back-translation. However, there may exist a small amount of actual parallel data hidden in the sea of unlabeled data, which has not been exploited. We develop a new fine-tuning objective, called Language-Agnostic Constraint for SwAV loss, or LAgSwAV, which enables a pre-trained model to extract such pseudo-parallel data from the monolingual corpora in a fully unsupervised manner. We then propose an effective strategy to utilize the obtained synthetic data to augment unsupervised machine translation. Our method achieves the state of the art in the WMT'14 English-French, WMT'16 German-English and English-Romanian bilingual unsupervised translation tasks, with 40.2, 36.8, 37.0 BLEU, respectively. We also achieve substantial improvements in the FLoRes low-resource English-Nepali and English-Sinhala unsupervised tasks with 5.3 and 5.4 BLEU, respectively.
| Accept (Poster) | Overall, reviewers are positive. The majority praised the approach as novel and viewed the results as quite strong. Further, reviewers valued the provided ablations and analysis that helped motivate the proposed method. A few concerns were raised about overall clarity (though some reviewers praised the clarity of presentation), the use of hand-crafted filters, and certain experimental comparisons. The majority of these concerns have been adequately addressed in author response. | train | [
"fGRunjb2qm7",
"CO6RKGNqxe2",
"wu_SjClxc-8",
"sHBQYkRBJm1",
"qC7roy49BIu"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nWe have uploaded a new revision to our paper, in which we incorporate some of the comments from the reviewers. Specifically, we mainly:\n1. Added a complete model diagram of our LAgSwAV model in the Appendix (page 14) to facilitate better method understanding.\n2. Added statistical significance... | [
-1,
6,
8,
8,
5
] | [
-1,
4,
4,
4,
4
] | [
"iclr_2022_pN1JOdrSY9",
"iclr_2022_pN1JOdrSY9",
"iclr_2022_pN1JOdrSY9",
"iclr_2022_pN1JOdrSY9",
"iclr_2022_pN1JOdrSY9"
] |
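A schematic of the mining step that the record above builds on: embed monolingual sentences from both languages in a shared space (the role played by LAgSwAV fine-tuning), then keep mutual nearest neighbours above a similarity threshold as pseudo-parallel data. The random embeddings and the 0.3 threshold are placeholders and do not reproduce the LAgSwAV objective itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(E):
    return E / np.linalg.norm(E, axis=1, keepdims=True)

# Stand-in "language-agnostic" sentence embeddings for two monolingual corpora.
E_en = l2norm(rng.normal(size=(100, 32)))
E_fr = l2norm(rng.normal(size=(120, 32)))

sim = E_en @ E_fr.T                 # cosine similarities
nn_fr = sim.argmax(axis=1)          # best French match per English sentence
nn_en = sim.argmax(axis=0)          # best English match per French sentence

pairs = [(i, int(j)) for i, j in enumerate(nn_fr)
         if nn_en[j] == i and sim[i, j] > 0.3]   # mutual NN + threshold
print(len(pairs), "pseudo-parallel pairs mined")
```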
iclr_2022_MTex8qKavoS | MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts | Understanding the performance of machine learning models across diverse data distributions is critically important for reliable applications. Motivated by this, there is a growing focus on curating benchmark datasets that capture distribution shifts. While valuable, the existing benchmarks are limited in that many of them only contain a small number of shifts and they lack systematic annotation about what is different across different shifts. We present MetaShift—a collection of 12,868 sets of natural images across 410 classes—to address this challenge. We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. The key construction idea is to cluster images using their metadata, which provides context for each image (e.g. “cats with cars” or “cats in bathroom”) that represent distinct data distributions. MetaShift has two important benefits: First, it contains orders of magnitude more natural data shifts than previously available. Second, it provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets. We demonstrate the utility of MetaShift in benchmarking several recent proposals for training models to be robust to data shifts. We find that the simple empirical risk minimization performs the best when shifts are moderate and no method had a systematic advantage for large shifts. We also show how MetaShift can help to visualize conflicts between data subsets during model training. | Accept (Poster) | This work studies the impact of distribution shift via a collection of datasets-MetaShift. Reviewers all agreed that this work is simple, effective, and well-motivated, and has key implications, and will be quite useful to the community. There were some concerns about the lack of analysis of MetaShift, and the binary classification setting, which were addressed by the authors’ responses. Thus, I recommend an acceptance. | train | [
"afBFyLK5tvl",
"z5Z--cf4sy8",
"hI9gR3PKRQN",
"xc-5rMKx4N2",
"_sEu6AITpD5",
"IOPlzAxueCg",
"mxcKWoH6LNQ",
"8lu_9ajQYiG",
"RpA2QqPBi81"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes a collection called MetaShift to study the impact of dataset distribution. The major advantage of MetaShift is that it provides annotation/information to measure the amount of distribution shift between any two of its data sets. In the experiment, this work constructs two applications, 1) evalua... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2022_MTex8qKavoS",
"hI9gR3PKRQN",
"afBFyLK5tvl",
"iclr_2022_MTex8qKavoS",
"RpA2QqPBi81",
"afBFyLK5tvl",
"8lu_9ajQYiG",
"iclr_2022_MTex8qKavoS",
"iclr_2022_MTex8qKavoS"
] |
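To make the MetaShift construction concrete, the sketch below clusters a toy image list into "class(context)" subsets from metadata tags, as the abstract describes, and computes a simple tag-overlap proxy for the shift between two subsets. The distance actually shipped with the benchmark is embedding-based; the Jaccard proxy and all data here are illustrative assumptions.

```python
from collections import defaultdict

# Toy stand-in for Visual Genome metadata: (class label, context tags).
images = [
    ("cat", {"sofa", "indoor"}), ("cat", {"car", "street"}),
    ("cat", {"bathroom", "indoor"}), ("dog", {"car", "street"}),
    ("dog", {"grass", "park"}), ("dog", {"sofa", "indoor"}),
]

def build_subsets(images):
    """Cluster images into 'class(context)' subsets, one per context tag."""
    subsets = defaultdict(set)
    for idx, (cls, tags) in enumerate(images):
        for tag in tags:
            subsets[f"{cls}({tag})"].add(idx)
    return subsets

def shift_distance(subsets, images, a, b):
    """Crude shift proxy: 1 - Jaccard similarity of the context tags seen
    in two subsets. (MetaShift's released distance uses a spectral
    embedding of the tag co-occurrence graph instead.)"""
    tags_of = lambda name: set().union(*(images[i][1] for i in subsets[name]))
    ta, tb = tags_of(a), tags_of(b)
    return 1.0 - len(ta & tb) / len(ta | tb)

subs = build_subsets(images)
print(sorted(subs))                                   # cat(bathroom), cat(car), ...
print(shift_distance(subs, images, "cat(indoor)", "dog(indoor)"))  # ~0.33
```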
iclr_2022_JYtwGwIL7ye | The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models | Reward hacking---where RL agents exploit gaps in misspecified proxy rewards---has been widely observed, but not yet systematically studied. To understand reward hacking, we construct four RL environments with different misspecified rewards. We investigate reward hacking as a function of agent capabilities: model capacity, action space resolution, and observation space noise. Typically, more capable agents are able to better exploit reward misspecifications, causing them to attain higher proxy reward and lower true reward. Moreover, we find instances of \emph{phase transitions}: capability thresholds at which the agent's behavior qualitatively shifts, leading to a sharp decrease in the true reward. Such phase transitions pose challenges to monitoring the safety of ML systems. To encourage further research on reward misspecification, we propose an anomaly detection task for aberrant policies and offer several baseline detectors. | Accept (Poster) | I thank the authors for their submission and active participation in the discussions. All reviewers are unanimously leaning towards acceptance of this paper. Reviewers in particular liked that the paper is presenting an interesting and systematic study of reward hacking [GVMn] that is useful to the research community [bfGN] and targets an important problem [uYeb] in a rigorous way [16uL]. I thus recommend accepting the paper, but I strongly encourage the authors to further improve their paper based on the reviewer feedback, in particular in regards to improving positioning with respect to related work and a better formalization of their work. | train | [
"h2n_78mzq4",
"ZRzynW2eahu",
"woElwu37xM1",
"6fedX9uAiFe",
"gkLms4x3PGY",
"eLF8f5B1rua",
"aOPbHlmG1ny",
"AQmF4QGf7D",
"wBLjeeO7WwC",
"ATfWmZi7gws",
"t0amLms5w2",
"uyK4aMfrZrD",
"78UDwG-oUUe",
"aWnD2QyeUM",
"GfCsJon7cqO",
"cHPz2M6PmJ3",
"qy9hVnfujRw",
"ZMiglg6ERL3",
"TW96q3Z9iCP",... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
" Thank you for addressing my comments. \n\nSome further minor comments to improve the final paper:\n\n> The value of the benchmark is in allowing other researchers to make progress on reward hacking, particularly on detecting such phase transitions.\n\nWhat I meant is that you could be more specific about how the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"ATfWmZi7gws",
"AQmF4QGf7D",
"qy9hVnfujRw",
"eLF8f5B1rua",
"iclr_2022_JYtwGwIL7ye",
"wBLjeeO7WwC",
"uyK4aMfrZrD",
"iclr_2022_JYtwGwIL7ye",
"t0amLms5w2",
"LveRouNcuO2",
"PTO4B7EzBrN",
"GfCsJon7cqO",
"iclr_2022_JYtwGwIL7ye",
"ZMiglg6ERL3",
"iclr_2022_JYtwGwIL7ye",
"78UDwG-oUUe",
"ZNXNR... |
iclr_2022_YJ1WzgMVsMt | Reinforcement Learning with Sparse Rewards using Guidance from Offline Demonstration | A major challenge in real-world reinforcement learning (RL) is the sparsity of reward feedback. Often, what is available is an intuitive but sparse reward function that only indicates whether the task is completed partially or fully. However, the lack of carefully designed, fine-grained feedback implies that most existing RL algorithms fail to learn an acceptable policy in a reasonable time frame. This is because of the large number of exploration actions that the policy has to perform before it gets any useful feedback that it can learn from. In this work, we address this challenging problem by developing an algorithm that exploits the offline demonstration data generated by a sub-optimal behavior policy for faster and more efficient online RL in such sparse reward settings. The proposed algorithm, which we call the Learning Online with Guidance Offline (LOGO) algorithm, merges a policy improvement step with an additional policy guidance step by using the offline demonstration data. The key idea is that by obtaining guidance from - not imitating - the offline data, LOGO orients its policy in the manner of the sub-optimal policy, while yet being able to learn beyond and approach optimality. We provide a theoretical analysis of our algorithm, and provide a lower bound on the performance improvement in each learning episode. We also extend our algorithm to the even more challenging incomplete observation setting, where the demonstration data contains only a censored version of the true state observation. We demonstrate the superior performance of our algorithm over state-of-the-art approaches on a number of benchmark environments with sparse rewards and censored state. Further, we demonstrate the value of our approach via implementing LOGO on a mobile robot for trajectory tracking and obstacle avoidance, where it shows excellent performance. | Accept (Spotlight) | The authors introduce a method for improving reinforcement learning in sparse reward settings. In particular, they propose to take advantage of a suboptimal behavior policy as a guidance policy that is incorporated in a TRPO-like update. The reviewers agree that this is a novel and interesting idea and given the authors' rebuttal with additional experiments, clarifications and discussions, they agreed to accept the paper. However, they also point out several flaws (e.g. evaluation on a more challenging sparse-reward task such as Adroit) that I encourage the authors to address in the final version of the paper. | train | [
"Tq7ZiCgbQ8",
"oJFoRQN67Li",
"01I5sPcc8W",
"W4EyWeRLQuj",
"RWEIi8MYnvE",
"UhMwlVfceIC",
"JDSBgOg3ExZ",
"86ZMvGumlCe",
"FawLGUy7ld4",
"XXfWIn-XnEj",
"FFnrXdxUkEk",
"MzUfo2I3Djx",
"sIk1o6VxJhU",
"lprVe-hNky",
"tXtCs64LyO3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents the learning online with guidance offline (LOGO) algorithm that leverages demonstration data to constrain policy search for reinforcement learning with sparse reward such that the initial exploration phase is guided. Experiments in locomotion tasks in simulated domains and a navigation task on ... | [
8,
6,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
3,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_YJ1WzgMVsMt",
"iclr_2022_YJ1WzgMVsMt",
"FawLGUy7ld4",
"FFnrXdxUkEk",
"iclr_2022_YJ1WzgMVsMt",
"tXtCs64LyO3",
"tXtCs64LyO3",
"Tq7ZiCgbQ8",
"oJFoRQN67Li",
"RWEIi8MYnvE",
"RWEIi8MYnvE",
"lprVe-hNky",
"iclr_2022_YJ1WzgMVsMt",
"iclr_2022_YJ1WzgMVsMt",
"iclr_2022_YJ1WzgMVsMt"
] |
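The LOGO record describes merging a policy-improvement step with a guidance step toward a sub-optimal demonstrator. A hedged PyTorch sketch of that idea follows: a policy-gradient surrogate plus a KL pull toward a behavior policy fit to the demonstrations. LOGO itself performs the two updates as separate TRPO-style trust-region steps with a decaying guidance radius; the single penalized loss and all names here are simplifications.

```python
import torch
import torch.nn.functional as F

def guided_policy_loss(logits, behavior_logits, actions, advantages, beta=0.1):
    """Sparse-reward policy loss with offline guidance (LOGO-style sketch).

    logits:          (batch, n_actions) from the learned policy.
    behavior_logits: (batch, n_actions) from a policy fit to demonstrations.
    The first term is a vanilla policy-gradient surrogate; the second pulls
    the policy toward the (sub-optimal) demonstrator, which is what provides
    useful learning signal before any sparse reward is ever observed.
    """
    logp = F.log_softmax(logits, dim=-1)
    pg = -(advantages * logp.gather(1, actions[:, None]).squeeze(1)).mean()
    guide = F.kl_div(logp, F.log_softmax(behavior_logits, dim=-1),
                     log_target=True, reduction="batchmean")
    return pg + beta * guide

logits = torch.randn(4, 3, requires_grad=True)
loss = guided_policy_loss(logits, torch.randn(4, 3),
                          torch.tensor([0, 2, 1, 0]), torch.randn(4))
loss.backward()
print(float(loss))
```

In LOGO the weight of the guidance term effectively decays as the learned policy surpasses the demonstrator, which is how the method can "learn beyond" the sub-optimal data.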
iclr_2022_IwJPj2MBcIa | Compositional Attention: Disentangling Search and Retrieval | Multi-head, key-value attention is the backbone of transformer-like model architectures which have proven to be widely successful in recent years. This attention mechanism uses multiple parallel key-value attention blocks (called heads), each performing two fundamental computations: (1) search - selection of a relevant entity from a set via query-key interaction, and (2) retrieval - extraction of relevant features from the selected entity via a value matrix. Standard attention heads learn a rigid mapping between search and retrieval. In this work, we first highlight how this static nature of the pairing can potentially: (a) lead to learning of redundant parameters in certain tasks, and (b) hinder generalization. To alleviate this problem, we propose a novel attention mechanism, called Compositional Attention, that replaces the standard head structure. The proposed mechanism disentangles search and retrieval and composes them in a dynamic, flexible and context-dependent manner. Through a series of numerical experiments, we show that it outperforms standard multi-head attention on a variety of tasks, including some out-of-distribution settings. Through our qualitative analysis, we demonstrate that Compositional Attention leads to dynamic specialization based on the type of retrieval needed. Our proposed mechanism generalizes multi-head attention, allows independent scaling of search and retrieval and is easy to implement in a variety of established network architectures. | Accept (Spotlight) | This paper identifies a limitation with current attention in transformers, where the scoring with query-key pairs is strongly tied to retrieving the value, and proposes a more flexible configuration that subsumes the previous setup. The authors show this leads to improvements in various settings.
Overall, all reviewers seem to agree there are interesting insights and results in this paper and it merits publication. The discussion also helped stress important points regarding weight sharing and more. One concern is that the model was not evaluated on standard NLP/vision datasets (I assume alluding to GLUE/SuperGlue/SQuAD, etc.), and the authors seem to hint that pre-training this is an issue for them computationally. This leaves open whether this indeed can and should replace the standard attention mechanism across the board, but is still very worthy of publication. | val | [
"5D1m-8SsqmE",
"dJgb9HbFYoB",
"lqX4eWu7Jru",
"HX51q-GLArb",
"Q7fIVhsjdp0",
"EM7gk7zULVe",
"lVIWtQu83fI",
"3ixYLYF8bUM",
"QGTF9RjcoEB",
"x7N9Xn2VbZ",
"qh5Lhlfcso9",
"LpO1Qp-mWua",
"3XO071xXPAY",
"pumMBhRxoU4",
"ft6vhJKyAuj",
"4pePdRYePnw"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate that the authors answered my questions. I stick to my rating since it reflects my best judgment of the paper.",
" Dear reviewers,\nOverall reviews seem quite positive - could you make sure you read the author response and other reviews to see if you would like to amend your review in any way?",
"... | [
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"3XO071xXPAY",
"iclr_2022_IwJPj2MBcIa",
"3ixYLYF8bUM",
"iclr_2022_IwJPj2MBcIa",
"LpO1Qp-mWua",
"3XO071xXPAY",
"pumMBhRxoU4",
"x7N9Xn2VbZ",
"iclr_2022_IwJPj2MBcIa",
"HX51q-GLArb",
"QGTF9RjcoEB",
"qh5Lhlfcso9",
"4pePdRYePnw",
"ft6vhJKyAuj",
"iclr_2022_IwJPj2MBcIa",
"iclr_2022_IwJPj2MBcIa... |
iclr_2022_1W0z96MFEoH | Benchmarking the Spectrum of Agent Capabilities | Evaluating the general abilities of intelligent agents requires complex simulation environments. Existing benchmarks typically evaluate only one narrow task per environment, requiring researchers to perform expensive training runs on many different environments. We introduce Crafter, an open world survival game with visual inputs that evaluates a wide range of general abilities within a single environment. Agents either learn from the provided reward signal or through intrinsic objectives and are evaluated by semantically meaningful achievements that can be unlocked during each episode, such as discovering resources and crafting tools. Consistently unlocking all achievements requires strong generalization, deep exploration, and long-term reasoning. We experimentally verify that Crafter is of appropriate difficulty to drive future research and provide baselines scores of reward agents and unsupervised agents. Furthermore, we observe sophisticated behaviors emerging from maximizing the reward signal, such as building tunnel systems, bridges, houses, and plantations. We hope that Crafter will accelerate research progress by quickly evaluating a wide spectrum of abilities. | Accept (Poster) | This paper introduces a new RL benchmark that is a simplified 2D version of Minecraft -- it is designed to support complex behaviors but reduce the training complexity. It is very well written and clear, positioned well with respect to other benchmarks, and is likely to improve the speed of development/testing of some RL algorithms. It is likely to appeal to a subset of the community and drive research in some cases, while others may prefer to stick with full 3D Minecraft. As such, there are some mixed reviews on the paper, with open questions as to whether it would be welcomed by people who work on Minecraft-style domains, whether behaviors learned in the simplified 2D environment would generalize to other settings/domains, and the potential for agents to game the environment. The authors are encouraged to take these aspects and perspectives into consideration when revising the paper. | train | [
"HDQpJ0BUhX1",
"SHONBqbINSl",
"vysGiHLri1G",
"SpMKfcYooO3",
"5ddmvRsjPgu",
"AvE6BaLPdLp",
"lr25OLkgG9",
"GzGfQ1zyuU6",
"-a_fRoDmD_u",
"TY96lM8Wc5g",
"bUtcFfW1d0l",
"JpMumcL3EC",
"Eh4kMJ1HTtb",
"FE_yuTuF3hi",
"PxfMuFoEkEN",
"CpzFJLU5Ttc",
"_CQYlo6V8rV",
"nMs5GoL4ie",
"aGurHfosiC",... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
"This paper introduces a new environment for development of agent capabilities, called Crafter. The environment is procedurally generated and consists of a 2-D world inhabited by various resources, terrain types, and objects. The agent is rewarded for crafting items and accomplishing achievements from a set of 22 p... | [
5,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5
] | [
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_1W0z96MFEoH",
"SpMKfcYooO3",
"5ddmvRsjPgu",
"TY96lM8Wc5g",
"-a_fRoDmD_u",
"iclr_2022_1W0z96MFEoH",
"rqx3sx629b-",
"PxfMuFoEkEN",
"Eh4kMJ1HTtb",
"nMs5GoL4ie",
"PxfMuFoEkEN",
"Eh4kMJ1HTtb",
"_CQYlo6V8rV",
"PxfMuFoEkEN",
"q1QbUwA3JYv",
"hWPx-oUnaBL",
"aGurHfosiC",
"rqx3sx62... |
iclr_2022_HBsJNesj2S | Neural Relational Inference with Node-Specific Information | Inferring interactions among entities is an important problem in studying dynamical systems, which greatly impacts the performance of downstream tasks, such as prediction. In this paper, we tackle the relational inference problem in a setting where each entity can potentially have a set of individualized information that other entities cannot have access to. Specifically, we represent the system using a graph in which the individualized information become node-specific information (NSI). We build our model in the framework of Neural Relation Inference (NRI), where the interaction among entities are uncovered using variational inference. We adopt NRI model to incorporate the individualized information by introducing private nodes in the graph that represent NSI. Such representation enables us to uncover more accurate relations among the agents and therefore leads to better performance on the downstream tasks. Our experiment results over real-world datasets validate the merit of our proposed algorithm. | Accept (Poster) | This paper extends the Neural Relational Inference framework for probabilistic inference of interaction relations between entities, to a scenario where entities may have private features, which requires modifications of the standard graph encoders and decoders in NRI.
Reviewers appreciated both the model and the overall execution of the paper: the building blocks are clear, the evaluation does its job well. The main doubts are about the applicability of the setting, for which the authors don't provide too many examples. However, the construction is somewhat intuitive, and even in cases where private attributes aren't explicit, it may be valuable to disentangle the shareable attributes this way. We encourage the reviewers to discuss the applicability a bit further.
Typos: (not exhaustive, please doublecheck with a spell checker)
- multiple occurrences of Gumble instead of Gumbel
- bottom of pg 4, factorzied -> factorized | train | [
"uIRAB_Tk5Kr",
"YcPFNfkWbq",
"QNo0EW2Eadx",
"gFCx7BpIguY",
"_rcifcEpKKL",
"UhbMoXUj2lq",
"yhX4Kdzel-x",
"tPI8qOst7Fs"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We would really appreciate it if the reviewer could let us know whether their concerns regarding the paper have been resolved or not. We would be happy to continue this discussion if there is any further questions.\n\nThank you again for your valuable time and feedbacks.\n\n",
" Thank you very much again for yo... | [
-1,
-1,
8,
-1,
-1,
-1,
5,
8
] | [
-1,
-1,
2,
-1,
-1,
-1,
2,
3
] | [
"yhX4Kdzel-x",
"QNo0EW2Eadx",
"iclr_2022_HBsJNesj2S",
"QNo0EW2Eadx",
"yhX4Kdzel-x",
"tPI8qOst7Fs",
"iclr_2022_HBsJNesj2S",
"iclr_2022_HBsJNesj2S"
] |
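The NRI framework this record extends infers discrete edge types with a variational encoder and keeps them differentiable via the Gumbel-softmax relaxation (note the meta-review's spelling remark). A minimal sketch of that sampling step follows; the private NSI nodes the paper introduces would simply enlarge the graph over which these edge logits are produced.

```python
import torch
import torch.nn.functional as F

def sample_edge_types(edge_logits, tau=0.5, hard=False):
    """Differentiable edge-type sampling used in NRI-style encoders.

    edge_logits: (n_edges, n_types) posterior logits q(z_ij | x) over
    interaction types for every directed node pair. Gumbel-softmax keeps
    the discrete relations reparameterizable, so encoder, sampled graph,
    and decoder train end to end.
    """
    return F.gumbel_softmax(edge_logits, tau=tau, hard=hard)

n_nodes, n_types = 4, 2
n_edges = n_nodes * (n_nodes - 1)         # all directed pairs
logits = torch.randn(n_edges, n_types, requires_grad=True)
z = sample_edge_types(logits, hard=True)  # one-hot, straight-through grads
print(z.shape, z.sum(dim=-1))             # (12, 2), rows sum to 1
z.sum().backward()                        # gradients flow to the logits
```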
iclr_2022_9RUHPlladgh | Visual Representation Learning Does Not Generalize Strongly Within the Same Domain | An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.
In this paper, we test whether 17 unsupervised, weakly supervised, and fully supervised representation learning approaches correctly infer the generative factors of variation in simple datasets (dSprites, Shapes3D, MPI3D) from controlled environments, and on our contributed CelebGlow dataset.
In contrast to prior robustness work that introduces novel factors of variation during test time, such as blur or other (un)structured noise, we here recompose, interpolate, or extrapolate only existing factors of variation from the training data set (e.g., small and medium-sized objects during training and large objects during testing). Models that learn the correct mechanism should be able to generalize to this benchmark.
In total, we train and test 2000+ models and observe that all of them struggle to learn the underlying mechanism regardless of supervision signal and architectural bias. Moreover, the generalization capabilities of all tested models drop significantly as we move from artificial datasets towards more realistic real-world datasets.
Despite their inability to identify the correct mechanism, the models are quite modular as their ability to infer other in-distribution factors remains fairly stable, providing only a single factor is out-of-distribution. These results point to an important yet understudied problem of learning mechanistic models of observations that can facilitate generalization. | Accept (Poster) | This paper presents a through study of generalization in visual representation learning. It compares in distribution generalization to out of distribution generalization using a comprehensive benchmark. The paper received very positive reviews from all reviewers. Reviewers agreed that the paper has several strengths: It is very well written, the presented benchmark is very useful and the analysis is thorough. One concern that was brought up by the reviewers was that a majority of the presented findings are expected and in a sense, known to the community. The authors have addressed this concern by pointing out that their findings are more fine grained than past works and that their proposed benchmark is a stepping stone towards measuring general robustness. I must note that in spite of this concern, all reviewers have maintained their strong acceptance scores. I agree with the reviewers. This paper makes a strong contribution to this important problem via its benchmark and analysis, which future works can build off of, and hence I recommend acceptance. | val | [
"FK5ctG11vbk",
"ykt8u2x5Ok",
"7LAL1ukZayp",
"KFi1XwQRe9C",
"imo4d-E44hM",
"WmahQzWhywG",
"JPTatSnMXCb",
"BQNQvBdqsio",
"tJ2-SovEgyU",
"E0nK-qHgYq",
"wHDXIFrLwvE",
"p-vs77z-rA"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the thorough responses to my comments and those of the other reviewers.\n\nI'm keeping my rating at 8 -- I think this is a solid paper that should be accepted.\n\nOn the \"unsurprising results\": I agree that the current message is more fine-grained than much of the prior work on generalization, but I ... | [
-1,
8,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
5,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"tJ2-SovEgyU",
"iclr_2022_9RUHPlladgh",
"JPTatSnMXCb",
"iclr_2022_9RUHPlladgh",
"iclr_2022_9RUHPlladgh",
"E0nK-qHgYq",
"ykt8u2x5Ok",
"p-vs77z-rA",
"wHDXIFrLwvE",
"imo4d-E44hM",
"iclr_2022_9RUHPlladgh",
"iclr_2022_9RUHPlladgh"
] |
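The benchmark in this record recomposes, interpolates, or extrapolates existing factors of variation rather than adding noise at test time. The sketch below shows how such splits can be carved from a full factor grid like dSprites; the threshold values and the two split types demonstrated are illustrative.

```python
import numpy as np

def extrapolation_split(factors, axis, threshold):
    """Hold out all combinations whose `axis` factor exceeds `threshold`.

    factors: (n_samples, n_factors) integer factor values, one row per
    image of a full factor grid (as in dSprites/Shapes3D/MPI3D). A model
    trained on `train` only ever saw small/medium values of the chosen
    factor (e.g. scale) and must extrapolate at test time.
    """
    test = factors[:, axis] > threshold
    return ~test, test

def composition_split(factors, axis_a, val_a, axis_b, val_b):
    """Hold out one unseen *combination* of two individually seen values."""
    test = (factors[:, axis_a] == val_a) & (factors[:, axis_b] == val_b)
    return ~test, test

# Tiny 2-factor grid: factor 0 = shape (3 values), factor 1 = scale (6 values)
grid = np.array([(s, sc) for s in range(3) for sc in range(6)])
tr, te = extrapolation_split(grid, axis=1, threshold=3)
print(te.sum(), "of", len(grid), "combinations held out")   # 6 of 18
tr2, te2 = composition_split(grid, 0, 2, 1, 5)
print(te2.sum(), "combination held out")                    # 1
```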
iclr_2022_PQTW3iG4sC- | On feature learning in shallow and multi-layer neural networks with global convergence guarantees | We study the gradient flow optimization of over-parameterized neural networks (NNs) in a setup that allows feature learning while admitting non-asymptotic global convergence guarantees. First, we prove that for wide shallow NNs under the mean-field (MF) scaling and with a general class of activation functions, when the input dimension is at least the size of the training set, the training loss converges to zero at a linear rate under gradient flow. Building upon this analysis, we study a model of wide multi-layer NNs with random and untrained weights in earlier layers, and also prove a linear-rate convergence of the training loss to zero, regardless of the input dimension. We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization performance than its NTK counterpart. | Accept (Poster) | This paper studies optimization of over-parametrized neural networks in the mean-field scaling. Specifically, when the input dimension in larger than the number of training samples, the paper shows that the training loss converges to 0 at a linear rate under gradient flow. It's possible to extend the result by random feature layers to handle the case when input dimension is low. Empirically the dynamics in this paper seems to achieve better generalization performance than the NTK counterpart, but no theoretical result is known. Overall this is a solid contribution to the hard problem of analyzing the training dynamics of mean-field regime. There was some debate between reviewers on what is the definition of "feature learning" and I recommend the authors to give an explicit definition of what they mean (and potentially use a different term). | train | [
"VctadgOpby",
"XMT7SzWKq3T",
"i2ZlzShbI_N",
"IpN7gUMxwyM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors studied the optimization problem of shallow and deep neural network. They showed that under the non-degeneracy condition on certain Gram matrix, gradient descent (GD) can converge to 0 training loss efficiently. One important difference with the existing NTK and mean-field literatures is... | [
6,
8,
8,
3
] | [
3,
4,
2,
4
] | [
"iclr_2022_PQTW3iG4sC-",
"iclr_2022_PQTW3iG4sC-",
"iclr_2022_PQTW3iG4sC-",
"iclr_2022_PQTW3iG4sC-"
] |
iclr_2022_LzQQ89U1qm_ | Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy | Unsupervised detection of anomaly points in time series is a challenging problem, which requires the model to derive a distinguishable criterion. Previous methods tackle the problem mainly through learning pointwise representation or pairwise association, however, neither is sufficient to reason about the intricate dynamics. Recently, Transformers have shown great power in unified modeling of pointwise representation and pairwise association, and we find that the self-attention weight distribution of each time point can embody rich association with the whole series. Our key observation is that due to the rarity of anomalies, it is extremely difficult to build nontrivial associations from abnormal points to the whole series, thereby, the anomalies' associations shall mainly concentrate on their adjacent time points. This adjacent-concentration bias implies an association-based criterion inherently distinguishable between normal and abnormal points, which we highlight through the Association Discrepancy. Technically, we propose the Anomaly Transformer with a new Anomaly-Attention mechanism to compute the association discrepancy. A minimax strategy is devised to amplify the normal-abnormal distinguishability of the association discrepancy. The Anomaly Transformer achieves state-of-the-art results on six unsupervised time series anomaly detection benchmarks of three applications: service monitoring, space & earth exploration, and water treatment. | Accept (Spotlight) | The paper proposed a novel approach that leverages the discrepancies between the (global) series association and the (local) prior association for detecting anomalies in time series. The authors provided detailed empirical support to motivate the above detection criterion, and introduced a two-branch attention architecture for modeling the discrepancies and establishing an anomaly score.
All reviewers acknowledge the technical novelty of this work (including the key insight of modeling anomalousness with Transformer’s self-attention and concrete training mechanism via a minimax optimization process) as well as the comprehensiveness of the empirical study.
Meanwhile, there were some concerns about the positioning of the work, in particular the clarity of its connection to related work; some reviews also questioned the clarity of the presentation (e.g. missing details in the experimental results) and the clarity of the exposition of the training process. The authors provided effective feedback during the discussion phase, which helped clarify many of the above concerns. All reviewers agree that the revision makes a solid paper and unanimously recommend acceptance of this work.
The authors are strongly encouraged to take into account the feedback from the discussion phase to further improve the clarity concerning the technical details as well as the reproducibility of the results. | train | [
"XQv0ho40WfA",
"Rkrbp0PZNd",
"CUkXPP0F2Vf",
"M3oAe7knyq",
"99cjPkoCSXm",
"491TUob7w9",
"f1gFpWzYwxU",
"lAM5tV8tNWw",
"RHRyEfUqIbX",
"oF79q35TPvi",
"mJB_Vz47GFz",
"UT7dQRbdPb_",
"YTq_CLdRDM",
"acIbe19kx3g",
"BJ25e51v4k",
"eLEqjumJlXk",
"GSBcRQIZlNU",
"4gMLgBs7Gvh",
"3LdzKERJq2c",
... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" We would like to thank Reviewer 88ie for providing a detailed valuable pre-rebuttal review. Your detailed suggestions help us a lot in paper revision. \n\nMany thanks to Reviewer 88ie for further suggestions in the grammar of the revised paper. Thanks again for your dedication! In the final version, we guarantee ... | [
-1,
-1,
8,
-1,
-1,
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
5,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"Rkrbp0PZNd",
"M3oAe7knyq",
"iclr_2022_LzQQ89U1qm_",
"3LdzKERJq2c",
"GSBcRQIZlNU",
"lAM5tV8tNWw",
"iclr_2022_LzQQ89U1qm_",
"EFp3n2sHrxx",
"iclr_2022_LzQQ89U1qm_",
"mJB_Vz47GFz",
"YTq_CLdRDM",
"acIbe19kx3g",
"UT7dQRbdPb_",
"4gMLgBs7Gvh",
"iclr_2022_LzQQ89U1qm_",
"RHRyEfUqIbX",
"AuZBgA... |
iclr_2022_P7FLfMLTSEX | The Spectral Bias of Polynomial Neural Networks | Polynomial neural networks (PNNs) have been recently shown to be particularly effective at image generation and face recognition, where high-frequency information is critical. Previous studies have revealed that neural networks demonstrate a $\text{\it{spectral bias}}$ towards low-frequency functions, which yields faster learning of low-frequency components during training. Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs. We find that the $\Pi$-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of the higher frequencies.
We verify the theoretical bias through extensive experiments. We expect our analysis to provide novel insights into designing architectures and learning frameworks by incorporating multiplicative interactions via polynomials.
| Accept (Poster) | *Summary:* Investigate the NTK of PNNs and enhanced bias towards higher frequencies.
*Strengths:*
- Spectral bias is a contemporary topic.
- Some reviewers found the paper well written.
*Weaknesses:*
- Restricted setting (two-layers / no bias / infinite width), particularly in view of the objective to provide architecture design guidance. Restricted experiments (Introduction indicates learning spherical harmonics).
- Sparse discussion of related works, particularly on spectral bias.
*Discussion:*
During the discussion period, the authors made efforts to address some of the concerns of the reviewers. A late new experiment prompted KnZp to raise their score. TQnp found the paper good but also expected a more profound theorem addressing broader PNN families given the existing work. They found that the experiments and discussion of prior work could be improved. The authors added discussion of prior works and provided an explanation for their choices, but left extensions and further analysis for future work. nFMY expressed concerns about the applicability of the analysis and the evidence in the experiments. The author responses address this in part. cEcf points out that the main theoretical contributions have straightforward proofs based on previous works and asks about extensions. The authors agree that the paper does not introduce novel techniques and that extending the analysis is an important direction, but leave this for future work. FuRi finds the paper provides an interesting viewpoint and raised their score from 3 to 5 following the discussion (improving presentation, rigor, clarity), but considers that the paper has several drawbacks (oversimplification, lack of technical novelty) that need to be addressed.
*Conclusion:*
One reviewer found this work marginally below the acceptance threshold, three marginally above, and one good. I find that the paper considers an interesting problem and makes some interesting observations and some valuable advances. I appreciate the authors’ efforts during the reviewing period. Hence I am recommending accept. At the same time, I find that, clarity, technical and experimental contributions still can be improved and encourage the authors to carefully consider the reviewers comments when preparing the final version of the paper. | test | [
"0SKXrP0zGO",
"YM2-3LFs4LY",
"Ot7Zjr0BI_",
"w3TD3lDDO0E",
"qCmCOSn5Lvl",
"3ITUMd_434C",
"qoMHOP1Tuk5",
"cFinScgZX2",
"6JMnLOhMK5r",
"GAC5Dxr6V5C",
"R4pEFmkcdjM",
"t4dQ5LTpOz5",
"EWOwg3055aw",
"kqV6Ry5RN0a",
"qDMZ-BwXyo1",
"hNb6HBhwV_z",
"91UZFS-ye7",
"YU1dEe3b64y",
"HtZ8C7wYOaZ",... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" We are thankful for the timely response and for appreciating our effort.\n\nWe agree that we follow an existing framework and toolset, but we see using the established models for a new problem as a strength. That is, the observation we make on the effect of adding a multiplicative layer to neural network has not ... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"Ot7Zjr0BI_",
"iclr_2022_P7FLfMLTSEX",
"w3TD3lDDO0E",
"91UZFS-ye7",
"-62fsiILy2c",
"pq-KPaj0J3y",
"jOh1TojVVyT",
"R4pEFmkcdjM",
"iclr_2022_P7FLfMLTSEX",
"hNb6HBhwV_z",
"kqV6Ry5RN0a",
"qDMZ-BwXyo1",
"iclr_2022_P7FLfMLTSEX",
"EWOwg3055aw",
"YU1dEe3b64y",
"HtZ8C7wYOaZ",
"YM2-3LFs4LY",
... |
iclr_2022_Vzh1BFUCiIX | ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning | Despite the recent success of multi-task learning and transfer learning for natural language processing (NLP), few works have systematically studied the effect of scaling up the number of tasks during pre-training. Towards this goal, this paper introduces ExMix (Extreme Mixture): a massive collection of 107 supervised NLP tasks across diverse domains and task-families. Using ExMix, we study the effect of multi-task pre-training at the largest scale to date, and analyze co-training transfer amongst common families of tasks. Through this analysis, we show that manually curating an ideal set of tasks for multi-task pre-training is not straightforward, and that multi-task scaling can vastly improve models on its own. Finally, we propose ExT5: a model pre-trained using a multi-task objective of self-supervised span denoising and supervised ExMix. Via extensive experiments, we show that ExT5 outperforms strong T5 baselines on SuperGLUE, GEM, Rainbow, Closed-Book QA tasks, and several tasks outside of ExMix. ExT5 also significantly improves sample efficiency while pre-training. | Accept (Poster) | This paper explores large scale supervised multi-task training across 107 NLP tasks combined with self-supervised C4 masked span infilling, using the T5 sequence-to-sequence model. The results improve over prior strong T5 baselines on several NLP benchmarks such as SuperGLUE, GEM, and Rainbow.
The paper's main strengths are the scale and large number of tasks, the release of the trained models and data, as well as the clarity and presentation. Reviewers had concerns with the novelty, limitations in the evaluation (to just T5, and to just SuperGLUE in portions of the paper), and the potential impact of hyperparameters on the results. During the discussion period, the authors noted that it is not obvious a priori that their approach would work, and that their evaluations on other tasks made it unlikely to be overfitting to SuperGLUE. They also noted that running the additional hyperparameter experiments suggested during the reviews were computationally prohibitive.
Overall, despite the drawbacks and relative lack of novelty, the extensive experiments and released models provide significant value and will be of interest to the research community. | train | [
"8LctZkSN5tz",
"Lrtk0GjMd1A",
"eyYckJbKq1p",
"t5koISr6YEF",
"rLQrcpXsli",
"iaeO35dWX5r",
"jDJ2zxf9ECkv",
"LHQNLaL7kcNh",
"bPVGdVoJwX7",
"rOUrtwwKbg1x",
"TA1Ad-jUohq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper revisits the idea of multi-task learning for natural language processing (NLP) and scales it up to 107 supervised NLP tasks as EXMIX (Extreme Mixture) across diverse domains and task families. It extensively analyzes the co-training effect between the different families of NLP tasks and proposes a model... | [
6,
-1,
5,
8,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_Vzh1BFUCiIX",
"bPVGdVoJwX7",
"iclr_2022_Vzh1BFUCiIX",
"iclr_2022_Vzh1BFUCiIX",
"iclr_2022_Vzh1BFUCiIX",
"t5koISr6YEF",
"eyYckJbKq1p",
"TA1Ad-jUohq",
"8LctZkSN5tz",
"eyYckJbKq1p",
"iclr_2022_Vzh1BFUCiIX"
] |
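ExT5's recipe mixes supervised ExMix tasks with self-supervised C4 span denoising. The sketch below implements a generic examples-proportional task sampler with a per-task cap; the cap, the denoising weight, and the task sizes are illustrative settings, not the paper's.

```python
import random

def make_sampler(task_sizes, cap=50_000, denoise_weight=0.5, seed=0):
    """Examples-proportional task mixing for multi-task pre-training.

    task_sizes: {task_name: n_examples}. Each supervised task is weighted
    by min(n_examples, cap) so huge corpora don't drown out small ones,
    and a self-supervised span-denoising stream is mixed in with a fixed
    relative weight (ExT5 mixes C4 denoising with the ExMix tasks; the
    exact ratio here is an assumption).
    """
    rng = random.Random(seed)
    names = list(task_sizes) + ["span_denoising"]
    sup = [min(n, cap) for n in task_sizes.values()]
    weights = sup + [denoise_weight * sum(sup)]
    return lambda: rng.choices(names, weights=weights, k=1)[0]

sample = make_sampler({"nli": 400_000, "summarization": 20_000, "qa": 80_000})
draws = [sample() for _ in range(10_000)]
for name in sorted(set(draws)):
    print(name, round(draws.count(name) / len(draws), 3))
```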
iclr_2022_EDeVYpT42oS | Deconstructing the Inductive Biases of Hamiltonian Neural Networks | Physics-inspired neural networks (NNs), such as Hamiltonian or Lagrangian NNs, dramatically outperform other learned dynamics models by leveraging strong inductive biases. These models, however, are challenging to apply to many real world systems, such as those that don’t conserve energy or contain contacts, a common setting for robotics and reinforcement learning. In this paper, we examine the inductive biases that make physics-inspired models successful in practice. We show that, contrary to conventional wisdom, the improved generalization of HNNs is the result of modeling acceleration directly and avoiding artificial complexity from the coordinate system, rather than symplectic structure or energy conservation. We show that by relaxing the inductive biases of these models, we can match or exceed performance on energy-conserving systems while dramatically improving performance on practical, non-conservative systems. We extend this approach to constructing transition models for common Mujoco environments, showing that our model can appropriately balance inductive biases with the flexibility required for model-based control. | Accept (Spotlight) | This paper examined physics-inspired inductive biases in neural networks, in particular Hamiltonian and Lagrangian dynamics. The work separated the benefits arising from incorporating energy conservation, the symplectic bias, the coordinate systems, and second-order dynamics. Through a set of experiments, the paper showed the most important factor for improved performance in the test domains was the second-order dynamics, and not the more common explanation of energy conservation or the other factors. The increased generality of this approach was demonstrated with better predictions on Mujoco tasks that did not conserve energy.
All reviewers liked the insights provided by the paper. They agreed that the paper clearly laid out several hypotheses and systematically tested them. The reviewers found the experiments thoughtful and the results compelling. The reviewers also pointed out several aspects of the document that could be improved, including additional formalism clarifications (reviewer nLbj), baseline algorithms (reviewer wu5x), and domains (reviewers 7KKB,SW9u). The reviewers found the author's response satisfactory but were disappointed that a revised paper was not ready to read. The reviewers want the final paper to include the modifications that were promised in the author response.
All four reviewers indicated to accept this paper which contributes novel insights that simplify and generalize physics-inspired neural networks. The paper is therefore accepted. | train | [
"pjHWOHKzR-1",
"O1lQ8txjG_L",
"eHktHiSq_kT",
"pozfty0fBb",
"NH4xJ4EzRPm",
"NL1zYXgzFGG",
"4ciKAxc34mT",
"Gr3bkrznlce",
"HGZT0_1E2-z",
"q4LNIb585Bt",
"Xcm_f58KWsE",
"Njmt4LisbsC",
"anWI6YdvXsm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This is a well-written analysis of Hamiltonian Neural Networks (HNNs), a class of physics-inspired deep neural networks. The work is motivated by a desire to apply HNNs to non-toy datasets as well as to gain an understanding of the key inductive biases that explain the majority of their performance. Through contro... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2022_EDeVYpT42oS",
"HGZT0_1E2-z",
"q4LNIb585Bt",
"Gr3bkrznlce",
"4ciKAxc34mT",
"iclr_2022_EDeVYpT42oS",
"anWI6YdvXsm",
"pjHWOHKzR-1",
"Njmt4LisbsC",
"Xcm_f58KWsE",
"iclr_2022_EDeVYpT42oS",
"iclr_2022_EDeVYpT42oS",
"iclr_2022_EDeVYpT42oS"
] |
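The record's central finding is that modeling acceleration directly, rather than imposing Hamiltonian structure, carries most of the benefit. The sketch below is the corresponding minimal model: a network predicting acceleration from state, integrated with explicit Euler (real experiments would use a higher-order integrator).

```python
import torch

class SecondOrderDynamics(torch.nn.Module):
    """Directly model acceleration: qdd = f(q, qd), no Hamiltonian needed.

    This encodes the paper's key inductive bias, second-order dynamics,
    without symplectic structure or energy conservation, so it also fits
    dissipative systems and contacts.
    """
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.f = torch.nn.Sequential(
            torch.nn.Linear(2 * dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim))

    def step(self, q, qd, dt=0.01):          # one explicit-Euler step
        qdd = self.f(torch.cat([q, qd], dim=-1))
        return q + dt * qd, qd + dt * qdd

model = SecondOrderDynamics(dim=2)
q, qd = torch.zeros(1, 2), torch.ones(1, 2)
for _ in range(5):                            # roll out a short trajectory
    q, qd = model.step(q, qd)
print(q, qd)
```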
iclr_2022_kOu3-S3wJ7 | Filling the G_ap_s: Multivariate Time Series Imputation by Graph Neural Networks | Dealing with missing values and incomplete time series is a labor-intensive, tedious, inevitable task when handling data coming from real-world applications. Effective spatio-temporal representations would allow imputation methods to reconstruct missing temporal data by exploiting information coming from sensors at different locations. However, standard methods fall short in capturing the nonlinear time and space dependencies existing within networks of interconnected sensors and do not take full advantage of the available - and often strong - relational information. Notably, most state-of-the-art imputation methods based on deep learning do not explicitly model relational aspects and, in any case, do not exploit processing frameworks able to adequately represent structured spatio-temporal data. Conversely, graph neural networks have recently surged in popularity as both expressive and scalable tools for processing sequential data with relational inductive biases. In this work, we present the first assessment of graph neural networks in the context of multivariate time series imputation. In particular, we introduce a novel graph neural network architecture, named GRIN, which aims at reconstructing missing data in the different channels of a multivariate time series by learning spatio-temporal representations through message passing. Empirical results show that our model outperforms state-of-the-art methods in the imputation task on relevant real-world benchmarks with mean absolute error improvements often higher than 20%. | Accept (Poster) | This paper proposes a method for time series imputation modelling the spatial dependencies with graphs, focusing on spatio-temporal data, where the spatial dimensions are represented by a graph.
The reviewers find the approach novel. The paper is well-written and clear. Related work is adequately discussed.
The experiments are convincing.
The reviewers agree that the paper should be accepted. | train | [
"CfcG5-lL7l",
"82oJxUmtI1K",
"qcIvAK31W7E",
"YAEfPiAmMw",
"gzyXg_pBwsz",
"MsmNd_ricy",
"ov44tj6Zphd",
"UU4wmLwOURn",
"tEma1E3UlB",
"U1T68ZtusBo",
"UtMvTHYiaG7",
"mDnrZR4bh93",
"FbCrUFPNKR",
"e5nrA-wGR5H",
"1Ldy8dqFSE",
"EklGnTpj-H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers and AC, \n\nsince the rebuttal window is closing, we wish to thank all of you again for helping us improve our paper and for your knowledgeable comments. \n\nWe hope that our answers and revision of the paper helped in clearing up your doubts. \n\nThe authors",
" We updated the supplementary mate... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"iclr_2022_kOu3-S3wJ7",
"YAEfPiAmMw",
"MsmNd_ricy",
"iclr_2022_kOu3-S3wJ7",
"iclr_2022_kOu3-S3wJ7",
"U1T68ZtusBo",
"FbCrUFPNKR",
"e5nrA-wGR5H",
"1Ldy8dqFSE",
"gzyXg_pBwsz",
"EklGnTpj-H",
"EklGnTpj-H",
"iclr_2022_kOu3-S3wJ7",
"iclr_2022_kOu3-S3wJ7",
"iclr_2022_kOu3-S3wJ7",
"iclr_2022_kO... |
iclr_2022_qRDQi3ocgR3 | Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective | Deep neural networks (DNNs) often rely on easy-to-learn discriminatory features, or cues, that are not necessarily essential to the problem at hand. For example, ducks in an image may be recognized based on their typical background scenery, such as lakes or streams. This phenomenon, also known as shortcut learning, is emerging as a key limitation of the current generation of machine learning models. In this work, we introduce a set of experiments to deepen our understanding of shortcut learning and its implications. We design a training setup with several shortcut cues, named WCST-ML, where each cue is equally conducive to the visual recognition problem at hand. Even under equal opportunities, we observe that (1) certain cues are preferred to others, (2) solutions biased to the easy-to-learn cues tend to converge to relatively flat minima on the loss surface, and (3) the solutions focusing on those preferred cues are far more abundant in the parameter space. We explain the abundance of certain cues via their Kolmogorov (descriptional) complexity: solutions corresponding to Kolmogorov-simple cues are abundant in the parameter space and are thus preferred by DNNs. Our studies are based on the synthetic dataset DSprites and the face dataset UTKFace. In our WCST-ML, we observe that the inborn bias of models leans toward simple cues, such as color and ethnicity. Our findings emphasize the importance of active human intervention to remove the inborn model biases that may cause negative societal impacts. | Accept (Poster) | The reviewers were split, with one of them leaning towards rejection, primarily due to the (perceived) limited impact of the study. I tend to agree with the other reviewers that this paper provides an interesting and original framework for analysis of learning models, and while there are substantial shortcomings, they are outweighed by the positives (including the promise this approach may hold for analysis of learning in more realistic scenarios). I therefore recommend acceptance, if space in the proceedings allows. | train | [
"XPOiPxLU6FG",
"sQfS9uqtZff",
"gXTSZocs01",
"TtQuFMuC8Lc",
"dBc32rfGaW",
"pn-J6PWz1p2",
"48MreOloxmC",
"xsyV6BdF0OY",
"6zw0QvWvyx",
"gZoaeAqqTa1",
"NRueFjy40Z_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors propose a framework for studying the tendency of deep neural networks to preferentially adopt \"cues\". Specifically, they focus on settings where multiple cues are equally likely, though not all of them are equally exploited. To set up such a scenario, they introduce the WCST-ML task, in which the pre... | [
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_qRDQi3ocgR3",
"48MreOloxmC",
"iclr_2022_qRDQi3ocgR3",
"XPOiPxLU6FG",
"XPOiPxLU6FG",
"gXTSZocs01",
"gXTSZocs01",
"NRueFjy40Z_",
"NRueFjy40Z_",
"iclr_2022_qRDQi3ocgR3",
"iclr_2022_qRDQi3ocgR3"
] |
iclr_2022_ySQH0oDyp7 | QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization | Recently, post-training quantization (PTQ) has driven much attention to produce efficient neural networks without long-time retraining. Despite the low cost, current PTQ works always fail under the extremely low-bit setting. In this study, we pioneeringly confirm that properly incorporating activation quantization into the PTQ reconstruction benefits the final accuracy. To deeply understand the inherent reason, a theoretical framework is established, which inspires us that the flatness of the optimized low-bit model on calibration and test data is crucial. Based on the conclusion, a simple yet effective approach dubbed as \textsc{QDrop} is proposed, which randomly drops the quantization of activations during reconstruction. Extensive experiments on various tasks including computer vision (image classification, object detection) and natural language processing (text classification and question answering) prove its superiority. With \textsc{QDrop}, the limit of PTQ is pushed to the 2-bit activation for the first time and the accuracy boost can be up to 51.49\%. Without bells and whistles, \textsc{QDrop} establishes a new state of the art for PTQ. | Accept (Poster) | This paper proposes a simple, theoretically motivated approach for post-training quantization. The authors justify its effectiveness with both a sound theoretical analysis, and strong empirical results across many tasks and models, including a state-of-the-art result for 2-bit quantized weights/activations. All reviewers agreed the paper is worth accepting, with 3/4 rating it as a clear accept following the discussion period, and the fourth reviewer not giving strong reasons not to accept. | train | [
"Q4v6tEBvTkB",
"GLZCTHcu7et",
"uYcOxJnAMp",
"t8Wqo2_QmAr",
"Bef1BIb5g7k",
"Fb_pkn7YTD_",
"aMfZP6JV1TJ",
"eidzaA0leZ2",
"YwIj8cQraXO",
"p6Dx8zJrRaX",
"PyLRKRV4MMG",
"goub4wt2zn5",
"pYeQHX7e1Qw",
"169ef_iyhDW",
"qEGDZpKfdIK",
"_Pngg9rQvF",
"RqmCsOR7prS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"This paper proposes the post-training quantization for extremely low-bit neural networks. By considering the activation quantization during reconstruction, the presented QDrop randomly drops the quantization of activations with higher loss flatness that adapts the activations with various activation bitwidths well... | [
8,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_ySQH0oDyp7",
"iclr_2022_ySQH0oDyp7",
"iclr_2022_ySQH0oDyp7",
"GLZCTHcu7et",
"GLZCTHcu7et",
"Q4v6tEBvTkB",
"uYcOxJnAMp",
"GLZCTHcu7et",
"Q4v6tEBvTkB",
"GLZCTHcu7et",
"uYcOxJnAMp",
"RqmCsOR7prS",
"iclr_2022_ySQH0oDyp7",
"qEGDZpKfdIK",
"_Pngg9rQvF",
"GLZCTHcu7et",
"iclr_2022_... |
iclr_2022_dQ7Cy_ndl1s | Controlling the Complexity and Lipschitz Constant improves Polynomial Nets | While the class of Polynomial Nets demonstrates comparable performance to neural networks (NN), it currently has neither theoretical generalization characterization nor robustness guarantees. To this end, we derive new complexity bounds for the set of Coupled CP-Decomposition (CCP) and Nested Coupled CP-decomposition (NCP) models of Polynomial Nets in terms of the $\ell_\infty$-operator-norm and the $\ell_2$-operator norm. In addition, we derive bounds on the Lipschitz constant for both models to establish a theoretical certificate for their robustness. The theoretical results enable us to propose a principled regularization scheme that we also evaluate experimentally and show that it improves the accuracy as well as the robustness of the models to adversarial perturbations. We showcase how this regularization can be combined with adversarial training, resulting in further improvements. | Accept (Poster) | This paper considers generalization of polynomial networks. It gives a characterization of the Rademacher complexity as well as Lipschitz constants for polynomial nets. Inspired by the theoretical results, the paper also proposed regularization schemes that empirically improves accuracy and robustness. Most reviewers found the theoretical results to be interesting (but there are some concerns about the mismatch between the upperbound in theory and used in practice, which was partially addressed in the response). There are some more concerns about the experiments but many of them are addressed in the new version. Overall although polynomial networks are not popular in practice, this paper provides some interesting theoretical results. | val | [
"L61VjjNWB7",
"mym6y8mGHOE",
"3y6BjHr_z6c",
"gFosSY9a9ih",
"QMMlFIH7SiU",
"xRC2CG59IFk",
"ov7ADkTT2N",
"jO463n_kiP9",
"cWkrq_qH7J3",
"Dudrgf_5E0f",
"Vbga5R1yd_H",
"oXDW7wVuJC",
"s-mlOflatIJ",
"mWgUqSQWSxF",
"2PO9Jc1RP6",
"UTrhNBAIeEy",
"27zOGSgBVr3",
"6HzovPDtrq9",
"xeS8pm2LC0"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer SK73, \n\ngiven that the discussion window is closing soon, we would like to confirm whether the concern of the reviewer has been addressed. \n\nOne of the main limitations mentioned was the lack of insights, which have been added. The regularization schemes (i.e., layer-wise bounds have been added ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"mWgUqSQWSxF",
"ov7ADkTT2N",
"gFosSY9a9ih",
"QMMlFIH7SiU",
"oXDW7wVuJC",
"jO463n_kiP9",
"jO463n_kiP9",
"cWkrq_qH7J3",
"UTrhNBAIeEy",
"UTrhNBAIeEy",
"xeS8pm2LC0",
"xeS8pm2LC0",
"6HzovPDtrq9",
"27zOGSgBVr3",
"iclr_2022_dQ7Cy_ndl1s",
"iclr_2022_dQ7Cy_ndl1s",
"iclr_2022_dQ7Cy_ndl1s",
"... |
iclr_2022_MXdFBmHT4C | Differentiable Expectation-Maximization for Set Representation Learning | We tackle the set2vec problem, the task of extracting a vector representation from an input set comprised of a variable number of feature vectors. Although recent approaches based on self attention such as (Set)Transformers were very successful due to the capability of capturing complex interaction between set elements, the computational overhead is the well-known downside. The inducing-point attention and the latest optimal transport kernel embedding (OTKE) are promising remedies that attain comparable or better performance with reduced computational cost, by incorporating a fixed number of learnable queries in attention. In this paper we approach the set2vec problem from a completely different perspective. The elements of an input set are considered as i.i.d. samples from a mixture distribution, and we define our set embedding feed-forward network as the maximum-a-posteriori (MAP) estimate of the mixture which is approximately attained by a few Expectation-Maximization (EM) steps. The whole MAP-EM steps are differentiable operations with a fixed number of mixture parameters, allowing efficient auto-diff back-propagation for any given downstream task. Furthermore, the proposed mixture set data fitting framework allows unsupervised set representation learning naturally via marginal likelihood maximization aka the empirical Bayes. Interestingly, we also find that OTKE can be seen as a special case of our framework, specifically a single-step EM with extra balanced assignment constraints on the E-step. Compared to OTKE, our approach provides more flexible set embedding as well as prior-induced model regularization. We evaluate our approach on various tasks demonstrating improved performance over the state-of-the-arts. | Accept (Poster) | This work proposes a new embedding for sets of features. A set is represented by the output means of an EM algorithm for fitting the input set with a mixture of Gaussians. The authors draw a new connection to an existing method for set embedding (OTKE). Moreover, their method achieves good experimental results.
There is general consensus among the reviewers that the paper is sound, well-written and provides new insights for set representation, with convincing experiments.
The authors have answered to most comments raised by the reviewers and have revised the paper accordingly.
I recommend acceptance as a poster. | train | [
"g7zbL5oTfH8",
"SQ0dYKXnSp",
"LbGOf-oTmLc",
"Qim_rkh2OF",
"OYdg3uTM9FC",
"MjzrQnV7S-",
"F7RvvUZ8dVj",
"lJEa62FlRCH",
"MKSOsZ_fv8J",
"Ol9DnAUw86"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes a new embedding for sets of features, an important problem since many data modalities can be seen as such (images, sentences, etc.). More precisely, a set is represented by the output means of an EM algorithm for fitting the input set with a mixture of gaussians. The authors draw a new connectio... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
2
] | [
"iclr_2022_MXdFBmHT4C",
"OYdg3uTM9FC",
"MKSOsZ_fv8J",
"iclr_2022_MXdFBmHT4C",
"g7zbL5oTfH8",
"Ol9DnAUw86",
"lJEa62FlRCH",
"iclr_2022_MXdFBmHT4C",
"iclr_2022_MXdFBmHT4C",
"iclr_2022_MXdFBmHT4C"
] |
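The set2vec construction in this record unrolls a few differentiable EM steps of a mixture fit and reads the embedding off the updated component means. The sketch below implements the plain maximum-likelihood variant with a fixed spherical variance; the paper's MAP-EM adds priors on the mixture parameters, which is not shown.

```python
import torch

def em_set_embed(x, mu0, n_steps=3, sigma2=1.0):
    """Embed a variable-size set via a few differentiable EM steps.

    x:   (n, d) set elements, treated as i.i.d. draws from a mixture.
    mu0: (k, d) learnable initial component means (fixed-size 'queries').
    Each step: E-step computes soft responsibilities of components for
    elements; M-step takes responsibility-weighted means. The final
    means, flattened, form a permutation-invariant set representation;
    every op is differentiable, so mu0 and upstream features train by
    backprop through the unrolled loop.
    """
    mu = mu0
    for _ in range(n_steps):
        d2 = torch.cdist(x, mu) ** 2                      # (n, k)
        resp = torch.softmax(-d2 / (2 * sigma2), dim=1)   # E-step
        mu = (resp.T @ x) / resp.sum(dim=0)[:, None]      # M-step (MLE means)
    return mu.flatten()                                   # (k * d,)

x = torch.randn(17, 8)                  # a set of 17 feature vectors
mu0 = torch.randn(4, 8, requires_grad=True)
emb = em_set_embed(x, mu0)
print(emb.shape)                        # torch.Size([32])
emb.sum().backward()                    # gradients reach mu0
```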
iclr_2022_BZnnMbt0pW | Promoting Saliency From Depth: Deep Unsupervised RGB-D Saliency Detection | Growing interests in RGB-D salient object detection (RGB-D SOD) have been witnessed in recent years, owing partly to the popularity of depth sensors and the rapid progress of deep learning techniques. Unfortunately, existing RGB-D SOD methods typically demand large quantity of training images being thoroughly annotated at pixel-level. The laborious and time-consuming manual annotation has become a real bottleneck in various practical scenarios. On the other hand, current unsupervised RGB-D SOD methods still heavily rely on handcrafted feature representations. This inspires us to propose in this paper a deep unsupervised RGB-D saliency detection approach, which requires no manual pixel-level annotation during training. It is realized by two key ingredients in our training pipeline. First, a depth-disentangled saliency update (DSU) framework is designed to automatically produce pseudo-labels with iterative follow-up refinements, which provides more trustworthy supervision signals for training the saliency network. Second, an attentive training strategy is introduced to tackle the issue of noisy pseudo-labels, by properly re-weighting to highlight the more reliable pseudo-labels. Extensive experiments demonstrate the superior efficiency and effectiveness of our approach in tackling the challenging unsupervised RGB-D SOD scenarios. Moreover, our approach can also be adapted to work in fully-supervised situation. Empirical studies show the incorporation of our approach gives rise to notably performance improvement in existing supervised RGB-D SOD models. | Accept (Poster) | The paper received two accepts and 1 marginally above acceptance recommendations. The authors provided satisfactory answers, mostly on clarifying the unsupervised learning methodology, in conjunction with the MAA recommendation. I recommend the paper be accepted as poster. | train | [
"Apptn1Gkub",
"OQWweIHsK2Z",
"zasJ6prFODg",
"AOHhG7Xhm6b",
"g6gr8jiUNXy",
"j5j-qevbirU",
"slt8iYDKIu3",
"NbobKN4iaZ",
"trH6WqkpTU",
"ClinXaxq6v2",
"CD08YYukdj",
"Sc40ZMp9uHE"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper presents an unsupervised RGB saliency detection model to learn from the pseudo-labels given by the traditional handcrafted approach. To enhance the quality of the pseudo labels, the author proposes to disentangle the depth data to promote the saliency signal and compress undesired noise. Last but not le... | [
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_BZnnMbt0pW",
"iclr_2022_BZnnMbt0pW",
"iclr_2022_BZnnMbt0pW",
"trH6WqkpTU",
"ClinXaxq6v2",
"Apptn1Gkub",
"NbobKN4iaZ",
"j5j-qevbirU",
"ClinXaxq6v2",
"OQWweIHsK2Z",
"Sc40ZMp9uHE",
"iclr_2022_BZnnMbt0pW"
] |
iclr_2022_l5aSHXi8jG5 | Demystifying Limited Adversarial Transferability in Automatic Speech Recognition Systems | The targeted transferability of adversarial samples enables attackers to exploit black-box models in the real-world. The most popular method to produce these adversarial samples is optimization attacks, which have been shown to achieve a high level of transferability in some domains. However, recent research has demonstrated that these attack samples fail to transfer when applied to Automatic Speech Recognition Systems (ASRs). In this paper, we investigate factors preventing this transferability via exhaustive experimentation. To do so, we perform an ablation study on each stage of the ASR pipeline. We discover and quantify six factors (i.e., input type, MFCC, RNN, output type, and vocabulary and sequence sizes) that impact the targeted transferability of optimization attacks against ASRs. Future research can leverage our findings to build ASRs that are more robust to other transferable attack types (e.g., signal processing attacks), or to modify architectures in other domains to reduce their exposure to targeted transferability of optimization attacks. | Accept (Poster) | This paper explores why adversarial examples do not transfer well in
the context of automatic speech recognition systems. The authors propose a number of potential causes that are then quickly evaluated in turn.
This could be an excellent paper, but in its current form, it is borderline. The main problem with the paper is that it proposes a number of causes for the limited transferability, and then evaluates each of them with one quick experiment and just a paragraph of text. In particular, none of the results actually convince me that the claim is definitely correct, and many of the experimental setups are confusing or admit explanations other than the one variable that is meant to be controlled for.
That said, even with these weaknesses, this paper raises interesting and new questions with an approach I have not seen previously. So while I don't believe the paper has done much to actually demystify transferability, it does take steps towards performing scientific experiments to understand why it is so hard. And these experiments, while not perfect, can serve as the basis for future work to extend and understand which factors are most important. | train | [
"aHFTQ8tBEFF",
"TfRry6HLcg1",
"Cyo7PS3q1p",
"mA3HG8SJes9",
"pqSyicYYoc",
"tpsqAUomS1O",
"A4-n8SzNnuB",
"-fFK0zWVdl5",
"Es3zaROu0Ss"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarifications on the points I raised.\n\n1. I failed to see the architecture the first time, but see it now.\n2. The comparison in 4.6 makes sense now with respect to the vocabulary size. And thank you for clarifying the alignment question.\n3. Thank you for references.\n4. This explanation mak... | [
-1,
-1,
-1,
-1,
-1,
8,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"Cyo7PS3q1p",
"A4-n8SzNnuB",
"Es3zaROu0Ss",
"tpsqAUomS1O",
"-fFK0zWVdl5",
"iclr_2022_l5aSHXi8jG5",
"iclr_2022_l5aSHXi8jG5",
"iclr_2022_l5aSHXi8jG5",
"iclr_2022_l5aSHXi8jG5"
] |
iclr_2022_41e9o6cQPj | GreaseLM: Graph REASoning Enhanced Language Models | Answering complex questions about textual narratives requires reasoning over both stated context and the world knowledge that underlies it. However, pretrained language models (LM), the foundation of most modern QA systems, do not robustly represent latent relationships between concepts, which is necessary for reasoning. While knowledge graphs (KG) are often used to augment LMs with structured representations of world knowledge, it remains an open question how to effectively fuse and reason over the KG representations and the language context, which provides situational constraints and nuances. In this work, we propose GreaseLM, a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations. Information from both modalities propagates to the other, allowing language context representations to be grounded by structured world knowledge, and allowing linguistic nuances (e.g., negation, hedging) in the context to inform the graph representations of knowledge. Our results on three benchmarks in the commonsense reasoning (i.e., CommonsenseQA, OpenbookQA) and medical question answering (i.e., MedQA-USMLE) domains demonstrate that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger. | Accept (Spotlight) | The paper presents a novel method for fusing information from two modalities, text (context and question) and a knowledge base, for the task of question answering. The proposed method looks quite simple and clear, while the results show strong gains against baseline methods on 3 different datasets. Ablation studies show that the model achieves good performance on more complex questions. While the reviewers raise some concerns, e.g., on the sensitivity of the proposed method and its technical novelty relative to prior work, they generally see value in this paper, and the authors did a good job in their rebuttal. After several rounds of interaction, some reviewers were convinced to raise their scores slightly. As a result, we think the paper is in good shape and the ICLR audience should be interested in it.
"kPAyMWyX1c",
"XH9QokMSyFu",
"xKaO58FkJD0",
"ghuDdxY6mTG",
"Zqaprsahqcd",
"pj_9cVJxAc",
"4ZUCZxcqScu",
"Xx0zlKAWUE_",
"Q7zsr3-rl5e",
"Qj2QU7P-9tz",
"-fBikATqjU",
"l_5J5uQiuwi",
"d4lBl7bOPzV",
"LwP1MV3VHRc",
"wPwjCBFuhnH",
"haPRgiteuFE",
"D_2GFBoFtOe"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for continuing to engage with our paper!\n\n> What method in Table 7 do you refer to \"full connectivity on the graph side\"? If I understand correctly, \"full connectivity on the graph side\" is still different with \"A joint attention over all atoms\" over interaction layer. Do you have em... | [
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"Qj2QU7P-9tz",
"Zqaprsahqcd",
"Xx0zlKAWUE_",
"4ZUCZxcqScu",
"LwP1MV3VHRc",
"iclr_2022_41e9o6cQPj",
"l_5J5uQiuwi",
"wPwjCBFuhnH",
"iclr_2022_41e9o6cQPj",
"d4lBl7bOPzV",
"iclr_2022_41e9o6cQPj",
"pj_9cVJxAc",
"D_2GFBoFtOe",
"haPRgiteuFE",
"Q7zsr3-rl5e",
"iclr_2022_41e9o6cQPj",
"iclr_202... |
iclr_2022_JJCjv4dAbyL | Learning Discrete Structured Variational Auto-Encoder using Natural Evolution Strategies | Discrete variational auto-encoders (VAEs) are able to represent semantic latent spaces in generative learning. In many real-life settings, the discrete latent space consists of high-dimensional structures, and propagating gradients through the relevant structures often requires enumerating over an exponentially large latent space. Recently, various approaches were devised to propagate approximated gradients without enumerating over the space of possible structures. In this work, we use Natural Evolution Strategies (NES), a class of gradient-free black-box optimization algorithms, to learn discrete structured VAEs. The NES algorithms are computationally appealing as they estimate gradients with forward pass evaluations only, and thus do not need to propagate gradients through their discrete structures. We demonstrate empirically that optimizing discrete structured VAEs using NES is as effective as gradient-based approximations. Lastly, we prove NES converges for non-Lipschitz functions such as those appearing in discrete structured VAEs. | Accept (Poster) | The authors propose to use evolutionary algorithms to learn variational autoencoders (VAEs) with discrete latent spaces. Specifically, they employ natural evolution strategies (NES) to avoid backpropagating gradients through discrete variables. Experiments show how the proposed approach is competitive with the current state of the art for training discrete VAEs.
Some concerns arose from the review and discussion phases; these included confusion around the justification and derivation of NES for VAEs in the presentation, and the limited scope of the experiments. The authors were responsive and provided the reviewers the needed clarifications, an updated presentation in the revised paper, and additional experimental results, which ultimately were successful in raising the reviewers' scores towards full acceptance.
"n7Gb_Eo-Q4c",
"VMxJTOe9eX",
"Uo_pxXi5B5w",
"s0pj2gDJat",
"oA-F6xHaSuH",
"KPGt224xL0P",
"T2amtYofxx3",
"yI_bPA4D5YU",
"v1eiNq_6CKq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper, the authors proposed to use the Natural Evolution Strategy (NES) algorithm for learning discrete structured VAEs. This algorithm estimate gradients with forwarding pass evaluations only and do not require propagating gradients through their discrete structures. Authors showed empirically that optimi... | [
8,
6,
8,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_JJCjv4dAbyL",
"iclr_2022_JJCjv4dAbyL",
"iclr_2022_JJCjv4dAbyL",
"iclr_2022_JJCjv4dAbyL",
"VMxJTOe9eX",
"v1eiNq_6CKq",
"n7Gb_Eo-Q4c",
"Uo_pxXi5B5w",
"iclr_2022_JJCjv4dAbyL"
] |
iclr_2022_nBU_u6DLvoK | UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning | It is a challenging task to learn rich and multi-scale spatiotemporal semantics from high-dimensional videos, due to large local redundancy and complex global dependency between video frames. The recent advances in this research have been mainly driven by 3D convolutional neural networks and vision transformers. Although 3D convolution can efficiently aggregate local context to suppress local redundancy from a small 3D neighborhood, it lacks the capability to capture global dependency because of the limited receptive field. Alternatively, vision transformers can effectively capture long-range dependency via the self-attention mechanism, while being limited in reducing local redundancy due to blind similarity comparison among all the tokens in each layer. Based on these observations, we propose a novel Unified transFormer (UniFormer) which seamlessly integrates the merits of 3D convolution and spatiotemporal self-attention in a concise transformer format, and achieves a preferable balance between computation and accuracy. Different from traditional transformers, our relation aggregator can tackle both spatiotemporal redundancy and dependency, by learning local and global token affinity respectively in shallow and deep layers. We conduct extensive experiments on the popular video benchmarks, e.g., Kinetics-400, Kinetics-600, and Something-Something V1&V2. With only ImageNet-1K pretraining, our UniFormer achieves 82.9%/84.8% top-1 accuracy on Kinetics-400/Kinetics-600, while requiring 10x fewer GFLOPs than other state-of-the-art methods. For Something-Something V1 and V2, our UniFormer achieves new state-of-the-art performances of 60.9% and 71.2% top-1 accuracy, respectively. Code is available at https://github.com/Sense-X/UniFormer. | Accept (Poster) | The paper presents an approach for spatio-temporal representation learning using Transformers. It introduces a particular architecture design, which shows impressive computational efficiency. The reviewers agree that the experimental results are strong, and unanimously recommend the paper for acceptance. The reviewers find their concerns regarding the details of the approach/setting addressed after the authors' response.
We recommend accepting the paper. | val | [
"fGdZeBmWqu",
"2mM54axwIUe",
"YDFQnvXhtI9",
"j6ps1FokbNP",
"-fIIV60gSZq",
"A9zVtSuoIVi",
"rRFoml0o2CL",
"inpnLsbnHX",
"5o-wz3BJVoK",
"cc8LzyPSDr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The rebuttal addresses my concerns. I follow my original score assessment and recommend this work for acceptance.",
" I have read the rebuttal, and it addresses most of my initial concerns. I think it's a good paper that introduces effective and efficient video architecture, which will be valuable to the video ... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"A9zVtSuoIVi",
"YDFQnvXhtI9",
"cc8LzyPSDr",
"rRFoml0o2CL",
"5o-wz3BJVoK",
"inpnLsbnHX",
"iclr_2022_nBU_u6DLvoK",
"iclr_2022_nBU_u6DLvoK",
"iclr_2022_nBU_u6DLvoK",
"iclr_2022_nBU_u6DLvoK"
] |
iclr_2022_OnpFa95RVqs | Surrogate NAS Benchmarks: Going Beyond the Limited Search Spaces of Tabular NAS Benchmarks | The most significant barrier to the advancement of Neural Architecture Search (NAS) is its demand for large computational resources, which hinders scientifically sound empirical evaluations of NAS methods. Tabular NAS benchmarks have alleviated this problem substantially, making it possible to properly evaluate NAS methods in seconds on commodity machines. However, an unintended consequence of tabular NAS benchmarks has been a focus on extremely small architectural search spaces since their construction relies on exhaustive evaluations of the space. This leads to unrealistic results that do not transfer to larger spaces. To overcome this fundamental limitation, we propose a methodology to create cheap NAS surrogate benchmarks for arbitrary search spaces. We exemplify this approach by creating surrogate NAS benchmarks on the existing tabular NAS-Bench-101 and on two widely used NAS search spaces with up to $10^{21}$ architectures ($10^{13}$ times larger than any previous tabular NAS benchmark). We show that surrogate NAS benchmarks can model the true performance of architectures better than tabular benchmarks (at a small fraction of the cost), that they lead to faithful estimates of how well different NAS methods work on the original non-surrogate benchmark, and that they can generate new scientific insight. We open-source all our code and believe that surrogate NAS benchmarks are an indispensable tool to extend scientifically sound work on NAS to large and exciting search spaces. | Accept (Poster) | This paper proposes a methodology to create cheap NAS surrogate benchmarks for arbitrary search spaces. Certainly, the work is interesting and useful, with comprehensive studies to validate such an approach. It should be credited as one of the first efforts to introduce and comprehensively study the concept of surrogate NAS benchmarks. In AC's opinion, it is a solid paper that will inspire (or has already inspired) many follow-up works. The paper is well written.
This paper received highly mixed ratings. Although the authors might not see it, all reviewers actually participated in the private discussions. Reviewer 1eb8 indicated hesitation in her/his support. Reviewer yTPb stated that, if not considering the arXiv complicacy, she/he "would certainly raise score by one level". AC also reached out to Reviewer yTPb about her/his mentioned possibility of updating scores, and got confirmation that her/his original opinion wasn't changing after the rebuttals. Besides, AC agrees the arXiv/NeurIPS complicacy shouldn't be brought into the current discussion, and ignored that factor during decision making.
The main sticking (and considered-as-valid) critique is on the relatively outdated and incomplete selection of baselines. As a benchmark paper, it should capture and diversify the recent methods. For example, the authors might consider adding:
* https://botorch.org/docs/papers (latest methods in Bayesian optimization)
* https://github.com/facebookresearch/LaMCTS (latest methods in Monte Carlo tree search)
* https://facebookresearch.github.io/nevergrad/ (latest methods in evolutionary algorithms)
Given the above concerns, AC considers this paper to sit on the borderline, perhaps with pros outweighing the cons. Hence, a weak accept decision is recommended at this moment.
"kJcSOkgbsL",
"vtlOod8aQIU",
"1P_CIq5fEkh",
"in51YgqxGO",
"B-rqCcLjAnG",
"ZCeQ3c0Vn08",
"3gYOYZM5UL",
"Vc1yMFFX4_u",
"EqWi9Syx_0",
"lLJVCBo_xv0",
"wMpBRJ7qis",
"YNuPtfi7x1_",
"ajvtFbxI-C7",
"vvX75YOcWS",
"PPknlfvk69T",
"N7_wR341R3P",
"yPRtvLpZyM",
"_SEFcLnpks_",
"bN5m83QEpWB",
... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" Thank you again for your review. We believe to have fully addressed your concerns 1 and 2 (with the creation of a new benchmark for concern 3 taking longer than the rebuttal period). We were glad to read that you would be open for accepting the paper with concern 1 and 2 taken care of and were wondering whether t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"B-rqCcLjAnG",
"1P_CIq5fEkh",
"in51YgqxGO",
"ZCeQ3c0Vn08",
"bN5m83QEpWB",
"3gYOYZM5UL",
"EqWi9Syx_0",
"wMpBRJ7qis",
"PPknlfvk69T",
"_SEFcLnpks_",
"yPRtvLpZyM",
"iclr_2022_OnpFa95RVqs",
"tmNLCXiWNwV",
"ajvtFbxI-C7",
"N7_wR341R3P",
"vvX75YOcWS",
"YNuPtfi7x1_",
"H2gRPgnGllU",
"zmcgk... |
iclr_2022_6kCiVaoQdx9 | Few-shot Learning via Dirichlet Tessellation Ensemble | Few-shot learning (FSL) is the process of rapid generalization from abundant base samples to inadequate novel samples. Despite extensive research in recent years, FSL is still not able to generate satisfactory solutions for a wide range of real-world applications. To confront this challenge, we study the FSL problem from a geometric point of view in this paper. One observation is that the widely embraced ProtoNet model is essentially a Voronoi Diagram (VD) in the feature space. We retrofit it by making use of a recent advance in computational geometry called Cluster-induced Voronoi Diagram (CIVD). Starting from the simplest nearest neighbor model, CIVD gradually incorporates cluster-to-point and then cluster-to-cluster relationships for space subdivision, which is used to improve the accuracy and robustness at multiple stages of FSL. Specifically, we use CIVD (1) to integrate parametric and nonparametric few-shot classifiers; (2) to combine feature representation and surrogate representation; and (3) to leverage feature-level, transformation-level, and geometry-level heterogeneities for a better ensemble. Our CIVD-based workflow enables us to achieve new state-of-the-art results on the mini-ImageNet, CUB, and tiered-ImageNet datasets, with ${\sim}2\%{-}5\%$ improvements upon the next best. To summarize, CIVD provides a mathematically elegant and geometrically interpretable framework that compensates for extreme data insufficiency, prevents overfitting, and allows for fast geometric ensembling over thousands of individual VDs. These together make FSL stronger. | Accept (Poster) | This paper analyzes popular metric-based few-shot learning (FSL) methods from the perspective of computational geometry, namely viewing prototypical networks as Voronoi diagrams (VDs). This lends itself to incorporating extensions based on the recently proposed CIVD that allows for multiple centers per cell. The paper then discusses various aspects of the FSL pipeline (data augmentation, feature transformations, geometries and representations), referred to as heterogeneities, that can be efficiently ensembled via a cluster-to-cluster VD (CCVD). The resulting model produces state-of-the-art results.
Initial concerns from the reviewers pointed to a potential lack of novelty (since it can be seen as applying existing ideas to FSL), lack of self-containment in the main paper, weak positioning in the context of other FSL methods (which ones can be interpreted under the VD framework) and a potentially impractical computational complexity. The discussion period settled these issues, with the paper receiving several updates, and the reviewers all ended up recommending acceptance.
Personally, I would like to see an addition to Figure 1 with the resulting decision boundaries from CIVD and CCVD. I think that this would greatly improve the ability of the reader to reason about the approach intuitively. Also, as a minor comment, I think that the argmax below eqs 1 and 7 should either be an argmin, or the distances should be negated. Otherwise, I think this is a valuable contribution to the FSL literature. | train | [
"XiypVc1lsQJ",
"rw0DRE2bHwu",
"OZzlzB7vdEf",
"RIFGkn6mxr",
"0pGluOSpnH",
"lKaacTw6lau",
"9xMki8AcjDc",
"1oNRzZix2n4",
"N_jE_gcDoY",
"X3iXL2sAHcO",
"GhchsqHYbvf",
"BXL8N8DjPYg",
"9sJXkkaZnZ",
"_rnTkKcMmW",
"Myz3CiJ7UQ7",
"1tS9lBgCg77",
"8l9dUoDkQ2",
"_4v0c2J6qek"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a CIVD-based approach to few-shot learning. CIVD, cluster-induced Voronoi diagrams, are a known technique that is used to categorize / describe different types of few-shot classifiers. In the experiment section DeepVoro(++) is shown to perform superior to other methods on three datasets. The pap... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5
] | [
"iclr_2022_6kCiVaoQdx9",
"9xMki8AcjDc",
"RIFGkn6mxr",
"0pGluOSpnH",
"9sJXkkaZnZ",
"X3iXL2sAHcO",
"GhchsqHYbvf",
"N_jE_gcDoY",
"_rnTkKcMmW",
"1tS9lBgCg77",
"BXL8N8DjPYg",
"XiypVc1lsQJ",
"_4v0c2J6qek",
"8l9dUoDkQ2",
"iclr_2022_6kCiVaoQdx9",
"iclr_2022_6kCiVaoQdx9",
"iclr_2022_6kCiVaoQd... |
iclr_2022_twv2QlJhXzo | Imitation Learning from Observations under Transition Model Disparity | Learning to perform tasks by leveraging a dataset of expert observations, also known as imitation learning from observations (ILO), is an important paradigm for learning skills without access to the expert reward function or the expert actions. We consider ILO in the setting where the expert and the learner agents operate in different environments, with the source of the discrepancy being the transition dynamics model. Recent methods for scalable ILO utilize adversarial learning to match the state-transition distributions of the expert and the learner, an approach that becomes challenging when the dynamics are dissimilar. In this work, we propose an algorithm that trains an intermediary policy in the learner environment and uses it as a surrogate expert for the learner. The intermediary policy is learned such that the state transitions generated by it are close to the state transitions in the expert dataset. To derive a practical and scalable algorithm, we employ concepts from prior work on estimating the support of a probability distribution. Experiments using MuJoCo locomotion tasks highlight that our method compares favorably to the baselines for ILO with transition dynamics mismatch. | Accept (Poster) | The submitted paper considers the very interesting problem of imitation learning from observations under transition model disparity. The reviewers recommended 2x weak accept and 1x weak reject for the paper. Main concerns about the paper regarded clarity of the presentation, complexity of the proposed method, and experimental validation. During the discussion phase, the authors addressed some of the comments and provided an update of the paper with additional details. While some of the reviewers' concerns still stand, I think the addressed problem is very relevant and the proposed method can (with clarifications and improvements of the presentation) be interesting to parts of the community. Hence I am recommending acceptance of the paper. Nevertheless, I strongly urge the authors to carefully revise their paper, taking the reviewers' concerns carefully into account when preparing the camera-ready version of the paper.
"KP4PRl-im1g",
"JJAOzK9_HLE",
"_i6GVCnFNr1",
"8glRddtmsDm",
"7YO5uXmgKne",
"m8CaCjIz4W",
"MGIj4eJ1jIO",
"yU0lNMzbJiX",
"F7_PBn-evoz",
"HF73CFnC1k_",
"L7JGjOFQVsE"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We really appreciate the reviewer for taking our rebuttal into consideration and raising their rating of the paper. As suggested by the reviewer, we would blend the contents from the rebuttal (Appendix A.3 through A.6) into the main text to develop better intuition on the role of the advisor and improve the gener... | [
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"8glRddtmsDm",
"m8CaCjIz4W",
"iclr_2022_twv2QlJhXzo",
"yU0lNMzbJiX",
"iclr_2022_twv2QlJhXzo",
"MGIj4eJ1jIO",
"7YO5uXmgKne",
"_i6GVCnFNr1",
"L7JGjOFQVsE",
"iclr_2022_twv2QlJhXzo",
"iclr_2022_twv2QlJhXzo"
] |
iclr_2022_OqcZu8JIIzS | Pareto Policy Pool for Model-based Offline Reinforcement Learning | Online reinforcement learning (RL) can suffer from poor exploration, sparse reward, insufficient data, and overhead caused by inefficient interactions between an immature policy and a complicated environment. Model-based offline RL instead trains an environment model using a dataset of pre-collected experiences so online RL methods can learn in an offline manner by solely interacting with the model. However, the uncertainty and accuracy of the environment model can drastically vary across different state-action pairs so the RL agent may achieve high model return but perform poorly in the true environment. Unlike previous works that need to carefully tune the trade-off between the model return and uncertainty in a single objective, we study a bi-objective formulation for model-based offline RL that aims at producing a pool of diverse policies on the Pareto front performing different levels of trade-offs, which provides the flexibility to select the best policy for each realistic environment from the pool. Our method, ''Pareto policy pool (P3)'', does not need to tune the trade-off weight but can produce policies allocated at different regions of the Pareto front. For this purpose, we develop an efficient algorithm that solves multiple bi-objective optimization problems with distinct constraints defined by reference vectors targeting diverse regions of the Pareto front. We theoretically prove that our algorithm can converge to the targeted regions. In order to obtain more Pareto optimal policies without linearly increasing the cost, we leverage the achieved policies as initialization to find more Pareto optimal policies in their neighborhoods. On the D4RL benchmark for offline RL, P3 substantially outperforms several recent baseline methods over multiple tasks, especially when the quality of pre-collected experiences is low. | Accept (Poster) | Previous approaches to model-based offline RL require carefully tuning the trade-off between model return and uncertainty. The authors propose an approach that produces a diverse pool of policies on the Pareto front of this tradeoff. On the D4RL offline RL benchmark, P3 outperforms competing approaches when the experience is collected with low or medium return policies.
Before the rebuttal, reviewers identified the following primary concerns:
* The experimental evaluation of P3 uses many policy evaluations to select the policy, which results in an unfair comparison with existing methods.
* P3 underperforms existing methods on some datasets. Why?
Overall, reviewers were satisfied by the response and raised their scores accordingly. The authors responded by including a modification of P3 that uses FQE to select the policy for evaluation, resolving the first concern. The authors explain that P3 underperforms on high return datasets because it splits its updates across the pool of policies. The authors state, "We believe (and in theory it does hold) that P3 can achieve the same performance of UWAC on high-quality datasets, if provided with more computational budget." I suggest that the authors conduct at least one experiment to verify this claim.
The proposed idea is interesting and the revisions the authors have made resolved the primary concerns from reviewers, so I recommend acceptance. The reviewer/author discussion has many substantial points that I recommend the authors integrate into the revision. | val | [
"GO9cva02-rg",
"g5awlopLCxh",
"1V6sAmIBhy6",
"DXJosuR7IU",
"AgyDRZ-6hzS",
"tDx5pwgodlL",
"pJRvUdaVui_",
"ddmKw4c9pZ6",
"RifCHyVIPCF",
"nGmG-gFw7t",
"oc6EEJ1wb2C",
"bTmwYbTDRCr",
"cc1RSAAbmOj",
"eMprrMmHOFe",
"eZCqlw2PYMK",
"44p-8RyJB_7",
"RlsMafHyBoC",
"MVk7dYolUE"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for the response! Here are our detailed replies to your remaining concerns.\n\n### **Q1** *The reply does not address why P3 performs worse than the UWAC baseline for high-quality datasets...*\n\n- The reason is that we compared P3 with UWAC (and all baselines) using the same number of gradient updates (i... | [
-1,
8,
-1,
-1,
6,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"tDx5pwgodlL",
"iclr_2022_OqcZu8JIIzS",
"DXJosuR7IU",
"44p-8RyJB_7",
"iclr_2022_OqcZu8JIIzS",
"cc1RSAAbmOj",
"ddmKw4c9pZ6",
"eZCqlw2PYMK",
"iclr_2022_OqcZu8JIIzS",
"iclr_2022_OqcZu8JIIzS",
"iclr_2022_OqcZu8JIIzS",
"AgyDRZ-6hzS",
"eMprrMmHOFe",
"MVk7dYolUE",
"RifCHyVIPCF",
"bTmwYbTDRCr"... |
iclr_2022_B72HXs80q4 | Taming Sparsely Activated Transformer with Stochastic Experts | Sparsely activated models (SAMs), such as Mixture-of-Experts (MoE), can easily scale to have outrageously large amounts of parameters without significant increase in computational cost. However, SAMs are reported to be parameter inefficient such that larger models do not always lead to better performance. While most on-going research focuses on improving SAMs models by exploring methods of routing inputs to experts, our analysis reveals that such research might not lead to the solution we expect, i.e., the commonly-used routing methods based on gating mechanisms do not work better than randomly routing inputs to experts. In this paper, we propose a new expert-based model, THOR ($\underline{\textbf{T}}$ransformer wit$\underline{\textbf{H}}$ St$\underline{\textbf{O}}$chastic Expe$\underline{\textbf{R}}$ts). Unlike classic expert-based models, such as the Switch Transformer, experts in THOR are randomly activated for each input during training and inference. THOR models are trained using a consistency regularized loss, where experts learn not only from training data but also from other experts as teachers, such that all the experts make consistent predictions. We validate the effectiveness of THOR on machine translation tasks. Results show that THOR models are more parameter efficient in that they significantly outperform the Transformer and MoE models across various settings. For example, in multilingual translation, THOR outperforms the Switch Transformer by 2 BLEU scores, and obtains the same BLEU score as that of a state-of-the-art MoE model that is 18 times larger. Our code is publicly available at: https://github.com/microsoft/Stochastic-Mixture-of-Experts. | Accept (Poster) | In this paper, the authors introduce a simple mixture-of-experts model, by greatly simplifying the routing mechanism: experts are randomly activated both at train and inference time. A consistency loss function is added for training the proposed models, enforcing all experts to make consistent predictions. The proposed method, called THOR, is evaluated on machine translation tasks, including multi-lingual MT, and outperforms the recently proposed Switch Transformer MoE.
The reviews note that the paper is well written and easy to follow, and that the proposed method is simple. While the results look promising, the reviewers also raised concerns regarding comparisons to previous work, some of which were addressed in the rebuttal. Finally, a reviewer raised the concern that this method is related to ensembles, which work well for machine translation, but are not discussed or compared to. For these reasons, I believe that the paper is borderline, leaning toward acceptance. | train | [
"_4yXU_uIaHB",
"ROd7PhEy-tP",
"ia0RN-BuHNi",
"LM_e_K_zqYB",
"Ir9OXtgt-A4",
"h9w7O_kOpT2",
"_UhNY7Htar",
"-t_aip8a9Q",
"RK90k_kJh4",
"DDWJsMp9iF7",
"_MtI0w3G_oA",
"EzeSYBdAkbX",
"M4DYajomwYz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new routing mechanism for sparse models in the context of language tasks. Rather than learning a parametric router that learns how to assign tokens to experts, the proposed algorithm (THOR), randomly selects two experts per mini batch, and applies those experts independently to every input. A ... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"iclr_2022_B72HXs80q4",
"RK90k_kJh4",
"EzeSYBdAkbX",
"M4DYajomwYz",
"_4yXU_uIaHB",
"M4DYajomwYz",
"M4DYajomwYz",
"EzeSYBdAkbX",
"_4yXU_uIaHB",
"_MtI0w3G_oA",
"iclr_2022_B72HXs80q4",
"iclr_2022_B72HXs80q4",
"iclr_2022_B72HXs80q4"
] |
iclr_2022_rHMaBYbkkRJ | CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability | What is the state of the art in continual machine learning? Although a natural question for predominant static benchmarks, the notion of training systems in a lifelong manner entails a plethora of additional challenges with respect to set-up and evaluation. The latter have recently sparked a growing number of critiques on prominent algorithm-centric perspectives and evaluation protocols being too narrow, resulting in several attempts at constructing guidelines in favor of specific desiderata or arguing against the validity of prevalent assumptions. In this work, we depart from this mindset and argue that the goal of a precise formulation of desiderata is an ill-posed one, as diverse applications may always warrant distinct scenarios. Instead, we introduce the Continual Learning EValuation Assessment Compass: the CLEVA-Compass. The compass provides the visual means to both identify how approaches are practically reported and how works can simultaneously be contextualized in the broader literature landscape. In addition to promoting compact specification in the spirit of recent replication trends, it thus provides an intuitive chart to understand the priorities of individual systems, where they resemble each other, and what elements are missing towards a fair comparison. | Accept (Poster) | This review paper presents a way of comparatively assessing continual learning. Reviewers all agreed that this work is interesting and unique, with comprehensive coverage of the CL space. The proposed categorization, CLEVA-Compass, and its GUI have great potential to facilitate future CL work.
"DlDQr5e6yiQ",
"_xXRwcZUmR",
"RV7R8E8aRZQ",
"jDBSfiOt7pW",
"6KU08zaFpsd",
"lGojlzenebE",
"P34RosWP55U",
"7HkaXdfdpSl",
"UU7OrUOL5D1",
"_qxkYVbiVlK",
"Hk1nED_X7i",
"_RqyZO2-lQh",
"z4MT3NdXLv"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear reviewer, thank you once again for your original review and the constructive feedback. \n\nAccording to the ICLR timeline, it seems like the discussion period will be ending later today. \nWe understand that the discussion period in ICLR can be very time consuming. For this reason, we have done our best to s... | [
-1,
8,
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
5,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"Hk1nED_X7i",
"iclr_2022_rHMaBYbkkRJ",
"_RqyZO2-lQh",
"iclr_2022_rHMaBYbkkRJ",
"_qxkYVbiVlK",
"iclr_2022_rHMaBYbkkRJ",
"UU7OrUOL5D1",
"iclr_2022_rHMaBYbkkRJ",
"lGojlzenebE",
"jDBSfiOt7pW",
"z4MT3NdXLv",
"_xXRwcZUmR",
"iclr_2022_rHMaBYbkkRJ"
] |
iclr_2022_hzmQ4wOnSb | GNN is a Counter? Revisiting GNN for Question Answering | Question Answering (QA) has been a long-standing research topic in AI and NLP fields, and a wealth of studies has been conducted to attempt to equip QA systems with human-level reasoning capability. To approximate the complicated human reasoning process, state-of-the-art QA systems commonly use pre-trained language models (LMs) to access knowledge encoded in LMs together with elaborately designed modules based on Graph Neural Networks (GNNs) to perform reasoning over knowledge graphs (KGs). However, many problems remain open regarding the reasoning functionality of these GNN-based modules. Can these GNN-based modules really perform a complex reasoning process? Are they under- or over-complicated for QA? To open the black box of GNN and investigate these problems, we dissect state-of-the-art GNN modules for QA and analyze their reasoning capability. We discover that even a very simple graph neural counter can outperform all the existing GNN modules on CommonsenseQA and OpenBookQA, two popular QA benchmark datasets which heavily rely on knowledge-aware reasoning. Our work reveals that existing knowledge-aware GNN modules may only carry out some simple reasoning such as counting. It remains a challenging open problem to build comprehensive reasoning modules for knowledge-powered QA. | Accept (Poster) | This paper proposes a graph soft counter (GSC) model which is very simple and lightweight compared to the conventional graph neural network for solving QA tasks that benefit from knowledge graphs. Compared to the conventional KG-GNN combination, the proposed method is much simpler but produces better results for QA tasks. The paper originally dealt only with multiple-choice QA tasks, but during the rebuttal process, the authors added more complex QA tasks which the reviewers appreciated. Additionally, there was (and still remains) some concern over the exact reasons and mechanisms behind this "too good to be true" result, and the authors addressed this with additional ablation studies, to be included in the appendix. With the publicly released code, others will be able to try GSC and its too-good-to-be-true performance and figure out how it actually works. | train | [
"CzgPOJ1Md5Z",
"OeyWOR717Jx",
"3p2kyXU4PPb",
"RhEzuyekfkm",
"vDRUq8MrsCU",
"uuVzXfyeFR2",
"OvePgUIYhU",
"Rqqkrsi-Mha",
"JDpbrzV01vj",
"GFjZzveZtLJ",
"YOUo6M7vHM",
"mxTbXWuOgQQ",
"sXP_VNeAXsL",
"Pf4Yw7uw4PI",
"iXn-PP5kPLL",
"E-KLqQfqQKR",
"qLa1KVlS6iy",
"qQat5t3WvmF"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your additional comments and advice. We will elaborate on these points as follows.\n\n ---\n\n1. ___Subgraph of MetaQA___\n\n For the k-hop (k=1,2,3) split, we retrieve the k-hop subgraph of the entity in the question, and we also use a k-layer GSC to process it since one layer corresponds to one ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
4,
3,
5
] | [
"3p2kyXU4PPb",
"3p2kyXU4PPb",
"Rqqkrsi-Mha",
"qQat5t3WvmF",
"uuVzXfyeFR2",
"OvePgUIYhU",
"qLa1KVlS6iy",
"qQat5t3WvmF",
"qLa1KVlS6iy",
"sXP_VNeAXsL",
"E-KLqQfqQKR",
"iclr_2022_hzmQ4wOnSb",
"iclr_2022_hzmQ4wOnSb",
"qQat5t3WvmF",
"sXP_VNeAXsL",
"iclr_2022_hzmQ4wOnSb",
"iclr_2022_hzmQ4wO... |
iclr_2022_shpkpVXzo3h | 8-bit Optimizers via Block-wise Quantization | Stateful optimizers maintain gradient statistics over time, e.g., the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values. This state can be used to accelerate optimization significantly, compared to plain stochastic gradient descent, but uses memory that might otherwise be allocated to model parameters, thereby limiting the maximum size of models trained in practice. In this paper, we develop the first optimizers that use 8-bit statistics while maintaining the performance levels of using 32-bit optimizer states. To overcome the resulting computational, quantization, and stability challenges, we develop block-wise dynamic quantization. Block-wise quantization divides input tensors into smaller blocks that are independently quantized. Each block is processed in parallel across cores, yielding faster optimization and high precision quantization. To maintain stability and performance, we combine block-wise quantization with two additional changes: (1) dynamic quantization, a form of non-linear quantization that is precise for both large and small magnitude values, and (2) a stable embedding layer to reduce gradient variance that comes from the highly non-uniform distribution of input tokens in language models. As a result, our 8-bit optimizers maintain 32-bit performance with a small fraction of the memory footprint on a range of tasks, including 1.5B parameter language modeling, GLUE finetuning, ImageNet classification, WMT'14 machine translation, MoCo v2 contrastive ImageNet pretraining+finetuning, and RoBERTa pretraining, without changes to the original optimizer hyperparameters. We open-source our 8-bit optimizers as a drop-in replacement that only requires a two-line code change. | Accept (Spotlight) | This paper proposes Adam and Momentum optimizers where the optimizer state variables are quantized to 8 bits using block-wise dynamic quantization. These modifications significantly reduce the memory requirements of training models with many parameters (mainly NLP models). These are useful contributions which will enable training even larger models than possible today. All reviewers were positive.
"P2PkmXFq_hs",
"fuLheAd9jKJ",
"PWzRo82xR07",
"AujXwsKEuE",
"gqR3KVGvFw",
"0w2FeCNypI",
"tCKm8GArfh1",
"nQUCrmrRGhH",
"UII0zI5XU7H",
"V0Xr7mGXj7_",
"8hIwjJPwVJO",
"UYVp_Ja8h6K",
"O2d1TdTc3d"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for the helpful discussion! We will correct and extend (1) and (2) in our next draft. We agree that the optimizers suggested in (4) would be very insightful and interesting for future work. Since we open-source our software, we hope it will be easy for the community to add these optimizers.\n\nThank you... | [
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"PWzRo82xR07",
"iclr_2022_shpkpVXzo3h",
"gqR3KVGvFw",
"iclr_2022_shpkpVXzo3h",
"iclr_2022_shpkpVXzo3h",
"tCKm8GArfh1",
"nQUCrmrRGhH",
"UII0zI5XU7H",
"V0Xr7mGXj7_",
"AujXwsKEuE",
"O2d1TdTc3d",
"fuLheAd9jKJ",
"iclr_2022_shpkpVXzo3h"
] |
iclr_2022_u2GZOiUTbt | Task Affinity with Maximum Bipartite Matching in Few-Shot Learning | We propose an asymmetric affinity score for representing the complexity of utilizing the knowledge of one task for learning another one. Our method is based on the maximum bipartite matching algorithm and utilizes the Fisher Information matrix. We provide theoretical analyses demonstrating that the proposed score is mathematically well-defined, and subsequently use the affinity score to propose a novel algorithm for the few-shot learning problem. In particular, using this score, we find relevant training data labels to the test data and leverage the discovered relevant data for episodically fine-tuning a few-shot model. Results on various few-shot benchmark datasets demonstrate the efficacy of the proposed approach by improving the classification accuracy over the state-of-the-art methods even when using smaller models. | Accept (Poster) | This paper proposes a few-shot learning method that uses Fisher information matrix-based task affinity. The experimental results show that the proposed method achieved better performance than existing methods. This paper is well-written. The newly proposed task affinity score is interesting. The experimental results and theoretical analysis support the effectiveness of the proposed method. The authors are encouraged to address the reviewers' concerns in the paper. Although the distance between task representations is symmetric in neural processes, they do not use the symmetric distance for meta-learning. They input the task representations into the neural network, so the output can be asymmetric. | train | [
"i3yqLKCjm2q",
"JL7ydvVuX-K",
"bmxRZmNzlUe",
"GhqB6FWffJE",
"KRePbukEMz",
"y4IUqc-6_65"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a task affinity score based on maximum bipartite matching algorithm and Fisher information matrix. And then utilize this score to find the closest training data labels to the test data and leverage the discovered relevant data for episodically fine-tuning the few-shot model. Experimental result... | [
6,
-1,
-1,
-1,
3,
8
] | [
4,
-1,
-1,
-1,
3,
3
] | [
"iclr_2022_u2GZOiUTbt",
"i3yqLKCjm2q",
"y4IUqc-6_65",
"KRePbukEMz",
"iclr_2022_u2GZOiUTbt",
"iclr_2022_u2GZOiUTbt"
] |
iclr_2022_irARV_2VFs4 | Focus on the Common Good: Group Distributional Robustness Follows | We consider the problem of training a classification model with group-annotated training data. Recent work has established that, if there is distribution shift across different groups, models trained using the standard empirical risk minimization (ERM) objective suffer from poor performance on minority groups and that the group distributionally robust optimization (Group-DRO) objective is a better alternative. The starting point of this paper is the observation that though Group-DRO performs better than ERM on minority groups for some benchmark datasets, there are several other datasets where it performs much worse than ERM. Inspired by ideas from the closely related problem of domain generalization, this paper proposes a new and simple algorithm that explicitly encourages learning of features that are shared across various groups. The key insight behind our proposed algorithm is that while Group-DRO focuses on groups with the worst regularized loss, focusing instead on groups that enable better performance even on other groups could lead to learning of shared/common features, thereby enhancing minority performance beyond what is achieved by Group-DRO. Empirically, we show that our proposed algorithm matches or achieves better performance compared to strong contemporary baselines including ERM and Group-DRO on standard benchmarks, on both minority groups and across all groups. Theoretically, we show that the proposed algorithm is a descent method and finds first-order stationary points of smooth nonconvex functions. | Accept (Poster) | The manuscript proposes a method for addressing spurious correlations and the sub-population (group) shift problem by modelling intergroup interactions. Past work (GroupDRO) focuses on the worst group, which is subject to failure when groups have heterogeneous levels of noise and transfer. This work focuses on the group whose gradient leads to the largest decrease in average training loss over all groups. The manuscript presents insights on why the proposed method, called CGD, may perform better than GroupDRO by studying simple synthetic settings. The manuscript also provides empirical evaluation on seven real-world datasets, which include two text and five image tasks with a mix of sub-population and domain shifts.
There are several positive aspects of the manuscript, including:
1. The idea of training on the group which leads to the largest overall decrease in loss is natural and interesting;
2. The synthetic examples presented in the manuscript clearly bring out the use cases of the method proposed and comparison with GroupDRO;
3. The empirical results presented lead to improved results on a variety of benchmark tasks.
There are also several major concerns, including:
1. More discussion on why the proposed method works for the chosen real world datasets by connecting them to the synthetic setups presented in the manuscript;
2. The proposed algorithm does not minimize a specific loss function;
3. The standard benchmarks are altered. For example, the CivilComments dataset is shown as a 2-group task when it is originally an 8-group task (the groups being the demographics of the users), as shown in the WILDS dataset paper.
The authors clarified, among other things, that the proposed approach optimizes the macro-average loss function, and that the standard benchmarks are not modified: the experimental setup is exactly like the GroupDRO evaluation on the CivilComments-WILDS dataset. Reviewers noted that the generative model has not added anything new, since it is essentially the synthetic example, and that it just shows what every robust machine learning method is supposed to do, i.e., rely not on e_s (group-specific components) but on e_c (common components) when making predictions. It doesn't justify the procedure of focusing on the group whose update most decreases the other groups' errors.
The revised manuscript includes a clearer motivation and more discussion on how the synthetic examples connect to the real-world datasets. Based on that, I recommend acceptance.
"MXg8I1NOz8",
"LkOgO0yq69Q",
"6TntjYMtP0d",
"iv0PEeBYvgY",
"M2IjZ2R9azx",
"ZpY-Q8h3Pqk",
"2cDcgkHGpaN",
"nN3HwKBaOxK",
"L85QoS8Wbz",
"CQtXfw6W09",
"rKfj51T4aAD"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. Here is a simple Generative Model to demonstrate the rationale behind CGD.\n\n**Setting**: \nFor a group $i \\in \\{1,2,3\\}$, and label y, let the data from group i be generated as $x = y(e_c + \\beta_i e_s) + N(0, 1), \\forall i \\in [D]$ for some common and group-specific components... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nN3HwKBaOxK",
"iclr_2022_irARV_2VFs4",
"iclr_2022_irARV_2VFs4",
"L85QoS8Wbz",
"iclr_2022_irARV_2VFs4",
"rKfj51T4aAD",
"LkOgO0yq69Q",
"CQtXfw6W09",
"iclr_2022_irARV_2VFs4",
"iclr_2022_irARV_2VFs4",
"iclr_2022_irARV_2VFs4"
] |
iclr_2022__xwr8gOBeV1 | Geometric and Physical Quantities improve E(3) Equivariant Message Passing | Including covariant information, such as position, force, velocity or spin, is important in many tasks in computational physics and chemistry. We introduce Steerable E($3$) Equivariant Graph Neural Networks (SEGNNs) that generalise equivariant graph networks, such that node and edge attributes are not restricted to invariant scalars, but can contain covariant information, such as vectors or tensors. Our model, composed of steerable MLPs, is able to incorporate geometric and physical information in both the message and update functions.
Through the definition of steerable node attributes, the MLPs provide a new class of activation functions for general use with steerable feature fields. We discuss ours and related work through the lens of equivariant non-linear convolutions, which further allows us to pin-point the successful components of SEGNNs: non-linear message aggregation improves upon classic linear (steerable) point convolutions; steerable messages improve upon recent equivariant graph networks that send invariant messages. We demonstrate the effectiveness of our method on several tasks in computational physics and chemistry and provide extensive ablation studies. | Accept (Spotlight) | This work combines steerable MLPs with equivariant message passing layers to form Steerable E(3) Equivariant GNNs (SEGNNs). It extends previous work such as SchNet and EGNNs by allowing equivariant tensor messages (in contrast to scalar or vector messages). The paper also provides a unifying view of related work, which is a nice overview for the ML community. It is overall well written, but would benefit from further revision to improve readability in some parts (in particular Section 3, cf. reviews).
It shows strong empirical results on the IS2RE task of the OC20 dataset and mixed results on the QM9 dataset. | train | [
"Jw-DfoaDu8n",
"1kVen_62dP2",
"MeeIpntEdt8",
"EIJvpwOheB",
"YTnaVCO-NP",
"FAL8BDQrE3e",
"fwzHYn5rT8K",
"FftxSFhk4-8",
"ow5g9GcUBI",
"M5DY-FzWfv_",
"sn4nCOedH_7",
"YC1huMcrzlX",
"jUEjvAvJq46",
"A2Mt8fA9_P9",
"EF9KGA45cWQ",
"lOBqT28XrMI",
"stqZEaFUDJ",
"u9iJDAO9U2-",
"8O8P5KQEC9C"
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"... | [
" Sorry for my very very late response. I appreciate authors detailed explanations and their effort on improving readability. This work stands out more as a complete guide for \"what, how\" on equivariant message passing layers. This should act as a catalyst for future works and hence the increase in my score.",
... | [
-1,
6,
-1,
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
10
] | [
-1,
4,
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"1kVen_62dP2",
"iclr_2022__xwr8gOBeV1",
"jUEjvAvJq46",
"iclr_2022__xwr8gOBeV1",
"YC1huMcrzlX",
"FftxSFhk4-8",
"iclr_2022__xwr8gOBeV1",
"ow5g9GcUBI",
"fwzHYn5rT8K",
"fwzHYn5rT8K",
"8O8P5KQEC9C",
"u9iJDAO9U2-",
"EIJvpwOheB",
"stqZEaFUDJ",
"1kVen_62dP2",
"iclr_2022__xwr8gOBeV1",
"iclr_2... |
iclr_2022_Lm8T39vLDTE | Autoregressive Diffusion Models | We introduce Autoregressive Diffusion Models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models (Uria et al., 2014) and absorbing discrete diffusion (Austin et al., 2021), which we show are special cases of ARDMs under mild assumptions. ARDMs are simple to implement and easy to train. Unlike standard ARMs, they do not require causal masking of model representations, and can be trained using an efficient objective similar to modern probabilistic diffusion models that scales favourably to highly-dimensional data. At test time, ARDMs support parallel generation which can be adapted to fit any given generation budget. We find that ARDMs require significantly fewer steps than discrete diffusion models to attain the same performance. Finally, we apply ARDMs to lossless compression, and show that they are uniquely suited to this task. Contrary to existing approaches based on bits-back coding, ARDMs obtain compelling results not only on complete datasets, but also on compressing single data points. Moreover, this can be done using a modest number of network calls for (de)compression due to the model's adaptable parallel generation. | Accept (Poster) | This paper introduces Autoregressive Diffusion Models (ARDMs), which generalises order-agnostic autoregressive models and absorbing discrete diffusion.
All reviewers appreciated the paper with a few also finding it very dense. The experimental section is a bit lacking in detail. This has to some degree been answered in the discussion and should also be included in the final version of the paper.
Acceptance is recommended. | test | [
"Y7U17DNsQdB",
"z_8ZsRVdi91u",
"RpQpwiQnph4",
"OB6cxJEbYix",
"-31smWXse6X",
"jkox3MxhLPl",
"chP57W5Kwdi",
"ew7Hb9AG7-",
"de1Ir-T9WjF",
"cyoOyBkJos",
"25MKzkgK9Y3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response and the clarification. For the meantime, I keep my score.",
" Thank you for your feedback and for answering the weak points that I have mentioned. I consider these weak points answered and solved. \n\nThe DP approach remains the part that I am the least confident in reviewing. I hope ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"chP57W5Kwdi",
"OB6cxJEbYix",
"-31smWXse6X",
"de1Ir-T9WjF",
"25MKzkgK9Y3",
"cyoOyBkJos",
"ew7Hb9AG7-",
"iclr_2022_Lm8T39vLDTE",
"iclr_2022_Lm8T39vLDTE",
"iclr_2022_Lm8T39vLDTE",
"iclr_2022_Lm8T39vLDTE"
] |
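Annotation note for the ARDM row above: the training objective is order-agnostic, i.e. mask a random subset of dimensions, predict them, and reweight. A minimal sketch of one training step is below; `model` and `mask_token` are placeholders, and the reweighting follows the standard order-agnostic likelihood bound rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def ardm_loss(model, x, mask_token):
    """Order-agnostic step. x: (batch, dim) token ids;
    model(inputs) -> (batch, dim, vocab) logits."""
    b, d = x.shape
    t = torch.randint(1, d + 1, (b, 1))               # step t ~ U{1..d}
    ranks = torch.argsort(torch.rand(b, d), dim=-1)   # random order sigma
    observed = ranks < (t - 1)                        # first t-1 dims known
    inputs = torch.where(observed, x, torch.full_like(x, mask_token))
    nll = F.cross_entropy(model(inputs).transpose(1, 2), x, reduction="none")
    masked = (~observed).float()
    # Average the NLL over the d - t + 1 masked dims, scaled by d: an
    # unbiased estimate of the order-averaged log-likelihood bound.
    return (d * (nll * masked).sum(-1) / masked.sum(-1)).mean()
```

Note there is no causal masking anywhere: the network sees a partially masked input and predicts all masked positions in parallel, which is also what enables parallel generation at test time.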
iclr_2022_2bO2x8NAIMB | Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative | In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks. However, widely used methods for leveraging auxiliary data like pre-training and its continued-pretraining variant are end-task agnostic: they rarely, if ever, exploit knowledge of the target task. We study replacing end-task agnostic continued training of pre-trained language models with end-task aware training of said models. We argue that for sufficiently important end-tasks, the benefits of leveraging auxiliary data in a task-aware fashion can justify forgoing the traditional approach of obtaining generic, end-task agnostic representations as with (continued) pre-training. On three different low-resource NLP tasks from two domains, we demonstrate that multi-tasking the end-task and auxiliary objectives results in significantly better downstream task performance than the widely-used task-agnostic continued pre-training paradigm of Gururangan et al. (2020).
We next introduce an online meta-learning algorithm that learns a set of multi-task weights to better balance among our multiple auxiliary objectives, achieving further improvements on end-task performance and data efficiency. | Accept (Poster) | The authors argue in favor of task-aware continued pretraining and demonstrate through experiments that using objectives based on the end-task during continued pretraining help in improving downstream performance.
The reviewers generally appreciated the motivation, the formal treatment of the topic, and the thoroughness of the experiments. There were some concerns about (i) the positioning of the paper (pretraining as opposed to continued pre-training), (ii) thorough comparison with other MTL frameworks, (iii) evaluating on more datasets, (iv) the cost of continued pretraining for each task, (v) the benefit of META-TARTAN over MT-TARTAN only in specific settings, and (vi) the lack of surprise/novelty in the results.
IMO, the authors have adequately addressed ALL the above concerns raised by the reviewers. Further, despite the above concerns, all reviewers agree that the problem is well motivated and of interest to the community and most aspects of this work are thorough. The findings will be useful and may spawn other work in this area. | train | [
"iS_jVN-wT9Q",
"md95YYuUdvI",
"49HmpxGaw6z",
"HnnE_mW8rT7",
"_pd3Lfo7PsG",
"wEZtIxrwrw-",
"DVMQ-yWwQ8G",
"-oJnffnHqgz",
"zfPwpGCM6z",
"8HHViCEIym9",
"-uwpkCKQCOx",
"2zTspBhxFau"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification and additional results! \n\nI agree with other reviewers that the framing of the paper is a little misleading. I see the authors have updated their manuscript. Still, I feel in the current version, the transition from \"pre-training\" to \"continued pre-training\" in the abstract i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"HnnE_mW8rT7",
"iclr_2022_2bO2x8NAIMB",
"2zTspBhxFau",
"-uwpkCKQCOx",
"wEZtIxrwrw-",
"8HHViCEIym9",
"zfPwpGCM6z",
"iclr_2022_2bO2x8NAIMB",
"iclr_2022_2bO2x8NAIMB",
"iclr_2022_2bO2x8NAIMB",
"iclr_2022_2bO2x8NAIMB",
"iclr_2022_2bO2x8NAIMB"
] |
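Annotation note for the TARTAN row above: META-TARTAN learns multi-task weights online. One plausible realisation, moving each auxiliary task's weight in proportion to how well its gradient aligns with the end-task gradient, is sketched below; the cosine-alignment rule and `meta_lr` are assumptions for illustration, not the paper's exact update.

```python
import torch
import torch.nn.functional as F

def update_task_weights(task_grads, end_task_grad, logits, meta_lr=0.1):
    """task_grads: list of flattened per-task gradients; end_task_grad: the
    flattened gradient of the end-task (e.g. dev) loss; logits: learnable
    pre-softmax task weights. Returns the normalized task weights."""
    with torch.no_grad():
        for i, g in enumerate(task_grads):
            align = F.cosine_similarity(g, end_task_grad, dim=0)
            logits[i] += meta_lr * align      # upweight aligned aux tasks
    return torch.softmax(logits, dim=0)
```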
iclr_2022_gEZrGCozdqR | Finetuned Language Models are Zero-Shot Learners | This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning—finetuning language models on a collection of datasets described via instructions—substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction tune it on over 60 NLP datasets verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 datasets that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that the number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning. | Accept (Oral) | This paper examines the extent to which a large language model (LM) can generalize to unseen tasks via "instruction tuning", a process that fine-tunes the LM on a large number of tasks with natural language instructions. At test time, the model is evaluated zero-shot on held-out tasks. The empirical results are good, and the 137B FLAN model generally outperforms the 175B untuned GPT-3 model.
All reviewers voted to accept with uniformly high scores, despite two commenting on the relative lack of novelty. The discussion period focused on questions raised by two reviewers regarding the usefulness of fine-tuning with instructions vs. multi-task fine-tuning without instructions. The authors responded with an ablation study demonstrating that providing instructions during tuning led to large gains.
Overall the paper's approach and detailed experiments will be useful for other researchers working in this fast moving area in NLP. | train | [
"VskjS1Ib8WW",
"9P2LppCNA_I",
"akY6zaAiO3j",
"o4i-yMBR6P7",
"_WmfFevYf-i",
"bhkzRI53-kP",
"AzOhp5NhChC",
"smM5zzVFkqe",
"edlNL-dBHLT",
"-YgUOpXUim",
"g-U7KRV0UCv",
"G8KXj5zrhxo",
"EVEmcRlRkAc"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The new appendix B2 results are great and address my primary concern. One final curiosity - did the fine-tuning variants do just as well on held-in datasets?\n\nOverall I still mostly stand by my original assessment of the paper. I think the paper is still a good paper, and probably would be in the top 10-20% o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"smM5zzVFkqe",
"o4i-yMBR6P7",
"iclr_2022_gEZrGCozdqR",
"EVEmcRlRkAc",
"bhkzRI53-kP",
"G8KXj5zrhxo",
"g-U7KRV0UCv",
"edlNL-dBHLT",
"-YgUOpXUim",
"iclr_2022_gEZrGCozdqR",
"iclr_2022_gEZrGCozdqR",
"iclr_2022_gEZrGCozdqR",
"iclr_2022_gEZrGCozdqR"
] |
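Annotation note for the FLAN row above: instruction tuning hinges on verbalizing each dataset with natural-language templates. The snippet below shows the mechanics on an NLI example; the template wordings are made up for illustration and are not the paper's released templates.

```python
import random

NLI_TEMPLATES = [  # illustrative wordings, not FLAN's actual templates
    "Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Does the premise entail the hypothesis? OPTIONS: {options}",
    '{premise}\nCan we conclude that "{hypothesis}"? OPTIONS: {options}',
]
OPTIONS = ("yes", "it is not possible to tell", "no")

def verbalize(example):
    """Turn one raw NLI example into an (instruction, target) training pair."""
    prompt = random.choice(NLI_TEMPLATES).format(
        premise=example["premise"],
        hypothesis=example["hypothesis"],
        options=" / ".join(OPTIONS))
    return {"input": prompt, "target": OPTIONS[example["label"]]}
```

At evaluation time the model is scored zero-shot on task types whose datasets never appeared in the instruction-tuning mixture.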
iclr_2022__SJ-_yyes8 | Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning | We present DrQ-v2, a model-free reinforcement learning (RL) algorithm for visual continuous control. DrQ-v2 builds on DrQ, an off-policy actor-critic approach that uses data augmentation to learn directly from pixels. We introduce several improvements that yield state-of-the-art results on the DeepMind Control Suite. Notably, DrQ-v2 is able to solve complex humanoid locomotion tasks directly from pixel observations, previously unattained by model-free RL. DrQ-v2 is conceptually simple, easy to implement, and provides a significantly better computational footprint compared to prior work, with the majority of tasks taking just 8 hours to train on a single GPU. Finally, we publicly release DrQ-v2's implementation to provide RL practitioners with a strong and computationally efficient baseline. | Accept (Poster) | The paper addresses various improvements in visual continuous RL, based on a previous RL algorithm (DrQ). As the reviewers point out, the main contribution of the paper is of an empirical nature, demonstrating how several different choices relative to DrQ significantly improve data efficiency and wall-clock computation, such that several control problems of the DeepMind Control Suite can be solved more efficiently. The average rating for the paper is above the acceptance threshold, and some reviewers increased their rating after their rebuttal. While a mostly empirically motivated paper is always a bit more controversial, it may nevertheless stimulate an interesting discussion at ICLR that will be beneficial for the community, and should thus be accepted. | test | [
"UN4iNrv7qW",
"wfMB7XL38n",
"FnMu5Urmf21",
"gojdLkNfjhn",
"OkH4SCICVK",
"90BDAsrFK6d",
"idvwKWcra8h",
"emIRLT0gk4",
"NYsSRZiBxS-",
"ziiEz2DYKyHh",
"635jKXPdlql",
"3FETYfScAz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks, the authors raised good points but they don't justify any change to my initial assessment. \n",
"This paper introduces DrQ-v2, an improvement over DrQ, a model free off-policy actor-critic approach which uses SAC+data augmentation to learn directly from pixels.\nDrQ-v2, on the other hand, switches SAC w... | [
-1,
8,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
5,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"ziiEz2DYKyHh",
"iclr_2022__SJ-_yyes8",
"NYsSRZiBxS-",
"idvwKWcra8h",
"iclr_2022__SJ-_yyes8",
"3FETYfScAz",
"OkH4SCICVK",
"iclr_2022__SJ-_yyes8",
"wfMB7XL38n",
"635jKXPdlql",
"iclr_2022__SJ-_yyes8",
"iclr_2022__SJ-_yyes8"
] |
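Annotation note for the DrQ-v2 row above: the augmentation at the heart of DrQ/DrQ-v2 is a small random shift of the image observation. A simplified integer-shift version is sketched below; the released DrQ-v2 code performs the shift with bilinear grid sampling, which this crop-based variant only approximates.

```python
import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """Replicate-pad then randomly re-crop: imgs (B, C, H, W) -> same shape."""
    b, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    tops = torch.randint(0, 2 * pad + 1, (b,)).tolist()
    lefts = torch.randint(0, 2 * pad + 1, (b,)).tolist()
    return torch.stack([padded[i, :, t:t + h, l:l + w]
                        for i, (t, l) in enumerate(zip(tops, lefts))])
```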
iclr_2022_P-pPW1nxf1r | HTLM: Hyper-Text Pre-Training and Prompting of Language Models | We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. 'class' and 'id' attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling '<title>' tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research. | Accept (Poster) | This paper develops a new large language model trained on 25TB of (simplified) HTML text data. The HTML tags provide valuable information about the document structure. The training adapted the BART denoising objectives (to inject noisy size hint to control generation length during training). The paper also studies various prompting methods for the model. The model achieves state-of-the-art performance on zero-shot summarization and several text classification tasks. Reviewers have found the motivation of pretraining with structured text convincing, and the results are good. | train | [
"9ltk6a1YIw8",
"DHmu1hqWsT",
"N5mSXPRIop",
"dlXJYNtINf",
"D_97KHVdNv7",
"nVVQA8tghsT",
"UU-YcB3Kh3E",
"Rfb3I5nM11q",
"iIdpKW5YTmI"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces HTLM which is a language model pretrained on a large-scale web crawl hyper-text data. There are several contributions in the paper:\n* A preprocessing step to filter out noisy components in the web pages is proposed. The resulting simplified format, Minimal-HTML (MHTML), is likely to be compo... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2022_P-pPW1nxf1r",
"dlXJYNtINf",
"iIdpKW5YTmI",
"9ltk6a1YIw8",
"Rfb3I5nM11q",
"UU-YcB3Kh3E",
"iclr_2022_P-pPW1nxf1r",
"iclr_2022_P-pPW1nxf1r",
"iclr_2022_P-pPW1nxf1r"
] |
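Annotation note for the HTLM row above: structured prompting is easy to illustrate, since zero-shot summarization becomes infilling the `<title>` element of a wrapped document. The mask-token spelling and the size-hint syntax below are assumptions standing in for the model's actual BART-style infilling token.

```python
def title_infill_prompt(document_text, size_hint=12):
    """Wrap a document so an infilling model 'summarizes' it via the title.

    <mask:N> is a hypothetical size-hinted infilling token (the paper trains
    with noisy size hints so generation length can be steered)."""
    return (f"<html><head><title><mask:{size_hint}></title></head>"
            f"<body>{document_text}</body></html>")
```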
iclr_2022_xMJWUKJnFSw | NodePiece: Compositional and Parameter-Efficient Representations of Large Knowledge Graphs | Conventional representation learning algorithms for knowledge graphs (KG) map each entity to a unique embedding vector.
Such a shallow lookup results in a linear growth of memory consumption for storing the embedding matrix and incurs high computational costs of working with real-world KGs.
Drawing parallels with subword tokenization commonly used in NLP, we explore the landscape of more parameter-efficient node embedding strategies with possibly sublinear memory requirements.
To this end, we propose NodePiece, an anchor-based approach to learn a fixed-size entity vocabulary.
In NodePiece, a vocabulary of subword/sub-entity units is constructed from anchor nodes in a graph with known relation types. Given such a fixed-size vocabulary, it is possible to bootstrap an encoding and embedding for any entity, including those unseen during training.
Experiments show that NodePiece performs competitively in node classification, link prediction, and relation prediction tasks, retaining less than 10% of explicit nodes in a graph as anchors and often having 10x fewer parameters. In particular, we show that a NodePiece-enabled model outperforms existing shallow models on a large OGB WikiKG 2 graph with 70x fewer parameters.
| Accept (Poster) | This paper presents a technique for compositionally constructing embeddings for nodes in knowledge graphs, hence reducing the memory requirements as well as allowing inductive learning. The reviewers find the direction promising and the approach novel and well-motivated. There were some concerns about the experiment results — Reviewer KuBz suggests including more baselines, Reviewer CpaB suggests trying NodePiece on single-relation graphs and Reviewer 2qcD notes that NodePiece lags behind the other approaches on some tasks. Most of these concerns seem to have been addressed in the author response and I tend to agree with the authors that single-relation graphs are out of the scope of this work. Reviewer X7aq also raised a concern about the claims made regarding (i) uniqueness of the hashes and (ii) sub-linearity of the approach. It is good to see that claim (ii) has been removed, but (i) is still present in many places — it would be good to add a discussion about why the hashes are highly likely to be unique in the final version. | test | [
"kYPDpL9Wp8",
"q23zt6RDCw",
"CGFVVI5hWE_",
"vbgx2d0vMkh",
"11JOonQYlJF",
"6VKHDISCnNp",
"KGn3ChENxQK",
"2FbMW0XdREl",
"mMT76cCV-ov",
"RqetbjggiOc",
"ly1-I-_s6x9",
"QbVODXL9Dxb",
"nc4rW4tssEl",
"LaGTs5qgwv7",
"vwf6eSNANGR",
"4rpj3g8OUN7",
"9chUGedGmEo",
"mwl0xPO7vuJ",
"EhIvputWoj2... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer X7aq, please let us know if our response addressed your concerns or if you have any further questions. ",
" Thank you for the comments.\n\nNodePiece can indeed be applied successfully to both transductive and inductive settings, with GNNs and non-GNN models. NodePiece’s novelty and strength lies i... | [
-1,
-1,
-1,
-1,
5,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"ogEps-UH_Mg",
"vbgx2d0vMkh",
"mwl0xPO7vuJ",
"11JOonQYlJF",
"iclr_2022_xMJWUKJnFSw",
"iclr_2022_xMJWUKJnFSw",
"6VKHDISCnNp",
"iclr_2022_xMJWUKJnFSw",
"RqetbjggiOc",
"iclr_2022_xMJWUKJnFSw",
"ogEps-UH_Mg",
"nc4rW4tssEl",
"6VKHDISCnNp",
"vwf6eSNANGR",
"4rpj3g8OUN7",
"9chUGedGmEo",
"11J... |
iclr_2022_jaLDP8Hp_gc | Visual Correspondence Hallucination | Given a pair of partially overlapping source and target images and a keypoint in the source image, the keypoint's correspondent in the target image can be either visible, occluded or outside the field of view. Local feature matching methods are only able to identify the correspondent's location when it is visible, while humans can also hallucinate its location when it is occluded or outside the field of view through geometric reasoning. In this paper, we bridge this gap by training a network to output a peaked probability distribution over the correspondent's location, regardless of this correspondent being visible, occluded, or outside the field of view. We experimentally demonstrate that this network is indeed able to hallucinate correspondences on pairs of images captured in scenes that were not seen at training-time. We also apply this network to an absolute camera pose estimation problem and find it is significantly more robust than state-of-the-art local feature matching-based competitors. | Accept (Poster) | This paper receives positive reviews. The authors provide additional results and justifications during the rebuttal phase. All reviewers find this paper interesting and the contributions are sufficient for this conference. The area chair agrees with the reviewers and recommends it be accepted for presentation. | train | [
"EkXSobQaGSX",
"t_m188ir-01",
"aB-aAw0xvHi",
"9MQG_FP3eQB",
"F5nhRjo6xc",
"n8UMfwiyzdr",
"Lv3RrS-8N1T",
"dO24FT8L9rz",
"ag7x_m7iFKH",
"y1QImMpCSy",
"tz6cced5D1B",
"VjkTIgQhQr8"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply, please find our responses below:\n\n1. _It is a good point. Please highlight that NeurHal can output \"location\" of outpaint._ \n We added a paragraph discussing our approach versus [Germain 2021] in Sec. A.5 of the updated version of the paper.\n2. _It is Ok to say it is not a baseli... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"9MQG_FP3eQB",
"Lv3RrS-8N1T",
"F5nhRjo6xc",
"dO24FT8L9rz",
"n8UMfwiyzdr",
"ag7x_m7iFKH",
"VjkTIgQhQr8",
"tz6cced5D1B",
"y1QImMpCSy",
"iclr_2022_jaLDP8Hp_gc",
"iclr_2022_jaLDP8Hp_gc",
"iclr_2022_jaLDP8Hp_gc"
] |
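Annotation note for the Visual Correspondence Hallucination row above: the network outputs a peaked probability map over the target image. Extracting a point estimate from such a map is commonly done with a soft-argmax, sketched below; how NeurHal itself decodes its distribution is abstracted away here.

```python
import torch

def soft_argmax(logits):
    """Expected (x, y) location under a softmax over an (H, W) score map."""
    h, w = logits.shape
    probs = logits.flatten().softmax(dim=0).view(h, w)
    xs = torch.arange(w, dtype=probs.dtype)
    ys = torch.arange(h, dtype=probs.dtype)
    x = (probs.sum(dim=0) * xs).sum()   # column marginal -> expected x
    y = (probs.sum(dim=1) * ys).sum()   # row marginal    -> expected y
    return x, y
```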
iclr_2022_aTzMi4yV_RO | Do Not Escape From the Manifold: Discovering the Local Coordinates on the Latent Space of GANs | The discovery of the disentanglement properties of the latent space in GANs motivated a lot of research to find the semantically meaningful directions on it. In this paper, we suggest that the disentanglement property is closely related to the geometry of the latent space. In this regard, we propose an unsupervised method for finding the semantic-factorizing directions on the intermediate latent space of GANs based on the local geometry. Intuitively, our proposed method, called $\textit{Local Basis}$, finds the principal variation of the latent space in the neighborhood of the base latent variable. Experimental results show that the local principal variation corresponds to the semantic factorization and traversing along it provides strong robustness to image traversal. Moreover, we suggest an explanation for the limited success in finding the global traversal directions in the latent space, especially $\mathcal{W}$-space of StyleGAN2. We show that $\mathcal{W}$-space is warped globally by comparing the local geometry, discovered from Local Basis, through the metric on Grassmannian Manifold. The global warpage implies that the latent space is not well-aligned globally and therefore the global traversal directions are bound to show limited success on it. | Accept (Poster) | The initial reviews for this paper were diverging. After the rebuttal, all reviewers reached the consensus of recommending the paper's acceptance. Some reviewers have concerns regarding the novelty of the paper; however, they appreciate that the paper is well written and the empirical results are interesting. Following the reviewers' recommendation, the meta reviewer recommends acceptance. In the final version of the paper the authors are encouraged to strengthen the weaknesses discussion as requested by one of the reviewers. | train | [
"RTF401FlfG",
"tZqUSFPBnfm",
"bAIBxyK1YJx",
"FXmWE9JdKP_",
"Gx6HBqpXln_",
"na5Ecw31ZAa",
"HVkPQqPc9W0",
"ENif8Z3IiA7",
"rb0ts7dkcfG",
"4-kiWAZ2Ft",
"E2lOwWnJ4F",
"RsCILtffy6",
"Nx1tmA1zN2p",
"Xwdxf3XPJ6e",
"X3QIRorDDLb"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper propose to explore disentanglement locally through the top singular vectors of the Jacobian of the generator of a GAN. This contrast the popular view that disentanglement should follow Euclidean coordinate axes in the latent space. An algorithm for traversing the latent space is proposed. Disentanglement... | [
6,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2022_aTzMi4yV_RO",
"bAIBxyK1YJx",
"RsCILtffy6",
"iclr_2022_aTzMi4yV_RO",
"4-kiWAZ2Ft",
"HVkPQqPc9W0",
"rb0ts7dkcfG",
"Nx1tmA1zN2p",
"Xwdxf3XPJ6e",
"FXmWE9JdKP_",
"RTF401FlfG",
"X3QIRorDDLb",
"RTF401FlfG",
"RTF401FlfG",
"iclr_2022_aTzMi4yV_RO"
] |
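Annotation note for the Local Basis row above: the proposed directions are the top right singular vectors of the generator's Jacobian at a base latent. A small-scale sketch is below; for StyleGAN-sized generators the paper avoids materializing the full Jacobian, which this toy version ignores.

```python
import torch

def local_basis(generator, w, k=5):
    """Top-k latent directions of principal local variation at latent w.

    generator: differentiable callable, latent (d,) -> image tensor.
    Returns (k, d): each row is a local traversal direction."""
    J = torch.autograd.functional.jacobian(lambda z: generator(z).flatten(), w)
    _, _, Vh = torch.linalg.svd(J, full_matrices=False)   # J = U S Vh
    return Vh[:k]   # traverse with w + step * Vh[i]
```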
iclr_2022_rI0LYgGeYaw | Understanding approximate and unrolled dictionary learning for pattern recovery | Dictionary learning consists of finding a sparse representation from noisy data and is a common way to encode data-driven prior knowledge on signals. Alternating minimization (AM) is standard for the underlying optimization, where gradient descent steps alternate with sparse coding procedures. The major drawback of this method is its prohibitive computational cost, making it unpractical on large real-world data sets. This work studies an approximate formulation of dictionary learning based on unrolling and compares it to alternating minimization to find the best trade-off between speed and precision. We analyze the asymptotic behavior and convergence rate of gradients estimates in both methods. We show that unrolling performs better on the support of the inner problem solution and during the first iterations. Finally, we apply unrolling on pattern learning in magnetoencephalography (MEG) with the help of a stochastic algorithm and compare the performance to a state-of-the-art method. | Accept (Poster) | The paper proposes an unrolled algorithm to solve the l1-norm formulated dictionary learning problem, and focuses on the number of unrolling steps. It shows that it is better to limit the number of unrolling steps, and this leads to favorable performance over the alternating minimization baseline. The method can also be adapted to scale to very large datasets.
Most reviewers were positive or became positive after the rebuttals. Reviewer njnY was still concerned about some issues, such as constraints and the choice of the l1 model over the l0 model; there also may have been confusion about unit sphere vs unit ball constraints. However, given the recommendations of the other reviewers and my own opinion, I think the paper is a worthy contribution, and the point about not unrolling too deeply is an important one that is worth highlighting. | train | [
"hi4z3eXBz2y",
"1J7-XbUWWL",
"9HkItRVcDr",
"f9UDt9_pMQ",
"K63-iXzfco8",
"WMKyZE96wi_",
"RIW9NVZvqOY",
"OPQTR3xLFQm",
"QuE4i5E_4nR",
"Fu1bUU2YQGs",
"P7apt21CMHV",
"_PSfGdn9a1A",
"XS2hF7TqNc",
"VcZW6sOGQGp",
"3E0u6Xyua1F",
"LvjrdVSh8jI",
"_gdUMuZ1NYr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" for your response and for answering my questions regarding the surrounding literature.",
"The authors theoretically study the performance of dictionary learning using \"unrolling\" based methods. As opposed to alternating minimization (AM) which switches back and forth between dictionary estimation and sparse r... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3
] | [
-1,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"VcZW6sOGQGp",
"iclr_2022_rI0LYgGeYaw",
"iclr_2022_rI0LYgGeYaw",
"QuE4i5E_4nR",
"RIW9NVZvqOY",
"f9UDt9_pMQ",
"OPQTR3xLFQm",
"Fu1bUU2YQGs",
"_PSfGdn9a1A",
"iclr_2022_rI0LYgGeYaw",
"9HkItRVcDr",
"_gdUMuZ1NYr",
"P7apt21CMHV",
"1J7-XbUWWL",
"LvjrdVSh8jI",
"iclr_2022_rI0LYgGeYaw",
"iclr_2... |
iclr_2022_yhCp5RcZD7 | Geometry-Consistent Neural Shape Representation with Implicit Displacement Fields | We present implicit displacement fields, a novel representation for detailed 3D geometry. Inspired by a classic surface deformation technique, displacement mapping, our method represents a complex surface as a smooth base surface plus a displacement along the base's normal directions, resulting in a frequency-based shape decomposition, where the high-frequency signal is constrained geometrically by the low-frequency signal. Importantly, this disentanglement is unsupervised thanks to a tailored architectural design that has an innate frequency hierarchy by construction. We explore implicit displacement field surface reconstruction and detail transfer
and demonstrate superior representational power, training stability, and generalizability. | Accept (Poster) | This paper provides a normal map-inspired implicit surface representation involving a smooth surface whose high frequency detail comes from normal displacements. Reviewers were impressed with the results and theoretical discussion in the paper. The AC agrees.
The authors were responsive to reviewer feedback and addressed some questions about parameter choice during the rebuttal phase, including new experiments/discussion in the supplementary document. Note the response to reviewer WHEF notes that the authors will be releasing data/code; the AC strongly hopes the authors are true to their word in that regard.
The AC chose to disregard some comments from reviewer G54X regarding tests with noise, as this method appears to be tuned to computer graphics applications; the level of empirical work here aligns with past work in the area. Of course the authors are encouraged to include some tests responding to the reviewer comments in the camera ready. The AC also found the score from reviewer WHEF to be somewhat uncalibrated with the tone of their review, but of course their assessment is quite positive nonetheless.
One small comment: The abstract appears a bit strangely on the OpenReview site because of line breaks; if possible, please remove the line breaks.
Another small comment: The "spectral shape representation" phrase used a bit in the discussion below might not be advisable, as this phrase typically refers to the intrinsic spectrum of a shape (e.g. Laplace-Beltrami analysis) | train | [
"J8KpbxF8JA_",
"KKXkJwoZsaS",
"R6WIe7VhQKc",
"8DOPw9Kszvx",
"lVEge98Jcec",
"5UgPKJYhZd",
"DtgeigL0E99",
"9qkBvDc-GNg",
"fvIcDvlaaxP"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper addresses the problem of obtaining a 3D surface reconstruction from point clouds. This has been one of the most studied topics in 3D vision, with different kinds of approaches. The authors tackle this problem using implicit representations, specifically using signal distance functions and query features.... | [
5,
-1,
-1,
-1,
-1,
-1,
6,
6,
10
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2022_yhCp5RcZD7",
"8DOPw9Kszvx",
"9qkBvDc-GNg",
"fvIcDvlaaxP",
"DtgeigL0E99",
"J8KpbxF8JA_",
"iclr_2022_yhCp5RcZD7",
"iclr_2022_yhCp5RcZD7",
"iclr_2022_yhCp5RcZD7"
] |
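Annotation note for the implicit displacement fields row above: the core composition evaluates the smooth base SDF at the query point pulled back along the base normal by a learned scalar displacement. The sketch below keeps only that composition; the paper's attenuation function, frequency-separated networks, and exact sign conventions are omitted.

```python
import torch

def detailed_sdf(base_sdf, displacement, x):
    """x: (N, 3) query points; base_sdf: (N, 3) -> (N,) values;
    displacement: (N, 3) -> (N, 1) scalar offsets along base normals."""
    x = x.clone().requires_grad_(True)
    f = base_sdf(x)
    (grad,) = torch.autograd.grad(f.sum(), x, create_graph=True)
    n = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)   # base normals
    return base_sdf(x - displacement(x) * n)              # pulled-back value
```

Because the displacement is constrained to act along the base normals, the high-frequency detail stays geometrically tied to the low-frequency base surface, which is what makes the decomposition interpretable.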
iclr_2022_w1UbdvWH_R3 | Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path | The recently discovered Neural Collapse (NC) phenomenon occurs pervasively in today's deep net training paradigm of driving cross-entropy (CE) loss towards zero. During NC, last-layer features collapse to their class-means, both classifiers and class-means collapse to the same Simplex Equiangular Tight Frame, and classifier behavior collapses to the nearest-class-mean decision rule. Recent works demonstrated that deep nets trained with mean squared error (MSE) loss perform comparably to those trained with CE. As a preliminary, we empirically establish that NC emerges in such MSE-trained deep nets as well through experiments on three canonical networks and five benchmark datasets. We provide, in a Google Colab notebook, PyTorch code for reproducing MSE-NC and CE-NC: https://colab.research.google.com/github/neuralcollapse/neuralcollapse/blob/main/neuralcollapse.ipynb. The analytically-tractable MSE loss offers more mathematical opportunities than the hard-to-analyze CE loss, inspiring us to leverage MSE loss towards the theoretical investigation of NC. We develop three main contributions: (I) We show a new decomposition of the MSE loss into (A) terms directly interpretable through the lens of NC and which assume the last-layer classifier is exactly the least-squares classifier; and (B) a term capturing the deviation from this least-squares classifier. (II) We exhibit experiments on canonical datasets and networks demonstrating that term-(B) is negligible during training. This motivates us to introduce a new theoretical construct: the central path, where the linear classifier stays MSE-optimal for feature activations throughout the dynamics. (III) By studying renormalized gradient flow along the central path, we derive exact dynamics that predict NC. | Accept (Oral) | This paper extends the Neural Collapse (NC) phenomenon discovered by Papyan, Han and Donoho (2020) for deep image classification with Cross Entropy (CE) loss to the scenario with Mean Squared Error (MSE), which achieves similar performance to CE and is more amenable to analysis. In particular, the paper shows that the least-squares loss can be decomposed orthogonally into a 'central' path component (the optimal least-squares loss) and its perpendicular loss. Moreover, the paper shows by experiments that after reaching zero training error (the Terminal Phase of Training, or TPT) the perpendicular loss is typically much smaller than the optimal least-squares loss, and the optimal least-squares loss is further decomposed into the NC1 loss, which is dominant, and the NC2/3 loss (even smaller than the perpendicular loss). Such a loss decomposition is very thought-provoking for understanding the training dynamics of deep neural networks.
Reviewers unanimously accept the paper, as does the final recommendation. | train | [
"YouG_KEHbEO",
"oO3Tn1yCN5x",
"0_Ri6Pcfjlu",
"FzosfajBzAo",
"BbaEZSurgE",
"ob1rxcnethD",
"hb_5o6bm8NK",
"kyj9a-kOYee",
"nW3O_6usaeH",
"xbGtcc-ovAR",
"oCLSVY0Cp4b",
"hINTs2rvsF",
"_wkGziXE3xb",
"l_QfG5oxZ9L",
"snVM0xCTQjI",
"0WBh8BPcdw",
"RerSUaD-4AK",
"SrHhXVjfxY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Hi,\n\nI decided to raise the score considering the effort put by the authors. Still, I think another iteration would benefit the message in the paper.",
"This paper extends the recent work on Neural Collapse, using Mean Squared Error (MSE) instead of CE, as MSE is easier for analysis. With this, the paper show... | [
-1,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"hb_5o6bm8NK",
"iclr_2022_w1UbdvWH_R3",
"iclr_2022_w1UbdvWH_R3",
"hINTs2rvsF",
"ob1rxcnethD",
"xbGtcc-ovAR",
"kyj9a-kOYee",
"nW3O_6usaeH",
"oO3Tn1yCN5x",
"0_Ri6Pcfjlu",
"SrHhXVjfxY",
"SrHhXVjfxY",
"RerSUaD-4AK",
"0_Ri6Pcfjlu",
"oO3Tn1yCN5x",
"oO3Tn1yCN5x",
"iclr_2022_w1UbdvWH_R3",
... |
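Annotation note for the Neural Collapse row above: tracking NC1 (variability collapse), the dominant term of the decomposed MSE loss per the meta-review, uses the standard within-/between-class covariance ratio following the convention of the original NC paper. A NumPy version of that metric:

```python
import numpy as np

def nc1(features, labels):
    """tr(Sigma_W @ pinv(Sigma_B)) / C over last-layer features (N, d)."""
    classes = np.unique(labels)
    mu_g = features.mean(axis=0)
    d = features.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        Sw += (fc - mu_c).T @ (fc - mu_c) / len(features)   # within-class
        Sb += len(fc) / len(features) * np.outer(mu_c - mu_g, mu_c - mu_g)
    return np.trace(Sw @ np.linalg.pinv(Sb)) / len(classes)
```

The metric decaying towards zero during the terminal phase of training is the empirical signature of NC1.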
iclr_2022_ItkxLQU01lD | Convergent Graph Solvers | We propose the convergent graph solver (CGS), a deep learning method that learns iterative mappings to predict the properties of a graph system at its stationary state (fixed point) with guaranteed convergence. The forward propagation of CGS proceeds in three steps: (1) constructing the input-dependent linear contracting iterative maps, (2) computing the fixed points of the iterative maps, and (3) decoding the fixed points to estimate the properties. The contractivity of the constructed linear maps guarantees the existence and uniqueness of the fixed points following the Banach fixed point theorem. To train CGS efficiently, we also derive a tractable analytical expression for its gradient by leveraging the implicit function theorem. We evaluate the performance of CGS by applying it to various network-analytic and graph benchmark problems. The results indicate that CGS has competitive capabilities for predicting the stationary properties of graph systems, irrespective of whether the target systems are linear or non-linear. CGS also shows high performance for graph classification problems where the existence or the meaning of a fixed point is hard to be clearly defined, which highlights the potential of CGS as a general graph neural network architecture. | Accept (Poster) | The paper got four accepts (after the reviewers changed their scores), all with high confidences. The theories are complete and the experiments are solid. The AC found no reason to overturn reviewers' recommendations. However, the AC deemed that all the pieces are just routine, thus only recommended poster. | train | [
"eQaaRkuMZtj",
"xZhJyCsbA4_",
"lbFuxKmSLh0",
"3iELP26td1J",
"GT-xx93rzqI",
"7OPRbL7gra5",
"c3uA5GcL9tJ",
"RJTbdCm9DYr",
"dMbsP6xkxv8",
"8JavxOMC68B",
"Lg7o7lyK6Ml",
"N4czDOsNYp",
"3kNRtbisyJS",
"34U4Hp28WQa",
"4DNBT3CBJC-",
"jv7Di9EMKmM",
"EvU1UFVzRBi"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for modifying the manuscript accordingly. I will keep my score as accept.",
"The paper constructed a linear map on graph that is guaranteed to converge to a fixed point, by embeding such linear map into a graph neural network, it can be used to solve nonlinear problems on graphs. Experiments show that th... | [
-1,
8,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"3kNRtbisyJS",
"iclr_2022_ItkxLQU01lD",
"4DNBT3CBJC-",
"xZhJyCsbA4_",
"7OPRbL7gra5",
"RJTbdCm9DYr",
"iclr_2022_ItkxLQU01lD",
"dMbsP6xkxv8",
"Lg7o7lyK6Ml",
"iclr_2022_ItkxLQU01lD",
"c3uA5GcL9tJ",
"34U4Hp28WQa",
"EvU1UFVzRBi",
"jv7Di9EMKmM",
"xZhJyCsbA4_",
"iclr_2022_ItkxLQU01lD",
"icl... |
iclr_2022_-ApAkox5mp | SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models | In recent years, implicit deep learning has emerged as a method to increase the depth of deep neural networks. While their training is memory-efficient, they are still significantly slower to train than their explicit counterparts. In Deep Equilibrium Models~(DEQs), the training is performed as a bi-level problem, and its computational complexity is partially driven by the iterative inversion of a huge Jacobian matrix. In this paper, we propose a novel strategy to tackle this computational bottleneck from which many bi-level problems suffer. The main idea is to use the quasi-Newton matrices from the forward pass to efficiently approximate the inverse Jacobian matrix in the direction needed for the gradient computation. We provide a theorem that motivates using our method with the original forward algorithms. In addition, by modifying these forward algorithms, we further provide theoretical guarantees that our method asymptotically estimates the true implicit gradient. We empirically study this approach in many settings, ranging from hyperparameter optimization to large Multiscale DEQs applied to CIFAR and ImageNet. We show that it reduces the computational cost of the backward pass by up to two orders of magnitude. All this is achieved while retaining the excellent performance of the original models in hyperparameter optimization and on CIFAR, and giving encouraging and competitive results on ImageNet. | Accept (Spotlight) | The paper considers the setting of bi-level optimization and proposes a quasi-Newton scheme to reduce the cost of Jacobian inversion, which is the main bottleneck of bi-level optimization methods. The paper proves that the proposed scheme correctly estimates the true implicit gradient. The theoretical results are supported by numerical experiments, which are encouraging and show that the proposed method is either competitive with or outperforms the Jacobian Free method recently proposed in the literature.
Even though the reviews expressed some initial concerns regarding the empirical performance of the proposed method, the authors adequately addressed those concerns and provided additional experiments. Thus, a consensus was reached that the paper should be accepted. | val | [
"JJcj5AcVWYf",
"kC80UX_2Le6",
"UDXKijshZ2W",
"gpdaKWKbOB",
"RnOWzJP-vuX",
"gvE41jgQAYb",
"bb5klG0ROlA",
"dIoduG60qp-",
"aX8SWH4kqRa",
"QlYt3uObpS8",
"G3TpeJZnpX"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We want to take this last opportunity to write a message to thank all the reviewers for engaging in a fruitful, timely and respectful review process that led to an improved version of the paper.",
"The paper proposes a way to improve on the computational cost of bi-level optimization problems. These often come ... | [
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_-ApAkox5mp",
"iclr_2022_-ApAkox5mp",
"kC80UX_2Le6",
"iclr_2022_-ApAkox5mp",
"gvE41jgQAYb",
"aX8SWH4kqRa",
"kC80UX_2Le6",
"gpdaKWKbOB",
"iclr_2022_-ApAkox5mp",
"G3TpeJZnpX",
"iclr_2022_-ApAkox5mp"
] |
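Annotation note for the SHINE row above: the forward Broyden solve already builds an approximate inverse Jacobian, so SHINE reuses it for the implicit-gradient solve in the backward pass instead of running a second iterative inversion. A NumPy sketch of the forward solve that keeps this matrix:

```python
import numpy as np

def broyden_with_inverse(g, z0, max_iter=50, tol=1e-9):
    """Solve g(z) = 0 by 'good' Broyden updates; B approximates J(z)^{-1}.

    SHINE-style backward: estimate the implicit gradient via B.T @ dL_dz
    rather than solving J^T v = dL_dz from scratch."""
    z, B = z0.astype(float).copy(), np.eye(len(z0))
    gz = g(z)
    for _ in range(max_iter):
        dz = -B @ gz
        z_new, g_new = z + dz, g(z + dz)
        if np.linalg.norm(g_new) < tol:
            return z_new, B
        dg = g_new - gz
        # Rank-one (Sherman-Morrison style) update of the inverse Jacobian.
        B += np.outer(dz - B @ dg, dz @ B) / (dz @ B @ dg)
        z, gz = z_new, g_new
    return z, B
```

This is a dense toy version; the paper works with the matrix-free quasi-Newton representation used by Deep Equilibrium Models.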
iclr_2022_F72ximsx7C1 | How Attentive are Graph Attention Networks? | Graph Attention Networks (GATs) are one of the most popular GNN architectures and are considered as the state-of-the-art architecture for representation learning with graphs. In GAT, every node attends to its neighbors given its own representation as the query.
However, in this paper we show that GAT computes a very limited kind of attention: the ranking of the attention scores is unconditioned on the query node. We formally define this restricted kind of attention as static attention and distinguish it from a strictly more expressive dynamic attention.
Because GATs use a static attention mechanism, there are simple graph problems that GAT cannot express: in a controlled problem, we show that static attention hinders GAT from even fitting the training data.
To remove this limitation, we introduce a simple fix by modifying the order of operations and propose GATv2: a dynamic graph attention variant that is strictly more expressive than GAT. We perform an extensive evaluation and show that GATv2 outperforms GAT across 12 OGB and other benchmarks while we match their parametric costs.
Our code is available at https://github.com/tech-srl/how_attentive_are_gats . GATv2 is available as part of the PyTorch Geometric library, the Deep Graph Library, and the TensorFlow GNN library. | Accept (Poster) | This paper argues that the widely adopted graph attention networks (GAT) have a shortcoming that with the static nature of the attention mechanism, they may fail to represent certain graphs. This paper presents an alternative, GATv2, a simple variant with the same time complexity as GAT but with more expressivity, able to represent the graphs that GAT fails to. This is shown both empirically and theoretically, with various tasks on synthetic as well as standard benchmark graphs.
GATs are of high interest to the ICLR community, and this paper makes fundamental progress in how attention works in GNNs. This is one of the few papers that present both empirical and theoretical analyses, and these findings will motivate others in the community to make further advances in this field. | train | [
"9cxj3ibrAWV",
"nM5KrCPI_Vr",
"lXCC-G2bb_4",
"C_MJ91KqwA0",
"LRd7xXMZ906",
"pD5m16B6v1L",
"yjAzxeSjWKO",
"8srX-ss3YXZ",
"rDq8EyYlU6C",
"PR8XoDGCOg3",
"MCiWIbgIYiO",
"XiznNCTb1Pl",
"6pyKcMbSeq",
"Qth3SgH8qu",
"xe7RLnR9ji",
"VS91CySe-7E",
"oLSrJX9v4gZ",
"x4NnmKv9uTb"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for engaging in a discussion! We think that your suggestions here bring up an idea that can really simplify the presentation of the DictionaryLookup problem. Please see our response below.\n\n> A complete bipartite graph is not difficult \n\nExactly! The novel insight in our paper is that even suc... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
4
] | [
"nM5KrCPI_Vr",
"LRd7xXMZ906",
"C_MJ91KqwA0",
"XiznNCTb1Pl",
"pD5m16B6v1L",
"6pyKcMbSeq",
"iclr_2022_F72ximsx7C1",
"iclr_2022_F72ximsx7C1",
"x4NnmKv9uTb",
"oLSrJX9v4gZ",
"Qth3SgH8qu",
"rDq8EyYlU6C",
"VS91CySe-7E",
"xe7RLnR9ji",
"iclr_2022_F72ximsx7C1",
"iclr_2022_F72ximsx7C1",
"iclr_2... |
iclr_2022_p98WJxUC3Ca | Discrepancy-Based Active Learning for Domain Adaptation | The goal of the paper is to design active learning strategies which lead to domain adaptation under an assumption of Lipschitz functions. Building on previous work by Mansour et al. (2009) we adapt the concept of discrepancy distance between source and target distributions to restrict the maximization over the hypothesis class to a localized class of functions which are performing accurate labeling on the source domain. We derive generalization error bounds for such active learning strategies in terms of Rademacher average and localized discrepancy for general loss functions which satisfy a regularity condition. A practical K-medoids algorithm that can address the case of large data set is inferred from the theoretical bounds. Our numerical experiments show that the proposed algorithm is competitive against other state-of-the-art active learning techniques in the context of domain adaptation, in particular on large data sets of around one hundred thousand images. | Accept (Poster) | This paper deals with the important topic of active transfer learning. All reviewers agree that
while the paper presents some shortcomings, it is considered to be a worthy contribution. | train | [
"-wcanlaDGJK",
"Byn3PAgqJOK",
"udLiTbx1K0p",
"BlbFDThKoI",
"uZ3cnvzFWu",
"MTyART6saEk",
"8--3248LptA",
"9i71ilgvmYV",
"JWlPWVBGg24",
"CMcL99drJ4i"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their response to my comments. ",
" We thank the authors for carefully addressing the concerns regarding bounds, and comparison with k-center. I have increased my score. ",
"This paper proposes a k-medoid solution for active learning in the context of domain adaptation. T... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"uZ3cnvzFWu",
"MTyART6saEk",
"iclr_2022_p98WJxUC3Ca",
"CMcL99drJ4i",
"JWlPWVBGg24",
"udLiTbx1K0p",
"9i71ilgvmYV",
"iclr_2022_p98WJxUC3Ca",
"iclr_2022_p98WJxUC3Ca",
"iclr_2022_p98WJxUC3Ca"
] |
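Annotation note for the discrepancy-based active learning row above: the practical algorithm reduces to selecting medoids that cover the target pool well. A greedy variant on a precomputed distance matrix (one forward pass, no swap refinement, a simplification of full K-medoids):

```python
import numpy as np

def greedy_medoids(dists, k):
    """dists: (n, n) pairwise distances over the unlabeled target pool.
    Returns indices of k points minimizing total distance to nearest medoid."""
    n = dists.shape[0]
    chosen = [int(dists.sum(axis=1).argmin())]        # best single medoid
    for _ in range(k - 1):
        cover = dists[:, chosen].min(axis=1)          # current coverage cost
        cost = np.array([np.minimum(cover, dists[:, j]).sum() for j in range(n)])
        cost[chosen] = np.inf
        chosen.append(int(cost.argmin()))
    return chosen
```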
iclr_2022_mk0HzdqY7i1 | What’s Wrong with Deep Learning in Tree Search for Combinatorial Optimization | Combinatorial optimization lies at the core of many real-world problems. Especially since the rise of graph neural networks (GNNs), the deep learning community has been developing solvers that derive solutions to NP-hard problems by learning the problem-specific solution structure. However, reproducing the results of these publications proves to be difficult. We make three contributions. First, we present an open-source benchmark suite for the NP-hard Maximum Independent Set problem, in both its weighted and unweighted variants. The suite offers a unified interface to various state-of-the-art traditional and machine learning-based solvers. Second, using our benchmark suite, we conduct an in-depth analysis of the popular guided tree search algorithm by Li et al. [NeurIPS 2018], testing various configurations on small and large synthetic and real-world graphs. By re-implementing their algorithm with a focus on code quality and extensibility, we show that the graph convolution network used in the tree search does not learn a meaningful representation of the solution structure, and can in fact be replaced by random values. Instead, the tree search relies on algorithmic techniques like graph kernelization to find good solutions. Thus, the results from the original publication are not reproducible. Third, we extend the analysis to compare the tree search implementations to other solvers, showing that the classical algorithmic solvers often are faster, while providing solutions of similar quality. Additionally, we analyze a recent solver based on reinforcement learning and observe that for this solver, the GNN is responsible for the competitive solution quality. | Accept (Poster) | I would like to thank the authors for having managed a thorough discussion despite the complexity of the task at hand (e.g. BEvM). During the discussion, the reviewers clearly converged to accepting the paper, praising the importance of the problem tackled and the setup put in place to effectively tackle the challenge at hand.
All this makes the paper an important contribution and a clear accept (and an enjoyable read), for which I can only recommend a further polish before the camera-ready to incorporate the latest additions.
AC. | train | [
"Hjlw7L-HF1O",
"EYQ5agHzK8r",
"kPwqekFjh8x",
"bDvV09Rtq6",
"CfXZ14Fazr",
"Vskx9sgapOR",
"pQgoimM1NHL",
"Omwh2wyqA-C",
"4eI9-f919Ry",
"zTtRNWRJvYC",
"LRf7KIxvtbq",
"hJsdHeucxFv",
"5IiqitgRpcT",
"7Iq3ynMr-7f"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I thank the authors for their response and for revising the paper to address the concerns. I have increased my score from 5 to 6.",
"The paper presents an evaluation of deep learning-based tree serch solutions (that are based on graph neural networks) for solving combinatorial optimization problems. They presen... | [
-1,
6,
-1,
-1,
-1,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
4,
-1,
-1,
-1,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"hJsdHeucxFv",
"iclr_2022_mk0HzdqY7i1",
"CfXZ14Fazr",
"Omwh2wyqA-C",
"zTtRNWRJvYC",
"iclr_2022_mk0HzdqY7i1",
"iclr_2022_mk0HzdqY7i1",
"LRf7KIxvtbq",
"iclr_2022_mk0HzdqY7i1",
"Vskx9sgapOR",
"pQgoimM1NHL",
"EYQ5agHzK8r",
"7Iq3ynMr-7f",
"iclr_2022_mk0HzdqY7i1"
] |
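Annotation note for the tree search row above: the analysis attributes much of the tree search's strength to classical kernelization rather than the GCN. The most basic such reduction for Maximum Independent Set, where any vertex of degree at most one can safely join the solution, is sketched below:

```python
def degree_one_reduction(adj):
    """adj: dict node -> set of neighbors (consumed in place).
    Returns vertices that are provably in some maximum independent set."""
    solution = set()
    queue = [v for v, nbrs in adj.items() if len(nbrs) <= 1]
    while queue:
        v = queue.pop()
        if v not in adj or len(adj[v]) > 1:
            continue
        solution.add(v)
        for u in adj.pop(v):                  # v's single neighbor, if any
            for w in adj.pop(u, set()):       # deleting u lowers degrees
                if w in adj:
                    adj[w].discard(u)
                    if len(adj[w]) <= 1:
                        queue.append(w)
    return solution
```

Repeatedly applying reductions like this shrinks the instance dramatically before any search (learned or random) takes place, which is consistent with the paper's random-values ablation.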
iclr_2022_p3DKPQ7uaAi | Temporal Alignment Prediction for Supervised Representation Learning and Few-Shot Sequence Classification | Explainable distances for sequence data depend on temporal alignment to tackle sequences with different lengths and local variances. Most sequence alignment methods infer the optimal alignment by solving an optimization problem under pre-defined feasible alignment constraints, which not only is time-consuming, but also makes end-to-end sequence learning intractable. In this paper, we propose a learnable sequence distance called Temporal Alignment Prediction (TAP). TAP employs a lightweight convolutional neural network to directly predict the optimal alignment between two sequences, so that only straightforward calculations are required and no optimization is involved in inference. TAP can be applied in different distance-based machine learning tasks. For supervised sequence representation learning, we show that TAP trained with various metric learning losses achieves competitive performance with much faster inference speed. For few-shot action classification, we apply TAP as the distance measure in the metric learning-based episode-training paradigm. This simple strategy achieves comparable results with state-of-the-art few-shot action recognition methods. | Accept (Poster) | This paper has been evaluated by three reviewers, with two borderline scores leaning towards acceptance and one accept. The reviewers noted that the idea of alignment is not particularly novel per se. Nonetheless, they found some merit in the use of a network learning the alignment, and they liked the experiments.
AC has however some concerns about this work. Firstly, it is not clear why Lifted+SoftDTW and Binomial+SoftDTW completely fail in Table 1, and in Table 5, SoftDTW is worse by 30% than TAP. Is soft-DTW set up properly in these experiments (the softmax temperature, the base distance used, the maximum number of steps away from the main path etc.)?
AC is also not convinced about the principled nature of the proposed alignment. Eq. 3 and the residual design above seem more as heuristics than a principled OT transport as Eq. 1 and 2 set out to suggest. With concatenation of distances between sequence features and positional encoding, the proposed alignment seems more similar to attention and transformers than OT. | test | [
"QbpMfAE6dZg",
"jD1AcE8hTkY",
"AJYMDCQiRYT"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a sequence distance metric named temporal alignment prediction (TAP) which leverages a CNN to predict alignment between two sequences, in contrast to other non-learnable alignment algorithms in prior work. The authors have benchmarked the TAP on two video tasks of supervised video representatio... | [
6,
8,
6
] | [
3,
4,
3
] | [
"iclr_2022_p3DKPQ7uaAi",
"iclr_2022_p3DKPQ7uaAi",
"iclr_2022_p3DKPQ7uaAi"
] |
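Annotation note for the TAP row above: TAP swaps the DTW-style optimization for a network that directly predicts the alignment. A sketch of the resulting distance, with a generic conv net over the pairwise cost map standing in for the paper's specific alignment predictor:

```python
import torch

def tap_distance(x, y, align_net):
    """x: (n, d), y: (m, d); align_net maps (1, 1, n, m) costs to (1, 1, n, m)
    alignment logits. One forward pass, no inner optimization."""
    cost = torch.cdist(x.unsqueeze(0), y.unsqueeze(0)).squeeze(0)   # (n, m)
    logits = align_net(cost[None, None]).view_as(cost)
    align = torch.softmax(logits, dim=-1)    # each x_i softly aligned over y
    return (align * cost).sum() / x.shape[0]
```

Since everything above is differentiable, the distance can sit directly inside metric-learning or episodic few-shot objectives.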
iclr_2022_4N-17dske79 | Associated Learning: an Alternative to End-to-End Backpropagation that Works on CNN, RNN, and Transformer | This paper studies Associated Learning (AL), an alternative methodology to end-to-end backpropagation (BP). We introduce the workflow to convert a neural network into a proper structure such that AL can be used to learn the weights for various types of neural networks. We compare AL and BP on some of the most successful types of neural networks -- Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Transformer. Experimental results show that AL consistently outperforms BP on various open datasets. We discuss possible reasons for AL's success and its limitations. | Accept (Poster) | The authors propose a method for associative learning as an alternative to backpropagation-based learning. The idea is interesting. The coupling between layers is broken down into local loss functions that can be updated independently. The targets are projected to previous layers and the information is preserved using an auto-encoder loss function. The projections from the target side are then compared with the projections from the input side using a bridge function and a metric loss. The method is evaluated on text and image classification tasks. The results suggest that this is a promising alternative to backpropagation-based learning.
Pros
+ A novel idea that seems promising
+ Evaluated on text and image classification tasks and demonstrated utility
Cons
- The impact of the number of additional parameters and of the added computation is not clarified (even though epoch counts are lower)
The authors utilized the discussion period very well, running additional experiments that were suggested (especially ablation studies). They also clarified all the questions that were raised. In all, the paper has improved substantially from the robust discussion. | train | [
"K49AG6YkFRl",
"USjDvdaovw2",
"RG4NFnpye0f",
"Itkyxwf1vmM",
"rmvGsXSQrt3",
"e8TRABYwoW",
"iyoAOjwzO0F",
"ozGSYf6Lu_q",
"Cd2NQThlob",
"Utvou_ldkP",
"_2rMSq2GvtJ",
"yLtut8Kbt6G",
"gZnoJs2KlZu",
"LhNbXwpw0gv",
"cc74PmQxeYe",
"HPa-YX2RP9L",
"yFUZHADZyOh",
"L0ezniPFaVi"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response.\n\nFor CNN-AL, the associated loss is defined as the distance between the flattened feature map and the latent representation of $y$. For RNN-AL, we use the last token’s latent state as the latent representation of the entire input sequence $x$, and compute associated loss with $t$. F... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"rmvGsXSQrt3",
"Itkyxwf1vmM",
"iclr_2022_4N-17dske79",
"ozGSYf6Lu_q",
"iyoAOjwzO0F",
"LhNbXwpw0gv",
"gZnoJs2KlZu",
"yLtut8Kbt6G",
"Utvou_ldkP",
"yFUZHADZyOh",
"iclr_2022_4N-17dske79",
"L0ezniPFaVi",
"RG4NFnpye0f",
"HPa-YX2RP9L",
"iclr_2022_4N-17dske79",
"iclr_2022_4N-17dske79",
"iclr... |