Dataset schema:
  paper_id            string (length 19–21)
  paper_title         string (length 8–170)
  paper_abstract      string (length 8–5.01k)
  paper_acceptance    string (18 classes)
  meta_review         string (length 29–10k)
  label               string (3 classes)
  review_ids          list
  review_writers      list
  review_contents     list
  review_ratings      list
  review_confidences  list
  review_reply_tos    list
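Judging from the records below, the review_* fields of each record appear to be parallel lists: position i of review_writers, review_ratings, review_confidences, and review_reply_tos all describe the comment whose id is review_ids[i], and a rating/confidence of -1 marks a comment that is not a scored review (e.g. an author rebuttal or a reviewer reply). A minimal sketch of consuming one record under that assumption — the field names come from the schema above, but the sample record and the helper function are fabricated for illustration:

```python
def official_reviews(record):
    """Return (review_id, rating, confidence) for scored official reviews.

    Assumes the review_* fields are parallel lists and that -1 marks
    a non-review comment, as the records in this dump suggest.
    """
    out = []
    for rid, writer, rating, conf in zip(
        record["review_ids"],
        record["review_writers"],
        record["review_ratings"],
        record["review_confidences"],
    ):
        if writer == "official_reviewer" and rating != -1:
            out.append((rid, rating, conf))
    return out


# Fabricated sample record mimicking the schema of the rows below.
sample = {
    "paper_id": "iclr_2022_example",
    "review_ids": ["a1", "b2", "c3"],
    "review_writers": ["author", "official_reviewer", "official_reviewer"],
    "review_ratings": [-1, 6, 8],
    "review_confidences": [-1, 4, 3],
}

print(official_reviews(sample))  # -> [('b2', 6, 4), ('c3', 8, 3)]
```

Note that filtering on the writer alone is not enough: in the records below, some official_reviewer entries carry a rating of -1 (reviewer replies in a discussion thread rather than top-level reviews), so the -1 check is needed as well.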
iclr_2022_-8sBpe7rDiV
NETWORK INSENSITIVITY TO PARAMETER NOISE VIA PARAMETER ATTACK DURING TRAINING
Neuromorphic neural network processors, in the form of compute-in-memory crossbar arrays of memristors, or in the form of subthreshold analog and mixed-signal ASICs, promise enormous advantages in compute density and energy efficiency for NN-based ML tasks. However, these technologies are prone to computational non-ide...
Accept (Poster)
The authors propose an adversarial training method to increase network robustness to parameter variations. The proposed approach performs adversarial attacks on network parameters during training. They demonstrate that their method flattens the loss landscape of the network. Experiments were performed on F-MNIST, ECG d...
train
[ "UDLaBvBkmRT", "-g4pMUpjz9a", "gp4CyIPZC1H", "cw7sMA4Wv0", "3T2gqvwAV6Z", "7JDwF9f0Arx", "ZD-4A3Vo8eR", "y9B15XAw1Ll", "0-gzJJ_y_uR", "59uYncpFgo", "jWv_ocePLuB", "XWGGEqQmU4-", "ZAYUTqJnmkk" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " In the additional experiments that we performed on Cifar10 using the Resnet32 architecture, we compared our method to AWP and AMP. Our method outperformed AMP on all noise models. However when using the parameter mismatch noise model as described in our paper, AWP outperformed our model in the face of parameter m...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 2 ]
[ "jWv_ocePLuB", "gp4CyIPZC1H", "ZD-4A3Vo8eR", "3T2gqvwAV6Z", "y9B15XAw1Ll", "iclr_2022_-8sBpe7rDiV", "59uYncpFgo", "ZAYUTqJnmkk", "XWGGEqQmU4-", "7JDwF9f0Arx", "iclr_2022_-8sBpe7rDiV", "iclr_2022_-8sBpe7rDiV", "iclr_2022_-8sBpe7rDiV" ]
iclr_2022_NlObxR0rosG
Practical Integration via Separable Bijective Networks
Neural networks have enabled learning over examples that contain thousands of dimensions. However, most of these models are limited to training and evaluating on a finite collection of \textit{points} and do not consider the hypervolume in which the data resides. Any analysis of the model's local or global behavior is ...
Accept (Poster)
This paper suggests the use of networks for supervised learning which are composed of a bijective network (e.g. a flow) followed by a separable function. This allows easy integration over the input space, which can be used to formulate novel regularizers (examples given are for local consistency, and for out-of-distrib...
train
[ "q5N0o_r8aKJ", "lsZhNXCL-Vf", "qRHFyyTTjrG", "-gYozcz66Ri", "4QoZqgTCJkg", "u5aahRFm8SQ", "FFNJ-3saEka", "OcFEMrNpQ3I", "QZ105Kgfhnq", "e5c7vvtYTE", "7Nr_Dv5-In5", "gYjkXEpIths", "t1IOvkS5fAd", "YTogciobafE", "QAGOIdHQcN-" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We intended for this correction when changing the assessment from the number of network evaluations ($\\mathcal{O}(G)$) to the overall complexity ($\\mathcal{O}(GM)$) in the beginning of Section 4. We will correct this transcription error and note the overall complexity at $\\mathcal{O}(G)$ in future versions.", ...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 1 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "lsZhNXCL-Vf", "7Nr_Dv5-In5", "e5c7vvtYTE", "iclr_2022_NlObxR0rosG", "u5aahRFm8SQ", "QZ105Kgfhnq", "iclr_2022_NlObxR0rosG", "QZ105Kgfhnq", "QAGOIdHQcN-", "-gYozcz66Ri", "YTogciobafE", "t1IOvkS5fAd", "iclr_2022_NlObxR0rosG", "iclr_2022_NlObxR0rosG", "iclr_2022_NlObxR0rosG" ]
iclr_2022_NyJ2KIN8P17
Neural Program Synthesis with Query
Aiming to find a program satisfying the user intent given input-output examples, program synthesis has attracted increasing interest in the area of machine learning. Despite the promising performance of existing methods, most of their success comes from the privileged information of well-designed input-output examples....
Accept (Poster)
The paper addresses an important problem of selecting inputs to drive an inductive program synthesis process. This is an important problem because inductive synthesis relies on carefully chosen inputs to ensure that the chosen inputs can provide sufficient information about what the desired program is. This paper propo...
train
[ "gvwjRLlaq_A", "kmnfLi3hFX", "74XkukYOV4a", "6Onhq0fTuoT", "vtZ6q5cMGEs", "MRgwZlhRRKS", "N9FxYCrUt9O", "SBMyfiTC9Ck", "f9k4rAwXIs", "2ylW-y4_hTN", "7izTNDg7_bv", "J_yoKHHpeBR", "cqb9AX8whwU", "7htYbK4_vVd", "nEqcvHs3rr", "OZJIKa4Dx5c", "CybYCpN_DgS" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comments, and we have conducted a new experiment on QBC as you suggested.\n\nIn your short-term method, the first query should not crash, because if so the program synthesis network cannot receive it as valid input. And, it is a bit hard to get the first query (we have generated for hours with mul...
[ -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "kmnfLi3hFX", "6Onhq0fTuoT", "7htYbK4_vVd", "SBMyfiTC9Ck", "MRgwZlhRRKS", "CybYCpN_DgS", "iclr_2022_NyJ2KIN8P17", "2ylW-y4_hTN", "iclr_2022_NyJ2KIN8P17", "N9FxYCrUt9O", "2ylW-y4_hTN", "CybYCpN_DgS", "J_yoKHHpeBR", "nEqcvHs3rr", "OZJIKa4Dx5c", "iclr_2022_NyJ2KIN8P17", "iclr_2022_NyJ2K...
iclr_2022_Ix_mh42xq5w
PSA-GAN: Progressive Self Attention GANs for Synthetic Time Series
Realistic synthetic time series data of sufficient length enables practical applications in time series modeling tasks, such as forecasting, but remains a challenge. In this paper we present PSA-GAN, a generative adversarial network (GAN) that generates long time series samples of high quality using progressive growing...
Accept (Poster)
This paper adapts the idea of progressive growing of GANs to time series synthesis. The reviewers thought that the idea was well motivated. DRP7 initially expressed concern w.r.t. novelty. They were also concerned with the lack of certain baselines. The authors responded, highlighting its contributions w.r.t. Evaluatio...
train
[ "mRu3MCtHfL3", "0nnFVbVXK9b", "2DeL8JE1YEi", "NMpijkv2Uos", "7IaeRFXaygu", "VVBrxqY36Qd", "2ZBjTSAqBjq", "XU5-3ZZc1V3", "-BNciJ9icdU", "hIdK8qCFc60", "GlticfJ2Sto", "e-WWGREaTv", "ivWdcX8ojk0", "BDCPiaXYvww", "b9va1INVZtK", "Zyhe6tcza8", "UOENm6EKHsO" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " I would like to thank the authors for addressing my concerns. I'm happy with the current state of the paper and leaning towards acceptance.", "The authors propose a new GAN-based algorithm for time series synthesis. They use progressive growing of GAN architectures to improve the performance of GAN and self-att...
[ -1, 6, -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 5, -1, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "GlticfJ2Sto", "iclr_2022_Ix_mh42xq5w", "Zyhe6tcza8", "VVBrxqY36Qd", "iclr_2022_Ix_mh42xq5w", "7IaeRFXaygu", "7IaeRFXaygu", "-BNciJ9icdU", "iclr_2022_Ix_mh42xq5w", "UOENm6EKHsO", "UOENm6EKHsO", "-BNciJ9icdU", "-BNciJ9icdU", "-BNciJ9icdU", "-BNciJ9icdU", "0nnFVbVXK9b", "iclr_2022_Ix_m...
iclr_2022_oU3aTsmeRQV
Self-ensemble Adversarial Training for Improved Robustness
Due to numerous breakthroughs in real-world applications brought by machine intelligence, deep neural networks (DNNs) are widely employed in critical applications. However, predictions of DNNs are easily manipulated with imperceptible adversarial perturbations, which impedes the further deployment of DNNs and may resul...
Accept (Poster)
This paper proposes Self-Ensemble Adversarial Training (SEAT) for yielding a robust classifier by averaging weights of history models. The solution is different from an ensemble of predictions of different adversarially trained models. The authors also provided theoretical and empirical evidence that the proposed self-...
test
[ "2YMO5XkuOyU", "BKzTl9nhXP2", "kuKY1DihFgH", "2tl4x5o1QuB", "n9vZ_gzE_7p", "5T5UUUO4_xb", "hGBznkmvYq", "lEplWCaipW7", "HUUPIuTrhLi", "SefN2xnVJEr", "m3nXXAa4G0i", "Ueon5ijLIB", "9e2wa4l8zUn", "SwYfNVuFEYr", "FQ4Cof8v1zL", "SMBJV2cWHyo", "qYHo6SYP1l", "zH8ykbS6-N", "CQZHfT-WE8y",...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors of the paper propose a new method for improving the robustness of a model against adversarial attacks. If I understood correctly, the proposed method is a combination of both adversarial training and model ensembling, where in this instance the ensembling is performed by maintaining a moving average of...
[ 5, 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_oU3aTsmeRQV", "iclr_2022_oU3aTsmeRQV", "5T5UUUO4_xb", "iclr_2022_oU3aTsmeRQV", "SNoGaE7IQVZ", "BKzTl9nhXP2", "2tl4x5o1QuB", "SefN2xnVJEr", "lEplWCaipW7", "m3nXXAa4G0i", "BKzTl9nhXP2", "9e2wa4l8zUn", "SNoGaE7IQVZ", "FQ4Cof8v1zL", "CQZHfT-WE8y", "qYHo6SYP1l", "2tl4x5o1QuB", ...
iclr_2022_Azh9QBQ4tR7
Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off
While adversarial training has become the de facto approach for training robust classifiers, it leads to a drop in accuracy. This has led to prior works postulating that accuracy is inherently at odds with robustness. Yet, the phenomenon remains inexplicable. In this paper, we closely examine the changes induced in the...
Accept (Poster)
The authors propose a simple addition to adversarial training methods that improves model performance without significantly changing the complexity of training. The initial reviews raised some questions about whether experiments were sufficiently extensive, but these issues were resolved during the rebuttal and discus...
test
[ "kadT4LUUF-X", "pjCYwys0AhH", "18fkIu6mIOg", "7iABzcRR5v3", "kJp19XZRJuT", "yIqdH0bzQi", "V5SdGw-_L1P", "JLVl0n9sWuN", "I-RV0Y81Cmk", "tiUjz4oXXL", "OST7sFNklH", "jFOLzUTPje2", "bbszG0nn4kF", "E112-cVGKRD", "quLfU_8lY96", "4_R6URpLpSa", "51tXBdYq3KZ", "sZqouZvlDio" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for taking the time to respond to our clarifications. We are delighted to know that you found our work to be amazing. We believe that both of us have converged on most points. Below, we summarize our discussion in this thread and also add more details.\n\nWe think that your definition of features mainly en...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "pjCYwys0AhH", "7iABzcRR5v3", "7iABzcRR5v3", "kJp19XZRJuT", "I-RV0Y81Cmk", "iclr_2022_Azh9QBQ4tR7", "4_R6URpLpSa", "iclr_2022_Azh9QBQ4tR7", "tiUjz4oXXL", "4_R6URpLpSa", "jFOLzUTPje2", "yIqdH0bzQi", "E112-cVGKRD", "51tXBdYq3KZ", "sZqouZvlDio", "iclr_2022_Azh9QBQ4tR7", "iclr_2022_Azh9Q...
iclr_2022_AMpki9kp8Cn
Nonlinear ICA Using Volume-Preserving Transformations
Nonlinear ICA is a fundamental problem in machine learning, aiming to identify the underlying independent components (sources) from data which is assumed to be a nonlinear function (mixing function) of these sources. Recent works prove that if the sources have some particular structures (e.g. temporal structure), they ...
Accept (Poster)
This paper proposes an identifiable nonlinear ICA model based on volume-preserving transformations. The overall approach is very similar to the GIN method published @ ICLR 2020. There is a weak consensus among the reviewers that this paper has some merit, although none pushed for acceptance. After reviewing the paper m...
train
[ "jnqdDYRg9Cy", "YrfhJUglldw", "-x78LFUhoKR", "gbP1R8LwwNf", "CutNiXlIr8c", "kg7te-N3hyt", "3CkmVBv8fSl", "F66q6OWe0QR", "eMQjbmZYwFfO", "3LYDT5pRJJvd", "ZOhN7HsV9US", "vJge4up5I1p", "dERJW0EmoRw", "5GYV8Uny96Jq", "1lnw5Uq1tdv", "siMnfznb9p", "vfhJAaN31w", "pVx27R0Yys-" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comments, and we would like to further clarify your concerns.\n\nHere we highlight that we have introduced a much weaker condition on mixing functions (see Appendix B). The introduced condition only requires the mixing functions to preserve the independence of factors, and hence is much more gener...
[ -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "YrfhJUglldw", "CutNiXlIr8c", "dERJW0EmoRw", "iclr_2022_AMpki9kp8Cn", "kg7te-N3hyt", "eMQjbmZYwFfO", "5GYV8Uny96Jq", "iclr_2022_AMpki9kp8Cn", "1lnw5Uq1tdv", "iclr_2022_AMpki9kp8Cn", "pVx27R0Yys-", "vfhJAaN31w", "siMnfznb9p", "F66q6OWe0QR", "iclr_2022_AMpki9kp8Cn", "iclr_2022_AMpki9kp8C...
iclr_2022_KSugKcbNf9
Transformers Can Do Bayesian Inference
Currently, it is hard to reap the benefits of deep learning for Bayesian methods, which allow the explicit specification of prior knowledge and accurately capture model uncertainty. We present Prior-Data Fitted Networks (PFNs). PFNs leverage large-scale machine learning techniques to approximate a large set of posterio...
Accept (Poster)
This paper presents a method for using transformer models to perform approximate Bayesian inference, in the sense of approximating the posterior predictive distribution for a test example. This seems similar to doing amortized variational inference using a transformer model. The reviewers all found the paper to be cl...
train
[ "LqU4mOLghzm", "9ZTlukdVLad", "2I-I8LfVs8I", "CH1nqrIPlnW", "6Qi7I03ch76", "CxgM0Fx_L7w", "4xer71lZ3se", "EA-oZyzLQU7", "Jh13dIhr9u", "9yhI1qajoZ-", "rTc_zt0GRQI", "ciOO8uHTmJn" ]
[ "author", "author", "public", "author", "author", "author", "public", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank our reviewers and Jakob Macke for their suggestions. We uploaded a final revision with the following changes:\n- We added all hyper-parameters for the BNN on PFNs for tabular data.\n- We ran the MCMC and SVI baseline for higher budgets for Figure 5, to see where their NLL converges for BNNs.\n- We added ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2022_KSugKcbNf9", "2I-I8LfVs8I", "CxgM0Fx_L7w", "9yhI1qajoZ-", "CH1nqrIPlnW", "4xer71lZ3se", "iclr_2022_KSugKcbNf9", "rTc_zt0GRQI", "ciOO8uHTmJn", "iclr_2022_KSugKcbNf9", "iclr_2022_KSugKcbNf9", "iclr_2022_KSugKcbNf9" ]
iclr_2022_iEx3PiooLy
VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects
Perceiving and manipulating 3D articulated objects (e.g., cabinets, doors) in human environments is an important yet challenging task for future home-assistant robots. The space of 3D articulated objects is exceptionally rich in their myriad semantic categories, diverse shape geometry, and complicated part functionalit...
Accept (Poster)
The paper claims to present actionable visual representations for manipulating 3D articulated objects. Specifically, the approach learns to estimate the spatial affordance map as well as the trajectories and their scores. After checking the rebuttal from the authors, all reviewers agree that the paper adds value to the...
train
[ "G5IaBwodMZJ", "po14ju7lesF", "sSBAQatpTA", "kTp1zNkEhyV", "contDM8IEKd", "YmKVB0b6_1o", "S-TAV48vVdG", "jK3LNB5BBJZ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for raising your rating to 6. We are glad that our responses help alleviate your concerns. Thanks again for all your valuable feedback!", "The paper proposes a method for exploration of 3D articulated environments that alternates between collecting interaction data with RL while maximizing a combination ...
[ -1, 6, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, 3, 3 ]
[ "po14ju7lesF", "iclr_2022_iEx3PiooLy", "po14ju7lesF", "po14ju7lesF", "S-TAV48vVdG", "jK3LNB5BBJZ", "iclr_2022_iEx3PiooLy", "iclr_2022_iEx3PiooLy" ]
iclr_2022_bZJbzaj_IlP
A NON-PARAMETRIC REGRESSION VIEWPOINT : GENERALIZATION OF OVERPARAMETRIZED DEEP RELU NETWORK UNDER NOISY OBSERVATIONS
We study the generalization properties of the overparameterized deep neural network (DNN) with Rectified Linear Unit (ReLU) activations. Under the non-parametric regression framework, it is assumed that the ground-truth function is from a reproducing kernel Hilbert space (RKHS) induced by a neural tangent kernel (NTK) ...
Accept (Poster)
The paper contributes a theoretical understanding of training over-parametrized deep neural networks with rectified linear unit (ReLU) activations using gradient descent with respect to square loss in the neural tangent kernel (NTK) regime. Authors consider a non-parametric regression framework wherein the labels are g...
train
[ "1tmMDGVT9Av", "WuVvOlXk-ej", "CHbRAOyNQz", "OphMSkDxsAp", "d1qH6rDkXU", "5q2pMnRFCoA", "bY50rQjN9AN", "CfUPrW7tFrM", "sA3vDBAtR3P", "6Hvi431tXU", "upt-5Wlms-G", "vliyUqlMABK", "gM-zmwUIgm3", "H_KWdVq34JZ", "gFl_mTbAA5y", "c4uGx5yd-GU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " We thank the authors for your feedback. Admittedly, we didn't read the paper very carefully/thoroughly during the review and realized that we made some typos in the first round of review comments. After reading the authors' response, our review score remains the same.", "The paper made technical contribution al...
[ -1, 6, -1, 8, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 3, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "6Hvi431tXU", "iclr_2022_bZJbzaj_IlP", "gFl_mTbAA5y", "iclr_2022_bZJbzaj_IlP", "H_KWdVq34JZ", "iclr_2022_bZJbzaj_IlP", "gM-zmwUIgm3", "upt-5Wlms-G", "iclr_2022_bZJbzaj_IlP", "WuVvOlXk-ej", "vliyUqlMABK", "5q2pMnRFCoA", "upt-5Wlms-G", "OphMSkDxsAp", "c4uGx5yd-GU", "iclr_2022_bZJbzaj_IlP...
iclr_2022_9jInD9JjicF
PoNet: Pooling Network for Efficient Token Mixing in Long Sequences
Transformer-based models have achieved great success in various NLP, vision, and speech tasks. However, the core of Transformer, the self-attention mechanism, has a quadratic time and memory complexity with respect to the sequence length, which hinders applications of Transformer-based models to long sequences. Many ap...
Accept (Poster)
This work reduces the time and memory complexity of Transformer for long sequences by using multiscale pooling to reduce attention from quadratic to linear complexity. Theoretical and experimental results show good results and are very competitive with the state-of-the-art. The paper is well written and experiments are...
train
[ "mnNfIiz-C77", "6gGsN30R4xb", "PfogBA2H7wU", "sGljps0PvZJ", "pZEVQlb-HT3", "1Je4SmFaAHQ", "Su6G_ZIW_wH", "Tz7-5t8we-a", "esRvGeUmIjt", "cFfzxQpuYWy", "vBDm3OYAVGo", "LrGMrKbqg4x", "jY919cf2WmH", "qNgRfhgqNah", "j0pkEo1H77", "5xRisY-DQTU", "pAUGY0wI2fW", "exaW27GH2C", "yhrjd8zC_0"...
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", ...
[ " Thank you for all your feedback! \n\nBelow we address all the concerns you raised and we further compare PoNet to Fastformer and Luna regarding your comments. Before comparing PoNet to Fastformer and Luna, we would like to gently remind that based on the ICLR2022 policy (which we cite below), both Fastformer and ...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, 5, 5 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 5, 4 ]
[ "j0pkEo1H77", "lu2dQwnKPOF", "lu2dQwnKPOF", "lu2dQwnKPOF", "j0pkEo1H77", "iclr_2022_9jInD9JjicF", "Tz7-5t8we-a", "vBDm3OYAVGo", "iclr_2022_9jInD9JjicF", "j0pkEo1H77", "1Je4SmFaAHQ", "5xRisY-DQTU", "yhrjd8zC_0", "j0pkEo1H77", "pAUGY0wI2fW", "iclr_2022_9jInD9JjicF", "4OF33_tB4xB", "4...
iclr_2022_aBO5SvgSt1
Mirror Descent Policy Optimization
Mirror descent (MD), a well-known first-order method in constrained convex optimization, has recently been shown as an important tool to analyze trust-region algorithms in reinforcement learning (RL). However, there remains a considerable gap between such theoretically analyzed algorithms and the ones used in practice....
Accept (Poster)
This paper proposes and studies a variant of policy optimization---mirror descent policy optimization (MDPO)---which was inspired by the mirror descent algorithm in the optimization literature. The proposed algorithm attempts to find a policy parameter that maximizes the expected regularized advantage function, where ...
train
[ "Jr2NAaxdocv", "p9sZrVUFhjk", "z3yjW34Fzt", "loBh0pV5g_8", "GqhXrvP25V", "wzXQo7uPU0Q", "8U_VPAZU1d", "3JTkmnkbcG", "VBAytB8-BCk", "RZEqC7QhEWJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your reply and I will keep my score unchanged.", "The paper connects the optimization method, mirror descent, to the study of the policy optimization method. Based on the mirror descent principle, the paper proposes the MDPO algorithm, which updates the policy via approximately solving a trust-region...
[ -1, 5, -1, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 2, 2 ]
[ "loBh0pV5g_8", "iclr_2022_aBO5SvgSt1", "GqhXrvP25V", "RZEqC7QhEWJ", "p9sZrVUFhjk", "VBAytB8-BCk", "3JTkmnkbcG", "iclr_2022_aBO5SvgSt1", "iclr_2022_aBO5SvgSt1", "iclr_2022_aBO5SvgSt1" ]
iclr_2022_xQUe1pOKPam
Pre-training Molecular Graph Representation with 3D Geometry
Molecular graph representation learning is a fundamental problem in modern drug and material discovery. Molecular graphs are typically modeled by their 2D topological structures, but it has been recently discovered that 3D geometric information plays a more vital role in predicting molecular functionalities. However, t...
Accept (Poster)
This paper studies the problem of how to use 3D molecular geometry information during training to improve performance during prediction time when 3D information is not available. This is a highly interesting problem as obtaining 3D molecular geometry information requires expensive calculations and such information is u...
train
[ "b13F4fwEAsE", "dHCT3sE8eHY", "fqA53tbrCvZ", "LxAxyEKmdRN", "CAjWBXO9WI", "SoU6mVRQf0B", "9mGecCevuv6", "Zo7S0v1qMf", "FxNrsEful4p", "7plX0sCyMRz", "Qn4dLbwmmSG", "Ot6462Jw6PW", "GgT5mXrVks", "tvc6Lvi2A5w", "fjcwcs09V5e", "zVY8hStj48", "O9Oa2U6wi6A", "na9LxWDoEEZ", "hkI-_pdUUuF",...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "...
[ " We would like to thank again all reviewers.\n\nPlease let us know if there are additional questions or concerns before the end of the discussion period. We would be happy to discuss or address any additional comments.\n", " Thank you again for taking the time to discuss the paper with us. We sincerely appreciat...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2022_xQUe1pOKPam", "fqA53tbrCvZ", "CAjWBXO9WI", "iclr_2022_xQUe1pOKPam", "7plX0sCyMRz", "hkI-_pdUUuF", "iclr_2022_xQUe1pOKPam", "zVY8hStj48", "iclr_2022_xQUe1pOKPam", "tvc6Lvi2A5w", "fjcwcs09V5e", "O9Oa2U6wi6A", "na9LxWDoEEZ", "FxNrsEful4p", "FxNrsEful4p", "FxNrsEful4p", "uH-f0...
iclr_2022_B8DVo9B1YE0
Relating transformers to models and neural representations of the hippocampal formation
Many deep neural network architectures loosely based on brain networks have recently been shown to replicate neural firing patterns observed in the brain. One of the most exciting and promising novel architectures, the Transformer neural network, was developed without the brain in mind. In this work, we show that trans...
Accept (Poster)
This paper received 4 unanimous accept (including 1 marginal accept). This well-written and clear paper clarifies the relationship between transformers and a recent exciting model of the medial temporal lobe in neuroscience. There was some clarifications requested by the reviewers that were addressed during the revisio...
train
[ "YquDjTp_grR", "ShJwEOl6xd8", "MxL7dY1vrDT", "zxfwcf2k7O2", "ww20TrrLbSB", "FVlHIQxVy4", "ZxE5NcHkEXo", "2EnbpZLzzg", "KnpqmLzwFod", "gJEtIw59xWT", "TBQrenV-ZX", "Oy7_Z_baZHA", "vBfQ6-zqFfl", "7KkWeTvhzQF", "A_gOpDK994b", "wx7eEbPy1yV", "5-93z7yFu6u", "BbcHbMLpxVb", "xJjkEln14_F"...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_rev...
[ " No problem, and likewise - it has been a pleasure! Your comments have been really helpful to see where our paper was not clear, and have allowed us state our main claims in a much clearer way. The paper is much improved because of it.\n\nHopefully we can talk more at the conference!", "The paper postulates a ta...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "MxL7dY1vrDT", "iclr_2022_B8DVo9B1YE0", "RJZBNYZDFl", "ZxE5NcHkEXo", "KnpqmLzwFod", "2EnbpZLzzg", "jVmp0Xw88j", "Wj3nT81WnmW", "3RCqLyI7U7W", "TBQrenV-ZX", "Oy7_Z_baZHA", "iclr_2022_B8DVo9B1YE0", "7KkWeTvhzQF", "yelZaF4voND", "iclr_2022_B8DVo9B1YE0", "BbcHbMLpxVb", "xJjkEln14_F", "...
iclr_2022_EwqEx5ipbOu
How Well Does Self-Supervised Pre-Training Perform with Streaming Data?
Prior works on self-supervised pre-training focus on the joint training scenario, where massive unlabeled data are assumed to be given as input all at once, and only then is a learner trained. Unfortunately, such a problem setting is often impractical if not infeasible since many real-world tasks rely on sequential lea...
Accept (Poster)
This paper performs a comprehensive investigation on self-supervised pre-training with streaming data. Reviewers agreed that the task studied in this paper is highly practical and important, and the analysis is insightful. Meanwhile, reviewers raised some concerns such as empirical setups and insights. In the revised p...
train
[ "bKIrhXiPagt", "n-fD4HiIUKV", "kfnOzKhKz5q", "5ifG8pbhEMH", "sV9je11PwSj", "6SEtSbLbdL", "VVEMTQwoAVg", "6S-Ra-JyjwG", "dVf12NDoJmH", "dvnqvqPsGyV", "7ngA74atuEU", "wmdccq8Rnkn", "Y2V8xbMrYqi", "gShF-rLhVPm", "Sy-f5O25XGl", "g8ULtidtkXo", "aWDOdydjYtX", "ORqHX-nf2_o", "wU6BEoo-XI...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official...
[ " > **Q8**: Looking at the training curves provided in Fig 14, I am doubtful if the small increase in the loss value can be considered a spike due to distribution drift when training starts for a new chunk. It would have been useful to also see the training curves for SSL-JT for a fair comparison. As also acknowled...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "n-fD4HiIUKV", "5ifG8pbhEMH", "iclr_2022_EwqEx5ipbOu", "dVf12NDoJmH", "iclr_2022_EwqEx5ipbOu", "iclr_2022_EwqEx5ipbOu", "6S-Ra-JyjwG", "ORqHX-nf2_o", "dvnqvqPsGyV", "wmdccq8Rnkn", "ckuka761LnW", "kfnOzKhKz5q", "7ngA74atuEU", "wU6BEoo-XIv", "wmeWtyPjurb", "Sy-f5O25XGl", "t_h6rrQStw", ...
iclr_2022_fy_XRVHqly
Structure-Aware Transformer Policy for Inhomogeneous Multi-Task Reinforcement Learning
Modular Reinforcement Learning, where the agent is assumed to be morphologically structured as a graph, for example composed of limbs and joints, aims to learn a policy that is transferable to a structurally similar but different agent. Compared to traditional Multi-Task Reinforcement Learning, this promising approach ...
Accept (Poster)
This paper studies the role of positional and relational embedding s for multi-task reinforcement learning with transformer-based policies, The paper is well-motivated, the experiment shows its effectiveness against other competitive methods. In the rebuttal period, the authors solved most of the reviews’ questions suc...
train
[ "bjYc0FZ9bZ_", "HHQ2azm2l8O", "uAcGbUkYDIB", "oCU0SRGbqlY", "L8dorMhNm8w", "_UI0HQ4VLD", "9O5db2z36QD", "Hl0LvcLny_9", "Hb9y-aMC5U9", "BUqFCKuyJE3", "ujCx-5Lhf_f", "7L-8z9IUZEF" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and additional experiments. I will keep my current score since I feel the paper does provide good empirical performance although there is some concern about the generalizability of this approach for other tasks. I hope the authors could discuss these points in the main paper to let the ...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 2, -1, 4, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "BUqFCKuyJE3", "iclr_2022_fy_XRVHqly", "_UI0HQ4VLD", "iclr_2022_fy_XRVHqly", "Hl0LvcLny_9", "HHQ2azm2l8O", "ujCx-5Lhf_f", "oCU0SRGbqlY", "iclr_2022_fy_XRVHqly", "7L-8z9IUZEF", "iclr_2022_fy_XRVHqly", "iclr_2022_fy_XRVHqly" ]
iclr_2022_XhF2VOMRHS
A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training
Adversarial Training (AT) is known as an effective approach to enhance the robustness of deep neural networks. Recently researchers notice that robust models with AT have good generative ability and can synthesize realistic images, while the reason behind it is yet under-explored. In this paper, we demystify this pheno...
Accept (Poster)
This paper presents a probabilistic framework that explains why models trained adversarially are robust generators. It received fairly high initial scores. The reviewers thought the work was novel and interesting. They liked that the analysis provided a way to derive a novel training method and sampling algorithms. Rev...
train
[ "Vb_D86fpl-R", "X4OgILxqhmG", "69wAc5v5mHQ", "gwfLWu_tBlG", "t6ooGG7I_b", "oPzpNvEZyh5", "XCdBl5KTtci", "WPoPbXSUffo", "p5_yDO2Cs_G", "4yKNtligz2Y", "Re-rW3mdbcX", "dlJAymSVyA3", "IhRIJtZkuX-", "fxMEAiQCc4c", "xGYqXYDdQIp", "eVvI7-DNHK", "QSdxOkEo8bo", "iwaX698QeVH", "ODppKui3mKu...
[ "official_reviewer", "author", "author", "author", "author", "public", "public", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors to write a comprehensive rebuttal that cleared up all my questions. I have liked this paper and still like it right now. It's nice to see some of the results on discriminative models too. I'm still strongly supporting acceptance.", " Dear Reviewer rJdV,\n\nWe have updated our m...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 4 ]
[ "4yKNtligz2Y", "ODppKui3mKu", "iwaX698QeVH", "ODppKui3mKu", "XCdBl5KTtci", "XCdBl5KTtci", "Re-rW3mdbcX", "iwaX698QeVH", "ODppKui3mKu", "UwXWbIEOgKM", "eVvI7-DNHK", "UwXWbIEOgKM", "ODppKui3mKu", "iclr_2022_XhF2VOMRHS", "QSdxOkEo8bo", "iclr_2022_XhF2VOMRHS", "iclr_2022_XhF2VOMRHS", "...
iclr_2022_1JDiK_TbV4S
Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
Source-free domain adaptation (SFDA) aims to adapt a model trained on labelled data in a source domain to unlabelled data in a target domain without access to the source-domain data during adaptation. Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destro...
Accept (Spotlight)
The paper aims to solve source-free domain adaptation, specifically under measurement shift. The proposed method lowers the domain gap by restoring the source feature distribution with a lightweight approximation. The effectiveness and performance are well validated by extensive experiments on various datasets compar...
train
[ "CYOxMj93Zr", "Et4lOOKlR1b", "WB3Xc6IA_n3", "J7kqifc4p8m", "_5zMu88FdRs", "6woxqTpcRPY", "f-Lw-j6NAAY", "mRo_93PgFtb", "GOLqYu_hOh", "tpXHD46ArC", "a6ofUIYr79j", "LBX4zYJ4um", "0cXdxvAeSCU", "-p_5GbsWnVb", "Q_Oo2uybAnu", "9qESAI91VhI", "7M4YkzOdH0d" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper addresses the problem of source-free domain adaptation (SFDA) under measurement shifts. The measurement shifts are a subset of general domain shifts which arise from a change in measurement system. The proposed method aims to resolve this problem by restoring the target features to the source feature di...
[ 6, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 4, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2022_1JDiK_TbV4S", "GOLqYu_hOh", "iclr_2022_1JDiK_TbV4S", "tpXHD46ArC", "LBX4zYJ4um", "iclr_2022_1JDiK_TbV4S", "CYOxMj93Zr", "f-Lw-j6NAAY", "7M4YkzOdH0d", "a6ofUIYr79j", "WB3Xc6IA_n3", "-p_5GbsWnVb", "mRo_93PgFtb", "6woxqTpcRPY", "9qESAI91VhI", "iclr_2022_1JDiK_TbV4S", "iclr_20...
iclr_2022_CrCvGNHAIrz
Explainable GNN-Based Models over Knowledge Graphs
Graph Neural Networks (GNNs) are often used to learn transformations of graph data. While effective in practice, such approaches make predictions via numeric manipulations so their output cannot be easily explained symbolically. We propose a new family of GNN-based transformations of graph data that can be trained effe...
Accept (Poster)
This paper proposes monotonic graph neural networks (MGNNs) for the transformation of knowledge graphs. Specifically, MGNNs transform a knowledge graph into a colored graph where each node is represented by a numeric feature vector and each edge encodes the node relationship with different colors. The authors provide t...
train
[ "lRizfePQf66", "1fYgVqubPen", "7Bm-PyAy7n", "bf2HeWWzD4Z", "txd7v0KlWxH", "Av8z_z_3I2", "AAEr4lbj9kn", "Z4Jwpa2glCC", "xA7QkNibV1", "OuXtnhTCmkS", "gvGqr_tzKZk", "Yt8WCAINsq", "tMGHOwdX4YG", "1obT5mrOwiI", "Wo3HMGa89kh", "26OjfIMpggF", "JBYATR-kDtS", "_8hF6uDNdq", "f25yugflfEX", ...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_re...
[ " We thank the reviewer for their response. We would like to emphasise that, to the best of our knowledge, our approach is the first to introduce a class of neural networks learning dataset transformations that: (i) can be entirely characterised as Datalog programs, and (ii) can be trained in practice to achieve co...
[ -1, 6, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "7Bm-PyAy7n", "iclr_2022_CrCvGNHAIrz", "6WAS8cpCE8Q", "txd7v0KlWxH", "nU_5Me81gOJ", "AAEr4lbj9kn", "Wo3HMGa89kh", "iclr_2022_CrCvGNHAIrz", "gvGqr_tzKZk", "JBYATR-kDtS", "Yt8WCAINsq", "tMGHOwdX4YG", "1obT5mrOwiI", "Z4Jwpa2glCC", "Z4Jwpa2glCC", "u2-BD-Ua3s2", "iclr_2022_CrCvGNHAIrz", ...
iclr_2022_TfhfZLQ2EJO
SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning
Preference-based reinforcement learning (RL) has shown potential for teaching agents to perform the target tasks without a costly, pre-defined reward function by learning the reward with a supervisor’s preference between the two agent behaviors. However, preference-based learning often requires a large amount of human ...
Accept (Poster)
The topic of learning reward functions from preferences and how to do this efficiently is of high interest to the ML/RL community. All reviewers appreciate the suggested technical approach and the thorough evaluations that demonstrate clear improvements. While the technical novelty of the paper is not entirely compelli...
train
[ "qbks5FGBGwM", "pckg_hdkluI", "5vCbjzgBecC", "nxxN_usKfwa", "dVycTQqtO7s", "-w0omgxZpb8", "BTYsbxhUBlP", "GkCA-ncQzQ4", "6DagX26JOz4", "Hvh0Ey0zj4d", "hc5AGks6yM", "8h5fL8wNeG", "TXn8Yxv5Ckt", "AgYZdJWVI3s" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks for the thorough response and additional experiments. My questions are well addressed.\nI keep my vote for acceptance.", " We are happy to hear that our response and additional results improve our paper, and thank you again for the valuable suggestions and comments.\n\nRegarding the temporal cropping aug...
[ -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "8h5fL8wNeG", "nxxN_usKfwa", "iclr_2022_TfhfZLQ2EJO", "Hvh0Ey0zj4d", "iclr_2022_TfhfZLQ2EJO", "AgYZdJWVI3s", "dVycTQqtO7s", "5vCbjzgBecC", "iclr_2022_TfhfZLQ2EJO", "dVycTQqtO7s", "iclr_2022_TfhfZLQ2EJO", "AgYZdJWVI3s", "5vCbjzgBecC", "iclr_2022_TfhfZLQ2EJO" ]
iclr_2022_VPjw9KPWRSK
Self-Supervised Inference in State-Space Models
We perform approximate inference in state-space models with nonlinear state transitions. Without parameterizing a generative model, we apply Bayesian update formulas using a local linearity approximation parameterized by neural networks. It comes accompanied by a maximum likelihood objective that requires no supervisio...
Accept (Poster)
This paper presents a method for inference in state-space models with non-linear dynamics and linear-Gaussian observations. Instead of parameterizing a generative model, the paper proposes to parameterize the conditional distribution of current latent states given previous latent states and observations using locally l...
train
[ "KNzUI7tpwZR", "oUZAIDTUn3S", "CbZALCx2yZ", "FmciCb-ZQS", "H6K1aQA1XOj", "TND9SLnT2yq", "xO4StYBVAOt", "cTdr-r51Sop", "69cy2JsLf9L", "b_w_gz0OEL", "oD7YzSeWOy3", "bo2zEEN643Z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you to the authors for the detailed response. I found the latents and the noise experiments somewhat helpful, so I would recommend including them in the supplementary material. The audio files are less useful. \n\nI will keep my initial score. Thanks again for responding thoroughly.", " I thank the author...
[ -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, -1, 4, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "b_w_gz0OEL", "69cy2JsLf9L", "xO4StYBVAOt", "iclr_2022_VPjw9KPWRSK", "iclr_2022_VPjw9KPWRSK", "FmciCb-ZQS", "TND9SLnT2yq", "H6K1aQA1XOj", "bo2zEEN643Z", "oD7YzSeWOy3", "iclr_2022_VPjw9KPWRSK", "iclr_2022_VPjw9KPWRSK" ]
iclr_2022_kG0AtPi6JI1
Visual Representation Learning over Latent Domains
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables the learning of compact models that span multiple visual domains, these rely on the presence of domain labels, in turn requiring laborious curation of datasets. This paper proposes ...
Accept (Poster)
The problem setting studied in this paper is an extension of the problem setting of multi-domain learning, where domain information is missing in training. This is an interesting and practical problem setting. However, regarding technical novelty, the contributions are relatively limited. Specifically, first, the overa...
val
[ "Zk8X5stpRKJ", "na0NMX2XxO-", "O2S-00B0IB5", "6RAMigXHuV", "Uoy4JlfPK7D", "AlOyQMlMUsH", "SgvEXmVG5JA", "iwBwvX9B-Jb", "uV16b80CkvH", "Jqw9xJLc6Z2", "xRMPbqL85S" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Overall, the authors have addressed my main concerns and I am happy to keep my original rate of '6: marginally above the acceptance threshold'.", " We thank the reviewers for their constructive feedback and are pleased that our manuscript has been well received overall:\n\n**R1**: *\"The paper is well motivated...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 2 ]
[ "AlOyQMlMUsH", "iclr_2022_kG0AtPi6JI1", "Uoy4JlfPK7D", "xRMPbqL85S", "Jqw9xJLc6Z2", "uV16b80CkvH", "iwBwvX9B-Jb", "iclr_2022_kG0AtPi6JI1", "iclr_2022_kG0AtPi6JI1", "iclr_2022_kG0AtPi6JI1", "iclr_2022_kG0AtPi6JI1" ]
iclr_2022_d_2lcDh0Y9c
DriPP: Driven Point Processes to Model Stimuli Induced Patterns in M/EEG Signals
The quantitative analysis of non-invasive electrophysiology signals from electroencephalography (EEG) and magnetoencephalography (MEG) boils down to the identification of temporal patterns such as evoked responses, transient bursts of neural oscillations but also blinks or heartbeats for data cleaning. Several works ha...
Accept (Poster)
This paper presents a novel method for identifying stimuli-induced patterns in MEG and EEG signals. The authors develop a novel statistical point process model and a fast EM algorithm to learn the parameters. Discussion of this paper centered around: how to fit hyperparameters, and similarity and comparison with other...
train
[ "TZrubHS68sh", "BIdPFDus7Q1", "p9BG_z2JT6N", "43q_wuUcM58" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This is an exciting manuscript. It was a delight to read and evaluate it. The manuscript is well written, and the methodology is presented and validated with illustrative cases. Provided code runs without problems, and it is excellently documented. An introduced point process-based technique seems to be adequately...
[ 8, 8, 6, 3 ]
[ 4, 4, 4, 3 ]
[ "iclr_2022_d_2lcDh0Y9c", "iclr_2022_d_2lcDh0Y9c", "iclr_2022_d_2lcDh0Y9c", "iclr_2022_d_2lcDh0Y9c" ]
iclr_2022_FEDfGWVZYIn
RelaxLoss: Defending Membership Inference Attacks without Losing Utility
As a long-term threat to the privacy of training data, membership inference attacks (MIAs) emerge ubiquitously in machine learning models. Existing works evidence strong connection between the distinguishability of the training and testing loss distributions and the model's vulnerability to MIAs. Motivated by existing ...
Accept (Spotlight)
The paper proposes an approach and specific training algorithm to defend against membership inference attacks (MIA) in machine learning models. Existing MIA attacks are relatively simple and rely on the test loss distribution at the query point and therefore the proposed algorithm sets a positive target mean training l...
train
[ "cWcNH7NiZ5l", "KSUdkK4DwWO", "OXYjSjgW2N5", "0w3T3vlyqZe", "nllcodJoDo", "e0O-la-XgkO", "04pZ5bUxxKT" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "To defend the common membership inference attacks, this paper proposes a relaxed loss with a more achievable learning target to reduce the attack accuracy and also help with the generalization gap of learning models. Extensive evaluations on five datasets with diverse modalities show that the proposed method can ...
[ 8, -1, -1, -1, -1, 8, 8 ]
[ 2, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_FEDfGWVZYIn", "OXYjSjgW2N5", "e0O-la-XgkO", "cWcNH7NiZ5l", "04pZ5bUxxKT", "iclr_2022_FEDfGWVZYIn", "iclr_2022_FEDfGWVZYIn" ]
iclr_2022_5kq11Tl1z4
IGLU: Efficient GCN Training via Lazy Updates
Training multi-layer Graph Convolution Networks (GCN) using standard SGD techniques scales poorly as each descent step ends up updating node embeddings for a large portion of the graph. Recent attempts to remedy this sub-sample the graph that reduces compute but introduce additional variance and may offer suboptimal pe...
Accept (Poster)
Overall, the paper presents the idea of caching and using stale information for updates, instead of sub-sampling, to speed up graph convolutional network training. Reviewers liked the idea, but there were also concerns about experimental comparisons. In the rebuttal the authors did provide more evidence of comparison with ot...
train
[ "n-nQ0mhd50j", "wlrQsGvHtLd", "E7_8Sp-5FIg", "QA3fsPADZaK", "Csrq3GkZDBf", "OYAVWxl_RCv", "wmHIFNJYW1B", "IjnJTObykid", "EHVNjKxciY", "Xeju9f_-gwk", "Kh2LfU_AiEG", "0Km6fGl31Tq", "qRpT0zQsqeH", "uT5mfhIAwQx", "zI-Ia1JHW9q", "bpkUzSJ8XEG", "yufYaTDYden" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer jiPr,\n\nWe would like to thank you for the constructive feedback on our paper. We have tried to incorporate the feedback in the latest draft, hopefully this positively impacts your opinion of the paper.\n\nGentle follow up on the responses and if there are any further questions/concerns we can addr...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "OYAVWxl_RCv", "IjnJTObykid", "Csrq3GkZDBf", "iclr_2022_5kq11Tl1z4", "0Km6fGl31Tq", "yufYaTDYden", "QA3fsPADZaK", "bpkUzSJ8XEG", "yufYaTDYden", "bpkUzSJ8XEG", "iclr_2022_5kq11Tl1z4", "qRpT0zQsqeH", "uT5mfhIAwQx", "QA3fsPADZaK", "EHVNjKxciY", "iclr_2022_5kq11Tl1z4", "iclr_2022_5kq11Tl...
iclr_2022_4Muj-t_4o4
Learning a subspace of policies for online adaptation in Reinforcement Learning
Deep Reinforcement Learning (RL) is mainly studied in a setting where the training and the testing environments are similar. But in many practical applications, these environments may differ. For instance, in control systems, the robot(s) on which a policy is learned might differ from the robot(s) on which a policy wil...
Accept (Poster)
The paper tackles the problem of generalizing to a new environment by learning a small set of anchor policies (even just 2 for the final approach) which span a sub-space that can be searched efficiently in a new environment. The discussion and additional experiments managed to convince most reviewers that the method in...
test
[ "y757uCksq_A", "A96sy71zt8w", "83X41Qfvb5J", "BxuXoLkrtfi", "s2cfb3cFym", "T1zD0geylCs", "PijZixNKe-n", "rYOv7sGOU_", "Du-HPFMJWVn", "fl-GpUjliXX", "C_qlSw-rmk3", "_YfBoChl3dT", "hYq0gIZqpS2", "aKpscYD5XYk", "Wk7qGgBn4fo", "EL85W0ZElmT", "9zu2tgXBYg0", "GNp-EGZjRBP", "dZU_bf3WnD3...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_r...
[ " Hi, Thanks for the feedback\n\nWith the short rebuttal period, and the many experiments we had to launch, we made two mistakes:\n- First, we have updated the website to answer the reviewer qrCv, and switched to the 'full' version of the website with the link to the arxiv version of the paper. We have corrected th...
[ -1, 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ -1, 4, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "83X41Qfvb5J", "iclr_2022_4Muj-t_4o4", "9zu2tgXBYg0", "s2cfb3cFym", "PijZixNKe-n", "iclr_2022_4Muj-t_4o4", "rYOv7sGOU_", "Du-HPFMJWVn", "aKpscYD5XYk", "T1zD0geylCs", "4srpb9ZX5fI", "A96sy71zt8w", "dZU_bf3WnD3", "T1zD0geylCs", "4srpb9ZX5fI", "iclr_2022_4Muj-t_4o4", "A96sy71zt8w", "d...
iclr_2022_Z8FzvVU6_Kj
SUMNAS: Supernet with Unbiased Meta-Features for Neural Architecture Search
One-shot Neural Architecture Search (NAS) usually constructs an over-parameterized network, which we call a supernet, and typically adopts sharing parameters among the sub-models to improve computational efficiency. One-shot NAS often repeatedly samples sub-models from the supernet and trains them to optimize the share...
Accept (Poster)
The paper proposes a supernet learning strategy for NAS based on meta-learning to tackle the knowledge forgetting issue. Forgetting happens when training a sampled sub-model to optimize the shared parameters overrides the previous knowledge learned by the other sub-models. The main idea of the paper is to consider trai...
train
[ "DIB4zNwOzp", "93uboNaRXw-", "Mk8q1-4QpDV", "6vOoZ8C1_RC", "-Qu4EbWoYF", "-ntxyDq8Y0C", "G3xxzwOYfFS", "a89HZ9x5DZi", "nueSElNK95U", "NHvvaZNxFS7", "SSyEy8JgEwQ", "kOdFtMRIHmY", "8nWdUH08tyh", "UEl7rnQbBfb", "-_jZzG3hMCT" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " My question on adaptation steps has been addressed and I believe including the new results (for adaptation step=4, 5, 6) to Table 4 and adding corresponding discussions would strengthen the paper further.\n\nThe rebuttal does not change my overall assessment and I'd like to keep my original score. ", "This pape...
[ -1, 6, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "kOdFtMRIHmY", "iclr_2022_Z8FzvVU6_Kj", "NHvvaZNxFS7", "-_jZzG3hMCT", "G3xxzwOYfFS", "iclr_2022_Z8FzvVU6_Kj", "8nWdUH08tyh", "iclr_2022_Z8FzvVU6_Kj", "93uboNaRXw-", "-_jZzG3hMCT", "-ntxyDq8Y0C", "UEl7rnQbBfb", "-ntxyDq8Y0C", "iclr_2022_Z8FzvVU6_Kj", "iclr_2022_Z8FzvVU6_Kj" ]
iclr_2022_7YDLgf9_zgm
Continual Learning with Recursive Gradient Optimization
Learning multiple tasks sequentially without forgetting previous knowledge, called Continual Learning(CL), remains a long-standing challenge for neural networks. Most existing methods rely on additional network capacity or data replay. In contrast, we introduce a novel approach which we refer to as Recursive Gradient O...
Accept (Spotlight)
This paper proposes an innovative method for continual learning that modifies the direction of gradients on a new task to minimise forgetting on previous tasks without data replay. The method is mathematically rigorous with a strong theoretical analysis and excellent empirical results across multiple continual learning...
train
[ "cU6BpuEnaBM", "kmepP3MWOjQ", "5Vk1WkBUcNE", "ZtWWCTWfmzo", "naHPL9otGX", "D0ApK1-MU2b", "xurZZV6fyWe", "zF2StEaPGnM", "l3Zh8Qsfm7N", "UU9fCNomyEx", "2P7qW8mALBq", "zuW2fimqufm", "Wlw2XBY5XCS", "7mCFkLMN4-K", "qD0uR1_N_m" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "public", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper introduces a new method, Recursive Gradient Optimization (RGO), for continual learning in the task-incremental scenario. This method modifies the direction of gradients on a new task in order to minimise forgetting on previous tasks, and unlike many previous works, does not require storing past raw data ...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_7YDLgf9_zgm", "5Vk1WkBUcNE", "naHPL9otGX", "xurZZV6fyWe", "D0ApK1-MU2b", "zuW2fimqufm", "iclr_2022_7YDLgf9_zgm", "l3Zh8Qsfm7N", "qD0uR1_N_m", "7mCFkLMN4-K", "Wlw2XBY5XCS", "cU6BpuEnaBM", "iclr_2022_7YDLgf9_zgm", "iclr_2022_7YDLgf9_zgm", "iclr_2022_7YDLgf9_zgm" ]
iclr_2022_tgcAoUVHRIB
Neural Methods for Logical Reasoning over Knowledge Graphs
Reasoning is a fundamental problem for computers and deeply studied in Artificial Intelligence. In this paper, we specifically focus on answering multi-hop logical queries on Knowledge Graphs (KGs). This is a complicated task because, in real world scenarios, the graphs tend to be large and incomplete. Most previous wo...
Accept (Poster)
This paper focuses on answering complex logical queries over an incomplete KG and uses neural networks to do so, flexibly handling multiple operations from FOL. Reviewers agree that empirical performance is impressive. One reviewer gave a strong accept, one leaned to accept, and two leaned to reject. Overall, the ...
train
[ "yRF13_T6Nbp", "glS1quzOmuu", "kUonGw5DOF0", "m_3I0hIw6FL", "w8f8pDWSNvq", "pcQ_uBcEA23", "84b6uj2kSSe", "tax-B65-LRF" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "New model for knowledge graph reasoning over complex logical queries.\nShows performance improvements over state of the art methods.\n The paper describes a new approach for learning and inference in knowledge graphs when the queries have logical structure.\nThe main idea is to use neural networks to implement dif...
[ 6, -1, -1, -1, -1, 5, 5, 8 ]
[ 4, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2022_tgcAoUVHRIB", "tax-B65-LRF", "yRF13_T6Nbp", "pcQ_uBcEA23", "84b6uj2kSSe", "iclr_2022_tgcAoUVHRIB", "iclr_2022_tgcAoUVHRIB", "iclr_2022_tgcAoUVHRIB" ]
iclr_2022_Vjki79-619-
Proving the Lottery Ticket Hypothesis for Convolutional Neural Networks
The lottery ticket hypothesis states that a randomly-initialized neural network contains a small subnetwork which, when trained in isolation, can compete with the performance of the original network. Recent theoretical works proved an even stronger version: every sufficiently overparameterized (dense) neural network co...
Accept (Poster)
The paper presents interesting new results for pruning random convolutional networks to approximate a target function. It follows a recent line of work in the topic of pruning by learning. The results are novel, and the techniques interesting. There are some technical issues that are easy to fix within the camera ready...
val
[ "r-hdg2nIJa", "ZPmBWb2XZOH", "1VpFUtthjIq", "8vAq-9NCyr", "9jBLdeNuva_", "RoDgi7O3p9-", "-9X29opOV1", "lJiKdGN4B0y", "jKQQlGMxCKe", "9aNoZXg3CM1", "cTEKVkBpjum", "7kPBTP20Tv", "-Hej-IDPi7u", "xEeBMNWJsOaY", "DStJgvgqyEA", "kN-ywWzMxr", "kI3mrSPSohY", "MMeYAlKD-3o", "u52UfSBOb0j",...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "public", "author", "public", "author", "public", "author", "public", "author", "author", ...
[ "The Strong Lottery Ticket Hypothesis (SLTH) says that any (sufficiently large) randomly initialized network can be pruned to obtain a network which performs well on a given task. This paper proves the Strong Lottery Ticket Hypothesis for convolutional neural networks by showing that given a target convolutional ne...
[ 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2022_Vjki79-619-", "1VpFUtthjIq", "-9X29opOV1", "iclr_2022_Vjki79-619-", "RoDgi7O3p9-", "lJiKdGN4B0y", "9aNoZXg3CM1", "u52UfSBOb0j", "7kPBTP20Tv", "cTEKVkBpjum", "XnmuVnSdfW", "-Hej-IDPi7u", "xEeBMNWJsOaY", "DStJgvgqyEA", "kN-ywWzMxr", "kI3mrSPSohY", "MMeYAlKD-3o", "iclr_2022...
iclr_2022_mfwdY3U_9ea
Igeood: An Information Geometry Approach to Out-of-Distribution Detection
Reliable out-of-distribution (OOD) detection is fundamental to implementing safer modern machine learning (ML) systems. In this paper, we introduce Igeood, an effective method for detecting OOD samples. Igeood applies to any pre-trained neural network, works under various degrees of access to the ML model, does not re...
Accept (Poster)
This paper introduces a novel approach for out of distribution detection that generates scores from a trained DNN model by using the Fisher-Rao distance between the feature distributions of a given input sample at the logit layer and the lower layers of the model and the corresponding mean feature distributions over t...
val
[ "N4cKKsX_Jei", "pucHqFHRFcW", "oyhO9w91b4c", "cJ6tf1YeLL2", "odE5k3xdsqR", "GIPD5ZHfcQe", "NRK9zby3971", "L0ZUgFXMtmZ", "Mkrf9_d13vm", "OlTXbd-idr", "VVkxvg8k1kz", "9G8BjwOV6P", "rOCKxA3Cl11" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ " We want to acknowledge the reviewers for their thoughtful and valuable comments and efforts towards improving our manuscript. We believe the discussion period was very useful to truly revise and improve the presentation of our results.", "This paper presents IGEOOD, a new method for detecting OOD samples by usi...
[ -1, 6, -1, 8, -1, -1, 6, 5, -1, -1, -1, -1, -1 ]
[ -1, 3, -1, 4, -1, -1, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2022_mfwdY3U_9ea", "iclr_2022_mfwdY3U_9ea", "VVkxvg8k1kz", "iclr_2022_mfwdY3U_9ea", "OlTXbd-idr", "9G8BjwOV6P", "iclr_2022_mfwdY3U_9ea", "iclr_2022_mfwdY3U_9ea", "L0ZUgFXMtmZ", "cJ6tf1YeLL2", "pucHqFHRFcW", "NRK9zby3971", "iclr_2022_mfwdY3U_9ea" ]
iclr_2022_AjGC97Aofee
Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning
Several image super-resolution (SR) networks have been proposed of late for efficient SR, achieving promising results. However, they are still not lightweight enough and neglect to be extended to larger networks. At the same time, model compression techniques, like neural architecture search and knowledge distillation,...
Accept (Poster)
This paper receives positive reviews. The authors provide additional results and justifications during the rebuttal phase. All reviewers find this paper interesting and the contributions are sufficient for this conference. The area chair agrees with the reviewers and recommends it be accepted for presentation.
val
[ "nzm3ATATRGC", "t3rbtsADMjg", "KYLVg4_PSDK", "zclfVxWh9K", "aag6E66-tqg", "_s0UoIdedq0", "ngjblNaAeM", "H5ARNOlYks", "aePo9FvGNFv", "kGp1SkaPH6", "-E9wB8bc5I_", "icWO02tnNe", "neJEVlzUjrH", "NQEJMgINzPJ", "yc_iRcEyR7t", "biWAhlFl8M" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers and area chairs for their valuable time and comments. After discussing with reviewers and providing more clarifications/results/analyses, we would like to give a brief response.\n\nAll reviewers now agree with the novelty, extensive experiments (e.g., ablation study, main comparisons, and s...
[ -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 5, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_AjGC97Aofee", "KYLVg4_PSDK", "zclfVxWh9K", "NQEJMgINzPJ", "_s0UoIdedq0", "-E9wB8bc5I_", "iclr_2022_AjGC97Aofee", "biWAhlFl8M", "yc_iRcEyR7t", "yc_iRcEyR7t", "ngjblNaAeM", "NQEJMgINzPJ", "NQEJMgINzPJ", "iclr_2022_AjGC97Aofee", "iclr_2022_AjGC97Aofee", "iclr_2022_AjGC97Aofee" ...
iclr_2022_Qg2vi4ZbHM9
StyleAlign: Analysis and Applications of Aligned StyleGAN Models
In this paper, we perform an in-depth study of the properties and applications of aligned generative models. We refer to two models as aligned if they share the same architecture, and one of them (the child) is obtained from the other (the parent) via fine-tuning to another domain, a common practice in transfer learnin...
Accept (Oral)
The paper provides an interesting analysis of aligned GAN models. The paper shows that when a model is obtained (fine-tuned) from another, then the corresponding hidden semantic spaces are aligned. The paper uses this property to show that without any additional architecture or training, the models can perform diverse ...
train
[ "8IwvvopkDJb", "-q-HVfdZnSV", "Y762J6Grstn", "dMiSALQ5Ykq", "pFuOfAG-ij", "JvWz2j35Xcj", "tevlH3xnsYN", "5eEvhYV5h6W", "7TFg3kNgKtK", "9-fX68vAN8", "ndrcI1XpBvf", "r5jhnw3S5o9", "DyuxSQNQhO", "fzXOqXvBfJ8", "OsMFaWtSNHb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I maintain my stance that the paper should be accepted - this is a valuable study", " Thanks for your reply. All my concerns have been addressed. While the fine-tuning method is the same as prior works, I agree with other reviewers that the analysis of the latent space is interesting and can contribute to other r...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "7TFg3kNgKtK", "5eEvhYV5h6W", "dMiSALQ5Ykq", "pFuOfAG-ij", "JvWz2j35Xcj", "tevlH3xnsYN", "9-fX68vAN8", "fzXOqXvBfJ8", "OsMFaWtSNHb", "r5jhnw3S5o9", "DyuxSQNQhO", "iclr_2022_Qg2vi4ZbHM9", "iclr_2022_Qg2vi4ZbHM9", "iclr_2022_Qg2vi4ZbHM9", "iclr_2022_Qg2vi4ZbHM9" ]
iclr_2022_vfsRB5MImo9
Towards Continual Knowledge Learning of Language Models
Large Language Models (LMs) are known to encode world knowledge in their parameters as they pretrain on a vast amount of web corpus, which is often utilized for performing knowledge-dependent downstream tasks such as question answering, fact-checking, and open dialogue. In real-world scenarios, the world knowledge stor...
Accept (Poster)
The paper introduces the problem of continual knowledge (language) learning. The authors point out the interesting duality between continual learning and knowledge learning where: in knowledge learning one must avoid forgetting time-invariant knowledge (avoid forgetting in CL), be able to acquire new knowledge (learn n...
train
[ "x_6heTJo910", "SmLMz5kvaSk", "ZgqtKbIuqrE", "Krcwk3--3SN", "Y8kmfAmGzZ1", "kX_-BNnWz4_", "tV8FOmso_Gg", "w7hfY-NPgJN", "exK1biKTlg2", "Ot-6NNXeVf", "QWxMzVXvBSB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for answering my questions. I stick to my rating since the rebuttal doesn't fully address my concerns.\n", " > Weakness 5: Have you tried different sizes of T5 models? Maybe the GAP between vanilla and CKL methods will be smaller given larger models.\n\nThank you for your comment. The authors agree th...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "Y8kmfAmGzZ1", "ZgqtKbIuqrE", "Ot-6NNXeVf", "QWxMzVXvBSB", "w7hfY-NPgJN", "tV8FOmso_Gg", "exK1biKTlg2", "iclr_2022_vfsRB5MImo9", "iclr_2022_vfsRB5MImo9", "iclr_2022_vfsRB5MImo9", "iclr_2022_vfsRB5MImo9" ]
iclr_2022__hszZbt46bT
Anomaly Detection for Tabular Data with Internal Contrastive Learning
We consider the task of finding out-of-class samples in tabular data, where little can be assumed on the structure of the data. In order to capture the structure of the samples of the single training class, we learn mappings that maximize the mutual information between each sample and the part that is masked out. The ...
Accept (Poster)
The paper proposes contrastive learning for tabular data to improve anomaly detection. Strengths: - Interesting and important problem. - Usage of contrastive learning for anomaly detection in general multi-variate datasets is novel (as prior work mostly focuses on images) - Extensive experiments with comparisons to mu...
train
[ "msjP7TYpPkN", "R9bxqGVfwNz", "IAvORr2yE6A", "7o4hNiT0c1Y", "IF2U5-I_Sz", "Eo-L8HZ7KcQ", "OArjxrY0WSd", "qIEC3wdnP0p", "ju2ovkMTjPW", "aXq0qf1USCE", "QbJdkD6QHgq", "C1Zum-_ubAc", "z-bEe_rISpw", "lMo98960A6", "bPVzX67XHkE", "mOZpY_nqf5Z", "H7mhv4nh6kG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces an anomaly detection algorithm for tabular data based on contrastive learning in the semi-supervised setting (training set assumed to be only normal data). The contrastive learning task is based on masking: the features from a single training example are split into two groups (pairs), one bei...
[ 8, -1, 6, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022__hszZbt46bT", "7o4hNiT0c1Y", "iclr_2022__hszZbt46bT", "IF2U5-I_Sz", "IAvORr2yE6A", "OArjxrY0WSd", "ju2ovkMTjPW", "iclr_2022__hszZbt46bT", "aXq0qf1USCE", "bPVzX67XHkE", "iclr_2022__hszZbt46bT", "z-bEe_rISpw", "IAvORr2yE6A", "msjP7TYpPkN", "qIEC3wdnP0p", "H7mhv4nh6kG", "iclr...
iclr_2022_JzNB0eA2-M4
On the Convergence of the Monte Carlo Exploring Starts Algorithm for Reinforcement Learning
A simple and natural algorithm for reinforcement learning (RL) is Monte Carlo Exploring Starts (MCES), where the Q-function is estimated by averaging the Monte Carlo returns, and the policy is improved by choosing actions that maximize the current estimate of the Q-function. Exploration is performed by "exploring start...
Accept (Poster)
The paper considers the convergence of the Monte Carlo Exploring Start (MCES) algorithm, a basic method in RL. Although the method is very simple and known for a long time, the condition for its convergence is not completely understood. One of the latest results is from almost 20 years ago (Tsitsiklis, "On the Converg...
train
[ "ipE1I9p8TDG", "UJt8HNTj1C4", "2ycJ4WcSjEj", "U6Jchyj7UjR", "GsIY2IZJLqL", "iA-6S4rOkPf", "i5Al7LqXJ69", "ne-HoC82Hyx", "YaQ8n8vpAA8", "bBLZ33yNsm1", "VD6dCrNPThFt", "t_YnPhzCMH7", "oVsmt9kAsXH", "FOjS6hZJyaJ", "jX3PyCgHdj-" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comment! The AlphaZero learning process can be roughly seen as a nested loop: there is the outer loop where the agent starts the game from an empty board and plays to the end of the game, and then for each state s on this trajectory, there is the inner loop of MCTS, which is a planning phase th...
[ -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "2ycJ4WcSjEj", "ne-HoC82Hyx", "bBLZ33yNsm1", "GsIY2IZJLqL", "YaQ8n8vpAA8", "iclr_2022_JzNB0eA2-M4", "oVsmt9kAsXH", "jX3PyCgHdj-", "FOjS6hZJyaJ", "VD6dCrNPThFt", "oVsmt9kAsXH", "iA-6S4rOkPf", "iclr_2022_JzNB0eA2-M4", "iclr_2022_JzNB0eA2-M4", "iclr_2022_JzNB0eA2-M4" ]
iclr_2022__X90SIKbHa
A Class of Short-term Recurrence Anderson Mixing Methods and Their Applications
Anderson mixing (AM) is a powerful acceleration method for fixed-point iterations, but its computation requires storing many historical iterations. The extra memory footprint can be prohibitive when solving high-dimensional problems in a resource-limited machine. To reduce the memory overhead, we propose a novel class ...
Accept (Poster)
The reviewers found this work well-motivated and the additional experiments conducted during the response phase were greatly appreciated. Anderson's acceleration appears to be a simple device that may be of great value to this field, and therefore this work is a timely contribution. The presented theoretical results ju...
train
[ "FP72BNJdK3", "wE08tKA5Q2W", "drhPKwMwJC2", "AUNehnH9ckJ", "4dMAPguac0p", "z8qWg1VQQNk", "3DQ9y22dv9", "gcE3fPl4ahs", "PI-GsMduop7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks a lot for your support and raising the score. We add some clarifications, hoping them can further address your concerns.\n\nQ1. Experimental comparisons to standard (quasi-)Newton methods. \nA1. The compared quasi-Newton methods in our experiments are listed as follows. \n(1) Cubic-regularized optimizati...
[ -1, -1, 8, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 2, -1, -1, -1, -1, 3, 3 ]
[ "wE08tKA5Q2W", "z8qWg1VQQNk", "iclr_2022__X90SIKbHa", "PI-GsMduop7", "iclr_2022__X90SIKbHa", "drhPKwMwJC2", "gcE3fPl4ahs", "iclr_2022__X90SIKbHa", "iclr_2022__X90SIKbHa" ]
iclr_2022_vJb4I2ANmy
Noisy Feature Mixup
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective method for data augmentation that combines the best of interpolation based training and noise injection schemes. Rather than training with convex combinations of pairs of examples and their labels, we use noise-perturbed convex combinations of pairs o...
Accept (Poster)
This paper introduces Noisy Feature Mixup: an extension of input mixup and manifold mixup to all layers of a neural net, for the purpose of improving robustness and generalization in supervised learning. Experimental validation supports the increased robustness to attacks on the input data. The reviewers find the paper...
train
[ "UUzrCXxoIRL", "uA4RPXuUS7g", "6jdD3wsh-gb", "cJlymCo1ZQ2", "AJb8wOwTHlj", "ukiKCuHnfLf", "UB9hbvm0AhN", "LXkts5KP5fn", "uvHAds2r0QM", "xLbJOni8LT5", "upRNjVtGPQ8", "hvs2zt-pOb", "Bm8UYLG1H6", "cfBGBiiRXsv" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for pointing out the typo in the citation. As for Figure 7, you are right. We have taken the expectation over the mixing weights in the approximated loss.", " I'm glad to see that the revision has taken into account my concerns above. Figure 7 in particular gives some confidence to the correctness of ...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 8, 8 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "uA4RPXuUS7g", "xLbJOni8LT5", "LXkts5KP5fn", "iclr_2022_vJb4I2ANmy", "iclr_2022_vJb4I2ANmy", "UB9hbvm0AhN", "AJb8wOwTHlj", "cfBGBiiRXsv", "cJlymCo1ZQ2", "Bm8UYLG1H6", "hvs2zt-pOb", "iclr_2022_vJb4I2ANmy", "iclr_2022_vJb4I2ANmy", "iclr_2022_vJb4I2ANmy" ]
iclr_2022_mdUYT5QV0O
Training Structured Neural Networks Through Manifold Identification and Variance Reduction
This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term for promoting desired structures. RMDA does not incur computation additional to proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through...
Accept (Poster)
The paper develops optimization algorithms for fitting structured neural networks. It focuses on the manifold identification property, which guarantees after finitely many iterations, all iterates have the same sparsity structure as at convergence. The proposed method extends dual averaging to include momentum. The pap...
train
[ "dvp-IGbEI3Z", "sYY_QREJdX-", "eJufcgem2Pi", "6V_233QSb38", "peb-9yS2Cu", "ZQRgQwVcJRG", "0AimpG0XbHA", "N_1iXaUC9d6", "6iI438w6vE", "OchnAiSJXxx", "XwRoE5WFapi", "jPofpuoc8N", "8vJHNpR_uE5", "zCKLqEjpU1O" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author" ]
[ " We would like to thank all reviewers again for their time in engaging in discussions with us and for the evaluation of the revisions.\nAs suggested by Reviewers mrfp and MELK, we have improved the visualization of the figures. The new version now should be hopefully more legible. We have also fixed the page forma...
[ -1, 8, -1, -1, -1, 8, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_mdUYT5QV0O", "iclr_2022_mdUYT5QV0O", "6V_233QSb38", "sYY_QREJdX-", "0AimpG0XbHA", "iclr_2022_mdUYT5QV0O", "iclr_2022_mdUYT5QV0O", "6iI438w6vE", "sYY_QREJdX-", "peb-9yS2Cu", "ZQRgQwVcJRG", "8vJHNpR_uE5", "sYY_QREJdX-", "0AimpG0XbHA" ]
iclr_2022_xLfAgCroImw
Energy-Based Learning for Cooperative Games, with Applications to Valuation Problems in Machine Learning
Valuation problems, such as feature interpretation, data valuation and model valuation for ensembles, become increasingly more important in many machine learning applications. Such problems are commonly solved by well-known game-theoretic criteria, such as Shapley value or Banzhaf value. In this work, we present a nove...
Accept (Poster)
This paper considers the valuation problem for a cooperative game, and shows that some classical metrics (e.g. Shapley value), can be considered as approximations to the maximum entropy. Reviewers were generally very positive. They especially praised the novelty and writing quality, while having some concerns about th...
train
[ "D00PG334eKf", "wFAYSKfEK3t", "-Ai42KFgFuY", "6LMHzhyIajF", "xWRmJYRuGmI", "pz5o2Q4Jaib", "9bzKvPCBwP0", "J6zOmJXcwuh", "z91NwxGXlOm", "fCsgljqcjRE", "_htJQmB7DLh", "Fahtqdh7ZKC", "cmWa1enoAH", "cIWM8euscaE", "H_aF2pvQhx", "RRxqHkc5AEr", "MmaRGsNG3_d", "tpB77fiviBL", "ni98ebqQ58R...
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author"...
[ "This paper studies valuation problems from cooperative game theory. There are $n$ agents and a valuation function $F: [n] \\to R$ where $F(S)$ is the collective payoff of the coalition $S \\subseteq [n]$. The goal is to use this function $F$ to define an importance vector $\\phi(F) \\in R^n$. Examples include the ...
[ 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_xLfAgCroImw", "D00PG334eKf", "6LMHzhyIajF", "wFAYSKfEK3t", "Fahtqdh7ZKC", "9bzKvPCBwP0", "-Ai42KFgFuY", "iclr_2022_xLfAgCroImw", "iclr_2022_xLfAgCroImw", "_htJQmB7DLh", "cmWa1enoAH", "iclr_2022_xLfAgCroImw", "kDxnBZpC6t", "iclr_2022_xLfAgCroImw", "RRxqHkc5AEr", "Fahtqdh7ZKC"...
iclr_2022_dDo8druYppX
Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization
We propose a novel 3d shape representation for 3d shape reconstruction from a single image. Rather than predicting a shape directly, we train a network to generate a training set which will be fed into another learning algorithm to define the shape. The nested optimization problem can be modeled by bi-level optimizatio...
Accept (Poster)
All reviewers recommended acceptance after the author responses. The AC finds no reason to overturn this consensus.
train
[ "G-oopGhOTx", "d0Wg-DFwZWf", "E4lBKpWfxaM", "ICLNS6LPYvv", "7y-n1u9akiO", "rZL3kbz3iI1", "1wsHDFUnUlY", "vjpnvxb34B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper introduce a few shot learning method for single view reconstruction task. Different from traditional few shot learning methods, data used for training are generated by a network which takes an image as input. Both qualitative and quantitative reconstruction results show that this method gives a better pe...
[ 8, -1, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_dDo8druYppX", "rZL3kbz3iI1", "7y-n1u9akiO", "G-oopGhOTx", "vjpnvxb34B", "1wsHDFUnUlY", "iclr_2022_dDo8druYppX", "iclr_2022_dDo8druYppX" ]
iclr_2022_eYciPrLuUhG
Efficient Neural Causal Discovery without Acyclicity Constraints
Learning the structure of a causal graphical model using both observational and interventional data is a fundamental problem in many scientific fields. A promising direction is continuous optimization for score-based methods, which, however, require constrained optimization to enforce acyclicity or lack convergence gua...
Accept (Poster)
This paper studies the problem of learning a graphical model given observational and experimental data. The main novelty is the use of interventions to avoid the acyclicity constraint that plagues existing methods. Although this idea is quite standard and well-known, the generality of the approach merits consideration....
train
[ "jSRPNbngtv", "acE50LFI09_", "skotILjW8sV", "9_d4j0zoVMi", "leiD0RgNGF_", "C2JV8bRHIe0", "KZVb2rNdNBf", "muCdk0JEWFP", "2g0xiM0OlrY", "lIkpAHeHbtq", "5xzOK9sSGDt", "iDmDNup4asI", "8peqb_elpUR", "vjrccw50Vy1", "VFjCWOrLWkN", "hHZ__5n2hrN", "7n_Kv4VlQi", "RPIU8of67vi", "x90bVzzuN0b...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " I would like to thank the authors for the clarification, and I have no more questions for the authors.", "The authors propose a differentiable causal discovery method from interventional data. Each edge is learned separately using a score-based approach that relies on Monte-Carlo sampling. The main difference i...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "5xzOK9sSGDt", "iclr_2022_eYciPrLuUhG", "9_d4j0zoVMi", "leiD0RgNGF_", "VFjCWOrLWkN", "KZVb2rNdNBf", "muCdk0JEWFP", "iDmDNup4asI", "lIkpAHeHbtq", "hHZ__5n2hrN", "lOnSKXGo9GH", "8peqb_elpUR", "oVivrZQ0sBz", "VFjCWOrLWkN", "LEiNPkIuwx9", "f_kIvQ5diUT", "RPIU8of67vi", "x90bVzzuN0bj", ...
iclr_2022_uxxFrDwrE7Y
Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System
Humans excel at continually learning from an ever-changing environment whereas it remains a challenge for deep neural networks which exhibit catastrophic forgetting. The complementary learning system (CLS) theory suggests that the interplay between rapid instance-based learning and slow structured learning in the brain...
Accept (Poster)
The manuscript proposes an experience replay method that supports two time scales of memory, as in complementary learning systems from the cognitive sciences literature. The authors demonstrate their method on a wide range of benchmarks and, after the rebuttal, on one additional benchmark. The reviewe...
train
[ "2uHsKeSZ2D", "D5imrDXjxmp", "02rHsb2vXc", "-QNd5YQru_g", "Kcv10RJgsQ6", "BuxJKktTL-3", "ohSnL1ocUt-", "4xDO5MYo4vC", "zmbFw7xmLV", "EcHegK6X3sj" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. After reading the other reviews and your responses I still recommend acceptance. For future work, I'd recommend to \"upgrade\" the model size, training scheme, etc to better performing models. That said, I see no reason why the contributions of this paper would not apply there. \n\nOn...
[ -1, -1, -1, -1, -1, -1, 6, 8, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 5, 3, 2, 4 ]
[ "-QNd5YQru_g", "EcHegK6X3sj", "zmbFw7xmLV", "4xDO5MYo4vC", "ohSnL1ocUt-", "iclr_2022_uxxFrDwrE7Y", "iclr_2022_uxxFrDwrE7Y", "iclr_2022_uxxFrDwrE7Y", "iclr_2022_uxxFrDwrE7Y", "iclr_2022_uxxFrDwrE7Y" ]
iclr_2022_G89-1yZLFHk
Data Efficient Language-Supervised Zero-Shot Recognition with Optimal Transport Distillation
Traditional computer vision models are trained to predict a fixed set of predefined categories. Recently, natural language has been shown to be a broader and richer source of supervision that provides finer descriptions to visual concepts than supervised "gold" labels. Previous works, such as CLIP, use InfoNCE loss to ...
Accept (Poster)
The paper addresses the interesting many-to-many assignment problem between a set of images and a set of texts. Most reviewers (and I agree with them) think that the idea and its application are worth publishing, although the performance improvement is marginal. I request the authors to update the paper based on the...
train
[ "ggvj3HwWaak", "AAubrElz-w", "p2J5WhF2655", "rZB3hKww0T5", "zX5oJ97HHT", "BYSP2m5k7Q", "YCBidXzEYAw", "fKZI4Iou8R7", "CP7SCNNPYj", "tW9Yiv5oyL0", "Tynrdafj0B9", "ZxS5OgClFWu", "J0Ix8zMdbz6", "dtfAi5O1qSx", "CoTHzeaEuLo", "2-Ih4yq4Nd", "GU_3Zy8vBw7", "qUnatJMDjCH", "mXn4G06rIKs", ...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "...
[ "Review: This paper proposes a novel framework, OTTER, which considers the many-to-many relationship within a batch of images and text captions for data-efficient language-supervised zero-shot recognition. An improved InfoNCE is explored to consider the many-to-many relationship between unpaired images and texts. T...
[ 6, -1, -1, -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2022_G89-1yZLFHk", "p2J5WhF2655", "CoTHzeaEuLo", "zX5oJ97HHT", "BYSP2m5k7Q", "J0Ix8zMdbz6", "dtfAi5O1qSx", "iclr_2022_G89-1yZLFHk", "Tynrdafj0B9", "iclr_2022_G89-1yZLFHk", "GU_3Zy8vBw7", "qUnatJMDjCH", "2-Ih4yq4Nd", "fKZI4Iou8R7", "ggvj3HwWaak", "dg31U4v1kqG", "YdBRQRFk2NG", ...
iclr_2022_apv504XsysP
Ab-Initio Potential Energy Surfaces by Pairing GNNs with Neural Wave Functions
Solving the Schrödinger equation is key to many quantum mechanical properties. However, an analytical solution is only tractable for single-electron systems. Recently, neural networks succeeded at modelling wave functions of many-electron systems. Together with the variational Monte-Carlo (VMC) framework, this led to s...
Accept (Spotlight)
This paper builds on the success of the FermiNet neural wave function framework by pairing it with a graph neural network which predicts the parameters of neural wave function from the geometry. The resulting PESNet trains significantly faster, with no loss of accuracy. This method constitutes an important advance in M...
train
[ "IUAR7nMBCfn", "RjM0QXSDq9E", "y2qVOB3gHYv", "8mAlGfn7JPg", "FDLt8jUGXa", "owBKBkmVGXb", "_vT-JgTYl0u", "T7GVmf6_WqI", "n3DZ7heHOaZ", "3dD1z7pmu2A", "ckqLR4Ai82b" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper develops a neural network based variational ansatz for modeling wave functions. Authors build their model on top of FermiNet architecture with a few modifications: they use a different feature embedding approach that is invariant with respect to basic spatial symmetries. In addition, authors use a GNN “h...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2022_apv504XsysP", "y2qVOB3gHYv", "T7GVmf6_WqI", "_vT-JgTYl0u", "owBKBkmVGXb", "3dD1z7pmu2A", "ckqLR4Ai82b", "IUAR7nMBCfn", "ckqLR4Ai82b", "iclr_2022_apv504XsysP", "iclr_2022_apv504XsysP" ]
iclr_2022_kavTY__jxp
Spatial Graph Attention and Curiosity-driven Policy for Antiviral Drug Discovery
We developed Distilled Graph Attention Policy Networks (DGAPNs), a reinforcement learning model to generate novel graph-structured chemical representations that optimize user-defined objectives by efficiently navigating a physically constrained domain. The framework is examined on the task of generating molecules that ...
Accept (Poster)
After discussion, all reviewers were convinced of the novelty of the proposed method and adjusted their scores to recommend acceptance. They all appreciated the attempt to attack COVID-19 using machine learning.
train
[ "mzcFFzx_Tf", "-6A1ahaa5M", "7wMftjDJx9q", "4qb20Qxazoj", "2u8iV9XJe59", "Q9XaytCTCQ", "HXXn4FBm3H", "JQC2ksOJpt", "c_XovlGD0Ap" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " I thank the authors for their comments and leave my final scores unchanged", " Thank you for your response and explanations. Most of my comments were addressed, and I see work done to improve the manuscript. I looked through the generated molecules, and they indeed are on the heavy side, with many repeated subs...
[ -1, -1, 6, -1, 6, -1, -1, -1, 8 ]
[ -1, -1, 4, -1, 3, -1, -1, -1, 4 ]
[ "JQC2ksOJpt", "HXXn4FBm3H", "iclr_2022_kavTY__jxp", "Q9XaytCTCQ", "iclr_2022_kavTY__jxp", "2u8iV9XJe59", "7wMftjDJx9q", "c_XovlGD0Ap", "iclr_2022_kavTY__jxp" ]
iclr_2022_l8It-0lE5e7
Implicit Bias of Adversarial Training for Deep Neural Networks
We provide theoretical understandings of the implicit bias imposed by adversarial training for homogeneous deep neural networks without any explicit regularization. In particular, for deep linear networks adversarially trained by gradient descent on a linearly separable dataset, we prove that the direction of the produ...
Accept (Poster)
The paper is a nice addition to the developing theory of implicit bias in neural training. While the results are somewhat expected, the technical aspects are fairly involved due to the adversarial component.
train
[ "HYx1ussTqHu", "dB-yjLUP9Ey", "ZXOE41Ds04L", "4mfik0PBzOq", "aowp-u3Tjxw", "4GV6la78rwj", "IV7s6yexL0", "uh2MtzVXN-4", "RcXbqBusnoP", "lPQN9EfxXdj", "m1K8I-2zenx" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers for their valuable comments and suggestions. In the revision, we made the following improvements to address the raised concerns:\n- We supplemented several experiments to verify our theoretical claims in practical settings and address concerns regarding Assumption 4:\n 1. Adversarial tra...
[ -1, -1, -1, -1, -1, -1, -1, 5, 8, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "iclr_2022_l8It-0lE5e7", "aowp-u3Tjxw", "lPQN9EfxXdj", "uh2MtzVXN-4", "RcXbqBusnoP", "m1K8I-2zenx", "4mfik0PBzOq", "iclr_2022_l8It-0lE5e7", "iclr_2022_l8It-0lE5e7", "iclr_2022_l8It-0lE5e7", "iclr_2022_l8It-0lE5e7" ]
iclr_2022_HTp-6yLGGX
Hot-Refresh Model Upgrades with Regression-Free Compatible Training in Image Retrieval
The task of hot-refresh model upgrades of image retrieval systems plays an essential role in the industry but has never been investigated in academia before. Conventional cold-refresh model upgrades can only deploy new models after the gallery is overall backfilled, taking weeks or even months for massive data. In cont...
Accept (Poster)
This paper deeply investigates hot-refresh model upgrades of image retrieval systems. The hot-refresh model is very useful since it can be quickly updated after the gallery is backfilled. To address the model regression with negative flips, this paper introduces a Regression-Alleviating Compati...
train
[ "1JQJlRsaJgP", "J_fTKZkRDKN", "Z1Ad1w4GUQ5", "Lw_T1nfVhF", "YEvWZ7V8KF", "cOkzJsIe2w", "vnilMw_B7Qi", "5gLuLRMylPX", "A_6U3xk0BHC", "YXxnMCS6Y9x", "hwyq18VxAQm", "w7pBXMiFCeg", "wpQt3zjuzTy", "g1WmoHusVR", "gWzZo-QGb-", "qNF0pzKhcE", "CFli8VeoI0", "6J5Hx9tvjW", "XtZgduFRsct", "...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ "This paper tackles the problem of model upgrades in large-scale training of image retrieval models. It proposes a new framework of “hot-refresh” updates with two main techniques: \n1) by alleviating “negative flip” by a regression-free regularization term in the loss and \n2) by prioritizing gallery images with th...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2022_HTp-6yLGGX", "cOkzJsIe2w", "CFli8VeoI0", "YEvWZ7V8KF", "gWzZo-QGb-", "vnilMw_B7Qi", "hwyq18VxAQm", "w7pBXMiFCeg", "wpQt3zjuzTy", "6J5Hx9tvjW", "g1WmoHusVR", "wpQt3zjuzTy", "8pfpBtSZXfI", "1JQJlRsaJgP", "qNF0pzKhcE", "XtZgduFRsct", "YXxnMCS6Y9x", "UUlrbjruUCb", "iclr_20...
iclr_2022_IYMuTbGzjFU
Representing Mixtures of Word Embeddings with Mixtures of Topic Embeddings
A topic model is often formulated as a generative model that explains how each word of a document is generated given a set of topics and document-specific topic proportions. It is focused on capturing the word co-occurrences in a document and hence often suffers from poor performance in analyzing short documents. In a...
Accept (Poster)
This is a borderline paper on the well-researched theme of topic models. The strongest point of the paper is that it proposes a new topic modelling framework where both word and topic embeddings live in the same space. It then appeals to optimal transport theory to do the necessary training using SGD. However, this is...
train
[ "P_zFKzZ2Xk", "4Z1LTnYxKG2", "pkX6O5OLg29", "LqFewqFRoOp", "0sr3q2sDuV", "hXZy5inmCnn", "8K8x9mZX8a0", "MJ6Fw9bfxqH", "t51H6Dziu8k", "lSjv4BGk4w8", "R0fXa9RFXO", "BaibbZD8HzD", "GQoPOvkDpvw" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nWe appreciate it if you could let us know whether our responses and revisions are able to address your concerns.\n\nThank you,\n\nPaper786 Authors", " Thank you for your kind reply. Please let us know if you have any additional comments or suggestions.", " Thank you to the authors for the e...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "iclr_2022_IYMuTbGzjFU", "pkX6O5OLg29", "MJ6Fw9bfxqH", "GQoPOvkDpvw", "BaibbZD8HzD", "8K8x9mZX8a0", "R0fXa9RFXO", "t51H6Dziu8k", "lSjv4BGk4w8", "iclr_2022_IYMuTbGzjFU", "iclr_2022_IYMuTbGzjFU", "iclr_2022_IYMuTbGzjFU", "iclr_2022_IYMuTbGzjFU" ]
iclr_2022_oMI9PjOb9Jl
DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR
We present in this paper a novel query formulation using dynamic anchor boxes for DETR and offer a deeper understanding of the role of queries in DETR. This new formulation directly uses box coordinates as queries in Transformer decoders and dynamically updates them layer-by-layer. Using box coordinates not only helps ...
Accept (Poster)
Somewhat borderline paper given the scores, but leaning on the side of accepting mostly because the positive (and weak positive) reviews are a little more persuasive. The negative review is a bit of an outlier; the main issues raised in the negative review are that the novelty is on the lower side or otherwise that the...
train
[ "_sqQNdZpm-N", "NkRi1FGJgS0", "Qj-onU0a3BK", "lJ85IdxC6O5", "4L46fIJ4ZY6", "NnO4OhVfJHU", "tJ7iOX3KQkF", "QNnpFlOZnl", "E2DzXDIbsxN", "b6-3KvrTa14", "GJjFJEwvFfz", "_qgQT-EWad", "iVEU1ZLXCqZ", "9DKT97SIbFC", "OsnawRtVfedA", "c-x1Y3pVYvBo", "ZxmGMV1G_N" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for your positive recommendation. We will modify our paper accordingly in the next version. More specifically, we will add hyperparameter searching for ablation experiments, add comparisons with variants of Deformable DETR properly in Table 3, and add experiments with competitors of improved techniques....
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "b6-3KvrTa14", "lJ85IdxC6O5", "iclr_2022_oMI9PjOb9Jl", "NnO4OhVfJHU", "E2DzXDIbsxN", "E2DzXDIbsxN", "E2DzXDIbsxN", "E2DzXDIbsxN", "9DKT97SIbFC", "iclr_2022_oMI9PjOb9Jl", "iclr_2022_oMI9PjOb9Jl", "b6-3KvrTa14", "b6-3KvrTa14", "Qj-onU0a3BK", "Qj-onU0a3BK", "ZxmGMV1G_N", "iclr_2022_oMI9...
iclr_2022_H4PmOqSZDY
Towards Empirical Sandwich Bounds on the Rate-Distortion Function
Rate-distortion (R-D) function, a key quantity in information theory, characterizes the fundamental limit of how much a data source can be compressed subject to a fidelity criterion, by any compression algorithm. As researchers push for ever-improving compression performance, establishing the R-D function of a given da...
Accept (Poster)
This paper proposes an algorithmic approach to estimating upper and lower bounds of the rate-distortion (R-D) function of a data source on the basis of samples drawn from it. The proposed upper bound is based on the variational objective employed in the Blahut-Arimoto algorithm, whereas the proposed lower bound is base...
train
[ "jXqraypjBA7", "IZJpaJGSJqV", "UtEBGVxUerI", "tnZW0Zq9qSc", "hsXXqhvqSD0", "icmyguqc7iw", "62sQ-Www3TI", "TEHtLwuumYY", "jba8br0r0Ab", "0QRaRCt5Ma", "6QeBIgayUtu", "6wrgKy9LkWZ", "ld6XmHecVCg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "This paper aims to provide stronger upper and lower bounds for the RD function of arbitrary sources. The authors handle unknown sources by requiring only i.i.d. samples. Specifically, the authors derive an upper bound to the RD function using a $\\beta$-VAE-like generative model, which has some similarities to the...
[ 8, 6, -1, 3, -1, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_H4PmOqSZDY", "iclr_2022_H4PmOqSZDY", "hsXXqhvqSD0", "iclr_2022_H4PmOqSZDY", "62sQ-Www3TI", "iclr_2022_H4PmOqSZDY", "tnZW0Zq9qSc", "tnZW0Zq9qSc", "iclr_2022_H4PmOqSZDY", "jXqraypjBA7", "icmyguqc7iw", "IZJpaJGSJqV", "iclr_2022_H4PmOqSZDY" ]
iclr_2022_afoV8W3-IYp
RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning
Reasoning about visual relationships is central to how humans interpret the visual world. This task remains challenging for current deep learning algorithms since it requires addressing three key technical problems jointly: 1) identifying object entities and their properties, 2) inferring semantic relations between pai...
Accept (Poster)
Three experts reviewed the paper and all recommended acceptance. Based on the reviewers' feedback, the decision is to recommend the paper for acceptance. However, the reviewers did raise some valuable concerns that should be addressed in the final camera-ready version of the paper. For example, the discussion about or ...
train
[ "nYGyK_IPM-o", "QC0rBfllYkO", "hh-YdYbsWdW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new concept-guided approach for visual relational reasoning. Particularly, it uses a concept-feature dictionary to store and retrieve the feature of an image by the corresponding visual concepts. Then, the concept-guided features are integrated into the training process with the global and loc...
[ 6, 8, 6 ]
[ 2, 4, 4 ]
[ "iclr_2022_afoV8W3-IYp", "iclr_2022_afoV8W3-IYp", "iclr_2022_afoV8W3-IYp" ]
iclr_2022_LdEhiMG9WLO
Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions
Structured pruning methods which are capable of delivering a densely pruned network are among the most popular techniques in the realm of neural network pruning, where most methods prune the original network at a filter or layer level. Although such methods may provide immediate compression and acceleration benefits, w...
Accept (Poster)
This paper studies structured pruning methods, called kernel-pruning in the paper, which is also known as channel pruning for convolutional kernels. A simple method is proposed that primarily consists of three stages: (i) clusters the filters in a convolution layer into a predefined number of groups, (ii) prunes the unimp...
train
[ "fwep990UpGN", "RvOyViBcHT9", "snK2LM4xDBD", "wIt8Et3vowA", "0iBURwjxi-K", "4QMgdR2_XVO", "OwARc7DFmUk", "kbRgRIZsCjY", "nCR6535udEd", "jMdNxNOReyr", "G6AFUll4-Oz", "OkONLTQG1Cx", "06pMCRzDCOs", "CLhYr1n_MZi", "dw-GdrSlxsG", "hTMPnj-u5vP", "rvx2LXOlT_", "agzMfbTGKIi", "IhFfJrcRaa...
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "autho...
[ " As the (extended) posting deadline is probably also coming to an end, and given that we have carried out a very comprehensive and up-front rebuttal which is appreciated by all reviewers who have replied yet, \n\n### **we'd like to consider that all raised concerns are adequately resolved by now. Would you please ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2022_LdEhiMG9WLO", "fwep990UpGN", "MDPJ0GzGHVR", "MDPJ0GzGHVR", "06pMCRzDCOs", "06pMCRzDCOs", "5Hj0pWsf_w_", "G2vWQKgY-hd", "06pMCRzDCOs", "06pMCRzDCOs", "06pMCRzDCOs", "wJ7S8sNRS8h", "iclr_2022_LdEhiMG9WLO", "Y1NGBxAzgql", "06pMCRzDCOs", "G2vWQKgY-hd", "zJSWhfFYMG-", "5Hj0pW...
iclr_2022_g2LCQwG7Of
End-to-End Learning of Probabilistic Hierarchies on Graphs
We propose a novel probabilistic model over hierarchies on graphs obtained by continuous relaxation of tree-based hierarchies. We draw connections to Markov chain theory, enabling us to perform hierarchical clustering by efficient end-to-end optimization of relaxed versions of quality metrics such as Dasgupta cost or T...
Accept (Poster)
The authors introduce a novel probabilistic hierarchical clustering method for graphs. In particular they design an end-to-end gradient-based learning to optimize the Dasgupta cost and Tree Sampling Divergence cost at the same time. Overall the paper presents solid results both from a theoretical and experimental pers...
train
[ "Q5AYBE36N0h", "WJsXKCCjLG", "vmUYbSbEMoA", "BRL16IU0q-m", "Ssr2ebP9rAt", "I-dwFaZY8_", "yEpediF9J_b", "co1JR-FgD3t", "4ROjm-6CfYC", "duyno1BTYBw", "qkXhSWw4PU", "kOSQYlRG3kJ" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your swift reply. We are glad you found our response illustrative and comprehensive. **Note that all of the additional results** are shown in the revised version of the paper: we added results for the **three new datasets to Table 1**, and added results of the **Louvain baseline to Tables ...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 3, 4 ]
[ "WJsXKCCjLG", "Ssr2ebP9rAt", "4ROjm-6CfYC", "iclr_2022_g2LCQwG7Of", "kOSQYlRG3kJ", "vmUYbSbEMoA", "iclr_2022_g2LCQwG7Of", "qkXhSWw4PU", "yEpediF9J_b", "BRL16IU0q-m", "iclr_2022_g2LCQwG7Of", "iclr_2022_g2LCQwG7Of" ]
iclr_2022_sOK-zS6WHB
Responsible Disclosure of Generative Models Using Scalable Fingerprinting
Over the past years, deep generative models have achieved a new level of performance. Generated data has become difficult, if not impossible, to be distinguished from real data. While there are plenty of use cases that benefit from this technology, there are also strong concerns on how this new technology can be misuse...
Accept (Spotlight)
The paper proposes and studies a method for the responsible disclosure of a fingerprint along with samples generated by a generative model, which has important applications in identifying "deep fakes". The authors establish both the detectability of their fingerprint, without significant loss of fidelity, as well as the ...
train
[ "yNUu8yy6hU", "u3QnwtxhTp1", "OMnjucqrsJy", "6PhSFvML5KD", "uSk9bRrPVWS", "kUrckROaePH", "ZhQ_wgq7Edh", "qVUjlHOlU2r", "7MvlWaLxEF6", "8bB1PPIhaCk", "SMZsaKqp-8W", "9UXK3XerbH9", "fCYLf96BHxY", "NIn96-R20B", "FRpJ3-S7QKC", "Dv3GKwmG_s0", "YUcZcN0X2WV" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I have read the authors response and stand by my recommendation.", "Aim of this work is to develop a method to fingerprint GAN models. In this way, images generated from that model can be detected and attributed to a specific GAN model. This would help in a scenario where a malicious actor uses a published GAN ...
[ -1, 6, 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "uSk9bRrPVWS", "iclr_2022_sOK-zS6WHB", "iclr_2022_sOK-zS6WHB", "iclr_2022_sOK-zS6WHB", "YUcZcN0X2WV", "u3QnwtxhTp1", "Dv3GKwmG_s0", "u3QnwtxhTp1", "OMnjucqrsJy", "OMnjucqrsJy", "u3QnwtxhTp1", "YUcZcN0X2WV", "6PhSFvML5KD", "6PhSFvML5KD", "Dv3GKwmG_s0", "iclr_2022_sOK-zS6WHB", "iclr_20...
iclr_2022_01AMRlen9wJ
Online Hyperparameter Meta-Learning with Hypergradient Distillation
Many gradient-based meta-learning methods assume a set of parameters that do not participate in inner-optimization, which can be considered as hyperparameters. Although such hyperparameters can be optimized using the existing gradient-based hyperparameter optimization (HO) methods, they suffer from the following issues...
Accept (Spotlight)
This paper presents a novel methodology for performing meta learning for gradient-based hyperparameter optimization. The approach overcomes limitations (e.g., scaling) of previous methods through distilling the gradients of the hyperparameters. The paper received 4 reviews, all of which were positive (6, 6, 8, 8). T...
test
[ "YgiuFriu9jJ", "nMel-JnjW7U", "bhsXYAD0kmW", "x3hLGhD9TQB", "Gdm9nbg6y-", "PoRD31dDC_8", "8dDVuv3_fM", "-WiLvzZPQ1", "bQXBXxPP9sE", "-dQu_yhkeg5" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a hyperparameter optimization algorithm in meta-learning, where parameters w/o being involved in the inner loop optimization are treated as hyperparameters. The proposed algorithms approximate the second-order hypergradients via knowledge distillation. They further evaluate the effectiveness on...
[ 6, -1, -1, -1, -1, -1, -1, 8, 8, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "iclr_2022_01AMRlen9wJ", "PoRD31dDC_8", "iclr_2022_01AMRlen9wJ", "YgiuFriu9jJ", "bQXBXxPP9sE", "-WiLvzZPQ1", "-dQu_yhkeg5", "iclr_2022_01AMRlen9wJ", "iclr_2022_01AMRlen9wJ", "iclr_2022_01AMRlen9wJ" ]
iclr_2022_7gE9V9GBZaI
Exploring Memorization in Adversarial Training
Deep learning models have a propensity for fitting the entire training set even with random labels, which requires memorization of every training sample. In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, an...
Accept (Poster)
This paper demonstrates that deep networks can memorize adversarial examples of training data with completely random labels, which motivates some analyses on the convergence and generalization of adversarial training (AT). The authors identify a significant drawback of memorization in AT that could result in robust ove...
train
[ "pK0Hf4GMuRq", "I57LD_CPRwA", "J1frNk28xp", "5CCYY8saIr0", "xmWyVUcRJyG", "GRLeYNw85N1", "pjG9RyuzIsw", "lyw6R01HOcL", "CO7zEFN1fc-", "lq0QZnl9U54", "3pGzHbGU45-", "Eh3OwFJAbQL", "0ntQT9Q-jrS" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ***Further response to Question 1: Contribution of the stability analysis***\n\nIt is common practice to consider 80% label noise in the research field of (standard) training with noisy labels. Improving model robustness under a larger perturbation budget has also been studied in recent work [1,2]. Although most ...
[ -1, -1, -1, 10, -1, -1, -1, -1, -1, -1, 6, 3, 8 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "I57LD_CPRwA", "lyw6R01HOcL", "xmWyVUcRJyG", "iclr_2022_7gE9V9GBZaI", "GRLeYNw85N1", "5CCYY8saIr0", "0ntQT9Q-jrS", "Eh3OwFJAbQL", "3pGzHbGU45-", "3pGzHbGU45-", "iclr_2022_7gE9V9GBZaI", "iclr_2022_7gE9V9GBZaI", "iclr_2022_7gE9V9GBZaI" ]
iclr_2022_NkZq4OEYN-
Sound Adversarial Audio-Visual Navigation
Audio-visual navigation task requires an agent to find a sound source in a realistic, unmapped 3D environment by utilizing egocentric audio-visual observations. Existing audio-visual navigation works assume a clean environment that solely contains the target sound, which, however, would not be suitable in most real-wor...
Accept (Poster)
This paper addresses audio-visual navigation tasks where a reinforcement learning agent perceives visual RGB and binaural audio inputs, rendered in a first-person perspective 3D environment, and is tasked to navigate to the audio source. The authors propose to make the RL navigation policy robust, by training the agent...
test
[ "J0JkFB31Ahq", "VqPVI9195gE", "h3ofzPa39uM", "Qt7FOGXXDQb", "YuRNt-zzBTY" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer" ]
[ "The paper proposes an interference-robust training method for audio-visual navigation. Unlike existing approaches that focus on clean environments, the system is trained in simulated acoustically complex environments. A single-source adversarial attacker is introduced, which determines position, noise type and vol...
[ 6, -1, 8, -1, 8 ]
[ 3, -1, 4, -1, 4 ]
[ "iclr_2022_NkZq4OEYN-", "h3ofzPa39uM", "iclr_2022_NkZq4OEYN-", "YuRNt-zzBTY", "iclr_2022_NkZq4OEYN-" ]
iclr_2022_rS9-7AuPKWK
Towards Understanding Generalization via Decomposing Excess Risk Dynamics
Generalization is one of the fundamental issues in machine learning. However, traditional techniques like uniform convergence may be unable to explain generalization under overparameterization \citep{nagarajan2019uniform}. As alternative approaches, techniques based on stability analyze the training dynamics and derive...
Accept (Poster)
The main contribution is a way of analyzing the generalization error of neural nets by breaking it down into bias and variance components, and using separate principles to analyze each of the two components. The submission first proves rigorous generalization bounds for overparameterized linear regression (motivated in...
val
[ "-b8BBh0h7Ci", "7aEK0dRIMEU", "KhS6pp56nq3", "yFS5HS4Uw-J", "2gjJEKO8Wnfh", "4X035WNyu-I", "gFaLRlDsGG", "MEekhJGj5bn", "D0T_yodhTQY", "T-rZYrTL3sA", "WiuwSyySQH", "LFGSatpoX73" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors have addressed most of my concerns. But still, the motivation of such a decomposition is not clear to me. It is especially so for the classification task where the noise, if any, is low compared to the signal. Therefore, I will keep my score unchanged.", " - **Comparison with the original stability-...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 4 ]
[ "WiuwSyySQH", "MEekhJGj5bn", "LFGSatpoX73", "2gjJEKO8Wnfh", "WiuwSyySQH", "D0T_yodhTQY", "T-rZYrTL3sA", "iclr_2022_rS9-7AuPKWK", "iclr_2022_rS9-7AuPKWK", "iclr_2022_rS9-7AuPKWK", "iclr_2022_rS9-7AuPKWK", "iclr_2022_rS9-7AuPKWK" ]
iclr_2022_9jsZiUgkCZP
Unified Visual Transformer Compression
Vision transformers (ViTs) have gained popularity recently. Even without customized image operators such as convolutions, ViTs can yield competitive performance when properly trained on massive data. However, the computational overhead of ViTs remains prohibitive, due to stacking multi-head self-attention modules and e...
Accept (Poster)
This paper receives positive reviews. The authors provide additional results and justifications during the rebuttal phase. All reviewers find this paper interesting and the contributions are sufficient for this conference. The area chair agrees with the reviewers and recommends it be accepted for presentation.
train
[ "yf50BC0o1mx", "bIgMA4lZ18D", "mGT_ubuLeVJ", "bfSvQ49OmiI", "dbX6kpAkk7-", "8tq4kotgcMh", "Hl6BSGjEO68", "e_8Rq-AZUGe", "Nj4EYt9gKw", "72tQjLciB2C", "9705TrIpfl", "zqSDXPavIUO", "1G8cNYo5-Y9", "qjeukXM7Pm", "lb4S73VwEHl", "h0cmFmURDLp", "X2E-cC8DT3Z" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer iRBq,\n\nWe greatly appreciated your time and constructive reviews. \n\nWe politely send you a kind reminder that the discussion period is ending **within 7 hrs**.\n\nSo far, reviewers **UdwX** and **K4Gq** have already recognized our paper's merit and reached a consensus on an acceptance suggestion...
[ -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "X2E-cC8DT3Z", "iclr_2022_9jsZiUgkCZP", "Hl6BSGjEO68", "X2E-cC8DT3Z", "h0cmFmURDLp", "X2E-cC8DT3Z", "iclr_2022_9jsZiUgkCZP", "Hl6BSGjEO68", "X2E-cC8DT3Z", "Hl6BSGjEO68", "X2E-cC8DT3Z", "Hl6BSGjEO68", "h0cmFmURDLp", "h0cmFmURDLp", "h0cmFmURDLp", "iclr_2022_9jsZiUgkCZP", "iclr_2022_9js...
iclr_2022_AAJLBoGt0XM
Conditional Contrastive Learning with Kernel
Conditional contrastive learning frameworks consider the conditional sampling procedure that constructs positive or negative data pairs conditioned on specific variables. Fair contrastive learning constructs negative pairs, for example, from the same gender (conditioning on sensitive information), which in turn reduces...
Accept (Poster)
The reviewers all acknowledge the importance of the paper as it addressed the challenge of the insufficient data problem in conditional contrastive learning, feeling that the idea was novel, the experiments verified the effectiveness of the model well, and the paper is well written. Reviewers also raised some good ques...
train
[ "Zb7gnusMfM3", "V2xHVsCAYTG", "h60Ovy4_xyI", "X-qG-vkMsON", "DSdUh4XV8Mo", "S9TtSZ9onX0", "7ScTAeNjCCF", "DvctrFHV_Qt", "5UiI7nvkeEB", "DiPuuMIZ6od", "lnVrIq_hON9", "OrJFyRg8DB8", "9ZnYC0307c", "jblDHnLQ4QD", "eJumo5KuZz", "sa-VZ-sFKI", "pCWD70QXfOG", "RdoGRfAPfII", "O73pKEv80kK"...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for further discussions and comments. We will note in our revised manuscript that our methods are not significantly better than baselines if the numbers do not differ more than two standard deviations.", "The paper looks into using more conditional data than standard methods by leveraging conditional mea...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "h60Ovy4_xyI", "iclr_2022_AAJLBoGt0XM", "S9TtSZ9onX0", "O73pKEv80kK", "iclr_2022_AAJLBoGt0XM", "DvctrFHV_Qt", "DvctrFHV_Qt", "9ZnYC0307c", "V2xHVsCAYTG", "OrJFyRg8DB8", "iclr_2022_AAJLBoGt0XM", "pCWD70QXfOG", "V2xHVsCAYTG", "RdoGRfAPfII", "RdoGRfAPfII", "lnVrIq_hON9", "lnVrIq_hON9", ...
iclr_2022_AJAR-JgNw__
DEPTS: Deep Expansion Learning for Periodic Time Series Forecasting
Periodic time series (PTS) forecasting plays a crucial role in a variety of industries to foster critical tasks, such as early warning, pre-planning, resource scheduling, etc. However, the complicated dependencies of the PTS signal on its inherent periodicity as well as the sophisticated composition of various periods ...
Accept (Spotlight)
The paper proposes a novel deep learning model specifically designed for periodic time series forecasting problems. The approach includes layer-by-layer expansion, residual learning, and periodic parametrization. The model outperforms state-of-the-art baselines on several time series forecasting benchmarks. The reviewer...
train
[ "h0hkrWGVrbf", "03f1S1HfrN", "IqFDKMvodkT", "AgGL6V6n5ZN", "c4RAgT2tvd3", "qUmMPGjLLZH", "ihwhyJSLvI7", "_mmJQmm8_v3", "ZmcDfyPqKMU", "_XBXI7Jt6kj", "ypyvqgQs8J", "jf5M8SWQjB2" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the comprehensive responses to my questions. I appreciate it.", " Dear all reviewers and ACs,\n\nThanks very much for your review. We really appreciate your insightful questions and constructive suggestions. In addition to the respective responses, we have prepared a revised paper that attempts to...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "qUmMPGjLLZH", "iclr_2022_AJAR-JgNw__", "jf5M8SWQjB2", "ypyvqgQs8J", "AgGL6V6n5ZN", "_XBXI7Jt6kj", "_mmJQmm8_v3", "ZmcDfyPqKMU", "iclr_2022_AJAR-JgNw__", "iclr_2022_AJAR-JgNw__", "iclr_2022_AJAR-JgNw__", "iclr_2022_AJAR-JgNw__" ]
iclr_2022_L3_SsSNMmy
On the Connection between Local Attention and Dynamic Depth-wise Convolution
Vision Transformer (ViT) attains state-of-the-art performance in visual recognition, and the variant, Local Vision Transformer, makes further improvements. The major component in Local Vision Transformer, local attention, performs the attention separately over small local windows. We rephrase local attention as a chann...
Accept (Spotlight)
All three reviewers recommend acceptance. The paper introduces an interesting study and insights on the connection between local attention and dynamic depth-wise convolution, in terms of sparse connectivity, weight sharing, and dynamic weight. The reviews included questions such as the novelty over [Cordonnier et al 20...
train
[ "zWll7Q-byuw", "dbrP3CyBjk", "7SZ1tJjjtAu", "YNvuNeuCw3", "uF0E1IvGmFk", "dvn74o6Vp7R", "LZxgpk_-7W" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper connects the local attention and dynamic depth-wise convolution and validates that empirically.\n Strengths\n\nThe paper is easy to follow and well-written and builds the connection between local attention and dynamic depth-wise convolution.\n\nWeaknesses\n\n1. Regarding the empirical validation of equiv...
[ 8, -1, -1, -1, -1, 8, 8 ]
[ 5, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_L3_SsSNMmy", "YNvuNeuCw3", "dvn74o6Vp7R", "zWll7Q-byuw", "LZxgpk_-7W", "iclr_2022_L3_SsSNMmy", "iclr_2022_L3_SsSNMmy" ]
iclr_2022_5FUq05QRc5b
Understanding Latent Correlation-Based Multiview Learning and Self-Supervision: An Identifiability Perspective
Multiple views of data, both naturally acquired (e.g., image and audio) and artificially produced (e.g., via adding different noise to data samples), have proven useful in enhancing representation learning. Natural views are often handled by multiview analysis tools, e.g., (deep) canonical correlation analysis [(D)CCA]...
Accept (Spotlight)
All reviewers agreed that this is a strong paper, that the methodological contributions are both relevant and significant, and that the experimental validation is convincing. I fully share this viewpoint!
train
[ "N9zOjUhppnu", "vE52ZrkTxjT", "O-NFc_vxmOC", "-jgTIIKUQ-m", "rVygAOU4RGV", "DEL4tBA0aD", "wnWni3uqxP3", "tFsa46hdO-G", "MyuWAMXPJmX", "np2bj3UsSi", "1Hzn-Yj-KlD", "3l9voPopX4H", "CXpBrSQhrIH", "S2tIDQisfP3", "4p_rlx36EUt" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "The paper offers a theoretical analysis of neural canonical correlation analysis (CCA) type methods. This reveals conditions under which these methods may work. The paper then proposes regularizers that approximately realize these conditions, which result in a new CCA-type algorithm. Empirical results indicate tha...
[ 8, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, -1, 4, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_5FUq05QRc5b", "-jgTIIKUQ-m", "iclr_2022_5FUq05QRc5b", "rVygAOU4RGV", "tFsa46hdO-G", "iclr_2022_5FUq05QRc5b", "np2bj3UsSi", "1Hzn-Yj-KlD", "DEL4tBA0aD", "N9zOjUhppnu", "O-NFc_vxmOC", "O-NFc_vxmOC", "O-NFc_vxmOC", "O-NFc_vxmOC", "O-NFc_vxmOC" ]
iclr_2022_edONMAnhLu-
Surrogate Gap Minimization Improves Sharpness-Aware Training
The recently proposed Sharpness-Aware Minimization (SAM) improves generalization by minimizing a perturbed loss defined as the maximum loss within a neighborhood in the parameter space. However, we show that both sharp and flat minima can have a low perturbed loss, implying that SAM does not always prefer flat mini...
Accept (Poster)
The paper proposes an interesting and well-motivated improvement of Sharpness Aware Minimization. Overall the AC and reviewers are satisfied with the author feedback in improving the solidity and rigor of the theoretical results. The points made by the authors in response to the reviewers' initial concerns are essentia...
val
[ "YRcmDnq1o_c", "GQtToqubLln", "csdBHzHnoS", "3bd4MptpPC", "DTeNi3yBh4P", "hEFFFzFA_JF", "m3f0PUaCQSq", "SI5hDbyiI2R", "ZtB-xSI-Kq4", "MzVPBCajI-E", "p9F2zOrM7zi", "Oo2F3rVoh7U", "aUc9Kr0WWXI" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ " Thanks for your review, we are glad to resolve your concerns and appreciate your advice on the improvement of our paper.", "In this paper,the authors proposed a new method for sharpness aware training. The idea is to note a surrogate gap as a measure of sharpness, which motivates to minimize the perturbed loss ...
[ -1, 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "csdBHzHnoS", "iclr_2022_edONMAnhLu-", "m3f0PUaCQSq", "DTeNi3yBh4P", "ZtB-xSI-Kq4", "iclr_2022_edONMAnhLu-", "GQtToqubLln", "GQtToqubLln", "hEFFFzFA_JF", "aUc9Kr0WWXI", "Oo2F3rVoh7U", "iclr_2022_edONMAnhLu-", "iclr_2022_edONMAnhLu-" ]
iclr_2022_YgPqNctmyd
Towards Building A Group-based Unsupervised Representation Disentanglement Framework
Disentangled representation learning is one of the major goals of deep learning, and is a key step for achieving explainable and generalizable models. The key idea of the state-of-the-art VAE-based unsupervised representation disentanglement methods is to minimize the total correlation of the joint distribution of the ...
Accept (Poster)
The submission provides a theoretical framework on the learning of group-based disentanglement representations and proposes a novel method to learn such representations. The reviewers appreciated the novel perspective of the paper in introducing the concept of group-based disentanglement in unsupervised VAE. Furthermo...
train
[ "iGejGgSbFuI", "skbNECSP9qL", "L8MHP_cTued", "P4xQ-jL-QiN", "bny0BKbtQjn", "g-7-Xf2FYO4", "yKUtbTObxe4", "DFEvjTAHHC3", "xJPGwR8vG0", "n1w3jGNnFSm", "gt0uQkgn83o", "ts_zv_9FZUy", "ZhvcCiJ-0w6", "BwM-ErgRt_f", "fv_Kblfh41S", "1tAz5caQ_G", "kZjIE5BV0z", "8mLMqXVzU7y", "7QYFn_s6TRG"...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " Your feedback was extremely valuable and helped improve the paper substantially.", " Thanks for your response. I believe these modifications make the paper clearer. I will increase the score to 6 weak accept.", "This paper provides a theoretical framework on the learning of group-based disentanglement represe...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "skbNECSP9qL", "bny0BKbtQjn", "iclr_2022_YgPqNctmyd", "xJPGwR8vG0", "pUH21fc6m4k", "yKUtbTObxe4", "lzDAsIyCNJ5", "ZhvcCiJ-0w6", "P-WfzDb7UZt", "ZhvcCiJ-0w6", "BwM-ErgRt_f", "iclr_2022_YgPqNctmyd", "_M1RJt3uEy", "7QYFn_s6TRG", "jqUCcSVuzEG", "GqC25fhEjkn", "pUH21fc6m4k", "k9oF8dLjks...
iclr_2022_2-mkiUs9Jx7
Stein Latent Optimization for Generative Adversarial Networks
Generative adversarial networks (GANs) with clustered latent spaces can perform conditional generation in a completely unsupervised manner. In the real world, the salient attributes of unlabeled data can be imbalanced. However, most of existing unsupervised conditional GANs cannot cluster attributes of these data in th...
Accept (Poster)
This paper proposes an unsupervised learning method for GANs, called SLOGAN, which allows conditional generation of samples, by utilizing clustering structures of training data in a latent space. The main significance of the proposal over existing unsupervised conditional GANs is that it is capable of dealing with tra...
test
[ "l-VeH0GQPw", "whXUsMczByi", "phEcpUpaEP", "udTxo98ULRm", "wk79634tC6p", "J7MZYyv-mck", "ewhbv0kR_w", "1elswqrCaAL", "0JC6kQ-_uF-", "-FFkJiIsfS2", "lsNvsdLTMhA", "9_dLaeD1Ch", "3AWuE8UoSV", "_IvxtoFCIfR", "sicFPMX_th", "15yPfMpte9Y", "6jrTjvq_nn3", "9DdeNmeU3KS", "GcYDrRQG1fV", ...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", ...
[ " We appreciate your response. We are sincerely glad to learn that you have been satisfied with the additional comparisons and discussion.", " Given the proposed method is moderately novel and the newly presented comparisons after the discussions with authors, I increased my rating from 5 to 6.", " Thank you v...
[ -1, -1, -1, -1, -1, 6, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "whXUsMczByi", "_IvxtoFCIfR", "udTxo98ULRm", "wk79634tC6p", "3AWuE8UoSV", "iclr_2022_2-mkiUs9Jx7", "9DdeNmeU3KS", "lsNvsdLTMhA", "iclr_2022_2-mkiUs9Jx7", "iclr_2022_2-mkiUs9Jx7", "9_dLaeD1Ch", "GcYDrRQG1fV", "sxvIuiNP2uO", "sicFPMX_th", "15yPfMpte9Y", "OVHtNtfcf0", "J7MZYyv-mck", "...
iclr_2022_RQLLzMCefQu
Provably Filtering Exogenous Distractors using Multistep Inverse Dynamics
Many real-world applications of reinforcement learning (RL) require the agent to deal with high-dimensional observations such as those generated from a megapixel camera. Prior work has addressed such problems with representation learning, through which the agent can provably extract endogenous, latent state information...
Accept (Oral)
The paper presents a new technique that infers the endogenous states of an RL problem, as well as the corresponding model and optimal policy. A bound is derived that shows that the amount of data needed depends only on the number of endogenous states, while being independent of the number of exogenous states and the c...
train
[ "EGQFMOseoV", "tWeV_s7iDdW", "t7pwGdcng0y", "VQW1iO293p", "vXx_RmeQjdO", "x5ZTAlyOb6h", "Y0fH80J1BpJ", "mGKnzIzkL-Y", "Ve4lyhmDPUZ", "YVY_Lt8viR", "YXKmLUaX2Rj", "Jh1DPAMT6s", "ATa__s7Mw4", "8i2CkAziQgq", "SoZ8FgF3oeo", "GixzPVk9vco", "2rvn8n_p_mE", "GO82tiLJqjg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors consider the problem of reinforcement learning using high-dimensional observations (eg, images) that may contain both exogenous and endogenous state information. Seeking to remedy the issues with learning that arise due to the exogenous state information, the authors propose a new model called an Exoge...
[ 8, 8, -1, 8, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 2, -1, 5, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_RQLLzMCefQu", "iclr_2022_RQLLzMCefQu", "vXx_RmeQjdO", "iclr_2022_RQLLzMCefQu", "x5ZTAlyOb6h", "GixzPVk9vco", "YXKmLUaX2Rj", "iclr_2022_RQLLzMCefQu", "iclr_2022_RQLLzMCefQu", "GixzPVk9vco", "SoZ8FgF3oeo", "GO82tiLJqjg", "8i2CkAziQgq", "2rvn8n_p_mE", "mGKnzIzkL-Y", "VQW1iO293p...
iclr_2022_WH6u2SvlLp4
Learning Prototype-oriented Set Representations for Meta-Learning
Learning from set-structured data is a fundamental problem that has recently attracted increasing attention, where a series of summary networks are introduced to deal with the set input. In fact, many meta-learning problems can be treated as set-input tasks. Most existing summary networks aim to design different archit...
Accept (Poster)
The proposed method for set representation learning with an application to meta learning is well-motivated and reasonable. Reviewers' original concerns about novelty and technical presentation have been well explained and addressed in the revision. If some theoretical analysis can be provided regarding the proposed met...
val
[ "fOy2djKMdbm", "cELgJ5n3XSY", "eNtPO2u9y3N", "TVRZRV_EVQX", "GAzk5G3Vhc3", "l-F-brGrrMF", "pp7Usb3M3WX", "JhX7I-zyp3T", "6bJ1FWoZIB9", "dZ06YICChP", "uwxlGm-O7XY" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comments. Q1: We agree with you that theoretical analysis is desirable, which, however, is non-trivial in the context of multi-distributions, summary networks, and others. We are still working on this problem. But please note that our proposed method is still novel and promising, for providing a new...
[ -1, -1, 6, -1, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, 2, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "6bJ1FWoZIB9", "GAzk5G3Vhc3", "iclr_2022_WH6u2SvlLp4", "eNtPO2u9y3N", "l-F-brGrrMF", "eNtPO2u9y3N", "dZ06YICChP", "uwxlGm-O7XY", "iclr_2022_WH6u2SvlLp4", "iclr_2022_WH6u2SvlLp4", "iclr_2022_WH6u2SvlLp4" ]
iclr_2022_NYBmJN4MyZ
Safe Neurosymbolic Learning with Differentiable Symbolic Execution
We study the problem of learning verifiably safe parameters for programs that use neural networks as well as symbolic, human-written code. Such neurosymbolic programs arise in many safety-critical domains. However, because they need not be differentiable, it is hard to learn their parameters using existing gradient-bas...
Accept (Poster)
The proposed method, Differentiable Symbolic Execution (DSE), addresses the safety of learned navigation and control programs. The approach samples code paths using a softened probabilistic version of symbolic execution, constructing gradients of a "safety loss" along these paths, and then backpropagating these gradie...
train
[ "OiATeVm0fNI", "U2nR2YXaVN", "gRyJHkQzvPM", "gTEp-HgACBO", "dghC0y_auB1", "ybOwzfeJw_", "oh_Jk3eboTr", "gGLyQXllhMP", "LNpjnEKdwCP", "4BkKhldSWz", "fEMKi8sBto", "DcF-Rno5b_", "gMx42S8D2D6", "bbttVJbmzez", "IzYi6cFe7gq", "tdubL0gJiF0", "If04dmCyOIv", "2ufX_XdtCvj", "zo5Xrrq_8B2", ...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", ...
[ " Thank you very much for the insightful discussion and questions! Your suggestions and questions help us make a stronger manuscript. \n\nThanks for the side note. We will carefully select the materials to be included in the main paper.", " Thank you very much for the feedback. We enjoyed the discussion with you!...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "dghC0y_auB1", "gRyJHkQzvPM", "ybOwzfeJw_", "iclr_2022_NYBmJN4MyZ", "LNpjnEKdwCP", "oh_Jk3eboTr", "7klMvPd4I7C", "DcF-Rno5b_", "If04dmCyOIv", "iclr_2022_NYBmJN4MyZ", "tdubL0gJiF0", "iclr_2022_NYBmJN4MyZ", "Ugu2fU0s5mW", "DcF-Rno5b_", "bbttVJbmzez", "zo5Xrrq_8B2", "sONzntl78pA", "4x...
iclr_2022_2eXhNpHeW6E
R5: Rule Discovery with Reinforced and Recurrent Relational Reasoning
Systematicity, i.e., the ability to recombine known parts and rules to form new sequences while reasoning over relational data, is critical to machine intelligence. A model with strong systematicity is able to train on small-scale tasks and generalize to large-scale tasks. In this paper, we propose R5, a relational rea...
Accept (Spotlight)
A novel method is described that uses RL to search for a rule set which predicts multiple relations at once for KBC-like problems. The rules can include latent predicates, which reduces the complexity of individual rules, similar to Cropper & Muggleton's (2015) meta-interpretive learning framework, which is usual for ...
train
[ "5hm_PscjtcJ", "JiJngCqckDC", "-cajg3Qoz_l", "kAMC4CNmyw", "lOObDEMQq4l", "3VBfekHjGBE", "crJ7Br_JVP", "7rBSXHpWMfo", "f9Wj_Eu2TL" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your valuable review! We made the following modifications to our paper according to your suggestions:\n\n- We formally define the relation prediction task in Section 2. (in green)\n- As we described in Section 2.1, our path sampling procedure is as follows: 1) for the datasets that have ve...
[ -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "f9Wj_Eu2TL", "crJ7Br_JVP", "crJ7Br_JVP", "3VBfekHjGBE", "kAMC4CNmyw", "7rBSXHpWMfo", "iclr_2022_2eXhNpHeW6E", "iclr_2022_2eXhNpHeW6E", "iclr_2022_2eXhNpHeW6E" ]
iclr_2022_hR_SMu8cxCV
Scaling Laws for Neural Machine Translation
We present an empirical study of scaling properties of encoder-decoder Transformer models used in neural machine translation (NMT). We show that cross-entropy loss as a function of model size follows a certain scaling law. Specifically (i) We propose a formula which describes the scaling behavior of cross-entropy loss ...
Accept (Spotlight)
This is a strong empirical paper that studies scaling laws for NMT in terms of several new aspects, such as the model quality as a function of the encoder and decoder sizes, and how the composition of data affects scaling, etc. The extensive empirical results offer new insights into the questions and provide valuable gui...
test
[ "CR5ahpdPZUw", "zWuPUGVMAuP", "MbFNo-29jSI", "tLBq_gCwFgW", "thT5b7vb5-", "f-EyeVlBwC3", "NAoYaecEiS_", "h0jFEMR35F", "BYnn1UdnESU", "O5VUJq9LlF1", "mGktNUxeZKt", "Zux0VK0nU_s" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ " By the way, note the Gordon citation should be updated to EMNLP2021 since it is now published. See bibtex here: https://aclanthology.org/2021.emnlp-main.478/\nAlso, since the title of your paper is very similar (\"Scaling Laws for NMT\" vs \"Data and Parameter Scaling Laws for NMT\"), it may be worth considering ...
[ -1, -1, 8, -1, -1, -1, -1, 8, -1, -1, 8, 10 ]
[ -1, -1, 4, -1, -1, -1, -1, 4, -1, -1, 4, 5 ]
[ "tLBq_gCwFgW", "BYnn1UdnESU", "iclr_2022_hR_SMu8cxCV", "f-EyeVlBwC3", "Zux0VK0nU_s", "mGktNUxeZKt", "O5VUJq9LlF1", "iclr_2022_hR_SMu8cxCV", "MbFNo-29jSI", "h0jFEMR35F", "iclr_2022_hR_SMu8cxCV", "iclr_2022_hR_SMu8cxCV" ]
iclr_2022_St-53J9ZARf
Deep AutoAugment
While recent automated data augmentation methods lead to state-of-the-art results, their design spaces and the derived data augmentation strategies still incorporate strong human priors. In this work, instead of fixing a set of hand-picked default augmentations alongside the searched data augmentations, we propose a fu...
Accept (Poster)
We appreciate the authors for engaging in discussions with the reviewers and providing further experimental results to clarify and address the concerns raised by them in their original reviews, leading to changes in some of the recommendations. While the (revised) paper with the clarifications and new results incorpor...
train
[ "wTBC8hDlhhZ", "kxSaDcASffz", "nY9BMDJajuA", "HFazlBXjkAJ", "K0Ai706Le_", "RRC9p50-gOX", "EBoGKbsi1LR", "K1DMElV5LmC", "Xo0oqyLU04H", "JPVYHXEOuYs", "YiTqGnuf5G5", "1hYDEKUz4CU", "3XdF1_5CqE7", "_XiIx3P4-Wf", "ckmHsRFZdW", "XYpCD0x5YSP", "jgfbO_R7jQq", "ElBHjEjVV_t", "4LwVRVXb9rA...
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " The reviewer appreciates the additional experiments from the authors that well motivate the use of mean and standard deviation of gradient similarity. Therefore, the reviewer decides to keep the original rating.", " We thank the reviewer for taking time to read our response. \n\nCompared to existing approaches,...
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "S65SBm1YxLd", "EBoGKbsi1LR", "Xo0oqyLU04H", "JPVYHXEOuYs", "YiTqGnuf5G5", "3XdF1_5CqE7", "K1DMElV5LmC", "iclr_2022_St-53J9ZARf", "ckmHsRFZdW", "Zf-Flp6qbcq", "LxUt51v4Ew0", "iclr_2022_St-53J9ZARf", "1hYDEKUz4CU", "K1DMElV5LmC", "K1DMElV5LmC", "1hYDEKUz4CU", "S65SBm1YxLd", "LxUt51v...
iclr_2022_WQc075jmBmf
CodeTrek: Flexible Modeling of Code using an Extensible Relational Representation
Designing a suitable representation for code-reasoning tasks is challenging in aspects such as the kinds of program information to model, how to combine them, and how much context to consider. We propose CodeTrek, a deep learning approach that addresses these challenges by representing codebases as databases that confo...
Accept (Poster)
The paper presents a deep learning approach that encodes codebases as databases that conform to rich relational schemas. Based on this, a biased graph-walk mechanism efficiently feeds this structured data into a transformer and deepset approach. The results shown are quite good, compared to other approaches present at ICLR. M...
train
[ "U-BbkiarHAR", "WC5Odw8p9hk", "CH4GlkgAQyy", "tZ4NUIOpt3B", "djCbKiERU1X", "1AbqP1m7ep", "o081X_8kWBp", "vsV1RFs9ZQB", "KRIjpsipN1b", "BSmFZaHC1v", "L_lAov4PyIs", "CfTes58aOSV", "_kmLBR1_2C", "-fvkDHha-S", "RGsz3_qD3Xj", "TkvMChiR0DN", "VogGwKMRPVH", "bfkUu7JAcnO", "frU6i5GsUFH",...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_...
[ " Dear authors,\n\nThank you for your efforts and responsiveness.\n\nThe proposed system shows strong empirical results. However, I still think that the paper describes a too specific system, powered by carefully engineered features (in the form of edges and relations), and which is tailored to a specific engine (S...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "RGsz3_qD3Xj", "iclr_2022_WQc075jmBmf", "iclr_2022_WQc075jmBmf", "iclr_2022_WQc075jmBmf", "1AbqP1m7ep", "o081X_8kWBp", "L_lAov4PyIs", "KRIjpsipN1b", "_kmLBR1_2C", "CfTes58aOSV", "VogGwKMRPVH", "kafcCe53JHZ", "NolbM3uaIUy", "CH4GlkgAQyy", "kafcCe53JHZ", "NolbM3uaIUy", "CH4GlkgAQyy", ...
iclr_2022_8Py-W8lSUgy
Relational Multi-Task Learning: Modeling Relations between Data and Tasks
A key assumption in multi-task learning is that at inference time the multi-task model only has access to a given data point but not to the data point’s labels from other tasks. This presents an opportunity to extend multi-task learning to utilize a data point’s labels from other auxiliary tasks, and this way improve...
Accept (Spotlight)
The paper describes a novel learning scenario where there are many related tasks, some seen at test time, and some seen only at training time, where additionally the task labels can be hidden or present. This approach generalizes both a "relational setting" (where auxiliary task labels could be used as features) and a...
train
[ "TRIlxsDmJR", "HtrrVMpSMGf", "uxxxXjtMXp", "o1wb8siE2mP", "OjWGav9vbSm", "-1XYSm9-IbYZ", "HfjvV94qC-V", "deMsadjhC4", "va8fCQw6D9", "R4_FEOw2FVj" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for the responses. I keep my original rating and recommend acceptance.", " Dear Reviewers,\n\nWe sincerely appreciate your time and efforts in reviewing our paper.\n\nWe tried our best to reply to all the concerns and questions raised in the review. We hope the additional explanations and clar...
[ -1, -1, -1, -1, -1, -1, 5, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "OjWGav9vbSm", "iclr_2022_8Py-W8lSUgy", "HfjvV94qC-V", "deMsadjhC4", "va8fCQw6D9", "R4_FEOw2FVj", "iclr_2022_8Py-W8lSUgy", "iclr_2022_8Py-W8lSUgy", "iclr_2022_8Py-W8lSUgy", "iclr_2022_8Py-W8lSUgy" ]
iclr_2022_rWXfFogxRJN
AdaAug: Learning Class- and Instance-adaptive Data Augmentation Policies
Data augmentation is an effective way to improve the generalization capability of modern deep learning models. However, the underlying augmentation methods mostly rely on handcrafted operations. Moreover, an augmentation policy useful to one dataset may not transfer well to other datasets. Therefore, Automated Data Aug...
Accept (Poster)
Reviewers agreed that this work is well-motivated and presents a novel approach for data augmentation around the adaptive augmentation policies. There were some concerns around the lack of ablation studies and unclear performance improvements, which were addressed well by the authors’ responses. Thus, I recommend an ac...
train
[ "LAMFsa94-EJ", "3jGt3qeEYZw", "6urf7FofpG2", "1QeSebI3H_0", "_61yvCoY-Y", "XSEysvnwvEz", "ZMcA31rRBPU", "umG0_bzgjlK", "vxeg2oYqmc", "yKTtaJQN_ZD", "tMK4fCGUYB", "oUS2AdA4Oqw", "YL02756t-Bj", "8unQtUOfmAV", "MikXzsvoxq2", "DsdRRUv7oBV", "iIK69HrOEO" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces a data augmentation method AdaAug that learns adaptive augmentation policies in a class-dependent and potentially instance-dependent manner to improve the generalisation capability of deep learning models. Concretely, it proposes an efficient exploitation-exploration workflow to search for an a...
[ 6, -1, -1, -1, -1, -1, 8, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, -1, -1, -1, 5, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_rWXfFogxRJN", "1QeSebI3H_0", "_61yvCoY-Y", "_61yvCoY-Y", "MikXzsvoxq2", "umG0_bzgjlK", "iclr_2022_rWXfFogxRJN", "8unQtUOfmAV", "tMK4fCGUYB", "iclr_2022_rWXfFogxRJN", "YL02756t-Bj", "iclr_2022_rWXfFogxRJN", "yKTtaJQN_ZD", "ZMcA31rRBPU", "LAMFsa94-EJ", "iIK69HrOEO", "iclr_20...
iclr_2022_moHCzz6D5H3
Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently
Sparse neural networks (NNs) are intensively investigated in the literature due to their appeal in saving storage, memory, and computational costs. A recent work (Ramanujan et al., 2020) showed that, different from the conventional pruning-and-finetuning pipeline, there exist hidden subnetworks in randomly initialized NNs that...
Accept (Poster)
I recommend acceptance. This paper presents an interesting "in-between" of work on lottery tickets and work on supermasks, and I think it is sufficiently novel to merit acceptance even if the significance of the results will need to be left to the judgment of future researchers. The reviewers seem broadly in favor of a...
train
[ "h6IaemX0Hl-", "orPXKxgINHE", "fU6fXwHDKC", "W0FZDOoZ71", "FWJVlaD-LYm", "oBjGl9YIR9D", "L-6bm8vi9Fx", "irdqBG3GfnJ", "Vq1rgBwtp7m", "w8mAUh5_glx", "nwxXYVT2sBF", "fkLZtrIBUoR", "iReBg8qhtAi", "pE44yP-P4Vf", "uLPZOU-dBRw", "MT6KYdUIJvo", "0x2svroFpQ", "9qzqAZWTJJ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 7V6H,\n\nThank you for your active responses and the positive evaluation of our work post-rebuttal. The discussion was joyful. Thank you again for your valuable time!\n\nBest,\n\nAuthors", "This paper extends the definition of the hidden subnetworks in randomly initialized neural networks. The new n...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "fU6fXwHDKC", "iclr_2022_moHCzz6D5H3", "W0FZDOoZ71", "FWJVlaD-LYm", "Vq1rgBwtp7m", "fkLZtrIBUoR", "iclr_2022_moHCzz6D5H3", "9qzqAZWTJJ", "orPXKxgINHE", "iclr_2022_moHCzz6D5H3", "0x2svroFpQ", "MT6KYdUIJvo", "9qzqAZWTJJ", "orPXKxgINHE", "orPXKxgINHE", "iclr_2022_moHCzz6D5H3", "iclr_202...
iclr_2022_ucASPPD9GKN
Is Homophily a Necessity for Graph Neural Networks?
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks. When applied to semi-supervised node classification, GNNs are widely believed to work well due to the homophily assumption (``like attracts like''), and fail to generalize to hete...
Accept (Poster)
Heterophily is known to degrade the performance of graph neural networks. This paper explores whether, for graph convolutional networks (GCNs), this is a general phenomenon, or if there are some circumstances under which a GCN can still perform well in a heterophilous setting. This paper characterizes one such setting ...
train
[ "m8Vt9BsVUx", "tYEU09vD4a", "BbxobQXujmm", "MeQjLklk_qQ", "ysIdHDvZ62Z", "Xsn92vmHSI2", "Att5itf_HfX", "WhO4tNJrqTd", "VkxFzX1iGh7", "Z9IFIIkiexR", "HMKGncMY_yO", "P2q_u_ECRgM", "jCSdhQsXg_O", "Vb3I2HmWixW", "Twt0k0JHNZ", "mcwSbhXGhFz", "IFhvV1vJXEy", "uqd90ovs3Y2" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "The authors characterize, with supporting theoretical understanding and empirical observations, the conditions under which GCNs can achieve strong performance on heterophilous graphs.\n Strength:\n\nThe authors provide some valuable arguments about the performance of GCN on heterophilous graphs and...
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_ucASPPD9GKN", "iclr_2022_ucASPPD9GKN", "iclr_2022_ucASPPD9GKN", "BbxobQXujmm", "IFhvV1vJXEy", "Att5itf_HfX", "WhO4tNJrqTd", "VkxFzX1iGh7", "uqd90ovs3Y2", "HMKGncMY_yO", "P2q_u_ECRgM", "jCSdhQsXg_O", "tYEU09vD4a", "MeQjLklk_qQ", "mcwSbhXGhFz", "m8Vt9BsVUx", "iclr_2022_ucASP...
iclr_2022_nc0ETaieux
Minimax Optimality (Probably) Doesn't Imply Distribution Learning for GANs
Arguably the most fundamental question in the theory of generative adversarial networks (GANs) is to understand when GANs can actually learn the underlying distribution. Theoretical and empirical evidence (see e.g. Arora-Risteski-Zhang '18) suggests that local optimality of the empirical training objective is insufficient, y...
Accept (Poster)
The paper provides a complexity-theoretic look at GANs. The exposition is multi-disciplinary, and in my personal opinion, it is an interesting look at GANs in the context of random number generators.
train
[ "czr6XlpgLbm", "7QF3ewQcOlh", "lRTbtDwCKR2", "cEfQ9QjV1UQ", "0Zc6IT95RpY", "N7JS-FgvjJo", "UHus2KLJUfs", "OZ1Dm3JQyn", "tERf4D0ZKsY", "tLfOqlIKC2X", "XB8N0-X4RVg", "NS3s9wnLWgH", "MgxJLlJDkjw", "Efa5gD-q7Rc", "33suSG1lxSZ", "PQ71X4acVIy" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the problem of learning generative adversarial networks using a poly-size ReLU generator and discriminator under the standard Wasserstein-1 metric. The main result is that there exists a \"bad\" generator that can cheat all discriminators under the estimation of the Wasserstein-1 metric while be...
[ 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ 3, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2022_nc0ETaieux", "lRTbtDwCKR2", "XB8N0-X4RVg", "tLfOqlIKC2X", "MgxJLlJDkjw", "OZ1Dm3JQyn", "iclr_2022_nc0ETaieux", "UHus2KLJUfs", "iclr_2022_nc0ETaieux", "Efa5gD-q7Rc", "czr6XlpgLbm", "33suSG1lxSZ", "PQ71X4acVIy", "iclr_2022_nc0ETaieux", "iclr_2022_nc0ETaieux", "iclr_2022_nc0ETa...
iclr_2022_ydopy-e6Dg
Image BERT Pre-training with Online Tokenizer
The success of language Transformers is primarily attributed to the pretext task of masked language modeling (MLM), where texts are first tokenized into semantically meaningful pieces. In this work, we study masked image modeling (MIM) and indicate the necessity and challenges of using a semantically meaningful visual ...
Accept (Poster)
The paper is interesting, and its focus is timely and important, given the continuing rapid rise of transformers (and their dependence on tokenization of images). All three reviewers recommend acceptance, to varying degrees. The paper will be a valuable contribution to the program at ICLR.
train
[ "iG5QddmaavW", "hjXyvQAeXMw", "D_7z05MsvzS", "Thi72c0h17j", "hRFiZ_nYnx", "kUoqypiTJsl", "9B3WLHftQe", "Kw7zg6hu0S", "gDhDQ6hve9b", "CAPkIB66C9v", "JO-BGgrxBX3" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes the iBOT method. This approach is inspired by the contrastive self-supervised learning approach like DINO and the mask modelling approach like BeiT. The idea is to use an online tokenizer instead of a pretrained tokenizer like Beit. The iBoT approach combines a loss at the patch level like Beit ...
[ 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_ydopy-e6Dg", "Thi72c0h17j", "iclr_2022_ydopy-e6Dg", "iG5QddmaavW", "iclr_2022_ydopy-e6Dg", "CAPkIB66C9v", "iclr_2022_ydopy-e6Dg", "9B3WLHftQe", "JO-BGgrxBX3", "hRFiZ_nYnx", "iclr_2022_ydopy-e6Dg" ]
iclr_2022_Ro_zAjZppv
Tracking the risk of a deployed model and detecting harmful distribution shifts
When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain---but not all---distribution shifts could result in significant performance degradation. In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model doe...
Accept (Poster)
This paper is concerned with the problem of distribution shift, and develops techniques for detecting when the risk of a deployed model performs significantly worse on a testing distribution than on the training distribution. The reviews for this paper were extremely consistent: after the discussion period, all five r...
train
[ "kbdYYIiJaO", "g4oI8T8n5F6", "_RiWKLGMpw1", "7J37B8I9grJ", "JYDBpH3k9ln", "P51HkCPrBZ", "d_5OXvNvCTX", "a7qrNXL0LJs", "5ULfOfzEB-", "n3UevzaMkFt", "erKhdYAuO-I", "oPI8x9EoVV", "yg43Zmru5Wv", "kE6TwcmoXAh", "bNmSJqI5Xx8", "uH-mrd9hR2" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper develops a tool for testing online whether the performance of a model on the test data becomes significantly worse than the performance on the training data, which allows differentiating between benign and harmful shifts. The proposed framework is based on sequential testing for a significant risk incre...
[ 6, 6, 8, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 2, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_Ro_zAjZppv", "iclr_2022_Ro_zAjZppv", "iclr_2022_Ro_zAjZppv", "_RiWKLGMpw1", "iclr_2022_Ro_zAjZppv", "uH-mrd9hR2", "JYDBpH3k9ln", "g4oI8T8n5F6", "_RiWKLGMpw1", "kbdYYIiJaO", "kbdYYIiJaO", "JYDBpH3k9ln", "iclr_2022_Ro_zAjZppv", "JYDBpH3k9ln", "_RiWKLGMpw1", "iclr_2022_Ro_zAjZp...
iclr_2022_TBWA6PLJZQm
Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations
Existing research on learning with noisy labels mainly focuses on synthetic label noise. The synthetic noise, though it has clean structures that greatly enable statistical analyses, often fails to model the real-world noise patterns. The recent literature has observed several efforts to offer real-world noisy datasets,...
Accept (Poster)
The authors propose two new benchmark datasets CIFAR-10-N and CIFAR-100-N, variants of CIFAR-10 and CIFAR-100 with real-world human annotation noise. The benchmark datasets are more realistic (e.g. instance-dependent noise) than some existing synthetic benchmarks for label noise. The authors also benchmark several popu...
train
[ "wVgJUQScAV6", "cAlhT3xzq6P", "6Xu98NBQ9qi", "x73il7v-sKA", "-MW6GmED080", "5mmORDk5sUr", "O8tXYoP-7Vk", "qQ2iaRaCQ0G", "4XjDU4DdJVD", "HPOC_LeYnPU", "FTCtilIOz1h", "TYB2yGHkGDd", "Pb0nCIgCQwk", "CwJEw_m32A", "5Avp5Ngj4Q", "3QqWfU7Zbgt", "pPCxBsIx3bU", "Lmq_IXQlnw9", "MXEy7inlS31...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " **Dear Reviewer PLPg,**\n\nThanks for the quick response! And we sincerely appreciate this valuable suggestion ''add comparisons among CIFAR-10, CIFAR-10H, and CIFAR-N''. In our next version, we will definitely include detailed comparisons and highlight the contributions of these three works.\n\nBest,\n\nICLR 202...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 5, -1, 5, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "cAlhT3xzq6P", "5mmORDk5sUr", "iclr_2022_TBWA6PLJZQm", "Pb0nCIgCQwk", "FTCtilIOz1h", "O8tXYoP-7Vk", "qQ2iaRaCQ0G", "6Xu98NBQ9qi", "pPCxBsIx3bU", "iclr_2022_TBWA6PLJZQm", "3QqWfU7Zbgt", "iclr_2022_TBWA6PLJZQm", "CwJEw_m32A", "5Avp5Ngj4Q", "TYB2yGHkGDd", "HPOC_LeYnPU", "6Xu98NBQ9qi", ...
iclr_2022_9NVd-DMtThY
Distributionally Robust Fair Principal Components via Geodesic Descents
Principal component analysis is a simple yet useful dimensionality reduction technique in modern machine learning pipelines. In consequential domains such as college admission, healthcare and credit approval, it is imperative to take into account emerging criteria such as the fairness and the robustness of the learned ...
Accept (Poster)
This paper considers the problem of distributionally robust fair PCA for binary sensitive variables. The main modeling contribution of the paper is the consideration of fairness and robustness of the PCA simultaneously, and the main technical contribution of the paper is the provision of a Riemannian subgradient descen...
test
[ "dS0iKfUTdG", "6aXgGdGegEX", "44mSjpBjiv9", "MiShfR3cmwj", "BFbPVic8RXI", "qjBkFMdbJjw", "eQlctzFr3Q", "SjTjmatmHUK", "p5rEK2lAq4k", "XrcL6hpyMRL", "q5AbtHKzXl6", "ExdywrgBjg0" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Q: Second, the proposed method involves the selection of tuning parameters. However, the authors seem to avoid the discussion of tuning parameter selection. Moreover, the numerical experiments just consider the use of fixed values of these tuning parameters, which makes it hard for the reader to judge the quality of the numerical...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "6aXgGdGegEX", "ExdywrgBjg0", "q5AbtHKzXl6", "BFbPVic8RXI", "44mSjpBjiv9", "XrcL6hpyMRL", "SjTjmatmHUK", "p5rEK2lAq4k", "iclr_2022_9NVd-DMtThY", "iclr_2022_9NVd-DMtThY", "iclr_2022_9NVd-DMtThY", "iclr_2022_9NVd-DMtThY" ]
iclr_2022_1NvflqAdoom
Neural Networks as Kernel Learners: The Silent Alignment Effect
Neural networks in the lazy training regime converge to kernel machines. Can neural networks in the rich feature learning regime learn a kernel machine with a data-dependent kernel? We demonstrate that this can indeed happen due to a phenomenon we term silent alignment, which requires that the tangent kernel of a netwo...
Accept (Poster)
The authors make a case for a phenomenon of deep network training that they call the "silent alignment effect": that, while the training error is still large, the NTK associated with the network aligns its eigenvectors with key directions in "feature space". They support this with non-rigorous theoretical analysis of ...
train
[ "LuMTjbny4vy", "WGI0g1bsba", "cX2t7ZSLiKZ", "1tB1HsB0Ta-", "ZwLk6onZaRF", "KBEC9ta29yx", "rx1n4_j0TRX", "IMjGdsooHKM", "VNnb8klfryT", "yru9zOZ_Itw", "H5PsmRJqkx", "zl2Omi6Kmr6", "Edjip8YF_YQ", "SLBzHDkiX4U", "MU2P-iPWhFL", "nkhJeFRBPy" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Though we are no longer able to provide updates to the current pdf of the paper, we did reorganize and make more clear the results of Appendix B and we would like to bring these changes to the attention of the reviewer.\n\n1. As the reviewer suggested, we organized our results into lemmas and theorems as detailed...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "rx1n4_j0TRX", "MU2P-iPWhFL", "Edjip8YF_YQ", "iclr_2022_1NvflqAdoom", "KBEC9ta29yx", "rx1n4_j0TRX", "IMjGdsooHKM", "VNnb8klfryT", "SLBzHDkiX4U", "MU2P-iPWhFL", "iclr_2022_1NvflqAdoom", "1tB1HsB0Ta-", "zl2Omi6Kmr6", "nkhJeFRBPy", "iclr_2022_1NvflqAdoom", "iclr_2022_1NvflqAdoom" ]
iclr_2022_Dup_dDqkZC5
Latent Variable Sequential Set Transformers for Joint Multi-Agent Motion Prediction
Robust multi-agent trajectory prediction is essential for the safe control of robotic systems. A major challenge is to efficiently learn a representation that approximates the true joint distribution of contextual, social, and temporal information to enable planning. We propose Latent Variable Sequential Set Transforme...
Accept (Spotlight)
This paper studies the problem of motion prediction for multiple agents in a scene using transformer-based VAE like architecture. The paper received mixed reviews initially which generally tended towards borderline acceptance. All reviews appreciated extensive experiments but had some clarifications and requests for ab...
val
[ "RAf_y1rO4zf", "sXgRaaG2of", "9wsV5qAPiX4", "IYvdcy4cF1C", "gegjUwKN582", "4oBR1uD6nNv", "EdR-CYwyfEC", "ppc-cr51PWk", "FnwoW-4wjL", "Fq703gD9Vri", "tTQxE55BirV", "Trv7EaBGcSo", "o6YutwdqB0P", "KWDc_J7kGc", "4hXkXnosfDV", "40n1ukP6UlM", "kEHsiXLGvi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper adapts transformers to multi-agent motion forecasting. The attention layers are applied along the time and agent axes to capture motion and social information. A latent variable is introduced on the output to capture discrete motion for each agent.\nExtensive experiments are conducted on various datasets with go...
[ 8, -1, 6, 8, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, 3, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_Dup_dDqkZC5", "EdR-CYwyfEC", "iclr_2022_Dup_dDqkZC5", "iclr_2022_Dup_dDqkZC5", "ppc-cr51PWk", "Fq703gD9Vri", "tTQxE55BirV", "kEHsiXLGvi", "iclr_2022_Dup_dDqkZC5", "o6YutwdqB0P", "RAf_y1rO4zf", "iclr_2022_Dup_dDqkZC5", "9wsV5qAPiX4", "FnwoW-4wjL", "KWDc_J7kGc", "IYvdcy4cF1C",...
iclr_2022_q4HaTeMO--y
Declarative nets that are equilibrium models
Implicit layers are computational modules that output the solution to some problem depending on the input and the layer parameters. Deep equilibrium models (DEQs) output a solution to a fixed point equation. Deep declarative networks (DDNs) solve an optimisation problem in their forward pass, an arguably more intuitive...
Accept (Poster)
Thank you for your submission to ICLR. All the reviewers are in agreement that this paper presents a nice contribution to the field, highlighting a class of DEQ models that correspond to optimization problems. This provides a nice perspective on what kinds of computations DEQ models may perform, and I think provides ...
train
[ "Ud8PdqGwqV", "SwlNSKf9ml-", "5fnz8Cjpo-", "y2wP7urTIrv", "OFOtP4SIqF", "tGClOBfChEA", "E5FPzhhabzQ", "C2JleUQGfcE", "7LTDT2RNqPm", "spWgBfFsqV_", "fC-I_z3b7D", "FyzsucoPzY7", "heH4zAWW8p", "IOGjyq7io-i", "jT0D9bh4Cwl", "tJxMV5KJ3K6", "AdZFW_oTObH", "aY7YkhnByL" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your review and for responding to our rebuttal. ", " Thanks for taking the time to review our paper and respond to our rebuttal.", " I'd like to thank the authors for their responses. I've read through the reviews and think the authors did a great job clarifying some of the important question...
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "5fnz8Cjpo-", "OFOtP4SIqF", "spWgBfFsqV_", "iclr_2022_q4HaTeMO--y", "C2JleUQGfcE", "jT0D9bh4Cwl", "tJxMV5KJ3K6", "y2wP7urTIrv", "aY7YkhnByL", "fC-I_z3b7D", "AdZFW_oTObH", "E5FPzhhabzQ", "IOGjyq7io-i", "tGClOBfChEA", "iclr_2022_q4HaTeMO--y", "iclr_2022_q4HaTeMO--y", "iclr_2022_q4HaTeM...
iclr_2022_DnG75_KyHjX
MoReL: Multi-omics Relational Learning
Multi-omics data analysis has the potential to discover hidden molecular interactions, revealing potential regulatory and/or signal transduction pathways for cellular processes of interest when studying life and disease systems. One of the critical challenges when dealing with real-world multi-omics data is that they may m...
Accept (Poster)
A deep Bayesian generative model is presented for multi-omics integration, using fused Gromov-Wasserstein regularization between latent representations of the data views. The method removes several non-trivial and practically important restrictions from an earlier method BayRel, enabling application in new setups, whil...
train
[ "09hiCi42B82", "0CoMsagtlk", "ndN6XZ4tv4W", "vOOqDLjceNp", "C4XwF_XvsA", "chimWrpiZZZ", "F2RpGBoQMh", "XHkI-vxVKWK", "XSHRyTl6or8", "f-944taGjK" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate the reviewer’s feedback. Regarding the suggestions to extend related works and better present the context of the problem settings with connection to existing methods, we tried our best in our submission and in our responses to clearly describe them. We truly appreciate it if the reviewer can point o...
[ -1, -1, -1, -1, -1, -1, 5, 5, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 2, 3, 3 ]
[ "0CoMsagtlk", "C4XwF_XvsA", "f-944taGjK", "XSHRyTl6or8", "XHkI-vxVKWK", "F2RpGBoQMh", "iclr_2022_DnG75_KyHjX", "iclr_2022_DnG75_KyHjX", "iclr_2022_DnG75_KyHjX", "iclr_2022_DnG75_KyHjX" ]
iclr_2022_sA4qIu3zv6v
Towards General Function Approximation in Zero-Sum Markov Games
This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves. The study focuses on the challenging settings where the value function or the model is parameterized by general function classes. Provably efficient algorithms for both decoupled and coordinated settings are developed. In the ...
Accept (Poster)
Summary: The paper discusses Markov games with general function approximation, and investigates in particular reinforcement learning algorithms that learn a Nash policy in a trial-and-error fashion. They consider two settings: the decoupled one where the player does not observe the opponent’s policy and the coordinated...
train
[ "hZ7erKEZMNf", "YsH-r8VI7hs", "LPBs7oDcc0", "rEBDijX62zU", "gTb0pvJf95x", "jHziDYS5JT", "-R-TY8KJhEn", "GjPo7rKJ3B", "77KjCKdbcyf", "CASs1MhnQnT", "pzIXZNv39Ci", "h1gPbCkWS6h" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors made some efforts to improve the submission. However, I am not fully convinced that these efforts really raised the bar. Accordingly I am not going to adjust my grade for this submission.", " Dear Reviewer, we are wondering if you have any further questions. Although we cannot upload new versions, w...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 2 ]
[ "h1gPbCkWS6h", "LPBs7oDcc0", "gTb0pvJf95x", "h1gPbCkWS6h", "pzIXZNv39Ci", "CASs1MhnQnT", "77KjCKdbcyf", "iclr_2022_sA4qIu3zv6v", "iclr_2022_sA4qIu3zv6v", "iclr_2022_sA4qIu3zv6v", "iclr_2022_sA4qIu3zv6v", "iclr_2022_sA4qIu3zv6v" ]
iclr_2022_Opmqtk_GvYL
MetaMorph: Learning Universal Controllers with Transformers
Multiple domains like vision, natural language, and audio are witnessing tremendous progress by leveraging Transformers for large scale pre-training followed by task specific fine tuning. In contrast, in robotics we primarily train a single robot for a single task. However, modular robot systems now allow for the flexi...
Accept (Poster)
The paper aims to improve generalization and sample efficiency in robotic control. In this regard, the authors note that modular robot systems, which provide building blocks for a task-specific morphology, can be considered just another domain where transformers can be used. The authors propose to learn a universal controll...
train
[ "q3C_CMD5eOx", "UzG5WRXD1Es", "2sGStanaNqC", "goDrVKSLkNb", "WYMmGXNsVF0", "korCNc9jGLz", "omjF3OA4wTe", "llynQ4M0qJ", "HDx_wrKLgSq", "4ZO8x07gfyi", "i6_p9sUO3sB", "1g5e7lGbPe1", "IDuMPyniafa", "tLO9J4Tv2xF", "cbvYAOHZWA2", "omX3fvcIBTN", "0d_ODvPXFFT", "BeDk_8AdKDE", "F27qcSZunL...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "...
[ " Thanks for clarifying my doubts and addressing my concerns. I will therefore keep my (already very good!) score.", "The paper presents MetaMorph (MM), a transformer-based architecture for generalization over robot morphologies. Extensive experimentation demonstrates that policies learned by MM transfer well to ...
[ -1, 6, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 8 ]
[ -1, 5, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "tLO9J4Tv2xF", "iclr_2022_Opmqtk_GvYL", "goDrVKSLkNb", "llynQ4M0qJ", "0d_ODvPXFFT", "F27qcSZunL6", "iclr_2022_Opmqtk_GvYL", "HDx_wrKLgSq", "4ZO8x07gfyi", "i6_p9sUO3sB", "1g5e7lGbPe1", "omX3fvcIBTN", "iclr_2022_Opmqtk_GvYL", "uywG2bltvLf", "JUv5J-te6B", "UzG5WRXD1Es", "BeDk_8AdKDE", ...
iclr_2022_MEpKGLsY8f
Meta Discovery: Learning to Discover Novel Classes given Very Limited Data
In novel class discovery (NCD), we are given labeled data from seen classes and unlabeled data from unseen classes, and we train clustering models for the unseen classes. However, the implicit assumptions behind NCD are still unclear. In this paper, we demystify assumptions behind NCD and find that high-level semantic ...
Accept (Spotlight)
All reviewers believe that this paper is valuable, and the authors have made a significant, careful contribution. Some suggestions from the area chair: - "in causality" is not a standard technical term and also not non-technical idiomatic English, so it should be explained the first time it is used. - The authors shou...
train
[ "CGvNduqpleA", "3eTULa9zNZ8", "0dDsnDDNKRr", "vsKw7Ij35Eb", "w6m-p0agWyo", "3Hw0D6CEXpk", "JRtrh33WyxL", "Hm94O0Ukte6", "KC1wrw0W5aA", "6Jgeb9-i0sA", "5NMuDDudqcL", "vz6UmQ100oW", "TgYd8uHaGYJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. My concerns are well addressed. I also have carefully read the comments from other reviewers. To this end, I would like to keep my acceptance score.", "Learning to discover novel class is a very challenging task and a new research topic in recent years. In this task, a known-class data...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "3Hw0D6CEXpk", "iclr_2022_MEpKGLsY8f", "6Jgeb9-i0sA", "JRtrh33WyxL", "iclr_2022_MEpKGLsY8f", "vz6UmQ100oW", "TgYd8uHaGYJ", "5NMuDDudqcL", "3eTULa9zNZ8", "3eTULa9zNZ8", "iclr_2022_MEpKGLsY8f", "iclr_2022_MEpKGLsY8f", "iclr_2022_MEpKGLsY8f" ]
iclr_2022_JprM0p-q0Co
Tackling the Generative Learning Trilemma with Denoising Diffusion GANs
A wide variety of deep generative models has been developed in the past decade. Yet, these models often struggle with simultaneously addressing three key requirements including: high sample quality, mode coverage, and fast sampling. We call the challenge imposed by these requirements the generative learning trilemma, a...
Accept (Spotlight)
The paper modifies DPMs by replacing the denoising L2 losses with GANs to learn the iterative denoising process. This leads to excellent results using a small number of refinement steps. In some sense, this also takes away one of the key advantages of DPMs over GANs, which is DPMs minimize a well-defined objective func...
train
[ "KzE_dfiqxcx", "dE9v9dRqV3", "4uHq1c6vg5m", "T8o_Fe-2Zi", "ik9T0pZnUy", "XyzTTkXjh-F", "kehVatsTTeC", "GElhwquO7Oj", "Cb9-59gImPc", "ke5CCKykMDQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces _Denoising Diffusion GANs (DDGANs)_ to solve the so-called \"generative learning trilemma\" by training a generative model that achieves high-quality sampling, fast sampling times, and at the same time, high mode coverage. The main intuition of the paper is to combine GANs, which have been sh...
[ 8, -1, -1, -1, -1, -1, -1, 8, 8, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2022_JprM0p-q0Co", "4uHq1c6vg5m", "T8o_Fe-2Zi", "KzE_dfiqxcx", "Cb9-59gImPc", "GElhwquO7Oj", "ke5CCKykMDQ", "iclr_2022_JprM0p-q0Co", "iclr_2022_JprM0p-q0Co", "iclr_2022_JprM0p-q0Co" ]
iclr_2022_NoB8YgRuoFU
PI3NN: Out-of-distribution-aware Prediction Intervals from Three Neural Networks
We propose a novel prediction interval (PI) method for uncertainty quantification, which addresses three major issues with the state-of-the-art PI methods. First, existing PI methods require retraining of neural networks (NNs) for every given confidence level and suffer from the crossing issue in calculating multiple P...
Accept (Poster)
The paper proposes a novel method, PI3NN, for estimating prediction intervals (PIs) for quantifying the uncertainty of neural network predictions. The method is based on independently training three neural networks with different loss functions which are then combined via a linear combination where the coefficients for...
train
[ "0E1bWanmVaU", "w3AaVclRc1W", "myqd7Sfv9Ad", "x2cusRhWDLX", "q93SVNvyPb", "RfoG-yD055L", "mfH3V1o3E-o", "pueZrP34pJe", "bvMnAGyG6Eb", "94XywdUG4T", "Rm2alVO3ywq", "_4XDdyZ0j8G", "eTeAkX79YB" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for taking a look at the additional experiments and our response. However, we are very confused about the reviewer's overall evaluation: \"... it's still a bit hard to say the proposed method is novel enough as it is a combination of pre-existing methods\". Our question is: \"from the review...
[ -1, -1, -1, 5, 5, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "q93SVNvyPb", "myqd7Sfv9Ad", "Rm2alVO3ywq", "iclr_2022_NoB8YgRuoFU", "iclr_2022_NoB8YgRuoFU", "bvMnAGyG6Eb", "eTeAkX79YB", "q93SVNvyPb", "_4XDdyZ0j8G", "iclr_2022_NoB8YgRuoFU", "x2cusRhWDLX", "iclr_2022_NoB8YgRuoFU", "iclr_2022_NoB8YgRuoFU" ]
iclr_2022_rFbR4Fv-D6-
Automated Self-Supervised Learning for Graphs
Graph self-supervised learning has gained increasing attention due to its capacity to learn expressive node representations. Many pretext tasks, or loss functions, have been designed from distinct perspectives. However, we observe that different pretext tasks affect downstream tasks differently across datasets, which sug...
Accept (Poster)
Strengths: * Strong empirical study across multiple datasets. However, the gains are not as impressive as for other pretraining domains, such as text or images. * Interesting formulation of pseudo-homophily as an objective to optimize in the self-supervision stage * Well-written paper Weaknesses: * Novelty may be limi...
train
[ "2CtULa8R3t0", "4uxkh9JhxzG", "i0DtXQipyfY", "UVvpvuaOk7n", "DlXgczYdvne", "jHrPDP2vt26", "iNa-Nfvidji", "QHmk8CsCA2_", "I69gQPKIsuL", "sklSiQHWZVS", "9W-KevTrtJ", "H4w78OOPsSZ", "sD-3YadW2KU", "QXdpgdEd0I-" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The proposed work looks at the task of automated self-supervised learning (SSL) on graphs, by using pseudo-homophily as a surrogate objective combined with a search strategy for the proposed approach, AutoSSL . \n\nHomophily is defined as the average of sameness of labels over pairs of connected vertices. Pseudo-h...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2022_rFbR4Fv-D6-", "UVvpvuaOk7n", "sD-3YadW2KU", "9W-KevTrtJ", "iclr_2022_rFbR4Fv-D6-", "i0DtXQipyfY", "QHmk8CsCA2_", "I69gQPKIsuL", "H4w78OOPsSZ", "QXdpgdEd0I-", "2CtULa8R3t0", "iclr_2022_rFbR4Fv-D6-", "iclr_2022_rFbR4Fv-D6-", "iclr_2022_rFbR4Fv-D6-" ]
iclr_2022_QjOQkpzKbNk
Distilling GANs with Style-Mixed Triplets for X2I Translation with Limited Data
Conditional image synthesis is an integral part of many X2I translation systems, including image-to-image, text-to-image and audio-to-image translation systems. Training these large systems generally requires huge amounts of training data. Therefore, we investigate knowledge distillation to transfer knowledge from a h...
Accept (Poster)
This paper studies the problem of distilling the knowledge present in different GAN-based image generation tasks. The paper received mixed reviews. The reviewers had difficulty understanding some details regarding the approach, and requests for ablations and clarifications on existing empirical evaluation. The authors ...
test
[ "iGYscP9jLas", "0wsrmRxD5n8", "B2Xel8fxPWK", "df0vFerfEWh", "BdPLyagSTQZ", "Dy8hZQPKeXU", "aQ2tUSAJXYy", "YtiFnrvym9c", "IXwqOrxYYJX", "STzz78HamXy", "SxmYH8Smt0f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " First, I really appreciate the long response from the authors. After reading all the comments which address most of my concerns, I hold a positive attitude to this submission. Basically, I think this X2I method is interesting and flexible to many applications. And it also solves (at least tries to solve) an impor...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "IXwqOrxYYJX", "YtiFnrvym9c", "iclr_2022_QjOQkpzKbNk", "SxmYH8Smt0f", "iclr_2022_QjOQkpzKbNk", "df0vFerfEWh", "STzz78HamXy", "B2Xel8fxPWK", "aQ2tUSAJXYy", "iclr_2022_QjOQkpzKbNk", "iclr_2022_QjOQkpzKbNk" ]
iclr_2022_GQd7mXSPua
Meta Learning Low Rank Covariance Factors for Energy Based Deterministic Uncertainty
Numerous recent works utilize bi-Lipschitz regularization of neural network layers to preserve relative distances between data instances in the feature spaces of each layer. This distance sensitivity with respect to the data aids in tasks such as uncertainty calibration and out-of-distribution (OOD) detection. In previ...
Accept (Poster)
Reviewers all found the work well-motivated in addressing uncertainty, a topic that has not seen much focus in meta-learning and few-shot learning. They describe the challenges well: small sample sizes and OOD shift. They then propose a solution they find works well empirically to overcome these challenges based on a s...
train
[ "MiZC4JZn7d0", "hXCllFsNy1d", "64gP9oXbqsQ", "7IUU9ukg-jk", "pt1ZNQv0VJi", "mAoiMweyuAl", "OKZrzeuEZz", "oFKzS3GQXiI", "cg9cOuRckae", "FlpdLKPa7lz", "sgeQZ6fIIi", "COl6U9Ml3gF", "AfFsFJrV0t", "KTjcu_80jQ4" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. \n\nIn this paper, we focus on developing a data-driven approach to solve the target problem. Therefore, our desiderata is the ability to learn from the task distribution. \n\nWe can categorize machine learning approaches into two directions. The first one, as you mentioned, is to int...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "hXCllFsNy1d", "64gP9oXbqsQ", "pt1ZNQv0VJi", "iclr_2022_GQd7mXSPua", "sgeQZ6fIIi", "7IUU9ukg-jk", "AfFsFJrV0t", "7IUU9ukg-jk", "KTjcu_80jQ4", "AfFsFJrV0t", "7IUU9ukg-jk", "7IUU9ukg-jk", "iclr_2022_GQd7mXSPua", "iclr_2022_GQd7mXSPua" ]
iclr_2022_WLEx3Jo4QaB
Graph Condensation for Graph Neural Networks
Given the prevalence of large-scale graphs in real-world applications, the storage and time for training neural models have raised increasing concerns. To alleviate the concerns, we propose and study the problem of graph condensation for graph neural networks (GNNs). Specifically, we aim to condense the large, origina...
Accept (Poster)
This paper addresses the scale issue in Graph Neural Networks by proposing a “condensation” approach that produces a small synthetic graph from a large original graph such that GNNs trained on both graphs have comparable performance. Reviewer cTj2 had concerns with novelty: they claimed the proposed method was close ...
val
[ "w-jgJV9XlMk", "a0GvcBHp1hR", "j53VLb6TSm2", "4Ho6VBNZI9V", "UQbKK0_dDZr", "MctvfUb8Q6C", "mBeXiZvYb7y", "JHxRlht-fY0", "A3VEqYW1mRv", "CI_IjnIXwNU", "tsslz3uc1B7", "J4T-wmOuA22", "By9B7PMOc0K", "5sVWiov2J45" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed comments and valuable questions! Your comment of “the paper's idea is interesting” is very encouraging! Next we provide details to clarify your major concerns.\n\n> Q1. Technical contributions of this work. \n\nA1. We thank the reviewer, and will clarify our technical novelty and signif...
[ -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "5sVWiov2J45", "By9B7PMOc0K", "UQbKK0_dDZr", "iclr_2022_WLEx3Jo4QaB", "iclr_2022_WLEx3Jo4QaB", "iclr_2022_WLEx3Jo4QaB", "CI_IjnIXwNU", "4Ho6VBNZI9V", "j53VLb6TSm2", "JHxRlht-fY0", "w-jgJV9XlMk", "a0GvcBHp1hR", "iclr_2022_WLEx3Jo4QaB", "iclr_2022_WLEx3Jo4QaB" ]
iclr_2022_HTx7vrlLBEj
Half-Inverse Gradients for Physical Deep Learning
Recent works in deep learning have shown that integrating differentiable physics simulators into the training process can greatly improve the quality of results. Although this combination represents a more complex optimization task than usual neural network training, the same gradient-based optimizers are used to minim...
Accept (Spotlight)
Thank you for your submission to ICLR. The reviewers and I are in agreement that the paper presents a substantial contribution to the field at the intersection of differentiable simulation and ML methods. In particular, the half-inverse method is compelling, non-obvious, and hints of a nice path forward towards the g...
train
[ "iXSFJXkBLft", "A1cGwG8whG_", "dgcg9IxlD5O", "eiDfOV2HuR6", "vcTdEATuhgl", "_-7OQPVNRFh", "3kG5qpcXvT7", "qPVae0muwwQ", "JpkAHGkH-hR", "v77Tqyfeer9" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nThank you for your responses. I think the authors have partially addressed my concerns. Vectorized and serialized back-propagation seem to have individual downsides in linear memory and time consumption, respectively. However, I guess it will not be a big problem here considering the performance ...
[ -1, -1, 6, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "_-7OQPVNRFh", "iclr_2022_HTx7vrlLBEj", "iclr_2022_HTx7vrlLBEj", "dgcg9IxlD5O", "dgcg9IxlD5O", "JpkAHGkH-hR", "v77Tqyfeer9", "iclr_2022_HTx7vrlLBEj", "iclr_2022_HTx7vrlLBEj", "iclr_2022_HTx7vrlLBEj" ]
iclr_2022_QRX0nCX_gk
Multimeasurement Generative Models
We formally map the problem of sampling from an unknown distribution with a density in $\mathbb{R}^d$ to the problem of learning and sampling a smoother density in $\mathbb{R}^{Md}$ obtained by convolution with a fixed factorial kernel: the new density is referred to as M-density and the kernel as multimeasurement nois...
Accept (Poster)
This paper studies the problem of generative modeling by convolving an unknown complex density with a factorial kernel called a multi-measurement noise model (MNM) to obtain a smoother density that is easier to sample from. Poisson and Gaussian MNMs are proposed for the convolution. Experiments regarding image synthesis a...
train
[ "jpbNAbGvLke", "R1EA_7dc_qZ", "qaaNghxmVfK", "LSirLT8sadY", "Swg95y1Mij", "6iEEPOt33w", "dDozOifWyvJ", "kB4LQivevPk", "Qg5XI2Aohh", "79QmnoGMOTi" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are very grateful for the detailed and valuable comments from all reviewers, and glad to see that all reviewers agree that our results are novel and interesting. We are working on incorporating all suggestions into an upcoming revision, but meanwhile we will provide our responses to the common questions here.\...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2022_QRX0nCX_gk", "qaaNghxmVfK", "Qg5XI2Aohh", "dDozOifWyvJ", "iclr_2022_QRX0nCX_gk", "79QmnoGMOTi", "kB4LQivevPk", "iclr_2022_QRX0nCX_gk", "iclr_2022_QRX0nCX_gk", "iclr_2022_QRX0nCX_gk" ]
iclr_2022_qsZoGvFiJn1
Hindsight is 20/20: Leveraging Past Traversals to Aid 3D Perception
Self-driving cars must detect vehicles, pedestrians, and other traffic participants accurately to operate safely. Small, far-away, or highly occluded objects are particularly challenging because there is limited information in the LiDAR point clouds for detecting them. To address this challenge, we leverage valuable inf...
Accept (Poster)
The paper proposes a framework for object detection on lidar scans, with queries of scene features extracted offline from previous traversals. Overall there is good agreement among the reviewers, with three recommending acceptance and one marginal acceptance -- to me the authors satisfactorily addressed most aspe...
val
[ "HlcIQJwqaPY", "M93NGzVnMwR", "psY-M5O3r73", "95DoVNqq5xz", "yVlv-FrQfMq", "PejtZMOTg0S", "VugwLWSxKpp", "QHVkp7PeT76", "snVL0lnBEox", "3Pv81b4dc2o", "E_rKvpSCUw8", "rTU2ZxN8NS", "U0Gn1_bEW8l" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi there, thanks for the replies, and sorry for late response! \n\nI am mostly positive about the paper except the few ablation or clarification that I mentioned in my review. One minor comment though: when more data are observed, except for using the new data to update the offline features with the trained model...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "snVL0lnBEox", "psY-M5O3r73", "PejtZMOTg0S", "QHVkp7PeT76", "VugwLWSxKpp", "U0Gn1_bEW8l", "rTU2ZxN8NS", "E_rKvpSCUw8", "3Pv81b4dc2o", "iclr_2022_qsZoGvFiJn1", "iclr_2022_qsZoGvFiJn1", "iclr_2022_qsZoGvFiJn1", "iclr_2022_qsZoGvFiJn1" ]
iclr_2022_uxgg9o7bI_3
A New Perspective on "How Graph Neural Networks Go Beyond Weisfeiler-Lehman?"
We propose a new perspective on designing powerful Graph Neural Networks (GNNs). In a nutshell, this enables a general solution to inject structural properties of graphs into a message-passing aggregation scheme of GNNs. As a theoretical basis, we develop a new hierarchy of local isomorphism on neighborhood subgraphs. ...
Accept (Oral)
This paper proposes an efficient method for message passing that can incorporate structural information that is provably stronger than 1-WL. As compared to three strands of provably powerful (more than 1 WL) GNNs, the method has limited additional computational overhead, and can also show encouraging results on the ove...
train
[ "6jVKzUT9yX0", "hkJjtAhaQBd", "aaFIQr6Lq3_", "KnMu5nrzhT3", "7Mu84I7G3f", "L-vnrAgPY1", "BKPxVPHtOp_", "YsCmVRpgea", "iU6zsabJ4Gw", "-TuEmM9lzr3", "LWuGRDvQx5v", "Hc75HwmC6_P", "bEHXEXHS9hZ", "GSWn9wxEwfJ", "uQNa688UIvk", "iVIrKbFpBqD", "d44lOms2YB5", "LMqgPiTK-SY" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for addressing my comments. I appreciate also their response to the other reviewers' comments. I still believe it is a good paper, and I keep my score. ", " I would like to keep my score.", "The paper introduces a general framework for designing message-passing neural networks (MPNNs) stro...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 8 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "GSWn9wxEwfJ", "-TuEmM9lzr3", "iclr_2022_uxgg9o7bI_3", "LWuGRDvQx5v", "uQNa688UIvk", "iU6zsabJ4Gw", "YsCmVRpgea", "iclr_2022_uxgg9o7bI_3", "iclr_2022_uxgg9o7bI_3", "LMqgPiTK-SY", "Hc75HwmC6_P", "aaFIQr6Lq3_", "d44lOms2YB5", "iVIrKbFpBqD", "iclr_2022_uxgg9o7bI_3", "iclr_2022_uxgg9o7bI_3...