Columns: title, url, detail_url, authors, tags, abstract, pdf
Self-Supervised Policy Adaptation during Deployment
https://openreview.net/forum?id=o_V-MjyyGV_
https://openreview.net/forum?id=o_V-MjyyGV_
Nicklas Hansen,Rishabh Jangir,Yu Sun,Guillem Alenyà,Pieter Abbeel,Alexei A Efros,Lerrel Pinto,Xiaolong Wang
ICLR 2021,Spotlight
In most real world scenarios, a policy trained by reinforcement learning in one environment needs to be deployed in another, potentially quite different environment. However, generalization across different environments is known to be hard. A natural solution would be to keep training after deployment in the new enviro...
https://openreview.net/pdf/6949f5e82ffd2bd635a6de802a733540b19b9cc3.pdf
Differentially Private Learning Needs Better Features (or Much More Data)
https://openreview.net/forum?id=YTWGvpFOQD-
https://openreview.net/forum?id=YTWGvpFOQD-
Florian Tramer,Dan Boneh
ICLR 2021,Spotlight
We demonstrate that differentially private machine learning has not yet reached its "AlexNet moment" on many canonical vision tasks: linear models trained on handcrafted features significantly outperform end-to-end deep neural networks for moderate privacy budgets. To exceed the performance of handcrafted features, w...
https://openreview.net/pdf/63107901e325896b18874aad193314befc47c7ae.pdf
Data-Efficient Reinforcement Learning with Self-Predictive Representations
https://openreview.net/forum?id=uCQfPZwRaUu
https://openreview.net/forum?id=uCQfPZwRaUu
Max Schwarzer,Ankesh Anand,Rishab Goel,R Devon Hjelm,Aaron Courville,Philip Bachman
ICLR 2021,Spotlight
While deep reinforcement learning excels at solving tasks where large amounts of data can be collected through virtually unlimited interaction with the environment, learning from limited interaction remains a key challenge. We posit that an agent can learn more efficiently if we augment reward maximization with self-su...
https://openreview.net/pdf/1332dd3bfd157968abcdfda3acf4d4a7499d6143.pdf
Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning
https://openreview.net/forum?id=wS0UFjsNYjn
https://openreview.net/forum?id=wS0UFjsNYjn
Dong Bok Lee,Dongchan Min,Seanie Lee,Sung Ju Hwang
ICLR 2021,Spotlight
Unsupervised learning aims to learn meaningful representations from unlabeled data that capture its intrinsic structure and can be transferred to downstream tasks. Meta-learning, whose objective is to learn to generalize across tasks such that the learned model can rapidly adapt to a novel task, shares the spir...
https://openreview.net/pdf/7b58adedb02a73d26b32a949a08c9238409022a5.pdf
Implicit Convex Regularizers of CNN Architectures: Convex Optimization of Two- and Three-Layer Networks in Polynomial Time
https://openreview.net/forum?id=0N8jUH4JMv6
https://openreview.net/forum?id=0N8jUH4JMv6
Tolga Ergen,Mert Pilanci
ICLR 2021,Spotlight
We study training of Convolutional Neural Networks (CNNs) with ReLU activations and introduce exact convex optimization formulations with a polynomial complexity with respect to the number of data samples, the number of neurons, and data dimension. More specifically, we develop a convex analytic framework utilizing sem...
https://openreview.net/pdf/dba1d25e1354e478235ccc68af0dd34e0cf91c79.pdf
Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels
https://openreview.net/forum?id=GY6-6sTvGaf
https://openreview.net/forum?id=GY6-6sTvGaf
Denis Yarats,Ilya Kostrikov,Rob Fergus
ICLR 2021,Spotlight
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision tasks to transfo...
https://openreview.net/pdf/b8b967965ff52b2eb545d1a7d4284f59f0fc181f.pdf
Dynamic Tensor Rematerialization
https://openreview.net/forum?id=Vfs_2RnOD0H
https://openreview.net/forum?id=Vfs_2RnOD0H
Marisa Kirisame,Steven Lyubomirsky,Altan Haan,Jennifer Brennan,Mike He,Jared Roesch,Tianqi Chen,Zachary Tatlock
ICLR 2021,Spotlight
Checkpointing enables the training of deep learning models under restricted memory budgets by freeing intermediate activations from memory and recomputing them on demand. Current checkpointing techniques statically plan these recomputations offline and assume static computation graphs. We demonstrate that a simple onli...
https://openreview.net/pdf/241e988e3953566bc4fe0e6a974d29ff78dfcc2e.pdf
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
https://openreview.net/forum?id=VqzVhqxkjH1
https://openreview.net/forum?id=VqzVhqxkjH1
Nils Lukas,Yuxuan Zhang,Florian Kerschbaum
ICLR 2021,Spotlight
In Machine Learning as a Service, a provider trains a deep neural network and gives many users access. The hosted (source) model is susceptible to model stealing attacks, where an adversary derives a surrogate model from API access to the source model. For post hoc detection of such attacks, the provider needs a robust...
https://openreview.net/pdf/4c8c09b36e5485077c542426e9f254160401a43c.pdf
Model-Based Visual Planning with Self-Supervised Functional Distances
https://openreview.net/forum?id=UcoXdfrORC
https://openreview.net/forum?id=UcoXdfrORC
Stephen Tian,Suraj Nair,Frederik Ebert,Sudeep Dasari,Benjamin Eysenbach,Chelsea Finn,Sergey Levine
ICLR 2021,Spotlight
A generalist robot must be able to complete a variety of tasks in its environment. One appealing way to specify each task is in terms of a goal observation. However, learning goal-reaching policies with reinforcement learning remains a challenging problem, particularly when hand-engineered reward functions are not avai...
https://openreview.net/pdf/e7d9842e2ee26ac0242d6efcf7a863c669541594.pdf
Mathematical Reasoning via Self-supervised Skip-tree Training
https://openreview.net/forum?id=YmqAnY0CMEy
https://openreview.net/forum?id=YmqAnY0CMEy
Markus Norman Rabe,Dennis Lee,Kshitij Bansal,Christian Szegedy
ICLR 2021,Spotlight
We demonstrate that self-supervised language modeling applied to mathematical formulas enables logical reasoning. To measure the logical reasoning abilities of language models, we formulate several evaluation (downstream) tasks, such as inferring types, suggesting missing assumptions and completing equalities. For trai...
https://openreview.net/pdf/405aeadddeb5c223426f15f57b0e520aeb2ce585.pdf
DeepAveragers: Offline Reinforcement Learning By Solving Derived Non-Parametric MDPs
https://openreview.net/forum?id=eMP1j9efXtX
https://openreview.net/forum?id=eMP1j9efXtX
Aayam Kumar Shrestha,Stefan Lee,Prasad Tadepalli,Alan Fern
ICLR 2021,Spotlight
We study an approach to offline reinforcement learning (RL) based on optimally solving finitely-represented MDPs derived from a static dataset of experience. This approach can be applied on top of any learned representation and has the potential to easily support multiple solution objectives as well as zero-sh...
https://openreview.net/pdf/41ec2c7a3d80d8e07956f446e858586b83aa7620.pdf
On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
https://openreview.net/forum?id=p-NZIuwqhI4
https://openreview.net/forum?id=p-NZIuwqhI4
Kenji Kawaguchi
ICLR 2021,Spotlight
A deep equilibrium model uses implicit layers, which are implicitly defined through an equilibrium point of an infinite sequence of computation. It avoids any explicit computation of the infinite sequence by finding an equilibrium point directly via root-finding and by computing gradients via implicit differentiation. ...
https://openreview.net/pdf/2d0baf2a17b567711b0bc3085000c41372e8c2d8.pdf
BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration
https://openreview.net/forum?id=yHeg4PbFHh
https://openreview.net/forum?id=yHeg4PbFHh
Augustus Odena,Kensen Shi,David Bieber,Rishabh Singh,Charles Sutton,Hanjun Dai
ICLR 2021,Spotlight
Program synthesis is challenging largely because of the difficulty of search in a large space of programs. Human programmers routinely tackle the task of writing complex programs by writing sub-programs and then analyzing their intermediate results to compose them in appropriate ways. Motivated by this intuition, we pr...
https://openreview.net/pdf/2a5e6446f3e44243b64f41369e186a582fb55a63.pdf
The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings
https://openreview.net/forum?id=qYda4oLEc1
https://openreview.net/forum?id=qYda4oLEc1
Elliot Meyerson,Risto Miikkulainen
ICLR 2021,Spotlight
This paper frames a general prediction system as an observer traveling around a continuous space, measuring values at some locations, and predicting them at others. The observer is completely agnostic about any particular task being solved; it cares only about measurement locations and their values. This perspective le...
https://openreview.net/pdf/467ef145eb7edf951ce11daf694cfa5593a44e89.pdf
Fidelity-based Deep Adiabatic Scheduling
https://openreview.net/forum?id=NECTfffOvn1
https://openreview.net/forum?id=NECTfffOvn1
Eli Ovits,Lior Wolf
ICLR 2021,Spotlight
Adiabatic quantum computation is a form of computation that acts by slowly interpolating a quantum system between an easy to prepare initial state and a final state that represents a solution to a given computational problem. The choice of the interpolation schedule is critical to the performance: if at a certain time ...
https://openreview.net/pdf/864d26c496d060fce5f6a17f3e6edd74aaead783.pdf
Deciphering and Optimizing Multi-Task Learning: a Random Matrix Approach
https://openreview.net/forum?id=Cri3xz59ga
https://openreview.net/forum?id=Cri3xz59ga
Malik Tiomoko,Hafiz Tiomoko Ali,Romain Couillet
ICLR 2021,Spotlight
This article provides theoretical insights into the inner workings of multi-task and transfer learning methods, by studying the tractable least-square support vector machine multi-task learning (LS-SVM MTL) method, in the limit of large ($p$) and numerous ($n$) data. By a random matrix analysis applied to a Gaussian mi...
https://openreview.net/pdf/4484f1a7adf7cb2152f913693079ef4764c69462.pdf
Learning-based Support Estimation in Sublinear Time
https://openreview.net/forum?id=tilovEHA3YS
https://openreview.net/forum?id=tilovEHA3YS
Talya Eden,Piotr Indyk,Shyam Narayanan,Ronitt Rubinfeld,Sandeep Silwal,Tal Wagner
ICLR 2021,Spotlight
We consider the problem of estimating the number of distinct elements in a large data set (or, equivalently, the support size of the distribution induced by the data set) from a random sample of its elements. The problem occurs in many applications, including biology, genomics, computer systems and linguistics. A line...
https://openreview.net/pdf/5febcda9d574f8ade6a7ee98fde88e0c8e140481.pdf
Unlearnable Examples: Making Personal Data Unexploitable
https://openreview.net/forum?id=iAmZUo0DxC0
https://openreview.net/forum?id=iAmZUo0DxC0
Hanxun Huang,Xingjun Ma,Sarah Monazam Erfani,James Bailey,Yisen Wang
ICLR 2021,Spotlight
The volume of "free" data on the internet has been key to the current success of deep learning. However, it also raises privacy concerns about the unauthorized exploitation of personal data for training commercial models. It is thus crucial to develop methods to prevent unauthorized data exploitation. This paper raises...
https://openreview.net/pdf/eb123b0f1c20d0c5d47b33fa7feca81748e02666.pdf
How Benign is Benign Overfitting?
https://openreview.net/forum?id=g-wu9TMPODo
https://openreview.net/forum?id=g-wu9TMPODo
Amartya Sanyal,Puneet K. Dokania,Varun Kanade,Philip Torr
ICLR 2021,Spotlight
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models. When trained with SGD, deep neural networks essentially achieve zero training error, even in the presence of label noise, while also exhibiting good generalization on natural test data, something refer...
https://openreview.net/pdf/b7f336ebe5df354fdcbb88c27b978a6581289ca8.pdf
Autoregressive Entity Retrieval
https://openreview.net/forum?id=5k8F6UU39V
https://openreview.net/forum?id=5k8F6UU39V
Nicola De Cao,Gautier Izacard,Sebastian Riedel,Fabio Petroni
ICLR 2021,Spotlight
Entities are at the center of how we represent and aggregate knowledge. For instance, encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain ques...
https://openreview.net/pdf/921ba67c80871fda61a4c0cf8f889b1c381a2a78.pdf
Neural Approximate Sufficient Statistics for Implicit Models
https://openreview.net/forum?id=SRDuJssQud
https://openreview.net/forum?id=SRDuJssQud
Yanzhi Chen,Dinghuai Zhang,Michael U. Gutmann,Aaron Courville,Zhanxing Zhu
ICLR 2021,Spotlight
We consider the fundamental problem of how to automatically construct summary statistics for implicit generative models where the evaluation of the likelihood function is intractable but sampling data from the model is possible. The idea is to frame the task of constructing sufficient statistics as learning mutual info...
https://openreview.net/pdf/dcb75e787b368e0cd6057205f63497c6fa17f9cb.pdf
Large Scale Image Completion via Co-Modulated Generative Adversarial Networks
https://openreview.net/forum?id=sSjqmfsk95O
https://openreview.net/forum?id=sSjqmfsk95O
Shengyu Zhao,Jonathan Cui,Yilun Sheng,Yue Dong,Xiao Liang,Eric I-Chao Chang,Yan Xu
ICLR 2021,Spotlight
Numerous task-specific variants of conditional generative adversarial networks have been developed for image completion. Yet, a serious limitation remains that all existing algorithms tend to fail when handling large-scale missing regions. To overcome this challenge, we propose a generic new approach that bridges the g...
https://openreview.net/pdf/9a3cfa3a1710ee23378772a3be3070ef32a29e17.pdf
DDPNOpt: Differential Dynamic Programming Neural Optimizer
https://openreview.net/forum?id=6s7ME_X5_Un
https://openreview.net/forum?id=6s7ME_X5_Un
Guan-Horng Liu,Tianrong Chen,Evangelos Theodorou
ICLR 2021,Spotlight
Interpretation of Deep Neural Networks (DNNs) training as an optimal control problem with nonlinear dynamical systems has received considerable attention recently, yet the algorithmic development remains relatively limited. In this work, we make an attempt along this line by reformulating the training procedure from th...
https://openreview.net/pdf/73e56442dda6b1e73bab62c8ca8c2dac7d319003.pdf
Geometry-Aware Gradient Algorithms for Neural Architecture Search
https://openreview.net/forum?id=MuSYkd1hxRP
https://openreview.net/forum?id=MuSYkd1hxRP
Liam Li,Mikhail Khodak,Nina Balcan,Ameet Talwalkar
ICLR 2021,Spotlight
Recent state-of-the-art methods for neural architecture search (NAS) exploit gradient-based optimization by relaxing the problem into continuous optimization over architectures and shared-weights, a noisy process that remains poorly understood. We argue for the study of single-level empirical risk minimization to under...
https://openreview.net/pdf/110552d41d9f40c3d50988fde09b3b5038c2bebd.pdf
Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with $1/n$ Parameters
https://openreview.net/forum?id=rcQdycl0zyk
https://openreview.net/forum?id=rcQdycl0zyk
Aston Zhang,Yi Tay,SHUAI Zhang,Alvin Chan,Anh Tuan Luu,Siu Hui,Jie Fu
ICLR 2021,Spotlight
Recent works have demonstrated reasonable success of representation learning in hypercomplex space. Specifically, “fully-connected layers with quaternions” (quaternions are 4D hypercomplex numbers), which replace real-valued matrix multiplications in fully-connected layers with Hamilton products of quaternions, both en...
https://openreview.net/pdf/98639a764ded8e038fa188dc104694519947e67c.pdf
Tent: Fully Test-Time Adaptation by Entropy Minimization
https://openreview.net/forum?id=uXl3bZLkr3c
https://openreview.net/forum?id=uXl3bZLkr3c
Dequan Wang,Evan Shelhamer,Shaoteng Liu,Bruno Olshausen,Trevor Darrell
ICLR 2021,Spotlight
A model must adapt itself to generalize to new and different data during testing. In this setting of fully test-time adaptation the model has only the test data and its own parameters. We propose to adapt by test entropy minimization (tent): we optimize the model for confidence as measured by the entropy of its predict...
https://openreview.net/pdf/4de0af9691a5dcc52de7de756676fded33d037ef.pdf
Neural Topic Model via Optimal Transport
https://openreview.net/forum?id=Oos98K9Lv-k
https://openreview.net/forum?id=Oos98K9Lv-k
He Zhao,Dinh Phung,Viet Huynh,Trung Le,Wray Buntine
ICLR 2021,Spotlight
Recently, Neural Topic Models (NTMs) inspired by variational autoencoders have attracted increasing research interest due to their promising results on text analysis. However, it is usually hard for existing NTMs to achieve good document representation and coherent/diverse topics at the same time. Moreover, they often...
https://openreview.net/pdf/7be7e3b207a273ccbe61f42c2358cc4fb090748f.pdf
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
https://openreview.net/forum?id=9xC2tWEwBD
https://openreview.net/forum?id=9xC2tWEwBD
Sanghyun Hong,Yigitcan Kaya,Ionuț-Vlad Modoranu,Tudor Dumitras
ICLR 2021,Spotlight
Recent increases in the computational demands of deep neural networks (DNNs), combined with the observation that most input samples require only simple models, have sparked interest in input-adaptive multi-exit architectures, such as MSDNets or Shallow-Deep Networks. These architectures enable faster inferences and cou...
https://openreview.net/pdf/c7b1e1ec7f160d09cea4ae461b498ee701297eb3.pdf
Are Neural Rankers still Outperformed by Gradient Boosted Decision Trees?
https://openreview.net/forum?id=Ut1vF_q_vC
https://openreview.net/forum?id=Ut1vF_q_vC
Zhen Qin,Le Yan,Honglei Zhuang,Yi Tay,Rama Kumar Pasumarthi,Xuanhui Wang,Michael Bendersky,Marc Najork
ICLR 2021,Spotlight
Despite the success of neural models on many major machine learning problems, their effectiveness on traditional Learning-to-Rank (LTR) problems is still not widely acknowledged. We first validate this concern by showing that most recent neural LTR models are, by a large margin, inferior to the best publicly available ...
https://openreview.net/pdf/ad3fca583fdc23233f81a4e1b068afdb9ccb877f.pdf
Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning
https://openreview.net/forum?id=qda7-sVg84
https://openreview.net/forum?id=qda7-sVg84
Rishabh Agarwal,Marlos C. Machado,Pablo Samuel Castro,Marc G Bellemare
ICLR 2021,Spotlight
Reinforcement learning methods trained on few environments rarely learn policies that generalize to unseen environments. To improve generalization, we incorporate the inherent sequential structure in reinforcement learning into the representation learning process. This approach is orthogonal to recent approaches, which...
https://openreview.net/pdf/18d8a7a260105accf754ef2ec331bcf48e817b1a.pdf
Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images
https://openreview.net/forum?id=RLRXCV6DbEJ
https://openreview.net/forum?id=RLRXCV6DbEJ
Rewon Child
ICLR 2021,Spotlight
We present a hierarchical VAE that, for the first time, generates samples quickly $\textit{and}$ outperforms the PixelCNN in log-likelihood on all natural image benchmarks. We begin by observing that, in theory, VAEs can actually represent autoregressive models, as well as faster, better models if they exist, when made...
https://openreview.net/pdf/e63933fc98cb52a55d96ffe8bb28d87410c6438e.pdf
Async-RED: A Provably Convergent Asynchronous Block Parallel Stochastic Method using Deep Denoising Priors
https://openreview.net/forum?id=9EsrXMzlFQY
https://openreview.net/forum?id=9EsrXMzlFQY
Yu Sun,Jiaming Liu,Yiran Sun,Brendt Wohlberg,Ulugbek Kamilov
ICLR 2021,Spotlight
Regularization by denoising (RED) is a recently developed framework for solving inverse problems by integrating advanced denoisers as image priors. Recent work has shown its state-of-the-art performance when combined with pre-trained deep denoisers. However, current RED algorithms are inadequate for parallel processing...
https://openreview.net/pdf/42abafb63caa1b6ddc6bda1b8e8337b1c2a9db91.pdf
A Good Image Generator Is What You Need for High-Resolution Video Synthesis
https://openreview.net/forum?id=6puCSjH3hwA
https://openreview.net/forum?id=6puCSjH3hwA
Yu Tian,Jian Ren,Menglei Chai,Kyle Olszewski,Xi Peng,Dimitris N. Metaxas,Sergey Tulyakov
ICLR 2021,Spotlight
Image and video synthesis are closely related areas aiming at generating content from noise. While rapid progress has been demonstrated in improving image-based models to handle large resolutions, high-quality renderings, and wide variations in image content, achieving comparable video generation results remains proble...
https://openreview.net/pdf/08bf1c319723defae9a4e04ca258811da08d2ed3.pdf
Undistillable: Making A Nasty Teacher That CANNOT teach students
https://openreview.net/forum?id=0zvfm-nZqQs
https://openreview.net/forum?id=0zvfm-nZqQs
Haoyu Ma,Tianlong Chen,Ting-Kuei Hu,Chenyu You,Xiaohui Xie,Zhangyang Wang
ICLR 2021,Spotlight
Knowledge Distillation (KD) is a widely used technique to transfer knowledge from pre-trained teacher models to (usually more lightweight) student models. However, in certain situations, this technique is more of a curse than a blessing. For instance, KD poses a potential risk of exposing intellectual properties (IPs)...
https://openreview.net/pdf/42f6ff4cc0e85c1f3a226c56205d2f78953cdc7c.pdf
Support-set bottlenecks for video-text representation learning
https://openreview.net/forum?id=EqoXe2zmhrh
https://openreview.net/forum?id=EqoXe2zmhrh
Mandela Patrick,Po-Yao Huang,Yuki Asano,Florian Metze,Alexander G Hauptmann,Joao F. Henriques,Andrea Vedaldi
ICLR 2021,Spotlight
The dominant paradigm for learning video-text representations – noise contrastive learning – increases the similarity of the representations of pairs of samples that are known to be related, such as text and video from the same sample, and pushes away the representations of all other pairs. We posit that this last beha...
https://openreview.net/pdf/a650da3e5bc4bc919f69887e2a9264dc61a58c94.pdf
Grounded Language Learning Fast and Slow
https://openreview.net/forum?id=wpSWuz_hyqA
https://openreview.net/forum?id=wpSWuz_hyqA
Felix Hill,Olivier Tieleman,Tamara von Glehn,Nathaniel Wong,Hamza Merzic,Stephen Clark
ICLR 2021,Spotlight
Recent work has shown that large text-based neural language models acquire a surprising propensity for one-shot learning. Here, we show that an agent situated in a simulated 3D world, and endowed with a novel dual-coding external memory, can exhibit similar one-shot word learning when trained with conventional RL algor...
https://openreview.net/pdf/e357c41d68e8a24bfdaba368a3b2baa867fa25e2.pdf
GAN "Steerability" without optimization
https://openreview.net/forum?id=zDy_nQCXiIj
https://openreview.net/forum?id=zDy_nQCXiIj
Nurit Spingarn,Ron Banner,Tomer Michaeli
ICLR 2021,Spotlight
Recent research has shown remarkable success in revealing "steering" directions in the latent spaces of pre-trained GANs. These directions correspond to semantically meaningful image transformations (e.g., shift, zoom, color manipulations), and have the same interpretable effect across all categories that the GAN can g...
https://openreview.net/pdf/78417c13154fe1e724c34ef2fcfef9f5a84707a0.pdf
Noise against noise: stochastic label noise helps combat inherent label noise
https://openreview.net/forum?id=80FMcTSZ6J0
https://openreview.net/forum?id=80FMcTSZ6J0
Pengfei Chen,Guangyong Chen,Junjie Ye,Jingwei Zhao,Pheng-Ann Heng
ICLR 2021,Spotlight
The noise in stochastic gradient descent (SGD) provides a crucial implicit regularization effect, previously studied in optimization by analyzing the dynamics of parameter updates. In this paper, we are interested in learning with noisy labels, where we have a collection of samples with potential mislabeling. We show t...
https://openreview.net/pdf/cb07afb92c9402f5b191a438058b6a911ae61ba1.pdf
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models
https://openreview.net/forum?id=5m3SEczOV8L
https://openreview.net/forum?id=5m3SEczOV8L
Zhisheng Xiao,Karsten Kreis,Jan Kautz,Arash Vahdat
ICLR 2021,Spotlight
Energy-based models (EBMs) have recently been successful in representing complex distributions of small images. However, sampling from them requires expensive Markov chain Monte Carlo (MCMC) iterations that mix slowly in high dimensional pixel space. Unlike EBMs, variational autoencoders (VAEs) generate samples quickly...
https://openreview.net/pdf/e45436941ac36b0258992469d1932909c6cbed5e.pdf
Graph-Based Continual Learning
https://openreview.net/forum?id=HHSEKOnPvaO
https://openreview.net/forum?id=HHSEKOnPvaO
Binh Tang,David S. Matteson
ICLR 2021,Spotlight
Despite significant advances, continual learning models still suffer from catastrophic forgetting when exposed to incrementally available data from non-stationary distributions. Rehearsal approaches alleviate the problem by maintaining and replaying a small episodic memory of previous samples, often implemented as an a...
https://openreview.net/pdf/39a91d5348ba8489817ac3ee4a93637e12b23c4b.pdf
Sparse Quantized Spectral Clustering
https://openreview.net/forum?id=pBqLS-7KYAF
https://openreview.net/forum?id=pBqLS-7KYAF
Zhenyu Liao,Romain Couillet,Michael W. Mahoney
ICLR 2021,Spotlight
Given a large data matrix, sparsifying, quantizing, and/or performing other entry-wise nonlinear operations can have numerous benefits, ranging from speeding up iterative algorithms for core numerical linear algebra problems to providing nonlinear filters to design state-of-the-art neural network models. Here, we explo...
https://openreview.net/pdf/26efa7df2ba6ab8a833b21a2c4c741e420ba7584.pdf
LambdaNetworks: Modeling long-range Interactions without Attention
https://openreview.net/forum?id=xTJEN-ggl1b
https://openreview.net/forum?id=xTJEN-ggl1b
Irwan Bello
ICLR 2021,Spotlight
We present lambda layers -- an alternative framework to self-attention -- for capturing long-range interactions between an input and structured contextual information (e.g. a pixel surrounded by other pixels). Lambda layers capture such interactions by transforming available contexts into linear functions, termed lambd...
https://openreview.net/pdf/811ba70b99e04f0d84a07c0c93d21c805e4466ff.pdf
Contrastive Divergence Learning is a Time Reversal Adversarial Game
https://openreview.net/forum?id=MLSvqIHRidA
https://openreview.net/forum?id=MLSvqIHRidA
Omer Yair,Tomer Michaeli
ICLR 2021,Spotlight
Contrastive divergence (CD) learning is a classical method for fitting unnormalized statistical models to data samples. Despite its widespread use, the convergence properties of this algorithm are still not well understood. The main source of difficulty is an unjustified approximation which has been used to derive the...
https://openreview.net/pdf/03d95d33dbce2d626edf50ba2d01876c374f7049.pdf
Quantifying Differences in Reward Functions
https://openreview.net/forum?id=LwEQnp6CYev
https://openreview.net/forum?id=LwEQnp6CYev
Adam Gleave,Michael D Dennis,Shane Legg,Stuart Russell,Jan Leike
ICLR 2021,Spotlight
For many tasks, the reward function is inaccessible to introspection or too complex to be specified procedurally, and must instead be learned from user data. Prior work has evaluated learned reward functions by evaluating policies optimized for the learned reward. However, this method cannot distinguish between the lea...
https://openreview.net/pdf/c9babbffccc1b8e389a2e8de1c7aac4cee00f966.pdf
Long-tail learning via logit adjustment
https://openreview.net/forum?id=37nvvqkCo5
https://openreview.net/forum?id=37nvvqkCo5
Aditya Krishna Menon,Sadeep Jayasumana,Ankit Singh Rawat,Himanshu Jain,Andreas Veit,Sanjiv Kumar
ICLR 2021,Spotlight
Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels have only a few associated samples. This poses a challenge for generalisation on such labels, and also makes naive learning biased towards dominant labels. In this paper, we present a statistical ...
https://openreview.net/pdf/7b399c4dfe989810af6c9881d1716bd2ae07b903.pdf
Locally Free Weight Sharing for Network Width Search
https://openreview.net/forum?id=S0UdquAnr9k
https://openreview.net/forum?id=S0UdquAnr9k
Xiu Su,Shan You,Tao Huang,Fei Wang,Chen Qian,Changshui Zhang,Chang Xu
ICLR 2021,Spotlight
Searching for network width is an effective way to slim deep neural networks with hardware budgets. With this aim, a one-shot supernet is usually leveraged as a performance evaluator to rank the performance w.r.t. different widths. Nevertheless, current methods mainly follow a manually fixed weight sharing pattern, which ...
https://openreview.net/pdf/72deb2e0a363d37cf758dfbccea8fada27ebf7a8.pdf
Mutual Information State Intrinsic Control
https://openreview.net/forum?id=OthEq8I5v1
https://openreview.net/forum?id=OthEq8I5v1
Rui Zhao,Yang Gao,Pieter Abbeel,Volker Tresp,Wei Xu
ICLR 2021,Spotlight
Reinforcement learning has been shown to be highly successful at many challenging tasks. However, success heavily relies on well-shaped rewards. Intrinsically motivated RL attempts to remove this constraint by defining an intrinsic reward function. Motivated by the self-consciousness concept in psychology, we make a na...
https://openreview.net/pdf/6dac086e50a2341e09a0b7d6c417b5cdfd9ed47a.pdf
Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows
https://openreview.net/forum?id=WiGQBFuVRv
https://openreview.net/forum?id=WiGQBFuVRv
Kashif Rasul,Abdul-Saboor Sheikh,Ingmar Schuster,Urs M Bergmann,Roland Vollgraf
ICLR 2021,Spotlight
Time series forecasting is often fundamental to scientific and engineering problems and enables decision making. With ever-increasing data set sizes, a trivial solution to scale up predictions is to assume independence between interacting time series. However, modeling statistical dependencies can improve accuracy and ...
https://openreview.net/pdf/d83950d8eebdd224b7c8b0eb72ca044ccead7fb6.pdf
Information Laundering for Model Privacy
https://openreview.net/forum?id=dyaIRud1zXg
https://openreview.net/forum?id=dyaIRud1zXg
Xinran Wang,Yu Xiang,Jun Gao,Jie Ding
ICLR 2021,Spotlight
In this work, we propose information laundering, a novel framework for enhancing model privacy. Unlike data privacy that concerns the protection of raw data information, model privacy aims to protect an already-learned model that is to be deployed for public use. The private model can be obtained from general learning ...
https://openreview.net/pdf/1ad035bf98810a860bec5ef38d3032842170c0e5.pdf
UPDeT: Universal Multi-agent RL via Policy Decoupling with Transformers
https://openreview.net/forum?id=v9c7hr9ADKx
https://openreview.net/forum?id=v9c7hr9ADKx
Siyi Hu,Fengda Zhu,Xiaojun Chang,Xiaodan Liang
ICLR 2021,Spotlight
Recent advances in multi-agent reinforcement learning have been largely limited in training one model from scratch for every new task. The limitation is due to the restricted model architecture related to fixed input and output dimensions. This hinders the experience accumulation and transfer of the learned agent over ...
https://openreview.net/pdf/1f24b0b3a09ad8484d3887053d6c4c6a87d96ba1.pdf
Correcting experience replay for multi-agent communication
https://openreview.net/forum?id=xvxPuCkCNPO
https://openreview.net/forum?id=xvxPuCkCNPO
Sanjeevan Ahilan,Peter Dayan
ICLR 2021,Spotlight
We consider the problem of learning to communicate using multi-agent reinforcement learning (MARL). A common approach is to learn off-policy, using data sampled from a replay buffer. However, messages received in the past may not accurately reflect the current communication policy of each agent, and this complicates le...
https://openreview.net/pdf/85eff27bc850ea7f5ee060f1c1d0156c4703f81b.pdf
Improving Adversarial Robustness via Channel-wise Activation Suppressing
https://openreview.net/forum?id=zQTezqCCtNx
https://openreview.net/forum?id=zQTezqCCtNx
Yang Bai,Yuyuan Zeng,Yong Jiang,Shu-Tao Xia,Xingjun Ma,Yisen Wang
ICLR 2021,Spotlight
The study of adversarial examples and their activations has attracted significant attention for secure and robust learning with deep neural networks (DNNs). Different from existing works, in this paper, we highlight two new characteristics of adversarial examples from the channel-wise activation perspective: 1) the ...
https://openreview.net/pdf/199e4955d4d5c6f552177bff197e00df7b1a3432.pdf
Long-tailed Recognition by Routing Diverse Distribution-Aware Experts
https://openreview.net/forum?id=D9I3drBz4UC
https://openreview.net/forum?id=D9I3drBz4UC
Xudong Wang,Long Lian,Zhongqi Miao,Ziwei Liu,Stella Yu
ICLR 2021,Spotlight
Natural data are often long-tail distributed over semantic classes. Existing recognition methods tackle this imbalanced classification by placing more emphasis on the tail data, through class re-balancing/re-weighting or ensembling over different data groups, resulting in increased tail accuracies but reduced head acc...
https://openreview.net/pdf/a53160f4df3e5d7b2d13f02d20579f6dd0460010.pdf
Generalization in data-driven models of primary visual cortex
https://openreview.net/forum?id=Tp7kI90Htd
https://openreview.net/forum?id=Tp7kI90Htd
Konstantin-Klemens Lurz,Mohammad Bashiri,Konstantin Willeke,Akshay Jagadish,Eric Wang,Edgar Y. Walker,Santiago A Cadena,Taliah Muhammad,Erick Cobos,Andreas S. Tolias,Alexander S Ecker,Fabian H. Sinz
ICLR 2021,Spotlight
Deep neural networks (DNNs) have set new standards at predicting responses of neural populations to visual input. Most such DNNs consist of a convolutional network (core) shared across all neurons, which learns a representation of neural computation in visual cortex, and a neuron-specific readout that linearly combines t...
https://openreview.net/pdf/a10fba1a4a77e56503923a67cf4e95e82d6f9b59.pdf
Sequential Density Ratio Estimation for Simultaneous Optimization of Speed and Accuracy
https://openreview.net/forum?id=Rhsu5qD36cL
https://openreview.net/forum?id=Rhsu5qD36cL
Akinori F Ebihara,Taiki Miyagawa,Kazuyuki Sakurai,Hitoshi Imaoka
ICLR 2021,Spotlight
Classifying sequential data as early and as accurately as possible is a challenging yet critical problem, especially when a sampling cost is high. One algorithm that achieves this goal is the sequential probability ratio test (SPRT), which is known as Bayes-optimal: it can keep the expected number of data samples as sm...
https://openreview.net/pdf/61baa81a79a2975a98aad96ab59d3ca65685492b.pdf
Uncertainty Sets for Image Classifiers using Conformal Prediction
https://openreview.net/forum?id=eNdiU_DbM9
https://openreview.net/forum?id=eNdiU_DbM9
Anastasios Nikolas Angelopoulos,Stephen Bates,Michael Jordan,Jitendra Malik
ICLR 2021,Spotlight
Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings. Existing uncertainty quantification techniques, such as Platt scaling, attempt to calibrate the network’s probability estimates,...
https://openreview.net/pdf/54ecc59706032f693269ac3a32a22051e5b97bbd.pdf
Graph Convolution with Low-rank Learnable Local Filters
https://openreview.net/forum?id=9OHFhefeB86
https://openreview.net/forum?id=9OHFhefeB86
Xiuyuan Cheng,Zichen Miao,Qiang Qiu
ICLR 2021,Spotlight
Geometric variations like rotation, scaling, and viewpoint changes pose a significant challenge to visual understanding. One common solution is to directly model certain intrinsic structures, e.g., using landmarks. However, it then becomes non-trivial to build effective deep models, especially when the underlying non-E...
https://openreview.net/pdf/8bd3676b06c9fadecea1934914c2d52aedf3b689.pdf
Mind the Pad -- CNNs Can Develop Blind Spots
https://openreview.net/forum?id=m1CD7tPubNy
https://openreview.net/forum?id=m1CD7tPubNy
Bilal Alsallakh,Narine Kokhlikyan,Vivek Miglani,Jun Yuan,Orion Reblitz-Richardson
ICLR 2021,Spotlight
We show how feature maps in convolutional networks are susceptible to spatial bias. Due to a combination of architectural choices, the activation at certain locations is systematically elevated or weakened. The major source of this bias is the padding mechanism. Depending on several aspects of convolution arithmetic, t...
https://openreview.net/pdf/70ef163f0737eab414d51c5c352b8292272c77d4.pdf
Stabilized Medical Image Attacks
https://openreview.net/forum?id=QfTXQiGYudJ
https://openreview.net/forum?id=QfTXQiGYudJ
Gege Qi,Lijun GONG,Yibing Song,Kai Ma,Yefeng Zheng
ICLR 2021,Spotlight
Convolutional Neural Networks (CNNs) have advanced existing medical systems for automatic disease diagnosis. However, a threat to these systems arises: adversarial attacks make CNNs vulnerable, and inaccurate diagnosis results negatively affect human healthcare. There is a need to investigate potential adver...
https://openreview.net/pdf/537abf2d751c6e93209bde7c1e550fadad61af0f.pdf