Dataset schema (column names with observed value-length ranges from the viewer):
- title: string, 19–143 chars
- url: string, 41–43 chars
- detail_url: string, 41–43 chars
- authors: string, 9–347 chars
- tags: string, 3 classes
- abstract: string, 457–2.38k chars
- pdf: string, 71 chars
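Each record in the dump below spans exactly seven lines, in the column order listed above. A minimal parsing sketch, assuming that fixed layout (the `parse_records` helper and `FIELDS` list are illustrative names, not part of the dataset):

```python
# Parse the flat 7-line-per-record dump into dicts, one per paper.
# Assumes the column order from the schema header; a trailing partial
# record (as in a truncated dump) is silently dropped.
FIELDS = ["title", "url", "detail_url", "authors", "tags", "abstract", "pdf"]

def parse_records(lines):
    n = len(FIELDS)
    # Iterate in chunks of 7, ignoring any incomplete final chunk.
    return [
        dict(zip(FIELDS, lines[i:i + n]))
        for i in range(0, len(lines) - len(lines) % n, n)
    ]

sample = [
    "Temporally-Extended ε-Greedy Exploration",
    "https://openreview.net/forum?id=ONBPHFZ7zG4",
    "https://openreview.net/forum?id=ONBPHFZ7zG4",
    "Will Dabney,Georg Ostrovski,Andre Barreto",
    "ICLR 2021,Poster",
    "Recent work on exploration in reinforcement learning (RL) ...",
    "https://openreview.net/pdf/be288b1cdd527108548adea1d4d8319ce8a8eae8.pdf",
]
records = parse_records(sample)
```

Authors and tags are comma-separated within their field, so `records[0]["authors"].split(",")` recovers the individual names.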
Temporally-Extended ε-Greedy Exploration
https://openreview.net/forum?id=ONBPHFZ7zG4
https://openreview.net/forum?id=ONBPHFZ7zG4
Will Dabney,Georg Ostrovski,Andre Barreto
ICLR 2021,Poster
Recent work on exploration in reinforcement learning (RL) has led to a series of increasingly complex solutions to the problem. This increase in complexity often comes at the expense of generality. Recent empirical studies suggest that, when applied to a broader set of domains, some sophisticated exploration methods ar...
https://openreview.net/pdf/be288b1cdd527108548adea1d4d8319ce8a8eae8.pdf
Learning Associative Inference Using Fast Weight Memory
https://openreview.net/forum?id=TuK6agbdt27
https://openreview.net/forum?id=TuK6agbdt27
Imanol Schlag,Tsendsuren Munkhdalai,Jürgen Schmidhuber
ICLR 2021,Poster
Humans can quickly associate stimuli to solve problems in novel contexts. Our novel neural network model learns state representations of facts that can be composed to perform such associative inference. To this end, we augment the LSTM model with an associative memory, dubbed \textit{Fast Weight Memory} (FWM). Through ...
https://openreview.net/pdf/96ccad214bb6dc5b347aa32436f14fdd5391d21b.pdf
Multiscale Score Matching for Out-of-Distribution Detection
https://openreview.net/forum?id=xoHdgbQJohv
https://openreview.net/forum?id=xoHdgbQJohv
Ahsan Mahmood,Junier Oliva,Martin Andreas Styner
ICLR 2021,Poster
We present a new methodology for detecting out-of-distribution (OOD) images by utilizing norms of the score estimates at multiple noise scales. A score is defined to be the gradient of the log density with respect to the input data. Our methodology is completely unsupervised and follows a straightforward training sche...
https://openreview.net/pdf/639279c160eb93e79cf2ee33db8f9dc5b040f345.pdf
Learning to Sample with Local and Global Contexts in Experience Replay Buffer
https://openreview.net/forum?id=gJYlaqL8i8
https://openreview.net/forum?id=gJYlaqL8i8
Youngmin Oh,Kimin Lee,Jinwoo Shin,Eunho Yang,Sung Ju Hwang
ICLR 2021,Poster
Experience replay, which enables the agents to remember and reuse experience from the past, has played a significant role in the success of off-policy reinforcement learning (RL). To utilize the experience replay efficiently, the existing sampling methods allow selecting out more meaningful experiences by imposing prio...
https://openreview.net/pdf/92ef8e632b99778a17bd8e0187962812d2cd42c5.pdf
Parameter-Based Value Functions
https://openreview.net/forum?id=tV6oBfuyLTQ
https://openreview.net/forum?id=tV6oBfuyLTQ
Francesco Faccio,Louis Kirsch,Jürgen Schmidhuber
ICLR 2021,Poster
Traditional off-policy actor-critic Reinforcement Learning (RL) algorithms learn value functions of a single target policy. However, when value functions are updated to track the learned policy, they forget potentially useful information about old policies. We introduce a class of value functions called Parameter-Based...
https://openreview.net/pdf/c79ef13431e3c5decbd9f2ba989bc20a847b37be.pdf
New Bounds For Distributed Mean Estimation and Variance Reduction
https://openreview.net/forum?id=t86MwoUCCNe
https://openreview.net/forum?id=t86MwoUCCNe
Peter Davies,Vijaykrishna Gurunanthan,Niusha Moshrefi,Saleh Ashkboos,Dan Alistarh
ICLR 2021,Poster
We consider the problem of distributed mean estimation (DME), in which $n$ machines are each given a local $d$-dimensional vector $\mathbf{x}_v \in \mathbb{R}^d$, and must cooperate to estimate the mean of their inputs $\boldsymbol{\mu} = \frac{1}{n}\sum_{v = 1}^n \mathbf{x}_v$, while minimizing total communication cost. DME is ...
https://openreview.net/pdf/02618eb8b76a664b33780ceb32a0450c69a54d1c.pdf
Learning to Set Waypoints for Audio-Visual Navigation
https://openreview.net/forum?id=cR91FAodFMe
https://openreview.net/forum?id=cR91FAodFMe
Changan Chen,Sagnik Majumder,Ziad Al-Halah,Ruohan Gao,Santhosh Kumar Ramakrishnan,Kristen Grauman
ICLR 2021,Poster
In audio-visual navigation, an agent intelligently travels through a complex, unmapped 3D environment using both sights and sounds to find a sound source (e.g., a phone ringing in another room). Existing models learn to act at a fixed granularity of agent motion and rely on simple recurrent aggregations of the audio ob...
https://openreview.net/pdf/fa0a991905ae30b2fa74ca7b101b3acabd532c13.pdf
Disambiguating Symbolic Expressions in Informal Documents
https://openreview.net/forum?id=K5j7D81ABvt
https://openreview.net/forum?id=K5j7D81ABvt
Dennis Müller,Cezary Kaliszyk
ICLR 2021,Poster
We propose the task of \emph{disambiguating} symbolic expressions in informal STEM documents in the form of \LaTeX files -- that is, determining their precise semantics and abstract syntax tree -- as a neural machine translation task. We discuss the distinct challenges involved and present a dataset with roughly 33,000...
https://openreview.net/pdf/006f5f9df1ed650389c8a89fd0087c3a9cb81605.pdf
Colorization Transformer
https://openreview.net/forum?id=5NA1PinlGFu
https://openreview.net/forum?id=5NA1PinlGFu
Manoj Kumar,Dirk Weissenborn,Nal Kalchbrenner
ICLR 2021,Poster
We present the Colorization Transformer, a novel approach for diverse high fidelity image colorization based on self-attention. Given a grayscale image, the colorization proceeds in three steps. We first use a conditional autoregressive transformer to produce a low resolution coarse coloring of the grayscale image. Our...
https://openreview.net/pdf/f2f5d9057587995de8d113d1ba35dd7d8b98f48e.pdf
Theoretical bounds on estimation error for meta-learning
https://openreview.net/forum?id=SZ3wtsXfzQR
https://openreview.net/forum?id=SZ3wtsXfzQR
James Lucas,Mengye Ren,Irene Raissa KAMENI KAMENI,Toniann Pitassi,Richard Zemel
ICLR 2021,Poster
Machine learning models have traditionally been developed under the assumption that the training and test distributions match exactly. However, recent success in few-shot learning and related problems are encouraging signs that these models can be adapted to more realistic settings where train and test distributions di...
https://openreview.net/pdf/f6e0a3923ea91b65f312ccf597276c427be18097.pdf
Variational Information Bottleneck for Effective Low-Resource Fine-Tuning
https://openreview.net/forum?id=kvhzKz-_DMF
https://openreview.net/forum?id=kvhzKz-_DMF
Rabeeh Karimi mahabadi,Yonatan Belinkov,James Henderson
ICLR 2021,Poster
While large-scale pretrained language models have obtained impressive results when fine-tuned on a wide variety of tasks, they still often suffer from overfitting in low-resource scenarios. Since such models are general-purpose feature extractors, many of these features are inevitably irrelevant for a given target task...
https://openreview.net/pdf/62f3ae7c05e30f870e3a6435b704afbd5c5290ba.pdf
TropEx: An Algorithm for Extracting Linear Terms in Deep Neural Networks
https://openreview.net/forum?id=IqtonxWI0V3
https://openreview.net/forum?id=IqtonxWI0V3
Martin Trimmel,Henning Petzka,Cristian Sminchisescu
ICLR 2021,Poster
Deep neural networks with rectified linear (ReLU) activations are piecewise linear functions, where hyperplanes partition the input space into an astronomically high number of linear regions. Previous work focused on counting linear regions to measure the network's expressive power and on analyzing geometric properties...
https://openreview.net/pdf/6f5f94ea1f9082d97859b79f1358b2a25baa8fcd.pdf
Seq2Tens: An Efficient Representation of Sequences by Low-Rank Tensor Projections
https://openreview.net/forum?id=dx4b7lm8jMM
https://openreview.net/forum?id=dx4b7lm8jMM
Csaba Toth,Patric Bonnier,Harald Oberhauser
ICLR 2021,Poster
Sequential data such as time series, video, or text can be challenging to analyse as the ordered structure gives rise to complex dependencies. At the heart of this is non-commutativity, in the sense that reordering the elements of a sequence can completely change its meaning. We use a classical mathematical object -- t...
https://openreview.net/pdf/bc313164adf3017b7e94a07aecbd830b43e5c49a.pdf
Representation learning for improved interpretability and classification accuracy of clinical factors from EEG
https://openreview.net/forum?id=TVjLza1t4hI
https://openreview.net/forum?id=TVjLza1t4hI
Garrett Honke,Irina Higgins,Nina Thigpen,Vladimir Miskovic,Katie Link,Sunny Duan,Pramod Gupta,Julia Klawohn,Greg Hajcak
ICLR 2021,Poster
Despite extensive standardization, diagnostic interviews for mental health disorders encompass substantial subjective judgment. Previous studies have demonstrated that EEG-based neural measures can function as reliable objective correlates of depression, or even predictors of depression and its course. However, their c...
https://openreview.net/pdf/2932815e2ab354b7f926e4803d4ba6847916d44d.pdf
Language-Agnostic Representation Learning of Source Code from Structure and Context
https://openreview.net/forum?id=Xh5eMZVONGF
https://openreview.net/forum?id=Xh5eMZVONGF
Daniel Zügner,Tobias Kirschstein,Michele Catasta,Jure Leskovec,Stephan Günnemann
ICLR 2021,Poster
Source code (Context) and its parsed abstract syntax tree (AST; Structure) are two complementary representations of the same computer program. Traditionally, designers of machine learning models have relied predominantly either on Structure or Context. We propose a new model, which jointly learns on Context and Structu...
https://openreview.net/pdf/69c9ae01f0f1b9a15ea1b21d87cdf95dff32a6f5.pdf
Generalized Multimodal ELBO
https://openreview.net/forum?id=5Y21V0RDBV
https://openreview.net/forum?id=5Y21V0RDBV
Thomas M. Sutter,Imant Daunhawer,Julia E Vogt
ICLR 2021,Poster
Multiple data types naturally co-occur when describing real-world phenomena and learning from them is a long-standing goal in machine learning research. However, existing self-supervised generative models approximating an ELBO are not able to fulfill all desired requirements of multimodal models: their posterior approx...
https://openreview.net/pdf/2cfd5fea6a35d4586487da796743d75dacc7118c.pdf
Model-based micro-data reinforcement learning: what are the crucial model properties and which model to choose?
https://openreview.net/forum?id=p5uylG94S68
https://openreview.net/forum?id=p5uylG94S68
Balázs Kégl,Gabriel Hurtado,Albert Thomas
ICLR 2021,Poster
We contribute to micro-data model-based reinforcement learning (MBRL) by rigorously comparing popular generative models using a fixed (random shooting) control agent. We find that on an environment that requires multimodal posterior predictives, mixture density nets outperform all other models by a large margin. When m...
https://openreview.net/pdf/04313ea0678f51bf6e97525219f5b92003b041b9.pdf
Set Prediction without Imposing Structure as Conditional Density Estimation
https://openreview.net/forum?id=04ArenGOz3
https://openreview.net/forum?id=04ArenGOz3
David W Zhang,Gertjan J. Burghouts,Cees G. M. Snoek
ICLR 2021,Poster
Set prediction is about learning to predict a collection of unordered variables with unknown interrelations. Training such models with set losses imposes the structure of a metric space over sets. We focus on stochastic and underdefined cases, where an incorrectly chosen loss function leads to implausible predictions. ...
https://openreview.net/pdf/04c489674227569994e57717321c907597b1355c.pdf
Learning Value Functions in Deep Policy Gradients using Residual Variance
https://openreview.net/forum?id=NX1He-aFO_F
https://openreview.net/forum?id=NX1He-aFO_F
Yannis Flet-Berliac,Reda Ouhamma,Odalric-Ambrym Maillard,Philippe Preux
ICLR 2021,Poster
Policy gradient algorithms have proven to be successful in diverse decision making and control tasks. However, these methods suffer from high sample complexity and instability issues. In this paper, we address these challenges by providing a different approach for training the critic in the actor-critic framework. Our ...
https://openreview.net/pdf/d19c38b4919b1481e2aa3972a928c866f4502b44.pdf
IDF++: Analyzing and Improving Integer Discrete Flows for Lossless Compression
https://openreview.net/forum?id=MBOyiNnYthd
https://openreview.net/forum?id=MBOyiNnYthd
Rianne van den Berg,Alexey A. Gritsenko,Mostafa Dehghani,Casper Kaae Sønderby,Tim Salimans
ICLR 2021,Poster
In this paper we analyse and improve integer discrete flows for lossless compression. Integer discrete flows are a recently proposed class of models that learn invertible transformations for integer-valued random variables. Their discrete nature makes them particularly suitable for lossless compression with entropy cod...
https://openreview.net/pdf/049fd6f43de5700220bd49a24b2ae38e78c3782c.pdf
Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders
https://openreview.net/forum?id=agHLCOBM5jP
https://openreview.net/forum?id=agHLCOBM5jP
Mangal Prakash,Alexander Krull,Florian Jug
ICLR 2021,Poster
Deep Learning based methods have emerged as the indisputable leaders for virtually all image restoration tasks. Especially in the domain of microscopy images, various content-aware image restoration (CARE) approaches are now used to improve the interpretability of acquired data. Naturally, there are limitations to what...
https://openreview.net/pdf/2afe972808ebb66f3926468902039c366b274c59.pdf
Is Attention Better Than Matrix Decomposition?
https://openreview.net/forum?id=1FvkSpWosOl
https://openreview.net/forum?id=1FvkSpWosOl
Zhengyang Geng,Meng-Hao Guo,Hongxu Chen,Xia Li,Ke Wei,Zhouchen Lin
ICLR 2021,Poster
As an essential ingredient of modern deep learning, the attention mechanism, especially self-attention, plays a vital role in global correlation discovery. However, is hand-crafted attention irreplaceable when modeling the global context? Our intriguing finding is that self-attention is not better than the matrix decom...
https://openreview.net/pdf/1cb5acc6fe475a215dd1192beec6158b8a4da5dc.pdf
Improving Transformation Invariance in Contrastive Representation Learning
https://openreview.net/forum?id=NomEDgIEBwE
https://openreview.net/forum?id=NomEDgIEBwE
Adam Foster,Rattana Pukdee,Tom Rainforth
ICLR 2021,Poster
We propose methods to strengthen the invariance properties of representations obtained by contrastive learning. While existing approaches implicitly induce a degree of invariance as representations are learned, we look to more directly enforce invariance in the encoding process. To this end, we first introduce a traini...
https://openreview.net/pdf/401efbc12f590198cf9a4094f6a0ce66e21be5e9.pdf
On the Origin of Implicit Regularization in Stochastic Gradient Descent
https://openreview.net/forum?id=rq_Qr0c1Hyo
https://openreview.net/forum?id=rq_Qr0c1Hyo
Samuel L Smith,Benoit Dherin,David Barrett,Soham De
ICLR 2021,Poster
For infinitesimal learning rates, stochastic gradient descent (SGD) follows the path of gradient flow on the full batch loss function. However, moderately large learning rates can achieve higher test accuracies, and this generalization benefit is not explained by convergence bounds, since the learning rate which maximiz...
https://openreview.net/pdf/e5f4bcf96d3ed905ac91e4ea6e3993321ecda830.pdf
Transient Non-stationarity and Generalisation in Deep Reinforcement Learning
https://openreview.net/forum?id=Qun8fv4qSby
https://openreview.net/forum?id=Qun8fv4qSby
Maximilian Igl,Gregory Farquhar,Jelena Luketina,Wendelin Boehmer,Shimon Whiteson
ICLR 2021,Poster
Non-stationarity can arise in Reinforcement Learning (RL) even in stationary environments. For example, most RL algorithms collect new data throughout training, using a non-stationary behaviour policy. Due to the transience of this non-stationarity, it is often not explicitly addressed in deep RL and a single neural ne...
https://openreview.net/pdf/ea444807010b334cd2b90645f1cfa31bd38f3ef7.pdf
Lossless Compression of Structured Convolutional Models via Lifting
https://openreview.net/forum?id=oxnp2q-PGL4
https://openreview.net/forum?id=oxnp2q-PGL4
Gustav Sourek,Filip Zelezny,Ondrej Kuzelka
ICLR 2021,Poster
Lifting is an efficient technique to scale up graphical models generalized to relational domains by exploiting the underlying symmetries. Concurrently, neural models are continuously expanding from grid-like tensor data into structured representations, such as various attributed graphs and relational databases. To addr...
https://openreview.net/pdf/6ca46d0a2419236e20aac30bbf133f4c81154953.pdf
Analyzing the Expressive Power of Graph Neural Networks in a Spectral Perspective
https://openreview.net/forum?id=-qh0M9XWxnv
https://openreview.net/forum?id=-qh0M9XWxnv
Muhammet Balcilar,Guillaume Renton,Pierre Héroux,Benoit Gaüzère,Sébastien Adam,Paul Honeine
ICLR 2021,Poster
In the recent literature of Graph Neural Networks (GNN), the expressive power of models has been studied through their capability to distinguish if two given graphs are isomorphic or not. Since the graph isomorphism problem is NP-intermediate, and Weisfeiler-Lehman (WL) test can give sufficient but not enough evidence ...
https://openreview.net/pdf/859c9ee357c81e0b9a1cb989b1e23b8b42d741f1.pdf
A unifying view on implicit bias in training linear neural networks
https://openreview.net/forum?id=ZsZM-4iMQkH
https://openreview.net/forum?id=ZsZM-4iMQkH
Chulhee Yun,Shankar Krishnan,Hossein Mobahi
ICLR 2021,Poster
We study the implicit bias of gradient flow (i.e., gradient descent with infinitesimal step size) on linear neural network training. We propose a tensor formulation of neural networks that includes fully-connected, diagonal, and convolutional networks as special cases, and investigate the linear version of the formulat...
https://openreview.net/pdf/7592938b320208bd563349d1ea3385dd9e80cbe6.pdf
Balancing Constraints and Rewards with Meta-Gradient D4PG
https://openreview.net/forum?id=TQt98Ya7UMP
https://openreview.net/forum?id=TQt98Ya7UMP
Dan A. Calian,Daniel J Mankowitz,Tom Zahavy,Zhongwen Xu,Junhyuk Oh,Nir Levine,Timothy Mann
ICLR 2021,Poster
Deploying Reinforcement Learning (RL) agents to solve real-world applications often requires satisfying complex system constraints. Often the constraint thresholds are incorrectly set due to the complex nature of a system or the inability to verify the thresholds offline (e.g., no simulator or reasonable offline evaluat...
https://openreview.net/pdf/c2bc1eac3b05c897508a2b6cf4f096a98dbcc8e2.pdf
Robust Curriculum Learning: from clean label detection to noisy label self-correction
https://openreview.net/forum?id=lmTWnm3coJJ
https://openreview.net/forum?id=lmTWnm3coJJ
Tianyi Zhou,Shengjie Wang,Jeff Bilmes
ICLR 2021,Poster
Neural network training can easily overfit noisy labels resulting in poor generalization performance. Existing methods address this problem by (1) filtering out the noisy data and only using the clean data for training or (2) relabeling the noisy data by the model during training or by another model trained only on a c...
https://openreview.net/pdf/06ca7281bb3ba57d591dedb4b5127373e0c1d429.pdf
Clairvoyance: A Pipeline Toolkit for Medical Time Series
https://openreview.net/forum?id=xnC8YwKUE3k
https://openreview.net/forum?id=xnC8YwKUE3k
Daniel Jarrett,Jinsung Yoon,Ioana Bica,Zhaozhi Qian,Ari Ercole,Mihaela van der Schaar
ICLR 2021,Poster
Time-series learning is the bread and butter of data-driven *clinical decision support*, and the recent explosion in ML research has demonstrated great potential in various healthcare settings. At the same time, medical time-series problems in the wild are challenging due to their highly *composite* nature: They entail...
https://openreview.net/pdf/c4f52313ee7aa37bb754ae2f6524cc0aeb47ce43.pdf
Plan-Based Relaxed Reward Shaping for Goal-Directed Tasks
https://openreview.net/forum?id=w2Z2OwVNeK
https://openreview.net/forum?id=w2Z2OwVNeK
Ingmar Schubert,Ozgur S Oguz,Marc Toussaint
ICLR 2021,Poster
In high-dimensional state spaces, the usefulness of Reinforcement Learning (RL) is limited by the problem of exploration. This issue has been addressed using potential-based reward shaping (PB-RS) previously. In the present work, we introduce Final-Volume-Preserving Reward Shaping (FV-RS). FV-RS relaxes the strict opti...
https://openreview.net/pdf/6ab6b9e3a9fe5a364f986aaff177de866990899b.pdf
Improving VAEs' Robustness to Adversarial Attack
https://openreview.net/forum?id=-Hs_otp2RB
https://openreview.net/forum?id=-Hs_otp2RB
Matthew JF Willetts,Alexander Camuto,Tom Rainforth,S Roberts,Christopher C Holmes
ICLR 2021,Poster
Variational autoencoders (VAEs) have recently been shown to be vulnerable to adversarial attacks, wherein they are fooled into reconstructing a chosen target image. However, how to defend against such attacks remains an open problem. We make significant advances in addressing this issue by introducing methods for produ...
https://openreview.net/pdf/99d30d8f3d5b1463f05554f92526d389e651b1db.pdf
Differentiable Segmentation of Sequences
https://openreview.net/forum?id=4T489T4yav
https://openreview.net/forum?id=4T489T4yav
Erik Scharwächter,Jonathan Lennartz,Emmanuel Müller
ICLR 2021,Poster
Segmented models are widely used to describe non-stationary sequential data with discrete change points. Their estimation usually requires solving a mixed discrete-continuous optimization problem, where the segmentation is the discrete part and all other model parameters are continuous. A number of estimation algorithm...
https://openreview.net/pdf/211648c2242f789fd76f662801f326094db7433d.pdf
GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing
https://openreview.net/forum?id=kyaIeYj4zZ
https://openreview.net/forum?id=kyaIeYj4zZ
Tao Yu,Chien-Sheng Wu,Xi Victoria Lin,Bailin Wang,Yi Chern Tan,Xinyi Yang,Dragomir Radev,Richard Socher,Caiming Xiong
ICLR 2021,Poster
We present GraPPa, an effective pre-training approach for table semantic parsing that learns a compositional inductive bias in the joint representations of textual and tabular data. We construct synthetic question-SQL pairs over high-quality tables via a synchronous context-free grammar (SCFG). We pre-train our model o...
https://openreview.net/pdf/41a8d65642880c0853bfa9f37d81b4fc15cba53e.pdf
Sliced Kernelized Stein Discrepancy
https://openreview.net/forum?id=t0TaKv0Gx6Z
https://openreview.net/forum?id=t0TaKv0Gx6Z
Wenbo Gong,Yingzhen Li,José Miguel Hernández-Lobato
ICLR 2021,Poster
Kernelized Stein discrepancy (KSD), though extensively used in goodness-of-fit tests and model learning, suffers from the curse of dimensionality. We address this issue by proposing the sliced Stein discrepancy and its scalable and kernelized variants, which employ kernel-based test functions defined on the opti...
https://openreview.net/pdf/39d9fa2661eb33fc05f7d9de6fddb979108767c4.pdf
Variational Intrinsic Control Revisited
https://openreview.net/forum?id=P0p33rgyoE
https://openreview.net/forum?id=P0p33rgyoE
Taehwan Kwon
ICLR 2021,Poster
In this paper, we revisit variational intrinsic control (VIC), an unsupervised reinforcement learning method for finding the largest set of intrinsic options available to an agent. In the original work by Gregor et al. (2016), two VIC algorithms were proposed: one that represents the options explicitly, and the other t...
https://openreview.net/pdf/8841dcedad713be63398c9001418c334c7479b4e.pdf
HyperDynamics: Meta-Learning Object and Agent Dynamics with Hypernetworks
https://openreview.net/forum?id=pHXfe1cOmA
https://openreview.net/forum?id=pHXfe1cOmA
Zhou Xian,Shamit Lal,Hsiao-Yu Tung,Emmanouil Antonios Platanios,Katerina Fragkiadaki
ICLR 2021,Poster
We propose HyperDynamics, a dynamics meta-learning framework that conditions on an agent’s interactions with the environment and optionally its visual observations, and generates the parameters of neural dynamics models based on inferred properties of the dynamical system. Physical and visual properties of the environm...
https://openreview.net/pdf/08774c9cdcc696092021d165f4b6e807b414198c.pdf
Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning
https://openreview.net/forum?id=AHOs7Sm5H7R
https://openreview.net/forum?id=AHOs7Sm5H7R
Zhiyuan Li,Yuping Luo,Kaifeng Lyu
ICLR 2021,Poster
Matrix factorization is a simple and natural test-bed to investigate the implicit regularization of gradient descent. Gunasekar et al. (2017) conjectured that gradient flow with infinitesimal initialization converges to the solution that minimizes the nuclear norm, but a series of recent papers argued that the language...
https://openreview.net/pdf/e29b53584bc9017cb15b9394735cd51b56c32446.pdf
Private Post-GAN Boosting
https://openreview.net/forum?id=6isfR3JCbi
https://openreview.net/forum?id=6isfR3JCbi
Marcel Neunhoeffer,Steven Wu,Cynthia Dwork
ICLR 2021,Poster
Differentially private GANs have proven to be a promising approach for generating realistic synthetic data without compromising the privacy of individuals. Due to the privacy-protective noise introduced in the training, the convergence of GANs becomes even more elusive, which often leads to poor utility in the output ...
https://openreview.net/pdf/9af34be61229e9ded84048009befadeb57d1957d.pdf
Characterizing signal propagation to close the performance gap in unnormalized ResNets
https://openreview.net/forum?id=IX3Nnir2omJ
https://openreview.net/forum?id=IX3Nnir2omJ
Andrew Brock,Soham De,Samuel L Smith
ICLR 2021,Poster
Batch Normalization is a key component in almost all state-of-the-art image classifiers, but it also introduces practical challenges: it breaks the independence between training examples within a batch, can incur compute and memory overhead, and often results in unexpected bugs. Building on recent theoretical analyses ...
https://openreview.net/pdf/796f0f646a7dc728f2d8d89bc6d55288c9457889.pdf
Prototypical Contrastive Learning of Unsupervised Representations
https://openreview.net/forum?id=KmykpuSrjcq
https://openreview.net/forum?id=KmykpuSrjcq
Junnan Li,Pan Zhou,Caiming Xiong,Steven Hoi
ICLR 2021,Poster
This paper presents Prototypical Contrastive Learning (PCL), an unsupervised representation learning method that bridges contrastive learning with clustering. PCL not only learns low-level features for the task of instance discrimination, but more importantly, it implicitly encodes semantic structures of the data into ...
https://openreview.net/pdf/601011be0933cda056049e8fd0b25a10bcfd4515.pdf
Hyperbolic Neural Networks++
https://openreview.net/forum?id=Ec85b0tUwbA
https://openreview.net/forum?id=Ec85b0tUwbA
Ryohei Shimizu,Yusuke Mukuta,Tatsuya Harada
ICLR 2021,Poster
Hyperbolic spaces, which have the capacity to embed tree structures without distortion owing to their exponential volume growth, have recently been applied to machine learning to better capture the hierarchical nature of data. In this study, we generalize the fundamental components of neural networks in a single hyperb...
https://openreview.net/pdf/83447b5937824f2d585bcbca44769d242615f9f5.pdf
Lipschitz Recurrent Neural Networks
https://openreview.net/forum?id=-N7PBXqOUJZ
https://openreview.net/forum?id=-N7PBXqOUJZ
N. Benjamin Erichson,Omri Azencot,Alejandro Queiruga,Liam Hodgkinson,Michael W. Mahoney
ICLR 2021,Poster
Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we propose a recurrent unit that describes the hidden state's evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity. This particular functional form facilitates stability analysis of the long-term behavio...
https://openreview.net/pdf/fab880544ab1da571de32581b8939abf93ce475f.pdf
A statistical theory of cold posteriors in deep neural networks
https://openreview.net/forum?id=Rd138pWXMvG
https://openreview.net/forum?id=Rd138pWXMvG
Laurence Aitchison
ICLR 2021,Poster
To get Bayesian neural networks to perform comparably to standard neural networks it is usually necessary to artificially reduce uncertainty using a tempered or cold posterior. This is extremely concerning: if the prior is accurate, Bayes inference/decision theory is optimal, and any artificial changes to the posterior...
https://openreview.net/pdf/ad6b61823bafd130bfd5c821fd1ceb7913a54d2d.pdf
Boost then Convolve: Gradient Boosting Meets Graph Neural Networks
https://openreview.net/forum?id=ebS5NUfoMKL
https://openreview.net/forum?id=ebS5NUfoMKL
Sergei Ivanov,Liudmila Prokhorenkova
ICLR 2021,Poster
Graph neural networks (GNNs) are powerful models that have been successful in various graph representation learning tasks. Meanwhile, gradient boosted decision trees (GBDT) often outperform other machine learning methods when faced with heterogeneous tabular data. But what approach should be used for graphs with tabular n...
https://openreview.net/pdf/e8b53ad374bcf1f4207b1153a22ea94fb05e3311.pdf
Genetic Soft Updates for Policy Evolution in Deep Reinforcement Learning
https://openreview.net/forum?id=TGFO0DbD_pk
https://openreview.net/forum?id=TGFO0DbD_pk
Enrico Marchesini,Davide Corsi,Alessandro Farinelli
ICLR 2021,Poster
The combination of Evolutionary Algorithms (EAs) and Deep Reinforcement Learning (DRL) has been recently proposed to merge the benefits of both solutions. Existing mixed approaches, however, have been successfully applied only to actor-critic methods and present significant overhead. We address these issues by introduc...
https://openreview.net/pdf/2a012533ff0b6880941f619b1e03b63abd1414c6.pdf
Spatially Structured Recurrent Modules
https://openreview.net/forum?id=5l9zj5G7vDY
https://openreview.net/forum?id=5l9zj5G7vDY
Nasim Rahaman,Anirudh Goyal,Muhammad Waleed Gondal,Manuel Wuthrich,Stefan Bauer,Yash Sharma,Yoshua Bengio,Bernhard Schölkopf
ICLR 2021,Poster
Capturing the structure of a data-generating process by means of appropriate inductive biases can help in learning models that generalise well and are robust to changes in the input distribution. While methods that harness spatial and temporal structures find broad application, recent work has demonstrated the potentia...
https://openreview.net/pdf/3590e3dd48376daa86d4fee6c6cb3c8b051d03b9.pdf
On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
https://openreview.net/forum?id=nzpLWnVAyah
https://openreview.net/forum?id=nzpLWnVAyah
Marius Mosbach,Maksym Andriushchenko,Dietrich Klakow
ICLR 2021,Poster
Fine-tuning pre-trained transformer-based language models such as BERT has become a common practice dominating leaderboards across various NLP benchmarks. Despite the strong empirical performance of fine-tuned models, fine-tuning is an unstable process: training the same model with multiple random seeds can result in a...
https://openreview.net/pdf/ecb1af8e8fc55b9e071db6ef6b56163a21f00a44.pdf
End-to-End Egospheric Spatial Memory
https://openreview.net/forum?id=rRFIni1CYmy
https://openreview.net/forum?id=rRFIni1CYmy
Daniel James Lenton,Stephen James,Ronald Clark,Andrew Davison
ICLR 2021,Poster
Spatial memory, or the ability to remember and recall specific locations and objects, is central to autonomous agents' ability to carry out tasks in real environments. However, most existing artificial memory modules are not very adept at storing spatial information. We propose a parameter-free module, Egospheric Spati...
https://openreview.net/pdf/5c9e921c94b83d510872e5e048479c56c66cad04.pdf
LEAF: A Learnable Frontend for Audio Classification
https://openreview.net/forum?id=jM76BCb6F9m
https://openreview.net/forum?id=jM76BCb6F9m
Neil Zeghidour,Olivier Teboul,Félix de Chaumont Quitry,Marco Tagliasacchi
ICLR 2021,Poster
Mel-filterbanks are fixed, engineered audio features which emulate human perception and have been used through the history of audio understanding up to today. However, their undeniable qualities are counterbalanced by the fundamental limitations of handmade representations. In this work we show that we can train a sing...
https://openreview.net/pdf/426d58043e09ff47db27ab72f40e8db575a46f7b.pdf
Simple Augmentation Goes a Long Way: ADRL for DNN Quantization
https://openreview.net/forum?id=Qr0aRliE_Hb
https://openreview.net/forum?id=Qr0aRliE_Hb
Lin Ning,Guoyang Chen,Weifeng Zhang,Xipeng Shen
ICLR 2021,Poster
Mixed precision quantization improves DNN performance by assigning different layers with different bit-width values. Searching for the optimal bit-width for each layer, however, remains a challenge. Deep Reinforcement Learning (DRL) shows some recent promise. It however suffers instability due to function approximation...
https://openreview.net/pdf/4f1af14f420632aa60f163e48701a935fae3a547.pdf
The inductive bias of ReLU networks on orthogonally separable data
https://openreview.net/forum?id=krz7T0xU9Z_
https://openreview.net/forum?id=krz7T0xU9Z_
Mary Phuong,Christoph H Lampert
ICLR 2021,Poster
We study the inductive bias of two-layer ReLU networks trained by gradient flow. We identify a class of easy-to-learn (`orthogonally separable') datasets, and characterise the solution that ReLU networks trained on such datasets converge to. Irrespective of network width, the solution turns out to be a combination of t...
https://openreview.net/pdf/a68e4ef7c465175fddb6ba540763c62f8708c9e3.pdf
Monte-Carlo Planning and Learning with Language Action Value Estimates
https://openreview.net/forum?id=7_G8JySGecm
https://openreview.net/forum?id=7_G8JySGecm
Youngsoo Jang,Seokin Seo,Jongmin Lee,Kee-Eung Kim
ICLR 2021,Poster
Interactive Fiction (IF) games provide a useful testbed for language-based reinforcement learning agents, posing significant challenges of natural language understanding, commonsense reasoning, and non-myopic planning in the combinatorial search space. Agents based on standard planning algorithms struggle to play IF ga...
https://openreview.net/pdf/255385188b591f81f5ec4cb8c99ea2b92467f6be.pdf
Learning Energy-Based Models by Diffusion Recovery Likelihood
https://openreview.net/forum?id=v_1Soh8QUNc
https://openreview.net/forum?id=v_1Soh8QUNc
Ruiqi Gao,Yang Song,Ben Poole,Ying Nian Wu,Diederik P Kingma
ICLR 2021,Poster
While energy-based models (EBMs) exhibit a number of desirable properties, training and sampling on high-dimensional datasets remains challenging. Inspired by recent progress on diffusion probabilistic models, we present a diffusion recovery likelihood method to tractably learn and sample from a sequence of EBMs traine...
https://openreview.net/pdf/bb74e78ec73a15dcfd250d8dac827fa7009897b2.pdf
Capturing Label Characteristics in VAEs
https://openreview.net/forum?id=wQRlSUZ5V7B
https://openreview.net/forum?id=wQRlSUZ5V7B
Tom Joy,Sebastian Schmon,Philip Torr,Siddharth N,Tom Rainforth
ICLR 2021,Poster
We present a principled approach to incorporating labels in variational autoencoders (VAEs) that captures the rich characteristic information associated with those labels. While prior work has typically conflated these by learning latent variables that directly correspond to label values, we argue this is contrary to t...
https://openreview.net/pdf/f58d5a4d19e174d578190ec9687a1904e52596b6.pdf
Linear Mode Connectivity in Multitask and Continual Learning
https://openreview.net/forum?id=Fmg_fQYUejf
https://openreview.net/forum?id=Fmg_fQYUejf
Seyed Iman Mirzadeh,Mehrdad Farajtabar,Dilan Gorur,Razvan Pascanu,Hassan Ghasemzadeh
ICLR 2021,Poster
Continual (sequential) training and multitask (simultaneous) training are often attempting to solve the same overall objective: to find a solution that performs well on all considered tasks. The main difference is in the training regimes, where continual learning can only have access to one task at a time, which for ne...
https://openreview.net/pdf/258e0f0ad7124932b50cc607ded20cd020bfccf8.pdf
Computational Separation Between Convolutional and Fully-Connected Networks
https://openreview.net/forum?id=hkMoYYEkBoI
https://openreview.net/forum?id=hkMoYYEkBoI
Eran Malach,Shai Shalev-Shwartz
ICLR 2021,Poster
Convolutional neural networks (CNN) exhibit unmatched performance in a multitude of computer vision tasks. However, the advantage of using convolutional networks over fully-connected networks is not understood from a theoretical perspective. In this work, we show how convolutional networks can leverage locality in the ...
https://openreview.net/pdf/f6530436996abef24697ac8461be780c738d0b41.pdf
Rethinking Embedding Coupling in Pre-trained Language Models
https://openreview.net/forum?id=xpFFI_NtgpW
https://openreview.net/forum?id=xpFFI_NtgpW
Hyung Won Chung,Thibault Fevry,Henry Tsai,Melvin Johnson,Sebastian Ruder
ICLR 2021,Poster
We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of mul...
https://openreview.net/pdf/adedfbb0966285d46a1b5e7fb42ed8f57385af9e.pdf
Physics-aware, probabilistic model order reduction with guaranteed stability
https://openreview.net/forum?id=vyY0jnWG-tK
https://openreview.net/forum?id=vyY0jnWG-tK
Sebastian Kaltenbach,Phaedon Stelios Koutsourelakis
ICLR 2021,Poster
Given (small amounts of) time-series data from a high-dimensional, fine-grained, multiscale dynamical system, we propose a generative framework for learning an effective, lower-dimensional, coarse-grained dynamical model that is predictive of the fine-grained system's long-term evolution but also of its behavior unde...
https://openreview.net/pdf/0dbc13eb90ca0605840fb7ee708d76db95df9cbd.pdf
Disentangling 3D Prototypical Networks for Few-Shot Concept Learning
https://openreview.net/forum?id=-Lr-u0b42he
https://openreview.net/forum?id=-Lr-u0b42he
Mihir Prabhudesai,Shamit Lal,Darshan Patil,Hsiao-Yu Tung,Adam W Harley,Katerina Fragkiadaki
ICLR 2021,Poster
We present neural architectures that disentangle RGB-D images into objects’ shapes and styles and a map of the background scene, and explore their applications for few-shot 3D object detection and few-shot concept classification. Our networks incorporate architectural biases that reflect the image formation process, 3D...
https://openreview.net/pdf/b42e4e31403f7d4fdb789fb870cace1f71e6bb86.pdf
LiftPool: Bidirectional ConvNet Pooling
https://openreview.net/forum?id=kE3vd639uRW
https://openreview.net/forum?id=kE3vd639uRW
Jiaojiao Zhao,Cees G. M. Snoek
ICLR 2021,Poster
Pooling is a critical operation in convolutional neural networks for increasing receptive fields and improving robustness to input variations. Most existing pooling operations downsample the feature maps, which is a lossy process. Moreover, they are not invertible: upsampling a downscaled feature map cannot recover...
https://openreview.net/pdf/723c52d5e33d391f50b4913e512241a208596a0c.pdf
Latent Convergent Cross Mapping
https://openreview.net/forum?id=4TSiOTkKe5P
https://openreview.net/forum?id=4TSiOTkKe5P
Edward De Brouwer,Adam Arany,Jaak Simm,Yves Moreau
ICLR 2021,Poster
Discovering causal structures of temporal processes is a major tool of scientific inquiry because it helps us better understand and explain the mechanisms driving a phenomenon of interest, thereby facilitating analysis, reasoning, and synthesis for such systems. However, accurately inferring causal structures within a...
https://openreview.net/pdf/973e4e487f91472cfee202c1353ca7932b83a942.pdf
You Only Need Adversarial Supervision for Semantic Image Synthesis
https://openreview.net/forum?id=yvQKLaqNE6M
https://openreview.net/forum?id=yvQKLaqNE6M
Edgar Schönfeld,Vadim Sushko,Dan Zhang,Juergen Gall,Bernt Schiele,Anna Khoreva
ICLR 2021,Poster
Despite their recent successes, GAN models for semantic image synthesis still suffer from poor image quality when trained with only adversarial supervision. Historically, additionally employing the VGG-based perceptual loss has helped to overcome this issue, significantly improving the synthesis quality, but at the sam...
https://openreview.net/pdf/296a08e6901d8e9191af10b50555200a0efb3fc4.pdf
A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima
https://openreview.net/forum?id=wXgk_iCiYGo
https://openreview.net/forum?id=wXgk_iCiYGo
Zeke Xie,Issei Sato,Masashi Sugiyama
ICLR 2021,Poster
Stochastic Gradient Descent (SGD) and its variants are mainstream methods for training deep networks in practice. SGD is known to find a flat minimum that often generalizes well. However, it is mathematically unclear how deep learning can select a flat minimum among so many minima. To answer the question quantitatively...
https://openreview.net/pdf/8d09cb383c404f3ef7a8782e7e20297845235b60.pdf
Robust Learning of Fixed-Structure Bayesian Networks in Nearly-Linear Time
https://openreview.net/forum?id=euDnVs0Ynts
https://openreview.net/forum?id=euDnVs0Ynts
Yu Cheng,Honghao Lin
ICLR 2021,Poster
We study the problem of learning Bayesian networks where an $\epsilon$-fraction of the samples are adversarially corrupted. We focus on the fully-observable case where the underlying graph structure is known. In this work, we present the first nearly-linear time algorithm for this problem with a dimension-independent...
https://openreview.net/pdf/01c090bb63e775869f6bc2d003ebf3cd5e79df67.pdf
Activation-level uncertainty in deep neural networks
https://openreview.net/forum?id=UvBPbpvHRj-
https://openreview.net/forum?id=UvBPbpvHRj-
Pablo Morales-Alvarez,Daniel Hernández-Lobato,Rafael Molina,José Miguel Hernández-Lobato
ICLR 2021,Poster
Current approaches for uncertainty estimation in deep learning often produce too confident results. Bayesian Neural Networks (BNNs) model uncertainty in the space of weights, which is usually high-dimensional and limits the quality of variational approximations. The more recent functional BNNs (fBNNs) address this only...
https://openreview.net/pdf/3675d798eb4cc1b53b84850025e0a9edaee1ddcb.pdf
SkipW: Resource Adaptable RNN with Strict Upper Computational Limit
https://openreview.net/forum?id=2CjEVW-RGOJ
https://openreview.net/forum?id=2CjEVW-RGOJ
Tsiry Mayet,Anne Lambert,Pascal Leguyadec,Francoise Le Bolzer,François Schnitzler
ICLR 2021,Poster
We introduce Skip-Window, a method to allow recurrent neural networks (RNNs) to trade off accuracy for computational cost during the analysis of a sequence. Similarly to existing approaches, Skip-Window extends existing RNN cells by adding a mechanism to encourage the model to process fewer inputs. Unlike existing appr...
https://openreview.net/pdf/6c45c14eaa50cfd7a61ea01da21211148f40eccf.pdf
Wasserstein-2 Generative Networks
https://openreview.net/forum?id=bEoxzW_EXsa
https://openreview.net/forum?id=bEoxzW_EXsa
Alexander Korotin,Vage Egiazarian,Arip Asadulaev,Alexander Safin,Evgeny Burnaev
ICLR 2021,Poster
We propose a novel end-to-end non-minimax algorithm for training optimal transport mappings for the quadratic cost (Wasserstein-2 distance). The algorithm uses input convex neural networks and a cycle-consistency regularization to approximate Wasserstein-2 distance. In contrast to popular entropic and quadratic regular...
https://openreview.net/pdf/dbe3a9934dc8bb605cdc8c67d7e68c0a54cf4d38.pdf
Group Equivariant Stand-Alone Self-Attention For Vision
https://openreview.net/forum?id=JkfYjnOEo6M
https://openreview.net/forum?id=JkfYjnOEo6M
David W. Romero,Jean-Baptiste Cordonnier
ICLR 2021,Poster
We provide a general self-attention formulation to impose group equivariance to arbitrary symmetry groups. This is achieved by defining positional encodings that are invariant to the action of the group considered. Since the group acts on the positional encoding directly, group equivariant self-attention networks (GSA-...
https://openreview.net/pdf/d8bac9d42bd7732afa503ae4fe5f83e1ace88bb2.pdf
Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization
https://openreview.net/forum?id=3tFAs5E-Pe
https://openreview.net/forum?id=3tFAs5E-Pe
Alexander Korotin,Lingxiao Li,Justin Solomon,Evgeny Burnaev
ICLR 2021,Poster
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport. In this paper, we present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures, which are not restricted to being discrete. While past approaches ...
https://openreview.net/pdf/e0ff5cb89ad8da4cac3b85587213f35c465757fc.pdf
RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs
https://openreview.net/forum?id=tGZu6DlbreV
https://openreview.net/forum?id=tGZu6DlbreV
Meng Qu,Junkun Chen,Louis-Pascal Xhonneux,Yoshua Bengio,Jian Tang
ICLR 2021,Poster
This paper studies learning logic rules for reasoning on knowledge graphs. Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks, and hence are critical to learn. Existing methods either suffer from the problem of searching in a large search space (e....
https://openreview.net/pdf/847ad1169fb024508870737fba6927e2e34b9271.pdf
Selective Classification Can Magnify Disparities Across Groups
https://openreview.net/forum?id=N0M_4BkQ05i
https://openreview.net/forum?id=N0M_4BkQ05i
Erik Jones,Shiori Sagawa,Pang Wei Koh,Ananya Kumar,Percy Liang
ICLR 2021,Poster
Selective classification, in which models can abstain on uncertain predictions, is a natural approach to improving accuracy in settings where errors are costly but abstentions are manageable. In this paper, we find that while selective classification can improve average accuracies, it can simultaneously magnify existin...
https://openreview.net/pdf/b9ac6534faf7141a9138e3cfcfed7dbada0a6f36.pdf
FedMix: Approximation of Mixup under Mean Augmented Federated Learning
https://openreview.net/forum?id=Ogga20D2HO-
https://openreview.net/forum?id=Ogga20D2HO-
Tehrim Yoon,Sumin Shin,Sung Ju Hwang,Eunho Yang
ICLR 2021,Poster
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device, thus preserving privacy and eliminating the need to store data globally. While there are promising results under the assumption of independent and identically distributed (iid) local data, current...
https://openreview.net/pdf/0258da18459084a22b881d20dbd411e7184bb3d3.pdf
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
https://openreview.net/forum?id=jznizqvr15J
https://openreview.net/forum?id=jznizqvr15J
Sang Michael Xie,Ananya Kumar,Robbie Jones,Fereshte Khani,Tengyu Ma,Percy Liang
ICLR 2021,Poster
Consider a prediction setting with few in-distribution labeled examples and many unlabeled examples both in- and out-of-distribution (OOD). The goal is to learn a model which performs well both in-distribution and OOD. In these settings, auxiliary information is often cheaply available for every input. How should we be...
https://openreview.net/pdf/b003dea7a8dcfbb18d462cb7ce96b56a1a484fc6.pdf
Sample-Efficient Automated Deep Reinforcement Learning
https://openreview.net/forum?id=hSjxQ3B7GWq
https://openreview.net/forum?id=hSjxQ3B7GWq
Jörg K.H. Franke,Gregor Koehler,André Biedenkapp,Frank Hutter
ICLR 2021,Poster
Despite significant progress in challenging problems across various domains, applying state-of-the-art deep reinforcement learning (RL) algorithms remains challenging due to their sensitivity to the choice of hyperparameters. This sensitivity can partly be attributed to the non-stationarity of the RL problem, potential...
https://openreview.net/pdf/50e735ee784190b4976fe22036a75b2ac2feee2b.pdf
A Temporal Kernel Approach for Deep Learning with Continuous-time Information
https://openreview.net/forum?id=whE31dn74cL
https://openreview.net/forum?id=whE31dn74cL
Da Xu,Chuanwei Ruan,Evren Korpeoglu,Sushant Kumar,Kannan Achan
ICLR 2021,Poster
Sequential deep learning models such as RNN, causal CNN and attention mechanism do not readily consume continuous-time information. Discretizing the temporal data, as we show, causes inconsistency even for simple continuous-time processes. Current approaches often handle time in a heuristic manner to be consistent with...
https://openreview.net/pdf/40fc3e707f1a7db2333d5459c3b472809d4e33c1.pdf
Convex Regularization behind Neural Reconstruction
https://openreview.net/forum?id=VErQxgyrbfn
https://openreview.net/forum?id=VErQxgyrbfn
Arda Sahiner,Morteza Mardani,Batu Ozturkler,Mert Pilanci,John M. Pauly
ICLR 2021,Poster
Neural networks have shown tremendous potential for reconstructing high-resolution images in inverse problems. The non-convex and opaque nature of neural networks, however, hinders their utility in sensitive applications such as medical imaging. To cope with this challenge, this paper advocates a convex duality framewo...
https://openreview.net/pdf/cd9dfc05e045919a65b1eb93e132822e42d873e4.pdf
Vector-output ReLU Neural Network Problems are Copositive Programs: Convex Analysis of Two Layer Networks and Polynomial-time Algorithms
https://openreview.net/forum?id=fGF8qAqpXXG
https://openreview.net/forum?id=fGF8qAqpXXG
Arda Sahiner,Tolga Ergen,John M. Pauly,Mert Pilanci
ICLR 2021,Poster
We describe the convex semi-infinite dual of the two-layer vector-output ReLU neural network training problem. This semi-infinite dual admits a finite dimensional representation, but its support is over a convex set which is difficult to characterize. In particular, we demonstrate that the non-convex neural network tra...
https://openreview.net/pdf/0222bcef2a87d75e3670c0707c8b848554ecbe31.pdf
Learning Better Structured Representations Using Low-rank Adaptive Label Smoothing
https://openreview.net/forum?id=5NsEIflpbSv
https://openreview.net/forum?id=5NsEIflpbSv
Asish Ghoshal,Xilun Chen,Sonal Gupta,Luke Zettlemoyer,Yashar Mehdad
ICLR 2021,Poster
Training with soft targets instead of hard targets has been shown to improve performance and calibration of deep neural networks. Label smoothing is a popular way of computing soft targets, where one-hot encoding of a class is smoothed with a uniform distribution. Owing to its simplicity, label smoothing has found wide...
https://openreview.net/pdf/4538feaf2c0ace4bc3472484186d4cda25dc7c01.pdf
Training GANs with Stronger Augmentations via Contrastive Discriminator
https://openreview.net/forum?id=eo6U4CAwVmg
https://openreview.net/forum?id=eo6U4CAwVmg
Jongheon Jeong,Jinwoo Shin
ICLR 2021,Poster
Recent works in Generative Adversarial Networks (GANs) are actively revisiting various data augmentation techniques as an effective way to prevent discriminator overfitting. It is still unclear, however, which augmentations could actually improve GANs, and in particular, how to apply a wider range of augmentations...
https://openreview.net/pdf/2d308c93802630f8c000471788307eb87a9027fd.pdf
Private Image Reconstruction from System Side Channels Using Generative Models
https://openreview.net/forum?id=y06VOYLcQXa
https://openreview.net/forum?id=y06VOYLcQXa
Yuanyuan Yuan,Shuai Wang,Junping Zhang
ICLR 2021,Poster
System side channels denote effects imposed on the underlying system and hardware when running a program, such as its accessed CPU cache lines. Side channel analysis (SCA) allows attackers to infer program secrets based on observed side channel signals. Given the ever-growing adoption of machine learning as a service (...
https://openreview.net/pdf/73fc8942e64baa03de7625e340fa3c6d84db3589.pdf
Learning to Make Decisions via Submodular Regularization
https://openreview.net/forum?id=ac288vnG_7U
https://openreview.net/forum?id=ac288vnG_7U
Ayya Alieva,Aiden Aceves,Jialin Song,Stephen Mayo,Yisong Yue,Yuxin Chen
ICLR 2021,Poster
Many sequential decision making tasks can be viewed as combinatorial optimization problems over a large number of actions. When the cost of evaluating an action is high, even a greedy algorithm, which iteratively picks the best action given the history, is prohibitive to run. In this paper, we aim to learn a greedy heu...
https://openreview.net/pdf/1c1034956d2f523aa299974f4f639d1b8ecb0026.pdf
The Recurrent Neural Tangent Kernel
https://openreview.net/forum?id=3T9iFICe0Y9
https://openreview.net/forum?id=3T9iFICe0Y9
Sina Alemohammad,Zichao Wang,Randall Balestriero,Richard Baraniuk
ICLR 2021,Poster
The study of deep neural networks (DNNs) in the infinite-width limit, via the so-called neural tangent kernel (NTK) approach, has provided new insights into the dynamics of learning, generalization, and the impact of initialization. One key DNN architecture remains to be kernelized, namely, the recurrent neural network...
https://openreview.net/pdf/0ede6a7293a24c88d58e7542b3c44d97270a2a0c.pdf
Evaluation of Similarity-based Explanations
https://openreview.net/forum?id=9uvhpyQwzM_
https://openreview.net/forum?id=9uvhpyQwzM_
Kazuaki Hanawa,Sho Yokoi,Satoshi Hara,Kentaro Inui
ICLR 2021,Poster
Explaining the predictions made by complex machine learning models helps users to understand and accept the predicted outputs with confidence. One promising way is to use similarity-based explanation that provides similar instances as evidence to support model predictions. Several relevance metrics are used for this pu...
https://openreview.net/pdf/ede4daa61cd87856ebce2c047d94f9fdc6149edf.pdf
Adaptive Procedural Task Generation for Hard-Exploration Problems
https://openreview.net/forum?id=8xLkv08d70T
https://openreview.net/forum?id=8xLkv08d70T
Kuan Fang,Yuke Zhu,Silvio Savarese,L. Fei-Fei
ICLR 2021,Poster
We introduce Adaptive Procedural Task Generation (APT-Gen), an approach to progressively generate a sequence of tasks as curricula to facilitate reinforcement learning in hard-exploration problems. At the heart of our approach, a task generator learns to create tasks from a parameterized task space via a black-box proc...
https://openreview.net/pdf/24bbbe680bd44c907aab36d5e18bae82a7a5a48f.pdf
Linear Last-iterate Convergence in Constrained Saddle-point Optimization
https://openreview.net/forum?id=dx11_7vm5_r
https://openreview.net/forum?id=dx11_7vm5_r
Chen-Yu Wei,Chung-Wei Lee,Mengxiao Zhang,Haipeng Luo
ICLR 2021,Poster
Optimistic Gradient Descent Ascent (OGDA) and Optimistic Multiplicative Weights Update (OMWU) for saddle-point optimization have received growing attention due to their favorable last-iterate convergence. However, their behaviors for simple bilinear games over the probability simplex are still not fully understood --- ...
https://openreview.net/pdf/80ab11841a700c095d09408aebe0552dc6c2c21f.pdf
On Graph Neural Networks versus Graph-Augmented MLPs
https://openreview.net/forum?id=tiqI7w64JG2
https://openreview.net/forum?id=tiqI7w64JG2
Lei Chen,Zhengdao Chen,Joan Bruna
ICLR 2021,Poster
From the perspectives of expressive power and learning, this work compares multi-layer Graph Neural Networks (GNNs) with a simplified alternative that we call Graph-Augmented Multi-Layer Perceptrons (GA-MLPs), which first augments node features with certain multi-hop operators on the graph and then applies learnable no...
https://openreview.net/pdf/974857db041de4f514814723ec84f8c39aa35126.pdf
Solving Compositional Reinforcement Learning Problems via Task Reduction
https://openreview.net/forum?id=9SS69KwomAM
https://openreview.net/forum?id=9SS69KwomAM
Yunfei Li,Yilin Wu,Huazhe Xu,Xiaolong Wang,Yi Wu
ICLR 2021,Poster
We propose a novel learning paradigm, Self-Imitation via Reduction (SIR), for solving compositional reinforcement learning problems. SIR is based on two core ideas: task reduction and self-imitation. Task reduction tackles a hard-to-solve task by actively reducing it to an easier task whose solution is known by the RL ...
https://openreview.net/pdf/77f78b692f36356e5e5bbddd012a3367bd821b29.pdf
Conditional Generative Modeling via Learning the Latent Space
https://openreview.net/forum?id=VJnrYcnRc6
https://openreview.net/forum?id=VJnrYcnRc6
Sameera Ramasinghe,Kanchana Nisal Ranasinghe,Salman Khan,Nick Barnes,Stephen Gould
ICLR 2021,Poster
Although deep learning has achieved appealing results on several machine learning tasks, most of the models are deterministic at inference, limiting their application to single-modal settings. We propose a novel general-purpose framework for conditional generation in multimodal spaces, that uses latent variables to mod...
https://openreview.net/pdf/ad10b1238b8c96783d156228bbe0a955123a991c.pdf
DialoGraph: Incorporating Interpretable Strategy-Graph Networks into Negotiation Dialogues
https://openreview.net/forum?id=kDnal_bbb-E
https://openreview.net/forum?id=kDnal_bbb-E
Rishabh Joshi,Vidhisha Balachandran,Shikhar Vashishth,Alan Black,Yulia Tsvetkov
ICLR 2021,Poster
To successfully negotiate a deal, it is not enough to communicate fluently: pragmatic planning of persuasive negotiation strategies is essential. While modern dialogue agents excel at generating fluent sentences, they still lack pragmatic grounding and cannot reason strategically. We present DialoGraph, a negotiation s...
https://openreview.net/pdf/1f09e2eb0a2962d022f2fc8411de57bb2f420a25.pdf
WaNet - Imperceptible Warping-based Backdoor Attack
https://openreview.net/forum?id=eEn8KTtJOx
https://openreview.net/forum?id=eEn8KTtJOx
Tuan Anh Nguyen,Anh Tuan Tran
ICLR 2021,Poster
With the thriving of deep learning and the widespread practice of using pre-trained networks, backdoor attacks have become an increasing security threat drawing many research interests in recent years. A third-party model can be poisoned in training to work well in normal conditions but behave maliciously when a trigge...
https://openreview.net/pdf/db3277f5b47619abfe13880772b864960e98f643.pdf
Nonseparable Symplectic Neural Networks
https://openreview.net/forum?id=B5VvQrI49Pa
https://openreview.net/forum?id=B5VvQrI49Pa
Shiying Xiong,Yunjin Tong,Xingzhe He,Shuqi Yang,Cheng Yang,Bo Zhu
ICLR 2021,Poster
Predicting the behaviors of Hamiltonian systems has been drawing increasing attention in scientific machine learning. However, the vast majority of the literature was focused on predicting separable Hamiltonian systems with their kinematic and potential energy terms being explicitly decoupled, while building data-drive...
https://openreview.net/pdf/c9ab2e0778f4de8dcfb0a34ffd1c09aa50ceb3b8.pdf
Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization
https://openreview.net/forum?id=lvRTC669EY_
https://openreview.net/forum?id=lvRTC669EY_
Zhenggang Tang,Chao Yu,Boyuan Chen,Huazhe Xu,Xiaolong Wang,Fei Fang,Simon Shaolei Du,Yu Wang,Yi Wu
ICLR 2021,Poster
We propose a simple, general and effective technique, Reward Randomization for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, Reward-Randomized Policy Gradient (RPG). RPG is able to discover a set of multiple distinctiv...
https://openreview.net/pdf/2062fdf1e8a1dbc3c1d293239ad291f853463ba8.pdf
Multi-timescale Representation Learning in LSTM Language Models
https://openreview.net/forum?id=9ITXiTrAoT
https://openreview.net/forum?id=9ITXiTrAoT
Shivangi Mahto,Vy Ai Vo,Javier S. Turek,Alexander Huth
ICLR 2021,Poster
Language models must capture statistical dependencies between words at timescales ranging from very short to very long. Earlier work has demonstrated that dependencies in natural language tend to decay with distance between words according to a power law. However, it is unclear how this knowledge can be used for analyz...
https://openreview.net/pdf/6faff0f37219bcee41b257a3d80d7eeb3df0e2d6.pdf
Explaining the Efficacy of Counterfactually Augmented Data
https://openreview.net/forum?id=HHiiQKWsOcV
https://openreview.net/forum?id=HHiiQKWsOcV
Divyansh Kaushik,Amrith Setlur,Eduard H Hovy,Zachary Chase Lipton
ICLR 2021,Poster
In attempts to produce machine learning models less reliant on spurious patterns in NLP datasets, researchers have recently proposed curating counterfactually augmented data (CAD) via a human-in-the-loop process in which given some documents and their (initial) labels, humans must revise the text to make a counterfactu...
https://openreview.net/pdf/73361dc2c4d80cb501745448d7de1e3c99d2f2a8.pdf
Revisiting Locally Supervised Learning: an Alternative to End-to-end Training
https://openreview.net/forum?id=fAbkE6ant2
https://openreview.net/forum?id=fAbkE6ant2
Yulin Wang,Zanlin Ni,Shiji Song,Le Yang,Gao Huang
ICLR 2021,Poster
Due to the need to store the intermediate activations for back-propagation, end-to-end (E2E) training of deep networks usually suffers from a high GPU memory footprint. This paper aims to address this problem by revisiting locally supervised learning, where a network is split into gradient-isolated modules and train...
https://openreview.net/pdf/ae46b2e0daac3e1e7af2c0b30ca3ed05b9675f66.pdf
How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
https://openreview.net/forum?id=fgd7we_uZa6
https://openreview.net/forum?id=fgd7we_uZa6
Zixiang Chen,Yuan Cao,Difan Zou,Quanquan Gu
ICLR 2021,Poster
A recent line of research on deep learning focuses on the extremely over-parameterized setting, and shows that when the network width is larger than a high degree polynomial of the training sample size $n$ and the inverse of the target error $\epsilon^{-1}$, deep neural networks learned by (stochastic) gradient descent...
https://openreview.net/pdf/7d4b4fabf3654c85ec7bc9a41516a3fe17bbccd8.pdf
Blending MPC & Value Function Approximation for Efficient Reinforcement Learning
https://openreview.net/forum?id=RqCC_00Bg7V
https://openreview.net/forum?id=RqCC_00Bg7V
Mohak Bhardwaj,Sanjiban Choudhury,Byron Boots
ICLR 2021,Poster
Model-Predictive Control (MPC) is a powerful tool for controlling complex, real-world systems that uses a model to make predictions about future behavior. For each state encountered, MPC solves an online optimization problem to choose a control action that will minimize future cost. This is a surprisingly effective str...
https://openreview.net/pdf/50c99bb8be8ec7784b7ca8b4a8b59da987b66045.pdf
Probabilistic Numeric Convolutional Neural Networks
https://openreview.net/forum?id=T1XmO8ScKim
https://openreview.net/forum?id=T1XmO8ScKim
Marc Anton Finzi,Roberto Bondesan,Max Welling
ICLR 2021,Poster
Continuous input signals like images and time series that are irregularly sampled or have missing values are challenging for existing deep learning methods. Coherently defined feature representations must depend on the values in unobserved regions of the input. Drawing from the work in probabilistic numerics, we propos...
https://openreview.net/pdf/132819644044c301e530ea14a0a17e7e4d6756d7.pdf