Fields per record: title, url, detail_url, authors, tags, abstract, pdf
HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients
https://openreview.net/forum?id=TNkPBBYFkXg
https://openreview.net/forum?id=TNkPBBYFkXg
Enmao Diao,Jie Ding,Vahid Tarokh
ICLR 2021,Poster
Federated Learning (FL) is a method of training machine learning models on private data distributed over a large number of possibly heterogeneous clients such as mobile phones and IoT devices. In this work, we propose a new federated learning framework named HeteroFL to address heterogeneous clients equipped with very ...
https://openreview.net/pdf/da02aa1b25ebd5799fabfa9e199c793460ef9794.pdf
Semantic Re-tuning with Contrastive Tension
https://openreview.net/forum?id=Ov_sMNau-PF
https://openreview.net/forum?id=Ov_sMNau-PF
Fredrik Carlsson,Amaru Cuba Gyllensten,Evangelia Gogoulou,Erik Ylipää Hellqvist,Magnus Sahlgren
ICLR 2021,Poster
Extracting semantically useful natural language sentence representations from pre-trained deep neural networks such as Transformers remains a challenge. We first demonstrate that pre-training objectives impose a significant task bias onto the final layers of models with a layer-wise survey of the Semantic Textual Simil...
https://openreview.net/pdf/183f4e3fc886804360e6169ab1b7192bbe476098.pdf
Dataset Meta-Learning from Kernel Ridge-Regression
https://openreview.net/forum?id=l-PrrQrK0QR
https://openreview.net/forum?id=l-PrrQrK0QR
Timothy Nguyen,Zhourong Chen,Jaehoon Lee
ICLR 2021,Poster
One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm. We introduce the novel concept of $\epsilon$-approximation of datasets, obtaining datasets which are much smaller than or are significant corruptions of the original training data while maintaining similar...
https://openreview.net/pdf/e5bd67ca9948951b21c82c12b69280270a7bfe71.pdf
Auxiliary Task Update Decomposition: The Good, The Bad and The Neutral
https://openreview.net/forum?id=1GTma8HwlYp
https://openreview.net/forum?id=1GTma8HwlYp
Lucio M. Dery,Yann Dauphin,David Grangier
ICLR 2021,Poster
While deep learning has been very beneficial in data-rich settings, tasks with smaller training sets often resort to pre-training or multitask learning to leverage data from other tasks. In this case, careful consideration is needed to select tasks and model parameterizations such that updates from the auxiliary tasks a...
https://openreview.net/pdf/abc70350e11147c46076e6b97c615c42e2ab46d5.pdf
Fast And Slow Learning Of Recurrent Independent Mechanisms
https://openreview.net/forum?id=Lc28QAB4ypz
https://openreview.net/forum?id=Lc28QAB4ypz
Kanika Madan,Nan Rosemary Ke,Anirudh Goyal,Bernhard Schölkopf,Yoshua Bengio
ICLR 2021,Poster
Decomposing knowledge into interchangeable pieces promises a generalization advantage when there are changes in distribution. A learning agent interacting with its environment is likely to be faced with situations requiring novel combinations of existing pieces of knowledge. We hypothesize that such a decomposition of ...
https://openreview.net/pdf/023176cca43806a7d1f2ee58f5d0b4940b4331b2.pdf
Auction Learning as a Two-Player Game
https://openreview.net/forum?id=YHdeAO61l6T
https://openreview.net/forum?id=YHdeAO61l6T
Jad Rahme,Samy Jelassi,S. Matthew Weinberg
ICLR 2021,Poster
Designing an incentive compatible auction that maximizes expected revenue is a central problem in Auction Design. While theoretical approaches to the problem have hit some limits, a recent research direction initiated by Duetting et al. (2019) consists in building neural network architectures to find optimal auctions. ...
https://openreview.net/pdf/4d275376b7d84287985b9093947c20acab6d0751.pdf
A PAC-Bayesian Approach to Generalization Bounds for Graph Neural Networks
https://openreview.net/forum?id=TR-Nj6nFx42
https://openreview.net/forum?id=TR-Nj6nFx42
Renjie Liao,Raquel Urtasun,Richard Zemel
ICLR 2021,Poster
In this paper, we derive generalization bounds for two primary classes of graph neural networks (GNNs), namely graph convolutional networks (GCNs) and message passing GNNs (MPGNNs), via a PAC-Bayesian approach. Our result reveals that the maximum node degree and the spectral norm of the weights govern the generalizatio...
https://openreview.net/pdf/0759194b8b655e695c8feaf2e2eef3a788da9130.pdf
Contextual Transformation Networks for Online Continual Learning
https://openreview.net/forum?id=zx_uX-BO7CH
https://openreview.net/forum?id=zx_uX-BO7CH
Quang Pham,Chenghao Liu,Doyen Sahoo,Steven Hoi
ICLR 2021,Poster
Continual learning methods with fixed architectures rely on a single network to learn models that can perform well on all tasks. As a result, they often only accommodate common features of those tasks but neglect each task's specific features. On the other hand, dynamic architecture methods can have a separate network ...
https://openreview.net/pdf/677e7eacc15f5a8cfa20b7a38a726599b2f960ca.pdf
Adaptive and Generative Zero-Shot Learning
https://openreview.net/forum?id=ahAUv8TI2Mz
https://openreview.net/forum?id=ahAUv8TI2Mz
Yu-Ying Chou,Hsuan-Tien Lin,Tyng-Luh Liu
ICLR 2021,Poster
We address the problem of generalized zero-shot learning (GZSL) where the task is to predict the class label of a target image whether its label belongs to the seen or unseen category. Similar to ZSL, the learning setting assumes that all class-level semantic features are given, while only the images of seen classes ar...
https://openreview.net/pdf/c95de71bec56a004df30033ab55061c714367261.pdf
Online Adversarial Purification based on Self-supervised Learning
https://openreview.net/forum?id=_i3ASPp12WS
https://openreview.net/forum?id=_i3ASPp12WS
Changhao Shi,Chester Holtz,Gal Mishne
ICLR 2021,Poster
Deep neural networks are known to be vulnerable to adversarial examples, where a perturbation in the input space leads to an amplified shift in the latent network representation. In this paper, we combine canonical supervised learning with self-supervised representation learning, and present Self-supervised Online Adve...
https://openreview.net/pdf/c72b2431912d9433eb862f7ff1c59d589191b939.pdf
FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders
https://openreview.net/forum?id=N6JECD-PI5w
https://openreview.net/forum?id=N6JECD-PI5w
Pengyu Cheng,Weituo Hao,Siyang Yuan,Shijing Si,Lawrence Carin
ICLR 2021,Poster
Pretrained text encoders, such as BERT, have been applied increasingly in various natural language processing (NLP) tasks, and have recently demonstrated significant performance gains. However, recent studies have demonstrated the existence of social bias in these pretrained NLP models. Although prior works have made p...
https://openreview.net/pdf/c3b19ced57b7827c059693736c4217a27b682d92.pdf
Reset-Free Lifelong Learning with Skill-Space Planning
https://openreview.net/forum?id=HIGSa_3kOx3
https://openreview.net/forum?id=HIGSa_3kOx3
Kevin Lu,Aditya Grover,Pieter Abbeel,Igor Mordatch
ICLR 2021,Poster
The objective of lifelong reinforcement learning (RL) is to optimize agents which can continuously adapt and interact in changing environments. However, current RL approaches fail drastically when environments are non-stationary and interactions are non-episodic. We propose Lifelong Skill Planning (Li...
https://openreview.net/pdf/c2294e8113d0b33d3849f2a97396d946826c3de3.pdf
Efficient Empowerment Estimation for Unsupervised Stabilization
https://openreview.net/forum?id=u2YNJPcQlwq
https://openreview.net/forum?id=u2YNJPcQlwq
Ruihan Zhao,Kevin Lu,Pieter Abbeel,Stas Tiomkin
ICLR 2021,Poster
Intrinsically motivated artificial agents learn advantageous behavior without externally-provided rewards. Previously, it was shown that maximizing mutual information between agent actuators and future states, known as the empowerment principle, enables unsupervised stabilization of dynamical systems at upright positio...
https://openreview.net/pdf/59dc834b878ff1144857f1787ea553243043395c.pdf
MixKD: Towards Efficient Distillation of Large-scale Language Models
https://openreview.net/forum?id=UFGEelJkLu5
https://openreview.net/forum?id=UFGEelJkLu5
Kevin J Liang,Weituo Hao,Dinghan Shen,Yufan Zhou,Weizhu Chen,Changyou Chen,Lawrence Carin
ICLR 2021,Poster
Large-scale language models have recently demonstrated impressive empirical performance. Nevertheless, the improved results are attained at the price of bigger models, more power consumption, and slower inference, which hinder their applicability to low-resource (both memory and computation) platforms. Knowledge distil...
https://openreview.net/pdf/1973dfb092fcfb9ef04acaf338a759f67dcc68b8.pdf
CaPC Learning: Confidential and Private Collaborative Learning
https://openreview.net/forum?id=h2EbJ4_wMVq
https://openreview.net/forum?id=h2EbJ4_wMVq
Christopher A. Choquette-Choo,Natalie Dullerud,Adam Dziedzic,Yunxiang Zhang,Somesh Jha,Nicolas Papernot,Xiao Wang
ICLR 2021,Poster
Machine learning benefits from large training datasets, which may not always be possible to collect by any single entity, especially when using privacy-sensitive data. In many contexts, such as healthcare and finance, separate parties may wish to collaborate and learn from each other's data but are prevented from doing...
https://openreview.net/pdf/db02ce664fd72e5ee7ca8809ea8714aa7e6cfdb6.pdf
Multiplicative Filter Networks
https://openreview.net/forum?id=OmtmcPkkhT
https://openreview.net/forum?id=OmtmcPkkhT
Rizal Fathony,Anit Kumar Sahu,Devin Willmott,J Zico Kolter
ICLR 2021,Poster
Although deep networks are typically used to approximate functions over high dimensional inputs, recent work has increased interest in neural networks as function approximators for low-dimensional-but-complex functions, such as representing images as a function of pixel coordinates, solving differential equations, or r...
https://openreview.net/pdf/e0702b90f0766df82135fc5e14a2df510e9aa9d5.pdf
Planning from Pixels using Inverse Dynamics Models
https://openreview.net/forum?id=V6BjBgku7Ro
https://openreview.net/forum?id=V6BjBgku7Ro
Keiran Paster,Sheila A. McIlraith,Jimmy Ba
ICLR 2021,Poster
Learning dynamics models in high-dimensional observation spaces can be challenging for model-based RL agents. We propose a novel way to learn models in a latent space by learning to predict sequences of future actions conditioned on task completion. These models track task-relevant environment dynamics over a distribut...
https://openreview.net/pdf/e1667f4513f3892a3eac139e23ee5198363e6741.pdf
Semi-supervised Keypoint Localization
https://openreview.net/forum?id=yFJ67zTeI2
https://openreview.net/forum?id=yFJ67zTeI2
Olga Moskvyak,Frederic Maire,Feras Dayoub,Mahsa Baktashmotlagh
ICLR 2021,Poster
Knowledge about the locations of keypoints of an object in an image can assist in fine-grained classification and identification tasks, particularly for the case of objects that exhibit large variations in poses that greatly influence their visual appearance, such as wild animals. However, supervised training of a keyp...
https://openreview.net/pdf/60b9fa896e2494cd3d4cdf45231c251d92e4bb9f.pdf
Emergent Road Rules In Multi-Agent Driving Environments
https://openreview.net/forum?id=d8Q1mt2Ghw
https://openreview.net/forum?id=d8Q1mt2Ghw
Avik Pal,Jonah Philion,Yuan-Hong Liao,Sanja Fidler
ICLR 2021,Poster
For autonomous vehicles to safely share the road with human drivers, autonomous vehicles must abide by specific "road rules" that human drivers have agreed to follow. "Road rules" include rules that drivers are required to follow by law – such as the requirement that vehicles stop at red lights – as well as more subtle...
https://openreview.net/pdf/858edcf2544391055e14f4c41482bcc25bb9ae3f.pdf
SSD: A Unified Framework for Self-Supervised Outlier Detection
https://openreview.net/forum?id=v5gjXpmR8J
https://openreview.net/forum?id=v5gjXpmR8J
Vikash Sehwag,Mung Chiang,Prateek Mittal
ICLR 2021,Poster
We ask the following question: what training information is required to design an effective outlier/out-of-distribution (OOD) detector, i.e., detecting samples that lie far away from the training distribution? Since unlabeled data is easily accessible for many applications, the most compelling approach is to develop detect...
https://openreview.net/pdf/89219c090f5f217510ca46c6b68a0b62df071e81.pdf
Economic Hyperparameter Optimization with Blended Search Strategy
https://openreview.net/forum?id=VbLH04pRA3
https://openreview.net/forum?id=VbLH04pRA3
Chi Wang,Qingyun Wu,Silu Huang,Amin Saied
ICLR 2021,Poster
We study the problem of low-cost search for hyperparameter configurations in a large search space with heterogeneous evaluation cost and model quality. We propose a blended search strategy to combine the strengths of global and local search, and prioritize them on the fly with the goal of minimizing the total ...
https://openreview.net/pdf/77d37e291c10e692ce47faac8bfed0bbbf8f58bd.pdf
Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks
https://openreview.net/forum?id=KYPz4YsCPj
https://openreview.net/forum?id=KYPz4YsCPj
Yanbang Wang,Yen-Yu Chang,Yunyu Liu,Jure Leskovec,Pan Li
ICLR 2021,Poster
Temporal networks serve as abstractions of many real-world dynamic systems. These networks typically evolve according to certain laws, such as the law of triadic closure, which is universal in social networks. Inductive representation learning of temporal networks should be able to capture such laws and further be appl...
https://openreview.net/pdf/6cc011fa593c3860f0afcadd1e157f9160471ce6.pdf
Robust Overfitting may be mitigated by properly learned smoothening
https://openreview.net/forum?id=qZzy5urZw9
https://openreview.net/forum?id=qZzy5urZw9
Tianlong Chen,Zhenyu Zhang,Sijia Liu,Shiyu Chang,Zhangyang Wang
ICLR 2021,Poster
A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in adversarially robust training of deep networks, and that appropriate early-stopping of adversarial training (AT) could match the performance gains of most recent algorithmic improvements. This intriguing problem of robust overfittin...
https://openreview.net/pdf/099a32f12e88483a4451fe099750daeb8a1a0128.pdf
Local Search Algorithms for Rank-Constrained Convex Optimization
https://openreview.net/forum?id=tH6_VWZjoq
https://openreview.net/forum?id=tH6_VWZjoq
Kyriakos Axiotis,Maxim Sviridenko
ICLR 2021,Poster
We propose greedy and local search algorithms for rank-constrained convex optimization, namely solving $\underset{\mathrm{rank}(A)\leq r^*}{\min}\, R(A)$ given a convex function $R:\mathbb{R}^{m\times n}\rightarrow \mathbb{R}$ and a parameter $r^*$. These algorithms consist of repeating two steps: (a) adding a new rank...
https://openreview.net/pdf/458331518335ba1ae4617033b3d271418ec81093.pdf
Learning Task Decomposition with Ordered Memory Policy Network
https://openreview.net/forum?id=vcopnwZ7bC
https://openreview.net/forum?id=vcopnwZ7bC
Yuchen Lu,Yikang Shen,Siyuan Zhou,Aaron Courville,Joshua B. Tenenbaum,Chuang Gan
ICLR 2021,Poster
Many complex real-world tasks are composed of several levels of subtasks. Humans leverage these hierarchical structures to accelerate the learning process and achieve better generalization. In this work, we study the inductive bias and propose Ordered Memory Policy Network (OMPN) to discover subtask hierarchy by learni...
https://openreview.net/pdf/7f228b5f98840f6f20f111e8ed6608d54277730d.pdf
Property Controllable Variational Autoencoder via Invertible Mutual Dependence
https://openreview.net/forum?id=tYxG_OMs9WE
https://openreview.net/forum?id=tYxG_OMs9WE
Xiaojie Guo,Yuanqi Du,Liang Zhao
ICLR 2021,Poster
Deep generative models have made important progress towards modeling complex, high dimensional data via learning latent representations. Their usefulness is nevertheless often limited by a lack of control over the generative process or a poor understanding of the latent representation. To overcome these issues, attenti...
https://openreview.net/pdf/7a243ac1776d6ff97e3e45e24a68cc1d897b9b36.pdf
Grounding Physical Concepts of Objects and Events Through Dynamic Visual Reasoning
https://openreview.net/forum?id=bhCDO_cEGCz
https://openreview.net/forum?id=bhCDO_cEGCz
Zhenfang Chen,Jiayuan Mao,Jiajun Wu,Kwan-Yee Kenneth Wong,Joshua B. Tenenbaum,Chuang Gan
ICLR 2021,Poster
We study the problem of dynamic visual reasoning on raw videos. This is a challenging problem; currently, state-of-the-art models often require dense supervision on physical object properties and events from simulation, which are impractical to obtain in real life. In this paper, we present the Dynamic Concept Learner ...
https://openreview.net/pdf/b0012d0f037d3416af76be33e23dacc31d14746f.pdf
gradSim: Differentiable simulation for system identification and visuomotor control
https://openreview.net/forum?id=c_E8kFWfhp0
https://openreview.net/forum?id=c_E8kFWfhp0
J. Krishna Murthy,Miles Macklin,Florian Golemo,Vikram Voleti,Linda Petrini,Martin Weiss,Breandan Considine,Jérôme Parent-Lévesque,Kevin Xie,Kenny Erleben,Liam Paull,Florian Shkurti,Derek Nowrouzezahrai,Sanja Fidler
ICLR 2021,Poster
In this paper, we tackle the problem of estimating object physical properties such as mass, friction, and elasticity directly from video sequences. Such a system identification problem is fundamentally ill-posed due to the loss of information during image formation. Current best solutions to the problem require precise...
https://openreview.net/pdf/4a6d5a30558be4f1d305beba6c91e7617ddb5c96.pdf
Generative Scene Graph Networks
https://openreview.net/forum?id=RmcPm9m3tnk
https://openreview.net/forum?id=RmcPm9m3tnk
Fei Deng,Zhuo Zhi,Donghun Lee,Sungjin Ahn
ICLR 2021,Poster
Human perception excels at building compositional hierarchies of parts and objects from unlabeled scenes that help systematic generalization. Yet most work on generative scene modeling either ignores the part-whole relationship or assumes access to predefined part labels. In this paper, we propose Generative Scene Grap...
https://openreview.net/pdf/4972f3189bc1990cd88f0c12abbe7111acfe3c15.pdf
Decentralized Attribution of Generative Models
https://openreview.net/forum?id=_kxlwvhOodK
https://openreview.net/forum?id=_kxlwvhOodK
Changhoon Kim,Yi Ren,Yezhou Yang
ICLR 2021,Poster
Growing applications of generative models have led to new threats such as malicious personation and digital copyright infringement. One solution to these threats is model attribution, i.e., the identification of user-end models where the contents under question are generated. Existing studies showed empirical feasibil...
https://openreview.net/pdf/6895a29f55a7f92f11157dd3802660ded9122484.pdf
Individually Fair Rankings
https://openreview.net/forum?id=71zCSP_HuBN
https://openreview.net/forum?id=71zCSP_HuBN
Amanda Bower,Hamid Eftekhari,Mikhail Yurochkin,Yuekai Sun
ICLR 2021,Poster
We develop an algorithm to train individually fair learning-to-rank (LTR) models. The proposed approach ensures items from minority groups appear alongside similar items from majority groups. This notion of fair ranking is based on the definition of individual fairness from supervised learning and is more nuanced than ...
https://openreview.net/pdf/79474c3ea4a5449a9adbae8a72783142b915282b.pdf
Adaptive Federated Optimization
https://openreview.net/forum?id=LkFG3lB13U5
https://openreview.net/forum?id=LkFG3lB13U5
Sashank J. Reddi,Zachary Charles,Manzil Zaheer,Zachary Garrett,Keith Rush,Jakub Konečný,Sanjiv Kumar,Hugh Brendan McMahan
ICLR 2021,Poster
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable con...
https://openreview.net/pdf/d3f38daf93af27b20819fe19a4c3ca3f2635d9b1.pdf
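The FedAvg baseline this abstract refers to is well established; a minimal sketch of one FedAvg round (the baseline, not the paper's adaptive optimizers), simplified to a single local gradient step per client:

```python
import numpy as np

def fedavg_round(global_w, client_grads, client_sizes, lr=0.1):
    """One round of Federated Averaging: each client takes a local gradient
    step from the broadcast weights, and the server averages the resulting
    client weights, weighted by local dataset size."""
    client_ws = [global_w - lr * g for g in client_grads]
    weights = np.array(client_sizes, dtype=float)
    weights /= weights.sum()  # normalize so the weighted sum is an average
    return sum(w * cw for w, cw in zip(weights, client_ws))

# Two equally sized clients pulling in different directions
w = np.array([1.0, 1.0])
grads = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
new_w = fedavg_round(w, grads, client_sizes=[100, 100])  # -> [0.95, 0.95]
```

Real FedAvg runs several local SGD epochs per client; collapsing them to one step keeps the server-side averaging, which is the part the paper's adaptive variants replace.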
GANs Can Play Lottery Tickets Too
https://openreview.net/forum?id=1AoMhc_9jER
https://openreview.net/forum?id=1AoMhc_9jER
Xuxi Chen,Zhenyu Zhang,Yongduo Sui,Tianlong Chen
ICLR 2021,Poster
Deep generative adversarial networks (GANs) have gained growing popularity in numerous scenarios, but usually suffer from high parameter complexities for resource-constrained real-world applications. However, the compression of GANs has been less explored. A few works show that heuristically applying compression tech...
https://openreview.net/pdf/f9f13cd41ac8fcc30b5177eac267e3a61229f0e4.pdf
Improving Relational Regularized Autoencoders with Spherical Sliced Fused Gromov Wasserstein
https://openreview.net/forum?id=DiQD7FWL233
https://openreview.net/forum?id=DiQD7FWL233
Khai Nguyen,Son Nguyen,Nhat Ho,Tung Pham,Hung Bui
ICLR 2021,Poster
Relational regularized autoencoder (RAE) is a framework to learn the distribution of data by minimizing a reconstruction loss together with a relational regularization on the prior of latent space. A recent attempt to reduce the inner discrepancy between the prior and aggregated posterior distributions is to incorporat...
https://openreview.net/pdf/d9cff7f7264e7a99f687eeb02f8e8bcf415be338.pdf
Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues
https://openreview.net/forum?id=hPWj1qduVw8
https://openreview.net/forum?id=hPWj1qduVw8
Hung Le,Nancy F. Chen,Steven Hoi
ICLR 2021,Poster
Compared to traditional visual question answering, video-grounded dialogues require additional reasoning over dialogue context to answer questions in a multi-turn setting. Previous approaches to video-grounded dialogues mostly use dialogue context as a simple text input without modelling the inherent information flows ...
https://openreview.net/pdf/c7f4e978ac75833ccb55c20a8c0c0e1e3f25c2f0.pdf
Extreme Memorization via Scale of Initialization
https://openreview.net/forum?id=Z4R1vxLbRLO
https://openreview.net/forum?id=Z4R1vxLbRLO
Harsh Mehta,Ashok Cutkosky,Behnam Neyshabur
ICLR 2021,Poster
We construct an experimental setup in which changing the scale of initialization strongly impacts the implicit regularization induced by SGD, interpolating from good generalization performance to completely memorizing the training set while making little progress on the test set. Moreover, we find that the extent and m...
https://openreview.net/pdf/80a1ad20645ef877ad4166e2c824ab56fda83b7c.pdf
Teaching with Commentaries
https://openreview.net/forum?id=4RbdgBh9gE
https://openreview.net/forum?id=4RbdgBh9gE
Aniruddh Raghu,Maithra Raghu,Simon Kornblith,David Duvenaud,Geoffrey Hinton
ICLR 2021,Poster
Effective training of deep neural networks can be challenging, and there remain many open questions on how to best learn these models. Recently developed methods to improve neural network training examine teaching: providing learned information during the training process to improve downstream model performance. In thi...
https://openreview.net/pdf/f5e220ca55cfe80991bc55b5fde70e5a2e3b7d71.pdf
In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning
https://openreview.net/forum?id=-ODN6SbiUU
https://openreview.net/forum?id=-ODN6SbiUU
Mamshad Nayeem Rizve,Kevin Duarte,Yogesh S Rawat,Mubarak Shah
ICLR 2021,Poster
The recent research in semi-supervised learning (SSL) is mostly dominated by consistency regularization based methods which achieve strong performance. However, they heavily rely on domain-specific data augmentations, which are not easy to generate for all data modalities. Pseudo-labeling (PL) is a general SSL approach...
https://openreview.net/pdf/c979bcaed90f2b14dbf27b5e90fdbb74407f161b.pdf
Into the Wild with AudioScope: Unsupervised Audio-Visual Separation of On-Screen Sounds
https://openreview.net/forum?id=MDsQkFP1Aw
https://openreview.net/forum?id=MDsQkFP1Aw
Efthymios Tzinis,Scott Wisdom,Aren Jansen,Shawn Hershey,Tal Remez,Dan Ellis,John R. Hershey
ICLR 2021,Poster
Recent progress in deep learning has enabled many advances in sound separation and visual scene understanding. However, extracting sound sources which are apparent in natural videos remains an open problem. In this work, we present AudioScope, a novel audio-visual sound separation framework that can be trained without ...
https://openreview.net/pdf/30b613d34d9d0b25c3ed4bf3ba159cd74ba805b3.pdf
Cut out the annotator, keep the cutout: better segmentation with weak supervision
https://openreview.net/forum?id=bjkX6Kzb5H
https://openreview.net/forum?id=bjkX6Kzb5H
Sarah Hooper,Michael Wornow,Ying Hang Seah,Peter Kellman,Hui Xue,Frederic Sala,Curtis Langlotz,Christopher Re
ICLR 2021,Poster
Constructing large, labeled training datasets for segmentation models is an expensive and labor-intensive process. This is a common challenge in machine learning, addressed by methods that require few or no labeled data points such as few-shot learning (FSL) and weakly-supervised learning (WS). Such techniques, however...
https://openreview.net/pdf/483be50ec4cee1c25de217a88795d4d99938cb4a.pdf
CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding
https://openreview.net/forum?id=Ozk9MrX1hvA
https://openreview.net/forum?id=Ozk9MrX1hvA
Yanru Qu,Dinghan Shen,Yelong Shen,Sandra Sajeev,Weizhu Chen,Jiawei Han
ICLR 2021,Poster
Data augmentation has been demonstrated as an effective strategy for improving model generalization and data efficiency. However, due to the discrete nature of natural language, designing label-preserving transformations for text data tends to be more challenging. In this paper, we propose a novel data augmentation fr...
https://openreview.net/pdf/f00c2ea329ae5573307a659b808b791fca635c77.pdf
Deep Learning meets Projective Clustering
https://openreview.net/forum?id=EQfpYwF3-b
https://openreview.net/forum?id=EQfpYwF3-b
Alaa Maalouf,Harry Lang,Daniela Rus,Dan Feldman
ICLR 2021,Poster
A common approach for compressing Natural Language Processing (NLP) networks is to encode the embedding layer as a matrix $A\in\mathbb{R}^{n\times d}$, compute its rank-$j$ approximation $A_j$ via SVD (Singular Value Decomposition), and then factor $A_j$ into a pair of matrices that correspond to smaller fully-connecte...
https://openreview.net/pdf/b30e3cfa2920dfd21d347c92e0226bcb13aab969.pdf
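The SVD-based baseline this abstract describes is standard; a minimal sketch (the conventional rank-$j$ factorization, not the paper's projective-clustering refinement) of compressing an $n \times d$ embedding matrix into two smaller layers:

```python
import numpy as np

def factor_embedding(A, j):
    """Factor an n x d matrix A into left (n x j) and right (j x d) factors
    via rank-j truncated SVD, so left @ right is the best rank-j
    approximation A_j. Storage drops from n*d to (n + d)*j entries."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    left = U[:, :j] * s[:j]   # absorb singular values into the left factor
    right = Vt[:j, :]
    return left, right

# Example: compress a 1000 x 64 embedding matrix to rank 8
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 64))
left, right = factor_embedding(A, 8)
```

By the Eckart–Young theorem, `left @ right` minimizes the Frobenius-norm reconstruction error over all rank-$j$ matrices, which is exactly the criterion the paper argues can be improved upon for NLP embedding layers.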
Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation
https://openreview.net/forum?id=b7g3_ZMHnT0
https://openreview.net/forum?id=b7g3_ZMHnT0
Mrigank Raman,Aaron Chan,Siddhant Agarwal,PeiFeng Wang,Hansen Wang,Sungchul Kim,Ryan Rossi,Handong Zhao,Nedim Lipka,Xiang Ren
ICLR 2021,Poster
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such KG-augmented models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we ...
https://openreview.net/pdf/f507111c61d895cf0cf9f23f8fdd018a9ca5717d.pdf
Knowledge Distillation as Semiparametric Inference
https://openreview.net/forum?id=m4UCf24r0Y
https://openreview.net/forum?id=m4UCf24r0Y
Tri Dao,Govinda M Kamath,Vasilis Syrgkanis,Lester Mackey
ICLR 2021,Poster
A popular approach to model compression is to train an inexpensive student model to mimic the class probabilities of a highly accurate but cumbersome teacher model. Surprisingly, this two-step knowledge distillation process often leads to higher accuracy than training the student directly on labeled data. To explain an...
https://openreview.net/pdf/1ff09cb99e2aa00a9e0a0dfe445b3bc32eee2418.pdf
Meta-Learning with Neural Tangent Kernels
https://openreview.net/forum?id=Ti87Pv5Oc8
https://openreview.net/forum?id=Ti87Pv5Oc8
Yufan Zhou,Zhenyi Wang,Jiayi Xian,Changyou Chen,Jinhui Xu
ICLR 2021,Poster
Model Agnostic Meta-Learning (MAML) has emerged as a standard framework for meta-learning, where a meta-model is learned with the ability of fast adapting to new tasks. However, as a double-looped optimization problem, MAML needs to differentiate through the whole inner-loop optimization path for every outer-loop train...
https://openreview.net/pdf/07382947621a75697286cffb9d20483d2fd8337e.pdf
Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics
https://openreview.net/forum?id=9r30XCjf5Dt
https://openreview.net/forum?id=9r30XCjf5Dt
Yanchao Sun,Da Huo,Furong Huang
ICLR 2021,Poster
Poisoning attacks on Reinforcement Learning (RL) systems could take advantage of an RL algorithm's vulnerabilities and cause the learning to fail. However, prior works on poisoning RL usually either unrealistically assume the attacker knows the underlying Markov Decision Process (MDP), or directly apply the poisoning m...
https://openreview.net/pdf/fb9e902c18157059497d56cdc36770d12b05acf4.pdf
Understanding and Improving Lexical Choice in Non-Autoregressive Translation
https://openreview.net/forum?id=ZTFeSBIX9C
https://openreview.net/forum?id=ZTFeSBIX9C
Liang Ding,Longyue Wang,Xuebo Liu,Derek F. Wong,Dacheng Tao,Zhaopeng Tu
ICLR 2021,Poster
Knowledge distillation (KD) is essential for training non-autoregressive translation (NAT) models by reducing the complexity of the raw data with an autoregressive teacher model. In this study, we empirically show that as a side effect of this training, the lexical choice errors on low-frequency words are propagated to...
https://openreview.net/pdf/ba4c60d18c1a69639e2d9988925bcd11396ff936.pdf
Layer-adaptive Sparsity for the Magnitude-based Pruning
https://openreview.net/forum?id=H6ATjJ0TKdf
https://openreview.net/forum?id=H6ATjJ0TKdf
Jaeho Lee,Sejun Park,Sangwoo Mo,Sungsoo Ahn,Jinwoo Shin
ICLR 2021,Poster
Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, a simple magnitude-based pruning achieves state-of-the-art tradeoff between sparsity and performance. However, without a clear consensus on "how to choose," the layerwise sparsities are mostly selected algorithm-by-a...
https://openreview.net/pdf/6c6e88f6354b6fb0bc2955ecb9e518ca2f65432f.pdf
Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning
https://openreview.net/forum?id=n1HD8M6WGn
https://openreview.net/forum?id=n1HD8M6WGn
Xuebo Liu,Longyue Wang,Derek F. Wong,Liang Ding,Lidia S. Chao,Zhaopeng Tu
ICLR 2021,Poster
Encoder layer fusion (EncoderFusion) is a technique to fuse all the encoder layers (instead of the uppermost layer) for sequence-to-sequence (Seq2Seq) models, which has proven effective on various NLP tasks. However, it is still not entirely clear why and when EncoderFusion should work. In this paper, our main contribu...
https://openreview.net/pdf/aabc62bd94feebbc116e4d479e55dd7b0d856959.pdf
SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization
https://openreview.net/forum?id=-M0QkvBGTTq
https://openreview.net/forum?id=-M0QkvBGTTq
A F M Shahab Uddin,Mst. Sirazam Monira,Wheemyung Shin,TaeChoong Chung,Sung-Ho Bae
ICLR 2021,Poster
Advanced data augmentation strategies have widely been studied to improve the generalization ability of deep learning models. Regional dropout is one of the popular solutions that guides the model to focus on less discriminative parts by randomly removing image regions, resulting in improved regularization. However, su...
https://openreview.net/pdf/05e902b237602356704a807abbdec8f2a5ab6414.pdf
Are wider nets better given the same number of parameters?
https://openreview.net/forum?id=_zx8Oka09eF
https://openreview.net/forum?id=_zx8Oka09eF
Anna Golubeva,Guy Gur-Ari,Behnam Neyshabur
ICLR 2021,Poster
Empirical studies demonstrate that the performance of neural networks improves with increasing number of parameters. In most of these studies, the number of parameters is increased by increasing the network width. This begs the question: Is the observed improvement due to the larger number of parameters, or is it due t...
https://openreview.net/pdf/dc2756fb031ed3eea85d3a93b530a6c1f39d81d5.pdf
Discovering Non-monotonic Autoregressive Orderings with Variational Inference
https://openreview.net/forum?id=jP1vTH3inC
https://openreview.net/forum?id=jP1vTH3inC
Xuanlin Li,Brandon Trabucco,Dong Huk Park,Michael Luo,Sheng Shen,Trevor Darrell,Yang Gao
ICLR 2021,Poster
The predominant approach for language modeling is to encode a sequence of tokens from left to right, but this eliminates a source of information: the order by which the sequence was naturally generated. One strategy to recover this information is to decode both the content and ordering of tokens. Some prior work superv...
https://openreview.net/pdf/a9bcdb59d5a61ce55316a3ef34787b838b592a4a.pdf
CompOFA – Compound Once-For-All Networks for Faster Multi-Platform Deployment
https://openreview.net/forum?id=IgIk8RRT-Z
https://openreview.net/forum?id=IgIk8RRT-Z
Manas Sahni,Shreya Varshini,Alind Khare,Alexey Tumanov
ICLR 2021,Poster
The emergence of CNNs in mainstream deployment has necessitated methods to design and train efficient architectures tailored to maximize the accuracy under diverse hardware and latency constraints. To scale these resource-intensive tasks with an increasing number of deployment targets, Once-For-All (OFA) proposed an ap...
https://openreview.net/pdf/cd9ed036121abc86a3630081eb6c6264788c8194.pdf
Representing Partial Programs with Blended Abstract Semantics
https://openreview.net/forum?id=mCtadqIxOJ
https://openreview.net/forum?id=mCtadqIxOJ
Maxwell Nye,Yewen Pu,Matthew Bowers,Jacob Andreas,Joshua B. Tenenbaum,Armando Solar-Lezama
ICLR 2021,Poster
Synthesizing programs from examples requires searching over a vast, combinatorial space of possible programs. In this search process, a key challenge is representing the behavior of a partially written program before it can be executed, to judge if it is on the right track and predict where to search next. We introduce...
https://openreview.net/pdf/8f274ee0e7de9e855efc59efc1bf500d94b68773.pdf
PolarNet: Learning to Optimize Polar Keypoints for Keypoint Based Object Detection
https://openreview.net/forum?id=TYXs_y84xRj
https://openreview.net/forum?id=TYXs_y84xRj
Wu Xiongwei,Doyen Sahoo,Steven HOI
ICLR 2021,Poster
A variety of anchor-free object detectors have been actively proposed as possible alternatives to the mainstream anchor-based detectors that often rely on complicated design of anchor boxes. Despite achieving promising performance on par with anchor-based detectors, the existing anchor-free detectors such as FCOS or Ce...
https://openreview.net/pdf/d08ca7f6d8b412afb77ae32d7522a517e41f4741.pdf
Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective
https://openreview.net/forum?id=Cnon5ezMHtu
https://openreview.net/forum?id=Cnon5ezMHtu
Wuyang Chen,Xinyu Gong,Zhangyang Wang
ICLR 2021,Poster
Neural Architecture Search (NAS) has been explosively studied to automate the discovery of top-performer neural networks. Current works require heavy training of supernet or intensive architecture evaluations, thus suffering from heavy resource consumption and often incurring search bias due to truncated training or ap...
https://openreview.net/pdf/097fe3785855414961469f27465c798144ea4b9e.pdf
Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding
https://openreview.net/forum?id=8qDwejCuCN
https://openreview.net/forum?id=8qDwejCuCN
Sana Tonekaboni,Danny Eytan,Anna Goldenberg
ICLR 2021,Poster
Time series are often complex and rich in information but sparsely labeled and therefore challenging to model. In this paper, we propose a self-supervised framework for learning robust and generalizable representations for time series. Our approach, called Temporal Neighborhood Coding (TNC), takes advantage of the loca...
https://openreview.net/pdf/0e06b6edae016465a8d856db9d43ae54b938746a.pdf
Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
https://openreview.net/forum?id=KJNcAkY8tY4
https://openreview.net/forum?id=KJNcAkY8tY4
Thao Nguyen,Maithra Raghu,Simon Kornblith
ICLR 2021,Poster
A key factor in the success of deep neural networks is the ability to scale models to improve performance by varying the architecture depth and width. This simple property of neural network design has resulted in highly effective architectures for a variety of tasks. Nevertheless, there is limited understanding of effe...
https://openreview.net/pdf/cb12ae8308060f86d8970f514c2a0e8a33d13c22.pdf
Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
https://openreview.net/forum?id=cu7IUiOhujH
https://openreview.net/forum?id=cu7IUiOhujH
Beliz Gunel,Jingfei Du,Alexis Conneau,Veselin Stoyanov
ICLR 2021,Poster
State-of-the-art natural language understanding classification models follow two stages: pre-training a large language model on an auxiliary task, and then fine-tuning the model on a task-specific labeled dataset using cross-entropy loss. However, the cross-entropy loss has several shortcomings that can lead to sub-opt...
https://openreview.net/pdf/02dcbc0bf1ebd53ed5b69a2ca9aa27b3d3c53893.pdf
Early Stopping in Deep Networks: Double Descent and How to Eliminate it
https://openreview.net/forum?id=tlV90jvZbw
https://openreview.net/forum?id=tlV90jvZbw
Reinhard Heckel,Fatih Furkan Yilmaz
ICLR 2021,Poster
Over-parameterized models, such as large deep networks, often exhibit a double descent phenomenon, where, as a function of model size, error first decreases, increases, and decreases at last. This intriguing double descent behavior also occurs as a function of training epochs and has been conjectured to arise because tra...
https://openreview.net/pdf/eaf02d8eb8ad9232e0b10b405cf104b4547de602.pdf
Contrastive Syn-to-Real Generalization
https://openreview.net/forum?id=F8whUO8HNbP
https://openreview.net/forum?id=F8whUO8HNbP
Wuyang Chen,Zhiding Yu,Shalini De Mello,Sifei Liu,Jose M. Alvarez,Zhangyang Wang,Anima Anandkumar
ICLR 2021,Poster
Training on synthetic data can be beneficial for label or data-scarce scenarios. However, synthetically trained models often suffer from poor generalization in real domains due to domain gaps. In this work, we make a key observation that the diversity of the learned feature embeddings plays an important role in the gen...
https://openreview.net/pdf/a7ade6e78d9e1ddd5b9584676f313379bbfbce16.pdf
Benchmarks for Deep Off-Policy Evaluation
https://openreview.net/forum?id=kWSeGEeHvF8
https://openreview.net/forum?id=kWSeGEeHvF8
Justin Fu,Mohammad Norouzi,Ofir Nachum,George Tucker,ziyu wang,Alexander Novikov,Mengjiao Yang,Michael R Zhang,Yutian Chen,Aviral Kumar,Cosmin Paduraru,Sergey Levine,Thomas Paine
ICLR 2021,Poster
Off-policy evaluation (OPE) holds the promise of being able to leverage large, offline datasets for both evaluating and selecting complex policies for decision making. The ability to learn offline is particularly important in many real-world domains, such as in healthcare, recommender systems, or robotics, where online...
https://openreview.net/pdf/3a90850ebecc25b81a9534180c75842a2b672812.pdf
Pre-training Text-to-Text Transformers for Concept-centric Common Sense
https://openreview.net/forum?id=3k20LAiHYL2
https://openreview.net/forum?id=3k20LAiHYL2
Wangchunshu Zhou,Dong-Ho Lee,Ravi Kiran Selvam,Seyeon Lee,Xiang Ren
ICLR 2021,Poster
Pretrained language models (PTLM) have achieved impressive results in a range of natural language understanding (NLU) and generation (NLG) tasks that require a syntactic and semantic understanding of the text. However, current pre-training objectives such as masked token prediction (for BERT-style PTLMs) and masked spa...
https://openreview.net/pdf/30f24a224a3d4133f7da640c76644f91a3d41f0a.pdf
Combining Label Propagation and Simple Models out-performs Graph Neural Networks
https://openreview.net/forum?id=8E1-f3VhX1o
https://openreview.net/forum?id=8E1-f3VhX1o
Qian Huang,Horace He,Abhay Singh,Ser-Nam Lim,Austin Benson
ICLR 2021,Poster
Graph Neural Networks (GNNs) are a predominant technique for learning over graphs. However, there is relatively little understanding of why GNNs are successful in practice and whether they are necessary for good performance. Here, we show that for many standard transductive node classification benchmarks, we can exceed...
https://openreview.net/pdf/7c1b32ea12a84f37e53a2145fc40a23c3642c2e8.pdf
Learning Long-term Visual Dynamics with Region Proposal Interaction Networks
https://openreview.net/forum?id=_X_4Akcd8Re
https://openreview.net/forum?id=_X_4Akcd8Re
Haozhi Qi,Xiaolong Wang,Deepak Pathak,Yi Ma,Jitendra Malik
ICLR 2021,Poster
Learning long-term dynamics models is the key to understanding physical common sense. Most existing approaches on learning dynamics from visual input sidestep long-term predictions by resorting to rapid re-planning with short-term models. This not only requires such models to be super accurate but also limits them only...
https://openreview.net/pdf/5da931176d4a8bcb421d7a6a087fce6475f7c406.pdf
Chaos of Learning Beyond Zero-sum and Coordination via Game Decompositions
https://openreview.net/forum?id=a3wKPZpGtCF
https://openreview.net/forum?id=a3wKPZpGtCF
Yun Kuen Cheung,Yixin Tao
ICLR 2021,Poster
It is of primary interest for ML to understand how agents learn and interact dynamically in competitive environments and games (e.g. GANs). But this has been a difficult task, as irregular behaviors are commonly observed in such systems. This can be explained theoretically, for instance, by the works of Cheung and Pili...
https://openreview.net/pdf/eb21a8cbc05cb76a2135d38e48ebb0d0192bb4d5.pdf
Control-Aware Representations for Model-based Reinforcement Learning
https://openreview.net/forum?id=dgd4EJqsbW5
https://openreview.net/forum?id=dgd4EJqsbW5
Brandon Cui,Yinlam Chow,Mohammad Ghavamzadeh
ICLR 2021,Poster
A major challenge in modern reinforcement learning (RL) is efficient control of dynamical systems from high-dimensional sensory observations. Learning controllable embedding (LCE) is a promising approach that addresses this challenge by embedding the observations into a lower-dimensional latent space, estimating the ...
https://openreview.net/pdf/f0d80d862dab33f2ed69b44a0f14fda119006af8.pdf
Provably robust classification of adversarial examples with detection
https://openreview.net/forum?id=sRA5rLNpmQc
https://openreview.net/forum?id=sRA5rLNpmQc
Fatemeh Sheikholeslami,Ali Lotfi,J Zico Kolter
ICLR 2021,Poster
Adversarial attacks against deep networks can be defended against either by building robust classifiers or, by creating classifiers that can \emph{detect} the presence of adversarial perturbations. Although it may intuitively seem easier to simply detect attacks rather than build a robust classifier, this has not bour...
https://openreview.net/pdf/f8635fcc4d33b492dbd371448f02d31878d69223.pdf
Return-Based Contrastive Representation Learning for Reinforcement Learning
https://openreview.net/forum?id=_TM6rT7tXke
https://openreview.net/forum?id=_TM6rT7tXke
Guoqing Liu,Chuheng Zhang,Li Zhao,Tao Qin,Jinhua Zhu,Li Jian,Nenghai Yu,Tie-Yan Liu
ICLR 2021,Poster
Recently, various auxiliary tasks have been proposed to accelerate representation learning and improve sample efficiency in deep reinforcement learning (RL). However, existing auxiliary tasks do not take the characteristics of RL problems into consideration and are unsupervised. By leveraging returns, the most importan...
https://openreview.net/pdf/da82358af2f47721465fefc3dffd1bd3f3f2c16e.pdf
Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification
https://openreview.net/forum?id=ijJZbomCJIm
https://openreview.net/forum?id=ijJZbomCJIm
Francisco Utrera,Evan Kravitz,N. Benjamin Erichson,Rajiv Khanna,Michael W. Mahoney
ICLR 2021,Poster
Transfer learning has emerged as a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains. This process consists of taking a neural network pre-trained on a large feature-rich source dataset, freezing the early layers that encode essential generic image properties, ...
https://openreview.net/pdf/566e8902b7a749d4525ff5f0933ffdae3a9bec39.pdf
Learning Structural Edits via Incremental Tree Transformations
https://openreview.net/forum?id=v9hAX77--cZ
https://openreview.net/forum?id=v9hAX77--cZ
Ziyu Yao,Frank F. Xu,Pengcheng Yin,Huan Sun,Graham Neubig
ICLR 2021,Poster
While most neural generative models generate outputs in a single pass, the human creative process is usually one of iterative building and refinement. Recent work has proposed models of editing processes, but these mostly focus on editing sequential data and/or only model a single editing pass. In this paper, we presen...
https://openreview.net/pdf/1fa89cfde10c367bd3c970a467f69d1e81ef7f40.pdf
Cross-Attentional Audio-Visual Fusion for Weakly-Supervised Action Localization
https://openreview.net/forum?id=hWr3e3r-oH5
https://openreview.net/forum?id=hWr3e3r-oH5
Jun-Tae Lee,Mihir Jain,Hyoungwoo Park,Sungrack Yun
ICLR 2021,Poster
Temporally localizing actions in videos is one of the key components for video understanding. Learning from weakly-labeled data is seen as a potential solution towards avoiding expensive frame-level annotations. Different from other works which only depend on visual-modality, we propose to learn richer audiovisual repr...
https://openreview.net/pdf/2d9210844c74d2a119c3878f1e6c2475a0d3af86.pdf
Improved Estimation of Concentration Under $\ell_p$-Norm Distance Metrics Using Half Spaces
https://openreview.net/forum?id=BUlyHkzjgmA
https://openreview.net/forum?id=BUlyHkzjgmA
Jack Prescott,Xiao Zhang,David Evans
ICLR 2021,Poster
Concentration of measure has been argued to be the fundamental cause of adversarial vulnerability. Mahloujifar et al. (2019) presented an empirical way to measure the concentration of a data distribution using samples, and employed it to find lower bounds on intrinsic robustness for several benchmark datasets. However,...
https://openreview.net/pdf/5d9950ac35e5e85a527dacf6286c7b9c148005bd.pdf
Beyond Categorical Label Representations for Image Classification
https://openreview.net/forum?id=MyHwDabUHZm
https://openreview.net/forum?id=MyHwDabUHZm
Boyuan Chen,Yu Li,Sunand Raghupathi,Hod Lipson
ICLR 2021,Poster
We find that the way we choose to represent data labels can have a profound effect on the quality of trained models. For example, training an image classifier to regress audio labels rather than traditional categorical probabilities produces a more reliable classification. This result is surprising, considering that au...
https://openreview.net/pdf/14e605cccc7af2ba01dc51b23e624ff89dbeff7c.pdf
Fantastic Four: Differentiable and Efficient Bounds on Singular Values of Convolution Layers
https://openreview.net/forum?id=JCRblSgs34Z
https://openreview.net/forum?id=JCRblSgs34Z
Sahil Singla,Soheil Feizi
ICLR 2021,Poster
In deep neural networks, the spectral norm of the Jacobian of a layer bounds the factor by which the norm of a signal changes during forward/backward propagation. Spectral norm regularizations have been shown to improve generalization, robustness and optimization of deep learning methods. Existing methods to compute th...
https://openreview.net/pdf/6c7018c5dcc64de7e42204d28cf786cb4a596c69.pdf
Accelerating Convergence of Replica Exchange Stochastic Gradient MCMC via Variance Reduction
https://openreview.net/forum?id=iOnhIy-a-0n
https://openreview.net/forum?id=iOnhIy-a-0n
Wei Deng,Qi Feng,Georgios P. Karagiannis,Guang Lin,Faming Liang
ICLR 2021,Poster
Replica exchange stochastic gradient Langevin dynamics (reSGLD) has shown promise in accelerating the convergence in non-convex learning; however, an excessively large correction for avoiding biases from noisy energy estimators has limited the potential of the acceleration. To address this issue, we study the variance ...
https://openreview.net/pdf/c077f043fe1bbbb4d720b3fb0fbe7afe580d8374.pdf
IsarStep: a Benchmark for High-level Mathematical Reasoning
https://openreview.net/forum?id=Pzj6fzU6wkj
https://openreview.net/forum?id=Pzj6fzU6wkj
Wenda Li,Lei Yu,Yuhuai Wu,Lawrence C. Paulson
ICLR 2021,Poster
A well-defined benchmark is essential for measuring and accelerating research progress of machine learning models. In this paper, we present a benchmark for high-level mathematical reasoning and study the reasoning capabilities of neural sequence-to-sequence models. We build a non-synthetic dataset from the largest rep...
https://openreview.net/pdf/c9fb7dd359102a00d8676684bd704c54961a5285.pdf
Factorizing Declarative and Procedural Knowledge in Structured, Dynamical Environments
https://openreview.net/forum?id=VVdmjgu7pKM
https://openreview.net/forum?id=VVdmjgu7pKM
Anirudh Goyal,Alex Lamb,Phanideep Gampa,Philippe Beaudoin,Charles Blundell,Sergey Levine,Yoshua Bengio,Michael Curtis Mozer
ICLR 2021,Poster
Modeling a structured, dynamic environment like a video game requires keeping track of the objects and their states (declarative knowledge) as well as predicting how objects behave (procedural knowledge). Black-box models with a monolithic hidden state often fail to apply procedural knowledge consistently and uniformly...
https://openreview.net/pdf/927b511da0f53c9d48b5dbe33f31772d15ec97ca.pdf
Provable Rich Observation Reinforcement Learning with Combinatorial Latent States
https://openreview.net/forum?id=hx1IXFHAw7R
https://openreview.net/forum?id=hx1IXFHAw7R
Dipendra Misra,Qinghua Liu,Chi Jin,John Langford
ICLR 2021,Poster
We propose a novel setting for reinforcement learning that combines two common real-world difficulties: presence of observations (such as camera images) and factored states (such as location of objects). In our setting, the agent receives observations generated stochastically from a "latent" factored state. These obser...
https://openreview.net/pdf/6a01a542edf09482d75550c673ddcb462727111a.pdf
LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition
https://openreview.net/forum?id=hJmtwocEqzc
https://openreview.net/forum?id=hJmtwocEqzc
Valeriia Cherepanova,Micah Goldblum,Harrison Foley,Shiyuan Duan,John P Dickerson,Gavin Taylor,Tom Goldstein
ICLR 2021,Poster
Facial recognition systems are increasingly deployed by private corporations, government agencies, and contractors for consumer services and mass surveillance programs alike. These systems are typically built by scraping social media profiles for user images. Adversarial perturbations have been proposed for bypassing...
https://openreview.net/pdf/33f4bbc102bd62362928fed6df483a1a2d5ef1ba.pdf
Neural Networks for Learning Counterfactual G-Invariances from Single Environments
https://openreview.net/forum?id=7t1FcJUWhi3
https://openreview.net/forum?id=7t1FcJUWhi3
S Chandra Mouli,Bruno Ribeiro
ICLR 2021,Poster
Despite, or maybe because of, their astonishing capacity to fit data, neural networks are believed to have difficulties extrapolating beyond training data distribution. This work shows that, for extrapolations based on finite transformation groups, a model’s inability to extrapolate is unrelated to its capacity. Rather...
https://openreview.net/pdf/f68c8dcf4d107a320df0e519d96021379ed46828.pdf
Simple Spectral Graph Convolution
https://openreview.net/forum?id=CYO5T-YjWZV
https://openreview.net/forum?id=CYO5T-YjWZV
Hao Zhu,Piotr Koniusz
ICLR 2021,Poster
Graph Convolutional Networks (GCNs) are leading methods for learning graph representations. However, without specially designed architectures, the performance of GCNs degrades quickly with increased depth. As the aggregated neighborhood size and neural network depth are two completely orthogonal aspects of graph repres...
https://openreview.net/pdf/9015cbfb15f31fdf7835279414de3b27ef3b0c01.pdf
Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics
https://openreview.net/forum?id=LhY8QdUGSuw
https://openreview.net/forum?id=LhY8QdUGSuw
Vinay Venkatesh Ramasesh,Ethan Dyer,Maithra Raghu
ICLR 2021,Poster
Catastrophic forgetting is a recurring challenge to developing versatile deep learning models. Despite its ubiquity, there is limited understanding of its connections to neural network (hidden) representations and task semantics. In this paper, we address this important knowledge gap. Through quantitative analysis of n...
https://openreview.net/pdf/d11b4b8cdf4b9f940c435a7b3c50cf2790aa071d.pdf
On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning
https://openreview.net/forum?id=o81ZyBCojoA
https://openreview.net/forum?id=o81ZyBCojoA
Ren Wang,Kaidi Xu,Sijia Liu,Pin-Yu Chen,Tsui-Wei Weng,Chuang Gan,Meng Wang
ICLR 2021,Poster
Model-agnostic meta-learning (MAML) has emerged as one of the most successful meta-learning techniques in few-shot learning. It enables us to learn a $\textit{meta-initialization}$ of model parameters (that we call $\textit{meta-model}$) to rapidly adapt to new tasks using a small amount of labeled training data. Despi...
https://openreview.net/pdf/c29f970186b2b2658cd52eea7aac2b5266c649f4.pdf
The geometry of integration in text classification RNNs
https://openreview.net/forum?id=42kiJ7n_8xO
https://openreview.net/forum?id=42kiJ7n_8xO
Kyle Aitken,Vinay Venkatesh Ramasesh,Ankush Garg,Yuan Cao,David Sussillo,Niru Maheswaranathan
ICLR 2021,Poster
Despite the widespread application of recurrent neural networks (RNNs), a unified understanding of how RNNs solve particular tasks remains elusive. In particular, it is unclear what dynamical patterns arise in trained RNNs, and how those patterns depend on the training dataset or task. This work addresses these ques...
https://openreview.net/pdf/bc724aa9a5ce537c4e5005d963641086e1e41bb3.pdf
Towards Robust Neural Networks via Close-loop Control
https://openreview.net/forum?id=2AL06y9cDE-
https://openreview.net/forum?id=2AL06y9cDE-
Zhuotong Chen,Qianxiao Li,Zheng Zhang
ICLR 2021,Poster
Despite their success in massive engineering applications, deep neural networks are vulnerable to various perturbations due to their black-box nature. Recent studies have shown that a deep neural network can misclassify the data even if the input data is perturbed by an imperceptible amount. In this paper, we address the ...
https://openreview.net/pdf/596019eba6149f7c83bd7dc648809e2100b337d8.pdf
Projected Latent Markov Chain Monte Carlo: Conditional Sampling of Normalizing Flows
https://openreview.net/forum?id=MBpHUFrcG2x
https://openreview.net/forum?id=MBpHUFrcG2x
Chris Cannella,Mohammadreza Soltani,Vahid Tarokh
ICLR 2021,Poster
We introduce Projected Latent Markov Chain Monte Carlo (PL-MCMC), a technique for sampling from the exact conditional distributions learned by normalizing flows. As a conditional sampling method, PL-MCMC enables Monte Carlo Expectation Maximization (MC-EM) training of normalizing flows from incomplete data. Through exp...
https://openreview.net/pdf/32946e80b74b4bb7d6f25d74cb773ac68b9b4a36.pdf
Understanding the failure modes of out-of-distribution generalization
https://openreview.net/forum?id=fSTD6NFIW_b
https://openreview.net/forum?id=fSTD6NFIW_b
Vaishnavh Nagarajan,Anders Andreassen,Behnam Neyshabur
ICLR 2021,Poster
Empirical studies suggest that machine learning models often rely on features, such as the background, that may be spuriously correlated with the label only during training time, resulting in poor accuracy during test-time. In this work, we identify the fundamental factors that give rise to this behavior, by explaining...
https://openreview.net/pdf/2790b3f2ccfda08399e0549ba75e2da20bd2d1b1.pdf
Usable Information and Evolution of Optimal Representations During Training
https://openreview.net/forum?id=p8agn6bmTbr
https://openreview.net/forum?id=p8agn6bmTbr
Michael Kleinman,Alessandro Achille,Daksh Idnani,Jonathan Kao
ICLR 2021,Poster
We introduce a notion of usable information contained in the representation learned by a deep network, and use it to study how optimal representations for the task emerge during training. We show that the implicit regularization coming from training with Stochastic Gradient Descent with a high learning-rate and small b...
https://openreview.net/pdf/ecfb28e9a1edfd9c52876b78d81632b816d662b2.pdf
Adaptive Extra-Gradient Methods for Min-Max Optimization and Games
https://openreview.net/forum?id=R0a0kFI3dJx
https://openreview.net/forum?id=R0a0kFI3dJx
Kimon Antonakopoulos,Veronica Belmega,Panayotis Mertikopoulos
ICLR 2021,Poster
We present a new family of min-max optimization algorithms that automatically exploit the geometry of the gradient data observed at earlier iterations to perform more informative extra-gradient steps in later ones. Thanks to this adaptation mechanism, the proposed method automatically detects whether the problem is smo...
https://openreview.net/pdf/b85ffd0f421c8180b9a511a825ac3f10fc824b9b.pdf
Shapley explainability on the data manifold
https://openreview.net/forum?id=OPyWRrcjVQw
https://openreview.net/forum?id=OPyWRrcjVQw
Christopher Frye,Damien de Mijolla,Tom Begley,Laurence Cowton,Megan Stanley,Ilya Feige
ICLR 2021,Poster
Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions. The Shapley framework for explainability attributes a model’s predictions to its input features in a mathematically principled and model-agnostic way. However, general implementations of S...
https://openreview.net/pdf/ed871c78bdc2768918e12775dd57dff6b36e4c24.pdf
Reinforcement Learning with Random Delays
https://openreview.net/forum?id=QFYnKlBJYR
https://openreview.net/forum?id=QFYnKlBJYR
Yann Bouteiller,Simon Ramstedt,Giovanni Beltrame,Christopher Pal,Jonathan Binas
ICLR 2021,Poster
Action and observation delays commonly occur in many Reinforcement Learning applications, such as remote control scenarios. We study the anatomy of randomly delayed environments, and show that partially resampling trajectory fragments in hindsight allows for off-policy multi-step value estimation. We apply this princip...
https://openreview.net/pdf/744fcf663d9a7335f90ed1ec81d97b3661166e56.pdf
Shape or Texture: Understanding Discriminative Features in CNNs
https://openreview.net/forum?id=NcFEZOi-rLa
https://openreview.net/forum?id=NcFEZOi-rLa
Md Amirul Islam,Matthew Kowal,Patrick Esser,Sen Jia,Björn Ommer,Konstantinos G. Derpanis,Neil Bruce
ICLR 2021,Poster
Contrasting the previous evidence that neurons in the later layers of a Convolutional Neural Network (CNN) respond to complex object shapes, recent studies have shown that CNNs actually exhibit a 'texture bias': given an image with both texture and shape cues (e.g., a stylized image), a CNN is biased towards predicting...
https://openreview.net/pdf/bec98c0c8f3a77adc5822b10b5fd4273ff383136.pdf
NOVAS: Non-convex Optimization via Adaptive Stochastic Search for End-to-end Learning and Control
https://openreview.net/forum?id=Iw4ZGwenbXf
https://openreview.net/forum?id=Iw4ZGwenbXf
Ioannis Exarchos,Marcus Aloysius Pereira,Ziyi Wang,Evangelos Theodorou
ICLR 2021,Poster
In this work we propose the use of adaptive stochastic search as a building block for general, non-convex optimization operations within deep neural network architectures. Specifically, for an objective function located at some layer in the network and parameterized by some network parameters, we employ adaptive stocha...
https://openreview.net/pdf/19f093001a03a82a092d19740971a45fff9f47a8.pdf
Negative Data Augmentation
https://openreview.net/forum?id=Ovp8dvB8IBH
https://openreview.net/forum?id=Ovp8dvB8IBH
Abhishek Sinha,Kumar Ayush,Jiaming Song,Burak Uzkent,Hongxia Jin,Stefano Ermon
ICLR 2021,Poster
Data augmentation is often used to enlarge datasets with synthetic samples generated in accordance with the underlying data distribution. To enable a wider range of augmentations, we explore negative data augmentation strategies (NDA) that intentionally create out-of-distribution samples. We show that such negative out...
https://openreview.net/pdf/3f45494e997f0d54f6dc5dac083f571047ee0c92.pdf
Molecule Optimization by Explainable Evolution
https://openreview.net/forum?id=jHefDGsorp5
https://openreview.net/forum?id=jHefDGsorp5
Binghong Chen,Tianzhe Wang,Chengtao Li,Hanjun Dai,Le Song
ICLR 2021,Poster
Optimizing molecules for desired properties is a fundamental yet challenging task in chemistry, material science, and drug discovery. This paper develops a novel algorithm for optimizing molecular properties via an Expectation-Maximization (EM) like explainable evolutionary process. The algorithm is designed to mimic h...
https://openreview.net/pdf/885e03a6e7ca9e559b96bce0daf001f769f98de4.pdf
Estimating Lipschitz constants of monotone deep equilibrium models
https://openreview.net/forum?id=VcB4QkSfyO
https://openreview.net/forum?id=VcB4QkSfyO
Chirag Pabbaraju,Ezra Winston,J Zico Kolter
ICLR 2021,Poster
Several methods have been proposed in recent years to provide bounds on the Lipschitz constants of deep networks, which can be used to provide robustness guarantees, generalization bounds, and characterize the smoothness of decision boundaries. However, existing bounds get substantially weaker with increasing depth of ...
https://openreview.net/pdf/62c8f87a22f20b30e037ebb6a618d34b540f0e93.pdf
Implicit Gradient Regularization
https://openreview.net/forum?id=3q5IqUrkcF
https://openreview.net/forum?id=3q5IqUrkcF
David Barrett,Benoit Dherin
ICLR 2021,Poster
Gradient descent can be surprisingly good at optimizing deep neural networks without overfitting and without explicit regularization. We find that the discrete steps of gradient descent implicitly regularize models by penalizing gradient descent trajectories that have large loss gradients. We call this Implicit Gradien...
https://openreview.net/pdf/5fac8e016a2873ec230214a072ff1cc0307e64f7.pdf
Faster Binary Embeddings for Preserving Euclidean Distances
https://openreview.net/forum?id=YCXrx6rRCXO
https://openreview.net/forum?id=YCXrx6rRCXO
Jinjie Zhang,Rayan Saab
ICLR 2021,Poster
We propose a fast, distance-preserving, binary embedding algorithm to transform a high-dimensional dataset $\mathcal{T}\subseteq\mathbb{R}^n$ into binary sequences in the cube $\{\pm 1\}^m$. When $\mathcal{T}$ consists of well-spread (i.e., non-sparse) vectors, our embedding method applies a stable noise-shaping quanti...
https://openreview.net/pdf/1eba3bf99a991505d994341a4156be4959947011.pdf
Scalable Transfer Learning with Expert Models
https://openreview.net/forum?id=23ZjUGpjcc
https://openreview.net/forum?id=23ZjUGpjcc
Joan Puigcerver,Carlos Riquelme Ruiz,Basil Mustafa,Cedric Renggli,André Susano Pinto,Sylvain Gelly,Daniel Keysers,Neil Houlsby
ICLR 2021,Poster
Transfer of pre-trained representations can improve sample efficiency and reduce computational requirements for new tasks. However, representations used for transfer are usually generic, and are not tailored to a particular distribution of downstream tasks. We explore the use of expert representations for transfer with...
https://openreview.net/pdf/659e2338755eb562f4d6d679d55eb83e71fa5007.pdf