Dataset schema (each record below lists these twelve columns in order, one line per field):

title           string, length 13–150
url             string, length 97
authors         string, length 8–467
detail_url      string, length 97
tags            string, 1 class
AuthorFeedback  string, length 102
Bibtex          string, length 53–54
MetaReview      string, length 99
Paper           string, length 93
Review          string, length 95
Supplemental    string, length 100
abstract        string, length 53–2k
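The records that follow are a flat dump: the twelve fields above repeat once per paper. A minimal sketch of how such a dump could be regrouped into one dict per paper (the field order comes from the header; the one-line-per-field grouping is an assumption about this export, not part of the dataset itself):

```python
# Column order taken from the schema header above.
FIELDS = ["title", "url", "authors", "detail_url", "tags",
          "AuthorFeedback", "Bibtex", "MetaReview", "Paper",
          "Review", "Supplemental", "abstract"]

def parse_records(lines):
    """Group a flat list of field lines into dicts, one per record.

    Assumes exactly one line per field and complete records; a
    trailing partial record (as at the end of this dump) is dropped.
    """
    n = len(FIELDS)
    for i in range(0, len(lines) - n + 1, n):
        yield dict(zip(FIELDS, lines[i:i + n]))

# Example with two dummy records (24 lines):
sample = [f"value{k}" for k in range(24)]
records = list(parse_records(sample))
```

A real loader would also strip whitespace and normalize the truncated abstracts, but the grouping logic is the essential step.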
Practical Quasi-Newton Methods for Training Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/192fc044e74dffea144f9ac5dc9f3395-Abstract.html
Donald Goldfarb, Yi Ren, Achraf Bahamou
https://papers.nips.cc/paper_files/paper/2020/hash/192fc044e74dffea144f9ac5dc9f3395-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/192fc044e74dffea144f9ac5dc9f3395-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9925-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/192fc044e74dffea144f9ac5dc9f3395-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/192fc044e74dffea144f9ac5dc9f3395-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/192fc044e74dffea144f9ac5dc9f3395-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/192fc044e74dffea144f9ac5dc9f3395-Supplemental.pdf
We consider the development of practical stochastic quasi-Newton, and in particular Kronecker-factored block diagonal BFGS and L-BFGS methods, for training deep neural networks (DNNs). In DNN training, the number of variables and components of the gradient n is often of the order of tens of millions and the Hessian ha...
Approximation Based Variance Reduction for Reparameterization Gradients
https://papers.nips.cc/paper_files/paper/2020/hash/193002e668758ea9762904da1a22337c-Abstract.html
Tomas Geffner, Justin Domke
https://papers.nips.cc/paper_files/paper/2020/hash/193002e668758ea9762904da1a22337c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/193002e668758ea9762904da1a22337c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9926-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/193002e668758ea9762904da1a22337c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/193002e668758ea9762904da1a22337c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/193002e668758ea9762904da1a22337c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/193002e668758ea9762904da1a22337c-Supplemental.pdf
Flexible variational distributions improve variational inference but are harder to optimize. In this work we present a control variate that is applicable for any reparameterizable distribution with known mean and covariance, e.g. Gaussians with any covariance structure. The control variate is based on a quadratic appro...
Inference Stage Optimization for Cross-scenario 3D Human Pose Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/1943102704f8f8f3302c2b730728e023-Abstract.html
Jianfeng Zhang, Xuecheng Nie, Jiashi Feng
https://papers.nips.cc/paper_files/paper/2020/hash/1943102704f8f8f3302c2b730728e023-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1943102704f8f8f3302c2b730728e023-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9927-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1943102704f8f8f3302c2b730728e023-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1943102704f8f8f3302c2b730728e023-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1943102704f8f8f3302c2b730728e023-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1943102704f8f8f3302c2b730728e023-Supplemental.pdf
Existing 3D human pose estimation models suffer performance drop when applying to new scenarios with unseen poses due to their limited generalizability. In this work, we propose a novel framework, Inference Stage Optimization (ISO), for improving the generalizability of 3D pose models when source and target data come f...
Consistent feature selection for analytic deep neural networks
https://papers.nips.cc/paper_files/paper/2020/hash/1959eb9d5a0f7ebc58ebde81d5df400d-Abstract.html
Vu C. Dinh, Lam S. Ho
https://papers.nips.cc/paper_files/paper/2020/hash/1959eb9d5a0f7ebc58ebde81d5df400d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1959eb9d5a0f7ebc58ebde81d5df400d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9928-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1959eb9d5a0f7ebc58ebde81d5df400d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1959eb9d5a0f7ebc58ebde81d5df400d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1959eb9d5a0f7ebc58ebde81d5df400d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1959eb9d5a0f7ebc58ebde81d5df400d-Supplemental.zip
In this work, we investigate the problem of feature selection for analytic deep networks. We prove that for a wide class of networks, including deep feed-forward neural networks, convolutional neural networks and a major sub-class of residual neural networks, the Adaptive Group Lasso selection procedure with Group Lass...
Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification
https://papers.nips.cc/paper_files/paper/2020/hash/1963bd5135521d623f6c29e6b1174975-Abstract.html
Yulin Wang, Kangchen Lv, Rui Huang, Shiji Song, Le Yang, Gao Huang
https://papers.nips.cc/paper_files/paper/2020/hash/1963bd5135521d623f6c29e6b1174975-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1963bd5135521d623f6c29e6b1174975-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9929-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1963bd5135521d623f6c29e6b1174975-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1963bd5135521d623f6c29e6b1174975-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1963bd5135521d623f6c29e6b1174975-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1963bd5135521d623f6c29e6b1174975-Supplemental.pdf
The accuracy of deep convolutional neural networks (CNNs) generally improves when fueled with high resolution images. However, this often comes at a high computational cost and high memory footprint. Inspired by the fact that not all regions in an image are task-relevant, we propose a novel framework that performs effi...
Information Maximization for Few-Shot Learning
https://papers.nips.cc/paper_files/paper/2020/hash/196f5641aa9dc87067da4ff90fd81e7b-Abstract.html
Malik Boudiaf, Imtiaz Ziko, Jérôme Rony, Jose Dolz, Pablo Piantanida, Ismail Ben Ayed
https://papers.nips.cc/paper_files/paper/2020/hash/196f5641aa9dc87067da4ff90fd81e7b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9930-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-Supplemental.pdf
We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. Furthermore, we propose a new alternating-directio...
Inverse Reinforcement Learning from a Gradient-based Learner
https://papers.nips.cc/paper_files/paper/2020/hash/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Abstract.html
Giorgia Ramponi, Gianluca Drappo, Marcello Restelli
https://papers.nips.cc/paper_files/paper/2020/hash/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9931-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Supplemental.pdf
Inverse Reinforcement Learning addresses the problem of inferring an expert's reward function from demonstrations. However, in many applications, we not only have access to the expert's near-optimal behaviour, but we also observe part of her learning process. In this paper, we propose a new algorithm for this setting, ...
Bayesian Multi-type Mean Field Multi-agent Imitation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/19eca5979ccbb752778e6c5f090dc9b6-Abstract.html
Fan Yang, Alina Vereshchaka, Changyou Chen, Wen Dong
https://papers.nips.cc/paper_files/paper/2020/hash/19eca5979ccbb752778e6c5f090dc9b6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/19eca5979ccbb752778e6c5f090dc9b6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9932-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/19eca5979ccbb752778e6c5f090dc9b6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/19eca5979ccbb752778e6c5f090dc9b6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/19eca5979ccbb752778e6c5f090dc9b6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/19eca5979ccbb752778e6c5f090dc9b6-Supplemental.zip
Multi-agent Imitation learning (MAIL) refers to the problem that agents learn to perform a task interactively in a multi-agent system through observing and mimicking expert demonstrations, without any knowledge of a reward function from the environment. MAIL has received a lot of attention due to promising results achi...
Bayesian Robust Optimization for Imitation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1a669e81c8093745261889539694be7f-Abstract.html
Daniel Brown, Scott Niekum, Marek Petrik
https://papers.nips.cc/paper_files/paper/2020/hash/1a669e81c8093745261889539694be7f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1a669e81c8093745261889539694be7f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9933-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1a669e81c8093745261889539694be7f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1a669e81c8093745261889539694be7f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1a669e81c8093745261889539694be7f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1a669e81c8093745261889539694be7f-Supplemental.pdf
One of the main challenges in imitation learning is determining what action an agent should take when outside the state distribution of the demonstrations. Inverse reinforcement learning (IRL) can enable generalization to new states by learning a parameterized reward function, but these approaches still face uncertaint...
Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance
https://papers.nips.cc/paper_files/paper/2020/hash/1a77befc3b608d6ed363567685f70e1e-Abstract.html
Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Basri Ronen, Yaron Lipman
https://papers.nips.cc/paper_files/paper/2020/hash/1a77befc3b608d6ed363567685f70e1e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1a77befc3b608d6ed363567685f70e1e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9934-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1a77befc3b608d6ed363567685f70e1e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1a77befc3b608d6ed363567685f70e1e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1a77befc3b608d6ed363567685f70e1e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1a77befc3b608d6ed363567685f70e1e-Supplemental.zip
In this work we address the challenging problem of multiview 3D surface reconstruction. We introduce a neural network architecture that simultaneously learns the unknown geometry, camera parameters, and a neural renderer that approximates the light reflected from the surface towards the camera. The geometry is represe...
Riemannian Continuous Normalizing Flows
https://papers.nips.cc/paper_files/paper/2020/hash/1aa3d9c6ce672447e1e5d0f1b5207e85-Abstract.html
Emile Mathieu, Maximilian Nickel
https://papers.nips.cc/paper_files/paper/2020/hash/1aa3d9c6ce672447e1e5d0f1b5207e85-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1aa3d9c6ce672447e1e5d0f1b5207e85-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9935-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1aa3d9c6ce672447e1e5d0f1b5207e85-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1aa3d9c6ce672447e1e5d0f1b5207e85-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1aa3d9c6ce672447e1e5d0f1b5207e85-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1aa3d9c6ce672447e1e5d0f1b5207e85-Supplemental.pdf
Normalizing flows have shown great promise for modelling flexible probability distributions in a computationally tractable way. However, whilst data is often naturally described on Riemannian manifolds such as spheres, tori, and hyperbolic spaces, most normalizing flows implicitly assume a flat geometry, making them e...
Attention-Gated Brain Propagation: How the brain can implement reward-based error backpropagation
https://papers.nips.cc/paper_files/paper/2020/hash/1abb1e1ea5f481b589da52303b091cbb-Abstract.html
Isabella Pozzi, Sander Bohte, Pieter Roelfsema
https://papers.nips.cc/paper_files/paper/2020/hash/1abb1e1ea5f481b589da52303b091cbb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1abb1e1ea5f481b589da52303b091cbb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9936-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1abb1e1ea5f481b589da52303b091cbb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1abb1e1ea5f481b589da52303b091cbb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1abb1e1ea5f481b589da52303b091cbb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1abb1e1ea5f481b589da52303b091cbb-Supplemental.zip
Much recent work has focused on biologically plausible variants of supervised learning algorithms. However, there is no teacher in the motor cortex that instructs the motor neurons and learning in the brain depends on reward and punishment. We demonstrate a biologically plausible reinforcement learning scheme for deep ...
Asymptotic Guarantees for Generative Modeling Based on the Smooth Wasserstein Distance
https://papers.nips.cc/paper_files/paper/2020/hash/1ac978c8020be6d7212aa71d4f040fc3-Abstract.html
Ziv Goldfeld, Kristjan Greenewald, Kengo Kato
https://papers.nips.cc/paper_files/paper/2020/hash/1ac978c8020be6d7212aa71d4f040fc3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ac978c8020be6d7212aa71d4f040fc3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9937-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ac978c8020be6d7212aa71d4f040fc3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ac978c8020be6d7212aa71d4f040fc3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ac978c8020be6d7212aa71d4f040fc3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ac978c8020be6d7212aa71d4f040fc3-Supplemental.pdf
Minimum distance estimation (MDE) gained recent attention as a formulation of (implicit) generative modeling. It considers minimizing, over model parameters, a statistical distance between the empirical data distribution and the model. This formulation lends itself well to theoretical analysis, but typical results are ...
Online Robust Regression via SGD on the l1 loss
https://papers.nips.cc/paper_files/paper/2020/hash/1ae6464c6b5d51b363d7d96f97132c75-Abstract.html
Scott Pesme, Nicolas Flammarion
https://papers.nips.cc/paper_files/paper/2020/hash/1ae6464c6b5d51b363d7d96f97132c75-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ae6464c6b5d51b363d7d96f97132c75-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9938-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ae6464c6b5d51b363d7d96f97132c75-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ae6464c6b5d51b363d7d96f97132c75-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ae6464c6b5d51b363d7d96f97132c75-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ae6464c6b5d51b363d7d96f97132c75-Supplemental.pdf
We consider the robust linear regression problem in the online setting where we have access to the data in a streaming manner, one data point after the other. More specifically, for a true parameter $ \theta^* $, we consider the corrupted Gaussian linear model $y = \langle \theta^*, x \rangle + \varepsilon + b$ where the adversarial noise $b$ ca...
PRANK: motion Prediction based on RANKing
https://papers.nips.cc/paper_files/paper/2020/hash/1b0251ccb8bd5f9ccf444e4bda7713e3-Abstract.html
Yuriy Biktairov, Maxim Stebelev, Irina Rudenko, Oleh Shliazhko, Boris Yangel
https://papers.nips.cc/paper_files/paper/2020/hash/1b0251ccb8bd5f9ccf444e4bda7713e3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b0251ccb8bd5f9ccf444e4bda7713e3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9939-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b0251ccb8bd5f9ccf444e4bda7713e3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b0251ccb8bd5f9ccf444e4bda7713e3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b0251ccb8bd5f9ccf444e4bda7713e3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b0251ccb8bd5f9ccf444e4bda7713e3-Supplemental.pdf
Predicting the motion of agents such as pedestrians or human-driven vehicles is one of the most critical problems in the autonomous driving domain. The overall safety of driving and the comfort of a passenger directly depend on its successful solution. The motion prediction problem also remains one of the most challeng...
Fighting Copycat Agents in Behavioral Cloning from Observation Histories
https://papers.nips.cc/paper_files/paper/2020/hash/1b113258af3968aaf3969ca67e744ff8-Abstract.html
Chuan Wen, Jierui Lin, Trevor Darrell, Dinesh Jayaraman, Yang Gao
https://papers.nips.cc/paper_files/paper/2020/hash/1b113258af3968aaf3969ca67e744ff8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9940-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-Supplemental.pdf
Imitation learning trains policies to map from input observations to the actions that an expert would choose. In this setting, distribution shift frequently exacerbates the effect of misattributing expert actions to nuisance correlates among the observed variables. We observe that a common instance of this causal confu...
Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model
https://papers.nips.cc/paper_files/paper/2020/hash/1b33d16fc562464579b7199ca3114982-Abstract.html
Raphaël Berthier, Francis Bach, Pierre Gaillard
https://papers.nips.cc/paper_files/paper/2020/hash/1b33d16fc562464579b7199ca3114982-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b33d16fc562464579b7199ca3114982-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9941-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b33d16fc562464579b7199ca3114982-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b33d16fc562464579b7199ca3114982-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b33d16fc562464579b7199ca3114982-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b33d16fc562464579b7199ca3114982-Supplemental.pdf
In the context of statistical supervised learning, the noiseless linear model assumes that there exists a deterministic linear relation $Y = \langle \theta_*, \Phi(U) \rangle$ between the random output $Y$ and the random feature vector $\Phi(U)$, a potentially non-linear transformation of the inputs $U$. We analyze the...
Structured Prediction for Conditional Meta-Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1b69ebedb522700034547abc5652ffac-Abstract.html
Ruohan Wang, Yiannis Demiris, Carlo Ciliberto
https://papers.nips.cc/paper_files/paper/2020/hash/1b69ebedb522700034547abc5652ffac-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b69ebedb522700034547abc5652ffac-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9942-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b69ebedb522700034547abc5652ffac-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b69ebedb522700034547abc5652ffac-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b69ebedb522700034547abc5652ffac-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b69ebedb522700034547abc5652ffac-Supplemental.pdf
The goal of optimization-based meta-learning is to find a single initialization shared across a distribution of tasks to speed up the process of learning new tasks. Conditional meta-learning seeks task-specific initialization to better capture complex task distributions and improve performance. However, many existing c...
Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient
https://papers.nips.cc/paper_files/paper/2020/hash/1b742ae215adf18b75449c6e272fd92d-Abstract.html
Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos
https://papers.nips.cc/paper_files/paper/2020/hash/1b742ae215adf18b75449c6e272fd92d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9943-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-Supplemental.pdf
The strong lottery ticket hypothesis (LTH) postulates that one can approximate any target neural network by only pruning the weights of a sufficiently over-parameterized random network. A recent work by Malach et al. [MYSS20] establishes the first theoretical analysis for the strong LTH: one can provably approximate a...
The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes
https://papers.nips.cc/paper_files/paper/2020/hash/1b84c4cee2b8b3d823b30e2d604b1878-Abstract.html
Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, Davide Testuggine
https://papers.nips.cc/paper_files/paper/2020/hash/1b84c4cee2b8b3d823b30e2d604b1878-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9944-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Supplemental.pdf
This work proposes a new challenge set for multimodal classification, focusing on detecting hate speech in multimodal memes. It is constructed such that unimodal models struggle and only multimodal models can succeed: difficult examples (“benign confounders”) are added to the dataset to make it hard to rely on unimodal...
Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function
https://papers.nips.cc/paper_files/paper/2020/hash/1b9a80606d74d3da6db2f1274557e644-Abstract.html
Lingkai Kong, Molei Tao
https://papers.nips.cc/paper_files/paper/2020/hash/1b9a80606d74d3da6db2f1274557e644-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b9a80606d74d3da6db2f1274557e644-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9945-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b9a80606d74d3da6db2f1274557e644-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b9a80606d74d3da6db2f1274557e644-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b9a80606d74d3da6db2f1274557e644-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b9a80606d74d3da6db2f1274557e644-Supplemental.zip
This article suggests that deterministic Gradient Descent, which does not use any stochastic gradient approximation, can still exhibit stochastic behaviors. In particular, it shows that if the objective function exhibits multiscale behaviors, then in a large learning rate regime which only resolves the macroscopic but n...
Identifying Learning Rules From Neural Network Observables
https://papers.nips.cc/paper_files/paper/2020/hash/1ba922ac006a8e5f2b123684c2f4d65f-Abstract.html
Aran Nayebi, Sanjana Srivastava, Surya Ganguli, Daniel L. Yamins
https://papers.nips.cc/paper_files/paper/2020/hash/1ba922ac006a8e5f2b123684c2f4d65f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ba922ac006a8e5f2b123684c2f4d65f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9946-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ba922ac006a8e5f2b123684c2f4d65f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ba922ac006a8e5f2b123684c2f4d65f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ba922ac006a8e5f2b123684c2f4d65f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ba922ac006a8e5f2b123684c2f4d65f-Supplemental.pdf
The brain modifies its synaptic strengths during learning in order to better adapt to its environment. However, the underlying plasticity rules that govern learning are unknown. Many proposals have been suggested, including Hebbian mechanisms, explicit error backpropagation, and a variety of alternatives. It is an open...
Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions
https://papers.nips.cc/paper_files/paper/2020/hash/1bd413de70f32142f4a33a94134c5690-Abstract.html
Alessandro Epasto, Mohammad Mahdian, Vahab Mirrokni, Emmanouil Zampetakis
https://papers.nips.cc/paper_files/paper/2020/hash/1bd413de70f32142f4a33a94134c5690-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1bd413de70f32142f4a33a94134c5690-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9947-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1bd413de70f32142f4a33a94134c5690-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1bd413de70f32142f4a33a94134c5690-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1bd413de70f32142f4a33a94134c5690-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1bd413de70f32142f4a33a94134c5690-Supplemental.pdf
respect to the Renyi Divergence, which provides improved theoretical and practical results in differentially private submodular optimization.
Weakly-Supervised Reinforcement Learning for Controllable Behavior
https://papers.nips.cc/paper_files/paper/2020/hash/1bd69c7df3112fb9a584fbd9edfc6c90-Abstract.html
Lisa Lee, Ben Eysenbach, Russ R. Salakhutdinov, Shixiang (Shane) Gu, Chelsea Finn
https://papers.nips.cc/paper_files/paper/2020/hash/1bd69c7df3112fb9a584fbd9edfc6c90-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9948-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-Supplemental.zip
Reinforcement learning (RL) is a powerful framework for learning to take actions to solve tasks. However, in many settings, an agent must winnow down the inconceivably large space of all possible tasks to the single task that it is currently being asked to solve. Can we instead constrain the space of tasks to those tha...
Improving Policy-Constrained Kidney Exchange via Pre-Screening
https://papers.nips.cc/paper_files/paper/2020/hash/1bda4c789c38754f639a376716c5859f-Abstract.html
Duncan McElfresh, Michael Curry, Tuomas Sandholm, John Dickerson
https://papers.nips.cc/paper_files/paper/2020/hash/1bda4c789c38754f639a376716c5859f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1bda4c789c38754f639a376716c5859f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9949-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1bda4c789c38754f639a376716c5859f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1bda4c789c38754f639a376716c5859f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1bda4c789c38754f639a376716c5859f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1bda4c789c38754f639a376716c5859f-Supplemental.zip
In barter exchanges, participants swap goods with one another without exchanging money; these exchanges are often facilitated by a central clearinghouse, with the goal of maximizing the aggregate quality (or number) of swaps. Barter exchanges are subject to many forms of uncertainty--in participant preferences, the fea...
Learning abstract structure for drawing by efficient motor program induction
https://papers.nips.cc/paper_files/paper/2020/hash/1c104b9c0accfca52ef21728eaf01453-Abstract.html
Lucas Tian, Kevin Ellis, Marta Kryven, Josh Tenenbaum
https://papers.nips.cc/paper_files/paper/2020/hash/1c104b9c0accfca52ef21728eaf01453-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1c104b9c0accfca52ef21728eaf01453-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9950-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1c104b9c0accfca52ef21728eaf01453-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1c104b9c0accfca52ef21728eaf01453-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1c104b9c0accfca52ef21728eaf01453-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1c104b9c0accfca52ef21728eaf01453-Supplemental.pdf
Humans flexibly solve new problems that differ from those previously practiced. This ability to flexibly generalize is supported by learned concepts that represent useful structure common across different problems. Here we develop a naturalistic drawing task to study how humans rapidly acquire structured prior knowledg...
Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? --- A Neural Tangent Kernel Perspective
https://papers.nips.cc/paper_files/paper/2020/hash/1c336b8080f82bcc2cd2499b4c57261d-Abstract.html
Kaixuan Huang, Yuqing Wang, Molei Tao, Tuo Zhao
https://papers.nips.cc/paper_files/paper/2020/hash/1c336b8080f82bcc2cd2499b4c57261d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1c336b8080f82bcc2cd2499b4c57261d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9951-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1c336b8080f82bcc2cd2499b4c57261d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1c336b8080f82bcc2cd2499b4c57261d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1c336b8080f82bcc2cd2499b4c57261d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1c336b8080f82bcc2cd2499b4c57261d-Supplemental.pdf
Deep residual networks (ResNets) have demonstrated better generalization performance than deep feedforward networks (FFNets). However, the theory behind such a phenomenon is still largely unknown. This paper studies this fundamental problem in deep learning from a so-called ``neural tangent kernel'' perspective. Specif...
Dual Instrumental Variable Regression
https://papers.nips.cc/paper_files/paper/2020/hash/1c383cd30b7c298ab50293adfecb7b18-Abstract.html
Krikamol Muandet, Arash Mehrjou, Si Kai Lee, Anant Raj
https://papers.nips.cc/paper_files/paper/2020/hash/1c383cd30b7c298ab50293adfecb7b18-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1c383cd30b7c298ab50293adfecb7b18-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9952-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1c383cd30b7c298ab50293adfecb7b18-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1c383cd30b7c298ab50293adfecb7b18-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1c383cd30b7c298ab50293adfecb7b18-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1c383cd30b7c298ab50293adfecb7b18-Supplemental.pdf
We present a novel algorithm for non-linear instrumental variable (IV) regression, DualIV, which simplifies traditional two-stage methods via a dual formulation. Inspired by problems in stochastic programming, we show that two-stage procedures for non-linear IV regression can be reformulated as a convex-concave saddle-...
Stochastic Gradient Descent in Correlated Settings: A Study on Gaussian Processes
https://papers.nips.cc/paper_files/paper/2020/hash/1cb524b5a3f3f82be4a7d954063c07e2-Abstract.html
Hao Chen, Lili Zheng, Raed AL Kontar, Garvesh Raskutti
https://papers.nips.cc/paper_files/paper/2020/hash/1cb524b5a3f3f82be4a7d954063c07e2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9953-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-Supplemental.pdf
Stochastic gradient descent (SGD) and its variants have established themselves as the go-to algorithms for large-scale machine learning problems with independent samples due to their generalization performance and intrinsic computational advantage. However, the fact that the stochastic gradient is a biased estimator of...
Interventional Few-Shot Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1cc8a8ea51cd0adddf5dab504a285915-Abstract.html
Zhongqi Yue, Hanwang Zhang, Qianru Sun, Xian-Sheng Hua
https://papers.nips.cc/paper_files/paper/2020/hash/1cc8a8ea51cd0adddf5dab504a285915-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cc8a8ea51cd0adddf5dab504a285915-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9954-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cc8a8ea51cd0adddf5dab504a285915-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cc8a8ea51cd0adddf5dab504a285915-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cc8a8ea51cd0adddf5dab504a285915-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cc8a8ea51cd0adddf5dab504a285915-Supplemental.pdf
We uncover an ever-overlooked deficiency in the prevailing Few-Shot Learning (FSL) methods: the pre-trained knowledge is indeed a confounder that limits the performance. This finding is rooted from our causal assumption: a Structural Causal Model (SCM) for the causalities among the pre-trained knowledge, sample feature...
Minimax Value Interval for Off-Policy Evaluation and Policy Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/1cd138d0499a68f4bb72bee04bbec2d7-Abstract.html
Nan Jiang, Jiawei Huang
https://papers.nips.cc/paper_files/paper/2020/hash/1cd138d0499a68f4bb72bee04bbec2d7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cd138d0499a68f4bb72bee04bbec2d7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9955-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cd138d0499a68f4bb72bee04bbec2d7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cd138d0499a68f4bb72bee04bbec2d7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cd138d0499a68f4bb72bee04bbec2d7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cd138d0499a68f4bb72bee04bbec2d7-Supplemental.pdf
We study minimax methods for off-policy evaluation (OPE) using value functions and marginalized importance weights. Despite that they hold promises of overcoming the exponential variance in traditional importance sampling, several key problems remain: (1) They require function approximation and are generally biased. Fo...
Biased Stochastic First-Order Methods for Conditional Stochastic Optimization and Applications in Meta Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1cdf14d1e3699d61d237cf76ce1c2dca-Abstract.html
Yifan Hu, Siqi Zhang, Xin Chen, Niao He
https://papers.nips.cc/paper_files/paper/2020/hash/1cdf14d1e3699d61d237cf76ce1c2dca-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9956-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-Supplemental.pdf
Conditional stochastic optimization covers a variety of applications ranging from invariant learning and causal inference to meta-learning. However, constructing unbiased gradient estimators for such problems is challenging due to the composition structure. As an alternative, we propose a biased stochastic gradient des...
ShiftAddNet: A Hardware-Inspired Deep Network
https://papers.nips.cc/paper_files/paper/2020/hash/1cf44d7975e6c86cffa70cae95b5fbb2-Abstract.html
Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin
https://papers.nips.cc/paper_files/paper/2020/hash/1cf44d7975e6c86cffa70cae95b5fbb2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cf44d7975e6c86cffa70cae95b5fbb2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9957-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cf44d7975e6c86cffa70cae95b5fbb2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cf44d7975e6c86cffa70cae95b5fbb2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cf44d7975e6c86cffa70cae95b5fbb2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cf44d7975e6c86cffa70cae95b5fbb2-Supplemental.pdf
Multiplication (e.g., convolution) is arguably a cornerstone of modern deep neural networks (DNNs). However, intensive multiplications cause expensive resource costs that challenge DNNs' deployment on resource-constrained edge devices, driving several attempts for multiplication-less deep networks. This paper presented...
Network-to-Network Translation with Conditional Invertible Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/1cfa81af29c6f2d8cacb44921722e753-Abstract.html
Robin Rombach, Patrick Esser, Bjorn Ommer
https://papers.nips.cc/paper_files/paper/2020/hash/1cfa81af29c6f2d8cacb44921722e753-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cfa81af29c6f2d8cacb44921722e753-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9958-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cfa81af29c6f2d8cacb44921722e753-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cfa81af29c6f2d8cacb44921722e753-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cfa81af29c6f2d8cacb44921722e753-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cfa81af29c6f2d8cacb44921722e753-Supplemental.pdf
Given the ever-increasing computational costs of modern machine learning models, we need to find new ways to reuse such expert models and thus tap into the resources that have been invested in their creation. Recent work suggests that the power of these massive models is captured by the representations they learn. Ther...
Intra-Processing Methods for Debiasing Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/1d8d70dddf147d2d92a634817f01b239-Abstract.html
Yash Savani, Colin White, Naveen Sundar Govindarajulu
https://papers.nips.cc/paper_files/paper/2020/hash/1d8d70dddf147d2d92a634817f01b239-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1d8d70dddf147d2d92a634817f01b239-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9959-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1d8d70dddf147d2d92a634817f01b239-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1d8d70dddf147d2d92a634817f01b239-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1d8d70dddf147d2d92a634817f01b239-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1d8d70dddf147d2d92a634817f01b239-Supplemental.zip
In this work, we initiate the study of a new paradigm in debiasing research, intra-processing, which sits between in-processing and post-processing methods. Intra-processing methods are designed specifically to debias large models which have been trained on a generic dataset, and fine-tuned on a more specific task. We ...
Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems
https://papers.nips.cc/paper_files/paper/2020/hash/1da546f25222c1ee710cf7e2f7a3ff0c-Abstract.html
Songtao Lu, Meisam Razaviyayn, Bo Yang, Kejun Huang, Mingyi Hong
https://papers.nips.cc/paper_files/paper/2020/hash/1da546f25222c1ee710cf7e2f7a3ff0c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9960-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-Supplemental.pdf
This paper proposes two efficient algorithms for computing approximate second-order stationary points (SOSPs) of problems with generic smooth non-convex objective functions and generic linear constraints. While finding (approximate) SOSPs for the class of smooth non-convex linearly constrained problems is computational...
Model-based Policy Optimization with Unsupervised Model Adaptation
https://papers.nips.cc/paper_files/paper/2020/hash/1dc3a89d0d440ba31729b0ba74b93a33-Abstract.html
Jian Shen, Han Zhao, Weinan Zhang, Yong Yu
https://papers.nips.cc/paper_files/paper/2020/hash/1dc3a89d0d440ba31729b0ba74b93a33-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1dc3a89d0d440ba31729b0ba74b93a33-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9961-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1dc3a89d0d440ba31729b0ba74b93a33-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1dc3a89d0d440ba31729b0ba74b93a33-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1dc3a89d0d440ba31729b0ba74b93a33-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1dc3a89d0d440ba31729b0ba74b93a33-Supplemental.pdf
Model-based reinforcement learning methods learn a dynamics model with real data sampled from the environment and leverage it to generate simulated data to derive an agent. However, due to the potential distribution mismatch between simulated data and real data, this could lead to degraded performance. Despite much eff...
Implicit Regularization and Convergence for Weight Normalization
https://papers.nips.cc/paper_files/paper/2020/hash/1de7d2b90d554be9f0db1c338e80197d-Abstract.html
Xiaoxia Wu, Edgar Dobriban, Tongzheng Ren, Shanshan Wu, Zhiyuan Li, Suriya Gunasekar, Rachel Ward, Qiang Liu
https://papers.nips.cc/paper_files/paper/2020/hash/1de7d2b90d554be9f0db1c338e80197d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1de7d2b90d554be9f0db1c338e80197d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9962-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1de7d2b90d554be9f0db1c338e80197d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1de7d2b90d554be9f0db1c338e80197d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1de7d2b90d554be9f0db1c338e80197d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1de7d2b90d554be9f0db1c338e80197d-Supplemental.pdf
Normalization methods such as batch, weight, instance, and layer normalization are commonly used in modern machine learning. Here, we study the weight normalization (WN) method \cite{salimans2016weight} and a variant called reparametrized projected gradient descent (rPGD) for overparametrized least squares regression a...
Geometric All-way Boolean Tensor Decomposition
https://papers.nips.cc/paper_files/paper/2020/hash/1def1713ebf17722cbe300cfc1c88558-Abstract.html
Changlin Wan, Wennan Chang, Tong Zhao, Sha Cao, Chi Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/1def1713ebf17722cbe300cfc1c88558-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1def1713ebf17722cbe300cfc1c88558-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9963-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1def1713ebf17722cbe300cfc1c88558-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1def1713ebf17722cbe300cfc1c88558-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1def1713ebf17722cbe300cfc1c88558-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1def1713ebf17722cbe300cfc1c88558-Supplemental.pdf
Boolean tensor has been broadly utilized in representing high dimensional logical data collected on spatial, temporal and/or other relational domains. Boolean Tensor Decomposition (BTD) factorizes a binary tensor into the Boolean sum of multiple rank-1 tensors, which is an NP-hard problem. Existing BTD methods have bee...
Modular Meta-Learning with Shrinkage
https://papers.nips.cc/paper_files/paper/2020/hash/1e04b969bf040acd252e1faafb51f829-Abstract.html
Yutian Chen, Abram L. Friesen, Feryal Behbahani, Arnaud Doucet, David Budden, Matthew Hoffman, Nando de Freitas
https://papers.nips.cc/paper_files/paper/2020/hash/1e04b969bf040acd252e1faafb51f829-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e04b969bf040acd252e1faafb51f829-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9964-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e04b969bf040acd252e1faafb51f829-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e04b969bf040acd252e1faafb51f829-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e04b969bf040acd252e1faafb51f829-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e04b969bf040acd252e1faafb51f829-Supplemental.pdf
Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components. Updating only these task-specific modules then allows the model to be adapted to low-data tasks for as many steps as necessary without ri...
A/B Testing in Dense Large-Scale Networks: Design and Inference
https://papers.nips.cc/paper_files/paper/2020/hash/1e0b802d5c0e1e8434a771ba7ff2c301-Abstract.html
Preetam Nandy, Kinjal Basu, Shaunak Chatterjee, Ye Tu
https://papers.nips.cc/paper_files/paper/2020/hash/1e0b802d5c0e1e8434a771ba7ff2c301-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e0b802d5c0e1e8434a771ba7ff2c301-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9965-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e0b802d5c0e1e8434a771ba7ff2c301-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e0b802d5c0e1e8434a771ba7ff2c301-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e0b802d5c0e1e8434a771ba7ff2c301-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e0b802d5c0e1e8434a771ba7ff2c301-Supplemental.zip
Design of experiments and estimation of treatment effects in large-scale networks, in the presence of strong interference, is a challenging and important problem. Most existing methods' performance deteriorates as the density of the network increases. In this paper, we present a novel strategy for accurately estimating...
What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/1e14bfe2714193e7af5abc64ecbd6b46-Abstract.html
Vitaly Feldman, Chiyuan Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/1e14bfe2714193e7af5abc64ecbd6b46-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9966-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-Supplemental.pdf
In this work we design experiments to test the key ideas in this theory. The experiments require estimation of the influence of each training example on the accuracy at each test example as well as memorization values of training examples. Estimating these quantities directly is computationally prohibitive but we show ...
Partially View-aligned Clustering
https://papers.nips.cc/paper_files/paper/2020/hash/1e591403ff232de0f0f139ac51d99295-Abstract.html
Zhenyu Huang, Peng Hu, Joey Tianyi Zhou, Jiancheng Lv, Xi Peng
https://papers.nips.cc/paper_files/paper/2020/hash/1e591403ff232de0f0f139ac51d99295-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e591403ff232de0f0f139ac51d99295-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9967-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e591403ff232de0f0f139ac51d99295-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e591403ff232de0f0f139ac51d99295-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e591403ff232de0f0f139ac51d99295-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e591403ff232de0f0f139ac51d99295-Supplemental.pdf
In this paper, we study one challenging issue in multi-view data clustering. To be specific, for two data matrices $\mathbf{X}^{(1)}$ and $\mathbf{X}^{(2)}$ corresponding to two views, we do not assume that $\mathbf{X}^{(1)}$ and $\mathbf{X}^{(2)}$ are fully aligned in row-wise. Instead, we assume that only a small por...
Partial Optimal Transport with applications on Positive-Unlabeled Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1e6e25d952a0d639b676ee20d0519ee2-Abstract.html
Laetitia Chapel, Mokhtar Z. Alaya, Gilles Gasso
https://papers.nips.cc/paper_files/paper/2020/hash/1e6e25d952a0d639b676ee20d0519ee2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e6e25d952a0d639b676ee20d0519ee2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9968-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e6e25d952a0d639b676ee20d0519ee2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e6e25d952a0d639b676ee20d0519ee2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e6e25d952a0d639b676ee20d0519ee2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e6e25d952a0d639b676ee20d0519ee2-Supplemental.pdf
Classical optimal transport problem seeks a transportation map that preserves the total mass between two probability distributions, requiring their masses to be equal. This may be too restrictive in some applications such as color or shape matching, since the distributions may have arbitrary masses and/or only a fra...
Toward the Fundamental Limits of Imitation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1e7875cf32d306989d80c14308f3a099-Abstract.html
Nived Rajaraman, Lin Yang, Jiantao Jiao, Kannan Ramchandran
https://papers.nips.cc/paper_files/paper/2020/hash/1e7875cf32d306989d80c14308f3a099-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e7875cf32d306989d80c14308f3a099-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9969-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e7875cf32d306989d80c14308f3a099-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e7875cf32d306989d80c14308f3a099-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e7875cf32d306989d80c14308f3a099-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e7875cf32d306989d80c14308f3a099-Supplemental.pdf
Imitation learning (IL) aims to mimic the behavior of an expert policy in a sequential decision-making problem given only demonstrations. In this paper, we focus on understanding the minimax statistical limits of IL in episodic Markov Decision Processes (MDPs). We first consider the setting where the learner is provide...
Logarithmic Pruning is All You Need
https://papers.nips.cc/paper_files/paper/2020/hash/1e9491470749d5b0e361ce4f0b24d037-Abstract.html
Laurent Orseau, Marcus Hutter, Omar Rivasplata
https://papers.nips.cc/paper_files/paper/2020/hash/1e9491470749d5b0e361ce4f0b24d037-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e9491470749d5b0e361ce4f0b24d037-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9970-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e9491470749d5b0e361ce4f0b24d037-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e9491470749d5b0e361ce4f0b24d037-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e9491470749d5b0e361ce4f0b24d037-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e9491470749d5b0e361ce4f0b24d037-Supplemental.pdf
The Lottery Ticket Hypothesis is a conjecture that every large neural network contains a subnetwork that, when trained in isolation, achieves comparable performance to the large network. An even stronger conjecture has been proven recently: Every sufficiently overparameterized network contains a subnetwork that, even w...
Hold me tight! Influence of discriminative features on deep network boundaries
https://papers.nips.cc/paper_files/paper/2020/hash/1ea97de85eb634d580161c603422437f-Abstract.html
Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi, Pascal Frossard
https://papers.nips.cc/paper_files/paper/2020/hash/1ea97de85eb634d580161c603422437f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ea97de85eb634d580161c603422437f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9971-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ea97de85eb634d580161c603422437f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ea97de85eb634d580161c603422437f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ea97de85eb634d580161c603422437f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ea97de85eb634d580161c603422437f-Supplemental.pdf
Important insights towards the explainability of neural networks reside in the characteristics of their decision boundaries. In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary. This enabl...
Learning from Mixtures of Private and Public Populations
https://papers.nips.cc/paper_files/paper/2020/hash/1ee942c6b182d0f041a2312947385b23-Abstract.html
Raef Bassily, Shay Moran, Anupama Nandi
https://papers.nips.cc/paper_files/paper/2020/hash/1ee942c6b182d0f041a2312947385b23-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ee942c6b182d0f041a2312947385b23-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9972-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ee942c6b182d0f041a2312947385b23-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ee942c6b182d0f041a2312947385b23-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ee942c6b182d0f041a2312947385b23-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ee942c6b182d0f041a2312947385b23-Supplemental.pdf
We initiate the study of a new model of supervised learning under privacy constraints. Imagine a medical study where a dataset is sampled from a population of both healthy and unhealthy individuals. Suppose healthy individuals have no privacy concerns (in such case, we call their data ``public'') while the unhealthy in...
Adversarial Weight Perturbation Helps Robust Generalization
https://papers.nips.cc/paper_files/paper/2020/hash/1ef91c212e30e14bf125e9374262401f-Abstract.html
Dongxian Wu, Shu-Tao Xia, Yisen Wang
https://papers.nips.cc/paper_files/paper/2020/hash/1ef91c212e30e14bf125e9374262401f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ef91c212e30e14bf125e9374262401f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9973-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ef91c212e30e14bf125e9374262401f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ef91c212e30e14bf125e9374262401f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ef91c212e30e14bf125e9374262401f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ef91c212e30e14bf125e9374262401f-Supplemental.pdf
The study on improving the robustness of deep neural networks against adversarial examples grows rapidly in recent years. Among them, adversarial training is the most promising one, which flattens the \textit{input loss landscape} (loss change with respect to input) via training on adversarially perturbed examples. How...
Stateful Posted Pricing with Vanishing Regret via Dynamic Deterministic Markov Decision Processes
https://papers.nips.cc/paper_files/paper/2020/hash/1f10c3650a3aa5912dccc5789fd515e8-Abstract.html
Yuval Emek, Ron Lavi, Rad Niazadeh, Yangguang Shi
https://papers.nips.cc/paper_files/paper/2020/hash/1f10c3650a3aa5912dccc5789fd515e8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1f10c3650a3aa5912dccc5789fd515e8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9974-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1f10c3650a3aa5912dccc5789fd515e8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1f10c3650a3aa5912dccc5789fd515e8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1f10c3650a3aa5912dccc5789fd515e8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1f10c3650a3aa5912dccc5789fd515e8-Supplemental.pdf
In this paper, a rather general online problem called \emph{dynamic resource allocation with capacity constraints (DRACC)} is introduced and studied in the realm of posted price mechanisms. This problem subsumes several applications of stateful pricing, including but not limited to posted prices for online job scheduli...
Adversarial Self-Supervised Contrastive Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1f1baa5b8edac74eb4eaa329f14a0361-Abstract.html
Minseon Kim, Jihoon Tack, Sung Ju Hwang
https://papers.nips.cc/paper_files/paper/2020/hash/1f1baa5b8edac74eb4eaa329f14a0361-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1f1baa5b8edac74eb4eaa329f14a0361-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9975-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1f1baa5b8edac74eb4eaa329f14a0361-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1f1baa5b8edac74eb4eaa329f14a0361-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1f1baa5b8edac74eb4eaa329f14a0361-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1f1baa5b8edac74eb4eaa329f14a0361-Supplemental.zip
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions, which are then used to augment the training of the model for improved robustness. While some recent works propose semi-supervised adversarial learning methods that utilize unlabeled data,...
Normalizing Kalman Filters for Multivariate Time Series Analysis
https://papers.nips.cc/paper_files/paper/2020/hash/1f47cef5e38c952f94c5d61726027439-Abstract.html
Emmanuel de Bézenac, Syama Sundar Rangapuram, Konstantinos Benidis, Michael Bohlke-Schneider, Richard Kurle, Lorenzo Stella, Hilaf Hasson, Patrick Gallinari, Tim Januschowski
https://papers.nips.cc/paper_files/paper/2020/hash/1f47cef5e38c952f94c5d61726027439-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1f47cef5e38c952f94c5d61726027439-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9976-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1f47cef5e38c952f94c5d61726027439-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1f47cef5e38c952f94c5d61726027439-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1f47cef5e38c952f94c5d61726027439-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1f47cef5e38c952f94c5d61726027439-Supplemental.pdf
This paper tackles the modelling of large, complex and multivariate time series panels in a probabilistic setting. To this extent, we present a novel approach reconciling classical state space models with deep learning methods. By augmenting state space models with normalizing flows, we mitigate imprecisions stemming f...
Learning to summarize with human feedback
https://papers.nips.cc/paper_files/paper/2020/hash/1f89885d556929e98d3ef9b86448f951-Abstract.html
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul F. Christiano
https://papers.nips.cc/paper_files/paper/2020/hash/1f89885d556929e98d3ef9b86448f951-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9977-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Supplemental.pdf
As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these metrics are rough proxies for what we ...
Fourier Spectrum Discrepancies in Deep Network Generated Images
https://papers.nips.cc/paper_files/paper/2020/hash/1f8d87e1161af68b81bace188a1ec624-Abstract.html
Tarik Dzanic, Karan Shah, Freddie Witherden
https://papers.nips.cc/paper_files/paper/2020/hash/1f8d87e1161af68b81bace188a1ec624-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1f8d87e1161af68b81bace188a1ec624-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9978-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1f8d87e1161af68b81bace188a1ec624-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1f8d87e1161af68b81bace188a1ec624-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1f8d87e1161af68b81bace188a1ec624-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1f8d87e1161af68b81bace188a1ec624-Supplemental.zip
Advancements in deep generative models such as generative adversarial networks and variational autoencoders have resulted in the ability to generate realistic images that are visually indistinguishable from real images, which raises concerns about their potential malicious usage. In this paper, we present an analysis of...
Lamina-specific neuronal properties promote robust, stable signal propagation in feedforward networks
https://papers.nips.cc/paper_files/paper/2020/hash/1fc214004c9481e4c8073e85323bfd4b-Abstract.html
Dongqi Han, Erik De Schutter, Sungho Hong
https://papers.nips.cc/paper_files/paper/2020/hash/1fc214004c9481e4c8073e85323bfd4b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9979-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-Supplemental.pdf
Feedforward networks (FFN) are ubiquitous structures in neural systems and have been studied to understand mechanisms of reliable signal and information transmission. In many FFNs, neurons in one layer have intrinsic properties that are distinct from those in their pre-/postsynaptic layers, but how this affects network...
Learning Dynamic Belief Graphs to Generalize on Text-Based Games
https://papers.nips.cc/paper_files/paper/2020/hash/1fc30b9d4319760b04fab735fbfed9a9-Abstract.html
Ashutosh Adhikari, Xingdi Yuan, Marc-Alexandre Côté, Mikuláš Zelinka, Marc-Antoine Rondeau, Romain Laroche, Pascal Poupart, Jian Tang, Adam Trischler, Will Hamilton
https://papers.nips.cc/paper_files/paper/2020/hash/1fc30b9d4319760b04fab735fbfed9a9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1fc30b9d4319760b04fab735fbfed9a9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9980-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1fc30b9d4319760b04fab735fbfed9a9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1fc30b9d4319760b04fab735fbfed9a9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1fc30b9d4319760b04fab735fbfed9a9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1fc30b9d4319760b04fab735fbfed9a9-Supplemental.pdf
Playing text-based games requires skills in processing natural language and sequential decision making. Achieving human-level performance on text-based games remains an open challenge, and prior research has largely relied on hand-crafted structured representations and heuristics. In this work, we investigate how an ag...
Triple descent and the two kinds of overfitting: where & why do they appear?
https://papers.nips.cc/paper_files/paper/2020/hash/1fd09c5f59a8ff35d499c0ee25a1d47e-Abstract.html
Stéphane d'Ascoli, Levent Sagun, Giulio Biroli
https://papers.nips.cc/paper_files/paper/2020/hash/1fd09c5f59a8ff35d499c0ee25a1d47e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1fd09c5f59a8ff35d499c0ee25a1d47e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9981-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1fd09c5f59a8ff35d499c0ee25a1d47e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1fd09c5f59a8ff35d499c0ee25a1d47e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1fd09c5f59a8ff35d499c0ee25a1d47e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1fd09c5f59a8ff35d499c0ee25a1d47e-Supplemental.zip
A recent line of research has highlighted the existence of a "double descent" phenomenon in deep learning, whereby increasing the number of training examples N causes the generalization error of neural networks to peak when N is of the same order as the number of parameters P. In earlier works, a similar phenomenon w...
Multimodal Graph Networks for Compositional Generalization in Visual Question Answering
https://papers.nips.cc/paper_files/paper/2020/hash/1fd6c4e41e2c6a6b092eb13ee72bce95-Abstract.html
Raeid Saqur, Karthik Narasimhan
https://papers.nips.cc/paper_files/paper/2020/hash/1fd6c4e41e2c6a6b092eb13ee72bce95-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1fd6c4e41e2c6a6b092eb13ee72bce95-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9982-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1fd6c4e41e2c6a6b092eb13ee72bce95-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1fd6c4e41e2c6a6b092eb13ee72bce95-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1fd6c4e41e2c6a6b092eb13ee72bce95-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1fd6c4e41e2c6a6b092eb13ee72bce95-Supplemental.zip
Compositional generalization is a key challenge in grounding natural language to visual perception. While deep learning models have achieved great success in multimodal tasks like visual question answering, recent studies have shown that they fail to generalize to new inputs that are simply an unseen combination of tho...
Learning Graph Structure With A Finite-State Automaton Layer
https://papers.nips.cc/paper_files/paper/2020/hash/1fdc0ee9d95c71d73df82ac8f0721459-Abstract.html
Daniel Johnson, Hugo Larochelle, Daniel Tarlow
https://papers.nips.cc/paper_files/paper/2020/hash/1fdc0ee9d95c71d73df82ac8f0721459-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1fdc0ee9d95c71d73df82ac8f0721459-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9983-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1fdc0ee9d95c71d73df82ac8f0721459-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1fdc0ee9d95c71d73df82ac8f0721459-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1fdc0ee9d95c71d73df82ac8f0721459-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1fdc0ee9d95c71d73df82ac8f0721459-Supplemental.zip
Graph-based neural network models are producing strong results in a number of domains, in part because graphs provide flexibility to encode domain knowledge in the form of relational structure (edges) between nodes in the graph. In practice, edges are used both to represent intrinsic structure (e.g., abstract syntax tr...
A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions
https://papers.nips.cc/paper_files/paper/2020/hash/2000f6325dfc4fc3201fc45ed01c7a5d-Abstract.html
Yulong Lu, Jianfeng Lu
https://papers.nips.cc/paper_files/paper/2020/hash/2000f6325dfc4fc3201fc45ed01c7a5d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2000f6325dfc4fc3201fc45ed01c7a5d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9984-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2000f6325dfc4fc3201fc45ed01c7a5d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2000f6325dfc4fc3201fc45ed01c7a5d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2000f6325dfc4fc3201fc45ed01c7a5d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2000f6325dfc4fc3201fc45ed01c7a5d-Supplemental.pdf
This paper studies the universal approximation property of deep neural networks for representing probability distributions. Given a target distribution $\pi$ and a source distribution $p_z$ both defined on $\mathbb{R}^d$, we prove under some assumptions that there exists a deep neural network $g:\mathbb{R}^d\to \mathbb...
Unsupervised object-centric video generation and decomposition in 3D
https://papers.nips.cc/paper_files/paper/2020/hash/20125fd9b2d43e340a35fb0278da235d-Abstract.html
Paul Henderson, Christoph H. Lampert
https://papers.nips.cc/paper_files/paper/2020/hash/20125fd9b2d43e340a35fb0278da235d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20125fd9b2d43e340a35fb0278da235d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9985-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20125fd9b2d43e340a35fb0278da235d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20125fd9b2d43e340a35fb0278da235d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20125fd9b2d43e340a35fb0278da235d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20125fd9b2d43e340a35fb0278da235d-Supplemental.zip
A natural approach to generative modeling of videos is to represent them as a composition of moving objects. Recent works model a set of 2D sprites over a slowly-varying background, but without considering the underlying 3D scene that gives rise to them. We instead propose to model a video as the view seen while moving...
Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization
https://papers.nips.cc/paper_files/paper/2020/hash/201d7288b4c18a679e48b31c72c30ded-Abstract.html
Haoliang Li, Yufei Wang, Renjie Wan, Shiqi Wang, Tie-Qiang Li, Alex Kot
https://papers.nips.cc/paper_files/paper/2020/hash/201d7288b4c18a679e48b31c72c30ded-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9986-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-Supplemental.pdf
Recently, we have witnessed great progress in the field of medical imaging classification by adopting deep neural networks. However, the recent advanced models still require accessing sufficiently large and representative datasets for training, which is often unfeasible in clinically realistic environments. When traine...
Multi-label classification: do Hamming loss and subset accuracy really conflict with each other?
https://papers.nips.cc/paper_files/paper/2020/hash/20479c788fb27378c2c99eadcf207e7f-Abstract.html
Guoqiang Wu, Jun Zhu
https://papers.nips.cc/paper_files/paper/2020/hash/20479c788fb27378c2c99eadcf207e7f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20479c788fb27378c2c99eadcf207e7f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9987-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20479c788fb27378c2c99eadcf207e7f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20479c788fb27378c2c99eadcf207e7f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20479c788fb27378c2c99eadcf207e7f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20479c788fb27378c2c99eadcf207e7f-Supplemental.pdf
Various evaluation measures have been developed for multi-label classification, including Hamming Loss (HL), Subset Accuracy (SA) and Ranking Loss (RL). However, there is a gap between empirical results and the existing theories: 1) an algorithm often empirically performs well on some measure(s) while poorly on others,...
A Novel Automated Curriculum Strategy to Solve Hard Sokoban Planning Instances
https://papers.nips.cc/paper_files/paper/2020/hash/2051bd70fc110a2208bdbd4a743e7f79-Abstract.html
Dieqiao Feng, Carla P. Gomes, Bart Selman
https://papers.nips.cc/paper_files/paper/2020/hash/2051bd70fc110a2208bdbd4a743e7f79-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2051bd70fc110a2208bdbd4a743e7f79-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9988-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2051bd70fc110a2208bdbd4a743e7f79-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2051bd70fc110a2208bdbd4a743e7f79-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2051bd70fc110a2208bdbd4a743e7f79-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2051bd70fc110a2208bdbd4a743e7f79-Supplemental.pdf
In recent years, we have witnessed tremendous progress in deep reinforcement learning (RL) for tasks such as Go, Chess, video games, and robot control. Nevertheless, other combinatorial domains, such as AI planning, still pose considerable challenges for RL approaches. The key difficulty in those domains is that a posi...
Causal analysis of Covid-19 Spread in Germany
https://papers.nips.cc/paper_files/paper/2020/hash/205e73579f21c2ed134dbd6ce7e4a1ea-Abstract.html
Atalanti Mastakouri, Bernhard Schölkopf
https://papers.nips.cc/paper_files/paper/2020/hash/205e73579f21c2ed134dbd6ce7e4a1ea-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/205e73579f21c2ed134dbd6ce7e4a1ea-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9989-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/205e73579f21c2ed134dbd6ce7e4a1ea-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/205e73579f21c2ed134dbd6ce7e4a1ea-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/205e73579f21c2ed134dbd6ce7e4a1ea-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/205e73579f21c2ed134dbd6ce7e4a1ea-Supplemental.pdf
In this work, we study the causal relations among German regions in terms of the spread of Covid-19 since the beginning of the pandemic, taking into account the restriction policies that were applied by the different federal states. We loosen a strictly formulated assumption for a causal feature selection method for tim...
Locally private non-asymptotic testing of discrete distributions is faster using interactive mechanisms
https://papers.nips.cc/paper_files/paper/2020/hash/20b02dc95171540bc52912baf3aa709d-Abstract.html
Thomas Berrett, Cristina Butucea
https://papers.nips.cc/paper_files/paper/2020/hash/20b02dc95171540bc52912baf3aa709d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20b02dc95171540bc52912baf3aa709d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9990-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20b02dc95171540bc52912baf3aa709d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20b02dc95171540bc52912baf3aa709d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20b02dc95171540bc52912baf3aa709d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20b02dc95171540bc52912baf3aa709d-Supplemental.pdf
We find separation rates for testing multinomial or more general discrete distributions under the constraint of alpha-local differential privacy. We construct efficient randomized algorithms and test procedures, in both the case where only non-interactive privacy mechanisms are allowed and also in the case where all se...
Adaptive Gradient Quantization for Data-Parallel SGD
https://papers.nips.cc/paper_files/paper/2020/hash/20b5e1cf8694af7a3c1ba4a87f073021-Abstract.html
Fartash Faghri, Iman Tabrizian, Ilia Markov, Dan Alistarh, Daniel M. Roy, Ali Ramezani-Kebrya
https://papers.nips.cc/paper_files/paper/2020/hash/20b5e1cf8694af7a3c1ba4a87f073021-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9991-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-Supplemental.zip
Many communication-efficient variants of SGD use gradient quantization schemes. These schemes are often heuristic and fixed over the course of training. We empirically observe that the statistics of gradients of deep models change during training. Motivated by this observation, we introduce two adaptive quantizati...
Finite Continuum-Armed Bandits
https://papers.nips.cc/paper_files/paper/2020/hash/20c86a628232a67e7bd46f76fba7ce12-Abstract.html
Solenne Gaucher
https://papers.nips.cc/paper_files/paper/2020/hash/20c86a628232a67e7bd46f76fba7ce12-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20c86a628232a67e7bd46f76fba7ce12-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9992-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20c86a628232a67e7bd46f76fba7ce12-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20c86a628232a67e7bd46f76fba7ce12-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20c86a628232a67e7bd46f76fba7ce12-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20c86a628232a67e7bd46f76fba7ce12-Supplemental.pdf
We consider a situation where an agent has $T$ resources to be allocated to a larger number $N$ of actions. Each action can be completed at most once and results in a stochastic reward with unknown mean. The goal of the agent is to maximize her cumulative reward. Non-trivial strategies are possible when side informati...
Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies
https://papers.nips.cc/paper_files/paper/2020/hash/20d749bc05f47d2bd3026ce457dcfd8e-Abstract.html
Itai Gat, Idan Schwartz, Alexander Schwing, Tamir Hazan
https://papers.nips.cc/paper_files/paper/2020/hash/20d749bc05f47d2bd3026ce457dcfd8e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9993-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-Supplemental.pdf
Many recent datasets contain a variety of different data modalities, for instance, image, question, and answer data in visual question answering (VQA). When training deep net classifiers on those multi-modal datasets, the modalities get exploited at different scales, i.e., some modalities can more easily contribute to ...
Compact task representations as a normative model for higher-order brain activity
https://papers.nips.cc/paper_files/paper/2020/hash/2109737282d2c2de4fc5534be26c9bb6-Abstract.html
Severin Berger, Christian K. Machens
https://papers.nips.cc/paper_files/paper/2020/hash/2109737282d2c2de4fc5534be26c9bb6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2109737282d2c2de4fc5534be26c9bb6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9994-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2109737282d2c2de4fc5534be26c9bb6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2109737282d2c2de4fc5534be26c9bb6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2109737282d2c2de4fc5534be26c9bb6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2109737282d2c2de4fc5534be26c9bb6-Supplemental.pdf
Higher-order brain areas such as the frontal cortices are considered essential for the flexible solution of tasks. However, the precise computational role of these areas is still debated. Indeed, even for the simplest of tasks, we cannot really explain how the measured brain activity, which evolves over time in complic...
Robust-Adaptive Control of Linear Systems: beyond Quadratic Costs
https://papers.nips.cc/paper_files/paper/2020/hash/211b39255232ab59ce78f2e28cd0292b-Abstract.html
Edouard Leurent, Odalric-Ambrym Maillard, Denis Efimov
https://papers.nips.cc/paper_files/paper/2020/hash/211b39255232ab59ce78f2e28cd0292b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/211b39255232ab59ce78f2e28cd0292b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9995-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/211b39255232ab59ce78f2e28cd0292b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/211b39255232ab59ce78f2e28cd0292b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/211b39255232ab59ce78f2e28cd0292b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/211b39255232ab59ce78f2e28cd0292b-Supplemental.pdf
We consider the problem of robust and adaptive model predictive control (MPC) of a linear system, with unknown parameters that are learned along the way (adaptive), in a critical setting where failures must be prevented (robust). This problem has been studied from different perspectives by different communities. Howeve...
Co-exposure Maximization in Online Social Networks
https://papers.nips.cc/paper_files/paper/2020/hash/212ab20dbdf4191cbcdcf015511783f4-Abstract.html
Sijing Tu, Cigdem Aslay, Aristides Gionis
https://papers.nips.cc/paper_files/paper/2020/hash/212ab20dbdf4191cbcdcf015511783f4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9996-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-Supplemental.zip
We show that the problem of maximizing co-exposure is NP-hard and its objective function is neither submodular nor supermodular. However, by exploiting a connection to a submodular function that acts as a lower bound to the objective, we are able to devise a greedy algorithm with provable approximation guarantee. We fu...
UCLID-Net: Single View Reconstruction in Object Space
https://papers.nips.cc/paper_files/paper/2020/hash/21327ba33b3689e713cdff1641128004-Abstract.html
Benoit Guillard, Edoardo Remelli, Pascal Fua
https://papers.nips.cc/paper_files/paper/2020/hash/21327ba33b3689e713cdff1641128004-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/21327ba33b3689e713cdff1641128004-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9997-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/21327ba33b3689e713cdff1641128004-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/21327ba33b3689e713cdff1641128004-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/21327ba33b3689e713cdff1641128004-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/21327ba33b3689e713cdff1641128004-Supplemental.pdf
We demonstrate, both on ShapeNet synthetic images, which are often used for benchmarking purposes, and on real-world images, that our approach outperforms state-of-the-art ones. Furthermore, the single-view pipeline naturally extends to multi-view reconstruction, which we also show.
Reinforcement Learning for Control with Multiple Frequencies
https://papers.nips.cc/paper_files/paper/2020/hash/216f44e2d28d4e175a194492bde9148f-Abstract.html
Jongmin Lee, Byung-Jun Lee, Kee-Eung Kim
https://papers.nips.cc/paper_files/paper/2020/hash/216f44e2d28d4e175a194492bde9148f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/216f44e2d28d4e175a194492bde9148f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9998-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/216f44e2d28d4e175a194492bde9148f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/216f44e2d28d4e175a194492bde9148f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/216f44e2d28d4e175a194492bde9148f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/216f44e2d28d4e175a194492bde9148f-Supplemental.zip
Many real-world sequential decision problems involve multiple action variables whose control frequencies are different, such that actions take their effects at different periods. While these problems can be formulated with the notion of multiple action persistences in factored-action MDP (FA-MDP), it is non-trivial to ...
Complex Dynamics in Simple Neural Networks: Understanding Gradient Flow in Phase Retrieval
https://papers.nips.cc/paper_files/paper/2020/hash/2172fde49301047270b2897085e4319d-Abstract.html
Stefano Sarao Mannelli, Giulio Biroli, Chiara Cammarota, Florent Krzakala, Pierfrancesco Urbani, Lenka Zdeborová
https://papers.nips.cc/paper_files/paper/2020/hash/2172fde49301047270b2897085e4319d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2172fde49301047270b2897085e4319d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9999-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2172fde49301047270b2897085e4319d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2172fde49301047270b2897085e4319d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2172fde49301047270b2897085e4319d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2172fde49301047270b2897085e4319d-Supplemental.pdf
Despite the widespread use of gradient-based algorithms for optimising high-dimensional non-convex functions, understanding their ability to find good minima instead of being trapped in spurious ones remains, to a large extent, an open problem. Here we focus on gradient flow dynamics for phase retrieval from random me...
Neural Message Passing for Multi-Relational Ordered and Recursive Hypergraphs
https://papers.nips.cc/paper_files/paper/2020/hash/217eedd1ba8c592db97d0dbe54c7adfc-Abstract.html
Naganand Yadati
https://papers.nips.cc/paper_files/paper/2020/hash/217eedd1ba8c592db97d0dbe54c7adfc-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10000-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-Supplemental.pdf
Message passing neural network (MPNN) has recently emerged as a successful framework by achieving state-of-the-art performances on many graph-based learning tasks. MPNN has also recently been extended to multi-relational graphs (each edge is labelled), and hypergraphs (each edge can connect any number of vertices). How...
A Unified View of Label Shift Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/219e052492f4008818b8adb6366c7ed6-Abstract.html
Saurabh Garg, Yifan Wu, Sivaraman Balakrishnan, Zachary Lipton
https://papers.nips.cc/paper_files/paper/2020/hash/219e052492f4008818b8adb6366c7ed6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/219e052492f4008818b8adb6366c7ed6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10001-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/219e052492f4008818b8adb6366c7ed6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/219e052492f4008818b8adb6366c7ed6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/219e052492f4008818b8adb6366c7ed6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/219e052492f4008818b8adb6366c7ed6-Supplemental.pdf
Under label shift, the label distribution $p(y)$ might change but the class-conditional distributions $p(x|y)$ do not. There are two dominant approaches for estimating the label marginal. BBSE, a moment-matching approach based on confusion matrices, is provably consistent and provides interpretable error bounds. Howeve...
Optimal Private Median Estimation under Minimal Distributional Assumptions
https://papers.nips.cc/paper_files/paper/2020/hash/21d144c75af2c3a1cb90441bbb7d8b40-Abstract.html
Christos Tzamos, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Ilias Zadik
https://papers.nips.cc/paper_files/paper/2020/hash/21d144c75af2c3a1cb90441bbb7d8b40-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/21d144c75af2c3a1cb90441bbb7d8b40-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10002-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/21d144c75af2c3a1cb90441bbb7d8b40-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/21d144c75af2c3a1cb90441bbb7d8b40-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/21d144c75af2c3a1cb90441bbb7d8b40-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/21d144c75af2c3a1cb90441bbb7d8b40-Supplemental.pdf
We study the fundamental task of estimating the median of an underlying distribution from a finite number of samples, under pure differential privacy constraints. We focus on distributions satisfying the minimal assumption that they have a positive density at a small neighborhood around the median. In particular, the d...
Breaking the Communication-Privacy-Accuracy Trilemma
https://papers.nips.cc/paper_files/paper/2020/hash/222afbe0d68c61de60374b96f1d86715-Abstract.html
Wei-Ning Chen, Peter Kairouz, Ayfer Ozgur
https://papers.nips.cc/paper_files/paper/2020/hash/222afbe0d68c61de60374b96f1d86715-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/222afbe0d68c61de60374b96f1d86715-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10003-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/222afbe0d68c61de60374b96f1d86715-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/222afbe0d68c61de60374b96f1d86715-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/222afbe0d68c61de60374b96f1d86715-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/222afbe0d68c61de60374b96f1d86715-Supplemental.pdf
In particular, we consider the problems of mean estimation and frequency estimation under epsilon-local differential privacy and b-bit communication constraints. For mean estimation, we propose a scheme based on Kashin’s representation and random sampling, with order-optimal estimation error under both constraints. For...
Audeo: Audio Generation for a Silent Performance Video
https://papers.nips.cc/paper_files/paper/2020/hash/227f6afd3b7f89b96c4bb91f95d50f6d-Abstract.html
Kun Su, Xiulong Liu, Eli Shlizerman
https://papers.nips.cc/paper_files/paper/2020/hash/227f6afd3b7f89b96c4bb91f95d50f6d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/227f6afd3b7f89b96c4bb91f95d50f6d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10004-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/227f6afd3b7f89b96c4bb91f95d50f6d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/227f6afd3b7f89b96c4bb91f95d50f6d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/227f6afd3b7f89b96c4bb91f95d50f6d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/227f6afd3b7f89b96c4bb91f95d50f6d-Supplemental.zip
We present a novel system that takes as input video frames of a musician playing the piano and generates the music for that video. The generation of music from visual cues is a challenging problem, and it is not clear whether it is an attainable goal at all. Our main aim in this work is to explore the plausibility o...
Ode to an ODE
https://papers.nips.cc/paper_files/paper/2020/hash/228669109aa3ab1b4ec06b7722efb105-Abstract.html
Krzysztof M. Choromanski, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, Vikas Sindhwani
https://papers.nips.cc/paper_files/paper/2020/hash/228669109aa3ab1b4ec06b7722efb105-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/228669109aa3ab1b4ec06b7722efb105-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10005-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/228669109aa3ab1b4ec06b7722efb105-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/228669109aa3ab1b4ec06b7722efb105-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/228669109aa3ab1b4ec06b7722efb105-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/228669109aa3ab1b4ec06b7722efb105-Supplemental.pdf
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d). This nested system of two flows, where the parameter-flow is constrained to lie on the compact manifold, provides stability and effective...
Self-Distillation Amplifies Regularization in Hilbert Space
https://papers.nips.cc/paper_files/paper/2020/hash/2288f691b58edecadcc9a8691762b4fd-Abstract.html
Hossein Mobahi, Mehrdad Farajtabar, Peter Bartlett
https://papers.nips.cc/paper_files/paper/2020/hash/2288f691b58edecadcc9a8691762b4fd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2288f691b58edecadcc9a8691762b4fd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10006-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2288f691b58edecadcc9a8691762b4fd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2288f691b58edecadcc9a8691762b4fd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2288f691b58edecadcc9a8691762b4fd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2288f691b58edecadcc9a8691762b4fd-Supplemental.zip
Knowledge distillation, introduced in the deep learning context, is a method to transfer knowledge from one architecture to another. In particular, when the architectures are identical, this is called self-distillation. The idea is to feed in the predictions of the trained model as new target values for retraining (and itera...
Coupling-based Invertible Neural Networks Are Universal Diffeomorphism Approximators
https://papers.nips.cc/paper_files/paper/2020/hash/2290a7385ed77cc5592dc2153229f082-Abstract.html
Takeshi Teshima, Isao Ishikawa, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama
https://papers.nips.cc/paper_files/paper/2020/hash/2290a7385ed77cc5592dc2153229f082-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2290a7385ed77cc5592dc2153229f082-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10007-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2290a7385ed77cc5592dc2153229f082-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2290a7385ed77cc5592dc2153229f082-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2290a7385ed77cc5592dc2153229f082-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2290a7385ed77cc5592dc2153229f082-Supplemental.zip
Invertible neural networks based on coupling flows (CF-INNs) have various machine learning applications such as image synthesis and representation learning. However, their desirable characteristics such as analytic invertibility come at the cost of restricting the functional forms. This poses a question on their repres...
Community detection using fast low-cardinality semidefinite programming
https://papers.nips.cc/paper_files/paper/2020/hash/229aeb9e2ae66f2fac1149e5240b2fdd-Abstract.html
Po-Wei Wang, J. Zico Kolter
https://papers.nips.cc/paper_files/paper/2020/hash/229aeb9e2ae66f2fac1149e5240b2fdd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/229aeb9e2ae66f2fac1149e5240b2fdd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10008-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/229aeb9e2ae66f2fac1149e5240b2fdd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/229aeb9e2ae66f2fac1149e5240b2fdd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/229aeb9e2ae66f2fac1149e5240b2fdd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/229aeb9e2ae66f2fac1149e5240b2fdd-Supplemental.pdf
Modularity maximization has been a fundamental tool for understanding the community structure of a network, but the underlying optimization problem is nonconvex and NP-hard to solve. State-of-the-art algorithms like the Louvain or Leiden methods focus on different heuristics to help escape local optima, but they still ...
Modeling Noisy Annotations for Crowd Counting
https://papers.nips.cc/paper_files/paper/2020/hash/22bb543b251c39ccdad8063d486987bb-Abstract.html
Jia Wan, Antoni Chan
https://papers.nips.cc/paper_files/paper/2020/hash/22bb543b251c39ccdad8063d486987bb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/22bb543b251c39ccdad8063d486987bb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10009-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/22bb543b251c39ccdad8063d486987bb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/22bb543b251c39ccdad8063d486987bb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/22bb543b251c39ccdad8063d486987bb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/22bb543b251c39ccdad8063d486987bb-Supplemental.pdf
The annotation noise in crowd counting is not modeled in traditional crowd counting algorithms based on crowd density maps. In this paper, we first model the annotation noise using a random variable with Gaussian distribution, and derive the pdf of the crowd density value for each spatial location in the image. We then...
An operator view of policy gradient methods
https://papers.nips.cc/paper_files/paper/2020/hash/22eda830d1051274a2581d6466c06e6c-Abstract.html
Dibya Ghosh, Marlos C. Machado, Nicolas Le Roux
https://papers.nips.cc/paper_files/paper/2020/hash/22eda830d1051274a2581d6466c06e6c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/22eda830d1051274a2581d6466c06e6c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10010-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/22eda830d1051274a2581d6466c06e6c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/22eda830d1051274a2581d6466c06e6c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/22eda830d1051274a2581d6466c06e6c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/22eda830d1051274a2581d6466c06e6c-Supplemental.pdf
We cast policy gradient methods as the repeated application of two operators: a policy improvement operator $\mathcal{I}$, which maps any policy $\pi$ to a better one $\mathcal{I}\pi$, and a projection operator $\mathcal{P}$, which finds the best approximation of $\mathcal{I}\pi$ in the set of realizable policies. We ...
Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases
https://papers.nips.cc/paper_files/paper/2020/hash/22f791da07b0d8a2504c2537c560001c-Abstract.html
Senthil Purushwalkam, Abhinav Gupta
https://papers.nips.cc/paper_files/paper/2020/hash/22f791da07b0d8a2504c2537c560001c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/22f791da07b0d8a2504c2537c560001c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10011-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/22f791da07b0d8a2504c2537c560001c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/22f791da07b0d8a2504c2537c560001c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/22f791da07b0d8a2504c2537c560001c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/22f791da07b0d8a2504c2537c560001c-Supplemental.pdf
Self-supervised representation learning approaches have recently surpassed their supervised learning counterparts on downstream tasks like object detection and image classification. Somewhat mysteriously, the recent gains in performance come from training instance classification models, treating each image and its augm...
Online MAP Inference of Determinantal Point Processes
https://papers.nips.cc/paper_files/paper/2020/hash/23378a2d0a25c6ade2c1da1c06c5213f-Abstract.html
Aditya Bhaskara, Amin Karbasi, Silvio Lattanzi, Morteza Zadimoghaddam
https://papers.nips.cc/paper_files/paper/2020/hash/23378a2d0a25c6ade2c1da1c06c5213f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/23378a2d0a25c6ade2c1da1c06c5213f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10012-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/23378a2d0a25c6ade2c1da1c06c5213f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/23378a2d0a25c6ade2c1da1c06c5213f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/23378a2d0a25c6ade2c1da1c06c5213f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/23378a2d0a25c6ade2c1da1c06c5213f-Supplemental.pdf
In this paper, we provide an efficient approximation algorithm for finding the most likely configuration (MAP) of size $k$ for Determinantal Point Processes (DPP) in the online setting, where the data points arrive in an arbitrary order and the algorithm cannot discard the selected elements from its local memory. Gi...
Video Object Segmentation with Adaptive Feature Bank and Uncertain-Region Refinement
https://papers.nips.cc/paper_files/paper/2020/hash/234833147b97bb6aed53a8f4f1c7a7d8-Abstract.html
Yongqing Liang, Xin Li, Navid Jafari, Jim Chen
https://papers.nips.cc/paper_files/paper/2020/hash/234833147b97bb6aed53a8f4f1c7a7d8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/234833147b97bb6aed53a8f4f1c7a7d8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10013-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/234833147b97bb6aed53a8f4f1c7a7d8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/234833147b97bb6aed53a8f4f1c7a7d8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/234833147b97bb6aed53a8f4f1c7a7d8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/234833147b97bb6aed53a8f4f1c7a7d8-Supplemental.zip
This paper presents a new matching-based framework for semi-supervised video object segmentation (VOS). Recently, state-of-the-art VOS performance has been achieved by matching-based algorithms, in which feature banks are created to store features for region matching and classification. However, how to effectively orga...
Inferring learning rules from animal decision-making
https://papers.nips.cc/paper_files/paper/2020/hash/234b941e88b755b7a72a1c1dd5022f30-Abstract.html
Zoe Ashwood, Nicholas A. Roy, Ji Hyun Bak, Jonathan W. Pillow
https://papers.nips.cc/paper_files/paper/2020/hash/234b941e88b755b7a72a1c1dd5022f30-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/234b941e88b755b7a72a1c1dd5022f30-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10014-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/234b941e88b755b7a72a1c1dd5022f30-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/234b941e88b755b7a72a1c1dd5022f30-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/234b941e88b755b7a72a1c1dd5022f30-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/234b941e88b755b7a72a1c1dd5022f30-Supplemental.pdf
How do animals learn? This remains an elusive question in neuroscience. Whereas reinforcement learning often focuses on the design of algorithms that enable artificial agents to efficiently learn new tasks, here we develop a modeling framework to directly infer the empirical learning rules that animals use to acquire n...
Input-Aware Dynamic Backdoor Attack
https://papers.nips.cc/paper_files/paper/2020/hash/234e691320c0ad5b45ee3c96d0d7b8f8-Abstract.html
Tuan Anh Nguyen, Anh Tran
https://papers.nips.cc/paper_files/paper/2020/hash/234e691320c0ad5b45ee3c96d0d7b8f8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10015-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-Supplemental.zip
In recent years, neural backdoor attacks have been considered a potential security threat to deep learning systems. Such systems, while achieving state-of-the-art performance on clean data, perform abnormally on inputs with predefined triggers. Current backdoor techniques, however, rely on uniform trigger patte...
How hard is to distinguish graphs with graph neural networks?
https://papers.nips.cc/paper_files/paper/2020/hash/23685a2431acad7789c1e3d43ea1522c-Abstract.html
Andreas Loukas
https://papers.nips.cc/paper_files/paper/2020/hash/23685a2431acad7789c1e3d43ea1522c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/23685a2431acad7789c1e3d43ea1522c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10016-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/23685a2431acad7789c1e3d43ea1522c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/23685a2431acad7789c1e3d43ea1522c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/23685a2431acad7789c1e3d43ea1522c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/23685a2431acad7789c1e3d43ea1522c-Supplemental.zip
A hallmark of graph neural networks is their ability to distinguish the isomorphism class of their inputs. This study derives hardness results for the classification variant of graph isomorphism in the message-passing model (MPNN). MPNN encompasses the majority of graph neural networks used today and is universal when ...
Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition
https://papers.nips.cc/paper_files/paper/2020/hash/236f119f58f5fd102c5a2ca609fdcbd8-Abstract.html
Lin Chen, Qian Yu, Hannah Lawrence, Amin Karbasi
https://papers.nips.cc/paper_files/paper/2020/hash/236f119f58f5fd102c5a2ca609fdcbd8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/236f119f58f5fd102c5a2ca609fdcbd8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10017-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/236f119f58f5fd102c5a2ca609fdcbd8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/236f119f58f5fd102c5a2ca609fdcbd8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/236f119f58f5fd102c5a2ca609fdcbd8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/236f119f58f5fd102c5a2ca609fdcbd8-Supplemental.pdf
We study the problem of switching-constrained online convex optimization (OCO), where the player has a limited number of opportunities to change her action. While the discrete analog of this online learning task has been studied extensively, previous work in the continuous setting has neither established the minimax ra...
Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks
https://papers.nips.cc/paper_files/paper/2020/hash/23937b42f9273974570fb5a56a6652ee-Abstract.html
Wei-An Lin, Chun Pong Lau, Alexander Levine, Rama Chellappa, Soheil Feizi
https://papers.nips.cc/paper_files/paper/2020/hash/23937b42f9273974570fb5a56a6652ee-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/23937b42f9273974570fb5a56a6652ee-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10018-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/23937b42f9273974570fb5a56a6652ee-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/23937b42f9273974570fb5a56a6652ee-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/23937b42f9273974570fb5a56a6652ee-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/23937b42f9273974570fb5a56a6652ee-Supplemental.pdf
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms. However, it often degrades the model performance on normal images and more importantly, the defense does not generalize well to novel attacks. Given the success of deep generative models such as GANs and VAEs in chara...
Cross-Scale Internal Graph Neural Network for Image Super-Resolution
https://papers.nips.cc/paper_files/paper/2020/hash/23ad3e314e2a2b43b4c720507cec0723-Abstract.html
Shangchen Zhou, Jiawei Zhang, Wangmeng Zuo, Chen Change Loy
https://papers.nips.cc/paper_files/paper/2020/hash/23ad3e314e2a2b43b4c720507cec0723-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/23ad3e314e2a2b43b4c720507cec0723-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10019-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/23ad3e314e2a2b43b4c720507cec0723-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/23ad3e314e2a2b43b4c720507cec0723-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/23ad3e314e2a2b43b4c720507cec0723-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/23ad3e314e2a2b43b4c720507cec0723-Supplemental.pdf
Non-local self-similarity in natural images has been well studied as an effective prior in image restoration. However, for single image super-resolution (SISR), most existing deep non-local methods (e.g., non-local neural networks) only exploit similar patches within the same scale of the low-resolution (LR) input imag...
Unsupervised Representation Learning by Invariance Propagation
https://papers.nips.cc/paper_files/paper/2020/hash/23af4b45f1e166141a790d1a3126e77a-Abstract.html
Feng Wang, Huaping Liu, Di Guo, Sun Fuchun
https://papers.nips.cc/paper_files/paper/2020/hash/23af4b45f1e166141a790d1a3126e77a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10020-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-Supplemental.pdf
Unsupervised learning methods based on contrastive learning have drawn increasing attention and achieved promising results. Most of them aim to learn representations invariant to instance-level variations, which are provided by different views of the same instance. In this paper, we propose Invariance Propagation to fo...
Restoring Negative Information in Few-Shot Object Detection
https://papers.nips.cc/paper_files/paper/2020/hash/240ac9371ec2671ae99847c3ae2e6384-Abstract.html
Yukuan Yang, Fangyun Wei, Miaojing Shi, Guoqi Li
https://papers.nips.cc/paper_files/paper/2020/hash/240ac9371ec2671ae99847c3ae2e6384-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/240ac9371ec2671ae99847c3ae2e6384-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10021-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/240ac9371ec2671ae99847c3ae2e6384-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/240ac9371ec2671ae99847c3ae2e6384-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/240ac9371ec2671ae99847c3ae2e6384-Review.html
null
Few-shot learning has recently emerged as a new challenge in the deep learning field: unlike conventional methods that train deep neural networks (DNNs) with large amounts of labeled data, it asks for the generalization of DNNs to new classes with few annotated samples. Recent advances in few-shot learning mainly ...
Do Adversarially Robust ImageNet Models Transfer Better?
https://papers.nips.cc/paper_files/paper/2020/hash/24357dd085d2c4b1a88a7e0692e60294-Abstract.html
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry
https://papers.nips.cc/paper_files/paper/2020/hash/24357dd085d2c4b1a88a7e0692e60294-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/24357dd085d2c4b1a88a7e0692e60294-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10022-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/24357dd085d2c4b1a88a7e0692e60294-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/24357dd085d2c4b1a88a7e0692e60294-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/24357dd085d2c4b1a88a7e0692e60294-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/24357dd085d2c4b1a88a7e0692e60294-Supplemental.pdf
Transfer learning is a widely-used paradigm in deep learning, where models pre-trained on standard datasets can be efficiently adapted to downstream tasks. Typically, better pre-trained models yield better transfer results, suggesting that initial accuracy is a key aspect of transfer learning performance. In this work,...
Robust Correction of Sampling Bias using Cumulative Distribution Functions
https://papers.nips.cc/paper_files/paper/2020/hash/24368c745de15b3d2d6279667debcba3-Abstract.html
Bijan Mazaheri, Siddharth Jain, Jehoshua Bruck
https://papers.nips.cc/paper_files/paper/2020/hash/24368c745de15b3d2d6279667debcba3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/24368c745de15b3d2d6279667debcba3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10023-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/24368c745de15b3d2d6279667debcba3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/24368c745de15b3d2d6279667debcba3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/24368c745de15b3d2d6279667debcba3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/24368c745de15b3d2d6279667debcba3-Supplemental.zip
Varying domains and biased datasets can lead to differences between the training and the target distributions, known as covariate shift. Current approaches for alleviating this often rely on estimating the ratio of training and target probability density functions. These techniques require parameter tuning and can be u...
Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach
https://papers.nips.cc/paper_files/paper/2020/hash/24389bfe4fe2eba8bf9aa9203a44cdad-Abstract.html
Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
https://papers.nips.cc/paper_files/paper/2020/hash/24389bfe4fe2eba8bf9aa9203a44cdad-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10024-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-Supplemental.pdf
In Federated Learning, we aim to train models across multiple computing units (users), where users can only communicate with a common central server and do not exchange their data samples. This mechanism exploits the computational power of all users and allows users to obtain a richer model, as their models are trained ...